
Podcasts available for download:
• 1000 Genomes Project: Cancer, Genetic Variation, and Drug Response
• Mapping Genomes in 3D
• The Human Microbiome Project: Next-Generation Sequencing and Analysis of the Indigenous Microbiota

 

Sponsors: Applied Biosystems, Roche, Agilent Technologies, Beckman Coulter Genomics, BioTeam, Caliper, Complete Genomics, Edge Bio, Expression Analysis, GenoLogics, HP, Illumina, NanoString, Liquidia

Media Partners: Bio-IT World, International Drug Discovery, Nature, Science/AAAS, The Scientist, PharmCast


Companion Meeting: Next-Generation Sequencing Data Management 

TUESDAY, SEPTEMBER 28, 2010


7:30 am Breakfast Presentation (Sponsored by NanoString)

NanoString’s nCounter Analysis System: High-Throughput NGS Data Validation for mRNA, microRNA, and Genomic Copy Number Variation Studies

Sean Ferree, Ph.D., Director of Product Development, NanoString Technologies, Inc.

The nCounter Analysis System is a rapid, cost-effective method for direct, digital, highly multiplexed quantification of hundreds of different nucleic acid species in a single, amplification-free assay, with sensitivity comparable to qPCR. The technology is well suited to validating whole-genome deep-sequencing studies. The assays are compatible with a wide variety of clinical sample types, including purified mRNA, miRNA, or genomic DNA; crude cell and tissue lysates; blood collections; and FFPE-extracted material.

8:15 Successful Sequencing Discussion Groups

Grab a cup of coffee and join a facilitated discussion group focused on a specific theme. This unique session allows conference participants to exchange ideas and experiences, and to develop future collaborations around a focused topic.

 

Strategies for NGS Data Retention
Paul Smith, Ph.D., Data Architect, Global Information Services, Illumina
Discussion topics include:
• Can we afford to keep all our data forever?
• How do we handle NGS output growing an order of magnitude faster than storage capacity?
• How do we choose which files to keep, and for how long? What are the crown jewels, and what can we recreate?
• How do we keep track of all this data and find what we no longer need?
• How do we automate the process of managing our data?
• Can we just keep the source data and the recipe?

Asking Physical Questions Using Sequencing as the Output
Erez Lieberman-Aiden, Ph.D., Fellow, Harvard Society of Fellows, Harvard University
Discussion topics include:
• How can sequencing best be applied to classical questions that may not have been amenable to molecular biological approaches in the past?
• Is there a niche for new types of sequencing technologies that could cater to these new experiments?

Defining and Assessing "Accuracy" and "Errors" in NGS Data
James Lyons-Weiler, Ph.D., Scientific Director, Senior Research Scientist, Genomics and Proteomics Core Laboratories, University of Pittsburgh
Discussion topics include:
• What measures of accuracy and error exist, and what do they really mean?
• Do quality scores correlate with sequencing accuracy?
• Which types of information are critical for assessing confidence in 'detected' variants?
• What biological complexities might reduce assessed accuracy?
• What sequencing technologies help overcome these complexities?

Strategies for Maintaining Large Scale Metagenomic Shotgun Data
Folker Meyer, Ph.D., Computational Biologist, Mathematics and Computer Science, Argonne National Laboratory
Discussion topics include:
• Can we afford redundant computational analysis efforts for metagenome data sets?
• Can we devise a community approach to maintaining computationally analyzed data sets?
• How can we devise the standardization required for this process?
• What are the community needs for a standardized analysis pipeline?
• What data should be included in this process?
• What is the governance model for this? Should it be placed under the auspices of the Genomics Standards Consortium?

Managing Change in the NGS Pipeline
Gregg TeHennepe, Senior Manager, Research Liaison, Information Technology, The Jackson Laboratory
Discussion topics include:
• What are the most frequent causes of problems in the NGS pipeline?
• What NGS environments are most sensitive to the challenges of change in the pipeline?
• What sections of the NGS pipeline see the most change?

Taking NGS to the Clinic
Stephan Sanders, Ph.D., Postdoctoral Associate, Yale University 
Discussion topics include:
• What hurdles prevent sequencing results being used in routine clinical practice?
• Which specialties are likely to benefit first?
• How can clinicians be trained to understand the results?
• Which of the 3 million variants should be reported back?
• Where should the results be stored?
• Is the data quality good enough yet?

Computer Hardware, Software and Operating Systems – What do I really need to meet my needs (and those of others who rely on me)?
Stephanie Costello, Director, Sales, DNAStar
Farhan Quraishi, Technical Support Specialist, DNAStar

Discussion topics include:
• Desktop computer, in-house Linux cluster, cloud computing or something in between
• Operating system options – Windows, Mac, or Linux
• Commercial software, freeware or a combo

How to Make the Most out of Your NGS Data Using Bioinformatics: Service or Software?
Didier G. Perez, COO & CFO, Eureka Genomics
Discussion topics include:
• What are the essential tools that you need?
• Which is the most valuable?
• What criteria should guide the selection of bioinformatics software and/or services?

Estimating False Positive Rates of Variants
Victor Weigman, Ph.D., Computational Biologist, Expression Analysis
Discussion topics include:
• What effect does heterogeneous tissue (e.g., tumor) have on determining alleles?
• What is the concordance between different variant-calling algorithms?
• How do low-depth regions affect calls?
• Are sample-pooling strategies effective?

Selecting the Right Tool for Validation of Next-Generation Sequencing Experiments
Sean Ferree, Ph.D., Director of Product Development, NanoString Technologies
Discussion topics include:
• How can we accelerate the validation process by leveraging new technologies?
• What are the key parameters of a typical validation study? Where are the bottlenecks?
• What are the most important factors informing the selection of a tool for validation?

Upcoming Technical Challenges Brought About by Next-Generation Sequencing
W. Richard McCombie, Ph.D., Professor, Cold Spring Harbor Laboratory
Discussion topics include:
• How will next-gen sequencing data shape future experiments?
• How can those needed experiments be carried out?
• What problems can't be solved with current technology?
• What new technologies will need to be developed to maximize the utility of next-gen sequencing data?

Challenges and Opportunities in Data Intensive Computing
Murali Ramanathan, Associate Professor of Pharmaceutical Sciences, University at Buffalo, SUNY
Discussion topics include:
• Architectures for data intensive computing
• Algorithms
• Data intensive computing resources

 


Evolving Sequencing Methods to Enable Genomic Research

9:15 Chairperson’s Remarks

9:20 Third Generation Sequencing Pipeline for Cancer

Raphael Bueno, Ph.D., The Thoracic Surgery Oncology Laboratory and Division of Thoracic Surgery, Brigham and Women’s Hospital, Harvard Medical School

This presentation will discuss the use of next-generation sequencing data to elucidate both the transcriptome and the genome of solid tumors. It will also review and compare several platforms for deep sequencing of tumor genomes and for identifying multiple types of mutations.

9:50 Comprehensive Mapping of Long-Range Interactions Reveals Folding Principles of the Human Genome

Job Dekker, Ph.D., Associate Professor, Gene Function and Expression, University of Massachusetts Medical School

To probe the spatial arrangement of genomes, we developed Hi-C, a method that combines chromosome conformation capture (3C) and high-throughput sequencing to map three-dimensional chromatin interactions in an unbiased, genome-wide fashion. Application of Hi-C to the human genome revealed novel folding principles. For instance, open and closed chromatin are spatially segregated, forming two genome-wide compartments. The contents of the compartments are dynamic: changes in chromatin state and/or expression correlate with movement from one compartment to the other.
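
As a rough illustration of how the two compartments fall out of Hi-C data: the published analysis derives them from a principal-component decomposition of the normalized contact map. Below is a minimal sketch of that idea in Python, assuming a dense, fixed-resolution intra-chromosomal contact matrix; it is illustrative only, not the speaker's actual pipeline.

import numpy as np

def compartment_signal(contacts):
    # Return a per-bin signal whose sign separates the two compartments.
    n = contacts.shape[0]
    # Contact frequency decays with genomic distance, so normalize each
    # diagonal by its mean to obtain an observed/expected matrix.
    oe = np.zeros((n, n))
    for d in range(n):
        diag = np.diagonal(contacts, d).astype(float)
        if diag.mean() > 0:
            vals = diag / diag.mean()
            oe += np.diag(vals, d)
            if d > 0:
                oe += np.diag(vals, -d)
    # Bins in the same compartment have correlated interaction profiles.
    corr = np.nan_to_num(np.corrcoef(oe))
    # The sign of the leading eigenvector splits open from closed chromatin.
    eigvals, eigvecs = np.linalg.eigh(corr)
    return eigvecs[:, np.argmax(eigvals)]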

10:20 Closing the Gap in Time from Raw Data to Real Science - Science as a Service (ScaaS)
Justin H. Johnson, Director of Bioinformatics, Edge Bio
Next-generation sequencing has drastically changed the traditional infrastructure within the sequencing community. Several technologies show promise, but it is not always intuitive where to start. This uncertainty is compounded by the fact that commonly used bioinformatics tools are difficult to build and maintain, and require vast amounts of compute resources. Many solutions and platforms, such as cloud computing, promise to address one or a few of these challenges, but their inherent limitations are rarely discussed. Edge Bio will present information, research, and case studies on how they facilitate Science as a Service (ScaaS) for the community through a technology-agnostic approach, taking each project from conception and design through informatics and analysis.

10:35 Networking Coffee Break, Poster and Exhibit Viewing

11:15 Next-Generation Microarray and Sequencing Technologies for Genome-Wide Dosage Assays in Yeast and Man

Corey Nislow, Ph.D., Assistant Professor, Donnelly Centre for Cellular and Biomolecular Research, University of Toronto

We use comprehensive collections of cells in which gene dosage is systematically altered. We have utilized the Yeast Knock Out (YKO) collection and combined it with a collection of yeast strains overexpressing each gene to screen thousands of bioactive small molecules genome-wide. I will present several compelling drug-target interactions that have been uncovered using a combination of these gene-dose screens, as well as provide an overview of our efforts to develop novel microarray and next-generation sequencing technologies to accelerate the tempo of these chemogenomic assays.

11:45 Bind-n-Seq: High-Throughput Analysis of in vitro Protein-DNA Interactions Using Massively Parallel Sequencing

Artem Zykovich, Ph.D. Candidate, Genome Center/Pharmacology, University of California, Davis

Here we introduce Bind-n-Seq, a new high-throughput method for analyzing protein-DNA interactions in vitro, with several advantages over current methods. The procedure has three steps: (1) binding proteins to randomized oligonucleotide DNA targets, (2) sequencing the bound oligonucleotides with massively parallel technology, and (3) finding motifs among the sequences. These results present Bind-n-Seq as a rapid, highly parallel method for determining in vitro binding sites and relative affinities.
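
To make step (3) concrete, one simple way to surface candidate binding motifs is k-mer enrichment: count k-mers in the protein-bound reads and compare against the uniform background expected from randomized oligos. The sketch below (Python) is a hypothetical simplification for illustration, not the authors' actual motif-finding algorithm.

from collections import Counter

def kmer_enrichment(bound_reads, k=6, top=10):
    # Rank k-mers in bound reads by enrichment over a uniform background.
    counts = Counter()
    total = 0
    for read in bound_reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
            total += 1
    expected = total / 4 ** k  # randomized oligos: all 4^k k-mers equally likely
    return [(kmer, n / expected) for kmer, n in counts.most_common(top)]

# Toy example: the motif 'GATAAG' recurs in the bound 20-mers.
reads = ["ACGATAAGTCCGATAAGTAC", "TTGATAAGACGTGATAAGCA"]
print(kmer_enrichment(reads, k=6, top=3))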

12:15 pm Close of Session


12:30 Luncheon Presentation (Sponsored by GenoLogics)
Enabling Efficient Next-Generation Sequencing-Based Research with a Versatile LIMS
Michael Kuzyk, Ph.D., Product Manager, Omics Labs
An in-depth look at the UW Northwest Genome Center and the USC Epigenome Center, both of which considered building their own laboratory information management system and chose instead to purchase a commercial solution. With very tight timelines, building an in-house system was not an option for UW, which also required exceptional sample traceability while dealing with evolving protocols. USC was challenged with managing the high volume of sample data generated by numerous projects; it needed a centralized system to run its genomics/next-gen lab efficiently, and one versatile enough to integrate seamlessly with its complex custom data-analysis pipeline.

Service Providers Share their Tips ‘N Tricks

An outsourcing provider offers immediate access to sequencing technologies that have taken years of resources and expertise to develop. The decision to use an outsourcing provider should be based on factors such as speed of service, efficiency, accuracy, quality, reliability, reproducibility, and convenience. In this session, service providers share the tips ‘n tricks that enable their successful sequencing service projects.

2:00 Chairperson’s Remarks

2:05 Service Provider #1: Beckman Coulter Genomics

SPRIworks - An Automated Sample Preparation System For All Second Generation Sequencing Platforms

William Donahue, Ph.D., Manager, Molecular Biology, Beckman Coulter Genomics

The emergence of second-generation sequencing technologies has enabled scientists to generate vastly larger data sets from their sequencing experiments. This dramatic increase in throughput, however, is supported by workflows that are relatively complicated and labor-intensive. The SPRIworks sample preparation systems I, II, and III enable automated sample preparation for each platform. Data will be shown highlighting the capability of SPRIworks for sample multiplexing, along with its applicability to other workflows such as RNA-Seq and ChIP-Seq.

2:35 Service Provider #2: Complete Genomics

Complete Human Genome Sequencing for Large-Scale Disease Studies

Steve Lincoln, Vice President, Scientific Applications, Complete Genomics

Complete Genomics’ sequencing platform is a combination of technology advancements in DNA library preparation, nanoarrays, sequencing assay chemistry, instruments and software. This affordable, turnkey sequencing service provides human disease researchers with the ability to conduct comprehensive genetic studies of human diseases, and allows complete human genome sequence data to inform new methods of disease prevention and treatment.

3:05 Service Provider #3: Expression Analysis
Covering your Bases: Cost Effectiveness of Targeted Re-sequencing with 2nd Generation Sequencers
Victor Weigman, Ph.D., Computational Biologist,  Expression Analysis
While the availability of deep-sequencing platforms is growing in step with cost reductions, full genomic analysis is still cost-prohibitive. Expression Analysis is approaching this by offering a range of genomic re-sequencing platforms, along with multiplexing, to allow high-resolution sequencing of your regions of interest at a fraction of the cost. We will discuss our experiences in optimizing sample barcoding, along with an evaluation of various genome capture technologies. A secondary issue is determining variant-calling filters that yield the highest true-positive rate. We will discuss strategies for variant filtering, along with prioritizing candidates for association follow-ups.
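
As a concrete illustration of the variant-filtering trade-offs mentioned above, here is a minimal sketch of threshold-based filtering in Python. The field names and cutoff values are hypothetical, chosen for illustration; real pipelines tune such thresholds per platform and study, which is exactly the optimization the talk addresses.

def passes_filters(variant, min_depth=10, min_qual=30.0, max_strand_bias=0.9):
    # Keep a called variant only if it clears basic evidence thresholds.
    if variant["depth"] < min_depth:   # low-depth regions give unreliable calls
        return False
    if variant["qual"] < min_qual:     # low-confidence genotype
        return False
    # Supporting reads should not all come from a single strand.
    fwd, rev = variant["alt_fwd"], variant["alt_rev"]
    total = fwd + rev
    if total == 0 or max(fwd, rev) / total > max_strand_bias:
        return False
    return True

calls = [
    {"depth": 42, "qual": 58.0, "alt_fwd": 11, "alt_rev": 9},
    {"depth": 6, "qual": 71.0, "alt_fwd": 3, "alt_rev": 1},
]
print([passes_filters(v) for v in calls])  # -> [True, False]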

3:35 Coffee Break, Poster and Exhibit Viewing

 


NGS Expands Genomic Horizons from Prokaryotes to Eukaryotes

4:15 The Human Microbiome Project: Next-Generation Sequencing and Analysis of the Indigenous Microbiota

Vincent B. Young, M.D., Ph.D., Associate Professor, Microbiology and Immunology, University of Michigan

The goals of the NIH-initiated Human Microbiome Project are to generate resources enabling comprehensive characterization of the human microbiota and analysis of its role in human health and disease. The diversity of the indigenous microbiota and the shortcomings of traditional culture-based microbiologic methods have led to extensive use of molecular methods to characterize the extent, dynamics, and function of the microbiome. We will use a Demonstration Project investigating the role of the gastrointestinal microbiota in the pathogenesis of ulcerative colitis to illustrate the use of next-generation sequencing in the Human Microbiome Project.

4:45 Expanding Frontiers in Viral Genomics through the Application of Ultra-Deep Sequencing Technologies

Matthew R. Henn, Ph.D., Director, Viral Genomics, The Broad Institute of MIT and Harvard

Viral diseases such as HIV have an enormous impact on human health worldwide, while phage play a critical role in shaping microbial populations by influencing both host mortality and horizontal gene transfer. The development of high-throughput sequencing and annotation strategies is transforming our ability to study, at high resolution, the diversity landscape of viruses and their impact on disease and ecosystem function. Here we describe the application of high-throughput sequencing, assembly, and population profiling strategies based on 454 and Illumina technologies that are tuned to the specific needs of viral and phage sequencing.

5:15 Lymphocyte Monitoring by High-Throughput DNA Sequencing

Scott Boyd, M.D., Ph.D., Assistant Professor, Pathology, Stanford University

The repertoire of immune receptors generated by genomic DNA rearrangements in B and T cells enables recognition of diverse threats to the host organism. Deep sequencing of immune receptor loci can provide direct detection and tracking of immune diversity and expanded clonal lymphocyte populations. We have applied this approach to monitor lymphoid malignancies following therapy, and to study physiological immune responses to vaccination, as well as a variety of immune-mediated non-malignant disorders.

5:45 Close of Day and Pre-Conference and Short Course Registration

6:00-9:00 Dinner Short Course*

SC3: Cloud Computing



*Separate Registration Required 
