

 



NGX: EVOLUTION OF NEXT-GENERATION SEQUENCING - SHORT COURSES


Day 1  |  Day 2  |  Day 3  |  Short Courses

SUNDAY, SEPTEMBER 26 | 2:00 – 5:00 pm

Pre-Conference Short Course 1*

Making Sense of Next-Gen Sequencing Data

COURSE AGENDA

1:30 – 2:00 pm Short Course Registration

2:00 Opening Remarks
Stan Gloss, Founding Partner and Managing Director, BioTeam, Inc.

2:10 Next-Generation Sequencing Analysis and Beyond
Michele Clamp, Senior Consultant, BioTeam, Inc.
DNA sequencing technology is moving at a lightning pace. The last few years have seen the cost of sequencing drop by an order of magnitude, and the next two years seem likely to deliver another huge change. So much change in so short a time demands equally radical changes to how we store, mine, and analyze sequence data. This talk will include:
- Designing effective sequencing strategies
- Dealing with increasing amounts of sequencing data
- Efficient first pass analysis methods for read mapping and assembly
- Getting the most from RNAseq data
- Downstream annotation
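To make the first-pass mapping topic above concrete, here is a purely illustrative toy sketch of seed-and-verify read mapping in Python. It is not part of the course materials, and the sequences, seed length, and function names are invented for the example; production aligners such as BWA or Bowtie use compressed genome indexes rather than a hash of k-mers.

```python
from collections import defaultdict

K = 8  # seed length; real aligners index far longer genomes with FM-indexes

def build_index(reference, k=K):
    """Map every k-mer in the reference to its start positions."""
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    return index

def map_read(read, reference, index, k=K):
    """Seed with the read's first k-mer, then verify the full read."""
    hits = []
    for pos in index.get(read[:k], []):
        if reference[pos:pos + len(read)] == read:
            hits.append(pos)
    return hits

reference = "ACGTACGTTTGACCAGTACGGTACCAGT"
index = build_index(reference)
print(map_read("TTGACCAG", reference, index))  # -> [8]
```

Real first-pass analysis adds mismatch tolerance, base-quality awareness, and paired-end constraints on top of this seed-and-extend idea.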

2:45 Galaxy: Making NGS Analyses Accessible for All
Daniel Blankenberg, Ph.D., Postdoctoral Research Associate, Biochemistry & Molecular Biology, Pennsylvania State University
Recent rapid proliferation of DNA sequencing technology has enabled any investigator, for a modest cost, to produce enormous amounts of sequence data; however, working with this large-scale sequencing data poses significant challenges for even the largest institutions, let alone individual investigators and small labs. Here, we present Galaxy, an open-source analysis framework that is available as a free public service and can be easily deployed on either private hardware or cloud resources. The Galaxy platform empowers transparent and reproducible research by providing interactive access to popular tools, including those for manipulating raw sequencing reads, mapping, peak calling, genomic interval operations, and visualization in genome browsers, as well as a point-and-click workflow system. Using Galaxy, a user without computational expertise can, for example, perform a complete ChIP-Seq analysis, beginning with raw sequencing reads and continuing through visualization of called peaks in reference or custom-built genome browsers, all without leaving the familiar interface of a web browser.

3:20 Refreshment Break

3:55 Practical Bioinformatics for an Academic Next-Gen Sequencing Core Lab
Stuart Brown, Ph.D., Associate Professor, Center for Health Informatics and Bioinformatics, New York University School of Medicine
In an academic biomedical research institution, the next-gen sequencing core lab often requires extensive bioinformatics support. Many investigators who are interested in using next-gen sequencing technology in their research projects do not have the necessary bioinformatics skills in their laboratory to analyze and interpret the data. At NYU, our Genome Sequence Informatics group participates in experimental design, grant writing, data QC, primary data processing (basecalling, alignment to reference genomes, parsing and reformatting primary sequence data output files), and extensive downstream data analysis projects (ChIP-seq peak calling, variant discovery, differential gene expression, epigenomics, metagenomics, etc.). We also work closely with IT to develop data storage and delivery systems.

4:30 Closing Panel Discussion

5:00 pm End of Short Course


SUNDAY, SEPTEMBER 26 | 2:00 – 5:00 pm


Pre-Conference Short Course 2*

Sponsored by Agilent Technologies

Target Enrichment for NGS

Next-generation sequencing technology has increased the ability to sequence DNA in a massively parallel manner. Nevertheless, routine genetic screens in large cohorts of individuals remain cost-prohibitive with whole-genome sequencing approaches. Agilent Technologies has developed the SureSelect platform, a portfolio of sample-preparation products that enables next-generation sequencing users to focus analysis on particular genomic loci with substantial cost savings.

2:00 Opening Remarks

2:10 Exome Capture and Sequencing in Identifying Mutations Responsible for Rare Recessive Disorders
Jacek Majewski, Ph.D., Assistant Professor, Canada Research Chair, Human Genetics, McGill University and Genome Quebec Innovation Centre 
The 1980s ushered in the era of gene mapping approaches in human genetics. Family-based studies, followed by positional cloning, identified genes and mutations responsible for many common Mendelian disorders. The task was long and laborious; nearly ten years passed between linkage analysis and identification of the gene responsible for Huntington disease. With technological advances, the process became easier. At the turn of the century, I became involved in mapping rare recessive disorders. In 2000 we began by using homozygosity mapping and microsatellite markers to find a mutation responsible for the Sohar syndrome [1]. We were eventually successful, but the process took over two years, a lot of manual labor, and a fair share of luck. More recently (2009), we used SNP microarrays to tackle the Van Den Ende-Gupta Syndrome [2]. We were able to find the gene in less than one year, but again a certain amount of luck and manual intervention, in the form of candidate gene selection, was necessary. Finally, in early 2010, we used exome capture followed by next-generation sequencing to search for the gene involved in the Fowler syndrome [3]. From receiving DNA from only two unrelated patients, with no known consanguinity, to identifying the gene responsible for the disease took less than three weeks. The result is all the more remarkable since both patients were compound heterozygous for a total of four distinct mutations. I would like to use this historical journey to emphasize the immense potential of emerging technologies in human genetics.


2:50 Laboratory Methods and Analysis Considerations for Efficient Analysis of the Human Exome
Shawn Levy, Ph.D., Faculty Investigator, Hudson Alpha Institute for Biotechnology

The ability to isolate specific regions of the genome and then sequence them in a highly efficient manner provides an unprecedented ability to characterize major regions of the human genome. These regions can include areas identified in linkage screens, whole-genome association studies, or candidate gene approaches. More recently, reagents have become available to allow an unbiased examination of the coding regions of the genome, also known as whole-exome analysis. While full-genome sequencing offers a truly unbiased view of the genome, it remains expensive compared to whole-exome approaches that target a small percentage of the genome. Although the methodologies and technologies used in whole-exome sequencing have become widely available and are beginning to be used routinely, a number of quality control and analysis considerations are often overlooked or underappreciated at the experimental design phase. This presentation will provide an overview of whole-exome sequencing, with particular attention to critical steps and considerations during experimental design, sample preparation, and quality control. The presentation will also discuss the challenges of analyzing the resulting data and present a robust and efficient analysis pipeline for whole-exome and targeted sequencing that leverages several open-source and academic software tools. By carefully controlling both the laboratory and informatics methods, whole-exome sequencing can be an efficient and effective tool for identifying causative variants for a variety of phenotypes and diseases.

3:30 Refreshment Break

3:55 Detection of Inherited Mutations for Breast and Ovarian Cancer Using Genomic Capture and Massively Parallel Sequencing
Tom Walsh, Ph.D., Research Assistant Professor, Medical Genetics, University of Washington

Inherited loss-of-function mutations in the tumor suppressor genes BRCA1, BRCA2, and multiple other genes predispose to high risks of breast and/or ovarian cancer. Genetic testing for BRCA1 and BRCA2 mutations has become an integral part of clinical practice, but testing is generally limited to these two genes and to women with severe family histories of breast or ovarian cancer. To determine whether massively parallel sequencing would enable accurate, thorough, and cost-effective identification of inherited mutations for breast and ovarian cancer, we developed a genomic assay to capture, sequence, and detect all mutations in 21 breast and ovarian cancer genes.

4:35 Closing Panel Discussion


TUESDAY, SEPTEMBER 28 | 6:00 – 9:00 pm


Conference Short Course 3*

Cloud Computing

Sponsored by Cycle Computing

This presentation will cover real-world use cases across drug discovery and design, collaboration, next-generation sequencing, proteomics, software as a service, and bioinformatics, exploring how the life sciences are using cloud computing, its challenges and effectiveness, how organizations can save money, and how to maintain regulatory compliance. Dinner will be served.

Agenda:

5:45 pm Buffet Dinner Selection

6:10 Opening Remarks: Life Sciences and Cloud 
Jason Stowe, CEO, Cycle Computing

6:20 High-Throughput Genome Sequence Analysis on the Cloud 
Toby Bloom, Ph.D., Director of Informatics, Genome Sequencing Center, Broad Institute
Next-gen sequencing technologies generate huge amounts of data. Analyzing that data requires a level of IT support not available in many small research facilities, and presents numerous challenges for collaborators needing to share that data across multiple research centers. We will discuss our efforts to address these challenges through use of cloud computing and our results thus far.

6:50 Molecular Modeling Research in the Cloud at Schrodinger 
Peter S. Shenkin, Ph.D., Vice President, Schrodinger
Schrodinger is a leading researcher in computational molecular modeling. As such, we develop software that runs on the Cloud and we also use the Cloud for internal research, drug-discovery collaborations with commercial partners, and infrastructure needs. In this talk, we describe our experience with scientific computing on the Cloud and present the pros and cons of moving collaboration applications like e-mail, documents, and calendaring to the Cloud.

7:20 Cloud Computing at U. Penn ITMAT Bioinformatics Facility 
Angel Pizarro, Director, University of Pennsylvania ITMAT Bioinformatics Facility
The cloud provides a valuable tool for bioinformatics research, and this presentation will discuss various operational issues in using cloud computing for aiding and abetting high-throughput biomedical research. The focus will be "what is the simplest way to get the job done," drawing on recent projects involving next-generation sequencing and proteomics. We will cover specific examples of bootstrapping a system to install, configure, and start a simple Ruby-based map-reduce system, Cloud Crowd, as well as how to get data to and from the cloud.
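For readers unfamiliar with the pattern, the map-reduce model that systems like Cloud Crowd implement can be sketched in miniature. This is an illustrative toy in Python, not Cloud Crowd's actual Ruby API; the word-count example and function names are invented for the sketch.

```python
from itertools import groupby

def map_phase(records, mapper):
    """Apply the mapper to every record, emitting (key, value) pairs."""
    return [kv for rec in records for kv in mapper(rec)]

def reduce_phase(pairs, reducer):
    """Group pairs by key (the 'shuffle' step), then reduce each group."""
    pairs = sorted(pairs, key=lambda kv: kv[0])
    return {key: reducer(key, [v for _, v in group])
            for key, group in groupby(pairs, key=lambda kv: kv[0])}

# Toy word count over two "documents"
docs = ["read mapping on the cloud", "the cloud scales mapping"]
mapped = map_phase(docs, lambda doc: [(word, 1) for word in doc.split()])
counts = reduce_phase(mapped, lambda key, values: sum(values))
print(counts["cloud"])  # -> 2
```

In a real deployment the map and reduce calls run on separate cloud workers, and the shuffle moves data between them, which is exactly where the data-synchronization issues the talk mentions arise.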

7:50 NGS Workflows on the Cloud: From PoC to Production 
David Powers, Senior Analyst, Business Development, Cycle Computing (formerly Evangelist at Eli Lilly)
In this talk we will compare and contrast real-life workflows that have been moved to the cloud. We will dive into the technical implementation details to explore the workflows, how data synchronization is handled, and how we leverage cloud services in AWS to process large datasets. Use cases will include secondary/tertiary analysis, data archival, and re-analysis. Comparisons between internal clusters at small and large sites and cloud-based methods will also be reviewed.

8:20 Using the Cloud for Interactive Data Analysis of Biopharma Market Data 
Pieter Sheth-Voss, Ph.D., Senior Research Director, Quintiles Market Intelligence and Analytics
In this talk, we describe our experiences developing Provenance®, a cloud-based platform for interactive data visualization and modeling built within Quintiles Market Intelligence and Analytics for analysis of complex healthcare market data. On the commercial side of biotech/pharmaceutical development, extensive analysis of market data is often required to identify and characterize market opportunities and to guide clinical development decisions, such as trial design and research priorities. Market intelligence may include both “primary” data collected directly from physicians, patients, payers, and other constituents, and “secondary” data including patient insurance claims, chart audits, electronic health records, and prescription data. We discuss the specific challenges we set out to address with the cloud: (a) handling a diversity of data models, (b) provisioning a high-speed database, (c) ensuring security, (d) minimizing latency, (e) planning for scalability, and (f) synchronizing data with external sources. We also discuss a few practical aspects within our subject domain that may be more broadly relevant. Our goal is to provide practical insights for researchers seeking to use the cloud for their own compute-intensive end-user applications.

8:50 Closing Remarks 
Jason Stowe, CEO, Cycle Computing

9:00 pm End of Short Course

*Separate registration required





