GATK4 on DNAnexus

The Broad Institute’s Genome Analysis Toolkit (GATK) is one of the most popular and well-regarded repositories of best practices variant calling workflows, and DNAnexus has consistently provided optimized support for these pipelines on our platform. Announced on January 9th, GATK4 is the latest release of the toolkit, and this release is particularly significant: GATK has been completely re-architected and is now fully open source. The best practices workflow descriptions are also explicitly specified and distributed in an open format called the Workflow Description Language (WDL).

At DNAnexus, we are excited about supporting open, portable, and reproducible ways to share not only these new best practices workflows, but also general bioinformatics workflows written in WDL. To execute the GATK4 workflow definitions written in WDL and maintained by the Broad, we use a new utility we developed called dxWDL. With this tool, a GATK WDL workflow can be used just like any other workflow on the platform, with all of the additional benefits our platform provides (e.g., provenance tracking, reproducibility, organization management, project collaboration, and security). As we did with GATK3, we are in the process of optimizing the performance of GATK4 on our platform, and future posts will go into more detail about how it performs in terms of efficiency and accuracy. In the interim, we are pleased to announce the launch of the DNAnexus GATK4 Pilot Program, to be offered to a limited number of interested users, with broader access to the tool in the coming months. To request early access to GATK4 on DNAnexus, please sign up here.
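To give a feel for the mechanics, here is a minimal sketch of compiling a WDL file into a native DNAnexus workflow by calling dxWDL from Python. The jar name, workflow file, and project ID are placeholders; the compile subcommand follows dxWDL’s documented usage, though the exact options can vary between releases.

    import subprocess

    # Compile a WDL workflow into a native DNAnexus workflow using dxWDL.
    # The jar name, WDL file, and project ID below are placeholders; the
    # "compile" subcommand and "-project" option follow dxWDL's documented
    # usage, but exact flags can vary between releases.
    subprocess.run(
        [
            "java", "-jar", "dxWDL.jar",
            "compile", "haplotype_caller.wdl",  # placeholder workflow file
            "-project", "project-xxxx",         # destination DNAnexus project
        ],
        check=True,
    )

Once compiled, the resulting workflow appears in the project like any natively built workflow, which is what makes the platform features mentioned above apply to it automatically.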

As an example of using GATK4 with dxWDL, we successfully ran a single-sample haplotype workflow and Broad’s production germline variant calling workflow, both written in WDL, on DNAnexus. For the production workflow, the version run on DNAnexus was modified slightly, since Broad’s original pipeline has some Google Cloud-specific references. The following figure shows what the haplotype workflow looks like on DNAnexus:

After execution, the timeline of tasks can be easily visualized; here is Broad’s more complex production germline variant calling workflow:

Using dxWDL for GATK4 marks a change in how we will be executing these and other workflows written to be portable across platforms. In contrast to our previous approach of maintaining our own GATK applications, we will be directly supporting open and portable languages, such as WDL and CWL. Portability through languages such as WDL not only enables research in our field to be better critiqued and improved upon, but also significantly reduces friction when communicating method details to collaborators and regulatory agencies. Oftentimes, while the details of a specific method in a workflow do not change, subtleties in the workflow definition do, leading to reproducibility challenges, such as the changes we needed to make in the production pipeline described above. With our adoption of open workflow languages like WDL, we will more easily share these workflow-level differences with the community and work with one another toward a single representation that runs portably across a variety of execution platforms.

DNAnexus is proud to be one of the first genome informatics platforms to support WDL. As a member of the core team governing future developments in WDL, we look forward to continuing our work with the Broad and the broader community so that the best practices WDL workflows can be run as efficiently and portably as possible.

CIO Webinar Series: Genomic Data Privacy in the Cloud

Join our two-part webinar series focusing on infrastructure requirements to scale geno-pheno analysis and realize genomic-based clinical trials. Can’t make it? Register anyway and we’ll send you the recording.

Advances in DNA sequencing have created tremendous volumes of whole-genome sequence and multi-omics data, creating new opportunities to explore the genome’s role in human disease. As the use of human genomic information becomes more prevalent in research and clinical care, it is important to understand the responsibilities involved in handling data in these contexts. The inclusion of genomic information has also been shown to reduce the costs and duration of clinical trials while improving their results. Falling sequencing costs and the increasing value of NGS in clinical trials are leading some organizations to incorporate NGS into the majority of their trials.

Webinar 1: Understanding Security, Privacy, and the Regulatory Landscape for Genomics in Research and Clinical Settings
January 23rd, 2018
10:00am PST/1:00pm EST

Loren Buhle, PhD, VP Security, Quality & Compliance
Loren is a seasoned leader with over three decades of experience working in the regulated spaces of life sciences, clinical research, and basic research. He brings an unusual combination of scientific, commercial, regulatory, quality, and IT expertise to identifying and managing security, quality, and compliance issues.

Webinar 2: Major IT Considerations for Genomics in Healthcare
February 28th, 2018
10:00am PST/1:00pm EST

Omar Serang, Chief Cloud Officer
Omar has decades of experience building global operations teams and infrastructures, including cloud computing at Amazon Web Services, social web real-time analysis services at Topsy Labs, and messaging and messaging security services at Cloudmark and Critical Path. 

Hosted in partnership with Microsoft Azure

Evaluating DeepVariant: A New Deep Learning Variant Caller from the Google Brain Team

Yesterday, the Google Brain team released DeepVariant, an updated, open-source (GitHub) deep-learning-based variant caller. A previous version of DeepVariant was first submitted to the DNAnexus-powered PrecisionFDA platform, winning the award for overall accuracy in SNP calling in the PrecisionFDA Truth Challenge. A manuscript describing DeepVariant has been available on bioRxiv, giving the field an understanding of the application, but full peer-reviewed publication has presumably been waiting for this open-sourced version.

We’re excited by this new method and are making it available to our customers on the DNAnexus Platform. We’ve done an evaluation of DeepVariant to assess its performance relative to other variant calling solutions. In this post, we will present that evaluation as well as a brief discussion of deep learning and the mechanics of DeepVariant.

We are pleased to announce the launch of the DeepVariant Pilot Program, to be offered to a limited number of interested users, with broader access to the tool in the coming months. To request access to DeepVariant on DNAnexus, please sign up here.

What is Deep Learning?

Recent advancements in computing power and data scale have allowed multi-layer, complex-architecture (or “deep”) neural networks to demonstrate that their “learning plateau” is significantly higher than that of the other statistical methods that had previously supplanted them.

Generally, deep learning networks are fed relatively raw data. Early layers in the network learn “coarse” features on their own (for example, edge detection in vision). Later layers contain abstract/higher-level information. The ability of these deep networks to perform well is highly dependent on the architecture of the neural network – only certain configurations allow information to combine in ways that build meaning.

Google has pushed the leading edge of deep learning with its open-source framework, TensorFlow, and with powerful demonstrations ranging from machine translation to world-champion-level Go to optimizing energy use in data centers.

What is DeepVariant?

DeepVariant applies the Inception architecture, a TensorFlow model originally developed to perform image classification. DeepVariant converts a BAM into images similar to genome browser snapshots and then classifies the positions as variant or non-variant. Conceptually, it builds on the idea that if a person can use a genome browser to determine whether a call is real, a sufficiently smart framework should be able to make the same determination.

The first part, make examples, generates examples that represent candidate sites. This involves finding all of the positions that have even a small chance of being variants, using a very sensitive caller. In addition, DeepVariant performs a type of local reassembly, which serves as a more thorough version of indel realignment. Finally, multi-dimensional pileup images are produced for the image classifier.
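To make the image analogy concrete, here is a toy sketch of encoding the reads around one candidate site as a multi-channel tensor. The channels (base identity, base quality, strand) and dimensions are illustrative assumptions on our part; DeepVariant’s actual make examples step defines its own encoding.

    import numpy as np

    def pileup_image(reads, window=221, height=100):
        """Encode reads around a candidate site as a 3-channel image.

        A conceptual toy only: channels here are base identity, base
        quality, and strand. DeepVariant's real make examples step uses
        its own channel definitions, dimensions, and read ordering.
        Each read is a list of (base, quality, is_reverse) tuples.
        """
        img = np.zeros((height, window, 3), dtype=np.float32)
        for row, read in enumerate(reads[:height]):
            for col, (base, qual, is_reverse) in enumerate(read[:window]):
                img[row, col, 0] = "ACGT".index(base) / 3.0    # base identity
                img[row, col, 1] = min(qual, 60) / 60.0        # base quality
                img[row, col, 2] = 1.0 if is_reverse else 0.0  # strand
        return img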

The second part, call variants, passes the images through the Inception architecture, which has been trained to recognize the signatures of SNP and indel variant positions.
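For intuition about this step, the sketch below assembles a small stand-in convolutional classifier over pileup tensors using tf.keras. It is not Inception, just a minimal model shaped like the problem: image in, variant or non-variant out.

    import tensorflow as tf

    # A small stand-in convolutional classifier over pileup images, for
    # intuition only; DeepVariant itself uses the far larger Inception
    # architecture. The two output classes match the variant / non-variant
    # decision described above.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu",
                               input_shape=(100, 221, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")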

Both components are computationally intensive. Because care was taken to plug into the TensorFlow framework for GPU acceleration, the call variants step can be accomplished much faster if a GPU machine is available. When Google’s specially designed TPU hardware becomes available, this step may become dramatically faster and cheaper.

The make examples component uses several more traditional approaches, which are both more difficult to accelerate and computationally intensive. As efficiency gains from GPUs or TPUs improve call variants, the make examples step may limit the ultimate speed and cost. However, given the attractiveness of a fully deep learning approach, the genomics team at Google Brain would not have included these steps lightly; the team includes some of the pioneers of indel realignment and haplotype construction from the development of GATK (Mark DePristo and Ryan Poplin).

The Inception framework is a “heavy-weight” deep learning architecture, meaning it is computationally expensive to train and to apply. It should not be assumed that all problems in genomics will require the application of Inception. Currently, in the field of deep learning, building a customized architecture to solve a problem is challenging and time-consuming, so the application of a proven architecture makes sense. In the long term, custom-built architectures for genomics may become more prevalent.

Although the PrecisionFDA version of DeepVariant represents the first application of deep learning to SNP and indel calling, the Campagne lab has recently uploaded a manuscript to bioRxiv detailing a framework to train SNP and indel models. Jason Chin has also written an excellent tutorial with a demonstration framework.

How Accurate is DeepVariant?

To understand how DeepVariant performs on real samples, we compared it against several other methods in diverse WGS settings. To summarize quickly: its accuracy represents a significant improvement over the current state of the art across a diverse set of tests.

Assessments on our standard benchmark sets

At DNAnexus, we have a standard benchmarking set covering HG001, HG002, and HG005, built from the Genome in a Bottle truth sets. We use this internally to assess methods and to make the best recommendations on tool selection and use for our customers. In each case, we assess on the confident regions for the respective genomes. The assessment is done via the same app as on PrecisionFDA, using hap.py from Illumina. In all cases, except where explicitly mentioned, the reads used represent 35X-coverage WGS samples achieved through random downsampling.
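For readers who want to run a comparable assessment themselves, the sketch below invokes hap.py against a Genome in a Bottle truth set from Python. All file names are placeholders; the options shown (truth VCF, query VCF, confident-regions BED, reference, output prefix) follow hap.py’s standard usage.

    import subprocess

    # Compare a query call set to a Genome in a Bottle truth set with
    # Illumina's hap.py, restricted to the confident regions as described
    # above. All file names are placeholders.
    subprocess.run(
        [
            "hap.py",
            "HG001_giab_truth.vcf.gz",    # truth calls
            "HG001_query.vcf.gz",         # calls being evaluated
            "-f", "HG001_confident.bed",  # GiaB confident regions
            "-r", "reference.fa",         # reference FASTA
            "-o", "hg001_eval",           # output prefix for metrics tables
        ],
        check=True,
    )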

The following charts show the number of SNP and indel errors on several samples (lower numbers are better in these graphs). *Samtools is not shown in the indel plots due to its high indel error rate.

DeepVariant dramatically outperforms other methods in SNPs on this sample, with almost a 10-fold error reduction. SNP F-measure is 0.9996. For indels, DeepVariant is also the clear winner.

When DeepVariant is applied to a different human genome – the Ashkenazim HG002 set from Genome in a Bottle – its performance is similarly strong.

Assessments on Diverse Benchmark Sets

Following our standard benchmarks, we sought to determine whether we could identify samples where DeepVariant would perform poorly. With machine-learning models, there is some concern that they may over-fit to their training conditions.

Early Garvan HiSeqX runs – In 2014, the Garvan Institute made the first public release of a HiSeqX genome available through DNAnexus. As is common with new sequencers, the first runs from HiSeqX machines were generally of lower quality than runs produced after years of improvements in experience, reagents, and process. In 2016, Garvan produced a PCR-free HiSeqX run as a high-quality data set for the PrecisionFDA Consistency Challenge.

To better assess the performance of DeepVariant on samples of varying polish, we applied it and other open-source methods to each of these genomes.

In the 2014 Garvan HiSeqX run, DeepVariant retains a significant advantage in SNP calling. However, it performs worse in indel calling. Note that all callers had difficulty calling indels in this sample, with more than 100,000 errors from each caller.

Low-Coverage NovaSeq Samples

To further challenge DeepVariant, we applied the method to data from the new NovaSeq instrument, using the NA12878-I30 run publicly available from BaseSpace. The NovaSeq instrument uses aggressive binning of base quality values, and its 2-color chemistry is a departure from the HiSeq2500 and HiSeqX. To make the test harder, we downsampled the data from 35X coverage to 19X coverage.
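For those replicating this setup, random downsampling of this kind is commonly done with samtools; the sketch below keeps roughly 19/35 of the reads from a placeholder BAM, where the -s argument joins a random seed to the fraction of reads to keep.

    import subprocess

    # Randomly downsample a 35X BAM to roughly 19X with samtools. The -s
    # argument is SEED.FRACTION: seed 42, keeping 19/35 ~ 0.543 of reads.
    # File names are placeholders.
    subprocess.run(
        [
            "samtools", "view", "-b",
            "-s", "42.543",
            "-o", "novaseq_19x.bam",
            "novaseq_35x.bam",
        ],
        check=True,
    )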

Even in a sample as exotic as low-coverage NovaSeq, DeepVariant outperforms other methods. At this point, DeepVariant has demonstrated superior performance (often by significant margins) across different human genomes, different machines and run qualities, as well as different coverages.

Other Samples

In addition to the benchmarks presented here, we also ran on 35X NovaSeq data, the high-quality 2016 Garvan HiSeqX sample, and our HG005 benchmark. In the interest of space, we will skip these charts here; qualitatively, they are similar to the other graphs shown.

How Computationally Intensive is DeepVariant?

As previously discussed, DeepVariant’s superior accuracy comes at the price of computational intensity. When available, GPU (and someday TPU) machines may ease this burden, but it remains high.

The following charts capture the number of CPU hours to complete the HG001 sample running the pipeline without GPUs (lower numbers are better):

Fortunately, the DNAnexus Platform enables extensive parallelism across cloud resources at a much lower cost. Through the use of many machines, 830 core-hours of work can be completed in a few hours of wall-clock time. The DeepVariant Pilot Program is currently offered to a limited number of interested users, with broader access to the tool in the coming months. To request access to DeepVariant on DNAnexus, please sign up here.
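As a back-of-the-envelope illustration of that parallelism (the instance size and count here are our own assumptions, not a statement of how the platform actually schedules this pipeline):

    # 830 core-hours of work, spread across cloud instances. The instance
    # size and count are illustrative assumptions only.
    core_hours = 830
    cores_per_instance = 32
    instances = 10
    wall_clock_hours = core_hours / (cores_per_instance * instances)
    print(f"about {wall_clock_hours:.1f} hours wall-clock")  # about 2.6 hours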

In Conclusion

Experts have been refining approaches to the problem of SNP and indel calling in NGS data for a decade. Through thoughtful application of a general deep learning framework, the authors of DeepVariant have managed to exceed the accuracy of traditional methods in only a few years’ time.

The true power of DeepVariant lies not in its ability to accurately call variants; the field is mature, with many solutions for doing so. Its true power is as a demonstration that, with similar thoughtfulness and some luck, we could rapidly achieve decades’ worth of similar progress in fields where the bioinformatics community is just beginning to focus its effort.

We look forward to working with the field in this process, and hope to get the chance to collaborate with many of you along the way.