Evaluating DeepVariant: A New Deep Learning Variant Caller from the Google Brain Team

Yesterday, the Google Brain team released DeepVariant – an updated, open-source (GitHub) deep-learning-based variant caller. A previous version of DeepVariant was first submitted to the DNAnexus-powered PrecisionFDA platform, where it won the award for overall accuracy in SNP calling in the PrecisionFDA Truth Challenge. A manuscript describing DeepVariant has been available on bioRxiv, giving the field an understanding of the application, but full peer-reviewed publication has presumably been waiting for this open-sourced version.

We’re excited by this new method and are making it available to our customers on the DNAnexus Platform. We’ve evaluated DeepVariant to assess its performance relative to other variant calling solutions. In this post, we present that evaluation as well as a brief discussion of deep learning and the mechanics of DeepVariant.

We are pleased to announce the launch of the DeepVariant Pilot Program, offered to a limited number of interested users, with broader access to the tool in the coming months. To request access to DeepVariant on DNAnexus, please sign up here.

What is Deep Learning?

Recent advances in computing power and data scale have allowed complex, multi-layer – or “deep” – neural networks to demonstrate that their “learning plateau” is significantly higher than that of the other statistical methods that had previously supplanted them.

Generally, deep learning networks are fed relatively raw data. Early layers in the network learn “coarse” features on their own (for example, edge detection in vision). Later layers contain abstract/higher-level information. The ability of these deep networks to perform well is highly dependent on the architecture of the neural network – only certain configurations allow information to combine in ways that build meaning.

Google has pushed the leading edge of deep learning with its open-source framework, TensorFlow, and with powerful demonstrations ranging from machine translation to world-champion-level Go to optimizing energy use in data centers.

What is DeepVariant?

DeepVariant applies the Inception TensorFlow framework, which was originally developed to perform image classification. DeepVariant converts a BAM into images similar to genome browser snapshots and then classifies those positions as variant or non-variant. Conceptually, the premise is that if a person can use a genome browser view to determine whether a call is real, a sufficiently smart framework should be able to make the same determination.

The first part is to make examples that represent candidate sites. This involves using a very sensitive caller to find all of the positions that have even a small chance of being variants. In addition, DeepVariant performs a type of local reassembly, which serves as a more thorough version of indel realignment. Finally, multi-dimensional pileup images are produced for the image classifier.

The second part is to call variants using the TensorFlow framework. This passes the images through the Inception architecture that has been trained to recognize the signatures of SNP and Indel variant positions.
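
To make the two-stage flow concrete, here is a minimal sketch in Python/TensorFlow of the idea described above. It is not DeepVariant’s actual code: the pileup dimensions, channel meanings, and the tiny network below are hypothetical stand-ins for the real encoding and the much larger Inception model.

```python
# Illustrative sketch only: encode candidate sites as multi-channel "pileup
# images" and classify them with a CNN. All shapes and layers are made up.
import numpy as np
import tensorflow as tf

# Hypothetical pileup tensor: 100 reads deep x 221 bp wide x 6 channels
# (e.g. base identity, base quality, mapping quality, strand, ...).
HEIGHT, WIDTH, CHANNELS = 100, 221, 6
NUM_CLASSES = 3  # e.g. hom-ref, het, hom-alt

def build_classifier():
    """A toy CNN standing in for the much larger Inception architecture."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(HEIGHT, WIDTH, CHANNELS)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

# Stage 1 ("make examples") would scan the BAM for candidate sites and emit
# one pileup tensor per candidate; here we fake a batch of 8 candidates.
examples = np.random.rand(8, HEIGHT, WIDTH, CHANNELS).astype("float32")

# Stage 2 ("call variants") pushes each example through the trained network.
model = build_classifier()
genotype_probabilities = model.predict(examples)
print(genotype_probabilities.shape)  # (8, 3): one probability triple per site
```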

Both components are computationally intensive. Because care was taken to plug into the TensorFlow framework for GPU acceleration, the call variants step can be accomplished much faster if a GPU machine is available. When Google’s specially designed TPU hardware becomes available, this step may become dramatically faster and cheaper.

The make examples component uses several more traditional approaches, which are more difficult to accelerate and are also computationally intensive. As efficiency gains from GPUs or TPUs improve call variants, the make examples step may limit the ultimate speed and cost. However, given the attractiveness of a fully deep learning approach, the genomics team at Google Brain – which includes some of the pioneers of Indel Realignment and Haplotype construction from the development of GATK (Mark DePristo and Ryan Poplin) – would not have included these steps lightly.

The Inception framework is a “heavy-weight” deep learning architecture, meaning it is computationally expensive to train and to apply. It should not be assumed that all problems in genomics will require the application of Inception. Currently in the field of deep learning, building customized architecture to solve a problem is challenging and time consuming – so the application of a proven architecture makes sense.  In the long term, custom-built architectures for genomics may become more prevalent.

Although the PrecisionFDA version of DeepVariant represents the first application of deep learning to SNP and Indel calling, the Campagne lab has recently uploaded a manuscript on bioRxiv detailing a framework to train SNP and Indel models. Jason Chin has also written an excellent tutorial with a demonstration framework.

How Accurate is DeepVariant?

To understand how DeepVariant performs on real samples, we compared it against several other methods in diverse WGS settings. To summarize quickly, its accuracy represents a significant improvement over the current state of the art across a diverse set of tests.

Assessments on our standard benchmark sets

At DNAnexus, we have standard benchmarking sets for HG001, HG002, and HG005, built from the Genome in a Bottle truth sets. We use these internally to assess methods and to make the best recommendations on tool selection and use for our customers. In each case, we assess on the confident regions for the respective genomes. The assessment is done via the same app as on PrecisionFDA, using hap.py from Illumina. In all cases, except where explicitly mentioned, the reads used represent 35X coverage WGS samples achieved through random downsampling.
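
For readers who want to reproduce this style of comparison, here is a minimal sketch of a hap.py invocation, driven from Python. It assumes hap.py is installed and on the PATH; the file names are placeholders, not the actual benchmark inputs.

```python
# Minimal sketch: compare a query VCF against a Genome in a Bottle truth set
# with hap.py, restricted to the confident regions.
import subprocess

cmd = [
    "hap.py",
    "giab_truth.vcf.gz",          # truth calls (placeholder name)
    "deepvariant_calls.vcf.gz",   # query VCF from the caller under test
    "-f", "giab_confident.bed",   # confident regions for the benchmark genome
    "-r", "hs37d5.fa",            # reference FASTA used for both call sets
    "-o", "hg001_benchmark",      # prefix for the summary/extended outputs
]
subprocess.run(cmd, check=True)
# hg001_benchmark.summary.csv then reports SNP/indel precision, recall, and
# F-measure, the metrics behind the charts shown below.
```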

The following charts show the number of SNP and Indel errors on several samples (lower numbers are better in these graphs). *Samtools not shown in Indel plots due to high indel error rate

DeepVariant dramatically outperforms the other methods in SNPs on this sample, with almost a 10-fold error reduction. Its SNP F-measure is 0.9996. For indels, DeepVariant is also the clear winner.

When DeepVariant is applied to a different human genome – the Ashkenazim HG002 set from Genome in a Bottle – its performance is similarly strong.

Assessments on Diverse Benchmark Sets

Following our standard benchmarks, we sought to determine whether we could identify samples where DeepVariant would perform poorly. With machine-learning models, there is some concern that they may over-fit to their training conditions.

Early Garvan HiSeqX runs – In 2014, the Garvan Institute made the first public release of a HiSeqX genome available through DNAnexus. As is common with new sequencers, the first runs from HiSeqX machines were generally of lower quality than runs produced after years of improvements to experience, reagents, and process. In 2016, Garvan produced a PCR-free HiSeqX run as a high-quality data set for the PrecisionFDA Consistency Challenge.

To better assess the performance of DeepVariant on samples of varying polish, we applied it and other open-source methods to each of these genomes.

In the 2014 Garvan HiSeqX run, DeepVariant retains a significant advantage in SNP calling. However, it performs worse in indel calling. Note that all callers had difficulty calling indels in this sample, with more than 100,000 errors for each caller.

Low-Coverage NovaSeq Samples

To further challenge DeepVariant, we applied the method to data from the new NovaSeq instrument. We used the NA12878-I30 run publicly available from BaseSpace. The NovaSeq instrument uses aggressive binning of base quality values, and its 2-color chemistry is a departure from the HiSeq2500 and HiSeqX. To make the test harder, we also downsampled the data from 35X coverage to 19X coverage.

Even in a sample as exotic as low-coverage NovaSeq, DeepVariant outperforms other methods. At this point, DeepVariant has demonstrated superior performance (often by significant margins) across different human genomes, different machines and run qualities, as well as different coverages.

Other Samples

In addition to the benchmarks presented here, we also ran DeepVariant on 35X NovaSeq data, the high-quality 2016 Garvan HiSeqX sample, and our HG005 benchmark. In the interest of space, we will skip these charts here. Qualitatively, they are similar to the other graphs shown.

How Computationally Intensive is DeepVariant?

As previously discussed, DeepVariant’s superior accuracy comes at the price of computational intensity. When available, GPU (and someday TPU) machines may ease this burden, but it remains high.

The following charts capture the number of CPU hours to complete the HG001 sample running the pipeline without GPUs (lower numbers are better):

Fortunately, the DNAnexus Platform enables extensive parallelism across cloud resources at a much lower cost. Through the use of many machines, 830 core-hours can be completed in a few hours of wall-clock time. The DeepVariant Pilot Program is currently offered to a limited number of interested users, with broader access to the tool in the coming months. To request access to DeepVariant on DNAnexus, please sign up here.

In Conclusion

Experts have been refining approaches for the problem of SNP and Indel calling in NGS data for a decade. Through thoughtful application of a general deep learning framework, the authors of DeepVariant have managed to exceed the accuracy of traditional methods in only a few years’ time.

The true power of DeepVariant lies not in its ability to accurately call variants – the field already has mature solutions for that. Its true power is as a demonstration that, with similar thoughtfulness and some luck, we could rapidly achieve decades’ worth of similar progress in fields where the bioinformatics community is just beginning to focus its effort.

We look forward to working with the field in this process, and hope to get the chance to collaborate with many of you along the way.

Comparison of Somatic Variant Calling Pipelines On DNAnexus

The detection of somatic mutations in sequenced cancer samples has become increasingly standard in research and clinical settings, as these mutations provide insights into genomic regions that can be targeted by precision medicine therapies. Due to the heterogeneity of tumors, somatic variant calling is challenging, especially for variants at low allele frequencies. Researchers use common somatic variant calling tools, including MuTect, MuSE, Strelka, and Somatic Sniper, which detect somatic mutations by conducting paired comparisons between sequenced normal and tumor tissue samples. Each of these variant callers differs in algorithms, filtering strategies, recommendations, and output. Thus, we set out to compare how these individual apps perform on the DNAnexus Platform. Each app was evaluated for recall and precision, cost, and time to complete.

To benchmark some of the common somatic variant calling tools available on the DNAnexus Platform, our team of scientists simulated synthetic cancer datasets at varying sequencing depths. DNA samples from the European Nucleotide Archive were obtained and mapped to the hs37d5 reference with the BWA-mem FASTQ read mapper on DNAnexus.

These samples were then merged into a single BAM file representing the normal sample. To obtain the tumor sample, synthetic variants were inserted into each individual sample with the BAMSurgeon app on DNAnexus, and all simulated samples were then merged into one BAM file constituting the tumor sample. Both the synthetic tumor and normal BAM files had approximately 250X sequencing depth. The synthetic tumor BAM file was then downsampled to a range of sequencing depths: with the help of sambamba, run through the Swiss Army Knife app, it was reduced to 5X, 10X, 15X, 20X, 30X, 40X, 50X, 60X, 90X, and 120X coverage files. The file representing the normal sample was downsampled to 30X sequencing depth. Once the synthetic cancer dataset was created, the common somatic variant calling tools MuTect, MuSE, Strelka, and Somatic Sniper were run to detect single nucleotide variants. Upon completion, the high-quality variants were extracted from each VCF.
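
To illustrate the downsampling step, here is a minimal Python sketch that shells out to sambamba (on the platform this was run through the Swiss Army Knife app). It assumes a sambamba build that supports the --subsample option; file names, thread count, and the seed are placeholders.

```python
# Minimal sketch: downsample a ~250X synthetic tumor BAM to a range of
# target coverages by subsampling a fraction of read pairs with sambamba.
import subprocess

FULL_COVERAGE = 250  # approximate depth of the merged synthetic tumor BAM
for target in [5, 10, 15, 20, 30, 40, 50, 60, 90, 120]:
    fraction = target / FULL_COVERAGE
    subprocess.run([
        "sambamba", "view",
        "-f", "bam",                 # emit BAM rather than SAM
        "-t", "8",                   # worker threads (placeholder)
        "-s", f"{fraction:.4f}",     # keep this fraction of reads
        "--subsampling-seed=42",     # fixed seed for reproducibility
        "-o", f"tumor_{target}x.bam",
        "synthetic_tumor_250x.bam",
    ], check=True)
```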



MuTect performed the best at classifying correct variants, followed by Strelka, MuSE, and Somatic Sniper. This was consistent across allele frequency thresholds of 0.1, 0.2, 0.3, 0.4, and 0.5.

Coverage and Recall

One interesting finding: for the callers investigated, the ability to recall variants at lower allele frequencies showed a similar pattern. Each caller discovers more of the variants as coverage increases before plateauing at a recall ceiling at a certain coverage. Lower allele frequencies require more coverage before recall saturates. 30-fold coverage was required to reach the plateau for 0.5 allele frequency variants, while 40-fold coverage was required for 0.1 allele frequency variants. Reliable detection of lower-frequency variants presumably requires still more coverage to reach a recall plateau.


All tools performed well at identifying relevant variants (>95% precision) regardless of tumor sequencing depth.

To get a more accurate view of the interplay between precision and recall, the harmonic mean of precision and recall (F-score) was computed for each output VCF at each depth. MuTect had the best performance overall, followed by Strelka, MuSE, and Somatic Sniper.
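
The F-score used here is just the harmonic mean of the two metrics; the short snippet below illustrates the calculation with made-up numbers, not the benchmark values.

```python
# F-score as the harmonic mean of precision and recall.
def f_score(precision: float, recall: float) -> float:
    """F1 = 2 * P * R / (P + R)."""
    return 2 * precision * recall / (precision + recall)

print(f_score(precision=0.98, recall=0.90))  # ~0.938
```
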
Runtime & Cost

Out of all the apps, Strelka finished most rapidly for the lowest cost. Compared to MuTect, Strelka did not score as high for precision or recall, but completed the analysis of single nucleotide variants in a fraction of the time.

To get a more detailed comparison between MuTect and Strelka, the 3-way Venn diagram below compares these tools to the truth set. Note that the false negatives from MuTect are likely due to noise in the dataset.

To better visualize the differences between the callers, we converted the output of each caller into a high-dimensional vector in which each variant call observed in any of the samples is one of the dimensions. This format allows us to calculate the distances between each of the programs and the truth set. It also allows us to use standard methods such as Multidimensional Scaling to convert these distances into positions in 2-D space (axis units are arbitrary; only relative positions matter in the graph below).
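
Here is a minimal sketch of that embedding, using made-up call sets and scikit-learn’s MDS as a stand-in for whatever implementation was actually used.

```python
# Illustrative sketch: each caller (and the truth set) becomes a binary
# vector over the union of all observed variant calls; pairwise distances
# between those vectors are then projected into 2-D with MDS.
import numpy as np
from sklearn.manifold import MDS
from sklearn.metrics import pairwise_distances

call_sets = {  # placeholder call sets, one entry per "chrom:pos:ref>alt"
    "truth":         {"1:1000:A>T", "1:2000:C>G", "2:500:G>A", "3:750:T>C"},
    "MuTect":        {"1:1000:A>T", "1:2000:C>G", "2:500:G>A"},
    "Strelka":       {"1:1000:A>T", "2:500:G>A", "3:750:T>C"},
    "MuSE":          {"1:1000:A>T", "1:2000:C>G"},
    "SomaticSniper": {"1:1000:A>T", "9:100:C>T"},  # includes a false positive
}

# One dimension per distinct variant call seen in any call set.
all_calls = sorted(set().union(*call_sets.values()))
vectors = np.array([[call in calls for call in all_calls]
                    for calls in call_sets.values()], dtype=float)

# Pairwise distances between callers, then a 2-D MDS embedding of them.
distances = pairwise_distances(vectors, metric="euclidean")
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(distances)

for name, (x, y) in zip(call_sets, embedding):
    print(f"{name:14s} ({x:+.2f}, {y:+.2f})")  # axis units are arbitrary
```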

Valid variant calling results are crucial as next-generation sequencing data is increasingly applied to the development of targeted cancer therapeutics. Our analysis of MuTect, MuSE, Strelka, and Somatic Sniper found that the best results with respect to precision and recall can be achieved by using MuTect. Strelka was also a top performer, and simultaneously reduced runtime and cost.

Need to detect variants in your dataset? Get started using these tools on DNAnexus today.

This research was performed by Nicholas Hill and Victoria Wang as part of their internship with DNAnexus. The project was supervised by Naina Thangaraj, Arkarachai Fungtammasan, Yih-Chii Hwang, Steve Osazuwa, and Andrew Carroll.

Removing the NGS Analytics Data Bottleneck with Field-Programmable Gate Arrays (FPGAs)

Edico Genome’s FPGA-backed DRAGEN Bio-IT Platform Now Available on DNAnexus

The following is a guest blog, written by our partners at Edico Genome.

With rapid adoption across a variety of practices, next-generation sequencing (NGS) is on track to become one of the largest producers of big data by 2025. While the integration of NGS promises exceptional breakthroughs in its applied practices, one major problem threatens its expansion: a lack of computing power to analyze the rapidly growing body of data.

Current projections have genomic data continuing to double every seven months, a stark acceleration compared to Moore’s Law, which states that CPU capabilities double every two years. The void left in between creates a bottleneck for genomics labs.
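
A quick back-of-the-envelope calculation shows how fast that void widens under those two doubling rates; the five-year horizon below is illustrative, not a projection from the original sources.

```python
# Compare data growth (doubling every 7 months) with CPU capability growth
# (doubling every 24 months) over an assumed five-year horizon.
months = 5 * 12
data_growth = 2 ** (months / 7)    # ~380x more data
cpu_growth = 2 ** (months / 24)    # ~5.7x more compute per CPU
print(f"data: {data_growth:.0f}x, compute: {cpu_growth:.1f}x, "
      f"gap: {data_growth / cpu_growth:.0f}x")
```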

Designed to uncork this big data bottleneck, Edico Genome’s DRAGEN™ (Dynamic Read Analysis for Genomics) Platform leverages FPGA (Field-Programmable Gate Array) technology to provide customers with hardware-accelerated implementation of genome pipeline algorithms. Leveraging FPGAs, DRAGEN allows customers to analyze NGS data at unprecedented speeds with extremely high accuracy and unwavering dependability.

Uncorking the big data bottleneck with DRAGEN

In contrast to conventional CPU-based systems, which must execute lines of software code to perform an algorithmic function, FPGAs implement algorithms as logic circuits, providing an output almost instantaneously. By replicating these logic circuits thousands of times over, DRAGEN is able to achieve industry-leading speeds by allowing for massive parallelism, unlike CPUs, which are limited to running only one task per core. FPGAs are also fully reconfigurable, enabling customers to switch between functions and pipelines within seconds.

As a result, DRAGEN delivers high accuracy while functioning with industry-leading speed, efficiency, and parallelism. DRAGEN can process an entire human genome at 30x coverage in about 90 minutes, as compared to over 30 hours using a traditional CPU-based system, saving customers time and money. DRAGEN’s Genome Pipeline is now available on DNAnexus at a reduced trial rate until October 31, 2017. To sign up for exclusive promotional pricing, visit https://www.dnanexus.com/edico-trial.