DNAnexus expands with Microsoft Azure Cloud Platform

At DNAnexus we focus on enabling scientific research and discovery by removing barriers such as data compliance and security concerns, allowing our global community of customers to focus on innovation, not logistics. DNAnexus is the only cloud-based translational informatics platform that offers regional services in China, North America, Europe, and Asia Pacific, and now, we are pleased to offer our customers a second region in Europe, Microsoft Azure’s West Europe.

Data generated in the EU is subject to strict European privacy regulations. DNAnexus has adopted policies and implemented procedures that comply with these regulations. Consequently, DNAnexus users based in European countries can upload genomic data from individuals in those countries in compliance with European data privacy laws and regulations (e.g., GDPR), as well as with requirements regarding data localization.

The additional region in Europe will address the needs of our growing European BioPharma and Precision Medicine customer base, allowing these organizations to seamlessly access the Azure cloud platform via DNAnexus while complying with European regulations, including the GDPR and other data privacy rules. Organizations will be able to collaborate with research centers in the EU and across the globe to analyze biomedical data on one common platform while maintaining compliance with European regulations. The DNAnexus Platform lets data owners set up compliance policies and enforce local/regional restrictions around their data, eliminating the confusion and potential risks of collaborating globally.

Other security features enable researchers to log, monitor, and block unauthorized data access and prevent users from downloading or copying data outside of their regional boundaries. These features, in addition to the industry-leading data management, workflow, and data provenance tracking features of the DNAnexus Platform, provide an environment for working with large-scale biomedical and genomics data in compliance with the EU GDPR and other data privacy regulations, enabling customers to focus on scientific discovery – not compliance.

You can find out more about DNAnexus' expansion and offerings at Bio-IT World. Visit DNAnexus in booth #310 and learn more about our conference activities on our blog. You can also stop by our demo session at the Microsoft booth (#446) on Wednesday, May 16th at 3:30pm.

Amplifying Google’s DeepVariant

In this blog we quantify Google Brain’s recent improvements to DeepVariant, detailing significant improvements on both exomes and PCR-prepared genomes. We reflect on how this improvement was achieved and what it suggests for deep learning within bioinformatics.

Introduction

The Google Brain Team released DeepVariant as an open-source GitHub repository in December 2017. Our initial evaluation of DeepVariant and our Readshift evaluation method (blog) (code) identified that while DeepVariant had the highest accuracy of all methods on the majority of samples, there were a few outliers with much higher Indel error rates.

Subsequently, we realized that these difficult samples had a common feature – they were all prepared with PCR. Also, the initial release of DeepVariant had not been trained on exome data. Based on this, DNAnexus provided exome and Garvan Institute PCR WGS samples to Google Brain to train improved DeepVariant models, which are now released. We also recognize Brad Chapman, who independently collaborated with Google Brain.

Improving Deep Learning Methods Compared to Improving Traditional Methods

Many factors make variant calling more complex in PCR+ samples and exomes. PCR amplification is biased by GC-content and other factors. Errors that accumulate in the PCR process are difficult to distinguish from true variants. Exomes have an additional complexity of uneven capture efficiency and coverage.

Knowledge of these facts is incorporated differently in human-programmed methods, traditional machine learning, and deep learning methods. Human-written methods require a programmer to carefully develop heuristics which capture the relevant properties without too much rigidity. Traditional machine learning approaches require a scientist to identify informative features that capture the information, encode those from the raw data, and train and apply models.

Deep learning based methods like DeepVariant attempt to represent the underlying data in as raw a manner as possible, relying on the deep learning models to learn the features. Here, the scientist’s role becomes identifying examples that embody the diversity of conditions the tool must handle, and presenting as many varied examples as possible to the training machinery. How Google Brain made these improvements may be as interesting as the improvements themselves.
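
To make the curation idea concrete, here is a toy sketch in Python (explicitly not DeepVariant’s actual training code; the condition labels and data layout are hypothetical) of drawing training batches that balance sample-prep conditions so that no single condition dominates what the network learns:

```python
import random
from collections import defaultdict

def balanced_batch(examples, batch_size, rng=None):
    """Draw one training batch that mixes sample-prep conditions evenly.

    examples: list of (tensor, label, condition) tuples, where condition is a
    hypothetical tag such as "pcr_free_wgs", "pcr_plus_wgs", or "exome".
    """
    if not examples:
        return []
    rng = rng or random.Random(0)
    buckets = defaultdict(list)
    for example in examples:
        buckets[example[2]].append(example)
    per_condition = max(1, batch_size // len(buckets))
    batch = []
    for bucket in buckets.values():
        # Take up to per_condition examples from each condition.
        batch.extend(rng.sample(bucket, min(per_condition, len(bucket))))
    rng.shuffle(batch)
    return batch[:batch_size]
```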

Exome-Trained Models Significantly Improve Performance

Google Brain’s initial release was DeepVariant v0.4. The v0.5 version added the ability to apply either a WGS model or an exome model. The v0.6 version specifically added PCR+ training for the first time.

All evaluations use the hs37d5 reference and v3.3.2 Truth Sets from NIST Genome in a Bottle. Evaluation is performed with hap.py using the same method as precisionFDA. Google’s release notes indicate they never train on HG002 or chr20 of any sample.
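
For readers who want to reproduce this style of comparison, a minimal sketch of a hap.py run (wrapped in Python) is below. The file names are placeholders for the GIAB truth VCF, confident-region BED, query VCF, and reference, and only hap.py’s basic comparison flags are shown:

```python
import subprocess

# Placeholder file names; substitute your own truth set, confident regions,
# query callset, and reference FASTA.
subprocess.run(
    [
        "hap.py",
        "HG002_GIAB_v3.3.2_truth.vcf.gz",         # NIST Genome in a Bottle truth calls
        "deepvariant_HG002.vcf.gz",               # query callset being evaluated
        "-f", "HG002_GIAB_v3.3.2_confident.bed",  # high-confidence regions
        "-r", "hs37d5.fa",                        # reference used throughout these benchmarks
        "-o", "happy_HG002",                      # output prefix for summary metrics
    ],
    check=True,
)
```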

On exomes, DeepVariant improved from approximate parity to a 2-fold error reduction relative to GATK. There are two ways to put this improvement in accuracy into context. DeepVariant has an absolute decrease of 231 errors and 121 false positives. For a callset of 500,000 exomes – as the UK BioBank will soon make available – the fact that false calls tend to distribute randomly while true variants are shared in the population could mean a reduction of roughly 60 million false variant positions in the full callset (for reference, the 60,000-exome ExAC set contained 7 million variant positions).
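
The arithmetic behind the 60 million figure is simple; the sketch below just multiplies the per-exome false-positive reduction reported above by a UK BioBank-scale callset size:

```python
# Back-of-envelope calculation (illustrative only).
false_positives_avoided_per_exome = 121   # absolute reduction reported above
exomes_in_callset = 500_000               # UK BioBank-scale exome callset
# False calls tend to be private to a sample, so each avoided false positive
# roughly removes one false variant position from the joint callset.
print(false_positives_avoided_per_exome * exomes_in_callset)  # 60,500,000 ~ 60 million
```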

Another way to frame the value: does the improved accuracy allow similar performance to traditional methods at lower coverage? Figure 2 demonstrates that DeepVariant run on the same exome, downsampled randomly to 50% coverage, is more accurate than other methods at full coverage. This could allow a re-evaluation of required sequencing depth to either increase throughput, save cost, or refocus sequencing depth on complex but important regions.
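
As an aside, random downsampling of a BAM to roughly 50% coverage, similar in spirit to what Figure 2 shows, can be done with samtools; a minimal sketch follows (file names are placeholders, and 42 is an arbitrary random seed):

```python
import subprocess

# samtools view -s SEED.FRACTION keeps that fraction of reads;
# here 42 is the seed and .5 keeps roughly 50% of the reads.
subprocess.run(
    ["samtools", "view", "-b", "-s", "42.5", "exome_full.bam", "-o", "exome_50pct.bam"],
    check=True,
)
```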

Including PCR+ WGS Data Greatly Improves DeepVariant Performance

DeepVariant was not specifically trained on PCR-prepared WGS samples until the v0.6 release, when 5 PCR+ genomes were added to the training set. Even before this addition, DeepVariant was the most accurate SNP caller surveyed in our PCR+ data. However, as Figure 3A indicates, the inclusion of PCR training data improved SNP accuracy noticeably.

Indels in these samples were a problem for almost all of the callers. Indel error rates were so much higher in these samples that this is the dominant error mode across every method. The exception was Strelka2, which performed vastly better. Illumina clearly put great care into modeling PCR errors when developing Strelka2. Edico was also able to use this observation to improve their DRAGEN method, as detailed in our recent blog.

This observation was key to realizing that the performance penalty was due to PCR. When facing a problem with additional noise and complexity, it can be unclear whether the lack of signal makes the problem fundamentally less solvable or if the problem remains similarly tractable, but instead requires more effort. Strelka2’s performance was proof that current methods could improve significantly.

As Figure 3B shows, the inclusion of PCR+ data has a huge impact on the error rate of DeepVariant, with a 3-fold error reduction. Simply identifying the right training examples was sufficient to go from worse than GATK to 10% better than the prior leader, Strelka2.

How Do New Training Data Impact Other Applications?

By including PCR+ and exome data in training, DeepVariant is now being asked to encode more information within its networks. Asking methods to multi-task in this way can lead to interesting effects and can force trade-offs which decrease overall performance.

An interesting phenomenon observed with deep learning methods is that challenging them with diverse, hard problems can sometimes improve general performance. Ryan Poplin, one of the authors of DeepVariant, discusses this phenomenon in a recent ML/AI Podcast on his work on an image classification method for diabetic retinopathy.

This could be understood in a few ways. First, harder problems may create pressure for network weights to identify subtle but meaningful connections in the data. For example, a strong deep learning approach trained only against novice chess opponents would only get so good. Another is that training examples that depend differentially on various aspects of a problem help resist overfitting, allowing the model to discover and preserve subtle signals it could not otherwise. The result is a better generalist model.

Many variables changed in the progression of DeepVariant from v0.4 to v0.6 – more overall training data was added and the input tensors changed – but if DeepVariant were benefiting from the added complexity, one might hypothesize that this would manifest as a more pronounced change in Indel performance, as that is the area most impacted in both the exome and PCR data. To test this, we ran each version on a “standard” benchmark – PCR-Free WGS HG002.

The new releases of DeepVariant show a trajectory of improvement, which is impressive given that DeepVariant led our benchmarks in accuracy even in its first release. SNP error rate is about 10% improved. The Indel error rate is improved more substantially, at a 40% reduction. Also, the greatest impact occurs with the addition of the PCR training data in v0.6. With this sort of uncontrolled experiment, we can’t conclusively say that this occurs because DeepVariant is learning something general about Indel errors from the PCR data. However, the prospect is enticing.

What Will Deep Learning Bring for the Field?

As deep learning methods begin to enter the domain of bioinformatics, it is natural to wonder how skills in the field will shift in response. Some may fear these methods will obsolete programming expertise or domain knowledge in genomics and bioinformatics. In theory, the raw methods to train DeepVariant on these new data types, confirm the improvement, and prevent regressions can be shockingly automatic.

However, to reach this point, domain experts had to understand the concepts of PCR and exome sequencing and identify their relevance to errors in the sequencing process. They had to understand where to get valid, labeled data of sufficient quantity and quality. They had to determine the best ways to represent the underlying data in a trainable format.

The importance of domain expertise – what is sequencing and what are its nuances – will only grow. How it manifests may shift from extracting features and hand-crafting weights to identifying examples that fully capture that nuance.

We also see these methods as enabling, not deprecating, programming expertise – these frameworks depend on large, complex training and evaluation infrastructures. The Google Brain team recently released Nucleus, a framework for training genomics models. Nucleus contains converters between genomics formats such as BAM and VCF and TensorFlow’s data formats. This may allow developers to tap into deep learning methods as specialized modules of a broader bioinformatics solution.
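
As a rough illustration, a minimal sketch of reading genomics data with Nucleus readers (assuming the nucleus.io reader API described in the project’s documentation; file paths are placeholders) might look like this:

```python
from nucleus.io import sam, vcf

# Iterate aligned reads; each record is a protocol buffer that can feed
# TensorFlow-based training pipelines.
with sam.SamReader("reads.bam") as bam_reader:
    for read in bam_reader:
        pass  # e.g. build pileup tensors from read data

# Iterate variants; these can serve as training labels.
with vcf.VcfReader("calls.vcf.gz") as vcf_reader:
    for variant in vcf_reader:
        pass  # e.g. extract genotype labels
```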

We hope that amongst the detail, this blog has communicated that deep learning is not a magic box. It remains essential for a scientist to carefully consider what the nuances of a problem are; how to rigorously evaluate performance; where blind spots in a method are; and which data will shine light on them.

How to Train Your DRAGEN – Evaluating and Improving Edico Genome’s Rapid WGS Tools

Authors: 

In this blog post, we discuss Edico Genome’s DRAGEN Bio-IT Platform for rapid secondary analysis. We benchmark DRAGEN for speed and accuracy on diverse WGS datasets. Finally, we detail how Edico Genome and DNAnexus collaborated to improve the DRAGEN pipeline’s performance on noisy datasets and PCR-prepared samples in the newest version.

Introduction – DRAGEN and FPGAs

Sequencing volume continues to grow exponentially, exceeding the increase in CPU speed. As this growth strains analytical capacity, it creates demand for fast and efficient analysis approaches (and platforms like DNAnexus to coordinate them). Edico Genome develops genomic pipelines that leverage specialized Field-Programmable Gate Arrays (FPGAs) to dramatically increase analysis speed.

Each move along the spectrum of CPU -> GPU -> FPGA -> ASIC hardware trades programming/execution flexibility for speed and efficiency. With skill, FPGAs can be reprogrammed, allowing Edico Genome to load its algorithms into compatible cloud hardware and to update this logic over time. DRAGEN is available on DNAnexus as a platform app.

Edico Genome’s speed has proven critical in time-sensitive applications, such as rapid diagnosis of newborns in the NICU at Rady Children’s Hospital – a mutual Edico Genome-DNAnexus customer.

Notes on Proper Training and Evaluation of Your DRAGEN

DNAnexus recently released Readshift, a method to generate real, noisy NGS data (blog) (code). Edico Genome has produced a new version of their WGS tools, labeled DRAGEN V2+, which we evaluate using Readshift.

First, we present evaluations on an HG002 benchmark dataset that was never made available to Edico Genome, to ensure that the improvements apply generally. This set of benchmarks uses 35X PCR-free WGS data with the hs37d5 reference. Evaluation is performed using the same methods as used on precisionFDA. We compare DRAGEN V2 (the prior version) to DRAGEN V2+ to demonstrate Edico Genome’s rapid improvement.

DRAGEN-Scale: How Fast is DRAGEN?

Figure 1 compares the execution speed of DRAGEN relative to popular pipelines executed on DNAnexus. For many of these apps (e.g. GATK3), DNAnexus has applied additional optimizations to improve parallelism, meaning they are faster than they would be on local infrastructure. When time is critical, DRAGEN V2 and V2+ are the clear leaders.

DRAGEN accelerates both the mapping process and variant calling, which can be run independently. This allows users to mix and match if a specific variant caller is required. Upstream of secondary analysis, Edico Genome offers an accelerated BCL2FASTQ tool that greatly improves speed and efficiency while producing identical FASTQs.
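
Since DRAGEN runs on DNAnexus as a platform app, launching it from Python looks roughly like the hedged dxpy sketch below; the app name, input field names, and file/project IDs are hypothetical placeholders, so consult the actual app’s page for its real input spec:

```python
import dxpy

# Hypothetical app name and inputs; substitute the real DRAGEN app identifier
# and its documented input fields.
dragen = dxpy.DXApp(name="dragen")
job = dragen.run(
    {
        "fastqs": [{"$dnanexus_link": "file-xxxx"}],   # placeholder file IDs
        "reference": {"$dnanexus_link": "file-yyyy"},
    },
    project="project-zzzz",
    folder="/dragen_output",
)
print(job.get_id())
```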

How Accurate is DRAGEN?

Figures 2 and 3 demonstrate that DRAGEN’s speed does not come at the expense of accuracy. The newest version of DRAGEN achieves a ~40% reduction in SNP error rate and a ~50% reduction in Indel error rate. Edico Genome was also recently designated as one of the winners of the precisionFDA Hidden Treasures Challenge.

How to Train Your DRAGEN – New Improvements in DRAGEN V2+

The recent DNAnexus Readshift blog post compared pipeline performance under a number of noisy conditions. Following that post, DNAnexus and Edico Genome discussed improvements based on the Readshift findings. Edico Genome rapidly iterated through several new development versions, which we built as DNAnexus Platform apps, evaluated, and discussed with the Edico Genome team.

These benchmarks are conducted with the same methods as on precisionFDA. Here we use the hg38 reference. For DRAGEN and GATK4 we use hg38 with ALT contigs. For other callers, we use the hg38 reference with decoy sequences, but without ALT contigs.

The labels +0.0 std, +0.5 std, etc. indicate a sample whose average base quality is that many standard deviations worse than baseline. Think of these as progressively harder. (Full details)
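
To make these labels concrete, here is an illustrative sketch (not Readshift’s actual implementation) of computing a baseline per-read mean base quality distribution and the target mean for a “+1.0 std” condition:

```python
import statistics
import pysam

def per_read_quality_stats(bam_path, max_reads=100_000):
    """Return (mean, std) of per-read mean base quality over a sample of reads."""
    means = []
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for i, read in enumerate(bam.fetch(until_eof=True)):
            if read.query_qualities:
                means.append(statistics.fmean(read.query_qualities))
            if i + 1 >= max_reads:
                break
    return statistics.fmean(means), statistics.stdev(means)

baseline_mean, baseline_std = per_read_quality_stats("baseline.bam")  # placeholder path
target_mean = baseline_mean - 1.0 * baseline_std  # the "+1.0 std" (harder) condition
print(f"target mean base quality: {target_mean:.1f}")
```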

Greater Accuracy on PCR Samples

In both the Readshift blog and an earlier DeepVariant blog, we noticed that certain HiSeqX samples showed much worse indel calling performance. We subsequently realized that these samples were the only ones that were not PCR-free. The use of PCR is required in several cases, e.g. for small DNA inputs, and it is part of the new fast, automated-prep Illumina Nextera Flex kits. Strelka2’s strong results suggest that Illumina took special care to consider performance on PCR samples, and that other callers could learn to do the same. DRAGEN’s performance on these samples now matches Strelka2’s, leading the pack for indels on PCR samples (the error mode that dominates in these samples).

Improved Robustness Against Low-Quality Reads

Readshift’s main focus was to understand how calling performance degrades as the quality of a sequence run decreases. DRAGEN V2+ is not only more accurate, it is also able to resist the effect of lower quality reads up to the most extreme shift of +2.0 std.

* Discussion with Chris Saunders of Illumina indicates that a specific heuristic for SNP calling in Strelka2 may be interacting with Readshift. We are working to make a version of Readshift which will not trigger this heuristic for a more accurate Strelka2 evaluation.

Faster Runtime on NovaSeq Samples

Readshift identified that low-quality NovaSeq data can lead to dramatically longer runtimes (or program crashes). Brad Chapman has completed an excellent investigation into the use of read trimming in somatic calling that may help with this issue and more broadly.

Edico Genome has made improvements to the runtime of DRAGEN on NovaSeq samples across the board, demonstrating the ability to quickly improve for new data types.

However, the relative slowdown in low-quality NovaSeq samples remains. DNAnexus and Edico are continuing discussions on how to improve this issue.

Future Directions

Edico Genome recently released its new DRAGEN Virtual Long Read Detection (VLRD) pipeline (Coming Soon to DNAnexus), designed to achieve greater accuracy in segmental duplications than standard variant callers. Because DRAGEN computes so quickly, this pipeline can leverage computationally intensive assembly-based techniques and jointly call all regions that are similar. We hope to take a deeper look at this method (and these difficult, but important, regions of the genome) in a future blog. Based on the responsiveness of Edico in these collaborations, we are quite convinced there will be a:

How to Train your DRAGEN 2

Edico Genome’s DRAGEN is available now as an easy-to-use app on DNAnexus. To pilot using DRAGEN V2+ on DNAnexus in your workflow, email edico@dnanexus.com.