Designing Bioinformatics Pipelines for Fast Iteration

When a genetic test is ordered, few people give much thought to the bioinformatics work required to make that test possible. The bioinformatics team at Myriad Genetics, however, understands firsthand just how much work it takes. Myriad Genetics provides diagnostic tests that help physicians understand risk profiles, diagnose medical conditions, and inform treatment decisions. To support its comprehensive test menu and its commitment to timely, accurate results, the bioinformatics team at Myriad focuses on optimizing its bioinformatics pipelines. How? By designing pipelines around modularity and computational reuse so the team can make improvements and iterate more quickly.

Jeffrey Tratner, Director of Software Engineering, Bioinformatics at Myriad, spoke at DNAnexus Connect about how fast iteration works on the DNAnexus Platform. You can learn more by watching his talk or reading the summary below.


Typical pipeline development involves setting up infrastructure, building a computational process, and analyzing the results. When adjustments are made, this cycle repeats as many times as necessary until the pipeline has been properly validated. With complex pipelines, this can consume substantial resources and time. Myriad wanted a more efficient way to iterate on its pipelines so it could optimize them faster. Fast R&D, as Myriad defines it, means an environment in which you can make adjustments easily, find answers quickly, and don't have to second-guess which areas of the pipeline need to change when making an adjustment.

Myriad reduced pipeline R&D from 2 weeks to 2 hours by leveraging tools that enable them to re-use computations.

The team at Myriad first demonstrated this concept when they performed a retrospective analysis with a new background normalization step, the tenth step of a 15-step workflow, on over 100,000 NIPT (non-invasive prenatal test) samples. Simply rerunning the entire modified workflow would have taken two weeks. Instead, Myriad reduced this time to two hours by rethinking the pipeline and leveraging tools that enable computational reuse.

Now codified, the approach in use at Myriad enables their team to make changes and iterate quickly, all with a focus on accuracy, reproducibility, and moving validated pipelines into production. 

So how can you borrow from their approach and design your bioinformatics pipelines for faster iteration?

Make computational modules smaller

Although it’s tempting to use monorepositories when coding because they promote sharing, convenience, and low overhead, they don’t by themselves promote modularity within pipelines. And modularity is what enables you to scale quickly, reuse steps, and identify and debug problems. Myriad keeps all the source code for its workflows in monorepositories, but developed smart ways to break that code into smaller modules and to build only the modules that have been modified.
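
The post doesn't spell out how Myriad detects which modules changed; one simple approach, sketched below in Python, is to hash each module directory and rebuild only the modules whose hash differs from the last recorded build. The directory layout and manifest file here are hypothetical.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical manifest recording each module's content hash from the last build.
MANIFEST = Path(".module_hashes.json")

def module_hash(module_dir: Path) -> str:
    """Hash every file in a module directory in a stable order."""
    digest = hashlib.sha256()
    for path in sorted(module_dir.rglob("*")):
        if path.is_file():
            digest.update(path.relative_to(module_dir).as_posix().encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()

def modules_to_rebuild(repo_root: Path) -> list[Path]:
    """Return only the modules whose contents changed since the last recorded build."""
    previous = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    module_dirs = [d for d in (repo_root / "modules").iterdir() if d.is_dir()]
    current = {d.name: module_hash(d) for d in module_dirs}
    changed = [d for d in module_dirs if previous.get(d.name) != current[d.name]]
    MANIFEST.write_text(json.dumps(current, indent=2))
    return changed

if __name__ == "__main__":
    for module in modules_to_rebuild(Path(".")):
        print(f"rebuild needed: {module.name}")
```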

Take advantage of tools that enable you to reuse computations

If you run an app with the same input files and parameters, your results should be equivalent. So if you are changing a step downstream, why rerun all of the steps that come before it? The DNAnexus Platform, for example, includes a Smart Reuse feature, which lets organizations optionally reuse the outputs of jobs that share the same executable and input IDs, even across projects. By reusing computational results, developers can dramatically speed the development of new workflows and reduce the resources spent on testing at scale. To learn more about Smart Reuse, visit the DNAnexus documentation.
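
As a conceptual illustration only (not the DNAnexus API), the Python sketch below memoizes a step's outputs on a key built from the executable ID and its inputs; this is the same idea Smart Reuse applies at the platform level.

```python
import hashlib
import json

# Hypothetical cache of prior job results, keyed by executable ID plus inputs.
# This only illustrates the idea; on DNAnexus, Smart Reuse is handled by the
# platform itself and requires no code like this.
_result_cache: dict[str, dict] = {}

def cache_key(executable_id: str, inputs: dict) -> str:
    """Build a deterministic key from an executable and its inputs."""
    canonical = json.dumps({"exe": executable_id, "inputs": inputs}, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def run_step(executable_id: str, inputs: dict, run_fn) -> dict:
    """Reuse cached outputs when the same executable already ran on the same inputs."""
    key = cache_key(executable_id, inputs)
    if key in _result_cache:
        return _result_cache[key]   # changing a downstream step? upstream results are simply reused
    outputs = run_fn(inputs)        # otherwise run the step for real
    _result_cache[key] = outputs
    return outputs
```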


Use workflow tools to describe dependencies and manage the build process

Workflow tools, such as WDL (Workflow Description Language), make pipelines easier to express and build. With WDL, you can describe module dependencies and track version changes to the workflow. Docker also integrates naturally with WDL: if you use a container registry, you can load a different version of a module simply by editing the Docker image reference on a single line of the WDL. Myriad writes its bioinformatics pipelines in WDL and statically compiles them with dxWDL into DNAnexus workflows, streamlining the build process. Learn more about running Docker containers within DNAnexus apps or about dxWDL in the DNAnexus documentation.
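
Myriad's actual build tooling isn't shown in the talk summary; as a rough illustration, here is a minimal Python sketch that statically compiles a WDL file into a DNAnexus workflow by shelling out to the dxWDL compiler. The jar path, workflow path, and project ID are placeholders.

```python
import subprocess

DXWDL_JAR = "dxWDL.jar"                   # path to the dxWDL compiler jar (version will vary)
WORKFLOW = "workflows/nipt_pipeline.wdl"  # hypothetical WDL file
PROJECT = "project-xxxx"                  # destination DNAnexus project ID

def compile_workflow(wdl_path: str, project: str, folder: str = "/builds") -> None:
    """Statically compile a WDL workflow into a DNAnexus workflow with dxWDL."""
    subprocess.run(
        ["java", "-jar", DXWDL_JAR, "compile", wdl_path,
         "-project", project, "-folder", folder],
        check=True,
    )

if __name__ == "__main__":
    compile_workflow(WORKFLOW, PROJECT)
```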

Providing Bioinformatics Solutions to Address Challenges with Structural Variants

Contributors:  Arkarachai Fungtammasan, Jason Chin, Gigon Bae, Fernanda Foertter, Fritz Sedlazeck, Claudia Fonseca

“Houston, we’ve had a hackathon.”  

And this hackathon has yielded four creative bioinformatics solutions to address the complexities of structural variants.

NVIDIA and DNAnexus jointly sponsored the NCBI Structural Variant Hackathon at Baylor College of Medicine on October 11-13. The event was attended by 45 participants from Baylor College of Medicine, UT Southwestern, Rice University, Stanford, and the Broad Institute. Some guests even traveled all the way from Qatar.

What is a Structural Variant?

A structural variant is any segment of DNA greater than 50 base pairs that has been rearranged in some fashion, whether inserted, deleted, duplicated, inverted, or translocated [1]. Structural variants contribute to many diseases, including cancer. Yet compared with single nucleotide variants, our understanding of structural variants is less advanced because they are difficult to identify, particularly in short-read sequencing data.

Leading the Charge.

Ben Busby, Scientific Lead at NCBI, and Fritz Sedlazeck, Assistant Professor at Baylor College of Medicine, led the hackathon to encourage inter-institutional collaboration in tackling research questions related to structural variants of the genome.

Attendees listen intently at the recent NCBI Structural Variant Hackathon.

DNAnexus provided cloud computing credits during the hackathon, and both DNAnexus and NVIDIA sent scientists to support attendees with cloud computing, GPU-accelerated computing, and bioinformatics pipeline construction. Participants had the opportunity to learn how to build workflows using the DNAnexus Platform's graphical user interface or from the command line using the Workflow Description Language (WDL). They could also build reproducible prototypes using Jupyter notebooks, a collaborative framework for working in the cloud environment. In addition, they learned how to use graphics processing units (GPUs) to transform the efficiency of bioinformatics workflows. GPUs were originally designed to support high-quality gaming experiences, but their power is now being harnessed for computationally intensive workloads such as physics simulation and deep learning.

These hackathons are important events that bring together people from different fields and different stages of their careers for three intense days to network, collaborate and tackle important bioinformatics challenges.

FRITZ SEDLAZECK, PHD, ASSISTANT PROFESSOR, HUMAN GENOME SEQUENCING CENTER

The event also included an inspirational talk from Richard Gibbs, Director, Baylor College of Medicine Human Genome Sequencing Center, on how the hacking mindset is actively transforming our understanding of genomics. From the mapping of the first human genome to the current era of precision medicine, many great scientific ideas have originated from hacking.

Richard Gibbs presenting to the hackathon attendees. He presented in the same room in which many of the meetings for the human reference genome construction took place.
Hackathon groups visited the Human Genome Sequencing Center. There were many DNA sequencers from a variety of companies such as Illumina, Pacific Biosciences, and Thermo Fisher Scientific.

The 45 participants split into groups, and each group set to work brainstorming ideas to pursue over the next three days. Ideas were pitched to the larger group and refined based on feedback.

The next three days were devoted to implementing each of the ideas, and this was when the room came alive. With the DNAnexus and NVIDIA teams helping groups get started, there was plenty of cross-talk and collaboration among the attendees, many of whom merged ideas and borrowed from one another’s prototypes. According to attendee Claudia M.B. Carvalho Fonseca, PhD, “It was fascinating to see the synergy between people from different disciplines — computational biology, bioinformatics, molecular biology, etc. — to work toward common goals.” She added: “The combination of good organization, time constraints, and diverse backgrounds boosted creativity and helped each group develop solutions.”

The hackathon yielded a number of innovations, including the following.

Fast and efficient QC for multi-sample VCF.

This Python package can perform a rapid evaluation of a 2,500-sample VCF in one and a half minutes. Find the package here.
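
The hackathon package itself is linked above; purely as an illustration of the kind of per-sample check such a tool performs, here is a minimal pure-Python sketch (not the hackathon code) that computes per-sample genotype call rates from a multi-sample VCF.

```python
import gzip
from collections import Counter

def per_sample_call_rate(vcf_path: str) -> dict[str, float]:
    """Fraction of non-missing genotype calls per sample in a multi-sample VCF."""
    opener = gzip.open if vcf_path.endswith(".gz") else open
    samples: list[str] = []
    called: Counter = Counter()
    total_sites = 0
    with opener(vcf_path, "rt") as vcf:
        for line in vcf:
            if line.startswith("#CHROM"):
                samples = line.rstrip("\n").split("\t")[9:]  # sample columns start at field 10
            elif not line.startswith("#"):
                total_sites += 1
                fields = line.rstrip("\n").split("\t")
                for sample, entry in zip(samples, fields[9:]):
                    gt = entry.split(":")[0]                 # GT is the first FORMAT field
                    if "." not in gt:
                        called[sample] += 1
    return {s: called[s] / total_sites for s in samples} if total_sites else {}

# Example: print the five samples with the lowest call rates.
# rates = per_sample_call_rate("cohort.vcf.gz")
# for sample, rate in sorted(rates.items(), key=lambda kv: kv[1])[:5]:
#     print(sample, f"{rate:.3f}")
```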

Bioinformatics Presentations

The teams then presented their final projects to the broader community. Here are some highlights.

Genome mis-assembly detection using structural variant calling.

This quality control tool for metagenomic assemblies uses dxWDL, a compiler that turns WDL workflows into DNAnexus workflows, together with Docker to build workflows that run portably on the DNAnexus Platform.

Fast structural variant graph analysis on GPUs.


This project, called super-minityper, provides a set of cloud-based workflows for constructing structural variant graphs and mapping reads to them. super-minityper is implemented as DNAnexus workflows/applets using dxWDL. For the minimap2 + seqwish pipeline, it also provides a WDL file in which minimap2 is replaced by cudamapper from NVIDIA’s Clara Genomics Analysis SDK for faster, GPU-accelerated analysis. It also provides a public Docker image (ncbicodeathons/superminityper:dx-wdl-builder-1.0) that makes the DNAnexus dxWDL compiler easy to use.

Note: The DNAnexus Platform does not currently support GPU-enabled virtual machines for workflows launched from the web UI, but this support is planned for a future release.
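
As a rough sketch of the graph-construction step that super-minityper wraps (not the project's own code), the following Python snippet runs an all-vs-all minimap2 alignment and feeds the resulting PAF to seqwish; the command-line flags are the tools' commonly used options, and the file paths are placeholders.

```python
import subprocess

def build_variant_graph(seqs_fasta: str, out_gfa: str, threads: int = 8) -> None:
    """All-vs-all align the input sequences with minimap2, then induce a graph with seqwish."""
    paf = out_gfa + ".paf"
    with open(paf, "w") as paf_out:
        # -c: output CIGARs in PAF, -X: skip self hits, -x asm20: preset for diverged sequences
        subprocess.run(
            ["minimap2", "-c", "-X", "-x", "asm20", "-t", str(threads), seqs_fasta, seqs_fasta],
            stdout=paf_out, check=True,
        )
    # seqwish induces a variation graph (GFA) from the sequences and their alignments
    subprocess.run(
        ["seqwish", "-s", seqs_fasta, "-p", paf, "-g", out_gfa, "-t", str(threads)],
        check=True,
    )

# Swapping minimap2 for a GPU mapper (as the cudamapper variant does) only changes the
# first command; the seqwish step consumes the same PAF alignments.
```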


DeNovoSV.

This pipeline identifies and validates de novo structural variants in genomic datasets from trios.

SWIft Genomes in a graph.

This automated pipeline builds graphs quickly using a k-mer approach. Building graphs for whole genomes or large genomic regions is generally computationally expensive; with a multi-scale approach, however, this pipeline employs a simple algorithm and tool to build a genome graph for the human major histocompatibility complex (MHC) region within three minutes.
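
The SWIG pipeline's own algorithm isn't reproduced here; as a toy illustration of the k-mer idea, the Python sketch below builds a de Bruijn-style graph in which nodes are (k-1)-mers and edges come from observed k-mers.

```python
from collections import defaultdict

def kmer_graph(sequences: list[str], k: int = 31) -> dict[str, set[str]]:
    """Toy de Bruijn-style graph: nodes are (k-1)-mers, edges come from observed k-mers."""
    edges: dict[str, set[str]] = defaultdict(set)
    for seq in sequences:
        for i in range(len(seq) - k + 1):
            kmer = seq[i:i + k]
            edges[kmer[:-1]].add(kmer[1:])  # prefix node -> suffix node
    return edges

# Two haplotypes of a short region share most nodes and diverge at the variant position.
graph = kmer_graph(["ACGTACGTGACCT", "ACGTACCTGACCT"], k=5)
print(len(graph), "nodes")
```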

The spirit of innovation continued after the hackathon, when a group of attendees visited the Space Center in Houston. There, they saw the Saturn V rocket, the same model of rocket that carried the Apollo 11 astronauts to the moon.

Hackathon attendees in front of the Saturn V rocket.

The next structural variant hackathon at Baylor College of Medicine will take place on April 19th-21st. For more information or to register, visit:  https://www.hgsc.bcm.edu/events/hackathon

Works Cited
[1] https://www.jax.org/news-and-insights/2018/april/calling-all-structural-variants

Addressing the Complex Storage and Archival Needs of DNA Sequencing Data


Computational biologist and large-scale DNA sequencing expert Michael Schatz estimates that by the year 2025 we will amass between 100 million and 2 billion sequenced human genomes.1 That’s a massive amount of data to use for the purpose of improving human health, but there’s a catch: we need creative solutions for storing these data. Those solutions must be economical and must allow rapid retrieval of data when needed for analyses.

Typically, cloud providers such as Amazon Web Services (AWS) and Microsoft Azure use a tiered pricing structure: data that are needed frequently command a higher storage fee than data that are accessed infrequently and placed in cloud archives, or cold storage.

At DNAnexus, we are committed to supporting the sequencing community with creative solutions, which is why we are proud to announce the upcoming rollout of our new archival service.

The DNAnexus archival service provides a cost-effective and secure way to store files that do not need to be accessed frequently. More importantly, even when files are moved to cold storage, the DNAnexus Platform continuously maintains their data provenance and keeps their metadata, such as tags and property key-value pairs, searchable. With the DNAnexus archival service, you can locate files, whether they are archived or live, simply by querying their metadata.

With this feature, you can archive individual files, folders, or entire projects. You can also easily unarchive one or more files, folders, or projects when you need to make the data available for further analyses.

Currently, the DNAnexus Archival Service is available via the application programming interface (API) in AWS regions only, and it requires a license.
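
As a rough sketch of how this might look from Python with dxpy, the snippet below archives a folder, searches for archived or live files by tag, and unarchives a file. The API route names, input fields, and search parameters are assumptions drawn from the archival documentation, and the project ID, folder, tag, and file ID are placeholders.

```python
import dxpy

PROJECT_ID = "project-xxxx"   # placeholder project ID

# Archive everything under a folder (the /project-xxxx/archive route and its input
# fields are assumptions based on the archival documentation; a feature license is required).
dxpy.DXHTTPRequest(f"/{PROJECT_ID}/archive", {"folder": "/completed_runs"})

# Archived files stay searchable by their metadata, so they can be located by tag
# or property and unarchived later when needed.
for item in dxpy.find_data_objects(
        project=PROJECT_ID,
        tags=["nipt-batch-2019-10"],   # hypothetical tag
        describe=True):
    print(item["describe"]["name"], item["describe"].get("archivalState", "unknown"))

# Restore a specific file when it is needed again for analysis.
dxpy.DXHTTPRequest(f"/{PROJECT_ID}/unarchive", {"files": ["file-yyyy"]})
```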

  • To learn more about archiving and unarchiving, click here.
  • To request a license to use this feature, contact sales@dnanexus.com.

1. Fleishman G. The Data Storage Demands of Genome Sequencing Will Be Enormous. MIT Technology Review. https://www.technologyreview.com/s/542806/how-do-genome-sequencing-centers-store-such-huge-amounts-of-data/. Published October 26, 2015. Accessed October 3, 2019.