Seeing The Trees In The Forest

One of the biggest challenges in identifying genomic variation is finding the variants that have a real, measurable impact and that help explain, for example, a disease or drug response under investigation. Weeding through the more than 5 million variants found in a human genome is a huge effort, requiring significant computational infrastructure and staff time to manually validate and correlate the biological findings. To expedite this process and free up more time for the relevant data, the results must be narrowed down to a manageable size – ideally fewer than a few hundred variants.

We have just released a number of new features that will help solve this challenge by providing:

  1. Smart variation results filtering
  2. Linkouts to public and commercial data sources with gene to disease information

With this new functionality, you can – with a few simple queries – home in on the most relevant variants, whether they are associated with a specific gene, a coding region, or a specific chromosome, or carry annotations that fulfill a specific set of characteristics. The result is quicker insight into affected processes, which translates directly into faster hypothesis generation and decision making.

More Specifically…

To help you rapidly drill down on biologically interesting and relevant results, we have created a flexible query tool for filtering your variation analysis results within the DNAnexus Genome Browser. With just a few clicks, you can apply any number of filters to a results table, yielding a focused set of variant calls that is easy to navigate in the browser and investigate further.

In this release, we have added 13 distinct filters, including chromosome, variant type, gene/transcript name, zygosity, and location relative to gene/transcript. These filters are currently available for the DNAnexus Nucleotide-Level Variation (see screenshot below) and Population Allele Frequency analysis results. We are also working toward making them available for any data type, including RNA-seq and ChIP-seq data. All of the filtered results can be exported out of DNAnexus for further analysis in other tools, such as Excel or statistical packages.
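
Once a filtered table has been exported, you can also keep slicing it programmatically. The sketch below assumes a tab-delimited export with hypothetical column names (chromosome, variant_type, zygosity, gene); the actual export format and headers may differ.

    # Minimal sketch: filtering an exported variant table with pandas.
    # Column names (chromosome, variant_type, zygosity, gene) are
    # hypothetical and may differ from the actual DNAnexus export.
    import pandas as pd

    variants = pd.read_csv("exported_variants.tsv", sep="\t")

    # Keep heterozygous SNPs on chromosome 17 that fall within BRCA1.
    filtered = variants[
        (variants["chromosome"] == "chr17")
        & (variants["variant_type"] == "SNP")
        & (variants["zygosity"] == "heterozygous")
        & (variants["gene"] == "BRCA1")
    ]

    filtered.to_csv("brca1_het_snps.tsv", sep="\t", index=False)
    print(f"{len(filtered)} variants remain after filtering")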

Understanding And Validating Variant To Gene To Disease Results

To help you interpret a prioritized list of variants, as well as the genes and processes they impact, we have added the ability to link out to third-party data sources, both public and commercial, that contain relevant gene-to-disease knowledge. These linkouts let you study how identified DNA variations affect responses to diseases, bacteria, viruses, toxins, and chemicals, including drugs and other therapies.

It’s All About The Data

DNAnexus specializes in addressing the data storage, management, and analysis challenges inherent in next-generation sequencing. We believe that by leveraging the cloud and remaining data-source and platform agnostic, we can provide the best possible support for anyone using these data in their work. We also believe that your input on which data are accessible through DNAnexus is critical, and because our platform is flexible we can easily integrate with many of the data sources you would like to access or need for your research.

DNAnexus currently supports direct linkouts to 12 public and commercial data sources: AmiGO, BIOBASE, COSMIC, dbSNP, Entrez Gene, GeneCards®, IPA®, KEGG, NextBio, OMIM, PharmGKB, and PubMed. For commercial data sources, we can provide integrated access for users who hold licenses to those data.
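
To give a feel for what a gene linkout looks like, the snippet below builds search URLs for a few of the public resources from a gene symbol. The URL patterns shown are the resources' own public search endpoints and are not necessarily the exact linkouts generated inside DNAnexus.

    # Illustrative only: public search URLs for a gene symbol.
    # These patterns reflect the resources' own websites and are not
    # necessarily the exact linkouts DNAnexus generates.
    GENE_LINKOUTS = {
        "Entrez Gene": "https://www.ncbi.nlm.nih.gov/gene/?term={gene}",
        "GeneCards":   "https://www.genecards.org/cgi-bin/carddisp.pl?gene={gene}",
        "OMIM":        "https://omim.org/search?search={gene}",
        "PubMed":      "https://pubmed.ncbi.nlm.nih.gov/?term={gene}",
    }

    def linkouts_for(gene):
        """Return a dict of resource name -> search URL for the given gene symbol."""
        return {name: url.format(gene=gene) for name, url in GENE_LINKOUTS.items()}

    for name, url in linkouts_for("BRCA1").items():
        print(f"{name}: {url}")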

Please let us know if there are specific data that you would like to access via DNAnexus by emailing us at support@dnanexus.com.

Take Me To The Data

To access these data sources, we have added new Gene Info pages (see the BRCA1 Gene Info page as an example below), which provide a gene overview and a list of all the accessible data sources. Gene Info pages are meant to give you a preview of the gene, with linkouts to additional information.

Gene Info pages are accessible through hyperlinked gene names within the DNAnexus Genome Browser and analysis results tables, as shown here.

We now support 22 reference genomes; the latest additions include the Staphylococcus epidermidis ATCC 12228 genome and the macaque (M. mulatta) genome.

Tell Us What You Think

Much of the new functionality that makes its way into the DNAnexus platform is the result of requests by our many active users. We cannot emphasize enough how much we value user feedback; it is a critical component of our product development and feature prioritization process.

To simplify the process of providing feedback, we have added feedback links to both the filterable results tables and the Gene Info pages. You are also welcome to email us at support@dnanexus.com with any feature requests or questions you may have. We look forward to hearing from you and keeping you posted on the many new features we are working on and will be releasing in the coming months.

Streaming an entire sequencing center across the Internet

How much next-gen sequencing data do the top genome centers in the world produce? It’s a staggering amount compared to even one year ago: The Broad Institute now has over 50 HiSeq 2000s, and BGI has over 100. Each HiSeq 2000 can sequence two human genomes per week, which means these centers could sequence in excess of 5,000 and 10,000 human genomes per year, respectively.

What would it take to transmit all the sequence data over the Internet? It turns out, surprisingly little. Let’s do some math: Each HiSeq 2000 can sequence 200 Gigabases per run, but takes over a week to do so. Illumina quotes the throughput of the instrument at 25 Gigabases per day, or about 1 Gigabase per hour. With quality scores and some simple compression, each base takes less than 1 byte of storage. In other words, a HiSeq 2000 produces 1 Gigabyte of sequence data per hour, or 290 Kilobytes per second. To put this number in context, today people routinely stream movies over the Internet to their home at a higher bitrate! Yes, these instruments produce a lot of data compared to the previous generation technology, but it’s quite manageable over modern network connections.
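
The arithmetic is easy to check. Here is the same back-of-the-envelope calculation as a short script, using only the numbers quoted above.

    # Back-of-the-envelope check of the per-instrument data rate.
    GIGA = 1e9
    bases_per_day = 25 * GIGA      # Illumina's quoted HiSeq 2000 throughput
    bytes_per_base = 1             # base + quality score, lightly compressed
    seconds_per_day = 24 * 3600

    bytes_per_second = bases_per_day * bytes_per_base / seconds_per_day
    print(f"{bytes_per_second / 1e3:.0f} KB/s per instrument")        # ~289 KB/s, the ~290 KB/s quoted above
    print(f"{bases_per_day * bytes_per_base / 24 / 1e9:.1f} GB per hour")  # ~1.0 GB/hour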

Let me go even further: A sequencing center operates sequencing instruments at perhaps 80% efficiency, so 290 Kilobytes/second * 80% * 8 bits/byte (for network transmission) = 2 Megabits per second. That means 50 HiSeq 2000 instruments, or the entire sequencing capacity of the Broad Institute, could fit over a 100 Megabit connection. A Gigabit connection could support four times the sequencing output of the BGI. A much smaller sequencing core, for example one with one or two HiSeq 2000s, can be supported easily with a 10 Megabit connection.
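
Scaling the same numbers up to a whole center looks like this; the instrument counts and the 80% utilization figure are the assumptions from the paragraph above.

    # Scaling the per-instrument rate up to a sequencing center.
    kb_per_second = 289                 # per HiSeq 2000, from the calculation above
    utilization = 0.8                   # assume instruments run ~80% of the time
    mbit_per_instrument = kb_per_second * 1e3 * utilization * 8 / 1e6
    print(f"{mbit_per_instrument:.1f} Mbit/s per instrument")         # ~1.9 Mbit/s

    for center, instruments in [("small core", 2), ("Broad Institute", 50), ("BGI", 100)]:
        print(f"{center}: ~{instruments * mbit_per_instrument:.0f} Mbit/s sustained")
    # ~4, ~92, and ~185 Mbit/s respectively -- i.e. a 10 Mbit, a 100 Mbit,
    # and (comfortably) a Gigabit connection.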

For those of you who operate a sequencing center, it may seem almost ludicrous that this is possible, and there are certainly reasons why these calculations are ideal-case: it's difficult to get 100% of your connection's rated bandwidth, your network is often congested by other ongoing activities, individual TCP streams are difficult to scale to Gigabit speeds, and so on. And as you move further down the analysis pipeline to the SAM/BAM files from read mapping and variant calling, data transfer demands can easily go up 10-fold. But these calculations are nonetheless close to what's actually achievable today. Even if your actual network throughput is 50% of the ideal, most sequencing centers with a reasonable connection will have no trouble streaming all of their sequence data across the Internet.

Why would we want to do this? Once your data has been moved to an outside data center, it opens up tremendous opportunities: You can decide to store it there long-term and access it from anywhere in the world. You can give collaborators access to your data instantly. You can tap into the vast compute resources available in the cloud, for example the hundreds of thousands of CPUs available on Amazon. You're no longer bound by what your internal computing and networking infrastructure can support, and can grow or shrink your infrastructure as needed. There are so many advantages to moving your sequence data outside your walls that I'll leave that discussion for a future blog posting.

Want to test out the bandwidth yourself? It’s easy to do – just sign up for a free account. You’ll be able to upload three samples for free. If you want to upload directly from the sequencing instrument, we can also help you set that up in 10 minutes. Email info@dnanexus.com to find out more information on how to try streaming the data off your instrument to the cloud.

Navigating the Exome with DNAnexus

With a growing number of targeted exome capture solutions being integrated into next-generation sequencing workflows, targeted exome analysis has become the go-to, cost-effective approach for obtaining sequence coverage of the protein-coding regions of the genome. As a result, researchers are starting projects with larger populations and correspondingly larger, more complicated datasets. One reason targeted exome capture is gaining steam is that whole-genome sequencing is still cost prohibitive for most researchers.

To support this important methodology, we have added Exome Analysis to our repertoire of analysis tools. With the DNAnexus Exome Analysis method, we've simplified a critical step in the processing and analysis of these datasets: quickly determining whether regions of interest have been sequenced with sufficient coverage to allow for further analyses.


For each exon, DNAnexus reports the number and fraction of bases covered by sequence reads, along with the average coverage within the exon. Exons that overlap genes in a gene annotation track are labeled with the gene name, making it easy to search for exons from a gene of interest.
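
As a rough sketch of the kind of per-exon summary being reported, the snippet below computes the same three metrics (bases covered, fraction covered, and mean coverage) from a per-base depth array for one exon. It is an illustration of the metrics only, not the DNAnexus implementation, and the minimum-depth threshold is an assumption.

    # Illustrative per-exon coverage summary (not the DNAnexus implementation).
    # 'depths' holds the read depth at each base of one exon, e.g. as produced
    # by running 'samtools depth' over the exon's coordinates.
    def exon_coverage_summary(depths, min_depth=1):
        covered = sum(1 for d in depths if d >= min_depth)
        return {
            "bases_covered": covered,
            "fraction_covered": covered / len(depths) if depths else 0.0,
            "mean_coverage": sum(depths) / len(depths) if depths else 0.0,
        }

    example_exon = [0, 3, 5, 8, 8, 7, 4, 0, 0, 2]   # toy 10 bp exon
    print(exon_coverage_summary(example_exon))
    # {'bases_covered': 7, 'fraction_covered': 0.7, 'mean_coverage': 3.7}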