Publication Watch: In Early 2013, Nice Flow of New Papers from DNAnexus Users

It’s been a while since we checked in on publications using DNAnexus, so we headed over to PubMed for an update. With so many great new papers coming out — more than 10 in the past few months alone — we wanted to take the opportunity to look at a few of them and see how they’re making use of DNAnexus.


In the Journal of Medical Genetics, scientists from Hebrew University Medical Center and colleagues at other organizations published a paper entitled “Agenesis of corpus callosum and optic nerve hypoplasia due to mutations in SLC25A1 encoding the mitochondrial citrate transporter” (published online February 2013). Lead author Simon Edvardson et al. report on the first known patient with agenesis of corpus callosum caused by a mitochondrial citrate carrier deficiency. The team performed exome sequencing and used DNAnexus for read alignment and variant calling. Two pathogenic variants were found in the gene encoding the mitochondrial citrate transporter, and functional studies in yeast validated the findings, with the mutated proteins displaying the same biochemical effects.


In the January issue of Antimicrobial Agents and Chemotherapy, a journal from the American Society for Microbiology, a research team from Georgetown University Medical Center and the Institute of Microbiology in Beijing released a paper called “Azole Susceptibility and Transcriptome Profiling in Candida albicans Mitochondrial Electron Transport Chain Complex I Mutants.” In the study, the authors looked at how mitochondrial changes in yeast alter susceptibility to certain azole compounds commonly used as antifungal agents. As part of the effort, the team used RNA-seq to generate transcriptome profiles of two mutants known to have increased susceptibility to azoles. Data analysis was conducted through DNAnexus. The scientists found that both mutants showed downregulation of transporter genes encoding efflux proteins, a mechanism thought to link cellular energy supply to azole susceptibility.


In the journal Human Mutation, a paper entitled “A Deletion Mutation in TMEM38B Associated with Autosomal Recessive Osteogenesis Imperfecta” (published online in January) comes from a research group at Ben Gurion University and the Soroka Medical Center, both in Israel. The scientists studied patients with autosomal recessive osteogenesis imperfecta, or brittle bone disease, which could not be explained by any previously known mutation. The team used genome-wide linkage analysis and whole exome sequencing to identify a single mutation common to all three patients: a homozygous deletion mutation of an exon in TMEM38B. Sequence read alignment, variant calling, and annotation were done with DNAnexus tools.


Finally, a paper published early online in February in the journal Case Reports in Genetics called “Targeted Next-generation Re-sequencing of F5 gene Identifies Novel Multiple Variants Pattern in Severe Hereditary Factor V Deficiency” comes from a group that used DNAnexus for data quality, exome coverage, and exome-wide SNP/indel analysis. The authors — scientists from Pennsylvania State University and MS Hershey Medical Center — present a study of four people with severe factor V deficiency in which they used next-gen sequencing to study the factor V gene locus. They found five coding mutations and 75 noncoding variants, including three missense mutations previously associated with other factor V phenotypes.

On DNA Day, We’re Thinking About (What Else?) Data

Today is DNA Day! This year it’s an especially big deal as we’re honoring the 60th anniversary of Watson and Crick’s famous discovery of the double-helix structure of DNA as well as the 10th anniversary of the completion of the Human Genome Project.

Back when Watson and Crick were poring over Rosalind Franklin’s X-ray diffraction image of DNA, they never could have imagined the data that would ultimately be generated by scientists reading the sequence of those DNA molecules. Indeed, even 40 years later at the start of the HGP, the data requirements for processing genome sequence would have been staggering to consider.

Check out this handy guide from the National Human Genome Research Institute presenting statistics from the earliest HGP days to today. In 1990, GenBank contained about 49 megabases of sequence; today, that has soared to some 150 terabases. The computational power needed to tackle this amount of genomic data didn’t even exist when the HGP got underway. Consider what kind of computer you were using in 1990: for us, that brings back fond memories of the Apple IIe, mainframes, and the earliest days of the Internet (brought to us by Prodigy).

A couple of decades later, we have a far better appreciation for the elastic compute needs for genomic studies. Not only do scientists’ data needs spike and dip depending on where they are in a given experiment, but we all know that the amount of genome data being produced globally will continue to skyrocket. That’s why cloud computing has become such a popular option for sequence data analysis, storage, and management — it’s a simple way for researchers who don’t have massive in-house compute resources to go about their science without having to spend time thinking about IT.

So on DNA Day, we honor those pioneers who launched their unprecedented studies with a leap of faith: that the compute power they needed would somehow materialize in the nick of time. Fortunately, for all of us, that was a gamble that paid off!

At Bio-IT World, Genome Centers Dished on Big Data

At the Bio-IT World Conference & Expo last week in Boston, more than 2,500 attendees descended on the World Trade Center to hear about the latest in hardware, analysis, data storage, and much more. The DNAnexus team was out in force, and we were delighted to share updates about our new platform with the many attendees who stopped by our booth.

The conference had a number of excellent keynote talks this year, including Atul Butte from Stanford and Andrew Hopkins from the University of Dundee. We also really enjoyed seeing Steven Salzberg’s acceptance of the Benjamin Franklin Award for Open Access in the Life Sciences — a much deserved honor for one of the veterans of the bioinformatics field.

Perhaps most interesting was a panel discussion about big data featuring members of major genome centers. Panelists included Guy Coates from Sanger, Xing Xu from BGI, Eric Jones from the Broad Institute, and Alexander (Sasha) Zaranek from Harvard Medical School and a company called Clinical Future.

For those of us who remember when it was a big deal to have a terabyte of storage available, it was truly amazing to hear that most of the panelists have 15 petabytes or more of data stored and easily accessible. Still, even with resources like that, some of the panelists encourage their institute members to delete data when possible, such as the unaligned reads from a sequencing run.

Access control is a real problem for managing data at these large centers. Sanger’s Coates said that his institute’s move into the clinical field — complete with consent forms and all the other compliance needs — makes controlling access “a real nightmare” for his team. Jones at the Broad said that this issue basically means people in the field are living on borrowed time as it becomes increasingly important to find the right solution to this challenge. Zaranek noted that Clinical Future will use the Arvados tool to attach security permissions and provenance to files to address this issue.

The panelists also discussed cloud computing specifically, with BGI’s Xu saying that the cloud is his center’s main data repository. One remaining goal is more rapid and efficient global exchange of genomic data via higher bandwidth; his team has tested this with Aspera, successfully transferring 24 GB across countries in just 30 seconds, but the approach is not yet economical enough for routine use. Coates said that his group uses cloud options (including Amazon) for research projects but is still evaluating how to integrate the cloud into its production pipeline cost-effectively. At the Broad, Jones said, the need to move to the cloud is understood, but so far internal computing is still sufficient for institute members; he added that the cloud’s elasticity will ultimately drive adoption, allowing people to run very large jobs that would otherwise interfere with the rest of the institute’s compute resources. Zaranek said his group uses cloud computing from both Harvard and Amazon, that having both options is incredibly valuable, and that this setup will also let other organizations access their resources. Coates and Jones agreed that the real challenge in managing data arises when individual researchers start moving data around, because tracking that data and predicting resource needs becomes difficult.

These are all issues that we have given a great deal of thought to as we designed and built the new DNAnexus, now available for beta testing. We agree that security and compliance are important components of any compute solution — whether cloud-based or in-house — and that’s why we baked the highest standards right into our new tool. Having flexibility to configure the environment as needed, such as scaling up or down at a moment’s notice, is another key trait of the new platform and one that we believe will be quite useful for scientists in individual labs or at these major genome centers streaming data around the clock.