ABRF: A Quick Meeting Recap

Here at DNAnexus, we’re lucky to have a terrific team supporting our goals. In this blog post, we wanted to share highlights from the recent ABRF meeting from the perspective of our marketing manager, Cristin Smith. Here’s her recap.

Just when we thought the Marco Island resort couldn’t be beaten for location, here comes the annual Association for Biomolecular Resource Facilities (ABRF) conference, held at the lakeside Disney Contemporary Resort right in the heart of Disney World, complete with a view of Space Mountain. I’m pretty sure the team back home in Mountain View was a little concerned that we weren’t going to come back.

The meeting’s opening keynote came from Trisha Davis, who runs the Yeast Resource Center at the University of Washington. Her work has focused on using yeast as a proving ground for various technologies, and she observed that as her center has evolved, so too has her team’s ability to really drill down into targeted interrogations of the organism. During her talk, entitled “Technology Development in a Multidisciplinary Center,” she noted how important it is to integrate multiple complex analyses in an attempt to relate genotype to phenotype.

On the final day of the meeting, “Omics Technologies to Transform Research, Health & Daily Life” also resonated with me. This was Harvard professor George Church’s vision of a future where genome sequence information is widely used and readily available. He spoke about some current logistical limitations, noting for example that a $100 blood draw is cost-prohibitive and that the field will have to move toward buccal swabs and other technologies that may cost only $1 to process in order for ’omic testing to become affordable. Citing some 37 next-gen sequencing technologies as the driver for the rapid drop in sequencing costs, he said that his own estimate of the current genome price (from sample to interpretation) is $4,000. In order for genome sequencing to become medically useful, Church noted a few factors that will have to be addressed: a focus on completeness and standards to give the FDA confidence in these technologies; the need for significantly more genetic counselors than we have right now; and better interpretation software that makes genome analysis truly straightforward.

Overall, we were excited to see how eager the core lab community is to adopt technology improvements that deliver a higher quantity and quality of sequence data in support of their customers’ research. That enthusiasm made the exhibit hall a great setting in which to unveil our newly redesigned booth. It’s hard to find a more tech-loving crowd than the people who run core facilities, and we were glad to meet so many of them last week.

SOT: Still Early Days for Next-Gen Sequencing in Molecular Toxicology

The Society of Toxicology’s 51st annual meeting was held this week right in our backyard. Since I am a longtime member, I headed up to the Moscone Convention Center in San Francisco to check it out. The Annual Meeting and ToxExpo were packed, with almost 7,500 attendees and more than 350 exhibitors.

SOT isn’t like the sequencing-focused meetings I’ve been attending since I joined DNAnexus, but it’s actually home turf for my own research background in toxicogenomics. This year’s meeting sponsors included a number of pharmas and biotechs, from Novartis and Bristol-Myers Squibb to Amgen and Syngenta. Scientific themes at the conference ranged from environmental health to clinical toxicology to regulatory science and toxicogenomics. Next-gen sequencing is still in its infancy in the world of molecular toxicology, which remains dominated by microarray expression experiments. There were very few posters showing applications of NGS data in toxicogenomics (the few that did tended to center on microRNAs), but many of the people I spoke with have recently started running sequencing studies with an eye toward eventually retiring microarray-based experiments.

I found Lee Hood’s opening presentation particularly interesting because he focused on the need to combine data from various technology platforms and institutions all over the world. He talked, of course, about his P4 vision: the idea that medicine going forward will have to be predictive, personalized, preventive, and participatory. He also offered some great insights about fostering a cross-disciplinary culture, touching on genome sequencing of families, the human proteome, and mining genomic data together with phenotypic and clinical data.

Lee Hood. Photo Copyright Chuck Fazio

Another exciting talk that was well received came from Joe DeRisi at the University of California, San Francisco. He presented work analyzing hundreds of honey bee samples with microarrays combined with DNA and RNA sequencing. Using an internally developed de novo assembler called PRICE (short for Paired-Read Iterative Contig Extension; freely available on his website), his team identified a number of different organisms in the honey bee samples, including various viruses, phorid flies, and parasites. It’s not yet clear what is causing the decline in honey bee populations; multiple factors appear to be contributing. It’s great to see that DeRisi and his team will continue working in this area.
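
PRICE’s name hints at the general approach: contigs are grown iteratively by recruiting read pairs whose mates anchor on the current contig. As a very rough illustration of that idea only (a toy Python sketch with made-up data and helper names, not DeRisi’s actual algorithm), the extension loop might look something like this:

```python
# Toy illustration of the paired-read iterative contig extension idea behind
# PRICE's name; this is NOT DeRisi's implementation. Read pairs are modeled as
# (anchor_mate, extending_mate) strings: when the anchor mate is found on the
# growing contig, its partner is merged onto the contig's end by simple overlap.

def overlap_extend(contig, read, min_overlap=5):
    """Append read to contig if a prefix of the read overlaps the contig suffix."""
    for k in range(min(len(contig), len(read)), min_overlap - 1, -1):
        if contig.endswith(read[:k]):
            return contig + read[k:]
    return contig

def iterative_extension(seed_contig, read_pairs, max_rounds=10):
    """Grow a seed contig round by round until no read pair extends it further."""
    contig = seed_contig
    for _ in range(max_rounds):
        length_before = len(contig)
        for anchor, mate in read_pairs:
            if anchor in contig:          # partner read anchors on the contig
                contig = overlap_extend(contig, mate)
        if len(contig) == length_before:  # no growth this round, so stop
            break
    return contig

if __name__ == "__main__":
    pairs = [("ACGTACGTAC", "CGTACGTACTTTT"),
             ("GTACTTTT", "TACTTTTGGGGAAAA")]
    print(iterative_extension("GGGACGTACGTAC", pairs))
    # -> GGGACGTACGTACTTTTGGGGAAAA
```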

Last but not least, Scott Auerbach from the National Toxicology Program announced that DrugMatrix, the previously commercial toxicogenomics database, has now officially been released to the public for free (the move was first announced earlier this year). With this release, DrugMatrix becomes the largest freely available toxicogenomic reference database and informatics system. The data are based on rat organ toxicogenomic profiles for 638 compounds, and DrugMatrix allows an investigator to build a comprehensive picture of a compound’s potential for toxicity with greater efficiency than traditional methods. All of the molecular data stem from microarray experiments, but Auerbach and his team are now investigating what it will take to move from microarrays to RNA-seq experiments and how to integrate the different types of data. They are currently running a pilot on a subset of compounds using the same RNA that was used for the microarray experiments. The challenge, as he sees it, lies in interpreting and validating the newly generated RNA-seq data: what qualifies one platform as superior to the other? Since they are interested in the biology and in generating drug classifiers, one way to look at it is to assess which platform produces better classifiers at given sensitivity and specificity thresholds. It will be interesting to see whether the RNA-seq-based classifiers turn out to be comparable or superior to the microarray classifiers.
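
In practice, that comparison comes down to computing sensitivity and specificity for classifiers built on each platform’s data and seeing which one clears the chosen thresholds. Here is a minimal sketch of that calculation, using made-up labels and classifier calls rather than anything from DrugMatrix or the NTP pilot:

```python
# Minimal sketch of comparing two platforms (e.g., microarray vs. RNA-seq)
# by the sensitivity and specificity of classifiers built on each.
# Hypothetical data; not the NTP/DrugMatrix analysis itself.

def sensitivity_specificity(y_true, y_pred):
    """y_true/y_pred: 1 = toxic class, 0 = non-toxic class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec

# Held-out compound labels and each platform's classifier calls (made up).
truth           = [1, 1, 1, 0, 0, 0, 1, 0]
microarray_call = [1, 0, 1, 0, 0, 1, 1, 0]
rnaseq_call     = [1, 1, 1, 0, 0, 0, 1, 1]

for name, calls in [("microarray", microarray_call), ("RNA-seq", rnaseq_call)]:
    sens, spec = sensitivity_specificity(truth, calls)
    print(f"{name}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```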

AGBT in Review: Highlights and High Hopes for Data

Last week’s Advances in Genome Biology and Technology (AGBT) meeting was every bit the fast-paced roller coaster ride we were anticipating. As expected, there were no major leaps announced by the established vendors, although Illumina, Life Tech’s Ion Torrent, and Pacific Biosciences all had a big presence at the conference.

View from my hotel room: I got lucky with an oceanfront room

The biggest splash by far came from Oxford Nanopore Technologies, which emerged from stealth mode with a talk from Chief Technology Officer Clive Brown. The company’s technology sequences DNA by detecting changes in electrical current as the strand moves through a nanopore. Brown said the technology had been used successfully to sequence the phi X genome (a single 10 kb read captured both the sense and antisense strands) and the lambda genome (48 kb, also covered in a single pass). He reported a raw read error rate of 4 percent, caused mostly by the DNA strand oscillating in the nanopore instead of moving smoothly through it. Other significant features: the nanopore can read RNA directly, detect methylation status, and be used directly on a sample (such as blood) with no prep required.

What I thought was most interesting, though, was that at a meeting known for being wall-to-wall sequencing technology, this year’s event really focused more on two arenas: clinical genomics and data analysis. The conference kicked off with a session on clinical translation of genomics, with speakers including Lynn Jorde from the University of Utah and Heidi Rehm from Harvard. Both talked about the key challenges in data analysis and interpretation, with Rehm in particular stressing the need for a broadly accessible data platform containing clinical-grade information that could be ranked by confidence level and would pull together data from a variety of disparate sources. Notably, the clinical talks were generally limited by small sample sizes, and some wound up with results that were inconclusive in recommending a particular course of treatment. That’s to be expected in the early stages of moving sequence data into a clinical environment, of course, but it also underscores the opportunities here once low-cost sequencing becomes widely available.
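
One way to picture the kind of platform Rehm described is as a shared store of structured assertions, each carrying its contributing sources and a confidence tier that users can rank on. The sketch below is purely hypothetical: the field names and confidence tiers are mine, not an existing schema or product.

```python
# Hypothetical sketch of a clinical-grade variant assertion record of the kind
# Rehm described: aggregated from disparate sources and rankable by confidence.
# Field names and confidence tiers are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import List

CONFIDENCE_RANK = {"expert-reviewed": 3, "multiple-submitters": 2, "single-submitter": 1}

@dataclass
class VariantAssertion:
    variant: str                  # e.g., an HGVS-style description
    gene: str
    classification: str           # e.g., "pathogenic", "benign", "uncertain"
    confidence: str               # one of the CONFIDENCE_RANK keys
    sources: List[str] = field(default_factory=list)   # contributing labs/databases

def rank_assertions(assertions):
    """Order assertions so the best-supported ones come first."""
    return sorted(assertions,
                  key=lambda a: (CONFIDENCE_RANK.get(a.confidence, 0), len(a.sources)),
                  reverse=True)

if __name__ == "__main__":
    pool = [
        VariantAssertion("variant_1", "BRCA1", "pathogenic",
                         "expert-reviewed", ["lab_A", "lab_B"]),
        VariantAssertion("variant_2", "MSH2", "uncertain",
                         "single-submitter", ["lab_C"]),
    ]
    for a in rank_assertions(pool):
        print(a.gene, a.classification, a.confidence)
```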

The trend was clear: data, data, data. And the only way to make the most of all that data will be to pave the way to an environment where information can be accessed and shared easily, with as many tools as possible to interrogate, analyze, and validate it.