Comparison of BGISEQ 500 to Illumina NovaSeq Data

Andrew Carroll

Last Thursday, BGI uploaded three WGS data sets of NA12878/HG001. Included in this was a challenge to conduct a side-by-side analysis.

Albert Villela of Cambridge Epigenetix drew our attention to this dataset. Given our love of benchmarking and new technologies, we applied our evaluation frameworks to these data. The technology behind the BGISEQ 500 is based on that of Complete Genomics, which BGI acquired.

The instrument uses DNA nanoballs and a probe-based method of sequencing. We have seen data from it in PE50, PE100, and PE150 formats, where the number indicates the length of the paired-end reads (longer reads generally enable better analysis).

A PE50 and PE100 dataset for HG001 was submitted to Genome in a Bottle last year, and DNAnexus conducted an assessment of those data. That analysis indicated a reasonable gap relative to Illumina, particularly for indels. We also demonstrated that by re-training deep learning models on BGISEQ data (as Jason Chin did with Clairvoyante and Pi-Chuan Chang with DeepVariant), it is possible to bridge this gap using only software.

Because this new dataset was released a year later and uses longer reads, it provides a measure both of the progress BGI has made on the instrument and of the difference longer reads make. To summarize quickly: this most recent BGISEQ release demonstrates significant improvement relative to the prior data, though a (modest) gap relative to Illumina remains.

Performance Comparisons

We downloaded the three WGS sets directly from EBI. All of these data were generated and submitted by BGI – two PE150 runs from the BGISEQ 500 and one run from a NovaSeq 6000, presumably operated by BGI. We analyzed each WGS set in its entirety through several standard pipelines – DeepVariant, Sentieon, Strelka2, GATK4, and FreeBayes. Mapping was performed with Sentieon, which produces output identical to BWA-MEM and is faster.

Because the Illumina data here were submitted by BGI, we thought it fair to also include Illumina data generated by Illumina, so we used the 35X WGS NovaSeq data available from BaseSpace that we previously included in our Readshift blog post.

Figure 1 shows SNP accuracy (combining false positives and false negatives); lower bars in this chart indicate better performance. Some takeaways (a small sketch of the underlying accuracy metric follows this list):

  1. The gap between Illumina NovaSeq and BGISEQ is quite narrow in these data. For GATK, the difference between BGISEQ and the Illumina BaseSpace data is about the same as the difference on Illumina data between choosing GATK and choosing DeepVariant. By this logic, groups that are comfortable with GATK's accuracy should also be comfortable considering BGISEQ.
  2. The accuracy of the Illumina data available on BaseSpace is greater than that of the set uploaded by BGI. For SNPs, Illumina's BaseSpace set is more representative of what we see in the community; the BGI-submitted set appears to be a somewhat worse sequencing run. The yellow Illumina set is probably the fairer comparison.
  3. Note that the version of DeepVariant used here is NOT the one tuned with BGISEQ data (we did not run the BGISEQ-tuned model in this investigation). It would be interesting to see whether performance improves further with that model.
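For readers who want the metric in Figure 1 made concrete, below is a minimal sketch of how per-pipeline error numbers can be tallied from benchmarking counts of true positives, false positives, and false negatives (the kind of summary produced by comparing calls against the GIAB truth set, for example with hap.py). The class name and the example counts are hypothetical placeholders, not the actual values behind the chart.

```python
# Minimal sketch: combine false positives and false negatives into the single
# "total error" number plotted in Figure 1, plus precision/recall/F1.
# All counts below are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class BenchmarkCounts:
    tp: int  # variants called and present in the truth set
    fp: int  # variants called but absent from the truth set
    fn: int  # truth variants the caller missed

    @property
    def total_error(self) -> int:
        """Metric in Figure 1: lower is better."""
        return self.fp + self.fn

    @property
    def precision(self) -> float:
        return self.tp / (self.tp + self.fp)

    @property
    def recall(self) -> float:
        return self.tp / (self.tp + self.fn)

    @property
    def f1(self) -> float:
        p, r = self.precision, self.recall
        return 2 * p * r / (p + r)


# Hypothetical usage with placeholder counts:
snp = BenchmarkCounts(tp=3_000_000, fp=2_000, fn=3_500)
print(snp.total_error, round(snp.f1, 5))
```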

Figure 2 shows the indel accuracy. All of the observations made previously apply, with the addition of the following:

The error profile of the callers indicates that the Illumina dataset submitted by BGI is clearly a PCR-positive dataset (evidenced by the much stronger relative performance of Strelka2 and DeepVariant). The impact of PCR on these data is also slightly more pronounced than in other PCR-positive datasets we see.

Presumably, all available BGISEQ preparations are PCR-positive, whereas Illumina offers both PCR-free and PCR-positive library preparations. Given this, it seems fairer to compare against the yellow Illumina-operated NovaSeq run. This is also a good time to remind readers to be aware of the impact of PCR on sequencing quality and of whether their datasets were generated with it.

Breakdown of False Positives and False Negatives

When we do these benchmarks, the most frequent request is to break down the false positives and false negatives in the datasets. Figures 3 and 4 show this for SNPs and indels in one of the BGISEQ samples (which is representative of the other as well):

Figure 3. Breakdown of SNP false positives and false negatives (BGISEQ sample).

Figure 4. Breakdown of indel false positives and false negatives (BGISEQ sample).

Computational Performance

Finally, you may wonder whether any of the programs have issues running on BGISEQ data (or take longer to run). The answer is – not really. Computational performance seems similar to what we see with Illumina data:

Figure 5. BGISEQ core hours comparison.

Conclusion

If this newest data is broadly representative of BGISEQ performance, the BGISEQ looks like a technology worth considering. The price points we have heard second-hand suggest that buyers would be weighing a less widely adopted and (slightly) less accurate platform against better economics. Based on these benchmarks, the differences in accuracy are not so large that BGISEQ genomes would be considered fundamentally different.

It is important to note that these samples are PE150; the PE50 and PE100 formats may perform worse. It is also important to note that these datasets were released by BGI themselves and likely represent the highest-quality runs from the instrument.

Given that it is still early in the lifecycle of the instrument, it will be important to rigorously QC runs until the community has a good feel for the consistency of BGISEQ quality. If anyone else has runs of HG001/HG002/HG005 from the BGISEQ, we would love for you to reach out to us (acarroll@dnanexus.com) so that we can replicate this analysis with community-driven runs.

Announcing the Winners of Mosaic Microbiome Community Challenge: Strains #1

The application of next-generation sequencing in the study of microbial communities has fueled the rapid growth of interest in microbiome research. However, difficulties with the accuracy of computational analyses of these complex datasets have limited the translation of microbiome science into novel biotherapeutic products. In order to unleash the potential that metagenomics holds for human health, computational methods to identify unique microbial strains must be improved.

The Mosaic Community Challenge: Strains #1, sponsored by Janssen Research & Development, LLC, through the Janssen Human Microbiome Institute, aims to benchmark and improve the performance of computational tools in analyzing these data, in order to provide higher-quality, high-resolution profiling of microbiome samples. The challenge gave participants the opportunity to validate their bioinformatics tools in real time on a neutral, unbiased platform and to see how they performed against other industry tools.

Participants in the challenge worked with datasets composed of four different sample types: a metagenomics dataset generated from real mouse fecal samples (of known bacterial composition) and three simulated datasets of varying complexity. In addition to the challenge dataset, a distinct training dataset, which included the truth files, was provided so that participants could train and improve their methods. Participants could then conduct their analysis either by creating their own app on the Mosaic Platform or by downloading the dataset and running their method on their own systems. Over the four-month course of the challenge, participants could take advantage of a “Testing Ground” to get immediate feedback on their work with the training datasets before submitting their final challenge entries.

Challenge Winners & Their Methods

We would like to congratulate the winners as well as thank all who participated for helping to take microbiome science to the next level.

Profiling

CosmosID, a bioinformatics and NGS service laboratory, scored highest in the Profiling part of the challenge. The CosmosID analysis pipeline achieved the highest cumulative F1-score, a measure that combines precision and recall. According to Nur Hasan, Chief Science Officer at CosmosID, the strength of their approach lies in a manually curated database whose structure follows the phylogenetic hierarchy of all represented microorganisms, enabling reliable microbial identification at all taxonomic levels, down to the strain level.
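As a quick illustration of what an F1-score captures in this context, here is a generic sketch of taxon-level precision, recall, and F1 computed from the set of taxa a tool reports versus the known composition. This is illustrative only; it is not the exact scoring code used in the Strains #1 challenge, and the taxon names in the example are hypothetical.

```python
# Generic sketch of taxon-level precision/recall/F1 for a profiling result:
# compare the set of taxa a tool reports against the known (truth) composition
# at a given taxonomic rank. Illustrative only; not the challenge's scoring code.

def profiling_f1(reported: set, truth: set) -> float:
    tp = len(reported & truth)   # taxa correctly reported
    fp = len(reported - truth)   # taxa reported but not actually present
    fn = len(truth - reported)   # taxa present but missed
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


# Hypothetical example at the species level:
truth = {"Bacteroides fragilis", "Escherichia coli", "Lactobacillus reuteri"}
reported = {"Bacteroides fragilis", "Escherichia coli", "Clostridioides difficile"}
print(round(profiling_f1(reported, truth), 3))  # 0.667
```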

CosmosID’s submission scored the highest in the analysis of the Biological Sample (80%), roughly 64% higher than the score of the second-place submission (48.9%). Interestingly, however, submissions based on the popular MetaPhlAn tool performed better across the simulated datasets. The observation that tool performance varies with the source of the sequencing data highlights the importance of benchmarking tools on both biological and simulated datasets.

Figure 1. Precision/Recall Curve for the winning submission for each of the challenge datasets (to view this chart visit the submissions page on Mosaic).

To interactively compare the Profiling submissions and view Precision Recall Curves, visit the Strains #1 Profiling comparison page. 

Assembly  

Rayan Chikhi, PhD, a computer scientist at the French National Center for Scientific Research (CNRS) and the CRIStAL research center, and an advisor at Clarity Genomics, scored highest in the Assembly part of the challenge, using the Minia assembler to assemble the metagenomic data provided for the challenge. The assembly portion was judged on the total number of aligned bases divided by the reference genome size (Genome Fraction). The winning submission also scored well across the other metrics reported on the leaderboard, namely misassemblies and mismatches.

Figure 2. Genome fraction scores across 13 biological sample reference strains 
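For readers unfamiliar with the Genome Fraction metric, the sketch below illustrates the idea: the number of reference bases covered by aligned contigs, divided by the reference genome length. Real evaluation tools (QUAST, for example) handle alignment details far more carefully; the function and the coordinates in the example are simplified, hypothetical illustrations.

```python
# Simplified illustration of the Genome Fraction metric: reference bases covered
# by aligned assembly contigs divided by the reference genome length.
# Alignments are (start, end) half-open reference coordinates; overlaps are
# merged so shared bases are counted once. Illustrative only.

def genome_fraction(alignments, reference_length):
    covered = 0
    last_end = 0
    for start, end in sorted(alignments):
        start = max(start, last_end)  # skip bases already counted
        if end > start:
            covered += end - start
            last_end = end
    return covered / reference_length


# Hypothetical example: two alignments covering 900 bp of a 1,000 bp reference.
print(genome_fraction([(0, 500), (400, 900)], 1_000))  # 0.9
```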

Honorable mentions go to two other participants. Peter McCaffrey came a close second with his DeepBiome submission, and his submitted assemblies were longer than those of the winning submission. Additionally, the submissions from Sergey Nurk (using the metaSPAdes assembler) consistently had the largest contigs.

To make your own comparisons between the submissions and dive in deeper in the rich comparison data available, visit the Strains #1 Assembly comparison page. 

Learn about the winners’ methods during our webinar confirmed for Tuesday, June 26th at 10am PT (1pm ET).

Want More Ways to Participate in the Mosaic Microbiome Community?

Learn more and get involved at mosaicbiome.com/challenges

Visit Us at Microbiome Drug Development Summit!  

DNAnexus will present Translation of Microbiome Research into Clinical Applications this Friday, June 22nd, at 12pm at the Microbiome Drug Development Summit in Boston. Join our talk, and stop by our exhibition table to learn more about DNAnexus microbiome capabilities and the Mosaic Community Platform & Challenges. Email us to schedule a meeting in advance.

Translation of Microbiome Research into Clinical Applications

  • Crowdsourcing the advancement of microbiome research with the Mosaic Community platform and challenges
  • Considerations for incorporating microbiome data into clinical trials
  • Complying with GLP, 21 CFR Part 11, and more

Speakers:

  • Omar Serang, Chief Cloud Officer, DNAnexus
  • Michalis Hadjithomas, PhD, Microbiome Lead, DNAnexus

PrecisionFDA Receives FDA Commissioner’s Award for Outstanding Achievement

Today, the precisionFDA Next Generation Sequencing (NGS) Team received the FDA Commissioner’s Special Citation Award for Outstanding Achievement and Collaboration in the development of the precisionFDA platform, which promotes innovative regulatory science research to modernize the regulation of NGS-based genomic tests. This award recognizes superior achievement of the Agency’s mission through teamwork, partnership, shared responsibility, and fostering collaboration to achieve FDA goals.

PrecisionFDA is an online, cloud-based, virtual research space where members of the genomics community can experiment, share data and tools, collaborate, and define standards for evaluating and validating analytical pipelines. This open-source community platform, which has become a global reference standard for variant comparison, includes members from academia, industry, healthcare, and government, all working together to further innovation and develop regulatory standards for NGS-based drugs and devices. Launched in December 2015, the precisionFDA community includes nearly 5,000 users across 1,200 organizations, with more than 38 terabytes of genomic data stored.

To date, the precisionFDA NGS Team has engaged the genomics community through a series of community challenges:

  • The Consistency Challenge (Feb-Apr 2016): Invited participants to manipulate datasets with their software pipelines and conduct performance comparisons.
  • The Truth Challenge (Apr-May 2016): Gave participants the unique opportunity to test their NGS pipelines on an uncharacterized sample (HG002) and publish results for subsequent evaluation against a newly-revealed ‘truth’ dataset.
  • App-a-thon in a Box (Aug-Dec 2016): Invited the community to contribute NGS software to the precisionFDA app library, enabling the community to explore new tools.
  • Hidden Treasures Competition (Jul-Sep 2017): Participants beta-tested in-silico analyses of NGS datasets to determine the reliability and accuracy of different NGS tests.
  • CFSAN Pathogen Detection Challenge (Feb-Apr 2018): Participants helped to improve bioinformatics pipelines for detecting pathogens in samples sequenced using metagenomics.

We are thrilled that precisionFDA has been recognized for its efforts in fostering shared responsibility for the evaluation and validation of analytical pipelines. PrecisionFDA’s proven success has inspired other scientific communities, such as St. Jude Cloud (promoting pediatric cancer research) and the Mosaic microbiome platform (advancing microbial strain analysis), to establish their own collaborative ecosystems where members can contribute and innovate. DNAnexus is proud to be the platform that powers precisionFDA and other community portals, advancing scientific research through a secure and collaborative online environment.

To learn more about DNAnexus community portals please visit: http://go.dnanexus.com/community-portals.