What Does the Sunsetting of Python 2.7 Mean for You?

Sunsetting Python 2.x

As stated on python.org, the Python core development team sunset Python 2.x on January 1, 2020, and will support only Python 3.x going forward. This means that the Python organization will no longer provide security updates, bug fixes, or other improvements for Python 2. Read on for information about what this means for you as a user of the DNAnexus Platform.

The Fine Print

As mentioned above, any new security vulnerabilities discovered in Python 2 after January 1, 2020, will remain unpatched. The DNAnexus execution environment isolates each app in a secure Linux container, which mitigates the impact of potential Python 2 security vulnerabilities. Nevertheless, given the lack of support after Python 2 goes End-of-Life (EOL), a significant security vulnerability may force the DNAnexus Platform to disable execution of Python 2, or require you to assume full liability for executing your Python 2 code.

As of December 2019, we provide an Ubuntu 16.04 app execution environment, “Python 2 AEE,” which includes the following:

  • The dx-toolkit package (including the “dx” command-line client and the “dxpy” Python module), configured in a way that requires Python 2.
  • The stock Ubuntu python2.7 interpreter, available at /usr/bin/python.
  • The stock Ubuntu Python 3.5.2 interpreter, available at /usr/bin/python3.

To facilitate the migration to Python 3, we plan to provide a new Ubuntu 16.04 AEE in the first quarter of 2020. This new “Python 3 AEE” will include the dx-toolkit package configured in a way that makes “dxpy” compatible with both Python 2 and Python 3. The “dx” command-line client will use Python 3.
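For example, applet code written against the dual-compatible “dxpy” can lean on __future__ imports so it behaves identically under /usr/bin/python and /usr/bin/python3. Here is a minimal sketch of that pattern; the entry point and input/output names are invented for illustration:

```python
# A minimal sketch of applet code that runs unchanged under Python 2.7
# and Python 3; the input/output names here are hypothetical.
from __future__ import print_function, division, unicode_literals

import dxpy


@dxpy.entry_point("main")
def main(mappings_bam):
    # dxpy file-transfer helpers behave the same under both interpreters.
    dxpy.download_dxfile(mappings_bam, "input.bam")

    coverage = 30 / 4  # true division under both interpreters, thanks to __future__
    print("Expected mean coverage:", coverage)

    uploaded = dxpy.upload_local_file("input.bam")
    return {"output_bam": dxpy.dxlink(uploaded)}


dxpy.run()
```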

Furthermore, we will introduce a new dxapp.json configuration option that lets you select between “Python 2 AEE” and the new “Python 3 AEE,” along with a new “python3” value for the “interpreter” dxapp.json configuration option.
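Once the Python 3 AEE ships, opting in should look roughly like the dxapp.json fragment below. Treat this as a sketch of the planned options: the "interpreter": "python3" value comes from the plan above, while the "version" field used here to select the new AEE is our assumption until the option is released.

```json
{
  "name": "my_applet",
  "runSpec": {
    "interpreter": "python3",
    "distribution": "Ubuntu",
    "release": "16.04",
    "version": "1",
    "file": "src/code.py"
  }
}
```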

In summary, while it is still possible to use either Python 2.x or Python 3, we strongly encourage you to review your code for Python 2.7 dependencies and migrate to Python 3 in order to avoid security issues.

For More Information

To help with your planning and to further explain what this means, we’ve put together an FAQ.

Refining GWAS Results Using Machine Learning

Genome-wide association studies (GWAS) give researchers a viable approach to identifying genetic variations associated with a particular trait. GWAS have already identified single nucleotide polymorphisms associated with diabetes, Parkinson’s disease, and other conditions. However, these comprehensive studies frequently identify large numbers of genetic variants associated with the phenotypes of interest, not all of which are causal.

Fine mapping, a statistical process in which additional data are introduced into the GWAS dataset, enables researchers to prioritize the variants that warrant further examination. It also helps them identify variants that narrowly missed the genome-wide significance threshold but are in fact causal.
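To make the Bayesian flavor of fine mapping concrete, one widely used calculation (not necessarily the method covered in the webinar) is Wakefield’s approximate Bayes factor, which converts each variant’s GWAS effect estimate and standard error into evidence of causality and, under a single-causal-variant assumption, into posterior probabilities. A minimal sketch with hypothetical summary statistics:

```python
# Illustrative single-causal-variant fine mapping using Wakefield's
# approximate Bayes factor (ABF); the numbers below are invented.
import numpy as np

def wakefield_abf(beta_hat, se, prior_sd=0.2):
    """ABF favoring H1 (variant is causal) over H0, from summary statistics."""
    v = se ** 2                # variance of the effect estimate
    w = prior_sd ** 2          # prior variance of the true effect size
    z = beta_hat / se          # GWAS z-score
    shrink = w / (v + w)
    return np.sqrt(1.0 - shrink) * np.exp(z ** 2 * shrink / 2.0)

# Hypothetical summary statistics for five variants at one locus.
beta = np.array([0.12, 0.45, 0.40, 0.05, -0.02])
se = np.array([0.10, 0.09, 0.10, 0.11, 0.10])

abf = wakefield_abf(beta, se)
pip = abf / abf.sum()          # posterior inclusion probabilities
for i, p in enumerate(pip):
    print("variant %d: PIP = %.3f" % (i, p))
```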

But fine mapping is easier said than done. For starters, you have to set up a proper computing environment, one that promotes traceability and reproducibility. Both become even more important when you are testing a drug that may eventually enter clinical trials. You also need to assemble the data in the form your fine-mapping algorithms expect, which can be challenging. And then there are the scientific challenges: models are hard to compare and evaluate, and there are no frameworks that let you interact with the models and improve upon them.

The DNAnexus Platform provides end-to-end support for machine learning and also enables you to build and deploy the models such that domain scientists can ask questions and interact with the models themselves.

Join us for our upcoming webinar in which we provide an overview of how to refine your GWAS results using fine mapping. Specifically, by borrowing from Bayesian statistical methods, we present an interactive approach for applying machine learning-based models in fine mapping. Real-life examples will be demonstrated using UK Biobank data on the DNAnexus Platform. Register now.

Designing Bioinformatics Pipelines for Fast Iteration

When genetic tests are ordered, there’s probably little thought given to all of the bioinformatics work required to make the test possible. However, the bioinformatics team at Myriad Genetics understands firsthand just how much work it takes. Myriad Genetics provides diagnostic tests that help physicians understand risk profiles, diagnose medical conditions, and inform treatment decisions. To support their comprehensive test menu and their commitment to timely, accurate test results, the bioinformatics team at Myriad focuses on optimizing their bioinformatics pipelines. How? By designing pipelines that leverage modularity and computational reuse so the team can make improvements and iterate more quickly.

Jeffrey Tratner, Director of Software Engineering, Bioinformatics at Myriad, spoke at DNAnexus Connect, explaining how fast iteration works on the DNAnexus Platform. You can learn more by watching his talk or reading the summary below.

Typical pipeline development involves setting up infrastructure, building a computation process, and analyzing the results. When adjustments are made, this process repeats as many times as necessary until the pipeline has been properly validated. With complex pipelines, this can consume many resources and a lot of time. Myriad wanted a more efficient way to iterate on their pipelines so that they could optimize them faster. Fast R&D, as Myriad defines it, is characterized by an environment in which you can make adjustments easily, find answers quickly, and don’t have to overthink or second-guess which areas of the pipeline to change when making adjustments.

Myriad reduced pipeline R&D from 2 weeks to 2 hours by leveraging tools that enable them to re-use computations.

The team at Myriad first demonstrated this concept when they performed a retrospective analysis with a new background normalization step, the tenth step of a 15-step workflow, on over 100,000 NIPT (non-invasive prenatal test) samples. Simply rerunning the entire modified workflow would have taken two weeks. Instead, Myriad reduced this time to two hours by rethinking the pipeline and leveraging tools that enable them to re-use computations.

Now codified, the approach in use at Myriad enables their team to make changes and iterate quickly, all with a focus on accuracy, reproducibility, and moving validated pipelines into production. 

So how can you borrow from their approach and design your bioinformatics pipelines for faster iteration?

Make computational modules smaller

Although it’s tempting to use monorepositories when coding because they promote sharing, convenience, and low overhead, they don’t promote modularity within pipelines. And modularity is what enables you to scale quickly, re-use steps, and identify and debug problems. Myriad kept all the source code for their workflows in monorepositories, but developed smart ways to break that code into smaller modules and build only the modules that have been modified, as sketched below.
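One way to implement “build only what changed” in a monorepository (a sketch of the general technique, not Myriad’s actual tooling; the paths are hypothetical) is to fingerprint each module’s source tree and compare against the fingerprints recorded at the last build:

```python
# Sketch: rebuild only the modules whose source content has changed.
# Module paths and the cache file name are hypothetical.
import hashlib
import json
from pathlib import Path

def module_fingerprint(module_dir):
    """Hash every file in the module so any edit changes the fingerprint."""
    digest = hashlib.sha256()
    for path in sorted(Path(module_dir).rglob("*")):
        if path.is_file():
            digest.update(str(path).encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()

def modules_to_rebuild(modules, cache_file="build_cache.json"):
    """Return the subset of modules whose fingerprints differ from last build."""
    cache = {}
    if Path(cache_file).exists():
        cache = json.loads(Path(cache_file).read_text())
    stale = []
    for module in modules:
        fp = module_fingerprint(module)
        if cache.get(module) != fp:
            stale.append(module)
            cache[module] = fp
    Path(cache_file).write_text(json.dumps(cache))
    return stale

print(modules_to_rebuild(["modules/aligner", "modules/normalizer"]))
```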

Take advantage of tools that enable you to reuse computations

If you run an app with the same input files and parameters, your results should be equivalent. So if you are changing a step downstream, why rerun all of the steps that come before it if they’ve already been run? The DNAnexus Platform, for example, includes a Smart Reuse feature, which lets organizations optionally reuse the outputs of jobs that share the same executable and input IDs, even across projects. By reusing computational results, developers can dramatically speed up the development of new workflows and reduce resources spent on testing at scale. To learn more about Smart Reuse, visit our DNAnexus documentation here.
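Conceptually, Smart Reuse behaves like memoization keyed on the executable and its exact inputs. A toy sketch of the idea (not the Platform’s actual implementation):

```python
# Toy illustration of computation reuse: results are looked up by
# (executable ID, inputs) before anything is re-run.
import hashlib
import json

_result_cache = {}

def run_with_reuse(executable_id, inputs, run_fn):
    """Return a cached result when this exact job has already run."""
    key = hashlib.sha256(
        json.dumps([executable_id, inputs], sort_keys=True).encode()
    ).hexdigest()
    if key not in _result_cache:
        _result_cache[key] = run_fn(inputs)   # only executes on a cache miss
    return _result_cache[key]

# A downstream change re-runs only the changed step; unchanged upstream
# steps with identical inputs hit the cache instead.
normalized = run_with_reuse("applet-normalize", {"samples": "batch-001"},
                            lambda x: "normalized:" + x["samples"])
print(normalized)
```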

[Figure: Smart Reuse in a bioinformatics pipeline]

Use workflow tools to describe dependencies and manage the build process

Workflow tools, such as WDL (Workflow Description Language), make pipelines easier to express and build. With WDL, you can easily describe module dependencies and track version changes to the workflow. Docker also integrates naturally with WDL: if you’re using a container registry, you can edit a single line of WDL to load a different version of a module with a new Docker image. Myriad writes their bioinformatics pipelines in WDL and statically compiles them with dxWDL into DNAnexus workflows, streamlining the build process. Learn more about running Docker containers within DNAnexus apps or dxWDL from our DNAnexus documentation.
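As an illustration, here is a minimal WDL task (the task name, tool, and image tag are invented) in which upgrading the tool is exactly the kind of one-line Docker image edit described above:

```wdl
version 1.0

task normalize_background {
  input {
    File counts
  }
  command <<<
    normalize --input ~{counts} --output normalized.tsv
  >>>
  runtime {
    # Upgrading the tool is a one-line change to this image tag.
    docker: "example/background-normalizer:2.1.0"
  }
  output {
    File normalized = "normalized.tsv"
  }
}
```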