ExTASY: a flexible and scalable approach to biomolecular simulation

Author: Iain Bethune
Posted: 18 Jul 2016 | 12:20

Over the last 10 years, the growth in performance of HPC systems has come largely from increasing core counts, which poses a question for application developers and users: how best to make use of the parallelism on offer?

In the field of biomolecular simulation, for example, strong scaling (using more cores for a fixed problem size) is typically limited to a few hundred CPU cores, or a single GPU, for system sizes of 10-100 thousand atoms. Weak scaling (increasing the problem size in proportion to the number of cores) produces good performance numbers for systems of up to a billion atoms, but most scientifically relevant simulations are much smaller.

To understand the behaviour and function of a biomolecular system requires good ‘sampling’ of its conformational space: essentially, running a simulation long enough that the molecule has time to randomly explore all the possible ‘shapes’ it is capable of taking. This sounds simple, but in practice poses a real problem. A Molecular Dynamics simulation visits states with a probability that decreases exponentially with the free energy of the state. Biological systems usually have a small number of low-energy states (for example, different foldings of a protein) separated by high energy barriers. Crossing a barrier has such a low probability that it might take a few milliseconds or longer for a molecule to spontaneously leave its current state and transition to a new one, and even with custom high-performance hardware and software it can take months of computing to simulate a single millisecond of dynamics!

As a result, various methods have been developed, such as Metadynamics and Accelerated Dynamics, which aim to ‘push’ simulations over energy barriers and speed up the exploration of the conformational space. Another approach is to run, rather than a single MD simulation, a ‘swarm’ of hundreds or more simulations starting from the same point, increasing the chance that at least one of them will cross a barrier.
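For readers who want the exponential dependence spelled out, the standard Boltzmann and Arrhenius-type relations below (not part of the original post, written in the usual textbook notation) capture why high barriers are crossed so rarely: the population of a state and the waiting time to escape it both depend exponentially on free energies relative to the thermal energy.

P(i) \propto \exp\!\left(-\frac{G_i}{k_B T}\right), \qquad
\tau_{\mathrm{cross}} \sim \tau_0 \exp\!\left(\frac{\Delta G^{\ddagger}}{k_B T}\right)

Here G_i is the free energy of state i, k_B T is the thermal energy, \Delta G^{\ddagger} is the height of the barrier being crossed, and \tau_0 is a prefactor set by the fast local motions of the molecule. A barrier of only a few tens of k_B T is already enough to push \tau_{\mathrm{cross}} far beyond routine simulation timescales.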

Free energy landscape of Alanine-12, computed using Diffusion-Map-directed MD. Representative conformations are shown for each of the regions of the space which have been sampled.

These swarm or ‘ensemble’ simulations are well suited to parallel computing resources, since each simulation is independent, or at most loosely coupled via some infrequent analysis step. The ExTASY project has developed a Python framework, Ensemble Toolkit, which provides a powerful but user-friendly API for coding workflows in which ensemble MD simulations are coupled to tools that analyse their results and propose new start points for further MD, in an iterative ‘Simulation-Analysis Loop’ pattern.

Ensemble Toolkit builds on the established distributed computing middleware RADICAL-Pilot, so workflows can not only be specified programmatically but also executed directly on a range of compute resources, whether local or on remote HPC or cloud computing platforms. The differences between execution platforms are abstracted away from the user, and the middleware handles details such as resource allocation (perhaps via a batch system), bootstrapping the Pilot-Job used to execute the workflow, data staging to and from the target resource, and the execution of the workflow itself, respecting the dependencies defined between tasks.
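To make the pattern concrete, here is a minimal, self-contained Python sketch of a Simulation-Analysis Loop. It is deliberately not the Ensemble Toolkit API: run_md(), analyse(), ENSEMBLE_SIZE and ITERATIONS are hypothetical placeholders standing in for real MD engines and analysis kernels, and a local process pool stands in for the middleware. The point is only the control flow: a swarm of independent simulations per iteration, followed by an analysis step that proposes the next set of start points.

# Conceptual sketch of the Simulation-Analysis Loop pattern used by ExTASY.
# NOTE: this is NOT the Ensemble Toolkit API; run_md() and analyse() are
# toy stand-ins for MD engines (e.g. GROMACS/AMBER) and analysis tools
# (e.g. LSDMap/CoCo).

from concurrent.futures import ProcessPoolExecutor
import random

ENSEMBLE_SIZE = 8   # number of independent MD simulations per iteration
ITERATIONS = 4      # number of simulation -> analysis cycles


def run_md(start_point, steps=1000):
    """Placeholder for one independent MD simulation: a toy 1D random walk."""
    position = start_point
    trajectory = []
    for _ in range(steps):
        position += random.gauss(0.0, 0.1)   # toy "dynamics"
        trajectory.append(position)
    return trajectory


def analyse(trajectories, n_new):
    """Placeholder analysis step: propose new start points from the ensemble,
    crudely favouring the extremes of what has been sampled so far."""
    samples = sorted(frame for traj in trajectories for frame in traj)
    return samples[:n_new // 2] + samples[-(n_new - n_new // 2):]


if __name__ == "__main__":
    starts = [0.0] * ENSEMBLE_SIZE
    for it in range(ITERATIONS):
        # Simulation step: ensemble members are independent, so they can run
        # concurrently on however many cores (or nodes) are available.
        with ProcessPoolExecutor() as pool:
            trajectories = list(pool.map(run_md, starts))
        # Analysis step: loosely couple the ensemble and propose new starts.
        starts = analyse(trajectories, ENSEMBLE_SIZE)
        print(f"iteration {it}: new start points {starts}")

In the real toolkit the simulation and analysis steps wrap actual executables, and it is the RADICAL-Pilot middleware, rather than a local process pool, that decides where and how each task runs.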

Dynamics of BPTI from a 1 ms trajectory generated using the Anton computer. Different conformational states are labelled by colour, showing the rare purple state which is only seen briefly towards the end of the simulation.

So far, we have worked with researchers from Rice University and the University of Nottingham to implement two different sampling workflows using Ensemble Toolkit: Diffusion-Map-directed MD, which uses the GROMACS program for the simulations and LSDMap to analyse the results, and CoCo-MD, based on the AMBER MD program and an analysis tool called CoCo. Thanks to the platform-independent middleware, we have demonstrated these workflows running on several large HPC systems in the UK and US (ARCHER, a Cray XC30; Stampede, a Linux cluster; and Blue Waters, a Cray XE6/XK7) using over 3000 cores.

Running advanced sampling applications on a remote HPC system using Ensemble MD

The ExTASY project is just one example of how science is adapting to trends in computing hardware: exposing more parallelism to make use of more cores, combined with the ability to run on whatever platform is available.

This blog article originally appeared as a guest entry on the CloudLightning project's blog. You can read the original here.

Author

Iain Bethune, EPCC