The Message-Passing Interface: unlocking the power of software

Author: Mike Jackson
Posted: 8 Jan 2015 | 11:53

EPCC helped lead the way in creating the standardised Message-Passing Interface (MPI) programming system to enable faster, more powerful problem solving using parallel computing. It is now the ubiquitous de facto standard, supported by both hardware and software vendors.

Mike Jackson and Dan Holmes explain how EPCC continues to be involved in its development and use.

Parallel computing systems are everywhere, from the world’s most powerful supercomputers to multi-core laptops. However, differences in how hardware vendors support parallel programs mean it can be difficult to write a program that will run well across different computers. MPI allows the same software to run on different parallel computers; without it, software would need to be rewritten each time it moved to new hardware.
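
As a minimal illustration of what message passing looks like in practice (our own sketch, not code taken from the article), the C program below sends a single integer from one MPI process to another. The same source compiles and runs against any standards-conforming MPI library, whether on a laptop or a supercomputer.

    /* Illustrative example (not EPCC code): rank 0 sends an integer to
       rank 1. Build with an MPI compiler wrapper (e.g. mpicc) and launch
       two processes with mpirun or mpiexec. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, value;

        MPI_Init(&argc, &argv);                 /* start the MPI runtime   */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process is this?  */

        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Rank 1 received %d from rank 0\n", value);
        }

        MPI_Finalize();                         /* shut down cleanly       */
        return 0;
    }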

MPI allows users to exploit the power of multi-core, multi-processor, cloud or supercomputer architectures. They are free to choose hardware that meets their operational and computational needs and fits within their financial constraints. The potential benefits are enormous, including reduced processing times, increased revenue, and the ability to solve more complex problems in industry and academia.

Exascale: the future of computing

MPI is an integral part of the effort to develop Exascale computing: systems that can do over a billion billion calculations per second. EPCC and supercomputer manufacturer Cray are collaborating on new programming models to enhance the interfaces for Exascale computing.

The ubiquity of MPI for HPC programming at all scales means that, at the very least, a migration path from MPI to any Exascale solution must be found for all existing scientific software. The continued progress of innovative computational science, and the ongoing benefits to humanity that it brings, will depend on efficient exploitation of Exascale machines, which in turn depends on making MPI ready for Exascale.

We are also working with commercial and research organisations including Intel, Cray and Cisco, and with the MPICH and Open MPI implementation projects, to improve support for hybrid and multi-threaded programming, which complements MPI and offers another way to unlock the power of parallel computers. As part of the European EPiGRAM project, EPCC is using MPI to adapt software for modelling turbulence and space weather so that it can make use of Exascale resources.
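
As a rough sketch of what hybrid programming means in practice (an assumed typical pattern, not code from EPCC or any of the projects named above), an application initialises MPI with an explicit thread-support level and then uses a threading model such as OpenMP inside each MPI process:

    /* Illustrative hybrid MPI + OpenMP skeleton (an assumed pattern, not
       EPCC or vendor code): each MPI process runs several OpenMP threads,
       and only the thread that initialised MPI makes MPI calls. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int provided, rank;

        /* Request MPI_THREAD_FUNNELED: the process may be multi-threaded,
           but only the main thread will call MPI. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        #pragma omp parallel
        {
            printf("MPI rank %d, OpenMP thread %d of %d\n",
                   rank, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }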

Dynamic network routing

Networking hardware is moving towards more dynamic configurations, especially with adaptive routing. In the MPI Forum, EPCC works with hardware vendors such as Cray and Intel to adapt MPI to dynamic network routing so that users can take full advantage of new network capabilities.

Slashing job times

Our MPI expertise has benefited many companies over the past two decades. For example, as part of a partnership with Scottish Enterprise called Supercomputing Scotland, EPCC has worked with Integrated Environmental Solutions (IES), the world’s leading provider of software and consultancy services on energy efficiency within the built environment. With MPI, IES’s SunCast software can now run on a supercomputer, creating massive time savings for the company. In one case, analysis time was reduced from 30 days to 24 hours.

IES Director Craig Wheatley said, “Using MPI in IES Consultancy has increased the efficiency and therefore profitability of our own consultancy offering and to date we have used it with 4 live projects with an average analysis time of under 12 hours. These particular projects were very large and complex and would otherwise have taken several weeks.”

Exploiting supercomputers

The exploitation of supercomputing resources requires MPI. A 2011 survey of the largest HPC systems in Europe, undertaken by the PRACE project, found that 100% of their users employed MPI in some form. All research output from ARCHER, the UK’s national supercomputing service, is underpinned by MPI. ARCHER supports a wide range of research, from international efforts to model climate change to investigating blood flow in the human brain. The use of MPI is key to enabling this work.

Cray’s Manager for Exascale Research Europe says: “Today, a Cray MPI library is distributed with every Cray system and is the dominant method of parallelisation used on Cray systems. Cray reported total revenue of over $420 million in 2012, so this is a large industry which is heavily reliant on MPI and the work that EPCC contributed in this regard.”

MPI in a commercial environment

“Solar analysis - the journey from two weeks to one hour with HPC” by Steven Turner discusses EPCC’s work with Integrated Environmental Solutions (IES).

Open source solution

MVAPICH, an open-source implementation of MPI that was designed for use in high-performance computing, has recorded over 182,000 downloads and over 2,070 users in 70 countries. Among those users are 765 companies.

Future directions

The EC-funded EPiGRAM project is preparing message-passing programming models for Exascale systems. As part of this work, EPCC’s original version of MPI (dating from the early 1990s) is being used on ARCHER to investigate the potential future direction of MPI.

McMPI (Managed-code MPI), a new MPI library written entirely in C#, was developed by Dan Holmes. Read his blog to find out more.

EPCC contacts

Commercial enquiries about the use of MPI should be directed to George Graham. For research-related enquiries, please contact Lorna Smith.

Image: Simulation by IES of daylight on a building.