Hybrid MPI+PGAS programming models tutorial

Author: Nick Brown
Posted: 18 Sep 2013 | 11:37

Here at EPCC we are very excited about the PGAS2013 conference, which is being held in Edinburgh on the 3rd and 4th of October. This will be the 7th international PGAS conference and, as always, it will be a great chance for those in the community to discuss the latest developments in the field and their research.

In the run-up to the main conference, a tutorial is being held the afternoon before. Professor Dhabaleswar Panda and his team from Ohio State University will be talking about hybrid MPI+PGAS programming models in a session that is free, and all are very welcome to attend. The session will take place on Wednesday 2nd October, running from 1:30pm until about 5pm.

Dhabaleswar is a well-known scientist in the HPC and PGAS community. He has given numerous keynotes and invited tutorials at well-respected conferences such as Supercomputing and ISC. For those in the PGAS community, the title of the tutorial might seem familiar - Dhabaleswar gave an early version at PGAS 2010. Since then, his team has carried out much development in this area, and the state of the art has progressed enough to warrant another dedicated afternoon on the topic.

To register, please follow the link at http://www.epcc.ed.ac.uk/pgas-tutorial, which also contains further details. The tutorial will be held in the JCMB on the King's Buildings campus in Edinburgh (room to be confirmed).

Tutorial abstract

Multi-core processors, accelerators (GPGPUs) and coprocessors (Xeon Phis), and high-performance interconnects (InfiniBand, 10 GigE/iWARP and RoCE) with RDMA support are shaping the architectures for next-generation clusters. Efficient programming models to design applications on these clusters, as well as on future exascale systems, are still evolving.

Partitioned Global Address Space (PGAS) models provide an attractive alternative to traditional message-passing models owing to their easy-to-use global shared-memory abstractions and lightweight one-sided communication. Hybrid MPI+PGAS programming models are gaining attention as a possible solution to programming exascale systems. These hybrid models ease the transition of codes designed using MPI, letting them take advantage of PGAS models without paying the prohibitive cost of redesigning complete applications. They also enable hierarchical design of applications, using the different models to suit modern architectures.

In this tutorial, we provide an overview of the research and development taking place in these directions and discuss associated opportunities and challenges as we head toward exascale. We start with an in-depth overview of modern system architectures with multi-core processors, GPU accelerators, Xeon Phi coprocessors and high-performance interconnects. We present an overview of language-based and library-based PGAS models, with a focus on UPC and OpenSHMEM. We introduce MPI+PGAS hybrid programming models and highlight the advantages and challenges of designing a unified runtime to support them. We examine the challenges in designing high-performance UPC, OpenSHMEM and unified MPI+UPC/OpenSHMEM runtimes. We also focus on changes to applications to exploit hybrid MPI+PGAS programming models.

Using the publicly available MVAPICH2-X software package (http://mvapich.cse.ohio-state.edu/overview/mvapich2x/), we provide concrete case studies and in-depth evaluation of runtime and application-level designs that are targeted at modern system architectures with multi-core processors, GPUs, Xeon Phis and high-performance interconnects.
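To give a flavour of the hybrid style the abstract describes, here is a minimal, illustrative sketch of mixing one-sided OpenSHMEM communication with an MPI collective in the same program. It is not taken from the tutorial materials: the counter variable, the initialisation order and the output are my own assumptions, and running both models together like this assumes a unified runtime such as MVAPICH2-X (the ordering of start_pes and MPI_Init is implementation-dependent).

```c
#include <stdio.h>
#include <mpi.h>
#include <shmem.h>

/* Symmetric variable: exists at the same address on every PE,
   so remote PEs can target it with one-sided operations. */
long counter = 0;

int main(int argc, char **argv)
{
    start_pes(0);                 /* initialise OpenSHMEM */
    MPI_Init(&argc, &argv);       /* MPI on the same (unified) runtime */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* PGAS-style one-sided communication: every PE atomically
       increments the counter that lives on PE 0, with no matching
       receive needed on PE 0's side. */
    shmem_long_inc(&counter, 0);
    shmem_barrier_all();

    /* Traditional two-sided MPI collective alongside the PGAS traffic. */
    long local = counter, max = 0;
    MPI_Reduce(&local, &max, 1, MPI_LONG, MPI_MAX, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("counter on PE 0 = %ld (from %d PEs)\n", counter, size);

    MPI_Finalize();
    return 0;
}
```

The appeal of the hybrid model is visible even in this toy: the irregular, fine-grained update uses lightweight one-sided OpenSHMEM, while the structured reduction stays in familiar MPI, without the application having to be rewritten wholesale in either model.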

Nick Brown, EPCC