Too much choice?
Posted: 17 Jun 2019 | 14:24
When is enough, enough? With so many parallel programming technologies, should we focus on consolidating them?
At the ISC conference in June I will moderate a panel discussion on whether it is time to focus on the consolidation and interoperability of existing parallel programming technologies, rather than the development of new ones.
MPI P2P, MPI RMA, OpenMP, OmpSs, Legion, GPI-Space, UPC++, Charm++, HPX, Chapel, GASPI, OpenACC, OpenCL, CUDA... This is just a tiny subset of the HPC programming technologies available for writing parallel code.
Whilst a few have found popularity, the majority have seen limited uptake. This is unfortunate, as some of the less popular technologies contain genuinely useful features, but it is a struggle to get a new HPC programming technology widely adopted. There are various reasons for this, but probably the most significant is that HPC codes tend to be long lived. Choosing a technology that is not fully mature carries significant risk for an HPC developer: they cannot be sure whether it will do what is required, be installed on the target machines, or be fully supported throughout the life of the code.
Sometimes it is a case of better the devil you know: even though classical parallel technologies such as MPI v1 and OpenMP might not be perfect, at least their ubiquity means they are well supported, their future is assured, and programmers know to some extent what to expect.
But most of us would agree that these tools are not ideal. So instead of focusing on developing new solutions, should we tackle the problem by consolidating what we already have and ensuring that the pieces work together, so that the overall value is greater than the sum of the parts?
This really ties in with the INTERTWinE FET HPC project, which finished last year. Led by EPCC, it developed approaches and standardisation proposals for programming model interoperability, as well as exploring numerous use cases and writing best practice guides. The project was very successful, and now that it has concluded it is clear that significant further work remains for the HPC community: looking more at opportunities for increased coordination rather than focusing narrowly on our own specific programming technologies.
Our panel at ISC will consider whether we should be looking more closely at consolidating and combining existing parallel programming technologies, working towards standardisation to enable better interoperability. It will also discuss the sort of parallel programming technologies the community should support.
ISC 2019 panel discussion: When Is Enough, Enough? With So Many Parallel Programming Technologies, Is It Time to Focus on Consolidating Them?
Nick Brown, EPCC