Posted: 28 Jun 2013 | 10:13
Reaching the Exascale is rightly posed as a combination of challenges related to (i) energy efficiency, (ii) heterogeneity and resiliency of computation, storage and communication, and (iii) the sheer scale of the parallelism involved. Many discussions about Exascale focus on the first two challenges. This is understandable: building an Exascale system with today's most energy-efficient technology would still require around 480 MW.
Posted: 23 May 2013 | 17:00
I've just finished working on two papers for this year's Supercomputing conference, SC13, which is going to be in Denver, Colorado, from the 17th-22nd November. EPCC will have an official presence, with an EPCC booth on the exhibition floor and a number of staff participating in the technical and education programmes. I thought this a good opportunity to write up some recent work I've been undertaking.
Posted: 10 May 2013 | 10:00
In my previous blog post I said that I was working on a library to move data between different data decompositions.
In many cases it is easier for a programmer to work with a global coordinate system that reflects the overall data in the program. This is the approach taken by many PGAS languages and some parallel libraries such as BLACS.
The programmer still wants to be in control of the data decomposition, but ideally this should be a separate concern that can be changed without forcing a complete rewrite of the rest of the program.
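To make the idea concrete, here is a minimal sketch of what that separation of concerns can look like. The class names and methods below are my own illustration, not the library's actual API: application code addresses elements by global index only, and the decomposition object alone knows how global indices map to a process rank and a local offset.

```python
class BlockDecomposition:
    """Splits a 1-D global index range into contiguous blocks, one per process."""
    def __init__(self, global_size, nprocs):
        self.block = -(-global_size // nprocs)  # ceil(global_size / nprocs)

    def owner(self, gidx):
        return gidx // self.block      # rank holding this global index

    def to_local(self, gidx):
        return gidx % self.block       # offset within that rank's block


class CyclicDecomposition:
    """Deals elements out round-robin, one element per process at a time."""
    def __init__(self, global_size, nprocs):
        self.nprocs = nprocs

    def owner(self, gidx):
        return gidx % self.nprocs

    def to_local(self, gidx):
        return gidx // self.nprocs


# Application code never mentions blocks or cycles; swapping the
# decomposition requires changing only this one line.
decomp = BlockDecomposition(global_size=10, nprocs=2)
print(decomp.owner(7), decomp.to_local(7))   # → 1 2
```

Because the rest of the program only ever calls `owner` and `to_local`, replacing `BlockDecomposition` with `CyclicDecomposition` (or anything else satisfying the same two-method contract) leaves it untouched.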
Posted: 3 May 2013 | 15:18
This article originally appeared in the Cisco blog by Jeff Squyres and was written while I was undertaking my PhD, before I joined EPCC as a member of staff. I thought it would be of interest to folks reading this blog.
My PhD involved building a message passing library using C#; not accessing an existing MPI library from C# code, but creating a brand new MPI library written entirely in pure C#. The result is McMPI (Managed-code MPI), which is compliant with MPI-1 – as far as it can be, given that there are no language bindings for C# in the MPI Standard. It also achieves reasonably good latency and bandwidth in micro-benchmarks, in both shared memory and distributed memory.
Posted: 2 May 2013 | 13:32
I'm currently working on a small library to support decomposition changes in parallel programs.
It turns out that a fairly simple interface can be used to describe a very large space of possible data decompositions. I'm therefore writing a library that can redistribute data between any two such decompositions.
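The post doesn't show the interface itself, so the following is purely an assumed sketch of the idea: if a decomposition is described by a function mapping each global index to a (rank, local index) pair, then that one small interface already covers block, cyclic, block-cyclic and irregular layouts, and a generic routine can redistribute between any two such descriptions. The function names and the serial "simulation" of ranks below are my own illustration.

```python
def block_map(gidx, n, p):
    """Contiguous blocks: global index -> (rank, local index)."""
    b = -(-n // p)                 # ceil(n / p): block size
    return gidx // b, gidx % b

def cyclic_map(gidx, n, p):
    """Round-robin dealing: global index -> (rank, local index)."""
    return gidx % p, gidx // p

def redistribute(data_by_rank, src, dst, n, p):
    """Move a 1-D global array of size n, spread over p ranks,
    from decomposition `src` to decomposition `dst`.

    `src` and `dst` are any functions with the (gidx, n, p) ->
    (rank, local) signature, so this one routine works for the
    whole space of decompositions that signature can describe.
    """
    out = [dict() for _ in range(p)]
    for g in range(n):
        sr, sl = src(g, n, p)      # where the element lives now
        dr, dl = dst(g, n, p)      # where it should end up
        out[dr][dl] = data_by_rank[sr][sl]
    return [[d[i] for i in range(len(d))] for d in out]

# A 6-element array, block-decomposed over 2 ranks...
data = [[0, 1, 2], [3, 4, 5]]
# ...redistributed to a cyclic decomposition:
print(redistribute(data, block_map, cyclic_map, 6, 2))
# → [[0, 2, 4], [1, 3, 5]]
```

In a real parallel implementation the inner loop would of course become message exchanges between ranks rather than dictionary writes, but the planning step – intersecting where each element lives under the source layout with where it must live under the target layout – is the same.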
Posted: 30 Apr 2013 | 14:00
Posted: 17 Apr 2013 | 12:09
The 7th International Conference on Partitioned Global Address Space (PGAS) Programming Models will be held in Edinburgh on the 3rd-4th October 2013.
Posted: 11 Apr 2013 | 14:18
It's the last session of the EASC conference and we still have 80-90 people in attendance, which is fantastic. We're all very pleased that over 130 people came and contributed to this event; it's great to see the demand for a discussion forum for Exascale research and issues.
Keep your eye on the EASC2013 website (http://www.easc2013.org.uk), because we will shortly be putting up the slides from the keynotes, which we have also been filming and will put online (provided the videos are of sufficient quality).
Posted: 11 Apr 2013 | 12:34
Posted: 10 Apr 2013 | 16:40
Day 2 of the EASC2013 conference in Edinburgh.
There has been a series of interesting and diverse talks today, with parallel sessions covering topics such as tools for exascale, preparing applications for exascale, and the I/O challenges. Disruptive technologies have been a theme in many talks and discussions, one started yesterday with Iain Duff's talk on disruptive technologies within the EESI-2 project.