Posted: 15 Jun 2017 | 11:49
Reaching the exascale has been a focus of the HPC community for several years, and EPCC has been a key player from the beginning.
Posted: 24 May 2017 | 19:30
When we parallelise and optimise computational simulation codes we always have choices to make: which parallel model to use (distributed memory, shared memory, PGAS, single-sided, etc.), whether the algorithm needs to be changed, and which parallel functionality to use (loop parallelisation, blocking or non-blocking communications, collective or point-to-point messages, etc.).
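To make one of those choices concrete, here is a minimal sketch (my own illustration, not code from the post; the buffer size and ring-neighbour pattern are purely illustrative) of the same halo exchange written first with a blocking call and then with non-blocking calls that allow communication to overlap independent computation.

```c
/* Sketch: blocking vs non-blocking point-to-point exchange in MPI. */
#include <mpi.h>
#include <stdio.h>

#define N 1024

int main(int argc, char **argv)
{
    int rank, size;
    double sendbuf[N], recvbuf[N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;
    for (int i = 0; i < N; i++) sendbuf[i] = rank;

    /* Choice 1: blocking exchange - simple, but no overlap with computation. */
    MPI_Sendrecv(sendbuf, N, MPI_DOUBLE, right, 0,
                 recvbuf, N, MPI_DOUBLE, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* Choice 2: non-blocking exchange - independent work can be placed
     * between posting the messages and MPI_Waitall. */
    MPI_Request reqs[2];
    MPI_Irecv(recvbuf, N, MPI_DOUBLE, left, 1, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, N, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &reqs[1]);
    /* ... independent computation could go here ... */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    if (rank == 0) printf("Exchange complete on %d ranks\n", size);
    MPI_Finalize();
    return 0;
}
```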
Posted: 11 May 2017 | 00:06
As part of the ARCHER Knights Landing (KNL) processor testbed, we have produced and collected a set of benchmark reports on the performance of various scientific applications on the system. This has involved the ARCHER CSE team, EPCC's Intel Parallel Computing Center (IPCC) team, and various users of the system, all of whom have benchmarked and documented the performance they have experienced.
Posted: 11 Apr 2017 | 17:59
Shall I compare thee...
Performance comparisons are always tricky to get exactly right. We need them to demonstrate the improvements that optimisations, new hardware, new algorithms and so on have brought to an application or benchmark. However, there is a lot of latitude in what can be compared, which makes it easy to get a comparison wrong and fail to demonstrate whatever it is you are trying to show.
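One way to remove at least the measurement side of that latitude is to time the same region of interest in the same way for every run. The sketch below is my own illustration, not the post's benchmark harness: compute_step() is a placeholder for whatever kernel is being compared, and the slowest rank is reported because it determines the overall runtime.

```c
/* Sketch: consistent timing of a benchmark region across MPI ranks. */
#include <mpi.h>
#include <stdio.h>

/* Placeholder for the application work being benchmarked. */
static void compute_step(void) { /* ... */ }

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);      /* start all ranks together */
    double t0 = MPI_Wtime();
    for (int step = 0; step < 100; step++) compute_step();
    double local = MPI_Wtime() - t0;

    double slowest;                   /* runtime is set by the slowest rank */
    MPI_Reduce(&local, &slowest, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("time: %f s\n", slowest);

    MPI_Finalize();
    return 0;
}
```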
Posted: 10 Mar 2017 | 13:54
Thread and process binding
Note: this post was updated on 23 March 2017 to cover how to bind threads correctly on Cray systems (aprun -cc rather than taskset).
Making sure threads and processes are correctly placed, or bound, on cores or processors is essential to ensure good performance for a range of parallel applications.
This is not a new topic, and it has been covered well by others before, e.g. http://www.glennklockwood.com/hpc-howtos/process-affinity.html. Generally it is handled for you: if you are running an MPI program, your mpirun/mpiexec/aprun job launcher will do sensible process binding to cores.
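As a quick sanity check on whatever binding you have asked for, something like the following (my own sketch, not code from the post; sched_getcpu() is Linux-specific) prints the core each OpenMP thread ends up on:

```c
/* Sketch: report which core each OpenMP thread is running on. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel
    {
        /* sched_getcpu() returns the core the calling thread is
         * currently executing on. */
        printf("thread %d of %d on core %d\n",
               omp_get_thread_num(), omp_get_num_threads(), sched_getcpu());
    }
    return 0;
}
```

On a Cray system such as ARCHER you would launch this through aprun and control the binding with its -cc option, as the update above notes; on other systems setting OMP_PROC_BIND and OMP_PLACES achieves much the same thing.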
Posted: 2 Feb 2017 | 11:37
Fluidity for tidal modelling
Figure 1: Mesh for the Sound of Islay tidal simulation. Courtesy Dr Creech.
We were recently involved in a project to optimise the CFD modelling package Fluidity for tidal modelling. This ARCHER eCSE project was primarily carried out by Dr Angus Creech from the Institute of Energy Systems in Edinburgh.
Posted: 27 Jan 2017 | 14:30
For those of you not acquainted with OpenFOAM, it's a large open source CFD package used by a wide variety of scientists and companies to investigate a whole range of scientific and engineering problems.
We support it on ARCHER and have a number of different versions available and in use on the machine. As part of our IPCC work we are interested in looking at the performance of OpenFOAM on the latest Xeon Phi processor, Knights Landing (KNL).
Posted: 15 Dec 2016 | 13:05
In a previous blog post I talked about ePython, the very lightweight version of Python that I have developed for the Epiphany co-processor. This co-processor is combined with a dual-core ARM CPU on the Parallella single-board computer, and this week an updated OS image was released for the machine that now includes ePython pre-installed.