HPC research

Spreading the love

Author: Adrian Jackson
Posted: 10 Mar 2017 | 13:54

Binding processes to cores

Thread and process binding

Note: this post was updated on 23 March 2017 to cover how to bind threads correctly on Cray systems (using aprun -cc rather than taskset).

Making sure threads and processes are correctly placed, or bound, on cores or processors is essential to ensure good performance for a range of parallel applications. 

This is not a new topic, and it has been covered well by others before: see, for example, http://www.glennklockwood.com/hpc-howtos/process-affinity.html. Generally this is handled for you; if you're running an MPI program, your mpirun/mpiexec/aprun job launcher will do sensible process binding to cores.
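To see what binding you actually get, a quick check from inside the process can help. Below is a minimal sketch in standard Python on Linux (my own illustration, not an EPCC tool): os.sched_getaffinity and os.sched_setaffinity expose the same kernel interface that taskset drives from the command line, and the core numbers used here are purely illustrative.

```python
# Minimal affinity sketch (standard CPython on Linux only).
import os

# The set of cores this process is currently allowed to run on.
# Under a well-behaved MPI launch, each rank should report a small,
# distinct subset of the node's cores rather than all of them.
print("Allowed cores:", sorted(os.sched_getaffinity(0)))

# Bind this process to cores 0-3: the in-process equivalent of
# launching under `taskset -c 0-3`. Core numbers are illustrative.
os.sched_setaffinity(0, {0, 1, 2, 3})
print("Now bound to:", sorted(os.sched_getaffinity(0)))
```

Printing the allowed set from every MPI rank at start-up is a cheap way to confirm that your launcher's default binding is sensible before trusting any performance numbers.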

Optimised tidal modelling

Author: Adrian Jackson
Posted: 2 Feb 2017 | 11:37

Fluidity for tidal modelling

Tidal model

Figure 1: Mesh for the Sound of Islay tidal simulation. Courtesy Dr Creech.

We were recently involved in a project to optimise the CFD modelling package Fluidity for tidal modelling. This ARCHER eCSE project was primarily carried out by Dr Angus Creech from the Institute of Energy Systems in Edinburgh.

Cross-compiling OpenFOAM for KNL

Author: Adrian Jackson
Posted: 27 Jan 2017 | 14:30

Breaking of a dam (from the OpenFOAM user guide)

For those of you not acquainted with OpenFOAM, it's a large open source CFD package used by a wide variety of scientists and companies to investigate a whole range of scientific and engineering problems. 

We support it on ARCHER and have a number of different versions available and in use on the machine. As part of our IPCC work we are interested in looking at the performance of OpenFOAM on the latest Xeon Phi processor, Knights Landing (KNL).

ePython now ships as standard with every Parallella board

Author: Nick Brown
Posted: 15 Dec 2016 | 13:05

In a previous blog post I talked about ePython, the very lightweight version of Python that I have developed for the Epiphany co-processor. This co-processor is combined with a dual core ARM CPU on the Parallella single board computer, and this week an updated OS image was released for the machine which now includes ePython pre-installed.

NEXTGenIO: the next exciting stage begins!

Author: Michele Weiland
Posted: 24 Nov 2016 | 14:32

NEXTGenIO was one of several EC-funded exascale projects that we started work on last year. Here’s what’s been happening since it launched.

ePython: supporting Python on many core co-processors

Author: Nick Brown
Posted: 10 Nov 2016 | 11:24

Supercomputing, the biggest conference in our calendar, takes place next week, and one of the things I will be doing there is presenting a paper at the workshop on Python for High-Performance and Scientific Computing.

How do you solve a problem like Sierpinski?

Author: Iain Bethune
Posted: 7 Nov 2016 | 15:32

I promised in a post last month that I'd write some more about the PrimeGrid project, and it so happened this week that we made a discovery which gives me a good excuse to blog! On 31st October 2016 at 22:13:54 UTC, a computer owned by Péter Szabolcs of Hungary reported via the BOINC distributed computing software that the number 10223*2^31172165+1 is prime.
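The discovered number has the Proth form k*2^n+1 (k odd, k < 2^n), which admits a fast primality proof via Proth's theorem: N is prime if some base a satisfies a^((N-1)/2) ≡ -1 (mod N). Testing a multi-million-digit candidate needs specialised software, but for small numbers the idea fits in a few lines of Python; this sketch and its function name are my own illustration, not PrimeGrid's code.

```python
# Proth's theorem sketch: N = k*2**n + 1 (k odd, k < 2**n) is prime
# if and only if some base a satisfies a^((N-1)/2) == -1 (mod N).
import random

def proth_test(k, n, trials=32):
    """Return True if k*2**n + 1 is proven prime by Proth's theorem."""
    assert k % 2 == 1 and k < 2**n, "not a Proth number"
    N = k * 2**n + 1
    for _ in range(trials):
        a = random.randrange(2, N - 1)
        if pow(a, (N - 1) // 2, N) == N - 1:  # found a witness: N is prime
            return True
    return False  # no witness found: N is almost certainly composite

print(proth_test(3, 2))   # 3*2**2 + 1 = 13 is prime   -> True
print(proth_test(7, 5))   # 7*2**5 + 1 = 225 = 15**2   -> False
```

For a prime N of this form, half of all bases are witnesses, so 32 random trials miss with probability around 2^-32.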

Self racing cars

Author: Adrian Jackson
Posted: 16 Sep 2016 | 11:34

Roborace DevBot on the track

Autonomous racing

Recently EPCC's Alan Gray and I attended a workshop at Donington Park held by Roborace.  For those who've not heard of Roborace, it's a project to build and race autonomous cars, along the lines of Formula 1 but without any drivers or human control of the cars.  Actually, it's more like Formula E but without drivers, as the plan is for the cars to be electric.

Exploring energy efficiency

Author: Mirren White
Posted: 1 Sep 2016 | 10:03

The ADEPT project is creating tools that can be used to design more efficient HPC systems.

Energy efficiency is one of the key challenges of modern computing – in an era where even the most efficient supercomputers come with massive energy bills, technology that can help to increase energy efficiency is critical to sustainable HPC development.

MPI performance on KNL

Author: Adrian Jackson
Posted: 30 Aug 2016 | 12:22

Knights Landing MPI performance

Following on from our recent post on early experiences with KNL performance, we have been looking at MPI performance on Intel's latest many-core processor.

Figure 1: MPI ping-pong latency on KNC and IvyBridge.

Poor MPI performance on the first-generation Xeon Phi processor (KNC) was one of the reasons some of the applications we ported to it performed badly. Figures 1 and 2 show the latency and bandwidth of an MPI ping-pong benchmark running on a single KNC and on a 2x8-core IvyBridge node.
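For context, the kind of ping-pong benchmark behind these figures is simple to sketch. The version below is my own minimal illustration using mpi4py, not the benchmark that produced the figures: two ranks bounce a buffer back and forth, and the round-trip time gives latency at small message sizes and bandwidth at large ones.

```python
# Minimal MPI ping-pong sketch; run with: mpirun -n 2 python pingpong.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

nbytes = 8 * 1024                        # message size: sweep this to map out the curves
buf = np.zeros(nbytes, dtype=np.uint8)
reps = 1000

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)    # ping
        comm.Recv(buf, source=1, tag=0)  # pong
    elif rank == 1:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=0)
elapsed = MPI.Wtime() - t0

if rank == 0:
    rtt = elapsed / reps                 # time for one round trip
    print(f"{nbytes} B: {rtt * 1e6:.2f} us round trip, "
          f"{2 * nbytes / rtt / 1e6:.1f} MB/s")
```

Latency is conventionally quoted as half the round-trip time at the smallest message size; placing the two ranks on the same node or on different nodes separates on-chip from network performance.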
