Hardware

Cirrus transformed into Tier-2 system

Author: Andy Turner
Posted: 19 Jun 2017 | 15:35

EPCC has received £2.4m from the Engineering and Physical Sciences Research Council (EPSRC) as part of its £20m investment in six new Tier-2 HPC centres.

The Intel Parallel Computing Centre at EPCC

Author: Adrian Jackson
Posted: 15 Jun 2017 | 13:41

We are entering the fourth year of the Intel Parallel Computing Centre (IPCC). This collaboration on code porting and optimisation has focussed on improving the performance of scientific applications on Intel hardware, specifically its Xeon and Xeon Phi processors.  

Balancing act: optimise for scaling or efficiency?

Author: Adrian Jackson
Posted: 24 May 2017 | 19:30

When we parallelise and optimise computational simulation codes we always have choices to make: which parallel model to use (distributed memory, shared memory, PGAS, single-sided, etc), whether the algorithm needs to be changed, and what parallel functionality to use (loop parallelisation, blocking or non-blocking communications, collective or point-to-point messages, etc).

Spreading the love

Author: Adrian Jackson
Posted: 10 Mar 2017 | 13:54

Image: binding processes to cores

Thread and process binding

Note: this post was updated on 23rd March 2017 to include how to bind threads correctly on Cray systems (aprun -cc rather than taskset).

Making sure threads and processes are correctly placed, or bound, on cores or processors is essential to ensure good performance for a range of parallel applications. 

This is not a new topic, and has been covered well by others before, e.g. http://www.glennklockwood.com/hpc-howtos/process-affinity.html. Generally this is just handled for you: if you're running an MPI program, your mpirun/mpiexec/aprun job launcher will do sensible process binding to cores.
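When the defaults aren't what you want, binding can be set explicitly at launch. A minimal sketch of the two approaches mentioned above (the executable name and the core/PE counts are placeholders, and exact option behaviour varies by system):

  # Bind a process (and its threads) to cores 0-3 with taskset (generic Linux)
  taskset -c 0-3 ./my_app

  # On Cray systems, use the aprun launcher's own binding instead of taskset:
  # -cc cpu binds each processing element to a CPU
  aprun -n 24 -cc cpu ./my_app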

NEXTGenIO: the next exciting stage begins!

Author: Michele Weiland
Posted: 24 Nov 2016 | 14:32

NEXTGenIO was one of several EC-funded exascale projects that we started work on last year. Here’s what’s been happening since it launched.

ePython: supporting Python on many-core co-processors

Author: Nick Brown
Posted: 10 Nov 2016 | 11:24

Supercomputing, the biggest conference in our calendar, is on next week, and one of my activities there is presenting a paper at the workshop on Python for High-Performance and Scientific Computing.

ARCHER gains parallel Knights Landing capability

Author: Alan Simpson
Posted: 25 Oct 2016 | 15:42

The ARCHER national service is being enhanced by the addition of a parallel Knights Landing (KNL) system that will be available to all ARCHER users. 

Self-racing cars

Author: Adrian Jackson
Posted: 16 Sep 2016 | 11:34

Image: Roborace DevBot on the track

Autonomous racing

Recently EPCC's Alan Gray and I attended a workshop at Donington Park held by Roborace. For those who've not heard of it, Roborace is a project to build and race autonomous cars, along the lines of Formula 1 but without drivers or any human control of the cars. In fact it's closer to Formula E, as the plan is for the cars to be electric.

Secure access to remote systems

Author: Adrian Jackson
Posted: 5 Sep 2016 | 10:35

Update, 06/09/16: As my colleague Stephen pointed out in the comments after this post, the way to solve most of these issues is to tunnel the key authentication, bypassing the need to have private keys anywhere but on my local machine. I'm always learning :)

Password vs key

Having to remember a range of passwords for systems that I don't use regularly is hard. 

I could use a password manager, but that only helps if I'm only ever logging in from my own laptop. If I have to log in from someone else's machine for any reason, I'd still need to know the password.
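The tunnelling approach from the update above can be done with SSH agent forwarding; the post doesn't spell out the exact mechanism, so this is a minimal sketch with placeholder hostnames:

  # Load the key into the local ssh-agent
  ssh-add ~/.ssh/id_rsa

  # -A forwards the agent connection, so onward logins authenticate
  # against the local agent and no private key is ever stored on the
  # intermediate machine
  ssh -A user@gateway.example.org

  # then, from the gateway:
  ssh user@target.example.org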

MPI performance on KNL

Author: Adrian Jackson
Posted: 30 Aug 2016 | 12:22

Knights Landing MPI performance

Following on from our recent post on early experiences with KNL performance, we have been looking at MPI performance on Intel's latest many-core processor.

Figure 1: MPI ping-pong latency on KNC and IvyBridge

The MPI performance of the first-generation Xeon Phi processor (Knights Corner, KNC) was one of the reasons some of the applications we ported to it performed poorly. Figures 1 and 2 show the latency and bandwidth of an MPI ping-pong benchmark running on a single KNC and on a 2x8-core IvyBridge node.
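The post doesn't name the benchmark used for the plots, but a standard ping-pong such as the one in the Intel MPI Benchmarks (IMB) suite measures the same latency and bandwidth figures. A sketch of running it on two ranks (build and module details will vary by system):

  # Run the IMB PingPong test between two MPI ranks
  mpirun -n 2 ./IMB-MPI1 PingPong

  # or, on a Cray system:
  aprun -n 2 ./IMB-MPI1 PingPong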
