Optimisation

Under pressure

Author: Adrian Jackson
Posted: 23 Mar 2020 | 10:45


Memory under pressure

I was recently working with a colleague to investigate performance issues on a login node for one of our HPC systems. I should say upfront that looking at performance on a login node is generally not advisable: login nodes are shared resources and are not optimised for performance.

We always tell our students not to run performance benchmarks on login nodes, because it's hard to ensure the results are reproducible. However, in this case we were just running a very small (serial) test program on the login node to check it worked before submitting it to the batch system, and my colleague noticed an unusual performance variation across the login nodes.
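
The post doesn't show the test program itself, but a minimal sketch of that kind of check, assuming a POSIX system and a C compiler, might look like this: run a fixed piece of serial work a few times and print the timings, so the spread on each login node can be compared.

    /* Minimal sketch of a serial timing check (illustrative only;
       this is not the test program from the post). Assumes POSIX
       clock_gettime(); compile with e.g. cc -O2 timing.c */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 24)   /* 16M doubles, enough to reach main memory */
    #define REPEATS 5

    int main(void) {
        double *a = malloc(N * sizeof(double));
        if (!a) return 1;
        for (size_t i = 0; i < N; i++) a[i] = (double)i;

        for (int r = 0; r < REPEATS; r++) {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            double sum = 0.0;
            for (size_t i = 0; i < N; i++) sum += a[i] * 1.0000001;
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double secs = (t1.tv_sec - t0.tv_sec)
                        + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
            /* print the checksum so the compiler can't discard the loop */
            printf("run %d: %.3f s (sum %.1f)\n", r, secs, sum);
        }
        free(a);
        return 0;
    }

Running the same binary on each login node and comparing the spread of timings is usually enough to separate a systematic difference between nodes from the ordinary noise of a shared resource.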

The tyranny of 100x

Author: Adrian Jackson
Posted: 10 Mar 2017 | 15:39

Reporting Performance

Measuring performance is a key part of any code optimisation or parallelisation process. Without knowing the baseline performance, and what has been achieved after the work, it's impossible to judge how successful any intervention has been. However, it's something that we, as a community, get wrong all the time, at least when we present our results in papers, presentations, blog posts, and so on. I'm not suggesting that people aren't measuring performance correctly, or are deliberately falsifying performance improvements, but the incentive to make your work look as impressive as possible leads people to present results in a way that really isn't justified.
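
To make the baseline problem concrete, here is a small made-up example (the timings are invented for illustration, not taken from any real benchmark): the same parallel run can be reported as a 100x or a 20x speedup depending on which serial version you divide by.

    /* Invented timings illustrating how the choice of baseline
       changes the reported speedup. */
    #include <stdio.h>

    int main(void) {
        double naive_serial = 100.0;  /* unoptimised serial code, seconds */
        double tuned_serial = 20.0;   /* optimised serial code, seconds   */
        double parallel     = 1.0;    /* parallel run, seconds            */

        printf("speedup vs naive serial: %.0fx\n", naive_serial / parallel);
        printf("speedup vs tuned serial: %.0fx\n", tuned_serial / parallel);
        return 0;
    }

Both numbers are arithmetically correct, but only the comparison against the best available serial version tells you anything about the quality of the parallelisation.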

Optimised tidal modelling

Author: Adrian Jackson
Posted: 2 Feb 2017 | 11:37

Fluidity for tidal modelling

Figure 1: Mesh for the Sound of Islay tidal simulation. Courtesy of Dr Creech.

We were recently involved in a project to optimise Fluidity, a CFD modelling package, for tidal simulation. This ARCHER eCSE project was primarily carried out by Dr Angus Creech from the Institute of Energy Systems in Edinburgh.

Get into SHAPE! Removing barriers to HPC adoption for SMEs

Author: Paul Graham
Posted: 13 Apr 2016 | 14:47

SHAPE (SME HPC Adoption Programme in Europe) is a pan-European initiative supported by PRACE (Partnership for Advanced Computing in Europe). The Programme aims to raise awareness and provide European SMEs with the expertise necessary to take advantage of the innovation possibilities created by high-performance computing (HPC), thus increasing their competitiveness. SHAPE allows SMEs to benefit from the expertise and knowledge developed within the top-class PRACE Research Infrastructure.

Day 5 - Wrapping up the week

Author: Adrian Jackson
Posted: 21 Jun 2015 | 20:02

The final analysis and future plans

A week ago we finished our 5 days of intensive work optimising CP2K (and to a lesser extent GS2) for Xeon Phi processors. As discussed in previous blog posts (Day4, Day3, Day2, Day1), this was done in conjunction with research engineers from Colfax, and built on the previous year's work on these codes by EPCC staff through the Intel-funded IPCC project.

Second day of collaborating with Colfax

Author: Adrian Jackson
Posted: 10 Jun 2015 | 00:08

Day 2: profiling and the start of optimising

After a first day spent getting codes set up and systems running, we got into the profiling of CP2K in anger today and have made some good progress.

Working on the Xeon Phi

Author: Adrian Jackson
Posted: 8 Jun 2015 | 17:48

Intel Parallel Computing Center collaboration with Colfax

We're just kicking off a week's collaboration with Colfax, a US technology company that works closely with Intel on Xeon Phi optimisation and training.

Intel Parallel Computing Centre: progress report

Author: Adrian Jackson
Posted: 21 Nov 2014 | 10:29

EPCC's Grand Challenges Optimisation Centre, an Intel Parallel Computing Centre which we announced earlier in the year, has made significant progress over recent months. 

The collaboration was created to optimise codes for Intel processors, particularly to port and optimise scientific simulation codes for Intel Xeon Phi co-processors. EPCC also runs the ARCHER supercomputer for EPSRC and the other UK research councils, and as ARCHER contains a large number of Intel Xeon processors (although no accelerators or co-processors), we have a strong focus on ensuring that scientific simulation codes are highly optimised for these processors too. The IPCC work at EPCC has therefore concentrated on improving the performance, on both Intel Xeon and Intel Xeon Phi processors, of a range of codes that are heavily used for computational simulation in the UK.

Optimising OpenMP implementation of Tinker, a molecular dynamics modelling package

Author: Weronika Filinger
Posted: 3 Jul 2014 | 11:42

Do you use scientific codes in your research? Are the things you can do with them limited by the execution time? Has the code been parallelised, but it does not scale well? How should you go about improving the performance? What can you do when you do not have a full understanding of the code? There are some general steps that can be taken to improve the performance of parallelised codes. In this article I will briefly describe the process I undertook to optimise the parallel performance of a computational chemistry package, TINKER, as part of the EPCC/SSI APES project.
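
As a generic illustration of the kind of OpenMP change involved (a hedged sketch, not TINKER's actual code; the loop and names here are invented), one common step is to make sure an expensive pairwise loop uses a reduction and a suitable schedule rather than contended shared updates:

    /* Hedged sketch of a typical OpenMP pattern; the pairwise-energy
       loop is invented for illustration, not taken from TINKER. */
    #include <omp.h>

    double pair_energy(const double *x, const double *y,
                       const double *z, int n) {
        double energy = 0.0;
        /* dynamic scheduling balances the triangular iteration space;
           the reduction avoids contention on a shared accumulator */
        #pragma omp parallel for schedule(dynamic, 64) reduction(+:energy)
        for (int i = 0; i < n - 1; i++) {
            for (int j = i + 1; j < n; j++) {
                double dx = x[i] - x[j];
                double dy = y[i] - y[j];
                double dz = z[i] - z[j];
                energy += 1.0 / (dx*dx + dy*dy + dz*dz); /* placeholder */
            }
        }
        return energy;
    }

Profiling before and after each change like this keeps the effort focused on the loops that actually dominate the runtime.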

EPCC wins HPC Innovation Excellence Award

Author: Adrian Jackson
Posted: 24 Jun 2014 | 14:10

Figure: Electrostatic potential fluctuations in an annular region at mid-radius in the MAST tokamak, from a gyrokinetic simulation of the saturated turbulence using the GS2 code. A wedge of plasma has been removed from the visualisation so as to view the nature of the fluctuations inside the annulus.

EPCC is delighted to be part of a team that has won an HPC Innovation Excellence Award. Presented at the International Supercomputing Conference (ISC14) in Leipzig (22-26 June 2014), the awards recognise the outstanding application of HPC to business and scientific achievements.
