Nick Brown's blog
Posted: 10 Aug 2021 | 10:05
For over a decade our community has enjoyed significant performance benefits by leveraging heterogeneous supercomputers. Whilst GPUs are the most common form of accelerator, there are also other hardware technologies which can be complementary.
Field Programmable Gate Arrays (FPGAs) allow developers to directly configure the chip, effectively running their application at the electronics level. There are potential performance and power benefits to tailoring code execution in this way and avoiding the general-purpose architecture imposed by CPUs and GPUs. As such, FPGAs have been popular in embedded computing for many years, but they have not yet enjoyed significant uptake in HPC.
Posted: 2 Dec 2020 | 14:20
The release of Fujitsu’s A64FX CPU has been a high point in an otherwise disappointing year. This next-generation CPU is the brain of Fugaku, the supercomputer at RIKEN in Japan, which was number one in the June 2020 TOP500 list.
Since February, Fujitsu has given EPCC access to a development A64FX machine as part of an early-access programme. We have been exploring the performance of this technology applied to numerous HPC workloads.
Posted: 12 Nov 2020 | 16:41
Friday 13 November, 11:05–11:35 am ET
This Friday I am presenting work on accelerating Nekbone on FPGAs at the SC20 H2RC workshop. This is driven by our involvement in the EXCELLERAT CoE, which is looking to port specific engineering codes to future exascale machines.
One technology of interest is FPGAs, and a question for us was whether porting the most computationally intensive kernel to an FPGA, and continually streaming data through it, could provide performance and/or energy benefits over a modern CPU and GPU. We are focusing on the AX kernel, which applies the Poisson operator and accounts for approximately 75% of the overall code runtime.
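To give a flavour of what this streaming approach looks like on an FPGA, below is a minimal high-level synthesis (HLS) style sketch. It is illustrative only: the simple multiply-by-coefficient body stands in for the real AX computation, and the function and stream names are assumptions rather than our actual kernel.

```cpp
// Minimal HLS-style streaming sketch: data flows through the kernel
// continuously, completing one element per clock cycle once the
// pipeline fills. Illustrative only; this is NOT the Nekbone AX
// kernel, whose per-element body applies the Poisson operator.
#include <hls_stream.h>

void streaming_kernel(hls::stream<float> &in, hls::stream<float> &out,
                      int n, float coeff) {
  for (int i = 0; i < n; i++) {
#pragma HLS PIPELINE II=1
    float value = in.read();   // consume the next element from the input stream
    out.write(coeff * value);  // emit a result without buffering the whole array
  }
}
```

The design point is that results stream directly onwards rather than being written back to memory between stages; avoiding those round-trips is a large part of the hoped-for benefit.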
Posted: 10 Nov 2020 | 09:44
Join the workshop: Friday 13 November, 2:30pm–6:30pm ET
At SC20 this year we are chairing UrgentHPC, the second international workshop on the use of HPC for urgent decision making. The idea of the workshop is to explore the fusion of HPC, big data, and other technologies in responding to disasters such as global pandemics, wildfires, hurricanes, extreme flooding, earthquakes, tsunamis, winter weather conditions, and accidents. Whilst HPC has a long history of simulating disasters, we believe that technological advances are creating exciting new opportunities to support urgent, emergency decision-making in real time.
Posted: 31 Aug 2020 | 15:33
SC, the world's largest high-performance computing conference, will be held virtually during the week of November 9–13. It is a disappointment not to be going to SC in person, but the flip side is that the conference will be open to a wider audience than would have been possible had it been held in Atlanta as originally planned.
This year I am organising the second run of the UrgentHPC workshop, an event aimed at bringing together those researching the role of HPC and data science in making urgent decisions to tackle disasters. The event first ran last year at SC19 and comprised a keynote talk by the founder of Technosylva (the world’s leading wildfire simulation code development company), six technical papers, and a panel. Based upon that success we decided to run the workshop again this year, and given all that has happened since then, exploring this topic is more timely than ever before!
Posted: 29 Jun 2020 | 14:38
We currently have applications open for PhD students, starting in late 2020 or early 2021 depending upon the candidate's situation. Four of these positions have been advertised on FindAPhD, with a closing date at the end of the month.
In addition to our MSc programmes, EPCC also hosts a number of PhD students who are researching a diverse set of areas: from traditional performance optimisation for HPC, to new data science technologies and novel computing architectures. Supervised by EPCC members of staff and housed in the Bayes Centre, these students not only benefit from being part of the UK’s leading HPC centre, but also have access to the wider University’s extensive range of resources.
Posted: 23 Jun 2020 | 15:10
This week ISC, one of the largest conferences in the supercomputing calendar, should have been running in Frankfurt. It’s a funny feeling: as I type, I realise that were it not for COVID-19, instead of being stuck at home I would have been busy navigating the Messe Frankfurt, going from one session to another.
Posted: 11 Dec 2019 | 15:54
The Met Office relies on some of the world’s most computationally intensive codes and works to very tight time constraints. It is important to explore and understand any technology that can potentially accelerate its codes, ultimately enabling it to model the atmosphere and forecast the weather more rapidly.
Field Programmable Gate Arrays (FPGAs) provide a large number of configurable logic blocks sitting within a sea of configurable interconnect. It has recently become easier for developers to convert their algorithms to configure these fundamental components and so execute their HPC codes in hardware rather than software. This has significant potential benefits for both performance and energy usage, but as FPGAs are so different from CPUs or GPUs, a key challenge is how we design our algorithms to leverage them.
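To make "executing in hardware" more concrete, here is a hedged sketch of a simple one-dimensional stencil written in the high-level synthesis (HLS) style that has made FPGAs more approachable; the pragmas ask the tooling to build a pipelined datapath so that one grid point completes every clock cycle. The function, coefficients, and pragma choices are illustrative assumptions, not taken from the Met Office codes.

```cpp
// Hedged sketch: a 1D three-point stencil expressed for HLS tooling.
// Rather than executing instructions, the loop body is synthesised
// into a fixed hardware pipeline through which the grid data flows.
void stencil_1d(const float *in, float *out, int n) {
#pragma HLS INTERFACE m_axi port=in   // map the arrays onto external memory ports
#pragma HLS INTERFACE m_axi port=out
  for (int i = 1; i < n - 1; i++) {
#pragma HLS PIPELINE II=1             // start a new iteration every clock cycle
    out[i] = 0.25f * in[i - 1] + 0.5f * in[i] + 0.25f * in[i + 1];
  }
}
```

The challenge referred to above is that a naive translation like this rarely performs well: algorithms typically need restructuring around streaming and on-chip buffering before the hardware's potential is realised.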
Posted: 9 Dec 2019 | 12:48
MVAPICH is a high-performance implementation of MPI, specialised for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE communication technologies. However, people generally use whichever MPI module is loaded by default on their system. This matters because, as HPC programmers, we often optimise our codes but overlook the potential performance gains from a better choice of MPI implementation.
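A simple first step is to check which MPI implementation (and version) the default module on your system actually provides. This is a minimal sketch using the standard MPI-3 routine MPI_Get_library_version, which works whether the library is MVAPICH, MPICH, Open MPI, or another implementation.

```cpp
// Report which MPI implementation (and version) this program was
// linked against, e.g. to see whether the default module is MVAPICH.
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);

  char version[MPI_MAX_LIBRARY_VERSION_STRING];
  int length = 0;
  MPI_Get_library_version(version, &length);  // standard MPI-3 query

  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  if (rank == 0) {
    std::printf("%s\n", version);  // prints the library's identification string
  }

  MPI_Finalize();
  return 0;
}
```

Compile with your system's MPI wrapper (typically mpicxx) and run a single rank; swapping modules and re-running quickly shows which implementation each module provides.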
Posted: 12 Nov 2019 | 11:11
Here at EPCC we lead a work package of the VESTEC EU FET project, which is working on the fusion of real-time data and HPC for urgent decision-making in disaster response. While HPC has a long history of simulating disasters, what’s missing to support urgent, emergency decision-making is the fast, real-time acquisition of data and the ability to guarantee time constraints.