Posted: 13 Jan 2021 | 10:35
Work on the Edinburgh International Data Facility has passed three key milestones, bringing the infrastructure that will underpin the £600m Data-Driven Innovation Programme significantly closer to reality.
Firstly, and perhaps most importantly, EIDF’s home, Computer Room 4 (cr4) at the University’s Advanced Computing Facility, completed its main construction phase at the end of the third quarter of 2020 and has now entered its commissioning and fit-out phase. If everything goes to plan, we will start to build infrastructure in the room from January 2021.
Posted: 2 Dec 2020 | 14:20
The release of Fujitsu’s A64FX CPU has been a high point in an otherwise disappointing year. This next-generation CPU is the brain in Fugaku, the supercomputer at RIKEN in Japan, which was number one in the June 2020 TOP500 list.
Since February, Fujitsu has given EPCC access to a development A64FX machine as part of an early-access programme. We have been exploring the performance of this technology applied to numerous HPC workloads.
Posted: 16 Nov 2020 | 14:07
EuroCC will raise participating countries to a common high level in high-performance computing (HPC), high-performance data analytics (HPDA), and artificial intelligence (AI).
National Competence Centres (NCCs) will be responsible for surveying and documenting the core HPC, HPDA, and AI activities and expertise in each participating country. The ultimate goal is to make HPC accessible to users across science, industry, public administration, and wider society.
Posted: 12 Nov 2020 | 16:41
Friday 13 November, 11:05–11:35 am ET
This Friday I am presenting work on accelerating Nekbone on FPGAs at the SC20 H2RC workshop. This is driven by our involvement in the EXCELLERAT CoE, which is looking to port specific engineering codes to future exascale machines.
One technology of interest is FPGAs, and a key question for us was whether porting the most computationally intensive kernel to FPGAs, and enabling the continuous streaming of data, could deliver performance and/or energy benefits over a modern CPU and GPU. We are focusing on the AX kernel, which applies the Poisson operator and accounts for approximately 75% of the overall code runtime.
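To give a flavour of the kernel in question, the sketch below shows a heavily simplified, one-dimensional matrix-free Poisson operator applied element by element, in the spirit of Nekbone's AX kernel (the function and variable names here are hypothetical, not Nekbone's actual code). The key property is that no global matrix is ever assembled: each element's data can be streamed through the same small computation, which is what makes the kernel a candidate for a dataflow FPGA implementation.

```python
import numpy as np

def ax_kernel_1d(u, D, g):
    """Illustrative matrix-free Poisson operator, applied per element.

    u: (nelem, n) nodal values for each element
    D: (n, n)     reference-element derivative matrix
    g: (nelem, n) geometric factors (quadrature weight * metric term)

    Computes w_e = D^T diag(g_e) D u_e for every element e, without
    ever forming the global stiffness matrix.
    """
    du = u @ D.T        # differentiate: du[e, i] = sum_j D[i, j] * u[e, j]
    w = (g * du) @ D    # weight by geometric factors and integrate back
    return w

# Toy usage: 4 elements with 8 nodes each, random data
rng = np.random.default_rng(0)
nelem, n = 4, 8
u = rng.standard_normal((nelem, n))
D = rng.standard_normal((n, n))
g = np.abs(rng.standard_normal((nelem, n)))
w = ax_kernel_1d(u, D, g)
print(w.shape)  # (4, 8)
```

The two small matrix products per element are exactly the kind of fixed, repeated computation that can be pipelined on an FPGA, with element data streamed in and results streamed out.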
Posted: 10 Nov 2020 | 09:44
Join the workshop: Friday 13 November, 2:30pm–6:30pm ET
At SC20 this year we are chairing UrgentHPC, the second international workshop on the use of HPC for urgent decision making. The idea of the workshop is to explore the fusion of HPC, big data, and other technologies in responding to disasters such as global pandemics, wildfires, hurricanes, extreme flooding, earthquakes, tsunamis, winter weather conditions, and accidents. Whilst HPC has a long history of simulating disasters, we believe that technological advances are creating exciting new opportunities to support urgent decision making in real time.
Posted: 6 Nov 2020 | 15:27
The 9th Workshop on Python for High-Performance and Scientific Computing (PyHPC) at SC20 will once again bring researchers and developers together to share their experiences of using Python across a wide range of disciplines and applications.
EPCC is a member of the PyHPC Organizing Committee, and this year we are co-chairing the workshop, which takes place at SC20 on Friday 13 Nov, 10:00–18:30 EST.
Posted: 28 Oct 2020 | 14:27
EPCC leads a work package of the VESTEC (Visual Exploration and Sampling Toolkit for Extreme Computing) project, which seeks to fuse HPC and real-time data for urgent decision-making in disaster response. Our work has resulted in two papers accepted into SC20 workshops: one on our custom workflow execution system, and another on extending the Common Workflow Language standard to better support parallel application execution.
Posted: 31 Aug 2020 | 15:33
SC, the world's largest high-performance computing conference, will be held virtually during the week of November 9–13. It is a disappointment not to be going to SC in person, but the flip side is that the conference will be open to a wider audience than would have been possible had it been held in Atlanta as originally planned.
This year I am organising the second run of the Urgent HPC workshop, which is an event aimed at bringing together those who are researching the role of HPC and data science in making urgent decisions to tackle disasters. The event first ran last year at SC19 and comprised a keynote talk by the founder of Technosylva – the world’s leading wildfire simulation code development company – six technical papers, and a panel. Based upon that success we decided to run the workshop again this year, and given all that has happened since then, exploring this topic is more timely than ever before!
Posted: 24 Aug 2020 | 18:11
Edoardo Coronado visited EPCC from 18th May–15th August 2019 under the HPC-Europa3 programme. Here he gives us an update on his research since his previous blog article.
During my visit to EPCC in 2019 under the auspices of the HPC-Europa3 programme, I worked on optimising routines to perform the Sparse Matrix-Vector (SpMV) product, a common operation in lots of scientific applications, on Intel Xeon Gold processors and Graphics Processing Units (GPUs).
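For readers unfamiliar with the operation, the sketch below shows a plain reference SpMV in Python, with the matrix held in Compressed Sparse Row (CSR) format, the storage scheme most SpMV optimisation work starts from. This is an illustrative baseline only, not the optimised routines developed during the visit; tuned kernels vectorise or parallelise the row loop for the target CPU or GPU.

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """Reference sparse matrix-vector product y = A @ x, with A in CSR.

    values:  non-zero entries, row by row
    col_idx: column index of each non-zero
    row_ptr: row i's non-zeros live in values[row_ptr[i]:row_ptr[i+1]]
    """
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# Toy 3x3 matrix: [[4, 0, 1], [0, 3, 0], [2, 0, 5]]
values  = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
x = np.array([1.0, 2.0, 3.0])
y = spmv_csr(values, col_idx, row_ptr, x)  # expected: 7, 6, 17
```

CSR's irregular, indirect access to `x` is precisely what makes SpMV hard to optimise: performance depends on the matrix's sparsity pattern as much as on the hardware.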
Posted: 17 Aug 2020 | 15:02
Maurice Jamieson is a third-year PhD student at EPCC working on the programmability of micro-core architectures, in terms of both design and implementation, through the development of the ePython programming language.
We recently published a paper at the Scientific Computing with Python (SciPy) 2020 conference that outlined the latest developments to ePython, a version of Python specifically written to leverage micro-core architectures. The paper provides a detailed discussion of the design and implementation of ePython, touching on the latest updates to manage arbitrarily large data sets and native code generation. This post will summarise key elements of the paper.