Blog

Under pressure

Author: Adrian Jackson
Posted: 23 Mar 2020 | 10:45

Squeezed performance

Memory under pressure

I was recently working with a colleague to investigate performance issues on a login node for one of our HPC systems. I should say upfront that looking at performance on a login node is generally not advisable: they are shared resources that are not optimised for performance.

We always tell our students not to run performance benchmarking on login nodes, because it's hard to ensure the results are reproducible. However, in this case we were just running a very small (serial) test program on the login node to ensure it worked before submitting it to the batch system, and my colleague noticed an unusual performance variation across login nodes.
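As a rough illustration of the kind of check involved (this is a minimal sketch, not the actual test program from the investigation), a small serial memory-bound loop timed a few times in a row is enough to expose run-to-run variation on a shared node; the array size and repeat count below are arbitrary choices for the sketch.

    // Minimal sketch (not the original test program): time a memory-bound
    // loop repeatedly to see run-to-run variation on a shared login node.
    #include <chrono>
    #include <cstdio>
    #include <vector>

    int main() {
        const std::size_t n = 1 << 24;            // ~16M doubles, arbitrary size
        std::vector<double> a(n, 1.0), b(n, 2.0);

        for (int run = 0; run < 5; ++run) {
            auto t0 = std::chrono::steady_clock::now();
            for (std::size_t i = 0; i < n; ++i)
                a[i] += 0.5 * b[i];               // simple memory-bound update
            auto t1 = std::chrono::steady_clock::now();
            std::printf("run %d: %.3f s\n", run,
                        std::chrono::duration<double>(t1 - t0).count());
        }
        return 0;
    }

Large swings between runs (or between login nodes) of a loop this simple usually point to contention for shared resources such as memory bandwidth, rather than anything in the code itself.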

Efficient FE2 multi-scale implementation applied to composite deformation

Author: Guest blogger
Posted: 1 Mar 2020 | 10:37

Guido Giuntoli visited STFC's Daresbury Laboratory in December 2019 through the HPC-Europa3 programme. Here he describes his work and time in the UK.

Enhancing data-streaming programming platforms: a case for dispel4py and a GrPPI comparison

Author: Guest blogger
Posted: 12 Feb 2020 | 11:34

Javier Fernández Muñoz was an HPC-Europa3 visitor from 1st June to 30th August 2019, hosted at EPCC by Rosa Filgueira. Here he tells us about his experiences.

Hi! My name is Javier Fernández Muñoz. I work as a Visiting Professor in the Computer Architecture and Technology Area of the Computer Science Department at the University Carlos III de Madrid (Spain).

My research field includes the development of parallel programming frameworks that enhance usability and portability. In this regard, I have been working for several years on the development of GrPPI (Generic Reusable Parallel Pattern Interface), an open-source, generic and reusable parallel pattern programming interface. Basically, GrPPI provides a layer between developers and existing parallel programming frameworks targeted at multi-core processors, such as ISO C++ threads, OpenMP, Intel TBB, and FastFlow. To achieve this goal, the interface leverages modern C++ features, meta-programming concepts, and generic programming to act as a switch between those frameworks.
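To give a flavour of that layering idea, here is an illustrative sketch (my own toy example, not GrPPI's actual API): a templated "map" pattern whose backend is selected by an execution-policy tag, so switching between sequential execution and OpenMP is just a change of argument.

    // Illustrative sketch of a pattern-interface layer; not GrPPI's real API.
    #include <cstdio>
    #include <vector>

    struct sequential_execution {};    // backend tags (hypothetical names)
    struct parallel_execution_omp {};

    // Toy "map" pattern: apply op to every element, backend chosen by tag.
    template <typename Op>
    void map_pattern(sequential_execution, std::vector<double>& v, Op op) {
        for (std::size_t i = 0; i < v.size(); ++i) v[i] = op(v[i]);
    }

    template <typename Op>
    void map_pattern(parallel_execution_omp, std::vector<double>& v, Op op) {
        #pragma omp parallel for
        for (long long i = 0; i < static_cast<long long>(v.size()); ++i)
            v[i] = op(v[i]);
    }

    int main() {
        std::vector<double> data(1000, 1.0);
        // Switching backends is just a change of the first argument.
        map_pattern(parallel_execution_omp{}, data,
                    [](double x) { return 2.0 * x; });
        std::printf("data[0] = %f\n", data[0]);
        return 0;
    }

The application code expresses only the pattern (a map over a container); which threading framework actually executes it is decided by the policy object, which is the essence of what a layer like GrPPI offers.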

Training researchers for a software/ data-intensive world through the Edinburgh Carpentries

Author: Giacomo Peru
Posted: 30 Jan 2020 | 14:12

The Edinburgh Carpentries (EdCarp) is a training initiative which offers the Carpentries computing and data skills curriculum in Edinburgh. We train researchers on fundamental skills they need, with a team of volunteers from across disciplines, academic units, and career stages. 

Since 2018, EdCarp has organised 25 workshops across the academic institution, training over 300 staff and students in tools such as R, Python, Unix shell, git, and OpenRefine. Courses are free to participants and get oversubscribed very quickly.

We are now rolling out our 2020 curriculum and announcing workshops at https://edcarp.github.io/.

Efficiently exploiting distributed large many-core systems

Author: Guest blogger
Posted: 27 Jan 2020 | 08:47

Marcos Maroñas was a visitor from BSC who, under the HPC-Europa3 programme, was hosted here at EPCC from 1st September to 1st December 2019 by Dr Mark Bull. In this blog article he tells us about his visit.

Hi! My name is Marcos Maroñas. I am currently a PhD student at Barcelona Supercomputing Center (BSC) in the Programming Models group of the Computer Science department, where I spend most of my time doing research on runtime systems for parallel programming models such as OmpSs-2.

Exploring correlations between precessing binary black holes and their merger remnants

Author: Guest blogger
Posted: 23 Jan 2020 | 15:40

Luca Reali visited the University of Birmingham from June to September 2019 as part of HPC-Europa3 to conduct research into binary black hole mergers. In this short article he describes his time in the UK.

Using FPGAs to model the atmosphere

Author: Nick Brown
Posted: 11 Dec 2019 | 15:54

The Met Office relies on some of the world’s most computationally intensive codes and works to very tight time constraints. It is important to explore and understand any technology that can potentially accelerate its codes, ultimately modelling the atmosphere and forecasting the weather more rapidly.

Field Programmable Gate Arrays (FPGAs) provide a large number of configurable logic blocks sitting within a sea of configurable interconnect. It has recently become easier for developers to convert their algorithms to configure these fundamental components and so execute their HPC codes in hardware rather than software. This has significant potential benefits for both performance and energy usage, but as FPGAs are so different from CPUs or GPUs, a key challenge is how we design our algorithms to leverage them.
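To make the idea of "executing code in hardware" a little more concrete, here is a hedged sketch of what the source level can look like when targeting an FPGA through a C/C++ high-level synthesis flow (the function and constants are my own illustration, not the Met Office code; the PIPELINE pragma is in the style of tools such as Xilinx Vitis HLS).

    // Hypothetical HLS-style kernel sketch (not the Met Office code): a 1D
    // relaxation step annotated so an HLS tool can pipeline the inner loop.
    #define N 1024

    void relax(const float in[N], float out[N]) {
        for (int i = 1; i < N - 1; ++i) {
        #pragma HLS PIPELINE II=1   // ask the tool for one result per cycle
            out[i] = 0.5f * in[i] + 0.25f * (in[i - 1] + in[i + 1]);
        }
    }

Rather than running as instructions on a CPU, a loop like this is synthesised into a dedicated, pipelined datapath on the FPGA fabric, which is where the potential performance and energy benefits come from.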

PICTURES project: predicting disease with artificial intelligence

Author: Ally Hume
Posted: 10 Dec 2019 | 11:41

EPCC is part of a £4.4 million project to turn a database of millions of clinical images into a powerful research tool to help tackle health conditions including lung cancer and dementia.

Each year millions of clinical images such as X-rays, CT, MRI, ultrasound, nuclear medicine, and retinal images are generated by the NHS in Scotland and stored in the national imaging database. In addition to containing important clinical information, these images also potentially contain a great deal of information about the health of the individual which is not currently used in health care.

EPCC’s ARM system: comparing the performance of MPI implementations

Author: Nick Brown
Posted: 9 Dec 2019 | 12:48

MVAPICH is a high-performance implementation of MPI. It is specialised for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE communication technologies, but people generally use whichever MPI module is loaded by default on their system. This matters because, as HPC programmers, we often optimise our codes but overlook the potential performance gains from a better choice of MPI implementation.
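One straightforward way to compare implementations without touching application code is to build and run a simple ping-pong test against each MPI module in turn. The sketch below is a generic example of that idea, not the benchmark used in this comparison; message size and repetition count are arbitrary.

    // Generic MPI ping-pong sketch (not the benchmark from the post): time
    // round-trip messages between ranks 0 and 1 for one message size.
    #include <mpi.h>
    #include <cstdio>
    #include <vector>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int reps = 1000, count = 1 << 16;   // 64 Ki doubles, arbitrary
        std::vector<double> buf(count, 1.0);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < reps; ++i) {
            if (rank == 0) {
                MPI_Send(buf.data(), count, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf.data(), count, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf.data(), count, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf.data(), count, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();
        if (rank == 0)
            std::printf("avg round trip: %.2f us\n", 1e6 * (t1 - t0) / reps);

        MPI_Finalize();
        return 0;
    }

Compiling the same source with mpicc or mpicxx from each available MPI module and comparing the reported times gives a quick, code-free sense of whether switching implementation is worth pursuing.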

A data-driven approach to public health

Author: Ally Hume
Posted: 6 Dec 2019 | 14:28

EPCC and other partners at the University of Edinburgh have commenced work on a new programme to develop DataLoch, a data repository for all local, regional and national health and social care data for residents of the Edinburgh & South East Scotland region. DataLoch and the associated Data Driven Innovation team will drive research and innovation, improve patient care, and reduce health inequalities across the region.

