EPCC Cirrus service evolves with hardware refresh

11 December 2025

The latest iteration of the long-running Cirrus national supercomputing service, operated by EPCC, commenced this week.

EPCC's refreshed Cirrus service, with cabinet artwork showing cirrus clouds over the Cuillin mountains, Skye.

For the past nine years, the Cirrus service at EPCC has offered flexible high performance computing (HPC) resources for academia and industry. The service's latest hardware refresh provides new resources for CPU-based HPC use and increases the flexibility of the service. Cirrus will also continue to be one of the UKRI Tier-2 National HPC Services available to researchers across all research domains funded by UKRI.

The core of the updated Cirrus service is provided by an HPE Cray EX4000 system with over 70,000 CPU cores connected by the high performance Slingshot 11 interconnect. Core compilers and computational libraries are available through the HPE Cray Programming Environment, supplemented by a range of research software installed by EPCC. Users also have access to in-depth technical support from EPCC experts.
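As an illustration of the kind of parallel workload this programming environment supports, here is a minimal sketch of an MPI "hello world" in Python. It assumes the mpi4py package from the parallel Python environment described under Software below, and the launch command in the final comment is an assumption based on typical HPE Cray EX systems rather than a confirmed Cirrus invocation.

    # Minimal MPI example using mpi4py (assumed to be available via the
    # parallel Python environment on Cirrus).
    from mpi4py import MPI

    comm = MPI.COMM_WORLD    # communicator spanning all ranks in the job
    rank = comm.Get_rank()   # this process's rank (0 .. size-1)
    size = comm.Get_size()   # total number of MPI ranks

    # Each rank reports where it is running; a single Cirrus compute node
    # could host up to 288 ranks.
    print(f"Hello from rank {rank} of {size} on {MPI.Get_processor_name()}")

    # Illustrative launch (launcher and flags are assumptions):
    #   srun --ntasks=288 python3 hello_mpi.py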

Flexibility

The Cirrus service has always provided flexibility to meet a wide variety of use cases – from parallel applications using thousands of cores concurrently to workloads that have large memory requirements. The updated Cirrus service continues to evolve this flexible approach – in addition to supporting different workflow types, it will also offer more ways to interact with Cirrus.

The back-end compute nodes have access to the external internet, allowing users to reach datasets hosted elsewhere directly from running jobs. Cirrus will also provide new ways to launch work on the service in addition to traditional SSH access: web-based access will be provided by Open OnDemand, and there will be support for users wishing to run persistent services that can access Cirrus compute nodes. While traditional SSH access has been available from day one of the service, these additional access options will be rolled out over the early life of Cirrus.
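As a sketch of what direct internet access from compute nodes enables, the snippet below fetches a remote dataset from inside a running job using only the Python standard library; the URL and file name are placeholders rather than a real Cirrus dataset.

    # Download a dataset from within a running job (placeholder URL).
    import urllib.request

    url = "https://example.org/data/sample.csv"   # hypothetical dataset location
    local_path = "sample.csv"

    # Write directly to the working directory; on Cirrus this would
    # typically sit on the Lustre "work" file system.
    urllib.request.urlretrieve(url, local_path)
    print(f"Downloaded {url} to {local_path}")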

As part of the integration with the Edinburgh International Data Facility (EIDF), we are also working towards enabling data sharing with EIDF virtual machines (VMs) and running work on Cirrus directly from EIDF VMs. These new capabilities will allow Cirrus to be used effectively for a greater range of research and development.

Unparalleled user support

Having the right support in place is critical to users gaining maximum benefit from Cirrus. The service is backed by EPCC’s decades of expertise in running and supporting successful HPC services, and the Cirrus service desk has access to more than 80 EPCC technical experts. This allows us to help users with problems ranging from porting and optimising complex research software to integrating Cirrus into existing workflows. For organisations with more in-depth requirements that require working directly with EPCC technical experts, we offer consultancy and collaboration for both short-term and long-term projects. Cirrus is also supported by comprehensive documentation, allowing users to start using the service quickly.

Access

Cirrus has always provided different access routes to meet the requirements of commercial and academic organisations, and this continues on the updated service.

Commercial access

There are opportunities to try Cirrus free of charge, pay per use, or arrange access that includes dedicated consultancy from EPCC to allow your company to make best use of the service. To explore these options, please email our Commercial Manager, Julien Sindt.

Academic access

Quick, free access options, as well as routes for accessing larger amounts of resource over longer timescales, are available. Pump Priming (replacing the old Instant Access route) provides free, quick access to a small amount of resource to allow research groups to assess use of the service. The Cirrus Driving Test gives individual researchers access to Cirrus to improve their understanding of HPC. For larger amounts of resource, academics can apply to UKRI Access to HPC calls (typically open two or three times per year) or include Cirrus access in research grant applications. See the access page on the Cirrus website for more information.

Hardware

The updated core Cirrus hardware is provided by an HPE Cray EX4000 supercomputing system with 256 compute nodes. Each compute node has 288 cores (dual AMD EPYC 9825 144-core 2.2 GHz processors), giving a total of 256 × 288 = 73,728 cores. There are 192 standard memory nodes and 64 high memory nodes. Compute nodes are interconnected by an HPE Slingshot 11 interconnect. Storage is provided by existing EIDF file systems: Lustre for the "work" high performance parallel file system and CephFS for the "home" file system.

SSH login access is provided by dedicated login nodes integrated within the EX4000 HPC system, with more flexible access resources provided by VMs hosted within the Edinburgh International Data Facility Virtual Desktops service.

Software

The updated Cirrus system runs RHEL 9 with the base software stack provided by the HPE Cray Programming Environment (CPE). The CPE provides software through a standard module framework (Lmod) and includes different compiler environments (GCC, Cray, AMD), parallel libraries (MPI, OpenMP, SHMEM), standard numerical and IO libraries (BLAS/LAPACK/ScaLAPACK, FFTW, HDF5, NetCDF), parallel Python and R environments, and parallel profiling and debugging tools.
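As a small illustration of this numerical stack, the sketch below solves a dense linear system with NumPy, whose numpy.linalg.solve routine dispatches to a LAPACK implementation such as the BLAS/LAPACK libraries the CPE provides; NumPy's availability in the parallel Python environment is an assumption.

    # Solve a dense linear system Ax = b; numpy.linalg.solve calls the
    # LAPACK routine gesv in the linked BLAS/LAPACK library.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    A = rng.standard_normal((n, n))   # random dense matrix
    b = rng.standard_normal(n)

    x = np.linalg.solve(A, b)              # LAPACK-backed solve
    residual = np.linalg.norm(A @ x - b)   # check solution quality
    print(f"Residual |Ax - b| = {residual:.2e}")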

Containers are also supported via the Apptainer software, as sketched below. On top of this base software environment, EPCC installs and maintains a suite of common research software for the Cirrus user community, to make using Cirrus as straightforward as possible.
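As a hedged sketch of container use, the snippet below drives Apptainer from Python via the standard subprocess module; the image name is a placeholder and the exact invocation appropriate on Cirrus may differ.

    # Run a command inside an Apptainer container image (the image path is
    # a placeholder; building or pulling images is not shown here).
    import subprocess

    image = "my_software.sif"   # hypothetical container image
    result = subprocess.run(
        ["apptainer", "exec", image, "python3", "--version"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())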

Links

For the latest news about this service, please see the Cirrus website.

HPC services at EPCC

Author

Dr Andrew Turner