KNL

Accelerating cloud physics and atmospheric models using GPUs, KNLs and FPGAs

Author: Nick Brown
Posted: 24 Apr 2019 | 11:51

The blog post below is based on the abstract of a talk at the PASC mini-symposium 'Modelling Cloud Physics: Preparing for Exascale' (Zurich, 13 June 2019).

The Met Office NERC Cloud model (MONC) is an atmospheric model used throughout the weather and climate community to study clouds and turbulent flows. It is often coupled with the CASIM microphysics model, which provides the capability to investigate interactions at the millimetre scale and study the formation and development of moisture. One of the main targets of these models is the problem of fog, which is very hard to model due to the high resolution required: for context, the main UK weather forecast resolves to 1 km, whereas the fog problem requires 1 metre or less.

Cross-compiling OpenFOAM for KNL

Author: Adrian Jackson
Posted: 27 Jan 2017 | 14:30

Breaking of a dam (from the OpenFOAM user guide)

For those of you not acquainted with OpenFOAM, it's a large open source CFD package used by a wide variety of scientists and companies to investigate a whole range of scientific and engineering problems. 

We support it on ARCHER and have a number of different versions available and in use on the machine. As part of our IPCC work we are interested in looking at the performance of OpenFOAM on the latest Xeon Phi processor, Knights Landing (KNL).

ARCHER gains parallel Knights Landing capability

Author: Alan Simpson
Posted: 25 Oct 2016 | 15:42

The ARCHER national service is being enhanced by the addition of a parallel Knights Landing (KNL) system that will be available to all ARCHER users. 

MPI performance on KNL

Author: Adrian Jackson
Posted: 30 Aug 2016 | 12:22

Knights Landing MPI performance

Following on from our recent post on early experiences with KNL performance, we have been looking at MPI performance on Intel's latest many-core processor.

Figure 1: MPI ping-pong latency on KNC and IvyBridge

The MPI performance of the first-generation Xeon Phi processor (KNC) was one of the reasons some of the applications we ported to it performed poorly. Figures 1 and 2 show the latency and bandwidth of an MPI ping-pong benchmark running on a single KNC and on a 2x8-core IvyBridge node.
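
The ping-pong pattern itself is simple: two ranks bounce a message back and forth and time the round trip. The sketch below is my own minimal C illustration of that pattern (not the exact code behind Figures 1 and 2); sweeping the message size from a few bytes up to several megabytes produces latency and bandwidth curves like those in the figures.

    /* Minimal MPI ping-pong sketch: ranks 0 and 1 exchange a message of a
       fixed size and time the round trip. Compile with mpicc, run with 2+ ranks. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        const int iterations = 1000;
        const int bytes = 1024;   /* message size; vary this to sweep latency/bandwidth */
        char *buf = malloc(bytes);
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double start = MPI_Wtime();
        for (int i = 0; i < iterations; i++) {
            if (rank == 0) {
                MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double elapsed = MPI_Wtime() - start;

        if (rank == 0) {
            double latency = elapsed / (2.0 * iterations);   /* one-way time, seconds */
            double bandwidth = bytes / latency / 1.0e6;      /* MB/s */
            printf("%d bytes: latency %.2f us, bandwidth %.1f MB/s\n",
                   bytes, latency * 1.0e6, bandwidth);
        }

        MPI_Finalize();
        free(buf);
        return 0;
    }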

Early experiences with KNL

Author: Adrian Jackson
Posted: 29 Jul 2016 | 16:45

Initial experiences on early KNL

Updated 1st August 2016 to add a sentence describing the MPI configurations of the benchmarks run.
Updated 30th August 2016 to add CASTEP performance numbers on Broadwell with some discussion.

EPCC was lucky enough to be allowed access to Intel's early KNL (Knights Landing, Intel's new Xeon Phi processor) cluster through our IPCC project.

KNL Processor Die

KNL is a many-core processor, the successor to KNC, with up to 72 cores, each of which can run 4 threads, and 16 GB of high-bandwidth memory integrated directly on the processor package.
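
To give a sense of that scale, the short C/OpenMP snippet below (my own sketch, not code from the post) simply reports how many logical processors the runtime sees; on a 64-core KNL with 4 hardware threads per core that is 256, and thread placement (for example via OMP_PLACES and OMP_PROC_BIND) then becomes an important tuning decision.

    /* Sketch: report the logical processors and thread count OpenMP sees on the node. */
    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        printf("Logical processors visible to OpenMP: %d\n", omp_get_num_procs());

        #pragma omp parallel
        {
            #pragma omp single
            printf("Threads in this parallel region: %d\n", omp_get_num_threads());
        }
        return 0;
    }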

HPC hardware in 2016 and beyond

Author: Adrian Jackson
Posted: 19 Apr 2016 | 23:14

Anyone taking more than a passing interest in HPC hardware recently will have noticed that there are a number of reasonably significant trends coming to fruition in 2016. Of particular interest to me are on-package memory, integrated functionality, and new processor competitors.

Intel Xeon Phi (KNL) Die

On-package memory, memory that is attached directly to the processor package, has been promised for a number of years now. The first product of this type I can remember was Micron's Hybrid Memory Cube around 2010/2011, but it has taken a few years for the hardware to become mature enough (or technically feasible and cheap enough) to make it to mass-market chips. We now have it in the form of MCDRAM on Intel's upcoming Xeon Phi processor (Knights Landing), and as HBM2 on Nvidia's recently announced P100 GPU.
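
As an illustration of how application code can target this kind of memory, KNL's MCDRAM can be allocated explicitly through the memkind library's hbwmalloc interface; the sketch below is my own example (not taken from any of the posts above) that places an array in high-bandwidth memory when it is present and falls back to ordinary DDR otherwise.

    /* Sketch using the memkind library's hbwmalloc interface (link with -lmemkind):
       allocate in high-bandwidth (on-package) memory if available, else in DDR. */
    #include <hbwmalloc.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        size_t n = (size_t)1 << 28;                 /* 256M doubles, about 2 GB */
        int use_hbw = (hbw_check_available() == 0); /* 0 means HBW memory is present */

        double *a = use_hbw ? hbw_malloc(n * sizeof(double))
                            : malloc(n * sizeof(double));
        if (a == NULL) return 1;

        printf("Array placed in %s\n", use_hbw ? "MCDRAM" : "DDR");

        for (size_t i = 0; i < n; i++)              /* touch the memory */
            a[i] = (double)i;

        if (use_hbw) hbw_free(a); else free(a);
        return 0;
    }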
