IPCC

The Intel Parallel Computing Centre at EPCC

Author: Adrian Jackson
Posted: 15 Jun 2017 | 13:41

We are entering the fourth year of the Intel Parallel Computing Centre (IPCC). This collaboration on code porting and optimisation has focussed on improving the performance of scientific applications on Intel hardware, specifically its Xeon and Xeon Phi processors.  

MPI performance on KNL

Author: Adrian Jackson
Posted: 30 Aug 2016 | 12:22

Knights Landing MPI performance

Following on from our recent post on early experiences with KNL performance, we have been looking at MPI performance on Intel's latest many-core processor.

[Figure 1: MPI ping-pong latency on KNC and IvyBridge]

The MPI performance of the first generation of Xeon Phi processor (KNC) was one of the reasons some of the applications we ported to it performed poorly. Figures 1 and 2 show the latency and bandwidth of an MPI ping-pong benchmark running on a single KNC and on a 2x8-core IvyBridge node.
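
For readers unfamiliar with the benchmark, the sketch below shows the basic structure of an MPI ping-pong test in C: two ranks bounce a message back and forth, and the round-trip time gives the latency and bandwidth. This is a minimal illustration, not our benchmark code; the message size and repeat count are arbitrary choices, not the settings used for the figures above.

    /* Minimal MPI ping-pong sketch: ranks 0 and 1 bounce a message
     * back and forth and time the round trips. Run with at least two
     * ranks, e.g. mpirun -n 2 ./pingpong. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        const int reps = 1000;
        const int nbytes = 1024;          /* message size in bytes */
        char *buf = malloc(nbytes);
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double t0 = MPI_Wtime();
        for (int i = 0; i < reps; i++) {
            if (rank == 0) {
                MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0) {
            double oneway = (t1 - t0) / (2.0 * reps);  /* one-way time */
            printf("latency %.2f us, bandwidth %.1f MB/s\n",
                   oneway * 1e6, nbytes / oneway / 1e6);
        }

        free(buf);
        MPI_Finalize();
        return 0;
    }

Sweeping the message size from a few bytes to several megabytes gives the latency (small messages) and bandwidth (large messages) curves plotted in the figures.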

Early experiences with KNL

Author: Adrian Jackson
Posted: 29 Jul 2016 | 16:45

Initial experiences on early KNL

Updated 1st August 2016 to add a sentence describing the MPI configurations of the benchmarks run.
Updated 30th August 2016 to add CASTEP performance numbers on Broadwell, with some discussion.

EPCC was lucky enough to be given access to Intel's early KNL (Knights Landing, its new Xeon Phi processor) cluster through our IPCC project.

[Image: KNL processor die]

KNL is a many-core processor, the successor to KNC, with up to 72 cores, each of which can run 4 threads, and 16 GB of high-bandwidth memory (MCDRAM) stacked directly onto the chip.
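
When the MCDRAM is configured in flat mode it appears as a separate NUMA node, and applications must opt in to use it. One option is to allocate selected arrays through the memkind library's hbwmalloc interface; the sketch below is a minimal illustration of that approach under the assumption that memkind is installed (link with -lmemkind), and is not code from our own benchmarks.

    /* Sketch: place an array in KNL's MCDRAM via memkind's hbwmalloc
     * interface, falling back to ordinary DDR if high-bandwidth
     * memory is not available on the node. */
    #include <hbwmalloc.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t n = 1 << 20;
        /* hbw_check_available() returns 0 when MCDRAM is usable */
        int have_hbw = (hbw_check_available() == 0);
        double *a = have_hbw ? hbw_malloc(n * sizeof(double))
                             : malloc(n * sizeof(double));
        if (!a) return 1;

        for (size_t i = 0; i < n; i++)
            a[i] = (double)i;
        printf("a[42] = %f\n", a[42]);

        if (have_hbw)
            hbw_free(a);
        else
            free(a);
        return 0;
    }

Alternatively, an unmodified binary can be bound to the MCDRAM NUMA node wholesale with numactl, or the node can be booted in cache mode, where MCDRAM acts as a transparent cache in front of DDR and no code changes are needed.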

ParCo Symposium on Xeon Phi experiences

Author: Adrian Jackson
Posted: 20 Jul 2015 | 17:12

ParCo Symposium

Experiences of porting and optimising code for Xeon Phi processors

EPCC is jointly organising a symposium at the ParCo conference where those porting and optimising codes for the Xeon Phi will share the challenges and successes they have experienced with this architecture, and discuss how these lessons also apply to standard parallel computing hardware.

Next Generation Computational Modelling Summer School

Author: Adrian Jackson
Posted: 15 Jul 2015 | 15:06

Discussions on computing

Fiona Reid and I recently presented a 2-day course on porting and optimising for the Xeon Phi at the NGCM (Next Generation Computational Modelling) summer academy in Southampton.

This one-week academy is designed to give PhD students some of the skills they need to undertake the range of computational simulations and data analysis tasks that their work requires.

Day 5 - Wrapping up the week

Author: Adrian Jackson
Posted: 21 Jun 2015 | 20:02

The final analysis and future plans

A week ago we finished our five days of intensive work optimising CP2K (and to a lesser extent GS2) for Xeon Phi processors. As discussed in previous blog posts (Day 4, Day 3, Day 2, Day 1), this was done in conjunction with research engineers from Colfax, and built on the previous year's work on these codes by EPCC staff through the Intel-funded IPCC project.

Day 4 of IPCC-Colfax work at EPCC

Author: Adrian Jackson
Posted: 12 Jun 2015 | 15:41

MPI and vectorisation: Two ends of the optimisation spectrum

Day four of this week of intensive work optimising codes for Xeon Phi saw a range of activity. The majority of the effort focussed on the vectorisation performance of CP2K and GS2: looking at the low-level details of the computationally intensive parts of these codes, checking whether the compiler is producing vectorised code and, if not, whether anything can be done to make it vectorise.
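
As a flavour of the kind of change involved, the hypothetical loop below shows two common ways of helping the compiler: restrict qualifiers to rule out pointer aliasing, and an OpenMP simd pragma (OpenMP 4.0) to assert that iterations are independent. This is an illustrative sketch, not a loop from CP2K or GS2; with the Intel compiler, an optimisation report (e.g. -qopt-report) confirms whether a given loop was actually vectorised.

    /* Illustrative axpy loop written to vectorise cleanly:
     * 'restrict' tells the compiler x and y do not overlap, and
     * '#pragma omp simd' asserts the iterations are independent. */
    #include <stddef.h>

    void axpy(size_t n, double a,
              const double *restrict x, double *restrict y)
    {
        #pragma omp simd
        for (size_t i = 0; i < n; i++)
            y[i] += a * x[i];
    }

On a wide-vector machine like the Xeon Phi, whether loops like this vectorise can make the difference between using a fraction of the core's floating-point capability and using most of it, which is why this low-level work matters so much on this architecture.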

Second day of collaborating with Colfax

Author: Adrian Jackson
Posted: 10 Jun 2015 | 00:08

Day 2: profiling and the start of optimising

After a first day spent getting codes set up and systems running, today we got into profiling CP2K in earnest and made some good progress.

Working on the Xeon Phi

Author: Adrian Jackson
Posted: 8 Jun 2015 | 17:48

Intel Parallel Computing Center collaboration with Colfax

We're just kicking off a week's collaboration with Colfax, a US technology company that works closely with Intel on Xeon Phi optimisation and training.

Intel Parallel Computing Centre: progress report

Author: Adrian Jackson
Posted: 21 Nov 2014 | 10:29

EPCC's Grand Challenges Optimisation Centre, an Intel Parallel Computing Centre which we announced earlier in the year, has made significant progress over recent months. 

The collaboration was created to optimise codes for Intel processors, in particular to port and optimise scientific simulation codes for Intel Xeon Phi co-processors. EPCC also runs the ARCHER supercomputer for EPSRC and the other UK research councils; ARCHER contains a large number of Intel Xeon processors (although no accelerators or co-processors), so we also have a strong focus on ensuring that scientific simulation codes are highly optimised for standard Xeon hardware. The IPCC work at EPCC has therefore concentrated on improving the performance, on both Intel Xeon and Intel Xeon Phi processors, of a range of codes that are heavily used for computational simulation in the UK.
