Posted: 20 Jul 2015 | 17:12
Experiences of porting and optimising code for Xeon Phi processors
EPCC is jointly organising a symposium at the ParCo conference where those working on porting and optimising codes for the Xeon Phi will share the challenges and successes they have experienced with this architecture, and how these lessons also apply to standard parallel computing hardware.
Posted: 15 Jul 2015 | 15:06
Discussions on computing
This one-week academy is designed to give PhD students some of the skills they need to undertake the range of computational simulations and data analysis tasks that their work requires.
Posted: 21 Jun 2015 | 20:02
The final analysis and future plans
A week ago we finished our 5 days of intensive work optimising CP2K (and to a lesser extent GS2) for Xeon Phi processors. As discussed in previous blog posts (Day4, Day3, Day2, Day1), this was done in conjunction with research engineers from Colfax, and built on the previous year's work on these codes by EPCC staff through the Intel-funded IPCC project.
Posted: 12 Jun 2015 | 15:41
MPI and vectorisation: Two ends of the optimisation spectrum
Day four of this week of intensive work optimising codes for the Xeon Phi saw a range of activity. The majority of the effort focussed on the vectorisation performance of CP2K and GS2: looking at the low-level details of the computationally intensive parts of these codes, checking whether the compiler is producing vectorised code and, if not, whether anything can be done to make the code vectorise.
Posted: 11 Jun 2015 | 16:01
Moving from OpenMP to vectorisation and MPI
Reality hit home a bit on the third day of our intensive week working with Colfax to optimise codes for the Xeon Phi.
After further implementation and analysis work it appears that removing the allocation and deallocation calls from some of the low-level routines in CP2K will improve the OpenMP performance on Xeon and Xeon Phi, but only because an issue with the Intel compiler is causing poor performance in the first place. The optimisation reduces the runtime of the OpenMP code by around 20-30%, but only with versions 15 and 16 of the Intel compiler; with version 14 the performance improvement is much smaller.
Posted: 10 Jun 2015 | 00:08
Day 2: profiling and the start of optimising
After a first day spent getting codes set up and systems running, we got into the profiling of CP2K in anger today and have made some good progress.
Posted: 8 Jun 2015 | 17:48
Intel Parallel Computing Center collaboration with Colfax
We're just kicking off a week's collaboration with Colfax, a US technology company that works closely with Intel on Xeon Phi optimisation and training.
Posted: 21 Nov 2014 | 10:29
EPCC's Grand Challenges Optimisation Centre, an Intel Parallel Computing Centre which we announced earlier in the year, has made significant progress over recent months.
The collaboration was created to optimise codes for Intel processors, particularly to port and optimise scientific simulation codes for Intel Xeon Phi co-processors. As EPCC also runs the ARCHER supercomputer for EPSRC and the other UK research funding councils, a system that contains a large number of Intel Xeon processors (although no accelerators or co-processors), we also have a strong focus on ensuring that scientific simulation codes are highly optimised for these processors. Therefore, the IPCC work at EPCC has concentrated on improving the performance, on both Intel Xeon and Intel Xeon Phi processors, of a range of codes that are heavily used for computational simulation in the UK.
Posted: 26 Aug 2014 | 17:37
Last month EPCC added a new supercomputer to its portfolio. Working in collaboration with the Digital Health Institute Scotland, we have acquired an SGI UV2000 system. Unlike many of our existing HPC resources, Ultra (as it's known by its DNS name) is not a cluster: a single Linux operating system controls all 512 computing cores and 8TB of memory. This offers many advantages to researchers and opens up new possibilities: suddenly we can run a large code without complex parallelisation!
Posted: 11 Aug 2014 | 09:36
Omega Tau: science and engineering in your headphones is a STEM-themed podcast produced by Markus Völter and Nora Ludewig, which covers a wide range of interesting topics, including aerospace, spaceflight, computing and the physical sciences, in great detail through interviews and discussions with area experts.