Posted: 23 Sep 2015 | 14:59
After a couple of years of measuring and trying to understand the power and energy consumption of (parallel) software and hardware, we have now released one of the key tools that we've been using as part of this research: the Adept Benchmark Suite!
While measuring performance (i.e. time to solution) is well understood, doing the same for power or energy is much less straightforward and often hardware-dependent. The Adept Benchmark Suite relies on third-party power measurement (such as instrumentation of the hardware) to be in place. However, to get users started with initial experiments, we provide a library that uses RAPL (Running Average Power Limit) counters on Intel processors to measure the power of CPUs and memory, as well as some example code showing how to use this library within the Adept Benchmarks.
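As a rough illustration of RAPL-style measurement (not the Adept library's actual API, which is not shown here), the sketch below reads the package energy counter that Linux exposes through the powercap sysfs interface and converts two samples into an average power figure. The function and file names are illustrative assumptions; the counter wraps at the value reported in `max_energy_range_uj`, which the helper accounts for.

```c
#include <stdint.h>
#include <stdio.h>

/* Average power in watts between two energy-counter samples taken
 * dt_s seconds apart. RAPL energy counters wrap, so a single
 * wrap-around between samples is handled explicitly. */
double rapl_avg_power_w(uint64_t e0_uj, uint64_t e1_uj,
                        uint64_t max_range_uj, double dt_s)
{
    uint64_t delta_uj = (e1_uj >= e0_uj)
        ? e1_uj - e0_uj
        : (max_range_uj - e0_uj) + e1_uj;   /* counter wrapped once */
    return (double)delta_uj / 1e6 / dt_s;   /* microjoules -> joules -> watts */
}

/* Read one sample from a powercap energy file, e.g.
 * "/sys/class/powercap/intel-rapl:0/energy_uj" on an Intel/Linux
 * system. Returns 0 on failure. */
uint64_t rapl_read_energy_uj(const char *path)
{
    unsigned long long v = 0;
    FILE *f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%llu", &v) != 1)
            v = 0;
        fclose(f);
    }
    return (uint64_t)v;
}
```

In practice one would sample before and after the code region of interest and divide; on multi-socket machines each package has its own `intel-rapl:N` directory.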
Posted: 16 Sep 2015 | 15:14
The International Parallel Computing Conference (ParCo) was held in Edinburgh from 1st–4th September. ParCo2015 was the 16th instance of the series which started in 1983, making it the longest running series of international conferences in Europe on advances in the development and application of parallel computing technologies.
Posted: 12 Sep 2015 | 10:01
I was among those presenting at EPCC's recent Big Data seminars at Edinburgh BioQuarter and BioDundee. Both events provided a good opportunity for me to talk to people about their Big Data problems and their views on what Big Data means to them.
Posted: 11 Sep 2015 | 13:41
It's not often that the internecine rivalries of the HPC research and development community spill over into the public arena. However, a video recently posted on YouTube (and the associated comments), ostensibly a light-hearted advert for an SC15 tutorial on heterogeneous programming, shows how real and deep these rivalries can be.
Posted: 3 Sep 2015 | 17:55
Big Data vs HPC
Whilst I was writing my talk for last week's "How to Make Big Data work for your business" seminar at Edinburgh BioQuarter, it occurred to me that the way computational simulation codes have evolved over the last 40 years has really been a response to big data issues.
Posted: 30 Jul 2015 | 14:40
Posted: 20 Jul 2015 | 17:12
Experiences of porting and optimising code for Xeon Phi processors
EPCC is jointly organising a symposium at the ParCo conference where those working on porting and optimising codes for the Xeon Phi will share the challenges and successes they have experienced with this architecture, and how these lessons also apply to standard parallel computing hardware.
Posted: 15 Jul 2015 | 15:06
Discussions on computing
This one-week academy is designed to give PhD students some of the skills they need to undertake the range of computational simulations and data analysis tasks that their work requires.
Posted: 21 Jun 2015 | 20:02
The final analysis and future plans
A week ago we finished our 5 days of intensive work optimising CP2K (and to a lesser extent GS2) for Xeon Phi processors. As discussed in previous blog posts (Day4, Day3, Day2, Day1), this was done in conjunction with research engineers from Colfax, and built on the previous year's work on these codes by EPCC staff through the Intel-funded IPCC project.
Posted: 12 Jun 2015 | 15:41
MPI and vectorisation: Two ends of the optimisation spectrum
Day four of this week of intensive work optimising codes for Xeon Phi saw a range of work. The majority of the effort focussed on the vectorisation performance of CP2K and GS2: looking at the low-level details of the computationally-intensive parts of these codes, seeing whether the compiler is producing vectorised code, and, if not, whether anything can be done to make the code vectorise.
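A minimal sketch of the kind of loop-level change this involves (the kernel below is illustrative, not taken from CP2K or GS2): helping the compiler prove a simple loop safe to vectorise.

```c
#include <stddef.h>

/* Without 'restrict' the compiler must assume x and y could alias,
 * which may force scalar code or runtime aliasing checks. */
void axpy_scalar(size_t n, double a, const double *x, double *y)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* 'restrict' promises no aliasing, and '#pragma omp simd' (enabled
 * with -fopenmp-simd on GCC or -qopenmp-simd on the Intel compiler)
 * asks explicitly for vector code. Compiler reports such as GCC's
 * -fopt-info-vec or Intel's -qopt-report confirm whether the loop
 * actually vectorised. */
void axpy_simd(size_t n, double a,
               const double *restrict x, double *restrict y)
{
    #pragma omp simd
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```

The same transformations pay off on standard CPUs as well as the Xeon Phi, since both rely on wide SIMD units for peak floating-point throughput.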