HPC

Are we underestimating the real challenge of Exascale?

Author: Mark Parsons
Posted: 28 Jun 2013 | 10:13

Reaching the Exascale is rightly posed as a combination of challenges related to (i) energy efficiency, (ii) heterogeneity and resiliency of computation, storage and communication, and (iii) the scale of the parallelism involved. Many discussions about Exascale focus on the first two challenges. This is understandable – building an Exascale system with today’s most energy efficient technology would still require around 480 MWatts.
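To see where a figure of this order comes from, a back-of-envelope calculation helps. Assuming an efficiency of roughly 2 GFlop/s per Watt (my illustrative assumption, in the region of the most energy-efficient large systems of 2013):

```latex
P \;\approx\; \frac{10^{18}\ \text{flop/s}}{2 \times 10^{9}\ \text{flop/s per W}}
  \;\approx\; 5 \times 10^{8}\ \text{W} \;=\; 500\ \text{MW}
```

which is the same order of magnitude as the 480 MWatts quoted above.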

Beatbox Workshop - Day 2

Author: Mario Antonioletti
Posted: 25 Jun 2013 | 12:33

The second and final day of the Beatbox workshop that Adrian Jackson described yesterday consisted of a tutorial where some of the participants were walked through running Beatbox scripts and using Beatbox in general. 

The whole set-up was done using a bootable 8GB Linux USB key which contained the key components, including part of the Beatbox distribution. That worked quite well and would be worth considering for this kind of course. The attendees got to take the USB keys away so they could continue evaluating Beatbox after the event, which is kind of neat.

Workshop talk on Beatbox: Biophysically and Anatomically Realistic Cardiac Simulations

Author: Adrian Jackson
Posted: 24 Jun 2013 | 10:22

I'm just preparing to give a talk, along with my colleague Mario, at the Beatbox workshop being held at Manchester University this week. Beatbox is an HPC environment for Biophysically and Anatomically Realistic Cardiac Simulations.

CP2K-UK: Now recruiting!

Author: Iain Bethune
Posted: 20 Jun 2013 | 13:48

'CP2K-UK' is a new project starting shortly at EPCC, aiming to nurture the growth of a self-sustaining user and developer community around the CP2K materials science code here in the UK. I have been working on CP2K for nearly 5 years now thanks to a series of HECToR dCSE and PRACE projects, so it is great to get a chance to work on some of the more fundamental issues around usability and sustainability of the code, following our success in the EPSRC 'Software for the Future' call.

MIC check

Author: Iain Bethune
Posted: 12 Jun 2013 | 13:20

Following on from my recent post on Xeon Phi, thanks to the hard work of our Systems Development Team we now have a fully configured server sporting the two Intel 5110P Many Integrated Core (MIC) co-processor cards installed and ready to go. The imaginatively named 'phi' machine is connected to our internal Hydra cluster and is available for staff, students and visitors to port and test their applications.

File transfer technologies for HPC systems

Author: Stephen Booth
Posted: 28 May 2013 | 14:32

Introduction

While a surprisingly high proportion of HPC users are happy to keep their data on a single HPC service, or at most to move it within the hosting institution, sometimes it becomes necessary to move large volumes of data between different sites and institutions. As anyone who has ever tried to support users in this endeavour knows, it can be much harder to get good performance than it should be. This post is an attempt to document the available tools and technologies as well as common problems and bottlenecks.
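As an illustration of one of those bottlenecks (my own sketch, not taken from the post, and the figures are assumptions): a single TCP stream cannot move data faster than its window size divided by the round-trip time, however fast the underlying link is.

```python
# Rough throughput ceiling for a single TCP stream: window_size / round_trip_time.
# The window and RTT values below are illustrative assumptions, not measurements.

def max_single_stream_throughput(window_bytes, rtt_seconds):
    """Upper bound on throughput (bytes/s) for one TCP stream."""
    return window_bytes / rtt_seconds

window = 4 * 1024 * 1024   # 4 MiB TCP window, a common default ceiling
rtt = 0.020                # 20 ms round trip between two distant sites

limit = max_single_stream_throughput(window, rtt)
print(f"Single-stream ceiling: {limit / 1e6:.0f} MB/s "
      f"({limit * 8 / 1e9:.1f} Gbit/s)")
# ~210 MB/s (~1.7 Gbit/s): well below a 10 Gbit/s link, which is one reason
# tools that open several parallel streams often perform much better.
```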

Simulation code usage on HECToR

Author: Andy Turner
Posted: 24 May 2013 | 16:00

What sort of research is the HECToR supercomputing facility used for and what simulation software does it make use of?

EPCC measures the usage of the different simulation codes run on the HECToR facility to get an idea of which codes are used most and what size of jobs different codes are used for. In this post I will take a look at which codes are used most on the facility and speculate whether we can infer anything from the patterns we see.

Software optimisation papers for SC13

Author: Adrian Jackson
Posted: 23 May 2013 | 17:00

I've just finished working on two papers for this year's Supercomputing conference, SC13, which is going to be in Denver, Colorado, from the 17th-22nd November. EPCC will have an official presence, with an EPCC booth on the exhibition floor and a number of staff participating in the technical and education programmes. I thought this a good opportunity to write up some recent work I've been undertaking. 

McMPI

Author: Daniel Holmes
Posted: 3 May 2013 | 15:18

This article originally appeared in the Cisco blog run by Jeff Squyres and was written while I was undertaking my PhD, before I joined EPCC as a member of staff. I thought it would be of interest to folks reading this blog.

My PhD involved building a message passing library using C#; not accessing an existing MPI library from C# code, but creating a brand new MPI library written entirely in pure C#. The result is McMPI (Managed-code MPI), which is compliant with MPI-1, as far as it can be given that there are no language bindings for C# in the MPI Standard. It also has reasonably good performance in latency and bandwidth micro-benchmarks, both in shared memory and in distributed memory.
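For readers who have not seen this kind of measurement, a ping-pong micro-benchmark simply bounces a message back and forth between two ranks and times it. The sketch below uses Python with mpi4py purely as an illustration; it is not McMPI's C# API.

```python
# Minimal ping-pong micro-benchmark between ranks 0 and 1 (run with: mpirun -n 2).
# Written with mpi4py for illustration only; McMPI itself is a pure C# library.
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

REPS = 1000
for size in (0, 1024, 1024 * 1024):          # message sizes in bytes
    buf = bytearray(size)
    comm.Barrier()
    start = time.perf_counter()
    for _ in range(REPS):
        if rank == 0:
            comm.Send(buf, dest=1, tag=0)
            comm.Recv(buf, source=1, tag=0)
        elif rank == 1:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(buf, dest=0, tag=0)
    elapsed = time.perf_counter() - start
    if rank == 0:
        one_way = elapsed / (2 * REPS)        # one-way time per message
        print(f"{size:>8} bytes: latency {one_way * 1e6:8.2f} us, "
              f"bandwidth {size / one_way / 1e6:8.1f} MB/s")
```

Small messages expose the latency of the library; large messages expose its bandwidth.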

A library for changing between different data decompositions

Author: Stephen Booth
Posted: 2 May 2013 | 13:32

I'm currently working on a small library to support decomposition changes in parallel programs.

It turns out that a fairly simple interface can be used to describe a very large space of possible data decompositions. I'm therefore writing a library that can redistribute data between any two such decompositions.
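To make that concrete, here is a small sketch of the idea (the function names and the Python phrasing are my own, purely for illustration, and not the library's actual interface): a decomposition is just a rule assigning an owning process to each global index, and a redistribution plan falls out of comparing two such rules.

```python
# Illustrative sketch only: these names are hypothetical, not the library's API.

def block_decomposition(n, nprocs):
    """Owner rank of each of n global elements under a block decomposition."""
    return [i * nprocs // n for i in range(n)]

def cyclic_decomposition(n, nprocs):
    """Owner rank of each global element under a cyclic (round-robin) decomposition."""
    return [i % nprocs for i in range(n)]

def redistribution_plan(old_owner, new_owner):
    """(global index, from rank, to rank) for every element that has to move."""
    return [(i, a, b)
            for i, (a, b) in enumerate(zip(old_owner, new_owner))
            if a != b]

n, nprocs = 12, 3
plan = redistribution_plan(block_decomposition(n, nprocs),
                           cyclic_decomposition(n, nprocs))
for idx, src, dst in plan:
    print(f"element {idx}: rank {src} -> rank {dst}")
```

In a real parallel code the plan would drive message exchanges rather than prints, but the point is that the same small description covers block, cyclic, block-cyclic and more irregular layouts.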
