HPC

CP2K-UK: Now recruiting!

Author: Iain Bethune
Posted: 20 Jun 2013 | 13:48

'CP2K-UK' is a new project starting shortly at EPCC, aiming to nurture the growth of a self-sustaining user and developer community around the CP2K materials science code here in the UK. I have been working on CP2K for nearly 5 years now thanks to a series of HECToR dCSE and PRACE projects, so it is great to get the chance, following our success in the EPSRC 'Software for the Future' call, to work on some of the more fundamental issues around the usability and sustainability of the code.

MIC check

Author: Iain Bethune
Posted: 12 Jun 2013 | 13:20

Following on from my recent post on Xeon Phi, and thanks to the hard work of our Systems Development Team, we now have a fully configured server sporting two Intel Xeon Phi 5110P Many Integrated Core (MIC) co-processor cards, installed and ready to go. The imaginatively named 'phi' machine is connected to our internal Hydra cluster and is available for staff, students and visitors to port and test their applications.
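For anyone wondering what a first port might involve, the sketch below shows one common approach: offloading a parallel loop to a MIC card using the Intel compiler's offload pragmas plus OpenMP. It is purely illustrative - the array names, sizes and the choice of card (mic:0) are arbitrary and not tied to any particular application on Hydra.

/* Minimal offload sketch for the Xeon Phi (Intel compiler offload pragmas + OpenMP).
 * Illustrative only: array names and sizes are arbitrary. */
#include <stdio.h>

#define N 1000000

static float a[N], b[N], c[N];

int main(void)
{
    int i;
    for (i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* Run the loop on the first MIC card: copy a and b in, copy c back out */
    #pragma offload target(mic:0) in(a : length(N)) in(b : length(N)) out(c : length(N))
    {
        #pragma omp parallel for
        for (i = 0; i < N; i++)
            c[i] = a[i] + b[i];
    }

    printf("c[%d] = %f\n", N - 1, c[N - 1]);
    return 0;
}

Built with the Intel compiler with OpenMP enabled, this kind of non-mandatory offload will typically fall back to running on the host if no co-processor is available, which makes it convenient for initial porting experiments.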

File transfer technologies for HPC systems

Author: Stephen Booth
Posted: 28 May 2013 | 14:32

Introduction

While a surprisingly high proportion of HPC users are happy to keep their data on a single HPC service, or at most to move it within the hosting institution, sometimes it becomes necessary to move large volumes of data between different sites and institutions. As anyone who has ever tried to support users in this endeavour knows, it can be much harder to get good performance than it should be. This post is an attempt to document the available tools and technologies, as well as common problems and bottlenecks.

Simulation code usage on HECToR

Author: Andy Turner
Posted: 24 May 2013 | 16:00

What sort of research is the HECToR supercomputing facility used for and what simulation software does it make use of?

EPCC measures the usage of simulation codes on the HECToR facility to get an idea of which codes are used most and what size of jobs different codes are used for. In this post I will take a look at which codes are used most on the facility and speculate whether we can infer anything from the patterns we see.

Software optimisation papers for SC13

Author: Adrian Jackson
Posted: 23 May 2013 | 17:00

I've just finished working on two papers for this year's Supercomputing conference, SC13, which takes place in Denver, Colorado, from 17th-22nd November. EPCC will have an official presence, with a booth on the exhibition floor and a number of staff participating in the technical and education programmes. I thought this was a good opportunity to write up some recent work I've been undertaking.

McMPI

Author: Daniel Holmes
Posted: 3 May 2013 | 15:18

This article originally appeared on Jeff Squyres' Cisco blog and was written while I was undertaking my PhD, before I joined EPCC as a member of staff. I thought it would be of interest to folks reading this blog.

My PhD involved building a message-passing library using C#: not accessing an existing MPI library from C# code, but creating a brand new MPI library written entirely in pure C#. The result is McMPI (Managed-code MPI), which is compliant with MPI-1 – as far as it can be, given that there are no language bindings for C# in the MPI Standard. It also shows reasonably good performance in latency and bandwidth micro-benchmarks, both in shared memory and across distributed memory.
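For context, the latency and bandwidth micro-benchmarks mentioned above are typically simple ping-pong tests. The sketch below shows the general shape of such a test using the standard MPI C bindings - it is not McMPI code (McMPI is pure C#), and the message size and repetition count are arbitrary illustrative choices.

/* Ping-pong latency micro-benchmark sketch (standard MPI C bindings, not McMPI).
 * Run with at least two ranks, e.g. mpirun -n 2 ./pingpong */
#include <mpi.h>
#include <stdio.h>

#define COUNT 1024   /* message size in bytes - arbitrary */
#define REPS  1000   /* number of round trips - arbitrary */

int main(int argc, char **argv)
{
    char buf[COUNT];
    int rank, i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double t0 = MPI_Wtime();
    for (i = 0; i < REPS; i++) {
        if (rank == 0) {
            MPI_Send(buf, COUNT, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, COUNT, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, COUNT, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, COUNT, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("One-way latency: %g us for %d-byte messages\n",
               0.5e6 * (t1 - t0) / REPS, COUNT);

    MPI_Finalize();
    return 0;
}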

A library for changing between different data decompositions

Author: Stephen Booth
Posted: 2 May 2013 | 13:32

I'm currently working on a small library to support decomposition changes in parallel programs.

It turns out that a fairly simple interface can be used to describe a very large space of possible data decompositions. I'm therefore writing a library that can redistribute data between any two such decompositions.
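To make that idea concrete, here is a rough sketch of the kind of per-dimension descriptor that can span block, cyclic and block-cyclic layouts. This is my own illustration of the concept, not the actual interface of the library being described.

/* Hypothetical decomposition descriptor - an illustration only, not the
 * library's real interface. A small amount of information per dimension is
 * enough to describe block, cyclic and block-cyclic distributions. */
typedef struct {
    int global_extent;  /* number of elements in this dimension */
    int block_size;     /* one big block per process gives pure block,
                           block_size == 1 gives pure cyclic */
    int nprocs;         /* number of processes this dimension is split over */
} dim_decomp;

typedef struct {
    int        ndims;
    dim_decomp dims[7]; /* up to 7 dimensions, as in MPI Cartesian topologies */
} decomp;

/* Which process owns global index i in one dimension (block-cyclic rule)? */
static int owner(const dim_decomp *d, int i)
{
    return (i / d->block_size) % d->nprocs;
}

A redistribution routine then only needs the source and destination descriptors: for each local element it can compute the owning process under the new decomposition and build the send and receive lists accordingly.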

GULP: HPC simulations of complex solids and clusters using static lattice techniques

Author: Iain Bethune
Posted: 29 Apr 2013 | 07:09

Inter-crystalline boundaries in ZSM-5

Materials science - understanding how the microscopic structure of matter gives rise to the macroscopic properties of materials - is one of EPSRC's key research areas, with applications in fields as diverse as energy storage, electronics, fabrics and nanotechnology. EPCC helps develop a number of important simulation codes in this area, such as CP2K, GROMACS and, in this project, GULP, the General Utility Lattice Program.

Launching applications non-homogeneously on Cray supercomputers

Author: Tom Edwards
Posted: 12 Apr 2013 | 11:31

The vast majority of applications running on HECToR today are designed around the Single Program Multiple Data (SPMD) parallel programming paradigm. Each processing element (PE), i.e. MPI rank or Fortran coarray image, runs the same program and performs the same operations in parallel on the same or a similar amount of data. Usually these applications are launched on the compute nodes homogeneously, with the same number of processes spawned on each node, each with the same number of threads (if required).

EASC2013 - The final day

Author: Lorna Smith
Posted: 11 Apr 2013 | 12:34

After an enjoyable meal at the HUB last night, the last day of the EASC2013 conference began with a keynote talk from George Mozdzynski of the European Centre for Medium-Range Weather Forecasts (ECMWF).
