Posted: 25 Apr 2014 | 08:51
How does the national supercomputing service compare with two boys aged 5 and 6?
Lorna Smith (ARCHER Computational Science & Engineering Deputy Director) gave the answers during her talk at the Women in HPC launch.
Posted: 29 Nov 2013 | 11:00
ARCHER (Advanced Research Computing High End Resource) is the next national HPC service for academic research. The service comprises a number of components: accommodation provided by the University of Edinburgh; hardware by Cray; systems support by EPCC and Daresbury Laboratory; and user and computational science and engineering support by EPCC.
Posted: 21 Nov 2013 | 09:31
Posted: 9 Oct 2013 | 11:47
The NAIS project (Numerical Algorithms and Intelligent Software), of which EPCC is a member, has recently purchased 8 NVIDIA K20 GPGPUs, and associated compute nodes to house them, for use by NAIS members and researchers.
Posted: 24 May 2013 | 16:00
What sort of research is the HECToR supercomputing facility used for and what simulation software does it make use of?
EPCC measures the use of different simulation codes on the HECToR facility to get an idea of which codes are used most and what size of jobs the different codes are used for. In this post I will take a look at which codes are used most on the facility and speculate about whether we can infer anything from the patterns we see.
Posted: 12 Apr 2013 | 14:00
The deadline has been extended for submission of papers to the special session on Energy Efficient Systems at the 2013 IEEE International Conference on Systems, Man and Cybernetics.
Posted: 12 Apr 2013 | 11:31
The vast majority of applications running on HECToR today are designed around the Single Program Multiple Data (SPMD) parallel programming paradigm. Each processing element (PE), i.e. MPI rank or Fortran coarray image, runs the same program and performs the same operations in parallel on the same or a similar amount of data. Usually these applications are launched homogeneously across the compute nodes, with the same number of processes spawned on each node, each with the same number of threads (if required).
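The SPMD pattern described above can be sketched with a small example. This is not how HECToR codes are written (they use MPI or Fortran coarrays); it is a purely illustrative Python version using `multiprocessing`, where each process plays the role of a "rank" running the same function on its own slice of the data:

```python
from multiprocessing import Pool

def work(rank, nprocs=4, data_size=100):
    # SPMD: every processing element runs the same program, but each
    # "rank" operates on its own contiguous chunk of the data.
    chunk = data_size // nprocs
    start = rank * chunk
    return sum(range(start, start + chunk))

if __name__ == "__main__":
    nprocs = 4
    # Launch homogeneously: the same function on every worker process,
    # one per rank, analogous to spawning equal processes per node.
    with Pool(nprocs) as pool:
        partials = pool.map(work, range(nprocs))
    total = sum(partials)  # a "reduction" combining the partial results
    print(total)
```

In a real MPI code the launcher (e.g. `aprun` on a Cray system) would start identical processes across the nodes, and the final sum would be an `MPI_Reduce`; the structure of the computation is the same.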