Scaling

Balancing act: optimise for scaling or efficiency?

Author: Adrian Jackson
Posted: 24 May 2017 | 19:30

When we parallelise and optimise computational simulation codes we always have choices to make: which parallel model to use (distributed memory, shared memory, PGAS, single-sided, etc.), whether the underlying algorithm needs to change, and what parallel functionality to employ (loop parallelisation, blocking or non-blocking communications, collective or point-to-point messages, etc.), as sketched below.
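As a minimal sketch of one such choice, assuming an MPI code, the same neighbour exchange can be written with a blocking MPI_Sendrecv or with non-blocking MPI_Isend/MPI_Irecv so that interior computation can overlap the communication. The halo-exchange pattern and variable names here are hypothetical, not taken from any particular application:

```c
#include <mpi.h>
#include <stdio.h>

/* Hypothetical halo exchange between left/right neighbours on a 1D
 * decomposition: the same data movement expressed two ways. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int left  = (rank - 1 + size) % size;
    int right = (rank + 1) % size;

    double send_halo = (double)rank, recv_halo = -1.0;

    /* Option 1: blocking exchange - simple, but the rank waits here. */
    MPI_Sendrecv(&send_halo, 1, MPI_DOUBLE, right, 0,
                 &recv_halo, 1, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* Option 2: non-blocking exchange - post, do interior work, then wait. */
    MPI_Request reqs[2];
    MPI_Irecv(&recv_halo, 1, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&send_halo, 1, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &reqs[1]);
    /* ... computation on interior points could overlap here ... */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    printf("rank %d received halo value %f\n", rank, recv_halo);
    MPI_Finalize();
    return 0;
}
```

Which option pays off depends on whether the code has enough independent work to hide the message latency; that trade-off is exactly the balancing act discussed in the post.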

Scaling to thousands of GPUs on Titan

Author: Alan Gray
Posted: 26 Apr 2013 | 10:45

We have been among the first researchers to take advantage of the massive computing power of "Titan", the world's fastest supercomputer, based at Oak Ridge National Laboratory. The full machine will boast around 18 thousand GPUs, and just under half of these have recently been made available. We have shown that our highly scalable "Ludwig" soft matter physics application can efficiently exploit at least 8192 GPUs in parallel.
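As a hedged, general sketch (not Ludwig's actual code), a distributed-memory application typically drives many GPUs by giving each MPI rank one device; the round-robin binding below is illustrative only:

```c
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

/* Illustrative pattern only: each MPI rank selects one GPU,
 * so a job with N ranks spreads work across N GPUs. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int ndevices = 0;
    cudaGetDeviceCount(&ndevices);

    /* Simple round-robin binding to the GPUs visible on this node;
     * production codes usually bind via the node-local rank instead. */
    int device = rank % ndevices;
    cudaSetDevice(device);

    printf("MPI rank %d bound to GPU %d of %d\n", rank, device, ndevices);

    /* ... per-rank GPU kernels and MPI halo exchanges would go here ... */

    MPI_Finalize();
    return 0;
}
```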
