Shared-Memory Programming with OpenMP
An introduction to using OpenMP for parallel programming. No prior parallel programming experience required.
Almost all modern computers have a shared-memory architecture, with multiple CPUs connected to the same physical memory: examples range from multicore laptops to large multiprocessor compute servers. This course covers OpenMP, the industry standard for shared-memory programming, which enables serial programs to be parallelised easily using compiler directives. Users of desktop machines can use OpenMP on its own to improve program performance by running on multiple cores; users of parallel supercomputers can use OpenMP in conjunction with MPI to better exploit the shared-memory capabilities of the compute nodes.
This two-day course introduces the fundamental concepts of the shared-variables model, followed by the syntax and semantics of OpenMP and how it can be used to parallelise real programs. Hands-on practical programming exercises make up a significant, and integral, part of the course.
No prior HPC or parallel programming knowledge is assumed, but attendees must already be able to program in C, C++ or Fortran. Access will be given to appropriate hardware for all the exercises, although many of them can also be performed on a standard Linux laptop.
This course is free to all academics.
Pre-requisite programming languages
Fortran, C or C++. It is not possible to do the exercises in Java.
Day 1
09:30 Lectures: Shared Memory Concepts; OpenMP Fundamentals; Parallel Regions
11:30 Practicals: Hello World; Mandelbrot 1
14:00 Lectures: Work sharing; Synchronisation
16:00 Practicals: Mandelbrot 2; Molecular Dynamics (MD)
Day 2
09:30 Lectures: Further topics; OpenMP Tasks
11:30 Practicals: MD with orphaning; Mandelbrot with tasks
14:00 Lectures: Memory model; Performance tuning
16:00 Practicals: MD tuning
If you have any questions please contact the EPCC Helpdesk.