Parallel Programming: Classic Track
Edinburgh Parallel Computing Centre
University of Edinburgh
Modern parallel computers are a combination of shared-memory and distributed-memory architectures. The standard ways to program these architectures for HPC applications are OpenMP and MPI (the Message-Passing Interface) respectively. This hands-on course assumes a very basic understanding of OpenMP and MPI, and students are expected to have covered some online material in advance. In the lectures, I will explore important issues that are often glossed over in introductory courses. For example, what happens in MPI if you send a message and there is no receive posted? What happens if there is a receive but the matching message is too large? I will also discuss why you might want to combine the two approaches in a single parallel program and explain how to do it. All exercises can be done using C, C++ or Fortran. MPI can in principle be called from Python, but this is not explicitly covered in this course; students using Python will therefore have to sort out any technicalities themselves using the online "mpi4py" documentation.
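To give a flavour of the kind of code whose behaviour the lectures examine, here is a minimal MPI point-to-point example in C. This is an illustrative sketch, not part of the distributed course material: the MPI calls are standard, but the program itself is only there to show the style of question we will ask, such as whether a standard-mode send can complete before the matching receive is posted.

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, buffer = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2)
    {
        if (rank == 0) printf("Run with at least 2 processes\n");
        MPI_Finalize();
        return 0;
    }

    if (rank == 0)
    {
        buffer = 42;
        /* Standard-mode send: MPI may buffer the message or wait for the
           matching receive -- exactly the behaviour discussed in the lectures */
        MPI_Send(&buffer, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    }
    else if (rank == 1)
    {
        /* If the incoming message were larger than the receive buffer,
           this would be an error -- another case covered in the course */
        MPI_Recv(&buffer, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Rank 1 received %d\n", buffer);
    }

    MPI_Finalize();
    return 0;
}
```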
David Henty has been working with supercomputers for over 25 years and teaching people how to use them for almost as long. He joined Edinburgh Parallel Computing Centre (EPCC) after doing research in computational theoretical physics. EPCC is based at the University of Edinburgh, Scotland, where we house and support the UK national supercomputer service ARCHER.
Course Content
For MPI, I will assume that students have covered the pre-requisite online material in advance.
I will then cover:
- Introduction
- The 1D traffic model (video available on YouTube at https://youtu.be/SIZaIkD_Jfg)
- Hands-on: run traffic model code on Bridges (exercise material at bottom of page)
- MPI Quiz ("Room Name" is HPCQUIZ)
- Video of previous run of this quiz https://www.youtube.com/watch?v=mOCIEio5zfw&feature=youtu.be&t=0m35s
- PDF of results from this run of the quiz
- Communicators, tags and modes (largely focusing on MPI modes)
- Non-blocking communications (illustrated in the traffic-model sketch after this list)
- Hands-on: Work on traffic model
- Asynchronous methods (if time permits)
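To show what the traffic-model exercise involves, below is a sketch in C of one iteration of the 1D traffic model with a non-blocking halo exchange. The variable names and function are illustrative and the actual course source code differs; the update rule itself is the standard one, where a car advances if and only if the cell in front is empty.

```c
#include <mpi.h>

/* One iteration of the 1D traffic model on a periodic road split across
 * processes.  oldroad[0] and oldroad[n+1] are halo cells; cells 1..n are
 * owned locally.  A cell holds 1 if it contains a car, 0 if it is empty.
 */
void traffic_iteration(int *oldroad, int *newroad, int n, MPI_Comm comm)
{
    int rank, size, left, right;
    MPI_Request reqs[4];

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    left  = (rank - 1 + size) % size;   /* periodic neighbours */
    right = (rank + 1) % size;

    /* Non-blocking halo exchange: post receives and sends, then wait */
    MPI_Irecv(&oldroad[0],     1, MPI_INT, left,  0, comm, &reqs[0]);
    MPI_Irecv(&oldroad[n + 1], 1, MPI_INT, right, 1, comm, &reqs[1]);
    MPI_Isend(&oldroad[n],     1, MPI_INT, right, 0, comm, &reqs[2]);
    MPI_Isend(&oldroad[1],     1, MPI_INT, left,  1, comm, &reqs[3]);
    MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);

    /* Update rule: a car moves forward if the cell ahead is empty */
    for (int i = 1; i <= n; i++)
    {
        if (oldroad[i] == 1)
        {
            newroad[i] = (oldroad[i + 1] == 1) ? 1 : 0;  /* blocked or moves on */
        }
        else
        {
            newroad[i] = (oldroad[i - 1] == 1) ? 1 : 0;  /* car arrives from behind */
        }
    }
}
```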
For OpenMP, I also assume that students have covered the pre-requisite online material in advance.
I will then cover:
- OpenMP Overview
- Walkthrough of simple parallelisation of the Pi example (see code below and the sketch after this list)
- Work-sharing directives
- Hands-on: Work on traffic model
- HPC Challenge: description
- HPC Challenge: rules
- Hybrid MPI + OpenMP (see the initialisation sketch after this list)
- MPI-3 shared memory (if time permits)
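The Pi walkthrough is based on the usual numerical-integration example. The version below is a minimal sketch in C of how such a code is typically parallelised with a work-sharing directive and a reduction; it is not necessarily identical to the code distributed with the course.

```c
#include <stdio.h>

int main(void)
{
    const int n = 100000000;        /* number of integration intervals */
    const double step = 1.0 / n;
    double sum = 0.0;

    /* Work-sharing loop with a reduction: each thread accumulates a
       private partial sum which OpenMP combines at the end.
       Compile with OpenMP enabled, e.g. -fopenmp with GCC. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
    {
        double x = (i + 0.5) * step;
        sum += 4.0 / (1.0 + x * x);
    }

    printf("pi is approximately %.15f\n", sum * step);
    return 0;
}
```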
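For the hybrid MPI + OpenMP session, the key API point is requesting an appropriate level of thread support when initialising MPI. The following is an illustrative sketch (again, not the course code) assuming the FUNNELED model, where only the master thread makes MPI calls.

```c
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char *argv[])
{
    int rank, provided;

    /* Request MPI_THREAD_FUNNELED: only the master thread calls MPI */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (provided < MPI_THREAD_FUNNELED && rank == 0)
    {
        printf("Warning: requested thread support not available\n");
    }

    #pragma omp parallel
    {
        /* Threads share this process's memory; MPI communication between
           processes happens outside (or is funnelled through) this region */
        printf("Rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```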
For all of these topics we will use the same practical example, based on the 1D traffic model:
- Exercise sheet
- Traffic model source code
- Crib sheet for running jobs on Bridges
I also include code for the simpler Pi example.