Parallel Programming: Classic Track

Instructor: David Henty
Edinburgh Parallel Computing Centre
University of Edinburgh
d.henty@epcc.ed.ac.uk
David Henty has been working with supercomputers for over 30 years and teaching people how to use them for almost as long. He joined Edinburgh Parallel Computing Centre (EPCC) after doing research in computational theoretical particle physics. EPCC is based at the University of Edinburgh, Scotland, where it houses and supports the UK national supercomputer service ARCHER2.
Archived Session Materials from IHPCSS 2021
Recording: Classic Parallel Programming Day 1
Recording: Classic Parallel Programming Day 2
Recording: Classic Parallel Programming Day 3
Course Description
Modern parallel computers are a combination of shared-memory and distributed-memory architectures. The standard ways to program these architectures for HPC applications are OpenMP and MPI (the Message-Passing Interface) respectively. This hands-on course assumes a basic understanding of OpenMP and MPI, and students are expected to have covered some prerequisite online material in advance. In the lectures, I will explore important issues that are often glossed over in introductory courses. For example, what happens in MPI if you send a message and there is no receive posted? What happens if there is a receive but the matching message is too large? I will also discuss why you might want to combine the two approaches in a single parallel program and explain how to do it. All exercises can be done using C, C++, or Fortran. MPI can in principle be called from Python, but this is not explicitly covered in this course, so students using Python will have to sort out any technicalities using the online "mpi4py" documentation.
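As a taster for the first of those questions, here is a minimal sketch in C, assuming two (or more) MPI processes: rank 0 sends while rank 1 deliberately delays posting its receive. It is an illustration only, not part of the course material.

/* Rank 0 sends before rank 1 has posted its receive. Whether MPI_Send
 * returns straight away (message buffered by the library) or waits for
 * the receive (synchronous behaviour) is up to the MPI implementation
 * and typically depends on the message size. */

#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

#define N 100   /* try making this much larger and see what changes */

int main(int argc, char *argv[])
{
    int rank;
    int buf[N] = {0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
    {
        MPI_Send(buf, N, MPI_INT, 1, 0, MPI_COMM_WORLD);
        printf("Rank 0: send has completed\n");
    }
    else if (rank == 1)
    {
        sleep(5);   /* deliberately delay posting the receive */
        MPI_Recv(buf, N, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1: receive has completed\n");
    }

    MPI_Finalize();
    return 0;
}

Whether rank 0 prints its message before or after the five-second delay is implementation-dependent, which is exactly why the rules laid down by the MPI standard matter.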
Summer School Materials
Here is the material that I will be presenting during the Summer School - remember that I assume that everyone has already taken a look at the Prerequisite Materials below.
Tuesday 20th July
- 10:15 - 10:30 (CEST); 13:15 - 13:30 (EDT): Overview
- 10:30 - 11:00 (CEST); 13:30 - 14:00 (EDT): MPI Quiz
- 11:00 - 11:15 (CEST); 14:00 - 14:15 (EDT): Break
- 11:15 - 11:30 (CEST); 14:15 - 14:30 (EDT): Overview of traffic model example
- 11:30 - 12:00 (CEST); 14:30 - 15:00 (EDT): Non-Blocking Communications (Lecture)
I will begin with a short (and hopefully fun!) MPI multiple-choice quiz. This is entirely anonymous so, if you don't know an answer, just take a guess - the main aim of the quiz is to promote discussion. None of the possible answers are "stupid" answers: all of them could easily be correct for a general message-passing library; it's just that they may not be true for MPI.
To join the quiz go to https://b.socrative.com/login/student/ and enter the "Room Name" as HPCQUIZ
Material
- Non-blocking communications lecture (a short illustrative sketch follows this list)
- Exercise sheet for all three days
- Template code for the exercises (unpack with: tar -xvf traffic.tar)
- there is also a copy of this file on Bridges2; you should be able to copy it using: cp /jet/home/dsh/ihpcss21/traffic.tar .
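To accompany the non-blocking communications lecture, here is a minimal sketch in C of the basic pattern: post the communications, optionally do other work, then wait for completion. It uses a simple periodic ring exchange rather than the traffic model itself.

/* Each rank sends its rank number to its right-hand neighbour and
 * receives from its left-hand neighbour in a periodic ring. */

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, sendval, recvval;
    MPI_Request requests[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size;          /* periodic neighbours */
    int left  = (rank - 1 + size) % size;

    sendval = rank;

    /* Post both operations immediately; neither call blocks. */
    MPI_Irecv(&recvval, 1, MPI_INT, left,  0, MPI_COMM_WORLD, &requests[0]);
    MPI_Isend(&sendval, 1, MPI_INT, right, 0, MPI_COMM_WORLD, &requests[1]);

    /* ... useful computation could overlap with communication here ... */

    /* The buffers must not be reused or read until the requests complete. */
    MPI_Waitall(2, requests, MPI_STATUSES_IGNORE);

    printf("Rank %d received %d from rank %d\n", rank, recvval, left);

    MPI_Finalize();
    return 0;
}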
Wednesday 21st July
- 09:00 - 09:45 (CEST); 12:00 - 12:45 (EDT): Hands-on exercise (Traffic model: introduce MPI non-blocking)
- 09:45 - 10:00 (CEST); 12:45 - 13:00 (EDT): Break
- 10:00 - 10:45 (CEST); 13:00 - 13:45 (EDT): Overview of OpenMP (Lecture)
- 10:45 - 11:15 (CEST); 13:45 - 14:15 (EDT): Hands-on exercise (Traffic model: work on OpenMP)
Material
- OpenMP: Parallel Regions
- OpenMP: Worksharing (a short sketch covering both topics follows this list)
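As a quick reminder of the two topics above, here is a minimal C sketch, not taken from the lecture slides, showing a parallel region followed by a worksharing loop. Compile with your compiler's OpenMP flag (e.g. -fopenmp for GCC).

/* A parallel region is executed by every thread; a worksharing loop
 * divides its iterations between the threads. */

#include <omp.h>
#include <stdio.h>

#define N 1000

int main(void)
{
    double a[N], b[N];

    /* Parallel region: the whole block runs on every thread. */
    #pragma omp parallel
    {
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }

    /* Worksharing: the loop iterations are shared between the threads. */
    #pragma omp parallel for default(none) shared(a, b)
    for (int i = 0; i < N; i++)
    {
        b[i] = 2.0;
        a[i] = 2.0 * b[i];
    }

    printf("a[0] = %f\n", a[0]);
    return 0;
}

The number of threads is controlled at run time with the OMP_NUM_THREADS environment variable.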
Thursday 22nd July
- 09:00 - 09:45 (CEST); 12:00 - 12:45 (EDT): Overview of Hybrid MPI / OpenMP (Lecture) - see the short sketch after this list
- 09:45 - 10:15 (CEST); 12:45 - 13:15 (EDT): Hands-on exercise (Traffic model: work on Hybrid MPI / OpenMP)
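Here is a minimal sketch of the basic hybrid pattern, assuming the MPI_THREAD_FUNNELED level of thread support (only the master thread makes MPI calls) is sufficient; it is an illustration only, not part of the traffic model exercise.

/* MPI between processes, OpenMP threads within each process.
 * MPI_Init_thread requests the required level of thread support. */

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, provided;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (provided < MPI_THREAD_FUNNELED && rank == 0)
    {
        printf("Warning: requested thread support is not available\n");
    }

    /* Each MPI process spawns its own team of OpenMP threads. */
    #pragma omp parallel
    {
        printf("Rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}

A typical configuration places one MPI process per node or per NUMA region, with OpenMP threads filling the remaining cores.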
Material
Prerequisite Materials
For MPI, I will assume that students have covered the prerequisite online material in advance. You should ensure that you are familiar with the basics of MPI, which are covered in the material up to the end of Block 2 of our online self-service MPI course for the ARCHER2 system (note that the solutions to the exercises are contained in Block 3). Access to this material normally requires a separate registration process, so please do not share this link outside of the Summer School.
For OpenMP, the background material is covered in the first four lectures of our online self-service OpenMP course for the ARCHER2 system. Again, access to this material normally requires a separate registration process so please do not share this link outside of the Summer School.
Practical Examples
Both MPI and OpenMP are portable to all HPC systems, but our online courses are set up to run on ARCHER2. You do not have an ARCHER2 account, so you should run the examples on Bridges2 - please ignore any ARCHER2-specific information in the courses.
All practical exercises will be done on the Bridges2 system. Here is a crib sheet explaining how to compile and run MPI and OpenMP programs on Bridges2.
Note that if you have set your PSC password directly (via the instructions at https://www.psc.edu/resources/bridges-2/user-guide-2/) then you don't need to use port 2222 to access Bridges2, e.g. you can connect using ssh username@bridges2.psc.edu. Note that your XSEDE password and PSC password are not necessarily the same: using port 2222 you will be prompted for your XSEDE password, whereas the default port requires the PSC password.