https://www.hpc-training.org/moodle/pluginfile.php/884/mod_page/content/39/trafficlib.f90
https://www.hpc-training.org/moodle/pluginfile.php/884/mod_page/content/39/IHPCSS2017_Hybrid_Computing_Challenge.pdf

Instructor: David Henty
Edinburgh Parallel Computing Centre
University of Edinburgh

Modern parallel computers are a combination of the shared-memory and distributed-memory architectures. The standard ways to program these architectures for HPC applications are OpenMP and MPI (the Message-Passing Interface) respectively. This hands-on course assumes a very basic understanding of OpenMP and MPI, and students are expected to have covered some online material in advance. In the lectures, I will explore important issues that are often glossed over in introductory courses. For example, what happens in MPI if you send a message and there is no receive posted? What happens if there is a receive but the matching message is too large? I will also discuss why you might want to combine the two approaches in a single parallel program and explain how to do it. All exercises can be done using C, C++ or Fortran. Although MPI can in principle be called from Python, this is not explicitly covered in this course, so students using Python will have to sort out any technicalities using the online "mpi4py" documentation.
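For readers who have not seen a hybrid code before, the sketch below shows the usual pattern in C: MPI_Init_thread requests a threading level (here MPI_THREAD_FUNNELED, meaning only the master thread makes MPI calls), and each MPI process then runs its own OpenMP parallel region. This is an illustrative example only, not part of the course materials; the file name and compile line are assumptions.

/* hybrid_hello.c (assumed name) - minimal hybrid MPI + OpenMP sketch.
   Compile with something like: mpicc -fopenmp hybrid_hello.c -o hybrid_hello */

#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char *argv[])
{
    int provided, rank, size;

    /* Request FUNNELED support: only the master thread will call MPI */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    if (provided < MPI_THREAD_FUNNELED) {
        fprintf(stderr, "MPI library does not support MPI_THREAD_FUNNELED\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each MPI process spawns an OpenMP team: threads share the process's
       memory, while separate processes communicate via MPI messages */
    #pragma omp parallel
    {
        int thread  = omp_get_thread_num();
        int nthread = omp_get_num_threads();
        printf("Hello from thread %d of %d on MPI rank %d of %d\n",
               thread, nthread, rank, size);
    }

    MPI_Finalize();
    return 0;
}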


David Henty has been working with supercomputers for over 25 years and teaching people how to use them for almost as long. He joined Edinburgh Parallel Computing Centre (EPCC) after doing research in computational theoretical physics. EPCC is based at the University of Edinburgh, Scotland, where it houses and supports the UK national supercomputer service, ARCHER.

OpenMP solutions for traffic model:
