MPI is a widely used, highly portable standard that defines the syntax and semantics of library routines for communication between processes in distributed-memory systems. Open-source implementations are freely available, and MPI is the de facto standard in the parallel software ecosystem, especially within research high-performance computing. MPI is primarily used from C/C++ and Fortran applications, but it is also available to Python applications through bindings provided by the mpi4py package.

This course will guide you through the essential components of the Message Passing Interface (MPI) paradigm, allowing you to break through the common initial barriers and unleash the power of scalable parallel computing applications. It is designed for students with no previous MPI experience.

This tutorial provides an introduction to parallel computing on high-performance systems. Core concepts covered include terminology, programming models, system architecture, data and task parallelism, and performance measurement. Hands-on exercises using OpenMP (a standardized API for parallelizing Fortran, C, and C++ programs on shared-memory architectures) explore how to build new parallel applications and how to incrementally transform serial applications into parallel ones in a shared-memory environment. After completing this tutorial, you will be prepared for more advanced parallel computing tools and techniques that build on these core concepts.