Message Passing Interface

Courses tagged with "Message Passing Interface"

The Message Passing Interface (MPI) is a standard library of subroutines (Fortran) or function calls (C) used to implement message-passing programs. MPI coordinates a program running as multiple processes in a distributed-memory environment, yet is flexible enough to be used in a shared-memory environment as well. Because MPI is the de facto standard for message passing, MPI programs should compile and run on any platform that supports it, which provides ease of use and source-code portability. The standard also allows efficient implementations across a range of architectures, offers a great deal of functionality, includes several communication types and special routines for common collective operations, handles user-defined data types and process topologies, and supports heterogeneous parallel architectures.
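
As a minimal sketch of this model (assuming an MPI implementation such as MPICH or Open MPI is installed), the following C program starts several processes and passes a single integer from rank 0 to rank 1:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                  /* start the MPI runtime            */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's ID (0..size-1)    */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes        */

    if (rank == 0) {
        int msg = 42;
        /* rank 0 sends one integer to rank 1 (requires at least 2 processes) */
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        printf("Rank 0 of %d sent %d\n", size, msg);
    } else if (rank == 1) {
        int msg;
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %d\n", msg);
    }

    MPI_Finalize();                          /* shut the MPI runtime down        */
    return 0;
}

A program like this is typically compiled with the mpicc wrapper and launched with mpirun (or mpiexec), for example: mpicc hello.c -o hello and then mpirun -np 2 ./hello.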

This tutorial provides an introduction to MPI so you can begin using it to develop message-passing programs in Fortran or C.

Target Audience: Programmers and researchers interested in using or writing parallel programs to solve complex problems.

Prerequisites: No prior experience with MPI or parallel programming is required to take this course. However, an understanding of computer programming is necessary.

The Multilevel Parallel Programming (MLP) approach combines message passing via MPI with either compiler directives or explicit threading. Such combinations are often referred to as MPI+X, where X can be OpenMP, CUDA, OpenACC, and so on. In this tutorial, you will learn about MPI+OpenMP; both are widely used for scientific applications and are supported on virtually every parallel system architecture currently in production.
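
As a rough illustration of the hybrid model (a sketch, not material from the tutorial itself), the C program below initializes MPI with a thread-support level and then opens an OpenMP parallel region inside each MPI process:

#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char *argv[])
{
    int provided, rank;

    /* Request a threading level that lets the main thread make MPI calls
       while OpenMP threads handle the compute work. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each MPI process spawns its own team of OpenMP threads. */
    #pragma omp parallel
    {
        printf("MPI rank %d, OpenMP thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}

Building a hybrid program typically requires both the MPI wrapper compiler and an OpenMP flag, for example mpicc -fopenmp.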

Prerequisites: A basic understanding of MPI and OpenMP.

MPI is a widely used, highly portable standard that defines the syntax and semantics of library routines for communication between processes in distributed-memory systems. Lightweight, open-source implementations are freely available, and MPI is a de facto standard in the parallel software ecosystem, especially within research high-performance computing. MPI is primarily available to C/C++ and Fortran applications, but it is also available to Python applications through bindings implemented in the mpi4py package.
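
For example, a collective sum across all processes takes a single library call in C, and the mpi4py package exposes comparable collective operations to Python programs. In the sketch below (an illustration, not course material), every rank contributes one value and the total is delivered to rank 0:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Every process contributes its rank; MPI_Reduce sums the contributions
       and places the result on rank 0. */
    int local = rank, total = 0;
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum of ranks 0..%d = %d\n", size - 1, total);

    MPI_Finalize();
    return 0;
}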

This course will guide you through the essential components of the Message Passing Interface (MPI) paradigm, allowing you to break through the common initial barriers and unleash the power of scalable parallel computing applications. It is designed for students with no previous experience with MPI.