Parallel Programming

Courses tagged with "Parallel Programming"

No matter how experienced you may be at debugging code, it can be a frustrating task, especially if all you have at your disposal is the ability to insert print statements at strategic locations in the hope of finding where the problem lies. This tutorial introduces you to a better method for debugging your code -- using debugger software. We start with an overview of debugger capabilities and then describe several types of errors that can be made and how to debug them. The lessons are divided into debugging serial codes and debugging parallel codes, since the methods for debugging them differ slightly, as do the types of bugs you encounter. To illustrate each lesson's concepts, a sample program with a particular bug is presented, and the process for debugging it is described. To further your understanding, exercises are provided so that you can debug a program on your own.
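
As a flavor of the approach, here is a hypothetical session with GDB, a typical command-line debugger, on a small C program (the file and function names are illustrative; the -g flag tells the compiler to keep the symbol information the debugger needs):

    $ gcc -g -O0 buggy.c -o buggy
    $ gdb ./buggy
    (gdb) break compute      # pause whenever the suspect function is entered
    (gdb) run                # execute until the breakpoint is hit
    (gdb) next               # step through the function one source line at a time
    (gdb) print i            # inspect a variable -- no print statement required
    (gdb) backtrace          # show the chain of calls that led to this point

Instead of recompiling with new print statements for every hypothesis, you pause the running program and inspect it directly.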

This course's unique feature is its explanation of why each debugger command is used during a debugging session. Many step-by-step debugger manuals list what to type and when, but they do not explain why you should type it. In each debugging session shown in this course, we explain how the debugger is being used and emphasize the overall debugging strategy.

Prerequisites: General programming knowledge. Course examples use Fortran and C/C++.

Note: This course was previously offered on CI-Tutor.

The Message Passing Interface, or MPI, is a standard library of subroutines (Fortran) or function calls (C) that can be used to implement a message-passing program. MPI allows for the coordination of a program running as multiple processes in a distributed memory environment, yet it is flexible enough to be used in a shared memory environment as well. MPI is the de facto standard for message passing, and as such, MPI programs should compile and run on any platform that supports it, providing ease of use and source code portability. The standard also permits efficient implementations across a range of architectures and offers a great deal of functionality, including different communication types, special routines for common collective operations, support for user-defined data types and topologies, and support for heterogeneous parallel architectures.
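
As a taste of the programming model, here is a minimal sketch of an MPI point-to-point exchange in C (compilation and launch commands vary by platform; the program needs at least two processes):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, msg;

        MPI_Init(&argc, &argv);                /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

        if (rank == 0) {
            msg = 42;
            /* send one integer to process 1 */
            MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* receive one integer from process 0 */
            MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Process 1 of %d received %d\n", size, msg);
        }

        MPI_Finalize();                        /* shut down the MPI runtime */
        return 0;
    }

Every process runs the same executable; the rank returned by MPI_Comm_rank determines the role each one plays.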

This tutorial provides an introduction to MPI so you can begin using it to develop message-passing programs in Fortran or C.

Target Audience: Programmers and researchers interested in using or writing parallel programs to solve complex problems.

Prerequisites: No prior experience with MPI or parallel programming is required to take this course. However, an understanding of computer programming is necessary.

OpenMP is a standardized API for parallelizing Fortran, C, and C++ programs on shared-memory architectures. This tutorial provides an introduction to OpenMP in a concise, progressive fashion, so you can begin to apply OpenMP to your codes in a minimum amount of time. Some general information on parallel processing is also included to the extent necessary to explain various points about OpenMP. Examples are presented in both Fortran and C.
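
For illustration, here is a minimal sketch of the directive style OpenMP uses, shown in C (the Fortran form is analogous; compile with your compiler's OpenMP flag, such as -fopenmp for GCC):

    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        int i;
        double a[8];

        /* the loop iterations are divided among the available threads */
        #pragma omp parallel for
        for (i = 0; i < 8; i++) {
            a[i] = 2.0 * i;
            printf("iteration %d done by thread %d\n", i, omp_get_thread_num());
        }
        return 0;
    }

Because the parallelism is expressed in directives, compilers that do not support OpenMP can simply ignore them (calls into the runtime library, such as omp_get_thread_num here, are the exception), which is what enables incremental parallelization of existing serial code.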

Prerequisites: Knowledge of basic programming in Fortran, C, or C++.

Note: This course was previously offered on CI-Tutor.

The Multilevel Parallel Programming (MLP) approach is a mixture of message passing via MPI and either compiler directives or explicit threading. The MLP approach can use one of several combinations referred to as MPI+X, where X can be OpenMP, CUDA, OpenACC, etc. In this tutorial, you will learn about MPI+OpenMP. Both are widely used for scientific applications and are supported on virtually every parallel system architecture currently in production.
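
As a minimal sketch of the MPI+OpenMP pattern in C (assuming an MPI library built with thread support):

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, provided;

        /* request FUNNELED support: only the main thread makes MPI calls */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* MPI coordinates the processes (typically one per node), while
           OpenMP threads share memory within each process */
        #pragma omp parallel
        printf("MPI rank %d, OpenMP thread %d\n", rank, omp_get_thread_num());

        MPI_Finalize();
        return 0;
    }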

Prerequisites: A basic understanding of MPI and OpenMP.

MPI is a widely used, highly portable framework that specifies the syntax and semantics of a set of library routines providing communication between processes in distributed memory systems. Lightweight, open-source implementations are freely available, and MPI is a de facto standard in the parallel software ecosystem, especially within research high-performance computing. MPI is primarily available to C/C++ and Fortran applications, but it is also available to Python applications through bindings implemented in the mpi4py package.

This course will guide you through the essential components of the Message Passing Interface (MPI) paradigm, allowing you to break through the common initial barriers and unleash the power of scalable parallel computing applications. It is designed for students with no previous experience with MPI.

This tutorial provides an introduction to parallel computing on high-performance systems. Core concepts covered include terminology, programming models, system architecture, data and task parallelism, and performance measurement. Hands-on exercises using OpenMP will explore how to build new parallel applications and transform serial applications into parallel ones incrementally in a shared memory environment. (OpenMP is a standardized API for parallelizing Fortran, C, and C++ programs on shared-memory architectures.) After completing this tutorial, you will be prepared for more advanced or different parallel computing tools and techniques that build on these core concepts.
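
As a preview of that incremental approach, here is a sketch (in C) of a serial summation loop parallelized with a single OpenMP directive and timed with omp_get_wtime; the problem size is arbitrary:

    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        long i, n = 100000000;
        double sum = 0.0;
        double t0 = omp_get_wtime();

        /* one added line parallelizes the loop; the reduction clause
           safely combines each thread's private partial sum */
        #pragma omp parallel for reduction(+:sum)
        for (i = 1; i <= n; i++)
            sum += 1.0 / i;

        double t1 = omp_get_wtime();
        printf("sum = %f  elapsed = %f s\n", sum, t1 - t0);
        return 0;
    }

Comparing the elapsed time at different thread counts (set, for example, with the OMP_NUM_THREADS environment variable) is the simplest form of the performance measurement the tutorial discusses.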