General HPC

Courses tagged with "General HPC"

A default computing environment is automatically set up for you when your HPC cluster account is created. It provides access to the default compilers, directories, and software you need to use the cluster's resources efficiently. Eventually, you may find that the default environment doesn't meet your specific usage requirements. When that happens, you can customize your environment by configuring various settings and creating shortcuts for tasks you perform regularly.

In this tutorial, you will learn how to customize your HPC cluster computing environment using Unix commands and the Modules package.
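As a taste of what the Modules package provides, the commands below show a typical session; the specific module name and version are illustrative, since each cluster publishes its own catalog:

```shell
# See which software modules your cluster provides
module avail

# Load a compiler module (name/version here are examples only;
# check `module avail` for what actually exists on your system)
module load gcc/12.2.0

# Show what is currently loaded in your environment
module list

# Remove one module, or clear all loaded modules
module unload gcc/12.2.0
module purge
```

Lines like these are often added to a shell startup file (e.g. `~/.bashrc`) so frequently used modules load automatically at login.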

Using an HPC cluster, or supercomputer, for scientific applications can significantly accelerate scientific research. HPC clusters offer exceptional computing power, memory capacity, and parallel processing capabilities that cannot be achieved with a desktop computer. In this short tutorial, you will learn about the following:

  • the basic components of an HPC Cluster
  • the types of file systems available on an HPC cluster and the common policies for their use
  • the steps that are commonly taken to run an application on an HPC cluster

While each HPC cluster has unique characteristics, this tutorial describes the core components and how they can be used for scientific research. This knowledge can then be applied when using any HPC cluster.
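The steps commonly taken to run an application are usually captured in a batch script submitted to the cluster's scheduler. A minimal sketch, assuming a Slurm scheduler (directives, module name, and executable path are all illustrative and will differ per site):

```shell
#!/bin/bash
#SBATCH --job-name=demo        # a name for the job in the queue
#SBATCH --nodes=1              # number of compute nodes requested
#SBATCH --ntasks=4             # number of parallel tasks
#SBATCH --time=00:10:00        # wall-clock time limit (hh:mm:ss)

# Load the software environment the application needs (example module)
module load gcc

# Launch the application on the allocated resources
# (./my_app is a hypothetical executable)
srun ./my_app
```

The script would typically be submitted with `sbatch job.sh` and monitored with `squeue`; clusters using other schedulers (e.g. PBS) follow the same pattern with different directives.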


This tutorial covers the fundamentals of how to use the Lustre File System in a scientific application to achieve efficient I/O performance. Lustre is an object-based, parallel distributed file system used for large-scale, high-performance computing clusters. It scales to a large number of nodes (tens of thousands), petabytes (PB) of storage, and high aggregate throughput (hundreds of gigabytes per second), making it an advantageous data storage solution for many scientific computing applications. Many of the top supercomputers in the world, as well as small workgroup clusters and large-scale, multi-site clusters, use Lustre.
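A key lever for I/O performance on Lustre is file striping, controlled with the `lfs` utility. A brief sketch (the directory path and stripe settings are illustrative; appropriate values depend on your file sizes and access pattern):

```shell
# Inspect the current striping of a file or directory
lfs getstripe /lustre/project/mydata

# Stripe new files in a directory across 4 OSTs
# with a 1 MiB stripe size (-c = stripe count, -S = stripe size)
lfs setstripe -c 4 -S 1M /lustre/project/mydata
```

As a rule of thumb, large files read or written in parallel benefit from higher stripe counts, while many small files are usually best left at a stripe count of 1.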

Target Audience: HPC application users and developers who run applications on a Lustre file system.

This tutorial provides an introduction to parallel computing on high-performance systems. Core concepts covered include terminology, programming models, system architecture, data and task parallelism, and performance measurement. Hands-on exercises using OpenMP will explore how to build new parallel applications and how to incrementally transform serial applications into parallel ones in a shared-memory environment. (OpenMP is a standardized API for parallelizing Fortran, C, and C++ programs on shared-memory architectures.) After completing this tutorial, you will be prepared to explore more advanced parallel computing tools and techniques that build on these core concepts.

This is a recording of a training webinar given by Scot Breitenfeld (HDF), Gerd Heber (HDF), and Hariharan Devarajan (LLNL) on 3/12/2024. 

Category: Past Webinars