HPC Cluster

Courses tagged with "HPC Cluster"

Using an HPC cluster, or supercomputer, can significantly accelerate scientific research. These systems offer exceptional computing power, memory capacity, and parallel processing capabilities that a desktop computer cannot match. In this short tutorial, you will learn about the following:

  • the basic components of an HPC Cluster
  • the types of file systems available on an HPC cluster and the common policies for their use
  • the steps that are commonly taken to run an application on an HPC cluster

While each HPC cluster has unique characteristics, this tutorial describes the core components and how they can be used for scientific research. This knowledge can then be applied when using any HPC cluster.
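The steps commonly taken to run an application can be sketched as a batch job. Assuming a cluster that uses the Slurm scheduler, a minimal job script might look like the following (the job name, resource requests, module name, and application name are all illustrative):

```shell
#!/bin/bash
#SBATCH --job-name=demo        # job name shown in the queue
#SBATCH --nodes=1              # number of compute nodes requested
#SBATCH --ntasks=4             # number of tasks (cores) requested
#SBATCH --time=00:10:00        # wall-clock limit (hh:mm:ss)
#SBATCH --output=demo-%j.out   # stdout file; %j expands to the job ID

# Load the software environment the application needs
# ('gcc' here is illustrative; module names vary by cluster).
module load gcc

# Run the application from the directory the job was submitted from.
cd "$SLURM_SUBMIT_DIR"
./my_app input.dat
```

The script would be submitted with `sbatch job.sh`, and `squeue -u $USER` would show its state in the queue; clusters that use a different scheduler (such as PBS) follow the same pattern with different directives.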


Multi-core describes a computer architecture in which two or more processors, or cores, are integrated into a single chip package. This architecture runs multiple instructions simultaneously, leading to increased performance for parallel applications. Multi-core processors are ubiquitous in today's computing devices: they are found not only in high-performance computing (HPC) systems but also in desktop and laptop computers, tablets, and smartphones. While multi-core systems can speed up some workloads automatically (for example, by running independent programs at once), individual applications must be modified to take full advantage of them. This tutorial introduces the general concepts of multi-core systems and the methods used to improve HPC application performance on them.

Target Audience: HPC application users and developers who run applications on a multi-core system.
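As a quick way to see the multi-core hardware you are running on, standard Linux tools report the core count directly:

```shell
# Print the number of processing units available to this process;
# any value of 2 or more indicates a multi-core system.
nproc
```

On Linux systems, `lscpu` gives a fuller topology breakdown (sockets, cores per socket, and threads per core), which is useful when deciding how many threads an application should spawn.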


This tutorial covers the fundamentals of how to use the Lustre File System in a scientific application to achieve efficient I/O performance. Lustre is an object-based, parallel distributed file system used for large-scale, high-performance computing clusters. It enables scaling to a large number of nodes (tens of thousands), petabytes (PB) of storage, and high aggregate throughput (hundreds of gigabytes per second), making it an advantageous data storage solution for many scientific computing applications. Many of the top supercomputers in the world, as well as small workgroup clusters and large-scale, multi-site clusters, use Lustre.

Target Audience: HPC application users and developers who run applications on a Lustre file system.
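As a concrete illustration of Lustre-aware I/O tuning, the `lfs` utility controls how files are striped across the file system's object storage targets (OSTs). The stripe count and size below are illustrative; good values depend on your cluster and workload:

```shell
# Stripe new files created in results/ across 8 OSTs with a
# 4 MiB stripe size. Wide striping helps large shared files;
# small files usually do best on a single OST (-c 1).
lfs setstripe -c 8 -S 4m results/

# Inspect the striping layout a file or directory actually has.
lfs getstripe results/
```

These commands only work on a Lustre mount; on a cluster, the scratch file system is typically the Lustre-backed location where such tuning applies.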

This tutorial provides an introduction to parallel computing on high-performance systems. Core concepts covered include terminology, programming models, system architecture, data and task parallelism, and performance measurement. Hands-on exercises using OpenMP will explore how to build new parallel applications and how to incrementally transform serial applications into parallel ones in a shared-memory environment. (OpenMP is a standardized API for parallelizing Fortran, C, and C++ programs on shared-memory architectures.) After completing this tutorial, you will be prepared to take on more advanced parallel computing tools and techniques that build on these core concepts.

This tutorial introduces how to use the Illinois Campus Cluster to run scientific application code. It begins with a lesson on the fundamental concepts of scientific computing on a High-Performance Computing (HPC) cluster. The tutorial then progresses through how-to access the Illinois Campus Cluster, manage files, edit files, setup your software environment, and transfer files. Finally, you will complete a hands-on exercise where you will run an example application. Even if you are not a Campus Cluster user, by taking this tutorial you can still learn about using an HPC cluster for scientific computing.
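The access, environment-setup, and file-transfer steps outlined above typically look like the following from a terminal (the hostname, username, module name, and file names are illustrative; check the Campus Cluster documentation for current values):

```shell
# Log in to a login node (login nodes are for editing and job
# submission, not for heavy computation).
ssh myuser@cc-login.campuscluster.illinois.edu

# List the software the cluster provides via environment modules,
# then load what your application needs.
module avail
module load gcc

# Copy an input file from your local machine to the cluster
# (run this on your local machine, not on the cluster).
scp input.dat myuser@cc-login.campuscluster.illinois.edu:~/project/
```

The same `ssh`/`module`/`scp` pattern applies on most HPC clusters, which is why this material transfers beyond the Campus Cluster itself.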