Notes on OpenMP vs. MPI

When parallelizing code, whether within a single node or across multiple nodes, OpenMP and MPI (Message Passing Interface) are two commonly used approaches that differ in their underlying concepts and methodologies.

OpenMP (shared memory)

OpenMP is a programming model mainly used for shared-memory parallelism. It lets you parallelize code within a single node or multicore processor by distributing work across multiple threads. OpenMP does not natively support parallelism across multiple nodes.

When using OpenMP, a single instance of the program runs on a node and multiple threads are created to share the workload across available cores. Threads can access shared memory, which simplifies communication and synchronization between them. However, OpenMP alone cannot handle inter-node communication.
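
As an illustration, here is a minimal OpenMP sketch in C (the loop bound and the summed series are arbitrary choices for this example): the iterations of one loop are divided among the threads of a single process, and the reduction clause merges the per-thread partial sums in shared memory. It would be compiled with an OpenMP-enabled compiler, e.g. gcc -fopenmp, and the thread count is usually controlled via the OMP_NUM_THREADS environment variable.

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        const long n = 100000000;   /* arbitrary problem size for illustration */
        double sum = 0.0;

        /* The loop iterations are split across the threads of this single
         * process; reduction(+:sum) safely combines the per-thread partial
         * sums, which all live in shared memory. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < n; i++)
            sum += 1.0 / (i + 1);

        printf("sum = %f using up to %d threads\n", sum, omp_get_max_threads());
        return 0;
    }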

MPI (distributed memory)

MPI is a message-passing library used for distributed-memory parallelism. It enables parallelization across multiple nodes in a cluster or distributed environment. Each MPI process runs independently and has its own private memory; MPI provides functions for communication and coordination between processes, including processes on different nodes.

With MPI you launch multiple instances of the program, called MPI processes or ranks, which can run on the same node or be spread across separate nodes. These processes communicate by explicitly sending and receiving messages via MPI function calls. Data is exchanged using point-to-point communication or collective operations. MPI supports a wide range of communication patterns, enabling efficient coordination between nodes.
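
A minimal MPI sketch in C along the same lines (the partial value each rank computes is arbitrary and only for illustration): each rank holds its own data in its own memory, and a collective operation, MPI_Reduce, gathers the partial results onto rank 0 by message passing. It would be built with an MPI compiler wrapper such as mpicc and launched with mpirun (or srun) across the allocated nodes.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank computes a partial result in its own private memory
         * (here simply its rank number, as a placeholder workload). */
        double partial = (double)rank;

        /* A collective operation combines the partial results on rank 0.
         * Data moves by message passing, not through shared memory. */
        double total = 0.0;
        MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d ranks = %f\n", size, total);

        MPI_Finalize();
        return 0;
    }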

When allocating multiple nodes

OpenMP: With OpenMP you typically allocate multiple cores within a single node. OpenMP itself does not directly manage node allocation; the operating system or a resource manager (such as Slurm or PBS) is responsible for assigning cores to your OpenMP threads.

MPI: With MPI you explicitly request multiple nodes and launch multiple MPI processes across them, often with one rank per core or several ranks per node. The resource manager or an MPI launcher (like mpirun or srun) handles the placement of ranks onto the allocated nodes for you.
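
To see where the launcher actually places each rank, a small sketch like the following can help; MPI_Get_processor_name reports the host name of the node each rank runs on. Launching it with, say, mpirun -np 8 on a two-node allocation would show the ranks distributed across both hostnames.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Get_processor_name(name, &len);

        /* Each rank prints the node it was placed on by the launcher. */
        printf("rank %d runs on %s\n", rank, name);

        MPI_Finalize();
        return 0;
    }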

In summary, OpenMP is suited for shared-memory parallelism within a single node, while MPI is designed for distributed-memory parallelism across multiple nodes. MPI gives explicit control over inter-node communication, whereas OpenMP focuses on intra-node parallelism.