Development/Parallel Programming
Description | Intel MPI | Open MPI |
---|---|---|
module load | mpi/impi | mpi/openmpi |
Links | Intel® MPI Library | Open MPI |
License | Intel MPI Library Licensing FAQ install-doc/EULA.txt | Open MPI License |
Introduction
This page will provide information regarding the supported parallel programming paradigms and specific hints on their usage.
Please refer to the Modules Documentation on how to set up your environment on bwUniCluster to load a specific software installation.
More information about the compilers installed on our clusters is available here:

- Intel Compiler Suite
- GNU Compiler (GCC)
- General Compiler Usage (incl. PGI Compiler)
OpenMP
General Information
OpenMP is a mature specification (see http://www.openmp.org/specifications/) that allows easy, portable, and most importantly incremental node-level parallelisation of code.
Being a thread-based approach, OpenMP is aimed at more fine-grained parallelism than MPI.
Although there have been extensions of OpenMP for inter-node parallelisation, it is a node-level approach aimed at making best use of a node's cores.
With regard to ease-of-use, OpenMP is ahead of any other common approach: the source-code is annotated using #pragma omp or !$omp statements, in C/C++ and Fortran respectively.
Whenever the compiler encounters a semantic block of code encapsulated in a parallel region, this block of code is transparently compiled into a function, which is passed to a so-called team of threads upon entering this semantic block. This fork-join model of execution eases a lot of the programmer's pain involved with threads.
Being a loop-centric approach, OpenMP is aimed at codes with long/time-consuming loops.
A single combined directive pragma omp parallel for will tell the compiler to automatically parallelize the ensuing for-loop.
The following example is a bit more advanced in that even reductions of variables over multiple threads are easily parallelized:
for (int i = 0; i < VECTOR_LENGTH; i++)
    norm2 += (v[i]*v[i]);
is parallelized by just adding a single line as in:
#pragma omp parallel for reduction(+:norm2)
for (int i = 0; i < VECTOR_LENGTH; i++)
    norm2 += (v[i]*v[i]);
With VECTOR_LENGTH being large enough, this piece of code compiled with OpenMP will run in parallel, exhibiting very nice speedup.
Compiled without, the code remains as is. Developers may therefore incrementally parallelize their application based on the profile derived from performance analysis tools, starting with the most time-consuming loops.
Using OpenMP's concise API, one may query the number of running threads, the number of processors, and the wall-clock time to measure runtime, and even set parameters such as the number of threads used to execute a parallel region (a minimal sketch follows).
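The following is a minimal sketch of these API calls; it is not tied to any particular cluster or installation and merely illustrates how the query and timing functions fit together.

#include <stdio.h>
#include <omp.h>

int main (void)
{
    double t0 = omp_get_wtime();          // wall-clock timer

    omp_set_num_threads(4);               // request 4 threads for the next parallel region

    #pragma omp parallel
    {
        // one thread of the team reports the team size and the processor count
        #pragma omp single
        printf ("%d threads on %d processors\n",
                omp_get_num_threads(), omp_get_num_procs());
    }

    printf ("elapsed: %f s\n", omp_get_wtime() - t0);
    return 0;
}
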
The OpenMP-4.0 specification added the simd directive to better utilize SIMD vectorization, as well as directives to offload computation to accelerators using the target directive: these are integrated into the Intel Compiler and are actively being worked on for the GNU compiler; some restrictions may apply.
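As an illustration only (assuming a compiler with OpenMP-4.0 support, and re-using the vector v and length len from the example above), the simd directive can be combined with the worksharing loop:

// sketch: explicit SIMD vectorization on top of the threaded loop
#pragma omp parallel for simd reduction(+:norm2)
for (int i = 0; i < len; i++)
    norm2 += (v[i]*v[i]);
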
OpenMP Best Practice Guide
The following silly example to calculate the squared Euclidean norm shows some techniques:
#include <stdlib.h>
#include <stdio.h>
#include <omp.h>
#define VECTOR_LENGTH 5
int main (int argc, char * argv[])
{
    int len = VECTOR_LENGTH;
    int i;
    double * v;
    double norm2 = 0.0;
    double t1, tdiff;

    if (argc > 1)
        len = atoi (argv[1]);
    v = malloc (len * sizeof(double));

    t1 = omp_get_wtime();
    // Initialization already with (the same number of) threads
    #pragma omp parallel for
    for (i=0; i < len; i++) {
        v[i] = i;
    }
    // Now aggregate the sum-of-squares by specifying a reduction
    #pragma omp parallel for reduction(+:norm2)
    for (i=0; i < len; i++) {
        norm2 += (v[i]*v[i]);
    }
    tdiff = omp_get_wtime() - t1;

    printf ("norm2: %f Time:%f\n", norm2, tdiff);
    return 0;
}
- Group independent parallel sections together: in the above example, You may combine those two worksharing loops into one larger parallel region. This enters the parallel region only once (in the fork-join model) instead of twice. Especially in inner loops, this considerably decreases overhead (a sketch combining this with nowait follows this list).
- Compile with the Intel compiler's option -diag-enable sc-parallel3 to get further warnings on thread-safety, performance, etc. The following code with a loop-carried dependency will e.g. compile fine (aka without warning):
#pragma omp parallel for reduction(+:norm2)
for (i=1; i < len-1; i++) {
    v[i] = v[i-1]+v[i+1];
}
However the Intel compiler with -diag-enable sc-parallel3 will produce the following warning: warning #12246: variable "v" has loop carried data dependency that may lead to incorrect program execution in parallel mode; see (file:omp_norm2.c line:32)
- Always specify default(none) on larger parallel regions in order to specifically set the visibility of variables to either shared or private.
- Try to restructure code to allow for nowait: OpenMP defines synchronization points (implied barriers) at the end of work sharing constructs such as the pragma omp for directive. If the ensuing section of code does not depend on data being generated inside the parallel section, adding the nowait clause to the worksharing directive allows the compiler to eliminate this synchronization point. This reduces overhead, allows for better overlap and better utilization of the processor's resources. This might imply however to restructure the code (move portions of independent code in between dependent works-sharing constructs).
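The following sketch combines the two loops of the squared-norm example above into a single parallel region and applies default(none) and nowait. It is an illustration of the technique, not a drop-in replacement; the nowait is only legal here because both loops use the same static schedule over the same iteration space, so each thread reads back exactly the elements it initialised itself.

double norm2 = 0.0;
int i;

#pragma omp parallel default(none) shared(v, len) private(i) reduction(+:norm2)
{
    // first worksharing loop: initialization; nowait removes the implied barrier
    #pragma omp for schedule(static) nowait
    for (i = 0; i < len; i++) {
        v[i] = i;
    }

    // second worksharing loop: reduction over the same static iteration space
    #pragma omp for schedule(static)
    for (i = 0; i < len; i++) {
        norm2 += (v[i]*v[i]);
    }
}
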
Usage
OpenMP is supported by various compilers; here the usage for the two main compilers, GCC and the Intel Suite, is introduced. For both compilers, You first need to turn on OpenMP support by specifying a parameter on the compiler's command-line. In case You make function calls to OpenMP's API, You also need to include the header-file omp.h. OpenMP's API allows You to query or set the number of threads, query the number of processors, get a wall-clock time to measure execution times, etc.
OpenMP with GNU Compiler Collection
Starting with version 4.2 the gcc compiler supports OpenMP-2.5.
Since then the analysis capabilities of the GNU compiler have steadily improved.
The installed compilers support OpenMP-3.1.
To use OpenMP with the gcc-compiler, pass -fopenmp as parameter.
OpenMP with Intel Compiler
The Intel Compiler's support for OpenMP is more advanced than gcc's -- especially in terms of programmer support.
To use OpenMP with the Intel compiler, pass -openmp as command-line parameter.
One may get very insightful information about OpenMP when compiling with:
- -openmp-report2, to get information on which loops were parallelized and, if not, the reason why.
- -diag-enable sc-parallel3, to get errors and warnings about your source's weaknesses with regard to parallelization (see the example in the Best Practice Guide above).
MPI
In this section, You will find information regarding the supported installations of the Message-Passing Interface libraries and their usage.
Due to the Fortran interface ABI, all MPI-libraries are normally bound to a specific compiler-vendor and even the specific compiler version.
Therefore, two compilers are supported on bwUniCluster: GCC and Intel Compiler Suite.
As both compilers are continuously improving, the communication libraries will be adapted and recompiled in lock-step.
With a set of different implementations comes the problem of choice. These pages should inform the user of the communication libraries about the considerations to be made with regard to performance, maintainability and debugging -- in general, tool support -- of the various implementations.
MPI Introduction
The Message-Passing Interface is a standard provided by the MPI-Forum which regularly convenes for the MPI-Forum Meetings to update this standard. The current version is MPI-3.1 available as PDF.
This document defines the API of over 300 functions for the C- and the Fortran-language -- however, You will certainly not need all of them to begin with.
Every MPI-conforming program needs to call MPI_Init() and MPI_Finalize() upon start and shutdown -- or MPI_Abort() in case of an abnormal termination.
Only after initialization may the program call any other MPI function, in particular communication functions (among the exceptions are MPI_Initialized() and MPI_Get_version(), used in libraries to check whether the calling application has already started MPI and which version it provides).
The usual functions any MPI application needs are those to figure out the size, i.e. the number of processes, using MPI_Comm_size(), and the number of the calling process (here called its rank) using MPI_Comm_rank().
Communication is always relative to a so-called communicator -- the default one after initialization being called MPI_COMM_WORLD. For any MPI communicator it always holds that size >= 1 and 0 <= rank < size.
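A minimal skeleton of an MPI program illustrating these calls (a sketch only; real applications will add communication and error handling):

#include <stdio.h>
#include <mpi.h>

int main (int argc, char *argv[])
{
    int rank, size;

    MPI_Init (&argc, &argv);                  // must precede any other MPI call
    MPI_Comm_size (MPI_COMM_WORLD, &size);    // number of processes in MPI_COMM_WORLD
    MPI_Comm_rank (MPI_COMM_WORLD, &rank);    // this process' rank: 0 <= rank < size

    printf ("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize ();                          // no MPI calls allowed afterwards
    return 0;
}
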
There are basically three ways of communication:
- two-sided communication using point-to-point (often abbreviated P2P) functions, such as MPI_Send() and MPI_Recv(), which always involves two participating processes,
- collective communication functions (often abbreviated as colls) involve multiple processes; examples are MPI_Bcast() and MPI_Reduce(),
- one-sided communication, where communication between two processes is initiated by one process only. With proper RMA-hardware support and careful programming, this may allow higher performance or scalability. (The first two styles are sketched right after this list.)
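As a sketch of the first two styles (assuming MPI has already been initialised as in the skeleton above, with rank holding the result of MPI_Comm_rank() and at least two processes running):

int value = 0, local = rank, sum = 0;

// two-sided point-to-point: rank 0 sends one integer to rank 1
if (rank == 0) {
    value = 42;
    MPI_Send (&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
} else if (rank == 1) {
    MPI_Recv (&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}

// collective: every rank contributes 'local', rank 0 receives the global sum
MPI_Reduce (&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
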
All parts of the program which reference MPI functionality need to be compiled with the same compiler settings/include files and linked to the same MPI library. This is stressed here since, without taking precautions, a different MPI's header may be included, resulting in strange errors: consider that Intel MPI is derived from MPICH, with MPI datatypes being C ints, while Open MPI uses pointers to structures (the former being 4, the latter 8 bytes on bwUniCluster).
To ease the programmer's life, MPI implementations offer compiler-wrappers, e.g. mpicc for C and mpif90 for Fortran90 for compilation and linking, taking care to include all required libraries.
All programs must be started using the mpirun or mpiexec command. Depending on the actual implementation, it accepts different arguments; however, the following works with any MPI:
- mpirun -np 128 ./app starts 128 processes (with ranks 0 to 127)
- mpiexec -n 128 -hostfile mynodes.txt ./app starts 128 processes on only the nodes listed line-by-line in the provided text-file mynodes.txt.
- mpiexec -n 64 ./app1 : -n 64 ./app2 starts 128 processes, 64 of which execute app1, the other 64 execute app2. All processes however participate in the same MPI_COMM_WORLD and therefore must accordingly take care about their respective ranks.
Please note that process placement (e.g. a round-robin scheme), and specifically process-binding to sockets, is MPI-implementation dependent.
MPI Best Practice Guide
Specific performance considerations with regard to MPI (independent of the implementation):
- No communication at all is best: Only communicate between processes if at all necessary. Consider that file-access is "communication" as well.
- If communication is done with multiple processes, try to involve as many processes in just one call: MPI optimizes the communication pattern for so-called "collective communication" to take advantage of the underlying network (with regard to network topology, message sizes, queueing capabilities of the network interconnect, etc.). Therefore try to always think in collective communication, if a communication pattern involves a group of processes.
- Try to group processes together: Function calls like MPI_Cart_create() will come in handy for applications with Cartesian domains, but also general communicators derived from MPI_COMM_WORLD using MPI_Comm_split() may benefit from MPI's knowledge of the underlying network topology. Use MPI-3's MPI_Comm_split_type() with MPI_COMM_TYPE_SHARED for a sub-communicator with processes having access to the same shared memory region (aka, on bwUniCluster, the same node).
- File-accesses to load / store data must be done collectively: Writing to storage, or even reading the initialization data -- all of which involves getting data from/to all MPI processes -- must be done collectively. MPI's Parallel IO offers a rich API to read and distribute the data access -- in order to take advantage of parallel filesystems like Lustre. A many-fold performance improvement may be seen by writing data in large chunks in collective fashion -- and at the same time being nice to other users and applications.
- Try to hide communication behind computation: Try to hide (some of) the cost of point-to-point communication by using non-blocking / immediate P2P calls (MPI_Isend and MPI_Irecv et al., followed by MPI_Wait or MPI_Test et al.). This may allow the MPI implementation to initiate or even offload communication to the network interconnect and resume executing your application while data is being transferred. MPI-3 adds non-blocking collectives, e.g. MPI_Ibcast() or MPI_Iallreduce(). For extra credit, explain the use-cases of MPI_Ibarrier(). (A sketch of this overlap follows this list.)
- Every call to MPI may trigger an access to physical hardware -- limit it: When calling communication-related functions such as MPI_Test to check whether a specific communication has finished, the queue of the network adapter may need to be queried. This memory access or even physical hardware access to query the state will cost cycles. Therefore, the programmer should combine multiple requests with functions such as MPI_Waitall() or MPI_Waitany() or their Test*-counterparts.
- Make use of derived datatypes: instead of manually copying data into temporary, possibly newly allocated memory, describe the data layout to MPI -- and let the implementation, or even the network HCA's hardware, do the data fetching (a derived-datatype sketch follows this list as well).
- Bind Your processes to sockets: Operating systems are good at making best use of the resources -- which sometimes involves moving tasks from one core to another, or even (though more unlikely, since the OS' heuristics try to avoid it) to another socket, with the obvious effects: caches are cold, and every access to memory allocated on the previous socket "has to travel the bus". This happens particularly if You have multiple OpenMP parallel regions which are separated by code that does IO -- while threads are sleeping, the processes doing IO may wander to a different socket... Bind Your processes to at least the socket. All major MPIs support this binding (see below).
- Do not use the C++ interface: First of all, it has been marked as deprecated in the MPI-3.0 standard, since it added little benefit to C++ programmers over the C-interface. Moreover, since MPI implementations are written in C, the interface adds another level of indirection and therefore a bit of overhead in terms of instructions and Cache misses.
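The non-blocking overlap described above may look like the following sketch. It assumes an initialised MPI program; partner, N, sendbuf, recvbuf and do_local_work() are illustrative names, not part of any MPI installation.

MPI_Request reqs[2];

// post the receive and the send without blocking
MPI_Irecv (recvbuf, N, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD, &reqs[0]);
MPI_Isend (sendbuf, N, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD, &reqs[1]);

// useful work that does not touch the two buffers, overlapped with the transfer
do_local_work ();

// both transfers must be complete before the buffers are reused
MPI_Waitall (2, reqs, MPI_STATUSES_IGNORE);
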
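Similarly, the derived-datatype advice can look like this sketch: a strided column of a row-major N x N matrix of doubles is described to MPI instead of being packed by hand (matrix, N and partner are again only illustrative names).

MPI_Datatype column;

// N blocks of 1 element each, separated by a stride of N elements
MPI_Type_vector (N, 1, N, MPI_DOUBLE, &column);
MPI_Type_commit (&column);

// send the third column of the matrix in one call, without manual copying
MPI_Send (&matrix[0][2], 1, column, partner, 0, MPI_COMM_WORLD);

MPI_Type_free (&column);
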
Open MPI
The Open MPI library is an open, flexible and nevertheless performant implementation of MPI-2 and MPI-3. Licensed under BSD, it is being actively developed by an open community of industry and research institutions. The flexibility comes in handy: using the concept of an MCA (aka a plugin), Open MPI supports many different network interconnects (InfiniBand, TCP, Cray, etc.); on the other hand, an installation may be tailored to suit a site, e.g. the network (InfiniBand with specific settings), the main startup mechanism, etc. Furthermore, the FAQ offers hints on performance tuning.
Usage
Like other MPI implementations, after loading the module, Open MPI provides the compiler-wrappers mpicc, mpicxx and mpifort (or, for versions lower than 1.7, mpif77 and mpif90) for the C, C++ and Fortran compilers respectively. Although their usage is not required, these wrappers are handy in that one does not have to specify the command-line options for header or library directories, aka -I and -L, nor the actually needed MPI libraries themselves.
Further information
Open MPI also features a few specific functionalities that will help users and developers alike:
- Open MPI's tool ompi_info allows seeing all of Open MPI's installed MCA components and their specific options.
Without any option the user gets a list of flags the Open MPI installation was compiled with (version of compilers, specific configure flags, e.g. debugging or profiling options). Furthermore, using ompi_info --param all all one may see all of the MCA's options, e.g. that the default PML MCA uses an initial free-list of 4 blocks (increased by 64 upon first encountering this limit): ompi_info --param ob1 all -- which may be increased for applications that are certain to benefit from a larger value upon startup. In order to limit output, the ompi_info tool defines various levels: the following requests the maximum "level 9" and will show all the available options: ompi_info --level 9 --all --param all all
- Open MPI allows adapting MCA parameters on the command-line, e.g. the above-mentioned parameter: mpirun -np 16 --mca pml_ob1_free_list_num 128 ./mpi_stub.
- Open MPI internally uses the tool hwloc for node-local processor information, as well as process and memory affinity. It is also a good tool to get information on the node's processor topology and caches. This may be used to optimize and balance memory usage or to choose a better ratio of MPI processes per node vs. OpenMP threads per core.
- Running jobs interactively using SLURM with multiple nodes may hang. Adding --debug-daemons to mpirun will show SLURM's error message:
srun: Job step creation temporarily disabled, retrying
In order to run these jobs, one needs to set the run-time's PLM option as MCA parameter:
mpirun --mca plm_slurm_args '--mem-per-cpu=0' ./app
(see the SLURM bug report https://bugs.schedmd.com/show_bug.cgi?id=1420)
Intel (i)MPI
The Intel MPI Library is a multi-fabric message passing library that implements the Message Passing Interface, v2.2 (MPI-2.2) specification. It provides a standard library across Intel platforms that enables developers to adopt MPI-2.2 functions as their needs dictate.
The Intel MPI Library enables developers to change or to upgrade processors and interconnects as new technology becomes available without changes to the software or to the operating environment.
The library is provided in the following kits:
- The Intel MPI Library Runtime Environment (RTO) has the tools you need to run programs, including Multipurpose Daemon (MPD), Hydra and supporting utilities, shared (.so) libraries, and documentation.
- The Intel MPI Library Development Kit (SDK) includes all of the Runtime Environment components plus compilation tools, including compiler commands such as mpiicc, include files and modules, static (.a) libraries, debug libraries, trace libraries, and test codes.
General information
All Intel mpi-modules are called 'mpi/impi'.
These modules provide the Intel Message Passing Interface (mpicc, mpicxx, mpif77
and mpif90) for the Intel compiler suite (icc, icpc and ifort) (see also Intel MPI Library).
The corresponding Intel compiler module is loaded automatically (if not done before).
Compiler and MPI module must fit. Don't mix incongruous versions!
Usage
The following table lists available MPI compiler commands and the underlying compilers, compiler families, languages, and application binary interfaces (ABIs) that they support.
The Intel MPI Library Compiler Drivers
Compiler Command | Default Compiler | Supported Language(s) | Supported ABI's |
---|---|---|---|
Generic Compilers | |||
mpicc | gcc, cc | C | 32/64 bit |
mpicxx | g++ | C/C++ | 32/64 bit |
mpifc | gfortran | Fortran77/Fortran 95 | 32/64 bit |
GNU Compiler Versions 3 and higher | |||
mpigcc | gcc | C | 32/64 bit |
mpigxx | g++ | C/C++ | 32/64 bit |
mpif77 | g77 | Fortran 77 | 32/64 bit |
mpif90 | gfortran | Fortran 95 | 32/64 bit |
Intel Fortran, C++ Compilers Versions 13.1 through 14.0 and Higher | |||
mpiicc | icc | C | 32/64 bit |
mpiicpc | icpc | C++ | 32/64 bit |
mpiifort | ifort | Fortran 77/Fortran 95 | 32/64 bit |
- Compiler commands are available only in the Intel MPI Library Development Kit.
- Compiler commands are in the <installdir>/<arch>/bin directory, where <installdir> refers to the Intel MPI Library installation directory (depending on the loaded mpi module) and <arch> is one of the following architectures:
- ia32 - IA-32 architecture
- intel64 - Intel 64 architecture
- mic – Intel Xeon Phi™ Coprocessor architecture
- Ensure that the corresponding underlying compilers (32-bit or 64-bit, as appropriate) are already in your PATH. This is normally done by the 'module load' command.
- To port existing MPI-enabled applications to the Intel MPI Library, recompile all sources.
- To display mini-help of a compiler command, execute it without any parameters.
Sample MPI program in "C"
The following example code includes a sample MPI program written in C.
/******************************************************************************
 * Content: (example version)
 * Based on a Monte Carlo method, this MPI sample code uses volumes to
 * estimate the number PI.
 ******************************************************************************/
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <time.h>
// here you'll include the main MPI library
#include "mpi.h"
#define MASTER 0
#define TAG_HELLO 4
#define TAG_TEST 5
#define TAG_TIME 6
int main(int argc, char *argv[]) {
    int i, id, remote_id, num_procs;

    MPI_Status stat;
    int namelen;
    char name[MPI_MAX_PROCESSOR_NAME];

    // Start MPI.
    if (MPI_Init (&argc, &argv) != MPI_SUCCESS) {
        printf ("Failed to initialize MPI\n");
        return (-1);
    }

    // Create the communicator, and retrieve the number of processes.
    MPI_Comm_size (MPI_COMM_WORLD, &num_procs);

    // Determine the rank of the process.
    MPI_Comm_rank (MPI_COMM_WORLD, &id);

    // Get machine name
    MPI_Get_processor_name (name, &namelen);

    if (id == MASTER) {
        printf ("Hello world: rank %d of %d running on %s\n", id, num_procs, name);

        for (i = 1; i<num_procs; i++) {
            MPI_Recv (&remote_id, 1, MPI_INT, i, TAG_HELLO, MPI_COMM_WORLD, &stat);
            MPI_Recv (&num_procs, 1, MPI_INT, i, TAG_HELLO, MPI_COMM_WORLD, &stat);
            MPI_Recv (&namelen, 1, MPI_INT, i, TAG_HELLO, MPI_COMM_WORLD, &stat);
            MPI_Recv (name, namelen+1, MPI_CHAR, i, TAG_HELLO, MPI_COMM_WORLD, &stat);
            printf ("Hello world: rank %d of %d running on %s\n", remote_id, num_procs, name);
        }
    }
    else {
        MPI_Send (&id, 1, MPI_INT, MASTER, TAG_HELLO, MPI_COMM_WORLD);
        MPI_Send (&num_procs, 1, MPI_INT, MASTER, TAG_HELLO, MPI_COMM_WORLD);
        MPI_Send (&namelen, 1, MPI_INT, MASTER, TAG_HELLO, MPI_COMM_WORLD);
        MPI_Send (name, namelen+1, MPI_CHAR, MASTER, TAG_HELLO, MPI_COMM_WORLD);
    }

    // Rank 0 distributes random seeds to all processes.
    double startprocess, endprocess;

    int distributed_seed = 0;
    int *buff;

    buff = (int *)malloc(num_procs * sizeof(int));

    unsigned int MAX_NUM_POINTS = pow (2,32) - 1;
    unsigned int num_local_points = MAX_NUM_POINTS / num_procs;

    if (id == MASTER) {
        srand (time(NULL));
        for (i=0; i<num_procs; i++) {
            distributed_seed = rand();
            buff[i] = distributed_seed;
        }
    }

    // Broadcast the seeds to all processes
    MPI_Bcast(buff, num_procs, MPI_INT, MASTER, MPI_COMM_WORLD);

    // At this point, every process (including rank 0) has a different seed. Using their seed,
    // each process generates N points randomly in the interval [1/n, 1, 1]
    startprocess = MPI_Wtime();

    srand (buff[id]);

    unsigned int point = 0;
    unsigned int rand_MAX = 128000;
    float p_x, p_y, p_z;
    float temp, temp2, pi;
    double result;

    unsigned int inside = 0, total_inside = 0;

    for (point=0; point<num_local_points; point++) {
        temp = (rand() % (rand_MAX+1));
        p_x = temp / rand_MAX;
        p_x = p_x / num_procs;

        temp2 = (float)id / num_procs;      // id belongs to 0, num_procs-1
        p_x += temp2;

        temp = (rand() % (rand_MAX+1));
        p_y = temp / rand_MAX;

        temp = (rand() % (rand_MAX+1));
        p_z = temp / rand_MAX;

        // Compute the number of points residing inside of the 1/8 of the sphere
        result = p_x * p_x + p_y * p_y + p_z * p_z;

        if (result <= 1) inside++;
    }

    double elapsed = MPI_Wtime() - startprocess;

    MPI_Reduce (&inside, &total_inside, 1, MPI_UNSIGNED, MPI_SUM, MASTER, MPI_COMM_WORLD);

#if DEBUG
    printf ("rank %d counts %u points inside the sphere\n", id, inside);
#endif

    if (id == MASTER) {
        double timeprocess[num_procs];

        timeprocess[MASTER] = elapsed;
        printf ("Elapsed time from rank %d: %10.2f (sec)\n", MASTER, timeprocess[MASTER]);

        for (i=1; i<num_procs; i++) {
            // Rank 0 waits for the elapsed time value
            MPI_Recv (&timeprocess[i], 1, MPI_DOUBLE, i, TAG_TIME, MPI_COMM_WORLD, &stat);
            printf ("Elapsed time from rank %d: %10.2f (sec)\n", i, timeprocess[i]);
        }

        temp = 6 * (float)total_inside;
        pi = temp / MAX_NUM_POINTS;

        printf ("Out of %u points, there are %u points inside the sphere => pi=%16.12f\n",
                MAX_NUM_POINTS, total_inside, pi);
    }
    else {
        // Send back the processing time (in seconds)
        MPI_Send (&elapsed, 1, MPI_DOUBLE, MASTER, TAG_TIME, MPI_COMM_WORLD);
    }

    free(buff);

    // Terminate MPI.
    MPI_Finalize();

    return 0;
}