Parallel Programming
On this page, you will find information on the supported parallel programming paradigms and specific hints on their usage. Please refer to BwUniCluster_Environment_Modules on how to set up your environment on bwUniCluster to load a specific installation.
OpenMP
Under construction.
MPI
In this section, you will find information on the supported installations of the Message-Passing Interface (MPI) libraries and their usage. Due to the Fortran interface ABI, an MPI library is normally bound to a specific compiler vendor and even to a specific compiler version. Therefore, as listed in BwHPC_BPG_Compiler, two compilers are supported on bwUniCluster: GCC and the Intel compiler suite. As both compilers are continuously improving, the communication libraries are updated in lock-step.
With several implementations available comes the problem of choice. These pages inform the user about the available communication libraries and about the considerations to be made with regard to performance, maintainability and debugging (in general, tool support) of the various implementations.
General Performance Considerations
Specific performance considerations with regard to MPI (independent of the implementation):
- No communication is best: Only communicate between processes if at all necessary. Consider that file access is "communication" as well.
- If communication is done, try to involve as many processes as possible: MPI optimizes the communication pattern of so-called "collective communication" to take advantage of the underlying network (with regard to network topology, message sizes, queueing capabilities of the network interconnect, etc.). Therefore, always try to think in terms of collective communication whenever a communication pattern involves a group of processes. Function calls like MPI_Cart_create come in handy for applications with Cartesian domains, but general communicators derived from MPI_COMM_WORLD may also benefit from the library's knowledge of the underlying network topology (a short sketch using MPI_Cart_create follows this list).
- File accesses to load or store data must be done collectively: Writing to storage, or even reading the initialization data, involves getting data from or to all MPI processes and should therefore be done collectively. MPI's parallel IO implementation offers a rich API to read and distribute the data accesses in order to take advantage of parallel filesystems like Lustre. A many-fold performance improvement may be seen by writing data in large chunks in a collective fashion, which at the same time is considerate of other users and applications.
- For point-to-point communication (P2P), try to hide communication behind computation by using non-blocking / immediate P2P calls (MPI_Isend and MPI_Irecv followed by MPI_Wait or MPI_Test). This may allow the MPI implementation to offload communication to the network interconnect and resume executing your application while data is being transferred (a non-blocking example also follows this list).
- Every call to MPI may trigger an access to physical hardware, so limit such calls: When calling communication-related functions such as MPI_Test to check whether a specific communication has finished, the queue of the network adapter may need to be queried. This memory access, or even physical hardware access to query the state, costs cycles. Therefore, the programmer may want to aggregate such checks with functions such as MPI_Testall or MPI_Waitall.
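The following minimal sketch illustrates the topology-aware approach mentioned above; the 2D grid layout and the printed neighbour information are arbitrary choices for illustration, not a recommended decomposition. It creates a Cartesian communicator with MPI_Cart_create and queries the neighbouring ranks with MPI_Cart_shift:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Let MPI choose a balanced 2D process grid for the given number of processes. */
    int dims[2] = {0, 0};
    MPI_Dims_create(size, 2, dims);

    /* Non-periodic grid; reorder = 1 allows MPI to renumber ranks to match the network topology. */
    int periods[2] = {0, 0};
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart);

    /* Rank within the Cartesian communicator and its neighbours in the first dimension. */
    int rank, left, right;
    MPI_Comm_rank(cart, &rank);
    MPI_Cart_shift(cart, 0, 1, &left, &right);
    printf("cart rank %d in %dx%d grid: left=%d right=%d\n", rank, dims[0], dims[1], left, right);

    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}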
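The non-blocking pattern from the last two items can be sketched as follows; the ring-style neighbours and the message size are arbitrary choices for this example. Both transfers are started immediately, the application may compute while data is potentially in flight, and a single MPI_Waitall completes both requests:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int size, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1024;                     /* arbitrary message size */
    double *sendbuf = malloc(n * sizeof(double));
    double *recvbuf = malloc(n * sizeof(double));
    for (int i = 0; i < n; i++)
        sendbuf[i] = rank;

    int right = (rank + 1) % size;          /* simple ring communication pattern */
    int left  = (rank - 1 + size) % size;

    /* Start both transfers without blocking ... */
    MPI_Request reqs[2];
    MPI_Irecv(recvbuf, n, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, n, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... do useful computation here while data may be in flight ... */

    /* ... and complete both requests with a single call instead of testing each one. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}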
Open MPI
The Open MPI library is an open, flexible and nevertheless performant implementation of MPI-2 and MPI-3. Licensed under BSD, it is being actively developed by an open community of industry and research institutions. The flexibility comes in handy: using the concept of an MCA component (aka a plugin), Open MPI supports many different network interconnects (InfiniBand, TCP, Cray, etc.); on the other hand, an installation may be tailored to suit a specific site, e.g. the network (InfiniBand with specific settings), the main startup mechanism, etc. Furthermore, the Open MPI documentation offers hints on tuning.
Usage
Like other MPI implementations, after loading the module, Open MPI provides the compiler wrappers mpicc, mpicxx and the Fortran wrappers such as mpif77 for the C, C++ and Fortran compilers respectively. Although their usage is not required, these wrappers are handy: one does not have to specify the command-line options for header and library directories, aka -I and -L, nor the actually needed MPI libraries themselves.
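As a short illustration (the file name and process count below are arbitrary), a minimal MPI program compiled with the wrapper could look like this:

/* hello_mpi.c -- minimal MPI program to verify the toolchain */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int size, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}

Compiled with mpicc -O2 hello_mpi.c -o hello_mpi and started with mpirun -np 4 ./hello_mpi, the wrapper automatically supplies the include paths, library paths and MPI libraries that would otherwise have to be passed by hand.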
Further information
Open MPI also features a few specific functionalities that help users and developers alike:
- Open MPI's tool ompi_info allows seeing all of Open MPI's installed MCA components and their specific options.
Without any option the user gets a list of the settings the Open MPI installation was compiled with (compiler versions, specific configure flags, e.g. debugging or profiling options). Furthermore, using ompi_info --param all all
one may see all of the MCA components' options, e.g. that the default PML component ob1 uses an initial free list of 4 blocks (increased by 64 upon first hitting this limit):
ompi_info --param pml ob1
This value may be increased for applications that are certain to benefit from a larger free list right at startup.
- Open MPI allows adapting MCA parameters on the command line; parameters may be supplied to mpirun, e.g. the above-mentioned free-list parameter (see also the note after this list):
mpirun -np 16 --mca pml_ob1_free_list_num 128 ./mpi_stub
- Open MPI internally uses the tool hwloc for node-local processor information as well as process and memory affinity. hwloc is also a good tool to obtain information on the node's processor topology and cache hierarchy. This may be used to optimize and balance memory usage, or to choose a better ratio of MPI processes per node vs. OpenMP threads per core.
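As a further hint on the MCA parameters mentioned above (the parameter is just the free-list example from before): Open MPI also reads MCA parameters from environment variables of the form OMPI_MCA_<parameter>, which is convenient in batch scripts and has the same effect as passing --mca on the mpirun command line:
export OMPI_MCA_pml_ob1_free_list_num=128
mpirun -np 16 ./mpi_stub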
Intel® MPI
Under construction.