Development/General compiler usage
Description | Content |
---|---|
module load | compiler/gnu or compiler/intel or compiler/llvm and others... |
License | Intel: Commercial; GNU: GPL; LLVM: Apache 2; PGI/NVIDIA: Commercial |
Description
Basically, compilers translate human-readable source code (e.g. C++ adhering to ISO/IEC 14882:2014, encoded as UTF-8 text) into machine code (e.g. x86-64 with the Linux ABI in ELF format). Compilers are complex software and have become very powerful over the last decades, guiding you as a programmer towards writing better, more portable and more performant programs. Use the compiler as a tool, and ideally run multiple compilers over the same source code for best results. The basic operations and hints can be performed with the same or similar commands on all available compilers. For advanced usage such as optimization and profiling you should consult the best practice guide of the compiler you intend to use (GCC, Intel Suite).
More information about the MPI versions of the GNU and Intel compilers is available here.
Loading compilers as modules
Modules and the loading of modules are described here for Lmod and here for traditional Environment Modules.
However, modules need to be mentioned, since on any system there is a pre-installed set of compilers (for C, C++ and usually Fortran) provided by the Linux distribution -- the so-called system compilers. These, however, may lack certain options for optimization, warnings or other features. On Red Hat Enterprise Linux this is the GNU compiler v8.3.1. You are therefore advised to check out the newer compilers available as modules.
Since Fortran (and very old C++) requires compiling and linking libraries with the very same compiler, many libraries, first and foremost the MPI libraries, need to be provided for specific versions of a compiler. On BwUniCluster_2.0 these libraries only become visible to module avail once a compiler is loaded. Hence, run
$ module avail compiler/intel
...
$ module load compiler/intel/2021.4.0
...
$ module avail
to see the available MPI modules.
The Intel, GCC and PGI suites each provide compilers for several programming languages, which become available once the corresponding module is loaded.
Linux Default Compiler
The default compiler installed on all compute nodes is the GNU Compiler Collection (GCC), or in short the GNU compiler.
- Don't confuse this system compiler with the available compiler modules.
- Only the modules load the complete environment needed.
Example
$ module purge                # unload all modules
$ module list                 # control
No Modulefiles Currently Loaded.
$ gcc --version               # see version of default Linux GNU compiler
gcc (GCC) 8.3.1 20191121 (Red Hat 8.3.1-5)
[...]
$ module load compiler/gnu    # load default GNU compiler module
$ module list                 # control
Currently Loaded Modulefiles:
  1) compiler/gnu/10.2(default)
$ gcc --version               # now, check the current (loaded) module
gcc (GCC) 10.2.0
[...]
Synoptical Tables
Compilers (no MPI)
Compiler Suite | Language | Command |
---|---|---|
Intel Composer • Best Practice Guides on Intel Compiler Software | C | icc |
 | C++ | icpc |
 | Fortran | ifort |
GCC • Best Practice Guides on GNU Compiler Software | C | gcc |
 | C++ | g++ |
 | Fortran | gfortran |
LLVM | C | clang |
 | C++ | clang++ |
 | Fortran 77/90 | flang |
PGI/NVIDIA | C | pgcc |
 | C++ | pgCC |
 | Fortran 77/90 | pgf77 or pgf90 |
MPI Compilers and Underlying Compilers
MPI implementations such as MPICH, Intel MPI (derived from MPICH) or Open MPI provide compiler wrappers that ease the usage of MPI by supplying the include directory (-I) and the required libraries as well as the MPI implementation's library directory (-L) for linking. The following table lists the available MPI compiler commands together with the underlying compilers, compiler families, languages, and application binary interfaces (ABIs) that they support; a minimal usage example follows the table.
MPI Compiler Command | Default Compiler | Supported Language(s) | Supported ABIs |
---|---|---|---|
Generic Compilers | | | |
mpicc | gcc, cc | C | 32/64 bit |
mpicxx | g++ | C/C++ | 32/64 bit |
mpifc | gfortran | Fortran 77/Fortran 95 | 32/64 bit |
GNU Compiler Versions 3 and higher | | | |
mpigcc | gcc | C | 32/64 bit |
mpigxx | g++ | C/C++ | 32/64 bit |
mpif77 | g77 | Fortran 77 | 32/64 bit |
mpif90 | gfortran | Fortran 95 | 32/64 bit |
Intel Fortran, C++ Compilers Versions 13.1 through 14.0 and Higher | | | |
mpiicc | icc | C | 32/64 bit |
mpiicpc | icpc | C++ | 32/64 bit |
mpiifort | ifort | Fortran 77/Fortran 95 | 32/64 bit |
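To illustrate the wrappers listed above, a minimal MPI "Hello World" is sketched below. The file name hello_mpi.c, the generic wrapper mpicc and the launcher invocation mpirun -n 4 are only illustrative choices; depending on the loaded MPI module you may use e.g. mpiicc and the batch system's launcher instead.
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>                               // MPI API declarations
int main (int argc, char * argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);                    // initialize the MPI runtime
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);      // rank of this process
    MPI_Comm_size(MPI_COMM_WORLD, &size);      // total number of processes
    printf("Hello World from rank %d of %d\n", rank, size);
    MPI_Finalize();                            // shut down MPI cleanly
    return EXIT_SUCCESS;
}
It may then be compiled and started (here with 4 processes) with e.g.
$ mpicc hello_mpi.c -o hello_mpi
$ mpirun -n 4 ./hello_mpi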
How to use
The following compiler commands work for all the compilers listed above, even though the examples are shown for icc only.
Commands
The typical introduction is a "Hello World" program. The following C source code shows best practices:
#include <stdio.h> // for printf
#include <stdlib.h> // for EXIT_SUCCESS and EXIT_FAILURE
int main (int argc, char * argv[]) { // std. definition of a program taking arguments
printf("Hello World\n"); // Unix Output is line-buffered, end line with New-line.
return EXIT_SUCCESS; // End program by returning 0 (No Error)
}
It may be compiled and linked with the single command
$ icc hello.c -o hello
to produce an executable named hello.
This process can be divided into two steps:
$ icc -c hello.c
$ icc hello.o -o hello
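Separate compilation pays off once a project has more than one source file, because only changed files need to be recompiled. The following sketch uses two hypothetical files, greet.c and main.c (names chosen purely for illustration):
/* greet.c -- implementation of the greeting */
#include <stdio.h>
void greet (void) {
    printf("Hello World\n");
}
/* main.c -- program entry point */
#include <stdlib.h>
void greet (void);            // declaration; a real project would use a header file
int main (void) {
    greet();
    return EXIT_SUCCESS;
}
$ icc -c greet.c
$ icc -c main.c
$ icc greet.o main.o -o hello
After editing only greet.c, just that file needs to be recompiled before relinking.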
When using libraries you must sometimes specify where the
- include files are (option -I) and where the
- library files are (option -L).
In addition you have to tell the compiler which
- library you want to use (option -l).
For example, after loading the module numlib/fftw, you can compile code that uses FFTW with
$ icc -c hello.c -I$FFTW_INC_DIR
$ icc hello.o -o hello -L$FFTW_LIB_DIR -lfftw3
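Of course, such a compile line only makes sense if the source actually uses the library. A minimal sketch using the documented FFTW3 interface (array length and input values are arbitrary choices for illustration) might look like this:
#include <stdlib.h>
#include <fftw3.h>                                    // FFTW3 API
int main (void) {
    const int n = 16;
    fftw_complex *in  = fftw_malloc(sizeof(fftw_complex) * n);
    fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * n);
    for (int i = 0; i < n; i++) {                     // arbitrary test signal
        in[i][0] = (double) i;                        // real part
        in[i][1] = 0.0;                               // imaginary part
    }
    fftw_plan p = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_execute(p);                                  // compute the transform
    fftw_destroy_plan(p);
    fftw_free(in);
    fftw_free(out);
    return EXIT_SUCCESS;
}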
If the program crashes or does not produce the expected output, the compiler can help you: let it print all warning messages (-Wall) and add debugging information (-g):
$ icc -Wall -g hello.c -o hello
Debugger
If the problem can't be solved this way, you can inspect what exactly your program does using a debugger.
To use a debugger properly with your program, you have to compile it with debug information (option -g):
Example
$ icc -g hello.c -o hello
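A typical interactive session with the GNU debugger gdb could then look as follows (assuming gdb is available; the breakpoint and the inspected variable are just examples):
$ gdb ./hello           # start gdb with the executable built above
(gdb) break main
(gdb) run
(gdb) next
(gdb) print argc
(gdb) backtrace
(gdb) quit
Here break main sets a breakpoint at the start of main(), run starts the program, next steps over one source line, print inspects a variable and backtrace shows the current call stack.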
Although the compiler option -Wall (and possibly others) should always be set, the -g option should only be passed for debugging purposes, i.e. to find bugs. It may slow down execution and it enlarges the binary due to the added debugging symbols.
Optimization
The usual way to compile your source is to apply compiler optimization.
Since there are many optimization options, the optimization level -O2 is recommended as a starting point:
$ icc -O2 hello.c -o hello
Beware: the optimization flag is a capital O (as in Otto), not a 0 (zero)!
All compilers offer a multitude of optimization options; the complete list of options with short explanations can be displayed for GCC, LLVM and the Intel Suite using the options -v --help:
$ icc -v --help | less
$ gcc -v --help | less
$ clang -v --help | less
Please note that the optimization level -O2 produces code for a general instruction set. If you want to target a specific instruction set and take advantage of AVX2 or AVX-512, you have to either add the machine-dependent flag (e.g. -mavx512f) or set the specific architecture of your target processor. For BwUniCluster_2.0 this depends on whether you run your application on any node, in which case you would select the older Broadwell CPUs, or whether you target the newer HPC nodes (which feature the Xeon Gold 6230, aka "Cascade Lake" architecture).
$ gcc -O2 -o hello hello.c                    # general optimization for any architecture
$ gcc -O2 -march=broadwell -o hello hello.c   # will work on any compute node of bwUniCluster 2.0
$ gcc -O2 -march=cascadelake -o hello hello.c # this may not run on Broadwell nodes
While -march=broadwell adds compiler options such as -mavx -mavx2 -msse3 -msse4 -msse4.1 -msse4.2 -mssse3, -march=cascadelake further adds -mavx512bw -mavx512cd -mavx512dq -mavx512f -mavx512vl -mavx512vnni -mfma, where -mfma allows fused multiply-add instructions. These options may provide considerable speed-up to your code as is. Please note, however, that Cascade Lake may throttle the processor's clock speed when executing AVX-512 instructions, possibly running slower than (older) AVX2 code paths would have. Further vectorization as described in the Best Practice Guides may help.
For GCC the options in use are best visible by calling gcc -O2 -fverbose-asm -S -o hello.S hello.c. The option -fverbose-asm stores all the options in the assembler file hello.S.
You should then pay attention to the vectorization attained by the compiler and concentrate on the time-consuming loops which the compiler was not able to vectorize. This information is available with the Intel compiler using -qopt-report=5, producing a lot of output in hello.optrpt, while GCC offers this information using -fopt-info-all.
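As a small, hypothetical illustration (file name, array size and flag choices are only for this sketch; depending on the GCC version, auto-vectorization needs -O3 or -O2 -ftree-vectorize), the following loop is a typical candidate for vectorization, and the report options mentioned above show whether the compiler succeeded:
/* saxpy.c -- simple loop the compiler can vectorize */
#include <stdlib.h>
void saxpy (int n, float a, const float *x, float *y) {
    for (int i = 0; i < n; i++)       // independent iterations, ideal for SIMD
        y[i] = a * x[i] + y[i];
}
int main (void) {
    int n = 1024;
    float *x = malloc(n * sizeof(float));
    float *y = malloc(n * sizeof(float));
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }
    saxpy(n, 3.0f, x, y);
    free(x);
    free(y);
    return EXIT_SUCCESS;
}
$ gcc -O2 -ftree-vectorize -march=broadwell -fopt-info-vec saxpy.c -o saxpy
$ icc -O2 -xHost -qopt-report=5 saxpy.c -o saxpy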
Warnings and Error detection
All compilers have improved tremendously with regard to analyzing and detecting suspicious code: do make use of such warnings and hints. The number of false positives has decreased, and acting on the warnings will make your code more accessible, less error-prone and more portable.
The typical warning flag is -Wall, which turns on the most common warnings. However, there are multiple other worthwhile warnings which are not covered by it (since they might increase false positives, or since they are not yet considered as prominent). E.g. -Wextra turns on several further warnings, which in the above example will show that neither argc nor argv has been used inside main.
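If such parameters are unused on purpose, a common idiom to document this and silence the warning is to cast them to void, as in this sketch of the "Hello World" example above:
#include <stdio.h>
#include <stdlib.h>
int main (int argc, char * argv[]) {
    (void) argc;                     // explicitly mark the parameters as unused,
    (void) argv;                     // silencing -Wextra's unused-parameter warning
    printf("Hello World\n");
    return EXIT_SUCCESS;
}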
For LLVM's clang the flag -Weverything turns on all available warnings, albeit leading to many warnings on larger projects. However, the fix-it hints it provides are very helpful as well.
All the compilers offer the flag -Werror, which turns any warning (which would otherwise still allow compilation to complete) into a hard error.
Another powerful feature available in the GNU and LLVM compilers is static code analysis, otherwise only available in commercial tools like [1]. Static code analysis evaluates each and every code path, making assumptions on input values and branches taken, and detects corner cases which might lead to real errors -- without having to actually execute this code path. For GCC this is turned on using -fanalyzer, which will detect e.g. cases of memory being used after a free() of said memory, and many others. For LLVM, recompile your project using scan-build, e.g.:
$ scan-build make
This produces warnings on stdout, but more importantly scan reports in the directory /scratch/scan-build-XXX, where XXX is the date and time of the build. For example, the output for Open MPI includes real issues of missed memory releases in error code paths: