Software Module System - Lmod
   
= Preface =
   
This guide provides a general overview of and introduction to software management via Lmod on JUSTUS 2, both for new users and for experienced users coming, e.g., from sites running a different environment modules system.
   
= Scientific Software management through the module system Lmod =
 
The following sections cover the basic module commands needed to find and load the scientific applications installed on JUSTUS 2.<br>
 
JUSTUS 2 uses the Linux operating system. The standard Linux packages are installed on the front end nodes (login and visualisation nodes).
 
The '''scientific software is accessible via a so-called module system'''.<br>
<br>
 
'''To find and load a scientific application, one needs to use module commands.''' For example, the following command sequence:
<br>
 
 
 
<pre>
module load chem/gaussian/g16.C.01
module list
module help chem/gaussian/g16.C.01
cp $GAUSSIAN_EXA_DIR/bwforcluster-gaussian-example.sbatch .
</pre>
<pre>
1. loads the module with the gaussian software package of version 16, revision C.01
2. prints out the list of currently loaded modules
3. provides the user help for the particular gaussian module
4. copies the template batch script specifically designed for submission of g16 jobs into the SLURM workload manager on JUSTUS 2.
</pre>
 
 
   
=== Why do we use a module system? Modules load scientific software ===
 
The module system on JUSTUS 2 is managed by '''Lmod''' (https://lmod.readthedocs.io/en/latest/).
The module system incorporates the majority of the computational software available - this includes, among others, compilers, MPI libraries, numerical libraries, computational
chemistry packages, Python-specific libraries, etc. <br><br>'''Programs managed by the module system are not usable by default. A module has to be "loaded" before its software becomes executable.'''
<br><br>
The use of the '''module system''' provides, among others, the following '''functionalities''':

'''1.''' When '''loading a module''', it automatically '''sets the appropriate environment variables required by the application to run properly'''.<br>
'''2.''' It also takes care of '''module dependencies'''. It either loads all additional modules required by the application, or it informs the user if additional dependency modules need to be loaded manually.<br>
'''3.''' It '''prevents loading of modules''' that could '''conflict''' and thereby '''cause instability or unexpected behavior'''.<br>
 
<br>
<br>
Among the main functionalities of Lmod is '''module load''', which makes the variety of software packages pre-installed on the cluster accessible. '''This takes only a single command''':
 
 
<pre>
module load <module_name>
</pre>
The activation is realized by '''dynamic modification of the user's shell environment'''. This simply includes '''adding new paths''' to the bin directories of the specific software '''to the PATH environment variable'''. Typically, Lmod modifies '''PATH''' and '''LD_LIBRARY_PATH''', and it also '''sets new variables''' such as '''<SOFTWARE_NAME>_EXA_DIR''', which contains the path to a directory with examples for the specific software.
<br><br>
Example: compare the content of the $PATH environment variable before and after loading the gaussian module:
<br>Before loading the gaussian module:
 
<pre>
echo $PATH
/home/software/common/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
</pre>
<pre>
which g16
/usr/bin/which: no g16 in (/home/software/common/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin)
</pre>
 
 
 
 
 
 
<pre>
ml chem/gaussian
</pre>
After gaussian is loaded:
 
 
<pre>
echo $PATH
/.../chem/gaussian/g16.C.01/x86_64-Intel-avx2-source/g16/bsd:/.../chem/gaussian/g16.C.01/x86_64-Intel-avx2-source/g16:/home/software/common/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
</pre>
 
 
 
<pre>
which g16
/.../chem/gaussian/g16.C.01/x86_64-Intel-avx2-source/g16/g16
</pre>
 
   
=== Basic functions of Lmod and commands ===
<br>The module system has other useful capabilities than just managing the environment.<br><br>
''' (i) to list available software'''<br>
 
<pre>
module available
</pre>
'''<br> (ii) to load (activate) modules (particular software)'''<br>
 
 
 
<pre>
module load
</pre>
'''<br> (iii) to unload (deactivate) modules (particular software)'''<br>
 
 
<pre>
module unload
</pre>
 
 
<pre>
module purge
</pre>
''' (iv) to list currently loaded modules'''<br>
 
 
 
<pre>
module list
</pre>

'''<br> (v) to search through all packages within the module system'''<br>
 
<pre>
module available
</pre>
 
 
<pre>
module spider
</pre>
 
 
<pre>
module keyword
</pre>

'''<br> (vi) to provide specific help information for a particular module<br>'''
 
<pre>
module help
</pre>
 
 
<pre>
module whatis
</pre>
 
   
== Elementary Lmod Commands ==
The module commands below can be used interactively (in the shell's current session) as well as in shell scripts, in particular in the sbatch scripts used for submitting computational jobs to SLURM (the workload manager on JUSTUS 2).
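For instance, a minimal sbatch script skeleton could look as follows (a sketch only; the resource requests and the input file name are illustrative placeholders):
<pre>
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --time=00:10:00

# start from a clean module environment, then load the required software
module purge
module load chem/gaussian/g16.C.01

# run the program provided by the loaded module
g16 my_input.com
</pre>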
=== List of available software modules ===
 
<pre>
module available
</pre>
alternatively, in short form
 
 
<pre>
ml av
</pre>
   
=== Module naming convention: Category/Name/Version ===
On JUSTUS 2 (similarly to other bwHPC sites), the software modules are grouped into
several categories:
<!--* [[:Category:Chemistry_software|chem]]-->
* chem
 
<!--* [[:Category:Visualization|vis]]-->
* vis
<!--* [[:Category:Mathematical ecosystems|math]]-->
* math
  +
This makes it easier for
users to get oriented within the module system. For example, the Gaussian 16 program, which
calculates the electronic structure of molecules, is found in the category chem (together with other
programs used by theoretical chemists).
<br>
Each category is further divided according to software packages, and those finally according to software versions.
<br>
The full '''name of a module''' always consists of three parts, '''category, name, and version''', separated by slashes: '''category/name/version'''. Consequently, the
full name of the module with the Gaussian 16 package is '''chem/gaussian/g16.C.01'''. Analogously, the GNU compiler of version 10.2 is addressed as '''compiler/gnu/10.2'''.
<br><br>
See, for example, all modules of category chem with:
 
<pre>
ml av chem
</pre>
 
 
<pre>
---------------------------------------------- /opt/bwhpc/common/modulefiles/Core ----------------------------------------------------------------------------------
chem/adf/2019.304 chem/gaussian/g16.C.01 chem/molpro/2020.1 (D) chem/orca/5.0.1-xtb-6.4.1 chem/tmolex/4.6 (D)
chem/ams/2020.101 chem/gaussview/6.1.1 chem/namd/2.14 chem/orca/5.0.1 (D) chem/turbomole/7.4.1
chem/ams/2020.103 chem/gromacs/2020.2 chem/nbo/6.0.18_i4 chem/quantum_espresso/6.5 chem/turbomole/7.5 (D)
chem/ams/2021.102 (D) chem/gromacs/2020.4 chem/nbo/6.0.18_i8 (D) chem/quantum_espresso/6.7_openmp-5 chem/vasp/5.4.4.3.16052018
chem/cfour/2.1_openmpi chem/gromacs/2021.1 (D) chem/openbabel/3.1.1 chem/quantum_espresso/6.7 (D) chem/vmd/1.9.3
chem/cp2k/7.1 chem/jmol/14.31.3 chem/openmolcas/19.11 chem/schrodinger/2020-2 chem/xtb/6.3.3
chem/cp2k/8.0_devel (D) chem/lammps/stable_3Mar2020 chem/openmolcas/21.06 (D) chem/schrodinger/2021-1 (D) chem/xtb/6.4.1 (D)
chem/dalton/2020.0 chem/molcas/8.4 chem/openmolcas/21.10 chem/siesta/4.1-b4
chem/dftbplus/20.2.1-cpu chem/molden/5.9 chem/orca/4.2.1-xtb-6.3.3 chem/siesta/4.1.5 (D)
chem/gamess/2020.2 chem/molpro/2019.2.3 chem/orca/4.2.1 chem/tmolex/4.5.2
</pre>
or, analogously, all available versions of intel compilers:
<br>
<pre>
ml av compiler/intel
</pre>
<pre>
--------------------------------------------------------------------------------/opt/bwhpc/common/modulefiles/Core--------------------------------------------------------------
compiler/intel/19.0 compiler/intel/19.1 compiler/intel/19.1.2 (D)
</pre>
<br>
 
   
=== Load specific software ===
<pre>
module load <module_name>
</pre>
or, in short,
<pre>
ml <module_name>
</pre>
For example, to load gaussian version 16 one has to run
<pre>
ml chem/gaussian/g16.C.01
</pre>
=== List of the loaded modules ===
<pre>
module list
</pre>
or simply
<pre>
ml
</pre>
 
   
=== Default module version ===
In case there are multiple software versions, one version is always pre-determined as the '''default'''
version. To address the default version, ''version'' can be omitted in the module identifier.
<br>
For example, loading the default intel compiler module is realized via
 
 
 
<pre>
ml compiler/intel
</pre>
<pre>
ml

Currently Loaded Modules:
1) compiler/intel/19.1.2
</pre>
 
   
=== Unload a specific software from the environment ===
<pre>
module unload <module_name>
</pre>
or equivalently
<pre>
ml -<module_name>
</pre>
For example, to unload the previously loaded vasp module chem/vasp/5.4.4.3.16052018, use
<pre>
ml -chem/vasp/5.4.4.3.16052018
</pre>
=== Unload all the loaded modules ===
<pre>
module purge
</pre>
 
or
<pre>
ml purge
</pre>
=== Providing a specific help for a particular module ===
<pre>
module help <module_name>
</pre>
or
<pre>
ml help <module_name>
</pre>
=== Software job examples and batch script templates ===
The majority of the software modules provide examples, including job queueing system examples (batch scripts)
for SLURM. The full path to the directory with the examples is normally contained in the <SOFTWARE_NAME>_EXA_DIR
environment variable. For example, the examples for Gromacs-2021.1 are located in the following directory (after loading the module):
<pre>
ml chem/gromacs/2021.1
</pre>
<pre>
echo $GROMACS_EXA_DIR/
/opt/bwhpc/common/chem/gromacs/2021.1-openmpi-4.0/bwhpc-examples/
</pre>
<pre>
ls $GROMACS_EXA_DIR
GROMACS_TestCaseA Performance-Tuning-and-Optimization-of-GROMACS.pdf README
</pre>
<pre>
ls $GROMACS_EXA_DIR/GROMACS_TestCaseA/
gromacs-2021.1_gpu.slurm gromacs-2021.1.slurm ion_channel.tpr
</pre>
Users may make a copy of these examples and use them as templates for their own job scripts:
<pre>
cp $GROMACS_EXA_DIR/GROMACS_TestCaseA/gromacs-2021.1_gpu.slurm .
</pre>
'''Note: All the batch script examples are fully functional, i.e. the example scripts can be directly submitted into the queuing system to launch a test job.'''
Typically, the scripts launch a short, simple calculation with the given software. Moreover, most of the sbatch scripts contain
general submit instructions, as well as hints specific to the particular program.
   
=== Searching through module names ===
<pre>
module available <module_name>
</pre>
or, in short,
<pre>
ml av <module_name>
</pre>
For example, searching for python modules is realized via
<pre>
ml av python
</pre>
with the following output:
<pre>
----------------------------------------------------------------------------------- /opt/bwhpc/common/modulefiles/Core ------------------------------------------------------------------------------------
devel/python/3.8.3 lib/python_matplotlib/3.2.2_numpy-1.19.0_python-3.8.3 numlib/python_numpy/1.19.0_python-3.8.3 numlib/python_scipy/1.5.0_numpy-1.19.0_python-3.8.3

Use "module spider" to find all possible modules and extensions.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys"
</pre>
 
 
   
=== What does this software do? Command when you don't know the software ===
<pre>
ml whatis <modulename>
</pre>
provides a short description of the software package.
   
=== Finding detailed information about a specific module ===
<pre>
module spider <searching_pattern>
</pre>
or just
<pre>
ml spider <searching_pattern>
</pre>
   
=== Extended searching through entire module system ===
<pre>
module keyword <searching_pattern>
</pre>
For example, to find out which modules contain the fftw library:
<pre>
ml keyword fftw
</pre>
which gives the following info:
<pre>
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

The following modules match your search criteria: "fftw"

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

numlib/mkl: numlib/mkl/2019, numlib/mkl/2020, numlib/mkl/2020.2

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
</pre>
   
== Best Practices when working with modules ==
 
=== Always load modules with the entire module name ===
The software stack is updated regularly. Adding a new software version usually changes which version is marked as the default.
Newer software is not always backwards compatible with existing scripts, workflows, or even input files.
Therefore it is strongly recommended to avoid loading based on just category and software name. Instead, one should always use the entire module name (including the version) to make sure the same module is loaded each time.
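For example, with the vasp module used elsewhere on this page:
<pre>
# may silently load a different version after a software stack update
ml chem/vasp
# always loads exactly this version
ml chem/vasp/5.4.4.3.16052018
</pre>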
   
=== Load only those modules that are needed for the current application ===
Only load modules that are needed for the current script or workflow you are running, to reduce the chance of unexpected behavior caused by module conflicts.

A '''typical error''' sometimes seen on the cluster when loading the vasp module is this:
<pre>
ml compiler/intel/19.1.2
ml mpi/impi/2019.8
ml chem/vasp/5.4.4.3.16052018
</pre>
'''The correct way''' is only:
<pre>
ml chem/vasp/5.4.4.3.16052018
</pre>
   
=== Do not use module commands in .bashrc, .bash_profile etc. scripts ===
Avoid including "module load" commands in your .bashrc or .bash_profile files. As an alternative, create a bash script with the module load commands and source it each time you need to load the modules.
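A minimal sketch of this approach (the file name ''load_my_modules.sh'' and the chosen modules are only an illustration):
<pre>
# load_my_modules.sh - keep the module loads for a project in one place
module purge
module load compiler/intel/19.1.2
module load mpi/impi/2019.8
</pre>
The collection can then be activated on demand with ''source load_my_modules.sh'' instead of polluting .bashrc.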
   
=== Use 'module help' command ===
see [[Software_Modules_Lmod#Providing_a_specific_help_for_a_particular_module]]
   
=== Check content of $<SOFTWARE_NAME>_EXA_DIR folder ===
see [[Software_Modules_Lmod#Software_job_examples_and_batch_script_templates]]
   
=== Use 'ml purge' in sbatch scripts before the first 'ml load' ===
The environment in effect at the time the sbatch, salloc, or srun command is executed is propagated
to the spawned processes, i.e. also to the job script. Consequently, should some module be loaded at the
time the 'sbatch <job-script>' command is executed, its state ("loaded"), as well as the values of
the environment variables it sets, will be propagated with the job.<br><br>
Thus, consider putting the 'ml purge' command as the first module command when designing your job scripts.
This might prevent a variety of module conflict situations.
   
Imagine, for example, the following scenario:
<pre>
On the login node:

ml compiler/intel/19.1.2

salloc --nodes=1 --ntasks-per-node=1

... waiting for the allocation of the resources ...

Once on the compute node execute

ml compiler/gnu/10.2
</pre>
the load of the compiler/gnu/10.2 module on the compute node fails with the following error:
<pre>
Lmod has detected the following error: Cannot load module "compiler/gnu/10.2" because these module(s) are loaded:
compiler/intel

While processing the following module(s):

Module fullname Module Filename
--------------- ---------------
compiler/gnu/10.2 /opt/bwhpc/common/modulefiles/Core/compiler/gnu/10.2.lua

[ul_l_tkz12@login02 ~]$ ml

Currently Loaded Modules:
1) compiler/intel/19.1.2
</pre>
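In a job script the same pitfall is avoided by purging first. A minimal sketch of a job-script header (the resource requests are placeholders):
<pre>
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1

# drop whatever was loaded in the submitting shell
ml purge
ml compiler/gnu/10.2
</pre>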
   
== Useful Extras ==

=== Conflicts between modules ===
Some modules cannot be loaded together at the same time. For example, two different versions of the same package cannot
be activated simultaneously. The modules may have this functionality already built in. In such circumstances, Lmod, during the loading, either
prints an error message and no module is loaded, or the module is reloaded - the old module is unloaded and only the new module becomes activated.<br><br>
'''Example of two versions of the intel compiler - module reload:'''
<pre>
ml compiler/intel/19.1
</pre>
<pre>
ml

Currently Loaded Modules:
1) compiler/intel/19.1
</pre>
<pre>
ml compiler/intel/19.1.2

The following have been reloaded with a version change:
1) compiler/intel/19.1 => compiler/intel/19.1.2
</pre>
<pre>
ml

Currently Loaded Modules:
1) compiler/intel/19.1.2
</pre>
<br><br>
'''Example of two different compilers, intel and gnu, triggering a module conflict with an error during the load of gnu - the new module is not loaded:'''
<pre>
ml compiler/intel/19.1.2
</pre>
<pre>
ml

Currently Loaded Modules:
1) compiler/intel/19.1.2
</pre>
<pre>
ml compiler/gnu/10.2
Lmod has detected the following error: Cannot load module "compiler/gnu/10.2" because these module(s) are loaded:
compiler/intel

While processing the following module(s):

Module fullname Module Filename
--------------- ---------------
compiler/gnu/10.2 /opt/bwhpc/common/modulefiles/Core/compiler/gnu/10.2.lua
</pre>
<pre>
ml

Currently Loaded Modules:
1) compiler/intel/19.1.2
</pre>
<br>
'''Solution:''' the intel compiler must be unloaded prior to the load of the gnu module.
<pre>
ml -compiler/intel/19.1.2
</pre>
<pre>
ml compiler/gnu/10.2
</pre>
<pre>
ml

Currently Loaded Modules:
1) compiler/gnu/10.2
</pre>
   
=== Module dependencies (why there is no mpi module?) ===
Some modules depend on other modules. Typically, many modules depend on an MPI library, an MPI library depends on a compiler, etc. The user usually does not need
to care about these fundamental dependencies; the majority of modules automatically take care of loading all necessary packages they depend on.
However, there is an eminent exception - the MPI library. While most of the installed parallel applications exist for only one compiler-MPI combination,
there is a variety of MPI libraries of the same version built with different compilers. For example, there are two sets of OpenMPI 4.0 modules, for the intel and gnu compilers.
Thus, a user who wants to load a specific MPI library must choose (load) a particular compiler prior to the MPI module load. Note that the MPI modules also remain
"invisible" to the "module av <mpi_name>" command until a compiler is loaded. This is due to the module hierarchy of Lmod. More details about the hierarchy
are below in [[Software_Modules_Lmod#Semi_hierarchical_layout_of_modules_on_JUSTUS_2]]
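E.g., loading an MPI module is thus a two-step operation (module versions as in the listing below):
<pre>
ml compiler/intel/19.1
ml mpi/impi/2019.7
</pre>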
   
==== Consequences of the partial module hierarchy for mpi modules ====
MPI modules remain invisible to the user (via ''module avail'') until some compiler module has been loaded. Once the compiler module has been activated, the corresponding mpi modules, i.e. those built with the particular compiler, become visible.<br>
<br>
E.g., with an initially empty list of loaded modules, the module command
<pre>
module avail
</pre>
or its shorthand analogue
<pre>
ml av
</pre>
displays no mpi module available. After running
<pre>
ml compiler/intel/19.1
ml av
</pre>
mpi packages compatible with the intel 19.1 compiler become visible
<pre>
------------ /opt/bwhpc/common/modulefiles/Compiler/intel/19.1 --------------------
mpi/impi/2019.7 mpi/openmpi/4.0
</pre>
in the list of the available software.
   
=== Online user guide of Lmod ===
The complete user guide can be found on the Lmod website: [https://lmod.readthedocs.io/en/latest/010_user.html]
=== Additional Module System tasks ===
Lmod offers more than 25 sub-commands plus various options to manage the modulefile system installed on JUSTUS 2; see, e.g., the output of the "module --help" command. The large majority of users will only ever use a couple of them. A complete list of module sub-commands can be displayed by entering the "module --help" command or found in the [https://lmod.readthedocs.io/en/latest/010_user.html Lmod online documentation]. The following text lists only a couple of them.
   
== Other topics ==

=== Which shells support module commands? ===
So far, Bash is the only supported shell on JUSTUS 2 that interprets module commands.
 
   
=== Semi hierarchical layout of modules on JUSTUS 2 ===

==== Module hierarchy in Lmod ====
The software modules on JUSTUS 2 are organized in a "semi" hierarchical structure. This is slightly different from what can be seen on other HPC systems with a "full" hierarchical structure. Typical systems with a full hierarchy put compiler modules (i.e., intel, gcc) at the uppermost (Core) level, dependent libraries (e.g., MPI) on the second level, and further dependent libraries on a third level. As a consequence, not all the modules contained in the module system are initially visible, namely the modules placed in the second and third layer. Only after loading a compiler module do the modules of the second layer directly depending on that compiler become available. Similarly, loading an MPI module will make the modules of the third layer depending on the loaded MPI library visible.<br>

==== Semi hierarchy of software stack on JUSTUS 2 ====
JUSTUS 2 adopted the hierarchical structure of the module layout only partially. In particular, only the "Core" and the "second" level are present, and only mpi modules are contained in the second level. All other modules, for example those from the "chem" sub-category such as ''vasp'', ''turbomole'', or ''gaussian'', or those located in the "numlib" sub-category such as ''mkl'' or ''python_numpy'', are embodied in the "Core" level.
<br>
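The visibility mechanism behind this layout is the MODULEPATH variable: loading a compiler module prepends the compiler-specific branch of the module tree to it. A sketch (the paths follow the pattern of the listings on this page):
<pre>
echo $MODULEPATH
/opt/bwhpc/common/modulefiles/Core

ml compiler/intel/19.1
echo $MODULEPATH
/opt/bwhpc/common/modulefiles/Compiler/intel/19.1:/opt/bwhpc/common/modulefiles/Core
</pre>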
==== Module dependency ====
The adopted hierarchy model is not the only tool handling module dependencies. As a matter of fact, most of the modules on JUSTUS 2 require functionalities provided by other modules, albeit located in the "Core" level. Such provisioning is implemented in a modulefile either automatically, without the need for any action from the user (the dependent modulefile, while loading, loads all additional modules automatically), or the dependent modulefile, while loading, informs the user about the necessity to pre-load additional modules if those have not been activated yet (in this case the user must repeat the loading operation). Which of the solutions is applied rests with the decision of the person who built the particular module.<br>
<br>
An example of a module with automated pre-loading implemented is the ''orca'' module. With an initially empty list of loaded modules, i.e.
<pre>
ml
</pre>
shows
<pre>
No modules loaded
</pre>
, the command sequence
<pre>
ml chem/orca
ml
</pre>
shows
 
<pre>
Currently Loaded Modules:
1) compiler/intel/19.1 2) chem/orca/4.2.1
</pre>
I.e., loading of the ''intel'' compiler is built into the ''orca'' module.
 
   
=== Complete list of Lmod options and sub-commands ===
The whole list of module options and all available sub-commands can be displayed by running
<pre>
man module
</pre>
or
<pre>
module --help
</pre>
<br>
 
   
=== How do Modules work? ===
The default shell on the bwHPC clusters is bash, so explanations and examples will be shown for bash. In general, programs cannot modify the environment of the shell they are being run from, so how can the module command do exactly that?
<br>
The answer is that the module command is implemented as a shell function defined in your environment. You can view its content using:
<pre>
type module
</pre>
 
and you will get the following result:
<pre>
type module
module is a function
module ()
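{
    # (sketch: the rest of the function body was elided in this revision; on a
    #  typical Lmod installation the function simply evaluates the shell code
    #  emitted by the lmod binary, approximately as follows)
    eval $($LMOD_CMD bash "$@")
}
</pre>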
 
<br>
----
[[Category:bwForCluster_JUSTUS_2|JUSTUS 2]]

[[#top|Back to top]]
Latest revision as of 10:07, 23 April 2024

Software Module System - Lmod

Contents

1 Preface

This guide provides a general overview and introduction to the software system management via Lmod on JUSTUS 2 for new users as well as for experienced users coming, e.g. from sites planted with different Environment modules systems.

2 Scientific Software management through the module system Lmod

The following sections covers the basic module commands needed to find and load the scientific applications installed on the JUSTUS 2.

JUSTUS 2 runs the Linux operating system. The standard Linux packages are installed on the front-end nodes (login and visualization nodes). The scientific software is accessible via the so-called module system.
To find and load a scientific application, one needs to use module commands. For example, the following command sequence:

module load chem/gaussian/g16.C.01
module list
module help chem/gaussian/g16.C.01
cp $GAUSSIAN_EXA_DIR/bwforcluster-gaussian-example.sbatch .
1. loads the module with the gaussian software package of version 16, revision C.01
2. prints out the list of currently loaded modules
3. provides the user help for the particular gaussian module
4. copies the template batch script specifically designed for submitting g16 jobs to the SLURM workload manager on JUSTUS 2.


2.1 Why do we use a module system? Modules load scientific software

The module system on JUSTUS 2 is managed by Lmod (https://lmod.readthedocs.io/en/latest/). The module system incorporates the majority of the computational software available; this includes, among others, compilers, MPI libraries, numerical libraries, computational chemistry packages, and Python-specific libraries.

The programs managed by the module system are not usable by default. A module has to be "loaded" before its software becomes executable.

The use of the module system provides, among others, the following functionalities:

1. When loading a module, it automatically sets the appropriate environment variables required by the application to run properly.
2. It also takes care of module dependencies. It either loads all additional modules required for the application, or it informs the user if additional dependency modules need to be loaded manually.
3. It prevents loading modules that could conflict and cause instability or unexpected behavior.

One of the main functions of Lmod is module load, which makes the variety of software packages pre-installed on the cluster accessible. A single command is all that is needed:

module load <module_name>

The activation is realized by dynamically modifying the user's shell environment. In the simplest case this means adding the bin directories of the specific software to the PATH environment variable. Typically, Lmod modifies PATH and LD_LIBRARY_PATH, and it also sets new variables such as <SOFTWARE_NAME>_EXA_DIR, which contains the path to a directory with examples for the specific software.

Example: compare the content of the $PATH environment variable before and after loading the gaussian module.
Before the load of the gaussian module:

echo $PATH
/home/software/common/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
which g16
/usr/bin/which: no g16 in (/home/software/common/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin)
ml chem/gaussian

After gaussian is loaded:

echo $PATH
/.../chem/gaussian/g16.C.01/x86_64-Intel-avx2-source/g16/bsd:/.../chem/gaussian/g16.C.01/x86_64-Intel-avx2-source/g16:/home/software/common/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
which g16
/.../chem/gaussian/g16.C.01/x86_64-Intel-avx2-source/g16/g16

2.2 Basic functions of Lmod and commands


The module system has other useful capabilities besides just managing the environment.

(i) to list available software

module available


(ii) to load (activate) modules (particular software)

module load


(iii) to unload (deactivate) modules (particular software)

module unload
module purge

(iv) to list currently loaded modules

module list


(v) to search through all packages within the module system

module available
module spider
module keyword


(vi) to provide specific help information for a particular module

module help
module whatis

2.3 Elementary Lmod Commands

The module commands below can be used interactively (in the shell's current session) as well as in shell scripts, in particular in the sbatch scripts used to submit computational jobs to SLURM (the workload manager on JUSTUS 2).

2.3.1 List of available software modules

module available

alternatively also in a short form

ml av

2.3.2 Module naming convention: Category/Name/Version

On JUSTUS 2 (similarly to other bwHPC sites), the software modules are grouped into several categories:

  • chem
  • compiler
  • devel
  • lib
  • numlib
  • phys
  • system
  • vis
  • math

This makes it easier for users to get oriented within the module system. For example, the Gaussian 16 program, which calculates the electronic structure of molecules, is found in the category chem (together with other programs used by theoretical chemists).
Each category is further divided according to software packages, and those finally according to software versions.
The full name of a module always consists of three parts, category, name, and version, separated by slashes: category/name/version. Consequently, the full name of the module with the Gaussian 16 package is chem/gaussian/g16.C.01. Analogously, the gnu compiler of version 10.2 is addressed as compiler/gnu/10.2.

See, for example, all modules of category chem with:

ml av chem
---------------------------------------------- /opt/bwhpc/common/modulefiles/Core----------------------------------------------------------------------------------
   chem/adf/2019.304               chem/gaussian/g16.C.01             chem/molpro/2020.1        (D)    chem/orca/5.0.1-xtb-6.4.1                 chem/tmolex/4.6            (D)
   chem/ams/2020.101               chem/gaussview/6.1.1               chem/namd/2.14                   chem/orca/5.0.1                    (D)    chem/turbomole/7.4.1
   chem/ams/2020.103               chem/gromacs/2020.2                chem/nbo/6.0.18_i4               chem/quantum_espresso/6.5                 chem/turbomole/7.5         (D)
   chem/ams/2021.102        (D)    chem/gromacs/2020.4                chem/nbo/6.0.18_i8        (D)    chem/quantum_espresso/6.7_openmp-5        chem/vasp/5.4.4.3.16052018
   chem/cfour/2.1_openmpi          chem/gromacs/2021.1         (D)    chem/openbabel/3.1.1             chem/quantum_espresso/6.7          (D)    chem/vmd/1.9.3
   chem/cp2k/7.1                   chem/jmol/14.31.3                  chem/openmolcas/19.11            chem/schrodinger/2020-2                   chem/xtb/6.3.3
   chem/cp2k/8.0_devel      (D)    chem/lammps/stable_3Mar2020        chem/openmolcas/21.06     (D)    chem/schrodinger/2021-1            (D)    chem/xtb/6.4.1             (D)
   chem/dalton/2020.0              chem/molcas/8.4                    chem/openmolcas/21.10            chem/siesta/4.1-b4
   chem/dftbplus/20.2.1-cpu        chem/molden/5.9                    chem/orca/4.2.1-xtb-6.3.3        chem/siesta/4.1.5                  (D)
   chem/gamess/2020.2              chem/molpro/2019.2.3               chem/orca/4.2.1                  chem/tmolex/4.5.2

or, analogously, all available versions of intel compilers:

ml av compiler/intel
--------------------------------------------------------------------------------/opt/bwhpc/common/modulefiles/Core--------------------------------------------------------------
   compiler/intel/19.0    compiler/intel/19.1    compiler/intel/19.1.2 (D)

2.3.3 Load specific software

module load <module_name>

or shortly

ml <module_name>

For example, to load Gaussian version 16 one has to run

ml chem/gaussian/g16.C.01

2.3.4 List of the loaded modules

module list

or simply

ml

2.3.5 Default module version

If there are multiple versions of a software package, one version is always pre-determined as the default version. To address the default version, the version can be omitted from the module identifier. For example, loading the default intel compiler module is realized via

ml compiler/intel
ml

Currently Loaded Modules:
  1) compiler/intel/19.1.2

2.3.6 Unload a specific software from the environment

module unload <module_name>

or equivalently

ml -<module_name>

For example, to unload the previously loaded vasp module chem/vasp/5.4.4.3.16052018, use

ml -chem/vasp/5.4.4.3.16052018

2.3.7 Unload all the loaded modules

module purge

or

ml purge

2.3.8 Providing a specific help for a particular module

module help <module_name>

or

ml help <module_name>
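
For example, to display the help text for the Gaussian module used earlier:

ml help chem/gaussian/g16.C.01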

2.3.9 Software job examples and batch script templates

The majority of the software modules provide examples, including job queueing system examples (batch scripts) for SLURM. The full path to the directory with the examples is normally contained in the <SOFTWARE_NAME>_EXA_DIR environment variable. For example, the examples for Gromacs 2021.1 are located in (after loading the module):

ml chem/gromacs/2021.1
echo $GROMACS_EXA_DIR/
/opt/bwhpc/common/chem/gromacs/2021.1-openmpi-4.0/bwhpc-examples/
ls $GROMACS_EXA_DIR
GROMACS_TestCaseA  Performance-Tuning-and-Optimization-of-GROMACS.pdf  README
ls $GROMACS_EXA_DIR/GROMACS_TestCaseA/
gromacs-2021.1_gpu.slurm  gromacs-2021.1.slurm  ion_channel.tpr

Users may make a copy of these examples and use them as templates for their own job scripts:

cp $GROMACS_EXA_DIR/GROMACS_TestCaseA/gromacs-2021.1_gpu.slurm .

Note: All the batch script examples are fully functional, i.e. the example scripts can be submitted directly to the queueing system to launch a test job. Typically, the scripts launch a short, simple calculation with the given software. Moreover, most of the sbatch scripts contain general submit instructions as well as hints specific to the particular program.

2.3.10 Searching through module names

module available <module_name>

or shortly

ml av <module_name>

For example, searching for python modules is done via

ml av python

with the following output:

----------------------------------------------------------------------------------- /opt/bwhpc/common/modulefiles/Core ------------------------------------------------------------------------------------
   devel/python/3.8.3    lib/python_matplotlib/3.2.2_numpy-1.19.0_python-3.8.3    numlib/python_numpy/1.19.0_python-3.8.3    numlib/python_scipy/1.5.0_numpy-1.19.0_python-3.8.3

Use "module spider" to find all possible modules and extensions.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys"

2.3.11 What does this software do? Command for when you don't know the software

ml whatis <modulename>

provides a short description of the software package.
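
For example, reusing the Gaussian module from above (the exact description text depends on the module):

ml whatis chem/gaussian/g16.C.01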

2.3.12 Finding detailed information about a specific module

module spider <searching_pattern>

or just

ml spider <searching_pattern>
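
For example, to list all installed versions of gromacs together with notes on how to load them (gromacs serves here only as an illustration; any package name or a part of it works as the pattern):

ml spider gromacs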

2.3.13 Extended searching through the entire module system

module keyword <searching_pattern>

For example, to find out which modules contain the fftw library:

ml keyword fftw

which gives the following info:

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

The following modules match your search criteria: "fftw"
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

  numlib/mkl: numlib/mkl/2019, numlib/mkl/2020, numlib/mkl/2020.2

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

2.4 Best Practices when working with modules

2.4.1 Always load modules with the entire module name

The software stack is updated regularly. Adding a new software version usually changes which version is marked as the default. Newer software is not always backwards compatible with existing scripts, workflows, or even input files. Therefore it is strongly recommended to avoid loading based on just the category and software name. Instead, one should always use the entire module name (including the version) to make sure the same module is loaded each time.
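
A minimal illustration with the gnu compiler modules from this page (the comments mark the two alternatives):

ml compiler/gnu          # discouraged: resolves to whatever version is currently the default
ml compiler/gnu/10.2     # recommended: always loads this exact version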

2.4.2 Load only those modules that are needed for the current application

Only load modules that are needed for the current script or workflow you are running, to reduce the chance of unexpected behavior caused by module conflicts.

A typical mistake sometimes seen on the cluster when loading the vasp module is this:

ml compiler/intel/19.1.2
ml mpi/impi/2019.8
ml chem/vasp/5.4.4.3.16052018

The correct way is only:

ml chem/vasp/5.4.4.3.16052018

2.4.3 Do not use module commands in .bashrc, .bash_profile etc. scripts

Avoid including "module load" commands in your .bashrc or .bash_profile files. As an alternative, create a bash script with the module load commands and source it each time to load the modules needed.
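
A minimal sketch of this approach, assuming a file named load_modules.sh (the file name and the chosen modules are just an illustration):

# load_modules.sh -- collect the module commands for your workflow here
ml purge
ml compiler/gnu/10.2
ml chem/gromacs/2021.1

The modules are then activated in the current shell session with:

source load_modules.sh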

2.4.4 Use 'module help' command

see Software_Modules_Lmod#Providing_a_specific_help_for_a_particular_module

2.4.5 Check content of $<SOFTWARE_NAME>_EXA_DIR folder

see Software_Modules_Lmod#Software_job_examples_and_batch_script_templates

2.4.6 Use 'ml purge' in sbatch scripts before the first 'ml load'

The environment in effect at the time the sbatch, salloc, or srun command is executed is propagated to the spawned processes, i.e. also to the job script. Consequently, if some module is loaded at the time the 'sbatch <job-script>' command is executed, its state ("loaded"), as well as the values of the environment variables it set, will be propagated with the job.

Thus, consider putting the 'ml purge' command as the first module command when designing your job scripts. This can prevent a variety of module conflict situations.

Imagine, for example, the following scenario.

On the login node:

ml compiler/intel/19.1.2
salloc --nodes=1 --ntasks-per-node=1

... waiting for the allocation of the resources ...
Once on the compute node, execute

ml compiler/gnu/10.2

the load of the compiler/gnu/10.2 module on the compute node fails with the following error:

Lmod has detected the following error:  Cannot load module "compiler/gnu/10.2" because these module(s) are loaded:
   compiler/intel

While processing the following module(s):
    Module fullname    Module Filename
    ---------------    ---------------
    compiler/gnu/10.2  /opt/bwhpc/common/modulefiles/Core/compiler/gnu/10.2.lua

[ul_l_tkz12@login02 ~]$ ml

Currently Loaded Modules:
  1) compiler/intel/19.1.2
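
A minimal job-script sketch that guards against such inherited modules (the #SBATCH values and the chosen module are placeholders for illustration):

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1

ml purge                  # discard any modules inherited from the submitting shell
ml chem/gromacs/2021.1    # then load exactly what this job needs

# ... application start-up commands follow here ...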

2.5 Useful Extras

2.5.1 Conflicts between modules

Some modules cannot be loaded together at the same time. For example, two different versions of the same package cannot be activated simultaneously. The modulefiles may have this conflict handling built in. In such circumstances Lmod, during the loading, either prints an error message and loads no module, or reloads the module: the old module is unloaded and only the new module becomes active.

Example of two versions of the intel compiler - module reload:

ml compiler/intel/19.1
ml

Currently Loaded Modules:
  1) compiler/intel/19.1
ml compiler/intel/19.1.2 

The following have been reloaded with a version change:
  1) compiler/intel/19.1 => compiler/intel/19.1.2
ml

Currently Loaded Modules:
  1) compiler/intel/19.1.2



An example with two different compilers, intel and gnu: the load of gnu triggers a module conflict error and the new module is not loaded:

ml compiler/intel/19.1.2
ml

Currently Loaded Modules:
  1) compiler/intel/19.1.2
ml compiler/gnu/10.2 
Lmod has detected the following error:  Cannot load module "compiler/gnu/10.2" because these module(s) are loaded:
   compiler/intel

While processing the following module(s):
    Module fullname    Module Filename
    ---------------    ---------------
    compiler/gnu/10.2  /opt/bwhpc/common/modulefiles/Core/compiler/gnu/10.2.lua
ml

Currently Loaded Modules:
  1) compiler/intel/19.1.2


Solution: the intel compiler must be unloaded prior to loading the gnu module.

ml -compiler/intel/19.1.2
ml compiler/gnu/10.2
ml

Currently Loaded Modules:
  1) compiler/gnu/10.2

2.5.2 Module dependencies (why is there no mpi module?)

Some modules can depend on other modules. Typically, many modules depend on an MPI library, an MPI library depends on a compiler, etc. The user does not need to care about these fundamental dependencies: the majority of modules automatically take care of loading all necessary packages they depend on. However, there is one eminent exception: the MPI library. While most of the installed parallel applications exist for only one compiler-MPI combination, there are various MPI libraries of the same version built with different compilers. For example, there are two sets of OpenMPI 4.0 modules, one for the intel and one for the gnu compilers. Thus, a user who wants to load a specific MPI module must choose (load) a particular compiler prior to the MPI module load. Note that the MPI modules also remain "invisible" to the "module av <mpi_name>" command until a compiler is loaded. This is due to the module hierarchy of Lmod. More details about the hierarchy are given below in Software_Modules_Lmod#Semi_hierarchical_layout_of_modules_on_JUSTUS_2
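
For example, to make Intel MPI available (the versions are the ones shown in the example further below):

ml compiler/intel/19.1    # the compiler must be loaded first
ml mpi/impi/2019.7        # only now is this MPI module visible and loadable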


2.5.2.1 Consequences of the partial module hierarchy for mpi modules

MPI modules remain invisible to the user (via module avail) until some compiler module has been loaded. Once a compiler module has been activated, the corresponding MPI modules, i.e. those built with that particular compiler, become visible.

E.g., with an initially empty list of loaded modules, the module command

module avail

or its shorthand analogue

ml av

displays no available MPI module. After running

ml compiler/intel/19.1
ml av

MPI packages compatible with the intel 19.1 compiler become visible

------------ /opt/bwhpc/common/modulefiles/Compiler/intel/19.1 --------------------
   mpi/impi/2019.7    mpi/openmpi/4.0

in the list of the available software.

2.5.3 Online user guide of Lmod

The complete user guide can be found on the Lmod website: https://lmod.readthedocs.io/en/latest/

2.5.4 Additional Module System tasks

Lmod offers more than 25 sub-commands plus various options to manage the modulefile system installed on JUSTUS 2. The large majority of users will only ever use a couple of them. A complete list of module sub-commands can be displayed by entering the "module --help" command or found in the Lmod online documentation.

2.6 Other topics

2.6.1 Which shells support module commands?

So far, bash is the only shell supported on JUSTUS 2 for interpreting module commands.

2.6.2 Semi hierarchical layout of modules on JUSTUS 2

2.6.2.1 Module hierarchy in Lmod

The structure of software modules on JUSTUS 2 exploits a "semi" hierarchical layout. This is slightly different from what can be seen on other HPC systems with a "full" hierarchical structure. Typical systems with a full hierarchy put compiler modules (e.g., intel, gcc) in the uppermost (Core) level, dependent libraries (e.g., MPI) on the second level, and further dependent libraries on a third level. As a consequence, not all the modules contained in the module system are initially visible, namely the modules placed in the second and third layers. Only after loading a compiler module do the modules of the second layer that directly depend on that particular compiler become available. Similarly, loading an MPI module makes the modules of the third layer that depend on the loaded MPI library visible.

2.6.2.2 Semi hierarchy of software stack on JUSTUS 2

JUSTUS 2 adopted the hierarchical structure of the module layout only partially. In particular, there are only the "Core" and the "second" levels present, and only MPI modules are contained in the second level. All other modules, e.g. those from the "chem" category such as vasp, turbomole, or gaussian, or those located in the "numlib" category such as mkl or python_numpy, are placed in the "Core" level.

2.6.2.3 Module dependency

The adopted hierarchy model is not the only tool for handling module dependencies. In fact, most modules on JUSTUS 2 require functionality from other modules, even those located in the "Core" level. Such provisioning is implemented in a modulefile in one of two ways: either automatically, without any action required from the user (the dependent modulefile loads all additional modules while loading), or the dependent modulefile informs the user while loading that additional modules must be pre-loaded if they have not been activated yet (in this case the user must load those modules and repeat the loading operation). Which solution is applied rests with the person who built the particular module.

An example of a module with automated pre-loading implemented is the orca module. With an initially empty list of loaded modules, i.e. when

ml

shows

No modules loaded

the command sequence

ml chem/orca
ml

shows

Currently Loaded Modules:
  1) compiler/intel/19.1   2) chem/orca/4.2.1 

I.e., loading of the intel compiler is built into the orca module.

2.6.3 Complete list of Lmod options and sub-commands

The whole list of module options and all available sub-commands can be displayed by running

man module

or

module --help

2.6.4 How do Modules work?

The default shell on the bwHPC clusters is bash, so explanations and examples will be shown for bash. In general, programs cannot modify the environment of the shell they are being run from, so how can the module command do exactly that?
The module command is not a program, but a bash function. You can view its definition using:

type module

and you will get the following result:

type module
module is a function
module ()
{
    eval $($LMOD_CMD bash "$@");
    [ $? = 0 ] && eval $(${LMOD_SETTARG_CMD:-:} -s sh)
}

In this function, lmod is called. Its output to stdout is then executed inside your current shell using the bash-internal eval command. As a consequence, all output that you actually see from the module command is transmitted via stderr (output handle 2), because stdout is consumed by eval.
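
You can observe this mechanism by calling lmod directly instead of through the function (a hedged illustration; the exact output depends on the Lmod version and the module):

$LMOD_CMD bash load compiler/gnu/10.2 2> /dev/null
# stdout contains eval-able bash code, e.g. PATH assignments such as
#   PATH="...";export PATH;
# while any human-readable messages go to stderr (discarded above)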

