Gromacs - bwHPC Wiki


From bwHPC Wiki
Description          Content
module load          chem/gromacs
Availability         Up-to-date availability
License              Free software, distributed under the GNU Lesser General Public License. more...
Citing               Cite (some of) the GROMACS papers when you publish your results (introduction page iv, Citation information)
Links                Gromacs Homepage | Gromacs Manual
Graphical Interface  No
Included modules     compiler/intel | mpi/impi | numlib/mkl | devel/cmake
Plugins              Gridcount 1.4
News                 Update for Gromacs Series 5.1 (new binary files and examples)

1 Description

GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.

GROMACS supports all the usual algorithms you expect from a modern molecular dynamics implementation (check the online reference or manual for details), but it also offers quite a few features that make it stand out from the competition.

Much more detailed information about Gromacs is available on the About Gromacs website.

2 Versions and Availability

A list of versions currently available on all bwHPC-C5 clusters can be obtained from the Cluster Information System (CIS).

On the command line of any bwHPC cluster, you can list the available versions with the command 'module avail chem/gromacs'.

-------------- /opt/bwhpc/common/modulefiles --------------
chem/gromacs/4.5.3_gridcount-1.4     chem/gromacs/5.0
chem/gromacs/4.6.2     chem/gromacs/5.0.2
chem/gromacs/4.6.5     chem/gromacs/5.0.5

3 License

GROMACS is free software, distributed under the GNU Lesser General Public License (LGPL) Version 2.1. GROMACS includes optional code covered by several different licenses. The GROMACS package in its entirety may be copied, modified or distributed according to the conditions described in the documentation (see link below), which, in the interest of clarity and completeness, also notes the individual parts of GROMACS that can be used under their respective licenses.
See $GROMACS_HOME_DIR/share/gromacs/COPYING.

4 Usage

4.1 Loading the module

You can load the default version of Gromacs with the command
'module load chem/gromacs'.

$ module avail chem/gromacs
------------------ /opt/bwhpc/common/modulefiles -----------------
chem/gromacs/4.5.3_gridcount-1.4     chem/gromacs/5.0    
chem/gromacs/4.6.2     chem/gromacs/5.0.2
chem/gromacs/4.6.5     chem/gromacs/5.0.5
$ module load chem/gromacs
$ module list
Currently Loaded Modulefiles:
  1) compiler/intel/15.0(default)   3) numlib/mkl/11.2.3(default)
  2) mpi/openmpi/1.8-intel-15.0     4) chem/gromacs/5.1.2(default)

Here, the "default" version 5.1.2 was loaded.
The module automatically loads the modules it needs to function (e.g. compiler, MPI, numerical libraries). If loading the module fails, check whether you have already loaded one of those modules in a version other than the one Gromacs requires.
If you wish to load a specific (older) version (if available), use 'module load chem/gromacs/version' with the version you desire.

$ module load chem/gromacs/4.5.3_gridcount-1.4
Please do not use this module if you do not own a valid GROMACS group license.
Please cite GROMACS in your publications according to GROMACS documentation.
Please read 'module help chem/gromacs/4.5.3_gridcount-1.4' before using GROMACS.

Gromacs version 4.5.3 with Gridcount 1.4 is now loaded.

4.2 Program Binaries

You can find the Gromacs binaries in the 'bin' subdirectory of the Gromacs installation directory.

4.2.1 Gromacs 5.0 series


After loading the Gromacs module (module load chem/gromacs/'version'), the binary path is added to your $PATH and stored in the $GROMACS_BIN_DIR environment variable.
Example with version 5.0.2:

$ ls -x $GROMACS_BIN_DIR
do_dssp_mpi                    do_dssp_mpi_d      editconf_mpi
editconf_mpi_d               eneconv_mpi                    eneconv_mpi_d      g_anadock_mpi
g_anadock_mpi_d              g_anaeig_mpi                   g_anaeig_mpi_d     g_analyze_mpi
g_analyze_mpi_d              g_angle_mpi                    g_angle_mpi_d      g_bar_mpi
g_bar_mpi_d                  g_bond_mpi                     g_bond_mpi_d       g_bundle_mpi
g_bundle_mpi_d               g_chi_mpi                      g_chi_mpi_d        g_cluster_mpi
g_cluster_mpi_d              g_clustsize_mpi                g_clustsize_mpi_d  g_confrms_mpi
g_confrms_mpi_d              g_covar_mpi                    g_covar_mpi_d      g_current_mpi
g_current_mpi_d              g_density_mpi                  g_density_mpi_d    g_densmap_mpi
g_densmap_mpi_d              g_densorder_mpi                g_densorder_mpi_d  g_dielectric_mpi
g_dielectric_mpi_d           g_dipoles_mpi                  g_dipoles_mpi_d    g_disre_mpi
g_disre_mpi_d                g_dist_mpi                     g_dist_mpi_d       g_dos_mpi
g_dos_mpi_d                  g_dyecoupl_mpi                 g_dyecoupl_mpi_d   g_dyndom_mpi
g_dyndom_mpi_d               genbox_mpi                     genbox_mpi_d       genconf_mpi
genconf_mpi_d                g_enemat_mpi                   g_enemat_mpi_d     g_energy_mpi
g_energy_mpi_d               genion_mpi                     genion_mpi_d       genrestr_mpi
genrestr_mpi_d               g_filter_mpi                   g_filter_mpi_d     g_gyrate_mpi
g_gyrate_mpi_d               g_h2order_mpi                  g_h2order_mpi_d    g_hbond_mpi
g_hbond_mpi_d                g_helix_mpi                    g_helix_mpi_d      g_helixorient_mpi
g_helixorient_mpi_d          g_hydorder_mpi                 g_hydorder_mpi_d   g_lie_mpi
g_lie_mpi_d                  g_mdmat_mpi                    g_mdmat_mpi_d      g_mindist_mpi
g_mindist_mpi_d              g_morph_mpi                    g_morph_mpi_d      g_msd_mpi
g_msd_mpi_d                  gmxcheck_mpi                   gmxcheck_mpi_d     gmx-completion.bash
gmx-completion-gmx_mpi.bash  gmx-completion-gmx_mpi_d.bash  gmxdump_mpi        gmxdump_mpi_d
gmx_mpi                      gmx_mpi_d                      GMXRC              GMXRC.bash
[...] some lines removed here ... 
trjcat_mpi                   trjcat_mpi_d                   trjconv_mpi        trjconv_mpi_d
trjorder_mpi                 trjorder_mpi_d              xpm2ps_mpi

4.2.2 Gromacs 5.1 series

After loading the Gromacs module (module load chem/gromacs/'version'), the binary path is added to your $PATH and stored in the $GROMACS_BIN_DIR environment variable.
Example with version 5.1.2:

$ ls -x $GROMACS_BIN_DIR
gmx-completion.bash  gmx-completion-gmx_mpi.bash
gmx-completion-gmx_mpi_d.bash  gmx_mpi		    gmx_mpi_d
GMXRC			       GMXRC.bash	    GMXRC.csh

The main binary is now called 'gmx_mpi(_d)'.

4.2.3 Double Precision (MKL) with MPI

All binaries are compiled with MPI support, indicated by the '_mpi' tag in the file name. A '_d' at the end of a file name indicates a double-precision version of the binary, e.g. gmx_mpi_d is the double-precision MPI binary.

4.2.4 Single Precision (MKL) with MPI

Binaries without the '_d' suffix are single precision, still with MPI, e.g. gmx_mpi.
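The naming scheme from the two subsections above can be sketched in shell. The PRECISION variable and the if-logic are purely illustrative; only the binary names gmx_mpi and gmx_mpi_d come from this page:

```shell
# Pick the binary name for the desired precision; all binaries on this
# page are MPI builds ('_mpi'), and '_d' selects double precision.
PRECISION=double                 # hypothetical switch: 'single' or 'double'
GMX_BIN=gmx_mpi                  # single precision, MPI
if [ "$PRECISION" = "double" ]; then
    GMX_BIN=${GMX_BIN}_d         # double precision, MPI
fi
echo "Using binary: $GMX_BIN"
```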

4.2.5 Including Gridcount/without MPI

Gridcount is an analysis tool for Gromacs that creates 3D (number) densities from molecular dynamics trajectories. Typically, this is used to look at the density of water or ions near proteins or in channels and pores. The package consists of the programs 'g_ri3Dc' and 'a_ri3Dc' (plus 'a_gridcalc').

See here for more information about Gridcount and other plugins for Gromacs.

Gridcount is included in the module 'chem/gromacs/4.5.3_gridcount-1.4' and is available on the bwUniCluster only.

$ module list
Currently Loaded Modulefiles:
  1) compiler/intel/13.1                3) numlib/mkl/11.0.5
  2) mpi/openmpi/1.6.5-intel-13.1       4) chem/gromacs/4.5.3_gridcount-1.4
a_gridcalc  a_ri3Dc  g_ri3Dc

$PATH and $LD_LIBRARY_PATH are expanded to include the $GROMACS_HOME_DIR/gridcount ($GRIDCOUNT_BIN_DIR) binaries when you load the above module.
This version is known to be more stable than recent MKL versions of Gromacs. If you run into problems with large numbers of calculations in newer versions, try this one!

4.3 Hints for using Gromacs

Taken from the justus-gromacs-example.moab file.

4.3.1 Parallel Execution

Increase the number of workers step by step (1, 2, 4 and 8 cores) and check for a reasonable speedup.
If the speedup is insufficient, it is better to run several smaller jobs with reasonable speedup side by side.
Please do not forget to adjust the memory specification when changing the number of workers.


Our Gromacs version is shared-memory parallel only. Therefore only one-node jobs make any sense.
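The scaling advice above can be sketched as a small submission loop. The 'nodes=1:ppn=...' resource string and the script name mygromacsjob.moab are illustrative (the script name is the one used in the examples section of this page), and 'echo' is left in front of msub so nothing is actually submitted:

```shell
# Submit the same job with 1, 2, 4 and 8 workers and compare runtimes.
# Remove the leading 'echo' to really submit via msub.
for CORES in 1 2 4 8; do
    echo msub -l nodes=1:ppn=${CORES} mygromacsjob.moab
done
```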

4.3.2 Heap Memory Specification

Specify the real heap memory per job (all workers/cores added together).
Choosing the right memory value requires some experience; monitoring a job interactively can help to estimate the memory consumption. Requesting about 500MB more than required is fine.
Rule of thumb for Gromacs, for serial AND shared-memory parallel jobs:
-l mem == %Mem in com-file + 2000MB
Example: if the memory statement is "%mem=256mw" (== 2048MB), use around -l mem=4000mb as the Moab memory specification.
Please do not request far more memory than required (~2000MB extra is fine for Gromacs jobs). Shared-memory jobs with N workers do NOT require N times, but only around 1.0 - 1.1 times the memory of a corresponding serial job.
The required memory may depend slightly on the number of workers (e.g. the "%Mem =" statement in the Gromacs input file might need to be increased).

  • Current maximum: ~120GB per node (most nodes) / ~500GB per node (segment 3)
  • Current default: 4GB per job
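The rule of thumb above can be checked with plain shell arithmetic; the variable names are illustrative:

```shell
# -l mem = job memory + ~2000MB headroom (rule of thumb from above).
# Example from the text: %mem=256mw; 1 megaword = 8 MB.
MEM_MW=256
MEM_MB=$(( MEM_MW * 8 ))       # 2048 MB
MOAB_MEM=$(( MEM_MB + 2000 ))  # 4048 MB; the text rounds this to -l mem=4000mb
echo "-l mem=${MOAB_MEM}mb"
```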

4.3.3 Stack Memory Specification for bwForCluster

The bwForCluster default stack size (10M) is too low for Gromacs,
so the stack limit is set to 200M, a value suitable for most Gromacs jobs.
Typical Gromacs stack requirements are between 100M and 1000M.
Usually there is no need to change the script default of 200M.
Nevertheless, 200M may still be too low for some Gromacs jobs;
in that case, increasing the value to 1000M may be required.
For the stack specification, see the 'ulimit -s' command below in this script.
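In a job script the stack limit is set with 'ulimit -s' (values in KB). This is a minimal sketch of that step; the fallback message is illustrative, the 200M/1000M values are the ones from the text:

```shell
# 200M = 204800 KB, the script default; use 1024000 KB (1000M) for
# demanding jobs.
ulimit -S -s 204800 2>/dev/null || echo "could not set the stack limit"
ulimit -s    # show the limit now in effect
```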
Total required DISK SPACE for job (all files added together):

Determining an appropriate scratch size can be difficult. One strategy is to run smaller jobs and extrapolate to larger ones; alternatively, run a job in the foreground for some time and monitor the disk usage.

Setting this value requires experience with the scientific program.
The current maximum is around 1.8TB.
Default: 32 GB (shared memory). Specify the required disk space in GB (e.g. -l gres=scratch:60 = 60GB):
-l gres=scratch:60

4.3.4 Job Runtime Specification

The job runtime is specified as wall-clock time (1:00:00 = 1 hour).
The time specification is independent of the number of parallel workers.
Again, it requires experience to set this value appropriately.
It can help to run the job interactively for some time and monitor the convergence; it may also be possible to extrapolate the runtime from smaller test jobs.
Current maximum: See

Advice: Shorter jobs might get higher priorities.
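Inside a Moab job script, the wall-clock request looks like this (the 1-hour value is the example from the text; adjust it for your job):

```shell
# Moab directive inside the job script: wall-clock limit HH:MM:SS,
# independent of the number of workers.
#MSUB -l walltime=1:00:00
```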

4.3.5 Remarks


Please make sure that the memory and core specifications are consistent between the queueing system script and the Gromacs command/input file:
Check the specification of the number of workers (== number of cores).
In case of parallel jobs ('%NProcShared' and 'PPN>1' present): make sure that the number of workers specified in the Gromacs input file (e.g. %NProcShared=8) matches the number of workers specified in the queueing system script.
Check the memory specification.
If no %Mem statement is specified in the Gromacs input file, Gromacs allocates the required memory automatically. In this case, monitor an interactive Gromacs job to determine the actual memory requirement.
Make sure that the memory specification in your Gromacs input file (e.g. %Mem=2000MB) matches the memory specification in the queueing system script:
-l mem=4000mb ( == %Mem in com-file + 2000mb )

Windows users: if you have transferred the Gromacs input file from a Windows computer to Unix, make sure to convert the Windows line breaks (<CR>+<LF>) to Unix line breaks (<LF> only). Otherwise Gromacs will write strange error messages. The typical Unix command for this is 'dos2unix' ('unix2dos' converts in the other direction).
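Where 'dos2unix' is not installed, the same conversion can be done with 'tr'; the file name input.mdp used here is just a placeholder:

```shell
# Create a file with Windows (CRLF) line endings for demonstration.
printf 'title = test\r\n' > input.mdp
# Strip the carriage returns ('dos2unix input.mdp' does this in place).
tr -d '\r' < input.mdp > input_unix.mdp
```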

Monitoring running jobs
If you want to monitor the running program (e.g. memory consumption, disk I/O, or CPU usage), you have to log in to the job's execution node and change to TMP_WORK_DIR (note that the example job runs for only a few minutes).

4.4 Disk Usage

Scratch files are written to the current directory by default. Please change to a local directory or to your local workspace (preferred) before starting your calculations.

'gromacs_repo' is an example name for a workspace created with the command 'ws_allocate'.

$ cd $(ws_find gromacs_repo)
['your-id'-gromacs_repo-0]$ pwd

5 Examples

5.1 bwHPC examples

You can copy a simple interactive example to your workspace and run it using 'msub':

$ cd $(ws_find gromacs_repo)
$ cp -R $GROMACS_EXA_DIR/* .
$ cp {*}-gromacs-example.moab mygromacsjob.moab
$ vi mygromacsjob.moab         # do your own modifications
$ msub mygromacsjob.moab       # start job submission

{*} = placeholder for your compute cluster name

5.1.1 Gromacs 5.0 series

Excerpt from Moab example submit script:

echo " "
echo "### Calling gromacs command ..."
echo " "
# the -np $n_cores flag is not needed and no longer supported by grompp_mpi_d
# full paths are needed
# use suffix '*_mpi_d' for MPI (_mpi) and double precision (_d) binaries only
echo -n "grompp_mpi_d : "
mpiexec -n ${MOAB_PROCCOUNT} grompp_mpi_d -maxwarn 10 > grompp.out 2>&1
echo -n "mdrun_mpi_d : "
mpiexec -n ${MOAB_PROCCOUNT} mdrun_mpi_d > mdrun.out 2>&1

5.1.2 Gromacs 5.1 series

This is a version 5.1.2 example as featured in $GROMACS_EXA_DIR:

echo " "
echo "### Copying input files for job (if required):"
echo " "
cp -v ${GROMACS_EXA_DIR}/ion_channel.tpr .
ls -l
echo "### Calling gromacs command ..."
echo -n "gmx_mpi_d mdrun: "
mpirun -n $MOAB_PROCCOUNT gmx_mpi_d mdrun -s ion_channel.tpr \
    -maxh 0.50 -resethway -noconfout -nsteps 500 -g logfile -v > mdrun.out 2>&1

6 GROMACS-Specific Environments

6.1 Old Version 4.n

To see a list of all Gromacs environment variables set by the 'module load' command, use 'env | grep GROMACS', or use the command 'module display chem/gromacs/version'.
Example (version 4.5.3 with Gridcount):

$ (env | grep GROMACS) && (env | grep GRIDCOUNT)

6.2 New Version 5.1.n

$ module display chem/gromacs

module-whatis	 Moleculare dynamics package GROMACS 5.1.2-openmpi-1.8-intel-15.0
    with MKL-toolkit. Single- (gmx_mpi) and double precision (gmx_mpi_d). 
GROMACS_VERSION 5.1.2-openmpi-1.8-intel-15.0 
GROMACS_HOME /opt/bwhpc/common/chem/gromacs/5.1.2-openmpi-1.8-intel-15.0 
GROMACS_BIN_DIR /opt/bwhpc/common/chem/gromacs/5.1.2-openmpi-1.8-intel-15.0/bin 
GROMACS_LIB_DIR /opt/bwhpc/common/chem/gromacs/5.1.2-openmpi-1.8-intel-15.0/lib 
GROMACS_MAN_DIR /opt/bwhpc/common/chem/gromacs/5.1.2-openmpi-1.8-intel-15.0/share/man 
GROMACS_EXA_DIR /opt/bwhpc/common/chem/gromacs/5.1.2-openmpi-1.8-intel-15.0/bwhpc-examples 

7 Version-Specific Information

For detailed information on a specific Gromacs version, see the information available via the module system with the command 'module help chem/gromacs/version'.
For a short summary of what Gromacs is about, use the command 'module whatis chem/gromacs/version'.

$ module whatis chem/gromacs/5.1.2
chem/gromacs/5.1.2   : Moleculare dynamics package 
   GROMACS 5.1.2-openmpi-1.8-intel-15.0 with MKL-toolkit. 
   Single- (gmx_mpi) and double precision (gmx_mpi_d).

$ module help chem/gromacs/5.1.2
----------- Module Specific Help for 'chem/gromacs/5.1.2' ---------
This module provides the molecular dynamics package GROMACS
version 5.1.2-openmpi-1.8-intel-15.0 (see also

BEWARE! New binary file in this version is now 'gmx_mpi(_d)'.

8 Useful links