BinAC/Software/Gromacs
The main documentation is available via 'module help chem/gromacs' on the cluster.
| Description | Content |
|---|---|
| module load | chem/gromacs |
| License | Open-source software, distributed under the GNU Lesser General Public License (LGPL). More... |
| Citing | Publications |
| Links | Homepage, Documentation |
| Graphical Interface | No |
Description
GROMACS is a versatile package to perform molecular dynamics simulations, i.e. it simulates the Newtonian equations of motion for systems with hundreds to millions of particles.
License
GROMACS is free, open-source software released under the GNU Lesser General Public License (LGPL). It also includes optional code covered by several different licenses. The GROMACS package in its entirety may be copied, modified, or distributed according to the conditions described in its documentation.
Please cite GROMACS in your publications according to the references.
Usage
Program Binaries
The binary gmx_mpi provides the molecular dynamics simulation suite and contains all common GROMACS tools. It is compiled with MPI support, as indicated by the _mpi suffix in the file name.
To get help using the GROMACS molecular dynamics simulation suite, execute the following command:
$ gmx_mpi
To obtain a list of available commands, type
$ gmx_mpi help commands
For help on a command, use
$ gmx_mpi help <command>
with <command> specifying the desired command.
For help on selection syntax and usage, use
$ gmx_mpi help selections
Furthermore, a man page is available and can be accessed by typing:
$ man gmx
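A typical workflow first prepares a portable run input file with grompp and then starts the simulation with mdrun (both are subcommands of gmx_mpi). The file names and the number of MPI ranks below are placeholders, not site defaults:
$ gmx_mpi grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr   # assemble the run input (.tpr) file
$ mpirun -np 28 gmx_mpi mdrun -s topol.tpr -deffnm md              # parallel run; 28 ranks is only an example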
Double Precision
GROMACS is normally used in single precision. However, if double precision is required, use the binary mdrun_double for your calculations. For help, run:
$ mdrun_double -h
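Double precision is mainly needed for tasks that are sensitive to rounding, such as normal-mode analysis or minimizing to very small forces. As a minimal sketch (topol.tpr is a placeholder for your run input file):
$ mdrun_double -s topol.tpr -deffnm em   # same options as the regular mdrun, but in double precision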
GPU Acceleration
If GPGPUs are available on the cluster, you can substantially speed up your molecular dynamics simulations with GROMACS. To make use of GPU acceleration, use the binary mdrun_gpu for your calculations. For help, run:
$ mdrun_gpu -h
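Assuming mdrun_gpu accepts the standard mdrun options, a run pinned to the first GPU could look like the following sketch (topol.tpr is a placeholder):
$ mdrun_gpu -s topol.tpr -deffnm md -gpu_id 0   # -gpu_id selects which GPU(s) to use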
On bwUniCluster 2.0, you can safely suppress the following warning by passing the Open MPI option --mca btl_openib_warn_default_gid_prefix 0 to mpirun/mpiexec.
WARNING: There are more than one active ports on host <...>, but the default subnet GID prefix was detected on more than one of these ports. If these ports are connected to different physical IB networks, this configuration will fail in Open MPI. This version of Open MPI requires that every physically separate IB subnet that is used between connected MPI processes must have different subnet ID values.
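For example, the option goes directly after mpirun, before the GROMACS executable and its arguments (rank count and file names are placeholders):
$ mpirun --mca btl_openib_warn_default_gid_prefix 0 -np 28 gmx_mpi mdrun -s topol.tpr -deffnm md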
Hints for using GROMACS
Parallel Execution
It is usually more efficient to run several smaller jobs with a reasonable speedup side by side than one large job that scales poorly. Check for a reasonable speedup by increasing the number of workers step by step (1, 2, 4, ..., n cores), as shown in the sketch below. Do not forget to adjust the memory specification when changing the number of workers.
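A simple way to check the speedup is to run a short benchmark of your system with increasing numbers of MPI ranks and compare the performance (ns/day) reported at the end of each log file. This is only a sketch; topol.tpr and the rank counts are placeholders, and -nsteps/-resethway keep the benchmark short and exclude start-up costs from the timing:
# run the same system with 1, 2, 4, 8 and 16 ranks for 10000 steps each
for n in 1 2 4 8 16; do
    mpirun -np $n gmx_mpi mdrun -s topol.tpr -deffnm bench_$n -nsteps 10000 -resethway
done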
Memory Management
It requires some experience to choose a good memory value. Monitoring a job interactively may help to estimate its memory consumption. Requesting about 1 GB more than the measured amount is typically sufficient.
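For instance, while the job is running you can log in to the compute node (if your site allows it) and watch the memory use of your processes; both commands below are plain Linux tools, and gmx_mpi is only the expected process name:
$ top -u $USER                                              # RES column shows current resident memory per process
$ grep VmPeak /proc/$(pgrep -n -u $USER gmx_mpi)/status     # peak memory of the most recently started gmx_mpi process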
Runtime Specification
The wall-time limit is independent of the number of parallel workers. Selecting a good value requires some experience. In general, jobs with shorter wall times are scheduled with higher priority. It can help to run the job interactively for a short period of time to monitor its convergence. Furthermore, it may be advantageous to extrapolate the run time from smaller test jobs.
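GROMACS can also help to stay within the wall-time limit: the mdrun option -maxh makes the run stop gracefully and write a checkpoint shortly before the given number of hours is reached, and -cpi continues from that checkpoint in a follow-up job. The 24-hour limit, rank count and file names below are placeholders:
$ mpirun -np 28 gmx_mpi mdrun -s topol.tpr -deffnm md -maxh 23.8              # stop in time for a 24 h wall-time limit
$ mpirun -np 28 gmx_mpi mdrun -s topol.tpr -deffnm md -cpi md.cpt -maxh 23.8  # continue from the checkpoint in the next job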
Windows Users
If you transfer a GROMACS input file from a Windows computer to the cluster, make sure to convert the Windows line breaks (<CR><LF>) to Unix line breaks (<LF> only). Otherwise, GROMACS will issue an error message. The typical Unix commands for this task are 'dos2unix' and 'unix2dos'.
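For example, to convert a file in place after copying it from a Windows machine (the file name is a placeholder):
$ dos2unix topol.top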