Siesta

Description           Content
Module                chem/siesta
Availability          bwForCluster JUSTUS 2
License               GPL
Citing                Publications
Links                 Homepage | Documentation
Graphical Interface   No

Description

SIESTA is both a method and its computer-program implementation for performing efficient electronic structure calculations and ab initio molecular dynamics simulations of molecules and solids. It is based on density functional theory and uses a basis set of strictly localized atomic orbitals, which enables calculations whose cost scales linearly with the number of atoms.

Availability

SIESTA is available on selected bwHPC-Clusters. A complete list of versions currently installed on the bwHPC-Clusters can be obtained from the Cluster Information System (CIS).

In order to check which versions of SIESTA are installed on the compute cluster, run the following command:

$ module avail chem/siesta

License

SIESTA is free, open-source software released under the GNU General Public License (GPL). The package may be copied, modified, or distributed according to the conditions described in its documentation.

Usage

Loading the module

You can load the default version of SIESTA with the following command:

$ module load chem/siesta

The module will try to load all modules it needs to function (e.g., compiler, MPI). If loading the module fails, check whether you have already loaded one of those modules, but not in the version required by SIESTA.
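If a conflict occurs, the following sketch may help (module purge unloads all currently loaded modules; check your cluster's recommendations before using it):

$ module list                 # show currently loaded modules
$ module purge                # unload all modules in case of a version conflict
$ module load chem/siesta     # load SIESTA again with its own dependencies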

If you wish to load another (e.g., older) version of SIESTA, you can do so using

$ module load chem/siesta/<version>

with <version> specifying the desired version.

Please cite SIESTA in your publications according to the documentation.

Program Binaries

The main binary siesta performs the actual electronic structure and ab initio molecular dynamics calculations. On the cluster it is compiled with MPI support, so it can be run in parallel. If the installed binary has a different name, it will be listed in the module help:

$ module help chem/siesta

SIESTA reads its input from a single file in the Flexible Data Format (FDF) and writes its main output to standard output, so both are usually redirected:

$ siesta < input.fdf > output.out

For a parallel run, start the binary through the MPI launcher:

$ mpirun siesta < input.fdf > output.out

All FDF input options are described in the SIESTA user manual (see the documentation link above).
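For orientation, a minimal FDF input for a water molecule, modeled on the introductory example in the SIESTA manual (file name and values are illustrative only; the required pseudopotential files, e.g. O.psf and H.psf, must be present in the working directory):

$ cat h2o.fdf
SystemName          Water molecule       # descriptive name of the system
SystemLabel         h2o                  # prefix for all output files
NumberOfAtoms       3
NumberOfSpecies     2
%block ChemicalSpeciesLabel
 1  8  O                                 # species index, atomic number, label
 2  1  H
%endblock ChemicalSpeciesLabel
AtomicCoordinatesFormat  Ang
%block AtomicCoordinatesAndAtomicSpecies
 0.000  0.000  0.000  1                  # x, y, z (Angstrom), species index
 0.757  0.586  0.000  2
-0.757  0.586  0.000  2
%endblock AtomicCoordinatesAndAtomicSpecies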


Hints for using SIESTA

Parallel Execution

It is usually more efficient to run several smaller jobs with reasonable speedup side by side. Check for a reasonable speedup by increasing the number of workers (1, 2, 4, ..., n cores). Do not forget to adjust the memory specification when changing the number of workers.
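One way to perform such a scaling test is to submit the same job several times with an increasing number of tasks (a sketch; job.slurm is a placeholder for your own job script):

$ for n in 1 2 4 8; do sbatch --ntasks=$n job.slurm; done    # compare the resulting run times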

Memory Management

Choosing a good memory value requires some experience. Monitoring a job interactively may help to estimate its memory consumption. Requesting about 1 GB more than the required amount is typically sufficient.
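If Slurm job accounting is enabled on the cluster, the actual memory consumption of a finished job can be compared with the requested amount, for example:

$ sacct -j <JOBID> --format=JobID,ReqMem,MaxRSS,State        # requested vs. actually used memory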

Runtime Specification

The wall-time limit is independent of the number of parallel workers. Selecting a good value requires some experience. In general, jobs with shorter wall times get higher priorities. It can help to run the job interactively for a short period of time to monitor its convergence. Furthermore, it may be advantageous to extrapolate the run time from smaller test jobs.
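The used wall time of a finished test job can likewise be read from the accounting data, for example:

$ sacct -j <JOBID> --format=JobID,Elapsed,Timelimit,State    # used vs. requested wall time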

Windows Users

If you transfer a SIESTA input file from a Windows computer to the cluster, make sure to convert the line breaks of Windows (<CR>+<LF>) to Unix (<LF> only). Otherwise, SIESTA may fail to read the input correctly. Typical Unix commands for this task are dos2unix and unix2dos.
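For example (the file name is a placeholder):

$ file input.fdf             # a Windows file is reported with "CRLF line terminators"
$ dos2unix input.fdf         # convert Windows line breaks to Unix line breaks in place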

Examples

As with all processes that require more than a few minutes to run, non-trivial compute jobs must be submitted to the cluster queuing system.

Example scripts are available in the directory $SIESTA_EXA_DIR:

$ module show chem/siesta                 # show environment variables, which will be available after 'module load'
$ module load chem/siesta                 # load module
$ ls $SIESTA_EXA_DIR                      # show content of directory $SIESTA_EXA_DIR
$ cat $SIESTA_EXA_DIR/README              # show examples README

Run a first simple example job:

$ module load chem/siesta                           # load module
$ WORKSPACE=`ws_allocate siesta 3`                  # allocate workspace for 3 days
$ cd $WORKSPACE                                     # change to workspace
$ cp -a $SIESTA_EXA_DIR/. .                         # copy example files to workspace
$ cd <test_case>                                    # change to a test directory listed in the README
$ sbatch <jobscript>.slurm                          # submit job
$ squeue                                            # obtain JOBID
$ scontrol show job <JOBID>                         # check state of job
$ ls                                                # when the job finishes, the results appear in this directory
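The packaged example scripts are the authoritative reference for job submission on the cluster. Purely for orientation, a minimal Slurm job script might look roughly like this (binary name, launcher, resource values, and file names are assumptions):

$ cat siesta-example.slurm
#!/bin/bash
#SBATCH --ntasks=4                        # number of MPI tasks
#SBATCH --time=02:00:00                   # wall-time limit (hh:mm:ss)
#SBATCH --mem-per-cpu=2gb                 # memory per core
module load chem/siesta                   # load the SIESTA module
mpirun siesta < input.fdf > output.out    # assumed binary name; check the packaged examples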

Useful links