JUSTUS2/Software/NAMD

From bwHPC Wiki
Revision as of 16:30, 7 June 2021

Description          Content
module load          chem/namd
Availability         BwForCluster_JUSTUS_2 | BwForCluster_MLS&WISO_Production
License              Freeware for non-commercial use. More...
Citing               Publications
Links                Homepage | Documentation
Graphical Interface  No

1 Description

NAMD (NAnoscale Molecular Dynamics) is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems.

2 Availability

NAMD is available on selected bwHPC-Clusters. A complete list of versions currently installed on the bwHPC-Clusters can be obtained from the Cluster Information System (CIS).

In order to check which versions of NAMD are installed on the compute cluster, run the following command:

$ module avail chem/namd

3 License

NAMD is available as freeware for non-commercial use by individuals, academic institutions, and corporations for in-house business uses. Please read the license for additional information about NAMD.

4 Usage

4.1 Loading the module

You can load the default version of NAMD with the following command:

$ module load chem/namd

The module automatically loads all other modules it needs to function (e.g., compiler, MPI, MKL). If loading the module fails, check whether you have already loaded one of those modules in a version different from the one NAMD requires.

If you wish to load another (older) version of NAMD, you can do so using

$ module load chem/namd/<version>

with <version> specifying the desired version.

Please cite NAMD in your publications according to the references.

4.2 Program Binaries

The binary namd2 is the main program of the NAMD package.

Usage: namd2 [options] [config file]
  
Example: namd2 +isomalloc_sync run.namd

For information about how to construct NAMD configuration files, please see the documentation. In addition to a configuration file, NAMD also needs a CHARMM force field in either CHARMM or X-PLOR format, an X-PLOR format PSF file describing the molecular structure, and the initial coordinates of the molecular system in the form of a PDB file.
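A minimal configuration file might look like the following sketch. All file names and parameter values below are placeholders for illustration, not defaults; consult the NAMD documentation for the full set of options and sensible values for your system:

```tcl
# run.namd -- minimal NAMD configuration sketch (placeholder values)
structure          mysystem.psf        # molecular structure (X-PLOR PSF file)
coordinates        mysystem.pdb        # initial coordinates (PDB file)
paraTypeCharmm     on                  # force field files are in CHARMM format
parameters         par_all36_prot.prm  # CHARMM parameter file
temperature        300                 # initial temperature (K)
timestep           2.0                 # integration time step (fs)
cutoff             12.0                # non-bonded cutoff (Angstrom)
outputName         mysystem_out        # prefix for output files
numsteps           50000               # number of MD steps to run
```

Note that a 2.0 fs time step usually requires additional constraint settings; the sketch only illustrates the overall structure of a configuration file.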

The NAMD package includes several additional tools:

  • flipbinpdb -- flips byte-ordering of 8-byte doubles
  • flipdcd -- flips byte-ordering of DCD files
  • psfgen -- VMD psfgen plugin
  • sortreplicas -- un-shuffles replica trajectories to place same-temperature frames in the same file
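Typical invocations of these helper tools might look as follows; the file names are placeholders, so check each tool's built-in help for the exact arguments it accepts:

```shell
$ flipdcd trajectory.dcd           # flip the byte order of a DCD file in place
$ psfgen < build_structure.pgn     # run a psfgen script to build a PSF file
```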

4.2.1 Charm++

Charm++ is a parallel object-oriented programming paradigm based on C++ and developed in the Parallel Programming Laboratory at the University of Illinois at Urbana–Champaign. NAMD has been implemented using Charm++.

  • charmrun -- launches Charm++ programs

For help on usage, type

$ charmrun -help
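For example, a NAMD run might be launched through charmrun as follows; the process count and configuration file name are placeholder assumptions:

```shell
$ charmrun +p8 namd2 run.namd > run.log   # run namd2 on 8 processing elements
```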

4.2.2 GPU Acceleration

Currently not available.

4.3 Hints for using NAMD

4.3.1 Parallel Execution

It is usually more efficient to run several smaller jobs side by side than one large job. Check for a reasonable speedup by increasing the number of workers (1, 2, 4, ..., n cores) and comparing the run times. Do not forget to adjust the memory specification when changing the number of workers.

4.3.2 Memory Management

Choosing a suitable memory request requires some experience. Monitoring a job interactively can help to estimate its memory consumption. Requesting about 1 GB more than the observed requirement is typically sufficient.

4.3.3 Runtime Specification

The wall-time limit is independent of the number of parallel workers. Selecting a good value requires some experience. In general, jobs with shorter wall times get higher priorities. It can help to run the job interactively for a short period of time to monitor its convergence, or to extrapolate the run time from smaller test jobs.
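The extrapolation from a short test job can be sketched with simple shell arithmetic; the step counts and timings below are made-up example values:

```shell
# Extrapolate wall time linearly from a short test run and add a 25% margin.
# Assumption: run time scales roughly linearly with the number of MD steps.
test_steps=10000        # steps in the short test job (example value)
test_seconds=600        # measured wall time of the test job (example value)
target_steps=500000     # steps planned for the production job
estimate=$(( test_seconds * target_steps / test_steps * 125 / 100 ))
echo "request at least ${estimate} seconds of wall time"
```

For the example values this prints a request of 37500 seconds; round the result up to the queuing system's time format when submitting.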

5 Examples

As with all processes that require more than a few minutes to run, non-trivial compute jobs must be submitted to the cluster queuing system.

Example scripts are available in the directory $NAMD_EXA_DIR:

$ module show chem/namd                # show environment variables, which will be available after 'module load'
$ module load chem/namd                # load module
$ ls $NAMD_EXA_DIR                     # show content of directory $NAMD_EXA_DIR
$ cat $NAMD_EXA_DIR/README             # show examples README

Run a first simple example job:

$ module load chem/namd                          # load module
$ WORKSPACE=`ws_allocate namd 3`                 # allocate workspace
$ cd $WORKSPACE                                  # change to workspace
$ cp -a $NAMD_HOME/bwhpc-examples .              # copy example files to workspace
$ cd bwhpc-examples/apoa1                        # change to test directory
$ sbatch bwhpc_namd_apoa1.sh                     # submit job
$ squeue                                         # obtain JOBID
$ scontrol show job <JOBID>                      # check state of job
$ ls                                             # when job finishes the results will be visible in this directory
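A NAMD batch script for the Slurm queuing system might look roughly like the following sketch. All #SBATCH values and the input file name are illustrative assumptions, not the contents of the actual bwhpc_namd_apoa1.sh:

```shell
#!/bin/bash
# Hypothetical Slurm job script sketch for a NAMD run (values are examples).
#SBATCH --nodes=1              # run on a single node
#SBATCH --ntasks-per-node=8    # number of parallel workers
#SBATCH --time=01:00:00        # wall-time limit (shorter jobs get higher priority)
#SBATCH --mem=8gb              # memory request (adjust to measured consumption)

module load chem/namd                                # load the NAMD module
charmrun +p${SLURM_NTASKS} namd2 apoa1.namd > apoa1.log
```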

6 Useful links

  • Wikipedia article on NAMD (German): https://de.wikipedia.org/wiki/NAMD