JUSTUS2/Software/Dalton

From bwHPC Wiki
 
= Description =

'''Dalton''' (named after [https://en.wikipedia.org/wiki/John_Dalton John Dalton]) is an ab initio quantum chemistry computer program designed to allow convenient, automated determination of a large number of molecular properties based on an HF, HF-srDFT, DFT, MP2, CC, CI, MCSCF, or MC-srDFT reference wave function. For additional information on features, please visit the [https://daltonprogram.org/features/ Description of the Dalton suite features] web page.
 
= Availability =

Dalton is available on selected bwHPC clusters. A complete list of versions currently installed on the bwHPC clusters can be obtained from the Cluster Information System (CIS).

In order to check which versions of Dalton are installed on the compute cluster, run the following command:

<pre>
$ module avail chem/dalton
</pre>

= License =

Dalton is free, open-source software released under the GNU Lesser General Public License (LGPL). Anyone interested in using the Dalton program suite must read the conditions described in its license agreement.
 
= Usage =

== Loading the module ==
You can load the default version of ''Dalton'' with the following command:

<pre>
$ module load chem/dalton
</pre>
   
The module will try to load all modules it needs to function (e.g., compiler, MPI). If loading the module fails, check whether you have already loaded one of those modules, but not in the version required by Dalton.

If you wish to load another (older) version of Dalton, you can do so using
 
 
<pre>
$ module load chem/dalton/<version>
</pre>

with <version> specifying the desired version.

Please cite Dalton in your publications according to the [https://daltonprogram.org/citation/ references].
 
== Program Binaries ==

The binary ''dalton'' is the main program of the Dalton package.
<pre>
Usage: dalton [options] dalinp{.dal} [molinp{.mol} [potinp{.pot}] [pcmsolver{.pcm}]]

Options:
 -w dir                : change job directory to dir (default: $HOME)
 -b dir                : prepend dir to directory list for basis set searches (job directory and dalton basis library
                         are included automatically)
 -o file               : redirect output from program to job directory with "file" as file name
 -ow                   : redirect output from program to job directory with standard file name
 -dal file             : the dalton input file
 -mol file             : the molecule input file
 -pot file             : the potential input file (for .QM3, .QMMM and .PELIB)
 -pcm file             : the pcm input file
 -ext log              : change default output extension from ".out" to ".log"
 -nobackup             : do not backup files, simply overwrite outputs
 -f dal_mol[_pot]      : extract dal_mol[_pot].tar.gz archive from WRKDIR into DALTON_TMPDIR before calculation
                         starts
 -noarch               : do not create tar.gz archive
 -t dir                : set scratch directory DALTON_TMPDIR; this script will append '/DALTON_scratch_<USERNAME>' to
                         the path unless the path contains 'DALTON_scratch' or you explicitly set -noappend
 -d                    : delete job scratch directory before calculation starts
 -D                    : do not delete job scratch directory after calculation stops
 -noappend             : do not append anything to the scratch directory; be careful with this option since by
                         default scratch is wiped after calculation
 -get "file1 ..."      : get files back from DALTON_TMPDIR after calculation stops
 -put "file1 ..."      : put files to DALTON_TMPDIR before calculation starts
 -omp num              : set the number of OpenMP threads. Note that Dalton is not OpenMP parallelized, however, this
                         option can be used with e.g. threaded blas as MKL
 -N num |-np num       : use num MPI processes (defaults to 1, illegal if DALTON_LAUNCHER specified)
 -cpexe                : copy dalton.x to DALTON_TMPDIR before execution, either to global scratch (if
                         DALTON_USE_GLOBAL_SCRATCH is set) or to local scratch on all nodes
 -rsh                  : use rsh/rcp for communication with MPI nodes (default: ssh/scp)
 -nodelist "node1 ..." : set nodelist DALTON_NODELIST, dalton.x will be copied to DALTON_TMPDIR on each node unless
                         DALTON_USE_GLOBAL_SCRATCH is defined (the script uses PBS_NODEFILE or SLURM_NODELIST if
                         available)
 -x dalmol1 dalmol2    : calculate NEXAFS spectrum from ground and core hole states
 -exe exec             : change the executable from default ($DALTON_HOME/dalton/dalton.x) to exec
 -pg                   : do profiling with gprof
 -gb mem               : set dalton max usable work memory to mem Gigabytes (mem integer)
 -mb mem               : set dalton max usable work memory to mem Megabytes (mem integer)
 -ngb mem              : set node max usable work memory to mem Gigabytes (mem integer)
 -nmb mem              : set node max usable work memory to mem Megabytes (mem integer)
</pre>
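To illustrate how these options combine on a command line, here is a small bash sketch. The input names (''myjob.dal'', ''myjob.mol'') and the process and memory figures are hypothetical placeholders, not files shipped with Dalton:

```shell
#!/usr/bin/env bash
# Sketch: assemble a typical 'dalton' command line from the options above.
# Substitute your own input files and job parameters.

build_dalton_cmd() {
    local nprocs="$1" mem_gb="$2" dal="$3" mol="$4"
    # -N: number of MPI processes; -gb: max usable work memory in gigabytes;
    # -noarch: skip creating the tar.gz archive of the scratch directory
    echo "dalton -N ${nprocs} -gb ${mem_gb} -noarch ${dal} ${mol}"
}

# Hypothetical example: 4 MPI processes, 2 GB work memory per process
build_dalton_cmd 4 2 myjob.dal myjob.mol
```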
 
   
== Hints for using Dalton ==

=== Input Files ===

For information about how to construct input files (dalinp{.dal} [molinp{.mol} [potinp{.pot}]]) for Dalton, please consult the [https://daltonprogram.org/manuals/dalton2020manual.pdf documentation].
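As a rough illustration only, a minimal input pair for a single-point HF calculation on water might look like the following sketch. The file names, basis set, and geometry are arbitrary illustrative choices; the manual remains the authoritative reference for the syntax:

''myjob.dal'':
<pre>
**DALTON INPUT
.RUN WAVE FUNCTIONS
**WAVE FUNCTIONS
.HF
**END OF DALTON INPUT
</pre>

''myjob.mol'':
<pre>
BASIS
cc-pVDZ
Water, minimal illustrative example
(second comment line)
Atomtypes=2 Angstrom
Charge=8.0 Atoms=1
O    0.000000    0.000000    0.000000
Charge=1.0 Atoms=2
H    0.758602    0.000000    0.504284
H   -0.758602    0.000000    0.504284
</pre>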
=== Environment Variables ===
  +
  +
Environment variables understood by Dalton:
 
<pre>
DALTON_TMPDIR             : scratch directory
DALTON_USE_GLOBAL_SCRATCH : use global scratch directory, do not copy any files to worker nodes
DALTON_NODELIST           : list of nodes, dalton.x will be copied to DALTON_TMPDIR on each node unless
                            DALTON_USE_GLOBAL_SCRATCH is defined
DALTON_LAUNCHER           : launcher for the dalton.x binary (if defined, -N flag not allowed)
</pre>
=== Disk Usage ===

Scratch files are written to <span style="background:#edeae2;margin:2px;padding:1px;border:1px dotted #808080">$SCRATCH</span> by default. This can be changed by setting the environment variable <span style="background:#edeae2;margin:2px;padding:1px;border:1px dotted #808080">$DALTON_TMPDIR</span> (e.g., to a dedicated [[workspace]]) before starting your calculations with Dalton.
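A minimal sketch of redirecting the scratch location before a run. Here a temporary directory stands in for a real workspace path (on the cluster you would typically use a directory returned by ws_find):

```shell
#!/usr/bin/env bash
# Sketch: point Dalton's scratch space at a dedicated directory before a run.
# mktemp creates a stand-in for a real workspace directory.

scratch_base="$(mktemp -d)"               # stand-in for a workspace path
export DALTON_TMPDIR="${scratch_base}/dalton_tmp"
mkdir -p "$DALTON_TMPDIR"                 # the directory must exist before the run

echo "DALTON_TMPDIR = $DALTON_TMPDIR"
```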
 
   
= Examples =

As with all processes that require more than a few minutes to run, non-trivial compute jobs must be submitted to the cluster queuing system.

Example scripts are available in the directory <span style="background:#edeae2;margin:2px;padding:1px;border:1px dotted #808080">$DALTON_EXA_DIR</span>:

<pre>
$ module show chem/dalton                # show environment variables, which will be available after 'module load'
$ module load chem/dalton                # load module
$ ls $DALTON_EXA_DIR                     # show content of directory $DALTON_EXA_DIR
$ cat $DALTON_EXA_DIR/README             # show examples README
</pre>
 
Run several example jobs on '''JUSTUS 2''':

<pre>
$ module load chem/dalton                           # load module
$ WORKSPACE=`ws_allocate dalton 3`                  # allocate workspace
$ cd $WORKSPACE                                     # change to workspace
$ cp -a $DALTON_HOME/bwhpc-examples .               # copy example files to workspace
$ cd bwhpc-examples                                 # change to test directory
$ sbatch dalton-2020.0.slurm                        # submit job
$ squeue                                            # obtain JOBID
$ scontrol show job <JOBID>                         # check state of job
$ ls                                                # when job finishes the results will be visible in this directory
</pre>
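The shipped submit script (dalton-2020.0.slurm in the copied bwhpc-examples directory) is the authoritative template. Purely as an illustration of its general shape, the sketch below writes a minimal submit script of the same style; all resource figures and input file names here are hypothetical:

```shell
#!/usr/bin/env bash
# Sketch: write a minimal Slurm submit script resembling the shipped example.
# Resource figures and input names (myjob.dal/myjob.mol) are hypothetical.

cat > mydalton.slurm <<'EOF'
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --time=01:00:00
#SBATCH --mem=8gb

module load chem/dalton

# Redirect scratch to the job's local work directory
export DALTON_TMPDIR="${TMPDIR}/dalton_tmp"

# -N matches the number of MPI tasks requested above
dalton -N ${SLURM_NTASKS} -o myjob.out myjob.dal myjob.mol
EOF

echo "wrote $(wc -l < mydalton.slurm) lines to mydalton.slurm"
```

Submit with 'sbatch mydalton.slurm' after adapting the resources and inputs.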
= FAQ =

'''Q:''' What should I do if my simulations abort with <span style="background:#edeae2;margin:2px;padding:1px;border:1px dotted #808080">MEMGET ERROR, insufficient work space in memory</span>?

'''A:''' Increase Dalton's usable work memory with either -mb or -gb on the command line.
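Remember that -mb/-gb set the work memory per dalton.x process, so the total must fit within what your job requested. A small bash sketch for dividing a (hypothetical) per-node allocation among MPI ranks:

```shell
#!/usr/bin/env bash
# Sketch: split a per-node memory allocation among MPI processes to get a
# safe value for 'dalton -mb'. The figures are hypothetical; take the real
# ones from your job request (e.g. Slurm's --mem and --ntasks).

node_mem_mb=16384   # memory requested per node, in MB
nprocs=4            # MPI processes per node
reserve_mb=1024     # headroom for the OS and the dalton driver script

mb_per_proc=$(( (node_mem_mb - reserve_mb) / nprocs ))
echo "use: dalton -mb ${mb_per_proc} -N ${nprocs} ..."
```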
= Useful links =

* [https://daltonprogram.org/manuals/dalton2020manual.pdf Documentation (english)]
* [http://forum.daltonprogram.org Forum (english)]
* [https://en.wikipedia.org/wiki/Dalton_(program) Wikipedia article (english)]
* [https://daltonprogram.org/tools/ Plugins for Dalton (english)]
 
----
[[Category:Chemistry software]][[Category:BwForCluster_Chemistry]][[Category:BwForCluster_JUSTUS_2]]

Revision as of 10:59, 2 June 2021

{| class="wikitable"
! Description !! Content
|-
| module load || chem/dalton
|-
| Availability || [[BwForCluster_JUSTUS_2]]
|-
| License || Open-source software, distributed under the GNU Lesser General Public License (LGPL).
|-
| Citing || [https://daltonprogram.org/citation/ Publications]
|-
| Links || [https://daltonprogram.org/ Homepage], [https://daltonprogram.org/manuals/dalton2020manual.pdf Documentation]
|-
| Graphical Interface || No
|}
