JUSTUS2/Software/Dalton

From bwHPC Wiki

Revision as of 17:02, 18 March 2021

Description          Content
Module               module load chem/dalton
Availability         BwForCluster_JUSTUS_2
License              Open-source software, distributed under the GNU Lesser General Public License (LGPL). More...
Citing               Publications
Links                Homepage | Documentation
Graphical Interface  No

Description

Dalton (named after John Dalton) is an ab initio quantum chemistry program designed to allow convenient, automated determination of a large number of molecular properties based on an HF, HF-srDFT, DFT, MP2, CC, CI, MCSCF, or MC-srDFT reference wave function. For additional information on features, please visit the Description of the Dalton suite features web page.

Availability

Dalton is available on selected bwHPC-Clusters. A complete list of versions currently installed on the bwHPC-Clusters can be obtained from the Cluster Information System (CIS).

In order to check which versions of Dalton are installed on the compute cluster, run the following command:

$ module avail chem/dalton

License

Dalton is free, open-source software released under the GNU Lesser General Public License (LGPL). Anyone interested in using the Dalton program suite must read the conditions described in its license agreement.

Usage

Loading the module

You can load the default version of Dalton with the following command:

$ module load chem/dalton

The module will try to load all modules it needs to function (e.g., compiler, mpi, ...). If loading the module fails, check if you have already loaded one of those modules, but not in the version required by Dalton.
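If loading fails due to a conflicting module, you can inspect and clean up your environment before retrying. These are standard Environment Modules commands; note that 'module purge' unloads everything, including modules you may still need for other software:

```shell
$ module list                 # show currently loaded modules
$ module purge                # unload all modules to clear conflicts
$ module load chem/dalton     # retry loading Dalton with its default dependencies
```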

If you wish to load another (older) version of Dalton, you can do so using

$ module load chem/dalton/<version>

with <version> specifying the desired version.

Please cite Dalton in your publications according to the references.

Program Binaries

The binary dalton is the main program of the Dalton package.

Usage: dalton [options] dalinp{.dal} [molinp{.mol} [potinp{.pot}] [pcmsolver{.pcm}]]

Options:
 -w dir                : change job directory to dir (default: $HOME)
 -b dir                : prepend dir to directory list for basis set searches (job directory and dalton basis library
                         are included automatically)
 -o file               : redirect output from program to job directory with "file" as file name
 -ow                   : redirect output from program to job directory with standard file name
 -dal file             : the dalton input file
 -mol file             : the molecule input file
 -pot file             : the potential input file (for .QM3, .QMMM and .PELIB)
 -pcm file             : the pcm input file
 -ext log              : change default output extension from ".out" to ".log"
 -nobackup             : do not backup files, simply overwrite outputs
 -f dal_mol[_pot]      : extract dal_mol[_pot].tar.gz archive from WRKDIR into DALTON_TMPDIR before calculation starts
 -noarch               : do not create tar.gz archive
 -t dir                : set scratch directory DALTON_TMPDIR; this script will append '/DALTON_scratch_<USERNAME>' to the path unless the path contains 'DALTON_scratch' or you explicitly set -noappend
 -d                    : delete job scratch directory before calculation starts
 -D                    : do not delete job scratch directory after calculation stops
 -noappend             : do not append anything to the scratch directory; be careful with this option since by default scratch is wiped after calculation
 -get "file1 ..."      : get files back from DALTON_TMPDIR after calculation stops
 -put "file1 ..."      : put files to DALTON_TMPDIR before calculation starts
 -omp num              : set the number of OpenMP threads. Note that Dalton is not OpenMP parallelized; however, this option can be used with e.g. a threaded BLAS such as MKL
 -N num |-np num       : use num MPI processes (defaults to 1, illegal if DALTON_LAUNCHER specified)
 -cpexe                : copy dalton.x to DALTON_TMPDIR before execution, either to global scratch (if DALTON_USE_GLOBAL_SCRATCH is set) or to local scratch on all nodes
 -rsh                  : use rsh/rcp for communication with MPI nodes (default: ssh/scp)
 -nodelist "node1 ..." : set nodelist DALTON_NODELIST, dalton.x will be copied to
                    DALTON_TMPDIR on each node unless DALTON_USE_GLOBAL_SCRATCH is
                    defined (the script uses PBS_NODEFILE or SLURM_NODELIST if available)
 -x dalmol1 dalmol2    : calculate NEXAFS spectrum from ground and core hole states
 -exe exec             : change the executable from default (/lustre/home/software/bwhpc/common/chem/dalton/2020.0-intel-19.1.2-impi-2019.8/dalton/dalton.x) to exec
 -pg                   : do profiling with gprof
 -gb mem               : set dalton max usable work memory to mem Gigabytes (mem integer)
 -mb mem               : set dalton max usable work memory to mem Megabytes (mem integer)
 -ngb mem              : set node max usable work memory to mem Gigabytes (mem integer)
 -nmb mem              : set node max usable work memory to mem Megabytes (mem integer)

For information about how to construct Dalton input files (dalinp{.dal} [molinp{.mol} [potinp{.pot}]]), please see the documentation.
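Putting the options above together, a typical invocation might look as follows. This is only a sketch; the input file names dft.dal and water.mol are hypothetical examples, and the MPI process count must match the resources your batch job requested:

```shell
# run with 8 MPI processes, write output with ".log" extension
# to the file dft_water.log in the job directory
$ dalton -N 8 -ext log -o dft_water.log dft.dal water.mol
```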

Disk Usage

Scratch files are written to the current directory by default. Please change to a local directory or to your local workspace (preferred) before starting your calculations.

'dalton_repo' is an example name for a workspace created with the command 'ws_allocate'.

$ cd $(ws_find dalton_repo)
['your-id'-dalton_repo-0]$ pwd
/work/workspace/scratch/'your-id'-dalton_repo-0
['your-id'-dalton_repo-0]$ 
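Alternatively, the scratch location can be passed to the dalton script directly via the -t option described above; the script then appends a per-user 'DALTON_scratch_<USERNAME>' subdirectory unless the path already contains 'DALTON_scratch' or -noappend is given. A sketch, assuming the workspace 'dalton_repo' from above exists (input file names are hypothetical):

```shell
# scratch files go to <workspace>/DALTON_scratch_<username>
$ dalton -t $(ws_find dalton_repo) myjob.dal mymol.mol
```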


Examples

As with all processes that require more than a few minutes to run, non-trivial compute jobs must be submitted to the cluster queuing system.

Example scripts are available in the directory $DALTON_EXA_DIR:

$ module show chem/dalton                # show environment variables, which will be available after 'module load'
$ module load chem/dalton                # load module
$ ls $DALTON_EXA_DIR                     # show content of directory $DALTON_EXA_DIR
$ cat $DALTON_EXA_DIR/README             # show examples README

Run several example jobs on JUSTUS 2:

$ module load chem/dalton                           # load module
$ WORKSPACE=`ws_allocate dalton 3`                  # allocate workspace
$ cd $WORKSPACE                                     # change to workspace
$ cp -a $DALTON_HOME/bwhpc-examples .               # copy example files to workspace
$ cd bwhpc-examples                                 # change to test directory
$ sbatch dalton-2020.0.slurm                        # submit job
$ squeue                                            # obtain JOBID
$ scontrol show job <JOBID>                         # check state of job
$ ls                                                # when job finishes the results will be visible in this directory
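If you prefer to write your own batch script instead of using the provided examples, a minimal sketch might look as follows. The resource requests, walltime, and input file names are assumptions to adapt to your own calculation; consult the site-provided templates in $DALTON_EXA_DIR for settings known to work on JUSTUS 2:

```shell
#!/bin/bash
#SBATCH --nodes=1            # run on a single node
#SBATCH --ntasks=8           # number of MPI processes for dalton
#SBATCH --time=02:00:00      # walltime limit
#SBATCH --mem=16gb           # memory request

module load chem/dalton      # load the default Dalton module

# submit from inside your workspace so scratch and output files
# land there; input file names below are hypothetical examples
dalton -N $SLURM_NTASKS myjob.dal mymol.mol
```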

Useful links