{{Softwarepage|chem/turbomole}}

{| width=600px class="wikitable"
|-
! Description !! Content
|-
| module load
| chem/turbomole
|-
| License
| [https://www.turbomole.org/turbomole/order-turbomole/ Commercial]
|-
| Citing
| [https://www.turbomole.org/turbomole/turbomole-documentation/ See Turbomole manual]
|-
| Links
| [https://www.turbomole.com Homepage] | [https://www.turbomole-gmbh.com/turbomole-manuals.html Documentation]
|-
| User Forum
| [https://www.turbo-forum.com/ external]
|}
= Description =
Turbomole is a general purpose quantum chemistry software package for ab initio electronic structure calculations and provides:
* ground state calculations for methods such as Hartree-Fock, DFT, MP2, and CCSD(T);
* excited state calculations at different levels such as full RPA, TDDFT, CIS(D), CC2, and ADC(2);
* geometry optimizations, transition state searches, and molecular dynamics calculations;
* property and spectra calculations such as IR, UV/VIS, Raman, and CD;
* approximations like resolution-of-the-identity (RI) to speed up the calculations without introducing uncontrollable or unknown errors; as well as
* parallel versions for all kinds of jobs.
For more information on Turbomole's features please visit [https://www.turbomole-gmbh.com/program-overview.html https://www.turbomole-gmbh.com/program-overview.html].
<br>
<br>


= Versions and Availability =

On the command line interface (CLI) of a particular bwHPC cluster, a list of all available Turbomole versions can be displayed as follows:
<pre>
$ module avail chem/turbomole
</pre>

A current list of the versions available on the bwUniCluster and bwForClusters can be found here:
https://www.bwhpc.de/software.php

== bwUniCluster 2.0 ==

* Turbomole 7.4.1

== bwForCluster JUSTUS 2 ==

* Turbomole 7.5
* Turbomole 7.4.1


== Parallel computing ==
The Turbomole ''Module'' bundles all available parallel computing variants of Turbomole's binaries. Turbomole defines the following parallel computing variants:
* SMP = shared-memory parallel computing based on OpenMP and Fork(), with the latter using separate address spaces.
* MPI = Message Passing Interface based parallel computing
However, only one of the parallel variants or the sequential variant can be loaded at once, and most of Turbomole's binaries support only one or two of the parallelization variants. As for Turbomole installations without the ''Module'' system of the bwHPC clusters, the variants have to be selected via the environment variable $PARA_ARCH.
<br>
<br>
= Usage =
== Before loading the Module ==
Before loading the Turbomole ''Module'', the parallel computing variant has to be defined via the environment variable $PARA_ARCH using the abbreviations SMP or MPI, e.g.:
<pre>
$ export PARA_ARCH=MPI
</pre>
will later load the MPI binary variants. If the variable $PARA_ARCH is not defined or empty, the sequential binary variants will be active once the Turbomole ''Module'' is loaded.

== Loading the Module ==
You can load the default version of Turbomole with the command:
<pre>
$ module load chem/turbomole
</pre>
The Turbomole ''Module'' does not depend on any other ''Module'', but it does depend on the variable $PARA_ARCH. Moreover, Turbomole provides its own libraries for ''OpenMP''-, ''Fork()''-, and ''MPI''-based parallelization.
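For example, a complete setup for a shared-memory run could look as follows (a minimal sketch; the variable $PARNODES for the number of parallel workers is described in the Turbomole manual, please verify it for your loaded version):
<pre>
$ export PARA_ARCH=SMP        # select the shared-memory (OpenMP/Fork) binary variants
$ module load chem/turbomole  # loads the SMP variants because $PARA_ARCH is set
$ export PARNODES=4           # number of parallel workers (see the Turbomole manual)
</pre>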
If you wish to load a specific (older) version, you can do so by executing, e.g.:
<pre>
$ module load chem/turbomole/7.4.1
</pre>
to load version 7.4.1.
<br>
== Switching between different parallel variants ==
To switch between the different parallel variants provided by the Turbomole ''Module'', simply define the new parallel variant via $PARA_ARCH and load the ''Module'' again. Note that unloading the Turbomole ''Module'' is not required when switching between the parallel variants. For instance, to change to the MPI variant, execute:
<pre>
$ export PARA_ARCH=MPI
$ module load chem/turbomole
</pre>

== Turbomole binaries ==
The Turbomole software package consists of a set of stand-alone program binaries providing different features and parallelization support:
{| width=750px class="wikitable"
|- style="text-align:left;"
! Binary !! Features !! OpenMP !! Fork !! MPI
|- style="vertical-align:top;"
| define
| Interactive input generator
| no
| no
| no
|- style="vertical-align:top;"
| dscf
| Energy calculations
| yes
| yes
| yes
|- style="vertical-align:top;"
| grad
| Gradient calculations
| yes
| yes
| yes
|- style="vertical-align:top;"
| ridft
| Energy calc. with fast Coulomb approximation
| no
| yes
| yes
|- style="vertical-align:top;"
| rdgrad
| Gradient calc. with fast Coulomb approximation
| yes
| yes
| yes
|- style="vertical-align:top;"
| ricc2
| Electronic excitation energies, transition moments and properties of excited states
| yes
| no
| yes
|- style="vertical-align:top;"
| statpt
| Hessian and coordinate update for stationary point search
| no
| no
| no
|- style="vertical-align:top;"
| aoforce
| Analytic calculation of force constants, vibrational frequencies and IR intensities
| yes
| yes
| yes
|- style="vertical-align:top;"
| escf
| Calc. of time-dependent and dielectric properties
| yes
| yes
| yes
|- style="vertical-align:top;"
| egrad
| Gradients and first-order properties of excited states
| yes
| no
| yes
|- style="vertical-align:top;"
| odft
| Orbital-dependent energy calc.
| yes
| no
| no
|}
For the complete set of binaries and a more detailed description of their features read [https://www.turbomole.org/turbomole/turbomole-documentation/ here].
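For illustration, a sequential single-point energy plus gradient run could look like this (a minimal sketch, assuming the working directory already contains a complete input, i.e. control, coord, basis and mos files, e.g. prepared with ''define''):
<pre>
$ dscf > dscf.out 2>&1    # SCF/DFT energy calculation
$ grad > grad.out 2>&1    # gradient for the converged wave function
</pre>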
<br>

== Turbomole tools ==
Turbomole's tool set contains scripts and binaries that help to prepare calculations, execute workflows (such as geometry optimisation), and post-process results.
{| width=750px class="wikitable"
|- style="text-align:left;"
! Script !! Type !! Description
|- style="vertical-align:top;"
| x2t
| Preparation
| Converts XYZ coordinates into Turbomole coordinates.
|- style="vertical-align:top;"
| sdg
| Preparation
| Shows a data group from the control file: for example, ''$ sdg coord'' shows the current Turbomole coordinates used.
|- style="vertical-align:top;"
| jobex
| Optimization workflow
| Shell script that controls and executes automatic optimizations of molecular geometry parameters.
|- style="vertical-align:top;"
| tm2molden
| Postprocessing
| Creates a molden format input file for the Molden program.
|- style="vertical-align:top;"
| eiger
| Postprocessing
| Script front-end of the program eigerf to display the HOMO-LUMO gap and MO eigenvalues.
|}
For the complete set of tools and a more detailed description of their features, see the [https://www.turbomole.org/turbomole/turbomole-documentation/ Turbomole documentation].
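For illustration, converting coordinates and inspecting them could look like this (a minimal sketch; ''molecule.xyz'' is a hypothetical input file):
<pre>
$ x2t molecule.xyz > coord   # convert XYZ coordinates into a Turbomole coord file
$ sdg coord                  # show the coord data group currently in use
</pre>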
<br>
== Disk Usage ==
By default, scratch files of Turbomole binaries are placed in the directory where the Turbomole binary is executed. Please do not run your Turbomole calculations in your $HOME or $WORK directory.
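Run them on a scratch file system instead, for example in a workspace (a minimal sketch, assuming the workspace tools available on the bwHPC clusters; adapt name and duration to your needs):
<pre>
$ WS_DIR=$(ws_allocate turbomole-run 7)   # allocate a workspace for 7 days
$ cp control coord basis mos "$WS_DIR"    # copy the prepared input files
$ cd "$WS_DIR"                            # run the calculation here, not in $HOME
</pre>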
<br>
<br>
= Examples =
== Single node jobs ==
=== Template provided by Turbomole Module ===
The Turbomole ''Module'' provides a simple job script example of Cubane (C8H8) that runs an energy relaxation via MPI-parallel ''dscf'' using 4 cores on a single node and its local file system. To run the example, do the following steps:
<pre>
$ module load chem/turbomole
$ mkdir -vp ~/Turbomole-example/
$ cd ~/Turbomole-example/
$ cp -r $TURBOMOLE_EXA_DIR/* ~/Turbomole-example/
$ sbatch bwHPC_turbomole_single-node_tmpdir_example.sh
</pre>
The last step submits the example job script bwHPC_turbomole_single-node_tmpdir_example.sh to the queueing system. Once started on a compute node, all calculations will be done under a unique directory on the [[BwUniCluster_File_System#File_Systems|local file system]] of that particular compute node.
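If you prefer to write your own job script instead of using the shipped template, it could look roughly like this (a minimal sketch, not the shipped template; the Slurm options, the $TMPDIR scratch path, and the copied file list are assumptions to be adapted to your cluster and input):
<source lang="bash">
#!/bin/bash
#SBATCH --nodes=1                # single-node job
#SBATCH --ntasks=4               # 4 cores, as in the shipped example (assumption)
#SBATCH --time=01:00:00          # walltime (assumption)

export PARA_ARCH=MPI             # select the MPI binary variants
module load chem/turbomole

# work in a unique directory on the node-local file system ($TMPDIR is an assumption)
SCRATCH="$TMPDIR/turbomole.$SLURM_JOB_ID"
mkdir -p "$SCRATCH"
cp control coord basis mos "$SCRATCH"   # prepared Turbomole input files (assumption)
cd "$SCRATCH"

export PARNODES=$SLURM_NTASKS    # number of parallel Turbomole workers (see manual)
time dscf > dscf.out 2>&1        # MPI-parallel energy calculation

cp dscf.out "$SLURM_SUBMIT_DIR"  # copy results back to the submit directory
</source>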
<br>
=== Geometry optimization ===
To do a geometry optimization of the [[#Template provided by Turbomole Module|previous job example]], modify bwHPC_turbomole_single-node_tmpdir_example.sh by replacing the following line
<source lang="bash">
time dscf > dscf.out 2>&1
</source>
with
<source lang="bash">
time jobex -dscf -keep 2>&1
</source>
and submit the modified script to the queueing system via [[Slurm_JUSTUS_2|sbatch]] on bwForCluster JUSTUS 2 or via [[BwUniCluster_2.0_Slurm_common_Features|sbatch]] on bwUniCluster.
The Turbomole command ''jobex'' controls the call of all the required Turbomole binaries for the geometry optimization.
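After ''jobex'' has finished, you can check whether the optimization converged (a sketch; the marker file GEO_OPT_CONVERGED and the tool ''t2x'' are described in the Turbomole manual, please verify them for your loaded version):
<pre>
$ ls GEO_OPT_CONVERGED 2>/dev/null && echo "converged"   # marker file written by jobex on success
$ t2x coord > optimized.xyz                              # export the final structure in XYZ format
</pre>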
<br>
= Turbomole-Specific Environment Variables =
To see a list of all Turbomole environment variables set by the 'module load' command, do the following:
<pre>
module show chem/turbomole 2>&1 | grep -E '(setenv|prepend-path)'
</pre>


= Version-Specific Information =
For specific information about the Turbomole version (e.g. 7.5) to be loaded, do the following:
<pre>
module help chem/turbomole/7.5
</pre>


----
[[Category:Chemistry software]][[Category:bwUniCluster]][[Category:bwForCluster_Chemistry]]
