Gaussview (bwHPC Wiki, last revised by C Mosch, 2021-02-16)
<div>{| width=600px class="wikitable"
|-
! Description !! Content
|-
| module load
| chem/gaussview
|-
| Availability
| [[:Category:bwForCluster_JUSTUS_2|bwForCluster JUSTUS 2]]
|-
| License
| Commercial - see [http://gaussian.com/pricing/ Pricing for Gaussian Products]
|-
| Citing
| [http://gaussian.com/gv611rn/ See section Citation of the Release Notes]
|-
| Links
| [http://www.gaussian.com Gaussian Homepage]; [http://gaussian.com/gaussview6/ GaussView Documentation]
|-
| Graphical Interface
| [[#Loading the module and starting GaussView (within VNC session)|Yes]]
|-
| Related program
| See [[Gaussian]]
|}

= Description =
'''GaussView''' is an advanced and powerful graphical user interface for [[Gaussian]]. With GaussView you can
* build, save and load molecular structures (advanced molecule editor)
* prepare Gaussian calculations (create and save Gaussian input files with all parameters)
* monitor the progress of running Gaussian calculations (log in to the compute node and change to the temporary job directory)
* load, view and analyze results (e.g. visualize 3D isosurfaces of densities and orbitals, plot IR and Raman spectra)
* set up, visualize and analyze parameter scans
* set up QM/MM calculations
For more information on features, please visit the [http://gaussian.com/gaussview6/ ''GaussView Documentation''] web page.
<br><br>
'''Please keep in mind: NEVER start long-running, large calculations (>10 minutes) interactively on login or visualization nodes.
Use GaussView only for preparing calculations (job input) or analysing results (job output).'''
<br>

= Versions and Availability =

After logging in to the bwForCluster, you can display a list of the currently available versions with:
<pre>
$ module avail chem/gaussview
</pre>

== Parallel computing ==

With GaussView you can construct, save and start both serial and parallel jobs.
To switch from serial to parallel, change the ''Shared Processors'' entry
in the ''Gaussian Calculation Setup'' window.

Please make sure that the speed-up of the calculation is reasonable when requesting
more cores: when doubling the number of cores, the speed-up should be at least a
factor of 1.7, better 1.8.
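As a quick sanity check, you can compute the speed-up factor from the wall-clock times of two test runs of the same job. The times below are made-up example values; substitute your own measurements.

```shell
# Wall-clock times in seconds from two test runs (made-up example values).
t_4cores=3600   # job run with 4 cores
t_8cores=2000   # same job run with 8 cores

# Speed-up achieved by doubling the core count.
speedup=$(awk -v a="$t_4cores" -v b="$t_8cores" 'BEGIN { printf "%.2f", a / b }')
echo "speed-up when doubling cores: $speedup"   # here: 1.80, i.e. doubling pays off
```

A value below 1.7 means the extra cores are mostly wasted; stay with the smaller core count in that case.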

Please keep in mind that our Gaussian version is only shared-memory parallel.
Requesting more than one node therefore does not make sense.

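For reference, the ''Shared Processors'' setting typically ends up as the <code>%NProcShared</code> Link 0 directive in the input file that GaussView writes. The following fragment is only an illustration; the memory value, route section and job title are made-up examples, and the geometry is omitted:

<pre>
%NProcShared=4
%Mem=8GB
# B3LYP/6-31G(d) Opt

Example job: geometry optimization with 4 shared-memory cores

0 1
...
</pre>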
= Usage =

There are two methods to start GaussView remotely on JUSTUS 2:
* '''Remote-VNC''' method (recommended): First set up a VNC (Virtual Network Computing) remote desktop connection to one of the JUSTUS 2 login or visualization nodes. To help you with that, we have pre-installed [[TigerVNC]] on JUSTUS 2; see the [[TigerVNC]] documentation for details. Then open a terminal window in that VNC session, load the GaussView module and start GaussView with the command ''gaussview''.
* '''X-forwarding''' method (usually slow): First log in to one of the JUSTUS 2 login or visualization nodes with X-forwarding enabled (i.e. ''ssh -X''). Then load the GaussView module and start GaussView with the command ''gaussview''. Major disadvantage: X-forwarding requires a very fast network connection with low latency; home or Wi-Fi networks are usually too slow.
<br>
'''NEVER start long-running, large calculations interactively on login or visualization nodes. Use GaussView to prepare and save the input. Then submit a [[Gaussian]] job with the input created by GaussView.'''

== Loading the module and starting GaussView (within VNC session) ==

The best method to use GaussView is to start it within an interactive
remote VNC session, or from a (Linux) system with X-forwarding (e.g. ssh -X 'your-id'@FQDN-of-cluster.de).
<br>
Details on how to start an interactive VNC session running on a compute node can be found here: [[TigerVNC]]

Within the X11 session, open a terminal window (e.g. xterm) and execute:
<pre>
$ module load chem/gaussview
$ gaussview &
</pre>

The GaussView module automatically loads the corresponding Gaussian module.

[[File:Gaussview.jpg]]
<br>
<br>
See [[#GaussView example session|GaussView example session]] for more information on how to work with the GUI.
<br>
<br>
It is recommended to always specify the full module name, including
the version of the module, e.g. specify
<pre>
$ module load chem/gaussview/6.1.1
</pre>
to load version ''6.1.1'' of GaussView.
<br>

== Starting Gaussian jobs interactively via GaussView ==

Within an interactive VNC session running on a compute node
you may start Gaussian jobs locally via ''Submit'' or ''Quick Launch''.
These buttons are located within the ''Gaussian Calculation Setup'' window of GaussView.

If you do so, you must make sure that these jobs finish before
the run time of the interactive VNC session ends. Furthermore,
you should never occupy more cores (GaussView setting '''Shared Processors''', default 1 core)
than specified when submitting the VNC session (Slurm option '''--ntasks-per-node''', default 1 core).
Finally, be aware that your job might get killed if it uses more memory
than requested.

== Submitting Gaussian jobs to the queueing system ==

First, construct your Gaussian job interactively with GaussView.
When done setting up the job, save the Gaussian input file to disk
and submit that *.com file as described in the documentation of [[Gaussian]].
Please read the Gaussian documentation as well, and make sure that you
specify reasonable values for the number of cores, memory usage,
disk usage and job run time when submitting the job.

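A batch script for such a job could look like the following minimal sketch. It is only an illustration: the file names, resource values and the <code>chem/gaussian</code> module name are assumptions; check the [[Gaussian]] page for the exact submission procedure on JUSTUS 2.

<pre>
#!/bin/bash
#SBATCH --nodes=1                 # Gaussian is shared-memory parallel: one node only
#SBATCH --ntasks-per-node=4       # should match %NProcShared in the input file
#SBATCH --mem=8gb                 # should cover %Mem plus some overhead
#SBATCH --time=02:00:00           # job run time limit

module load chem/gaussian         # module name is an assumption; check 'module avail'
g16 < myjob.com > myjob.log       # myjob.com created and saved with GaussView
</pre>

Submit the script with ''sbatch''.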
= Examples =

== GaussView example session ==

* When 'gaussview' is started, the [[#Loading the module and starting GaussView (within VNC session)|'Builder Fragment']] of the program's main window (gray background) displays the 'Carbon Tetrahedral' fragment by default.
* Click in the middle of the construction window (blue background). The selected fragment should now be displayed there (undo clicks via 'Menu: Edit -> Undo').
* Start a Gaussian calculation via 'Menu: Calculate -> Gaussian Calculation Setup'. Keep all defaults and hit 'Button: Quick Launch' at the bottom.
* View the results via 'Menu: Results -> Summary'.
* Create orbital contour plots via 'Menu: Results -> Surface/Contours', 'Button: Cube Actions -> New Cube', 'Button: Surface Actions -> New Surface'. If you have accepted all defaults, you should see the HOMO orbital now.

= Version-Specific Information =

For specific information about version ''VERSION'', see the information available via the module system with the command
<pre>
$ module help chem/gaussview/VERSION
</pre>
Please read the local module help documentation before using the software. The module help contains links to additional documentation and resources, as well as information about the support contact.
<br>

----
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]]</div>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussview<br />
|-<br />
| Availability<br />
| [[:Category:bwForCluster_JUSTUS_2|bwForCluster JUSTUS 2]]<br />
|-<br />
| License<br />
| Commercial - see [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| [http://gaussian.com/gv611rn/ See section Citation of the Release Notes]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Gaussian Homepage]; [http://gaussian.com/gaussview6/ GaussView Documentation]<br />
|-<br />
| Graphical Interface<br />
| [[#Loading the module and starting GaussView (within VNC session)|Yes]]<br />
|-<br />
| Related program<br />
| See [[Gaussian]]<br />
|}<br />
<br />
= Description =<br />
'''GaussView''' is an advanced and powerful graphical user interface for [[Gaussian]]. With the aid of GaussView you can<br />
* build, save or load molecular structures (advanced molecule editor)<br />
* prepare Gaussian calculations (create and save Gaussian input files with all parameters)<br />
* monitor progress of running Gaussian calculations (login to the compute node and change to the temporary job directory)<br />
* load, view and analyze results (e.g. visualize 3D ISO surfaces of densities and orbitals, plot IR and Raman spectra)<br />
* setup, visualize and analyze parameter scans<br />
* setup QM/MM calculations<br />
For more information on features please visit GaussView's [http://gaussian.com/gaussview6/ ''GaussView Documentation''] web page.<br />
<br> <br><br />
'''Please keep in mind: NEVER EVER start long running large calculations (>10 minutes) interactively on login or visualization nodes.<br />
Use GaussView only for preparing calculations (job input) or analysing results (job output).'''<br />
<br><br />
<br />
= Versions and Availability =<br />
<br />
A list of versions currently available can be displayed after login to the bwForCluster by command<br />
<pre><br />
$ module avail chem/gaussview<br />
</pre><br />
<br />
== Parallel computing ==<br />
<br />
One can construct/save or start serial as well as parallel jobs with GaussView.<br />
To switch from serial to parallel one has to modify entry ''Shared Processors''<br />
within ''Gaussian Calculation Setup'' window.<br />
<br />
Please make sure that the speed up of the calculation is reasonable when requesting<br />
more cores. When doubling the number of cores, the speed up should be at least a<br />
factor 1.7, better 1.8.<br />
<br />
Please keep in mind that our Gaussian version is only shared-memory parallel.<br />
Therefore requesting more than one node does not make any sense.<br />
<br />
= Usage =<br />
<br />
There are two methods to start GaussView remotely on JUSTUS 2:<br />
* '''Remote-VNC''' method (recommended): First set up a VNC (Virtual Network Computing) remote desktop connection to one of the JUSTUS 2 login or visualization nodes. To help you with that we have pre-installed [[TigerVNC]] on JUSTUS 2. See the [[TigerVNC]] documentation for details. Thereafter you can open a terminal window in that VNC session, load the GaussView module and start GaussView via command ''gaussview''.<br />
* '''X-forwarding''' method (usually slow): First log in to the JUSTUS 2 login or visualization nodes with X-forwarding enabled (i.e. ''ssh -X''). Then load the GaussView module and start GaussView via command ''gaussview''. Major disadvantage: X-forwarding requires a very fast network connection with low latency. Usually home or Wi-Fi networks are too slow.<br />
<br><br />
'''Do NOT start long running large calculations interactively on login or visualization nodes. Use GaussView to prepare and save the input. Then submit the [[Gaussian]] job with the input to the queueing system.'''<br />
<br />
== Loading the module and starting GaussView (within VNC session) ==<br />
<br />
The best method to use GaussView is to start it within an interactive<br />
remote VNC session or a (Linux-)System with X-forwarding (e.g.: ssh -X 'your-id'@FQDN-of-cluster.de).<br />
<br><br />
Details on how to start an interactive VNC session running on a compute node can be found here: [[TigerVNC]]<br />
<br />
Within the X11 session open a terminal window (e.g. xterm) and execute:<br />
<pre><br />
$ module load chem/gaussview<br />
gaussview &<br />
</pre><br />
<br />
The GaussView module automatically loads the corresponding Gaussian module.<br />
<br />
[[File:Gaussview.jpg]]<br />
<br><br />
<br><br />
See [[#GaussView example session|GaussView example session]] for more infos how to work with the GUI.<br />
<br><br />
<br><br />
It is recommended to always specify the full module name including<br />
the version of the module, e.g. specify<br />
<pre><br />
$ module load chem/gaussview/6.1.1<br />
</pre><br />
to load version ''6.1.1'' of GaussView.<br />
<br><br />
<br />
== Starting Gaussian jobs interactively via GaussView ==<br />
<br />
Within the interactive VNC session running on a compute node<br />
you may start Gaussian jobs locally via ''Submit'' or ''Quick Launch''.<br />
These buttons are located within the ''Gaussian Calculation Setup'' window of GaussView.<br />
<br />
If you do so you must make sure that these jobs finish before<br />
the run time of the interactive VNC session ends. Furthermore<br />
your should never occupy more cores (GaussView setting '''Shared Processors''', default 1 core)<br />
than specified when submitting the VNC session (Slurm option '''--ntasks-per-node''', default 1 core).<br />
Finally be aware that your job might get killed if it uses more memory<br />
than requested.<br />
<br />
== Submitting Gaussian jobs to the queueing system ==<br />
<br />
First you can construct your Gaussian job interactively with GaussView.<br />
When done with setting up of the job, save the Gaussian command file to disk<br />
and submit that *.com file as described in the documentation of [[Gaussian]].<br />
Please read the Gaussian documentation as well and make sure that you<br />
specify reasonable values for the number of cores, memory usage,<br />
disk usage and job run time when submitting the job.<br />
<br />
= Examples =<br />
<br />
== GaussView example session ==<br />
<br />
* When 'gaussview' is started, the [[#Loading the module and starting GaussView (within VNC session)|'Builder Fragment']] of the programs main window (gray background) displays the 'Carbon Tetrahedral' by default.<br />
* Click in the middle of the construction window (blue background). The selected fragment should now be displayed there (undo clicks via 'Menu: Edit -> Undo').<br />
* Start a Gaussian calculation 'Menu: Calculate -> Gaussian Calculation Setup'. Keep all defaults and hit 'Button: Quick Launch' at the bottom.<br />
* View the results via 'Menu: Results -> Summary'.<br />
* Create orbital contour plots via 'Menu: Results -> Surface/Contours', 'Button: Cube Actions -> New Cube', 'Button: Surface Actions -> New Surface'. If you have accepted all defaults, you should see the HOMO orbital now.<br />
<br />
= Version-Specific Information =<br />
<br />
For specific information about version ''VERSION'' see the information available via the module system with the command<br />
<pre><br />
$ module help chem/gaussview/VERSION<br />
</pre><br />
Please read the local module help documentation before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussview&diff=8309Gaussview2021-02-16T14:15:00Z<p>C Mosch: /* Description */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussview<br />
|-<br />
| Availability<br />
| [[:Category:bwForCluster_JUSTUS_2|bwForCluster JUSTUS 2]]<br />
|-<br />
| License<br />
| Commercial - see [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| [http://gaussian.com/gv611rn/ See section Citation of the Release Notes]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Gaussian Homepage]; [http://gaussian.com/gaussview6/ GaussView Documentation]<br />
|-<br />
| Graphical Interface<br />
| [[#Loading the module and starting GaussView (within VNC session)|Yes]]<br />
|-<br />
| Related program<br />
| See [[Gaussian]]<br />
|}<br />
<br />
= Description =<br />
'''GaussView''' is an advanced and powerful graphical user interface for [[Gaussian]]. With the aid of GaussView you can<br />
* build, save or load molecular structures (advanced molecule editor)<br />
* prepare Gaussian calculations (create and save Gaussian input files with all parameters)<br />
* monitor progress of running Gaussian calculations (login to the compute node and change to the temporary job directory)<br />
* load, view and analyze results (e.g. visualize 3D ISO surfaces of densities and orbitals, plot IR and Raman spectra)<br />
* setup, visualize and analyze parameter scans<br />
* setup QM/MM calculations<br />
For more information on features please visit GaussView's [http://gaussian.com/gaussview6/ ''GaussView Documentation''] web page.<br />
<br> <br><br />
'''Please keep in mind: NEVER EVER start long running large calculations (>10 minutes) interactively on login or visualization nodes.<br />
Use GaussView only for preparing calculations (job input) or analysing results (job output).'''<br />
<br><br />
<br />
= Versions and Availability =<br />
<br />
A list of versions currently available can be displayed after login to the bwForCluster by command<br />
<pre><br />
$ module avail chem/gaussview<br />
</pre><br />
<br />
== Parallel computing ==<br />
<br />
One can construct/save or start serial as well as parallel jobs with GaussView.<br />
To switch from serial to parallel one has to modify entry ''Shared Processors''<br />
within ''Gaussian Calculation Setup'' window.<br />
<br />
Please make sure that the speed up of the calculation is reasonable when requesting<br />
more cores. When doubling the number of cores, the speed up should be at least a<br />
factor 1.7, better 1.8.<br />
<br />
Please keep in mind that our Gaussian version is only shared-memory parallel.<br />
Therefore requesting more than one node does not make any sense.<br />
<br />
= Usage =<br />
<br />
There are two methods to start GaussView remotely on JUSTUS 2:<br />
* '''Remote-VNC''' method (recommended): First set up a VNC (Virtual Network Computing) remote desktop connection to one of the JUSTUS 2 login or visualization nodes. To help you with that we have pre-installed [[TigerVNC]] on JUSTUS 2. See the [[TigerVNC]] documentation for details. Thereafter you can open a terminal window in that VNC session, load the GaussView module and start GaussView via command ''gaussview''.<br />
* '''X-forwarding''' method (usually slow): First log in to the JUSTUS 2 login or visualization nodes with X-forwarding enabled (i.e. ''ssh -X''). Then load the GaussView module and start GaussView via command '''gaussview'''. Major disadvantage: X-forwarding requires a very fast network connection with low latency. Usually home or Wi-Fi networks are too slow.<br />
<br><br />
'''Do NOT start long running large calculations interactively on login or visualization nodes. Use GaussView to prepare and save the input. Then submit the [[Gaussian]] job with the input to the queueing system.'''<br />
<br />
== Loading the module and starting GaussView (within VNC session) ==<br />
<br />
The best method to use GaussView is to start it within an interactive<br />
remote VNC session or a (Linux-)System with X-forwarding (e.g.: ssh -X 'your-id'@FQDN-of-cluster.de).<br />
<br><br />
Details on how to start an interactive VNC session running on a compute node can be found here: [[TigerVNC]]<br />
<br />
Within the X11 session open a terminal window (e.g. xterm) and execute:<br />
<pre><br />
$ module load chem/gaussview<br />
gaussview &<br />
</pre><br />
<br />
The GaussView module automatically loads the corresponding Gaussian module.<br />
<br />
[[File:Gaussview.jpg]]<br />
<br><br />
<br><br />
See [[#GaussView example session|GaussView example session]] for more infos how to work with the GUI.<br />
<br><br />
<br><br />
It is recommended to always specify the full module name including<br />
the version of the module, e.g. specify<br />
<pre><br />
$ module load chem/gaussview/6.1.1<br />
</pre><br />
to load version ''6.1.1'' of GaussView.<br />
<br><br />
<br />
== Starting Gaussian jobs interactively via GaussView ==<br />
<br />
Within the interactive VNC session running on a compute node<br />
you may start Gaussian jobs locally via ''Submit'' or ''Quick Launch''.<br />
These buttons are located within the ''Gaussian Calculation Setup'' window of GaussView.<br />
<br />
If you do so you must make sure that these jobs finish before<br />
the run time of the interactive VNC session ends. Furthermore<br />
your should never occupy more cores (GaussView setting '''Shared Processors''', default 1 core)<br />
than specified when submitting the VNC session (Slurm option '''--ntasks-per-node''', default 1 core).<br />
Finally be aware that your job might get killed if it uses more memory<br />
than requested.<br />
<br />
== Submitting Gaussian jobs to the queueing system ==<br />
<br />
First you can construct your Gaussian job interactively with GaussView.<br />
When done with setting up of the job, save the Gaussian command file to disk<br />
and submit that *.com file as described in the documentation of [[Gaussian]].<br />
Please read the Gaussian documentation as well and make sure that you<br />
specify reasonable values for the number of cores, memory usage,<br />
disk usage and job run time when submitting the job.<br />
<br />
= Examples =<br />
<br />
== GaussView example session ==<br />
<br />
* When 'gaussview' is started, the [[#Loading the module and starting GaussView (within VNC session)|'Builder Fragment']] of the programs main window (gray background) displays the 'Carbon Tetrahedral' by default.<br />
* Click in the middle of the construction window (blue background). The selected fragment should now be displayed there (undo clicks via 'Menu: Edit -> Undo').<br />
* Start a Gaussian calculation 'Menu: Calculate -> Gaussian Calculation Setup'. Keep all defaults and hit 'Button: Quick Launch' at the bottom.<br />
* View the results via 'Menu: Results -> Summary'.<br />
* Create orbital contour plots via 'Menu: Results -> Surface/Contours', 'Button: Cube Actions -> New Cube', 'Button: Surface Actions -> New Surface'. If you have accepted all defaults, you should see the HOMO orbital now.<br />
<br />
= Version-Specific Information =<br />
<br />
For specific information about version ''VERSION'' see the information available via the module system with the command<br />
<pre><br />
$ module help chem/gaussview/VERSION<br />
</pre><br />
Please read the local module help documentation before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussview&diff=8308Gaussview2021-02-16T14:12:37Z<p>C Mosch: </p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussview<br />
|-<br />
| Availability<br />
| [[:Category:bwForCluster_JUSTUS_2|bwForCluster JUSTUS 2]]<br />
|-<br />
| License<br />
| Commercial - see [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| [http://gaussian.com/gv611rn/ See section Citation of the Release Notes]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Gaussian Homepage]; [http://gaussian.com/gaussview6/ GaussView Documentation]<br />
|-<br />
| Graphical Interface<br />
| [[#Loading the module and starting GaussView (within VNC session)|Yes]]<br />
|-<br />
| Related program<br />
| See [[Gaussian]]<br />
|}<br />
<br />
= Description =<br />
'''GaussView''' is an advanced and powerful graphical user interface for [[Gaussian]]. With the aid of GaussView you can<br />
* build, save or load molecular structures (advanced molecule editor)<br />
* prepare Gaussian calculations (create and save Gaussian input files with all parameters)<br />
* monitor progress of running Gaussian calculations (login to the compute node and change to the temporary job directory)<br />
* load, view and analyze results (e.g. visualize 3D ISO surfaces of densities and orbitals, plot IR and Raman spectra)<br />
* setup, visualize and analyze parameter scans<br />
* setup QM/MM calculations<br />
For more information on features please visit GaussView's [http://gaussian.com/gaussview6/ ''GaussView Documentation''] web page.<br />
<br><br />
'''NEVER EVER start long running large calculations interactively on login or visualization nodes.'''<br />
<br><br />
<br />
= Versions and Availability =<br />
<br />
A list of versions currently available can be displayed after login to the bwForCluster by command<br />
<pre><br />
$ module avail chem/gaussview<br />
</pre><br />
<br />
== Parallel computing ==<br />
<br />
One can construct/save or start serial as well as parallel jobs with GaussView.<br />
To switch from serial to parallel one has to modify entry ''Shared Processors''<br />
within ''Gaussian Calculation Setup'' window.<br />
<br />
Please make sure that the speed up of the calculation is reasonable when requesting<br />
more cores. When doubling the number of cores, the speed up should be at least a<br />
factor 1.7, better 1.8.<br />
<br />
Please keep in mind that our Gaussian version is only shared-memory parallel.<br />
Therefore requesting more than one node does not make any sense.<br />
<br />
= Usage =<br />
<br />
There are two methods to start GaussView remotely on JUSTUS 2:<br />
* '''Remote-VNC''' method (recommended): First set up a VNC (Virtual Network Computing) remote desktop connection to one of the JUSTUS 2 login or visualization nodes. To help you with that we have pre-installed [[TigerVNC]] on JUSTUS 2. See the [[TigerVNC]] documentation for details. Thereafter you can open a terminal window in that VNC session, load the GaussView module and start GaussView via command ''gaussview''.<br />
* '''X-forwarding''' method (usually slow): First log in to the JUSTUS 2 login or visualization nodes with X-forwarding enabled (i.e. ''ssh -X''). Then load the GaussView module and start GaussView via command '''gaussview'''. Major disadvantage: X-forwarding requires a very fast network connection with low latency. Usually home or Wi-Fi networks are too slow.<br />
<br><br />
'''Do NOT start long running large calculations interactively on login or visualization nodes. Use GaussView to prepare and save the input. Then submit the [[Gaussian]] job with the input to the queueing system.'''<br />
<br />
== Loading the module and starting GaussView (within VNC session) ==<br />
<br />
The best method to use GaussView is to start it within an interactive<br />
remote VNC session or a (Linux-)System with X-forwarding (e.g.: ssh -X 'your-id'@FQDN-of-cluster.de).<br />
<br><br />
Details on how to start an interactive VNC session running on a compute node can be found here: [[TigerVNC]]<br />
<br />
Within the X11 session open a terminal window (e.g. xterm) and execute:<br />
<pre><br />
$ module load chem/gaussview<br />
gaussview &<br />
</pre><br />
<br />
The GaussView module automatically loads the corresponding Gaussian module.<br />
<br />
[[File:Gaussview.jpg]]<br />
<br><br />
<br><br />
See [[#GaussView example session|GaussView example session]] for more infos how to work with the GUI.<br />
<br><br />
<br><br />
It is recommended to always specify the full module name including<br />
the version of the module, e.g. specify<br />
<pre><br />
$ module load chem/gaussview/6.1.1<br />
</pre><br />
to load version ''6.1.1'' of GaussView.<br />
<br><br />
<br />
== Starting Gaussian jobs interactively via GaussView ==<br />
<br />
Within the interactive VNC session running on a compute node<br />
you may start Gaussian jobs locally via ''Submit'' or ''Quick Launch''.<br />
These buttons are located within the ''Gaussian Calculation Setup'' window of GaussView.<br />
<br />
If you do so you must make sure that these jobs finish before<br />
the run time of the interactive VNC session ends. Furthermore<br />
your should never occupy more cores (GaussView setting '''Shared Processors''', default 1 core)<br />
than specified when submitting the VNC session (Slurm option '''--ntasks-per-node''', default 1 core).<br />
Finally be aware that your job might get killed if it uses more memory<br />
than requested.<br />
<br />
== Submitting Gaussian jobs to the queueing system ==<br />
<br />
First you can construct your Gaussian job interactively with GaussView.<br />
When done with setting up of the job, save the Gaussian command file to disk<br />
and submit that *.com file as described in the documentation of [[Gaussian]].<br />
Please read the Gaussian documentation as well and make sure that you<br />
specify reasonable values for the number of cores, memory usage,<br />
disk usage and job run time when submitting the job.<br />
<br />
= Examples =<br />
<br />
== GaussView example session ==<br />
<br />
* When 'gaussview' is started, the [[#Loading the module and starting GaussView (within VNC session)|'Builder Fragment']] of the programs main window (gray background) displays the 'Carbon Tetrahedral' by default.<br />
* Click in the middle of the construction window (blue background). The selected fragment should now be displayed there (undo clicks via 'Menu: Edit -> Undo').<br />
* Start a Gaussian calculation 'Menu: Calculate -> Gaussian Calculation Setup'. Keep all defaults and hit 'Button: Quick Launch' at the bottom.<br />
* View the results via 'Menu: Results -> Summary'.<br />
* Create orbital contour plots via 'Menu: Results -> Surface/Contours', 'Button: Cube Actions -> New Cube', 'Button: Surface Actions -> New Surface'. If you have accepted all defaults, you should see the HOMO orbital now.<br />
<br />
= Version-Specific Information =<br />
<br />
For specific information about version ''VERSION'' see the information available via the module system with the command<br />
<pre><br />
$ module help chem/gaussview/VERSION<br />
</pre><br />
Please read the local module help documentation before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussview&diff=8307Gaussview2021-02-16T14:09:15Z<p>C Mosch: /* Description */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussview<br />
|-<br />
| Availability<br />
| [[:Category:bwForCluster_JUSTUS_2|bwForCluster JUSTUS 2]]<br />
|-<br />
| License<br />
| Commercial - see [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| [http://gaussian.com/gv611rn/ See section Citation of the Release Notes]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Gaussian Homepage]; [http://gaussian.com/gaussview6/ GaussView Documentation]<br />
|-<br />
| Graphical Interface<br />
| [[#Loading the module and starting GaussView (within VNC session)|Yes]]<br />
|-<br />
| Related program<br />
| See [[Gaussian]]<br />
|}<br />
<br />
= Description =<br />
'''GaussView''' is an advanced and powerful graphical user interface for [[Gaussian]]. With the aid of GaussView you can<br />
* build, save or load molecular structures (advanced molecule editor)<br />
* prepare Gaussian calculations (create and save Gaussian input files with all parameters)<br />
* monitor progress of running Gaussian calculations (log in to the compute node and change to the temporary job directory)<br />
* load, view and analyze results (e.g. visualize 3D ISO surfaces of densities and orbitals, plot IR and Raman spectra)<br />
* set up, visualize and analyze parameter scans<br />
* set up QM/MM calculations<br />
For more information on features please visit GaussView's [http://gaussian.com/gaussview6/ ''GaussView Documentation''] web page.<br />
<br><br />
There are two methods to start GaussView remotely on JUSTUS 2:<br />
* '''Remote-VNC''' method (recommended): First set up a VNC (Virtual Network Computing) remote desktop connection to one of the JUSTUS 2 login or visualization nodes. To help you with that we have pre-installed [[TigerVNC]] on JUSTUS 2. See the [[TigerVNC]] documentation for details. Thereafter you can open a terminal window in that VNC session, load the GaussView module and start GaussView via command ''gaussview''.<br />
* '''X-forwarding''' method (usually slow): First log in to the JUSTUS 2 login or visualization nodes with X-forwarding enabled (i.e. ''ssh -X''). Then load the GaussView module and start GaussView via command ''gaussview''. Major disadvantage: X-forwarding requires a very fast network connection with low latency. Usually home or Wi-Fi networks are too slow.<br />
<br><br />
'''Do NOT start long-running large calculations interactively on login or visualization nodes. Use GaussView to prepare and save the input. Then submit the [[Gaussian]] job with the input to the queueing system.'''<br />
<br><br />
<br />
= Versions and Availability =<br />
<br />
A list of currently available versions can be displayed after login to the bwForCluster with the command<br />
<pre><br />
$ module avail chem/gaussview<br />
</pre><br />
<br />
== Parallel computing ==<br />
<br />
One can construct, save and start serial as well as parallel jobs with GaussView.<br />
To switch from serial to parallel, modify the entry ''Shared Processors''<br />
within the ''Gaussian Calculation Setup'' window.<br />
<br />
Please make sure that the speed-up of the calculation is reasonable when requesting<br />
more cores. When doubling the number of cores, the speed-up should be at least a<br />
factor of 1.7, preferably 1.8.<br />
<br />
Please keep in mind that our Gaussian version is only shared-memory parallel.<br />
Therefore requesting more than one node does not make any sense.<br />
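When GaussView writes the input file, the ''Shared Processors'' setting corresponds to Gaussian's ''%NProcShared'' Link 0 directive at the top of the file. A minimal sketch of such an input (method, memory value and molecule are placeholders for illustration only):<br />

```text
%NProcShared=4
%Mem=8GB
#P B3LYP/6-31G(d) Opt

water optimization

0 1
O   0.000   0.000   0.000
H   0.000   0.757   0.586
H   0.000  -0.757   0.586
```

The value of ''%NProcShared'' should match the number of cores requested from the queueing system.<br />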
<br />
= Usage =<br />
<br />
== Loading the module and starting GaussView (within VNC session) ==<br />
<br />
The best method to use GaussView is to start it within an interactive<br />
remote VNC session or on a (Linux) system with X-forwarding (e.g. ssh -X 'your-id'@FQDN-of-cluster.de).<br />
<br><br />
Details on how to start an interactive VNC session running on a compute node can be found here: [[TigerVNC]]<br />
<br />
Within the X11 session open a terminal window (e.g. xterm) and execute:<br />
<pre><br />
$ module load chem/gaussview<br />
$ gaussview &<br />
</pre><br />
<br />
The GaussView module automatically loads the corresponding Gaussian module.<br />
<br />
[[File:Gaussview.jpg]]<br />
<br><br />
<br><br />
See [[#GaussView example session|GaussView example session]] for more information on how to work with the GUI.<br />
<br><br />
<br><br />
It is recommended to always specify the full module name including<br />
the version of the module, e.g. specify<br />
<pre><br />
$ module load chem/gaussview/6.1.1<br />
</pre><br />
to load version ''6.1.1'' of GaussView.<br />
<br><br />
<br />
== Starting Gaussian jobs interactively via GaussView ==<br />
<br />
Within the interactive VNC session running on a compute node<br />
you may start Gaussian jobs locally via ''Submit'' or ''Quick Launch''.<br />
These buttons are located within the ''Gaussian Calculation Setup'' window of GaussView.<br />
<br />
If you do so you must make sure that these jobs finish before<br />
the run time of the interactive VNC session ends. Furthermore<br />
you should never occupy more cores (GaussView setting '''Shared Processors''', default 1 core)<br />
than specified when submitting the VNC session (Slurm option '''--ntasks-per-node''', default 1 core).<br />
Finally be aware that your job might get killed if it uses more memory<br />
than requested.<br />
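Inside the VNC session you can cross-check this before launching a job: Slurm exports the granted task count, so a small guard like the following (a sketch; ''nshared'' stands for the value you plan to enter as ''Shared Processors'') helps avoid oversubscribing the allocation:<br />

```shell
# Compare the planned GaussView 'Shared Processors' value with the
# core count Slurm granted to the interactive session.
nshared=4                          # value to be entered in GaussView
granted=${SLURM_NTASKS:-1}         # set by Slurm inside the allocation
if [ "$granted" -lt "$nshared" ]; then
    echo "Only $granted core(s) granted, reduce Shared Processors" >&2
else
    echo "OK: $nshared core(s) fit into the allocation"
fi
```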
<br />
== Submitting Gaussian jobs to the queueing system ==<br />
<br />
First you can construct your Gaussian job interactively with GaussView.<br />
When done setting up the job, save the Gaussian command file to disk<br />
and submit that *.com file as described in the documentation of [[Gaussian]].<br />
Please read the Gaussian documentation as well and make sure that you<br />
specify reasonable values for the number of cores, memory usage,<br />
disk usage and job run time when submitting the job.<br />
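As an illustration, a batch script wrapping such a saved input might look like the following sketch (resource values, file names and the ''g16'' command are assumptions; the [[Gaussian]] page documents the officially supported way to submit):<br />

```shell
#!/bin/bash
#SBATCH --nodes=1                # Gaussian is shared-memory parallel only
#SBATCH --ntasks-per-node=4      # should match the cores set in GaussView
#SBATCH --mem=8gb                # somewhat more than the memory in the input
#SBATCH --time=02:00:00

module load chem/gaussview       # also loads the matching Gaussian module
g16 < myjob.com > myjob.log
```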
<br />
= Examples =<br />
<br />
== GaussView example session ==<br />
<br />
* When 'gaussview' is started, the [[#Loading the module and starting GaussView (within VNC session)|'Builder Fragment']] of the program's main window (gray background) displays the 'Carbon Tetrahedral' by default.<br />
* Click in the middle of the construction window (blue background). The selected fragment should now be displayed there (undo clicks via 'Menu: Edit -> Undo').<br />
* Start a Gaussian calculation 'Menu: Calculate -> Gaussian Calculation Setup'. Keep all defaults and hit 'Button: Quick Launch' at the bottom.<br />
* View the results via 'Menu: Results -> Summary'.<br />
* Create orbital contour plots via 'Menu: Results -> Surface/Contours', 'Button: Cube Actions -> New Cube', 'Button: Surface Actions -> New Surface'. If you have accepted all defaults, you should see the HOMO orbital now.<br />
<br />
= Version-Specific Information =<br />
<br />
For specific information about version ''VERSION'', see the documentation available via the module system with the command<br />
<pre><br />
$ module help chem/gaussview/VERSION<br />
</pre><br />
Please read the local module help documentation before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=BwForCluster_User_Access&diff=8292BwForCluster User Access2021-02-16T10:29:21Z<p>C Mosch: /* Personal registration at a bwForCluster - account creation */</p>
<hr />
<div>Granting user access to a bwForCluster requires 3 steps:<br />
<br />
# [[File:Zas assignment icon.svg|25px|]] Become part of a ''Rechenvorhaben (RV)'' (compute project), either by joining one as a coworker or by creating a new one <br />
# [[File:bwfor entitlement icon.svg|25px|]] Permission from your university to use a bwForCluster, called the [[BwForCluster_Entitlement| bwForCluster entitlement]] <br />
# [[#Personal registration at bwForCluster | Personal registration at the cluster site ]] based on approved ''RV'' [[File:Zas assignment icon.svg|25px|]] and issued bwForCluster entitlement [[File:bwfor entitlement icon.svg|25px|]].<br />
<br />
<br />
{| style="width: 100%; border-spacing: 5px;"<br />
| style="text-align:center; color:#000;vertical-align:middle;font-size:75%;" |[[File:Bwforreg.svg|center|border|500px|]] <br />
|-<br />
| style="text-align:center; color:#000;vertical-align:middle;" |<br />
|}<br />
<br />
<br />
<br><br />
Steps 1 and 2 can be done at the same time. When both are finished, you can do step 3. Which cluster you will get access to depends on your research area and is decided in step 1.<br />
<br />
<br />
= <b> ''RV'' registration at ''ZAS'' </b>=<br />
You will have to register a ''Rechenvorhaben'' (RV, a compute project) in which you briefly describe your group's compute activities and the resources you need. Any number of co-workers can then join your RV without having to register another RV. <br />
<br />
== Register a new "RV" ==<br />
<br />
Typically done only by the leader of a scientific work group or the senior scientist of a research group/collaboration.<br />
<br />
If you register your own RV, you will be:<br />
# held accountable for the co-workers in the RV<br />
# asked to provide information for the two reports required by the DFG for their funding of bwFor clusters<br />
# likely asked for a contribution to a future DFG grant proposal for a new bwFor cluster in your area of research ("wissenschaftliches Beiblatt")<br />
<br />
Please follow the steps at [[bwForCluster RV registration]]<br />
<br />
== Become Coworker of an "RV"==<br />
<br />
Your advisor (the "RV responsible") will provide you with the following data on the RV:<br />
* acronym<br />
* password<br />
<br />
To become a coworker of an ''RV'', please log in at<br />
* https://www.bwhpc-c5.de/en/ZAS/bwforcluster_collaboration.php<br />
and provide acronym and password. You will be assigned to the 'RV' as a member.<br />
<br />
After submitting the request you will receive an email from ''ZAS'' about the further steps (i.e. [[#Personal registration at bwForCluster | personal registration at assigned bwForCluster]]). <br />
The RV owner and any managers will be notified automatically. <br />
You can see your RV memberships at https://www.bwhpc-c5.de/en/ZAS/info_rv.php<br />
<br><br />
<br />
= <b> Permission of Your University ("bwForCluster entitlement") </b> =<br />
<br />
Your own university has to grant you permission to use a bwForCluster. <br />
<br />
Getting permission from your university to calculate on bwForClusters is independent of an RV and can be done before or while getting an RV. If you are only creating an RV for your research group but do '''not''' plan to use the cluster yourself, you do not need to do this step. <br />
<br />
Each university has its own procedure. <br />
<br />
The page [[BwForCluster_Entitlement]] contains a list of participating universities and links to instructions on how to get a bwForCluster entitlement at each of them.<br />
<br />
= <b> Personal registration at a bwForCluster - account creation</b> = <br />
<br />
<b>Prerequisites for successful account creation:</b><br />
* [[BwForCluster_User_Access#RV_registration_at_ZAS|Membership in an RV (belonging to the bwForCluster you plan to join)]].<br />
* [[BwForCluster_User_Access#Permission_of_Your_University_.28.22bwForCluster_entitlement.22.29|bwForCluster entitlement (assigned by your university)]].<br />
<br />
Once you have registered your own RV (''Rechenvorhaben'')<br />
or a membership in an RV, you will receive<br />
an email with a link to a website where you can create an account for yourself<br />
on that cluster. This email will direct you to one of the following websites:<br />
<br />
Available bwForCluster registration servers (service providers):<br />
{| style="border:3px solid darkgray; margin: 1em auto 1em auto;" width="72%"<br />
|- <br />
!scope="row" {{Darkgray}} | Cluster topic and location<br />
!scope="row" {{Darkgray}} | Registration server (for account creation)<br />
|-<br />
| bwForCluster JUSTUS 2 Ulm for Computational Chemistry and Quantum Sciences<br />
| https://login.bwidm.de <br />
|-<br />
| bwForCluster MLS&WISO (Production and Development)<br />
| https://bwservices.uni-heidelberg.de<br />
|-<br />
| bwForCluster NEMO Freiburg<br />
| https://bwservices.uni-freiburg.de<br />
|-<br />
| bwForCluster BinAC Tübingen<br />
| https://bwservices.uni-tuebingen.de<br />
|}<br />
<br />
In this chapter, a user account will be created at the cluster based on your personal credentials. <br />
<br />
After having completed chapters 1 and 2 (RV approval and bwForCluster entitlement) please visit the<br />
<br />
* bwForCluster ''service provider'' registration website (see table above or email after RV approval):<br />
*# Select your home organization from the list of organizations and click '''Proceed'''.<br />
*# You will be redirected to the ''Identity Provider'' of your home organization.<br />
*# Enter your home-organizational user ID (might be user name, email, ...) and password and click '''Login''' or '''Anmelden'''.<br />
*# When doing this for the first time you need to accept that your personal data is transferred to the ''service provider''.<br />
*# You will be redirected back to the cluster registration website.<br />
*# '''JUSTUS 2 only''': This step is required before registration for JUSTUS 2: To improve security a 2-factor authentication mechanism (2FA) is being enforced. You can manage your 2FA tokens by clicking on [https://bwidm.scc.kit.edu/user/twofa.xhtml this link] or on '''My Tokens''' in the main menu of the JUSTUS 2 registration website. The instructions for registering a new 2FA token can be found on the following page: [[BwForCluster User Access/2FA Tokens]]. Please create at least one 2FA token before proceeding with JUSTUS 2 registration.<br />
*# Select '''Service description''' within the box of your '''designated cluster'''. If the cluster is not visible, the reasons are either a missing entitlement or you are not member in an RV assigned to that cluster - see prerequisites at the beginning of this chapter.<br />
*# Click on the '''Register''' link below the service description to register for this cluster.<br />
*# Make sure all requirements are met by checking the '''Requirements''' box at the top. If the requirements are not met you might be able to correct the issue by following the instructions. In all other cases please [http://www.support.bwhpc-c5.de open a ticket at the bwSupport Portal]. [[File:BwUniCluster 2.0 access login bwidm registration requirements.png|center|border|]]<br />
*# Read and accept the terms and conditions of use ('''[v] I have read and accepted the terms of use''') and click on button '''Register'''. When requirements are missing, e.g. a missing second factor for authentication, you may need to correct that before being able to click on button "Register".<br />
*# Click on '''Set Service Password''' and set a password for the cluster. '''Note: Setting a SERVICE password is MANDATORY for access to any bwForCluster. Using the password of your home organization is not accepted anymore.'''<br />
*# Finally you will receive an email with instructions on how to log in to the cluster. Please wait at least 15 minutes before trying to log in. More details about cluster login can be found in the next chapter. '''Note: Carefully read the email sent by the registration server after account registration.'''<br />
*# '''Note:''' You can return to the registration website at any time, in order to review your registration details, change/reset your service password or deregister from the service by yourself.<br />
<br />
= <b> Login to bwForCluster </b> = <br />
<br />
Personalized details about how to log in to the cluster are included<br />
in an email sent after registration at the bwForCluster service provider.<br />
<br />
General instructions for the bwForCluster login can be found here:<br />
{| style="border:3px solid darkgray; margin: 1em auto 1em auto;" width="73%"<br />
|- <br />
!scope="row" {{Darkgray}} | Cluster topic and location<br />
!scope="row" {{Darkgray}} | Login instructions<br />
|-<br />
| bwForCluster Chemistry JUSTUS 2 Ulm for Computational Chemistry and Quantum Sciences<br />
| [[BwForCluster_JUSTUS2_Login|bwForCluster JUSTUS 2 Login]]<br />
|-<br />
| bwForCluster MLS&WISO Production<br />
| [[BwForCluster_MLS&WISO_Production_Login|bwForCluster MLS&WISO Production Login]]<br />
|-<br />
| bwForCluster MLS&WISO Development<br />
| [[BwForCluster_MLS&WISO_Development_Login|bwForCluster MLS&WISO Development Login]]<br />
|-<br />
| bwForCluster NEMO Freiburg<br />
| [[bwForCluster NEMO Login]] <br />
|-<br />
| bwForCluster BinAC Tübingen<br />
| [[bwForCluster BinAC Login]] <br />
<br />
|}<br />
<br />
<br><br />
<br />
= <b> Costs and Funding </b> =<br />
The usage of the bwForClusters is free of charge. bwForClusters are customized to the requirements of particular research areas. <br />
<br />
bwForClusters are financed by the DFG (German Research Foundation) and by the Ministry of Science, Research and Arts of Baden-Württemberg based on scientific grant proposals (compare the proposal guidelines as per Art. 91b GG).<br />
<br />
----<br />
[[Category:Access|bwForCluster]][[Category:bwForCluster_Chemistry]][[Category:bwForCluster_MLS&WISO_Production]][[Category:bwForCluster_MLS&WISO_Development]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=BwForCluster_User_Access/2FA_Tokens&diff=8291BwForCluster User Access/2FA Tokens2021-02-16T10:28:49Z<p>C Mosch: </p>
<hr />
<div><br />
----<br />
<br />
'''Currently this description is only valid for bwForCluster JUSTUS 2.'''<br />
<br />
'''When there is an old, useless software token (e.g. not initialized, smartphone lost) or an old, dysfunctional backup TAN list (e.g. no printout), delete those BEFORE setting up a new TOTP token and a backup TAN list.'''<br />
<br />
'''After cleaning up your old tokens, FIRST set up a software/hardware TOTP token. THEN create and print a backup TAN list in addition.'''<br />
<br />
----<br />
<br />
<br />
To improve security a '''2-factor authentication mechanism (2FA)''' is being enforced for logins to the bwForCluster. In addition to the service password a second value, the '''second factor''', has to be entered on every login.<br />
<br />
<br />
= How 2FA works =<br />
<br />
Six-digit, auto-generated, time-dependent '''One-Time Passwords''' (TOTP) are used. These TOTPs are generated either by a special hardware device ('''hardware token''') or by an application running on a smartphone or computer ('''software token''').<br />
<br />
The token has to be synchronized with a central server (a shared secret is exchanged) before it can be used for authentication and then generates an endless stream of six-digit values (TOTP values) which can only be used once and are only valid during a very short interval of time. This makes it much harder for potential attackers to access the HPC system, even if they know the regular service password.<br />
<br />
Typically a new TOTP value is generated every 30 seconds. When the current TOTP value has once been used successfully for a login, it is depleted and one has to wait up to 30 seconds for the next TOTP value.<br />
<br />
== Hardware tokens ==<br />
<br />
Generally hardware tokens are available for users from the Karlsruhe Institute of Technology (KIT). Every KIT member automatically gets a hardware token when joining the KIT.<br />
<br />
<br/><br />
<br />
[[File:2fa token code.jpg|center|frame|Hardware Token used at KIT]]<br />
<br />
<br/><br />
<br />
== Software tokens ==<br />
<br />
Software tokens (typically an app on your smartphone) are available for all bwHPC users.<br />
<br />
'''It is very important that the device that generates the One-Time Passwords and the device which is used to log into the bwForCluster are not the same.''' Otherwise an attacker who gains access to your system can steal both the service password and the secret key of the software token application, which allows them to generate One-Time Passwords and log into the HPC system without your knowledge.<br />
<br />
The most common solution is to use a mobile device (e.g. your smartphone or tablet) as a software token by installing one of the following apps:<br />
<br />
* FreeOTP for [https://play.google.com/store/apps/details?id=org.fedorahosted.freeotp Android] or [https://apps.apple.com/de/app/freeotp-authenticator/id872559395 iOS]<br />
<br />
* Google Authenticator for [https://play.google.com/store/apps/details?id=com.google.android.apps.authenticator2 Android] or [https://apps.apple.com/de/app/google-authenticator/id388497605 iOS]<br />
<br />
* Microsoft Authenticator for [https://play.google.com/store/apps/details?id=com.azure.authenticator Android] and [https://apps.apple.com/de/app/microsoft-authenticator/id983156458 iOS]<br />
<br />
* LastPass Authenticator for [https://play.google.com/store/apps/details?id=com.lastpass.authenticator Android], [https://apps.apple.com/de/app/lastpass-authenticator/id1079110004 iOS] or [https://www.microsoft.com/de-de/p/lastpass-authenticator/9nblggh5l9d7?activetab=pivot:overviewtab Windows]<br />
<br />
* Yubico Authenticator for [https://play.google.com/store/apps/details?id=com.yubico.yubioath Android] or [https://apps.apple.com/us/app/yubico-authenticator/id1476679808 iOS]<br />
<br />
* andOTP Authenticator for [https://play.google.com/store/apps/details?id=org.shadowice.flocke.andotp Android]<br />
<br />
* Aegis Authenticator for [https://play.google.com/store/apps/details?id=com.beemdevelopment.aegis Android]<br />
<br />
* Authy for [https://authy.com/download/ Mac, Windows or Linux]<br />
<br />
* GNOME Authenticator for [https://github.com/bilelmoussaoui/Authenticator/ Linux]<br />
<br />
Open source apps for Android are available via f-droid store: [https://f-droid.org/en/packages/org.shadowice.flocke.andotp/ andOTP Authenticator], [https://f-droid.org/en/packages/com.beemdevelopment.aegis/ Aegis Authenticator] and [https://f-droid.org/en/packages/com.yubico.yubioath/ Yubico Authenticator].<br />
<br />
These are only suggestions. You can use any application compatible with the [https://tools.ietf.org/html/rfc6238 TOTP] standard.<br />
<br/><br />
<br/><br />
<br />
[[File:Freeotp-example.png|center|frame|An example of the FreeOTP app on Android, displaying generated One-Time Passwords for various services]]<br />
<br />
= Token Management =<br />
<br />
'''bwForCluster tokens''' are generally managed via the '''Index (old: User) -> My Tokens''' menu entry on the central [https://login.bwidm.de/ bwIDM registration server]. Here you can register, activate, deactivate and delete tokens.<br />
<br />
Usually a TOTP token that has been connected to your account is valid for all clusters on the registration server. Thus you can use the same TOTP token for bwUniCluster 2.0 and JUSTUS 2.<br />
<br />
'''KIT users''' can also re-use their existing hardware and software tokens for the HPC systems.<br />
<br />
'''In addition to your regular hardware/software token we recommend registering at least one backup TAN list''', e.g. for emergencies when access to your regular hardware/software token has been lost. You should print and store the backup TAN list at a safe location.<br />
<br />
See chapter 2.1 for details about how to register a new software/hardware token and chapter 2.2 for details about how to create and use a backup TAN list.<br />
<br />
<br />
== Registering a new Software or Hardware Token ==<br />
<br />
1. First you need to log in to the [https://login.bwidm.de/ bwIDM registration server] (if you are not logged in already).<br />
<br />
2. After login please select entry '''My Tokens''' in the '''Index (old: User)''' menu.<br />
<br />
3. If you have not set up any tokens yet, you can proceed directly with step 4 below (since token management is accessible directly for users without tokens). Otherwise (a token is already connected to your account) you need to log into the token management by entering a TOTP and clicking on '''Check'''.<br />
<br />
4. Registering a new software token starts with a click on '''New smartphone token'''.<br />
<br />
[[File:JUSTUS-2-2FA-create-a-new-token-here.png|center|]]<br />
<br />
5. A new window opens. Click on '''Start''' to generate a new '''QR code'''. This may take a while.<br />
<br />
'''NOTE: The QR code contains a key which has to remain secret. Only use the QR code to link your software token app with bwIDM in the next step. Do not save the QR code, print it out or share it with someone else. You can always generate more codes later.'''<br />
<br />
[[File:BwUniCluster 2.0 2fa register new qr.png|center|]]<br />
<br />
6. Start the software token app on your separate device and scan the QR code. The exact process is a little bit different in every app, but is usually started by pressing on a button with a plus (+) sign or an icon of a QR code.<br />
<br />
7. Once the QR code has been loaded into your software token app there should be a new entry called '''bwIDM'''.<br />
<br />
8. Generate a One-Time Password by pressing on this entry or selecting the appropriate button/menu item. You will receive a six-digit code. Enter this code into the field labeled "Current code:" in your bwIDM browser window to prove that the connection has worked and then click on '''Check'''.<br />
<br />
9. If everything worked as expected, you will be returned to the '''My Tokens''' screen and there will be a new entry for your Software Token:<br />
<br />
[[File:BwUniCluster 2.0 2fa register new success.png|center|]]<br />
<br />
10. Repeat the process to register additional tokens.<br />
<br />
11. '''Please register at least one backup TAN list in addition to the hardware/software token you plan to use regularly (see chapter 2.2 for that).''' If you only register a single token and happen to lose access to it, e.g. because you lose your device, uninstall the software token application or data gets deleted/corrupted, you will neither be able to log into the cluster system nor register a new token (see chapter 3 on how to request a reset of all tokens).<br />
<br />
12. If you are in the process of registering for a bwForCluster, you can proceed with the next step [https://wiki.bwhpc.de/e/BwForCluster_User_Access#Personal_registration_at_a_bwForCluster_-_account_creation on the registration server].<br />
<br />
== Generate a new Backup TAN List ==<br />
<br />
1. First you need to log in to the [https://login.bwidm.de/ bwIDM registration server] (if you are not logged in already).<br />
<br />
2. After login please select entry '''My Tokens''' in the '''Index (old: User)''' menu.<br />
<br />
3. If you have not set up any tokens yet, you can proceed directly with step 4 below (since token management is accessible directly for users without tokens). Otherwise (a token is already connected to your account) you need to log into the token management by entering a TOTP and clicking on '''Check'''.<br />
<br />
4. To generate a backup TAN list, click on '''Create new TAN list'''.<br />
<br />
[[File:JUSTUS-2-2FA-create-a-new-token-here.png|center|]]<br />
<br />
5. A new window opens. Click on button '''Start''' to create the backup TAN list.<br />
<br />
6. If everything worked as expected, the new '''Backup TAN list''' is displayed:<br />
<br />
[[File:JUSTUS-2-2FA-backup-TAN-list.png|center|]]<br />
<br />
7. Click on button '''Show TANs''' and then on button '''Print'''. Print the TAN list and store the printout at a safe location.<br />
<br />
8. The 8-digit TANs of the backup TAN list can be used in precisely the same way as the TOTP values created with your hardware or software token. Remember: once used, a TAN is no longer valid.<br />
<br />
= Recovery when access to all tokens has been lost =<br />
<br />
To gain access again after you have lost access to all your tokens including the backup TAN list, please contact the [[BwForCluster_JUSTUS_Contact_and_Support|JUSTUS 2 support]] via bwSupport Portal '''(NOT email)''' and ask for a reset of all your TOTP tokens. This process may take some time.<br />
<br />
= Deactivating a Token =<br />
<br />
Click on the '''Disable''' button next to the Token entry on the '''My Tokens''' screen.<br />
<br />
= Deleting a Token =<br />
<br />
After a Token has been disabled a new button labeled '''Delete''' will appear. Click on it to delete the Token.</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussview&diff=8257Gaussview2021-01-26T11:57:08Z<p>C Mosch: /* Submitting Gaussian jobs to the queueing system */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussview<br />
|-<br />
| Availability<br />
| [[:Category:bwForCluster_JUSTUS_2|bwForCluster JUSTUS 2]]<br />
|-<br />
| License<br />
| Commercial - see [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| [http://gaussian.com/gv611rn/ See section Citation of the Release Notes]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Gaussian Homepage]; [http://gaussian.com/gaussview6/ GaussView Documentation]<br />
|-<br />
| Graphical Interface<br />
| [[#Loading the module and starting GaussView (within VNC session)|Yes]]<br />
|-<br />
| Related program<br />
| See [[Gaussian]]<br />
|}<br />
<br />
= Description =<br />
'''GaussView''' is an advanced and powerful graphical user interface for [[Gaussian]]. With the aid of GaussView you can<br />
* build, save or load molecular structures (advanced molecule editor)<br />
* set up, start, monitor and control Gaussian calculations (create Gaussian input files with all parameters)<br />
* load, view and analyze results (e.g. visualize 3D ISO surfaces of densities and orbitals, plot IR and Raman spectra)<br />
* set up, visualize and analyze parameter scans<br />
* set up and control QM/MM calculations<br />
For more information on features please visit GaussView's [http://gaussian.com/gaussview6/ ''GaussView Documentation''] web page.<br />
<br><br />
When running GaussView on one of the login nodes via X-forwarding, please do not start long Gaussian calculations interactively in the foreground.<br />
Use GaussView to save the Gaussian command file to disk and submit it as an ordinary [[Gaussian]] job.<br />
<br><br />
When running GaussView via VNC, e.g. via [[TigerVNC]], on a compute node, you may run longer interactive GaussView calculations.<br />
Please do not start interactive jobs occupying more cores than have been requested when starting the interactive VNC session.<br />
<br><br />
<br />
= Versions and Availability =<br />
<br />
A list of currently available versions can be displayed after login to the bwForCluster with the command<br />
<pre><br />
$ module avail chem/gaussview<br />
</pre><br />
<br />
== Parallel computing ==<br />
<br />
One can construct, save and start serial as well as parallel jobs with GaussView.<br />
To switch from serial to parallel, modify the entry ''Shared Processors''<br />
within the ''Gaussian Calculation Setup'' window.<br />
<br />
Please make sure that the speed-up of the calculation is reasonable when requesting<br />
more cores. When doubling the number of cores, the speed-up should be at least a<br />
factor of 1.7, preferably 1.8.<br />
<br />
Please keep in mind that our Gaussian version is only shared-memory parallel.<br />
Therefore requesting more than one node does not make any sense.<br />
<br />
= Usage =<br />
<br />
== Loading the module and starting GaussView (within VNC session) ==<br />
<br />
The best method to use GaussView is to start it within an interactive<br />
remote VNC session or on a (Linux) system with X-forwarding (e.g. ssh -X 'your-id'@FQDN-of-cluster.de).<br />
<br><br />
Details on how to start an interactive VNC session running on a compute node can be found here: [[TigerVNC]]<br />
<br />
Within the X11 session open a terminal window (e.g. xterm) and execute:<br />
<pre><br />
$ module load chem/gaussview<br />
$ gaussview &<br />
</pre><br />
<br />
The GaussView module automatically loads the corresponding Gaussian module.<br />
<br />
[[File:Gaussview.jpg]]<br />
<br><br />
<br><br />
See [[#GaussView example session|GaussView example session]] for more information on how to work with the GUI.<br />
<br><br />
<br><br />
It is recommended to always specify the full module name including<br />
the version of the module, e.g. specify<br />
<pre><br />
$ module load chem/gaussview/6.1.1<br />
</pre><br />
to load version ''6.1.1'' of GaussView.<br />
<br><br />
<br />
== Starting Gaussian jobs interactively via GaussView ==<br />
<br />
Within the interactive VNC session running on a compute node<br />
you may start Gaussian jobs locally via ''Submit'' or ''Quick Launch''.<br />
These buttons are located within the ''Gaussian Calculation Setup'' window of GaussView.<br />
<br />
If you do so you must make sure that these jobs finish before<br />
the run time of the interactive VNC session ends. Furthermore,<br />
you should never occupy more cores (GaussView setting '''Shared Processors''', default 1 core)<br />
than specified when submitting the VNC session (Slurm option '''--ntasks-per-node''', default 1 core).<br />
Finally be aware that your job might get killed if it uses more memory<br />
than requested.<br />
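Before setting ''Shared Processors'' you can verify how many cores your interactive session was actually granted. In a terminal inside the VNC session, inspect the Slurm environment (a sketch; this variable is only set if ''--ntasks-per-node'' was given when submitting the session):<br />
<pre><br />
$ echo $SLURM_NTASKS_PER_NODE<br />
</pre><br />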
<br />
== Submitting Gaussian jobs to the queueing system ==<br />
<br />
First, construct your Gaussian job interactively with GaussView.<br />
When you are done setting up the job, save the Gaussian command file to disk<br />
and submit that *.com file as described in the documentation of [[Gaussian]].<br />
Please read the Gaussian documentation as well and make sure that you<br />
specify reasonable values for the number of cores, memory usage,<br />
disk usage and job run time when submitting the job.<br />
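As an illustration, a batch script for a *.com file saved with GaussView might look as follows. This is only a sketch: the resource values are placeholders, and the module name, Gaussian executable (here assumed to be ''g16'') and scratch handling are cluster-specific; see [[Gaussian]] for the authoritative template.<br />
<pre><br />
#!/bin/bash<br />
#SBATCH --nodes=1<br />
#SBATCH --ntasks-per-node=4    # should match %NProcShared in the input file<br />
#SBATCH --mem=8gb<br />
#SBATCH --time=02:00:00<br />
<br />
module load chem/gaussian      # exact module name/version: see the Gaussian page<br />
<br />
g16 < myjob.com > myjob.log    # 'g16' is an assumption for Gaussian 16<br />
</pre><br />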
<br />
= Examples =<br />
<br />
== GaussView example session ==<br />
<br />
* When 'gaussview' is started, the [[#Loading the module and starting GaussView (within VNC session)|'Builder Fragment']] of the program's main window (gray background) displays the 'Carbon Tetrahedral' by default.<br />
* Click in the middle of the construction window (blue background). The selected fragment should now be displayed there (undo clicks via 'Menu: Edit -> Undo').<br />
* Start a Gaussian calculation 'Menu: Calculate -> Gaussian Calculation Setup'. Keep all defaults and hit 'Button: Quick Launch' at the bottom.<br />
* View the results via 'Menu: Results -> Summary'.<br />
* Create orbital contour plots via 'Menu: Results -> Surface/Contours', 'Button: Cube Actions -> New Cube', 'Button: Surface Actions -> New Surface'. If you have accepted all defaults, you should see the HOMO orbital now.<br />
<br />
= Version-Specific Information =<br />
<br />
For specific information about version ''VERSION'', consult the module system with the command<br />
<pre><br />
$ module help chem/gaussview/VERSION<br />
</pre><br />
Please read the local module help documentation before using the software. The module help contains links to additional documentation and resources, as well as information about the support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussview&diff=8256Gaussview2021-01-26T11:56:06Z<p>C Mosch: /* Starting Gaussian jobs interactively via GaussView */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussview<br />
|-<br />
| Availability<br />
| [[:Category:bwForCluster_JUSTUS_2|bwForCluster JUSTUS 2]]<br />
|-<br />
| License<br />
| Commercial - see [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| [http://gaussian.com/gv611rn/ See section Citation of the Release Notes]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Gaussian Homepage]; [http://gaussian.com/gaussview6/ GaussView Documentation]<br />
|-<br />
| Graphical Interface<br />
| [[#Loading the module and starting GaussView (within VNC session)|Yes]]<br />
|-<br />
| Related program<br />
| See [[Gaussian]]<br />
|}<br />
<br />
= Description =<br />
'''GaussView''' is an advanced and powerful graphical user interface for [[Gaussian]]. With the aid of GaussView you can<br />
* build, save or load molecular structures (advanced molecule editor)<br />
* set up, start, monitor and control Gaussian calculations (create Gaussian input files with all parameters)<br />
* load, view and analyze results (e.g. visualize 3D ISO surfaces of densities and orbitals, plot IR and Raman spectra)<br />
* setup, visualize and analyze parameter scans<br />
* setup and control QM/MM calculations<br />
For more information on features please visit GaussView's [http://gaussian.com/gaussview6/ ''GaussView Documentation''] web page.<br />
<br><br />
When running GaussView on one of the login nodes via X-forwarding please do not start long Gaussian calculations interactively in the foreground.<br />
Use GaussView to save the Gaussian command file to disk and submit it as ordinary [[Gaussian]] job.<br />
<br><br />
When running GaussView via VNC, e.g. via [[TigerVNC]], on a compute node, you may run longer interactive GaussView calculations.<br />
Please do not start interactive jobs occupying more cores than have been requested when starting the interactive VNC session.<br />
<br><br />
<br />
= Versions and Availability =<br />
<br />
A list of versions currently available can be displayed after login to the bwForCluster by command<br />
<pre><br />
$ module avail chem/gaussview<br />
</pre><br />
<br />
== Parallel computing ==<br />
<br />
One can construct/save or start serial as well as parallel jobs with GaussView.<br />
To switch from serial to parallel one has to modify entry ''Shared Processors''<br />
within ''Gaussian Calculation Setup'' window.<br />
<br />
Please make sure that the speed up of the calculation is reasonable when requesting<br />
more cores. When doubling the number of cores, the speed up should be at least a<br />
factor 1.7, better 1.8.<br />
<br />
Please keep in mind that our Gaussian version is only shared-memory parallel.<br />
Therefore requesting more than one node does not make any sense.<br />
<br />
= Usage =<br />
<br />
== Loading the module and starting GaussView (within VNC session) ==<br />
<br />
The best method to use GaussView is to start it within an interactive<br />
remote VNC session or a (Linux-)System with X-forwarding (e.g.: ssh -X 'your-id'@FQDN-of-cluster.de).<br />
<br><br />
Details on how to start an interactive VNC session running on a compute node can be found here: [[TigerVNC]]<br />
<br />
Within the X11 session open a terminal window (e.g. xterm) and execute:<br />
<pre><br />
$ module load chem/gaussview<br />
gaussview &<br />
</pre><br />
<br />
The GaussView module automatically loads the corresponding Gaussian module.<br />
<br />
[[File:Gaussview.jpg]]<br />
<br><br />
<br><br />
See [[#GaussView example session|GaussView example session]] for more infos how to work with the GUI.<br />
<br><br />
<br><br />
It is recommended to always specify the full module name including<br />
the version of the module, e.g. specify<br />
<pre><br />
$ module load chem/gaussview/6.1.1<br />
</pre><br />
to load version ''6.1.1'' of GaussView.<br />
<br><br />
<br />
== Starting Gaussian jobs interactively via GaussView ==<br />
<br />
Within the interactive VNC session running on a compute node<br />
you may start Gaussian jobs locally via ''Submit'' or ''Quick Launch''.<br />
These buttons are located within the ''Gaussian Calculation Setup'' window of GaussView.<br />
<br />
If you do so you must make sure that these jobs finish before<br />
the run time of the interactive VNC session ends. Furthermore<br />
your should never occupy more cores (GaussView setting '''Shared Processors''', default 1 core)<br />
than specified when submitting the VNC session (Slurm option '''--ntasks-per-node''', default 1 core).<br />
Finally be aware that your job might get killed if it uses more memory<br />
than requested.<br />
<br />
== Submitting Gaussian jobs to the queueing system ==<br />
<br />
First you can construct your Gaussian job interactively with GaussView.<br />
When set up of the job is done one may store the Gaussian command file to disk<br />
and submit that *.com file as described in the documentation of [[Gaussian]].<br />
Please read the Gaussian documentation as well and make sure that you<br />
specify reasonable values for the number of cores, memory usage,<br />
disk usage and job run time when submitting the job.<br />
<br />
<br />
= Examples =<br />
<br />
== GaussView example session ==<br />
<br />
* When 'gaussview' is started, the [[#Loading the module and starting GaussView (within VNC session)|'Builder Fragment']] of the programs main window (gray background) displays the 'Carbon Tetrahedral' by default.<br />
* Click in the middle of the construction window (blue background). The selected fragment should now be displayed there (undo clicks via 'Menu: Edit -> Undo').<br />
* Start a Gaussian calculation 'Menu: Calculate -> Gaussian Calculation Setup'. Keep all defaults and hit 'Button: Quick Launch' at the bottom.<br />
* View the results via 'Menu: Results -> Summary'.<br />
* Create orbital contour plots via 'Menu: Results -> Surface/Contours', 'Button: Cube Actions -> New Cube', 'Button: Surface Actions -> New Surface'. If you have accepted all defaults, you should see the HOMO orbital now.<br />
<br />
= Version-Specific Information =<br />
<br />
For specific information about version ''VERSION'' see the information available via the module system with the command<br />
<pre><br />
$ module help chem/gaussview/VERSION<br />
</pre><br />
Please read the local module help documentation before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussview&diff=8255Gaussview2021-01-26T11:54:05Z<p>C Mosch: /* Loading the module and starting GaussView (within VNC session) */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussview<br />
|-<br />
| Availability<br />
| [[:Category:bwForCluster_JUSTUS_2|bwForCluster JUSTUS 2]]<br />
|-<br />
| License<br />
| Commercial - see [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| [http://gaussian.com/gv611rn/ See section Citation of the Release Notes]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Gaussian Homepage]; [http://gaussian.com/gaussview6/ GaussView Documentation]<br />
|-<br />
| Graphical Interface<br />
| [[#Loading the module and starting GaussView (within VNC session)|Yes]]<br />
|-<br />
| Related program<br />
| See [[Gaussian]]<br />
|}<br />
<br />
= Description =<br />
'''GaussView''' is an advanced and powerful graphical user interface for [[Gaussian]]. With the aid of GaussView you can<br />
* build, save or load molecular structures (advanced molecule editor)<br />
* set up, start, monitor and control Gaussian calculations (create Gaussian input files with all parameters)<br />
* load, view and analyze results (e.g. visualize 3D ISO surfaces of densities and orbitals, plot IR and Raman spectra)<br />
* setup, visualize and analyze parameter scans<br />
* setup and control QM/MM calculations<br />
For more information on features please visit GaussView's [http://gaussian.com/gaussview6/ ''GaussView Documentation''] web page.<br />
<br><br />
When running GaussView on one of the login nodes via X-forwarding please do not start long Gaussian calculations interactively in the foreground.<br />
Use GaussView to save the Gaussian command file to disk and submit it as ordinary [[Gaussian]] job.<br />
<br><br />
When running GaussView via VNC, e.g. via [[TigerVNC]], on a compute node, you may run longer interactive GaussView calculations.<br />
Please do not start interactive jobs occupying more cores than have been requested when starting the interactive VNC session.<br />
<br><br />
<br />
= Versions and Availability =<br />
<br />
A list of versions currently available can be displayed after login to the bwForCluster by command<br />
<pre><br />
$ module avail chem/gaussview<br />
</pre><br />
<br />
== Parallel computing ==<br />
<br />
One can construct/save or start serial as well as parallel jobs with GaussView.<br />
To switch from serial to parallel one has to modify entry ''Shared Processors''<br />
within ''Gaussian Calculation Setup'' window.<br />
<br />
Please make sure that the speed up of the calculation is reasonable when requesting<br />
more cores. When doubling the number of cores, the speed up should be at least a<br />
factor 1.7, better 1.8.<br />
<br />
Please keep in mind that our Gaussian version is only shared-memory parallel.<br />
Therefore requesting more than one node does not make any sense.<br />
<br />
= Usage =<br />
<br />
== Loading the module and starting GaussView (within VNC session) ==<br />
<br />
The best method to use GaussView is to start it within an interactive<br />
remote VNC session or a (Linux-)System with X-forwarding (e.g.: ssh -X 'your-id'@FQDN-of-cluster.de).<br />
<br><br />
Details on how to start an interactive VNC session running on a compute node can be found here: [[TigerVNC]]<br />
<br />
Within the X11 session open a terminal window (e.g. xterm) and execute:<br />
<pre><br />
$ module load chem/gaussview<br />
gaussview &<br />
</pre><br />
<br />
The GaussView module automatically loads the corresponding Gaussian module.<br />
<br />
[[File:Gaussview.jpg]]<br />
<br><br />
<br><br />
See [[#GaussView example session|GaussView example session]] for more infos how to work with the GUI.<br />
<br><br />
<br><br />
It is recommended to always specify the full module name including<br />
the version of the module, e.g. specify<br />
<pre><br />
$ module load chem/gaussview/6.1.1<br />
</pre><br />
to load version ''6.1.1'' of GaussView.<br />
<br><br />
<br />
== Starting Gaussian jobs interactively via GaussView ==<br />
<br />
Within the interactive VNC session running on a compute node<br />
you may start Gaussian jobs locally via ''Submit'' or ''Quick Launch''.<br />
These buttons are located within the ''Gaussian Calculation Setup'' window of GaussView.<br />
<br />
If you do so you must make sure that these jobs finish before<br />
the run time of the interactive VNC session ends. Furthermore<br />
your should never occupy more cores (GaussView setting '''Shared Processors''', default 1 core)<br />
than specified when submitting the VNC session (Moab option '''PPN''', default 1 core).<br />
Finally be aware that your job might get killed if it uses more memory<br />
than requested.<br />
<br />
== Submitting Gaussian jobs to the queueing system ==<br />
<br />
First you can construct your Gaussian job interactively with GaussView.<br />
When set up of the job is done one may store the Gaussian command file to disk<br />
and submit that *.com file as described in the documentation of [[Gaussian]].<br />
Please read the Gaussian documentation as well and make sure that you<br />
specify reasonable values for the number of cores, memory usage,<br />
disk usage and job run time when submitting the job.<br />
<br />
<br />
= Examples =<br />
<br />
== GaussView example session ==<br />
<br />
* When 'gaussview' is started, the [[#Loading the module and starting GaussView (within VNC session)|'Builder Fragment']] of the programs main window (gray background) displays the 'Carbon Tetrahedral' by default.<br />
* Click in the middle of the construction window (blue background). The selected fragment should now be displayed there (undo clicks via 'Menu: Edit -> Undo').<br />
* Start a Gaussian calculation 'Menu: Calculate -> Gaussian Calculation Setup'. Keep all defaults and hit 'Button: Quick Launch' at the bottom.<br />
* View the results via 'Menu: Results -> Summary'.<br />
* Create orbital contour plots via 'Menu: Results -> Surface/Contours', 'Button: Cube Actions -> New Cube', 'Button: Surface Actions -> New Surface'. If you have accepted all defaults, you should see the HOMO orbital now.<br />
<br />
= Version-Specific Information =<br />
<br />
For specific information about version ''VERSION'' see the information available via the module system with the command<br />
<pre><br />
$ module help chem/gaussview/VERSION<br />
</pre><br />
Please read the local module help documentation before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussview&diff=8254Gaussview2021-01-26T11:31:12Z<p>C Mosch: /* Versions and Availability */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussview<br />
|-<br />
| Availability<br />
| [[:Category:bwForCluster_JUSTUS_2|bwForCluster JUSTUS 2]]<br />
|-<br />
| License<br />
| Commercial - see [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| [http://gaussian.com/gv611rn/ See section Citation of the Release Notes]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Gaussian Homepage]; [http://gaussian.com/gaussview6/ GaussView Documentation]<br />
|-<br />
| Graphical Interface<br />
| [[#Loading the module and starting GaussView (within VNC session)|Yes]]<br />
|-<br />
| Related program<br />
| See [[Gaussian]]<br />
|}<br />
<br />
= Description =<br />
'''GaussView''' is an advanced and powerful graphical user interface for [[Gaussian]]. With the aid of GaussView you can<br />
* build, save or load molecular structures (advanced molecule editor)<br />
* set up, start, monitor and control Gaussian calculations (create Gaussian input files with all parameters)<br />
* load, view and analyze results (e.g. visualize 3D ISO surfaces of densities and orbitals, plot IR and Raman spectra)<br />
* setup, visualize and analyze parameter scans<br />
* setup and control QM/MM calculations<br />
For more information on features please visit GaussView's [http://gaussian.com/gaussview6/ ''GaussView Documentation''] web page.<br />
<br><br />
When running GaussView on one of the login nodes via X-forwarding please do not start long Gaussian calculations interactively in the foreground.<br />
Use GaussView to save the Gaussian command file to disk and submit it as ordinary [[Gaussian]] job.<br />
<br><br />
When running GaussView via VNC, e.g. via [[TigerVNC]], on a compute node, you may run longer interactive GaussView calculations.<br />
Please do not start interactive jobs occupying more cores than have been requested when starting the interactive VNC session.<br />
<br><br />
<br />
= Versions and Availability =<br />
<br />
A list of versions currently available can be displayed after login to the bwForCluster by command<br />
<pre><br />
$ module avail chem/gaussview<br />
</pre><br />
<br />
== Parallel computing ==<br />
<br />
One can construct/save or start serial as well as parallel jobs with GaussView.<br />
To switch from serial to parallel one has to modify entry ''Shared Processors''<br />
within ''Gaussian Calculation Setup'' window.<br />
<br />
Please make sure that the speed up of the calculation is reasonable when requesting<br />
more cores. When doubling the number of cores, the speed up should be at least a<br />
factor 1.7, better 1.8.<br />
<br />
Please keep in mind that our Gaussian version is only shared-memory parallel.<br />
Therefore requesting more than one node does not make any sense.<br />
<br />
= Usage =<br />
<br />
== Loading the module and starting GaussView (within VNC session) ==<br />
<br />
The best method to use GaussView is to start it within an interactive<br />
remote VNC session or a (Linux-)System with X-forwarding (e.g.: ssh -X 'your-id'@FQDN-of-cluster.de).<br />
<br><br />
Details on how to start an interactive VNC session running on a compute node can be found here: [[TigerVNC]]<br />
<br />
Within the X11 session open a terminal window (e.g. xterm) and execute:<br />
<pre><br />
$ module load chem/gaussview<br />
gaussview &<br />
</pre><br />
<br />
The GaussView module automatically loads the corresponding Gaussian module.<br />
<br />
[[File:Gaussview.jpg]]<br />
<br><br />
<br><br />
See [[#GaussView example session|GaussView example session]] for more infos how to work with the GUI.<br />
<br><br />
<br><br />
If you wish to load a specific version you may do so by specifying the version<br />
explicitly, e.g. specify<br />
<pre><br />
$ module load chem/gaussview/5.0.9<br />
</pre><br />
to load version ''5.0.9'' of GaussView.<br />
<br><br />
<br />
== Starting Gaussian jobs interactively via GaussView ==<br />
<br />
Within the interactive VNC session running on a compute node<br />
you may start Gaussian jobs locally via ''Submit'' or ''Quick Launch''.<br />
These buttons are located within the ''Gaussian Calculation Setup'' window of GaussView.<br />
<br />
If you do so you must make sure that these jobs finish before<br />
the run time of the interactive VNC session ends. Furthermore<br />
your should never occupy more cores (GaussView setting '''Shared Processors''', default 1 core)<br />
than specified when submitting the VNC session (Moab option '''PPN''', default 1 core).<br />
Finally be aware that your job might get killed if it uses more memory<br />
than requested.<br />
<br />
== Submitting Gaussian jobs to the queueing system ==<br />
<br />
First you can construct your Gaussian job interactively with GaussView.<br />
When set up of the job is done one may store the Gaussian command file to disk<br />
and submit that *.com file as described in the documentation of [[Gaussian]].<br />
Please read the Gaussian documentation as well and make sure that you<br />
specify reasonable values for the number of cores, memory usage,<br />
disk usage and job run time when submitting the job.<br />
<br />
<br />
= Examples =<br />
<br />
== GaussView example session ==<br />
<br />
* When 'gaussview' is started, the [[#Loading the module and starting GaussView (within VNC session)|'Builder Fragment']] of the programs main window (gray background) displays the 'Carbon Tetrahedral' by default.<br />
* Click in the middle of the construction window (blue background). The selected fragment should now be displayed there (undo clicks via 'Menu: Edit -> Undo').<br />
* Start a Gaussian calculation 'Menu: Calculate -> Gaussian Calculation Setup'. Keep all defaults and hit 'Button: Quick Launch' at the bottom.<br />
* View the results via 'Menu: Results -> Summary'.<br />
* Create orbital contour plots via 'Menu: Results -> Surface/Contours', 'Button: Cube Actions -> New Cube', 'Button: Surface Actions -> New Surface'. If you have accepted all defaults, you should see the HOMO orbital now.<br />
<br />
= Version-Specific Information =<br />
<br />
For specific information about version ''VERSION'' see the information available via the module system with the command<br />
<pre><br />
$ module help chem/gaussview/VERSION<br />
</pre><br />
Please read the local module help documentation before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussview&diff=8253Gaussview2021-01-26T11:26:00Z<p>C Mosch: /* Description */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussview<br />
|-<br />
| Availability<br />
| [[:Category:bwForCluster_JUSTUS_2|bwForCluster JUSTUS 2]]<br />
|-<br />
| License<br />
| Commercial - see [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| [http://gaussian.com/gv611rn/ See section Citation of the Release Notes]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Gaussian Homepage]; [http://gaussian.com/gaussview6/ GaussView Documentation]<br />
|-<br />
| Graphical Interface<br />
| [[#Loading the module and starting GaussView (within VNC session)|Yes]]<br />
|-<br />
| Related program<br />
| See [[Gaussian]]<br />
|}<br />
<br />
= Description =<br />
'''GaussView''' is an advanced and powerful graphical user interface for [[Gaussian]]. With the aid of GaussView you can<br />
* build, save or load molecular structures (advanced molecule editor)<br />
* set up, start, monitor and control Gaussian calculations (create Gaussian input files with all parameters)<br />
* load, view and analyze results (e.g. visualize 3D ISO surfaces of densities and orbitals, plot IR and Raman spectra)<br />
* setup, visualize and analyze parameter scans<br />
* setup and control QM/MM calculations<br />
For more information on features please visit GaussView's [http://gaussian.com/gaussview6/ ''GaussView Documentation''] web page.<br />
<br><br />
When running GaussView on one of the login nodes via X-forwarding please do not start long Gaussian calculations interactively in the foreground.<br />
Use GaussView to save the Gaussian command file to disk and submit it as ordinary [[Gaussian]] job.<br />
<br><br />
When running GaussView via VNC, e.g. via [[TigerVNC]], on a compute node, you may run longer interactive GaussView calculations.<br />
Please do not start interactive jobs occupying more cores than have been requested when starting the interactive VNC session.<br />
<br><br />
<br />
= Versions and Availability =<br />
<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained from the [https://cis-hpc.uni-konstanz.de/prod.cis/Justus/chem/gaussview Cluster Information System (CIS)]:<br />
{{#widget:Iframe<br />
|url=https://cis-hpc.uni-konstanz.de/prod.cis/Justus/chem/gaussview<br />
|width=99%<br />
|height=120<br />
|border=0<br />
}}<br />
On the command line of a particular bwHPC cluster a list of all available versions is displayed by command<br />
<pre><br />
$ module avail chem/gaussview<br />
</pre><br />
<br />
== Parallel computing ==<br />
<br />
One can construct/save or start serial as well as parallel jobs with GaussView.<br />
To switch from serial to parallel one has to modify entry ''Shared Processors''<br />
within ''Gaussian Calculation Setup'' window.<br />
<br />
= Usage =<br />
<br />
== Loading the module and starting GaussView (within VNC session) ==<br />
<br />
The best method to use GaussView is to start it within an interactive<br />
remote VNC session or a (Linux-)System with X-forwarding (e.g.: ssh -X 'your-id'@FQDN-of-cluster.de).<br />
<br><br />
Details on how to start an interactive VNC session running on a compute node can be found here: [[TigerVNC]]<br />
<br />
Within the X11 session open a terminal window (e.g. xterm) and execute:<br />
<pre><br />
$ module load chem/gaussview<br />
gaussview &<br />
</pre><br />
<br />
The GaussView module automatically loads the corresponding Gaussian module.<br />
<br />
[[File:Gaussview.jpg]]<br />
<br><br />
<br><br />
See [[#GaussView example session|GaussView example session]] for more infos how to work with the GUI.<br />
<br><br />
<br><br />
If you wish to load a specific version you may do so by specifying the version<br />
explicitly, e.g. specify<br />
<pre><br />
$ module load chem/gaussview/5.0.9<br />
</pre><br />
to load version ''5.0.9'' of GaussView.<br />
<br><br />
<br />
== Starting Gaussian jobs interactively via GaussView ==<br />
<br />
Within the interactive VNC session running on a compute node<br />
you may start Gaussian jobs locally via ''Submit'' or ''Quick Launch''.<br />
These buttons are located within the ''Gaussian Calculation Setup'' window of GaussView.<br />
<br />
If you do so you must make sure that these jobs finish before<br />
the run time of the interactive VNC session ends. Furthermore<br />
your should never occupy more cores (GaussView setting '''Shared Processors''', default 1 core)<br />
than specified when submitting the VNC session (Moab option '''PPN''', default 1 core).<br />
Finally be aware that your job might get killed if it uses more memory<br />
than requested.<br />
<br />
== Submitting Gaussian jobs to the queueing system ==<br />
<br />
First you can construct your Gaussian job interactively with GaussView.<br />
When set up of the job is done one may store the Gaussian command file to disk<br />
and submit that *.com file as described in the documentation of [[Gaussian]].<br />
Please read the Gaussian documentation as well and make sure that you<br />
specify reasonable values for the number of cores, memory usage,<br />
disk usage and job run time when submitting the job.<br />
<br />
<br />
= Examples =<br />
<br />
== GaussView example session ==<br />
<br />
* When 'gaussview' is started, the [[#Loading the module and starting GaussView (within VNC session)|'Builder Fragment']] of the programs main window (gray background) displays the 'Carbon Tetrahedral' by default.<br />
* Click in the middle of the construction window (blue background). The selected fragment should now be displayed there (undo clicks via 'Menu: Edit -> Undo').<br />
* Start a Gaussian calculation 'Menu: Calculate -> Gaussian Calculation Setup'. Keep all defaults and hit 'Button: Quick Launch' at the bottom.<br />
* View the results via 'Menu: Results -> Summary'.<br />
* Create orbital contour plots via 'Menu: Results -> Surface/Contours', 'Button: Cube Actions -> New Cube', 'Button: Surface Actions -> New Surface'. If you have accepted all defaults, you should see the HOMO orbital now.<br />
<br />
= Version-Specific Information =<br />
<br />
For specific information about version ''VERSION'' see the information available via the module system with the command<br />
<pre><br />
$ module help chem/gaussview/VERSION<br />
</pre><br />
Please read the local module help documentation before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussview&diff=8252Gaussview2021-01-26T11:20:41Z<p>C Mosch: /* Description */</p>
C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8248Gaussian2021-01-26T11:11:21Z<p>C Mosch: </p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| Availability<br />
| [[:Category:bwForCluster_JUSTUS_2|bwForCluster JUSTUS 2]]<br />
|-<br />
| License<br />
| Commercial - see [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/man/ Reference Manual]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords]; [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| See [[Gaussview]]<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general-purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDFT;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kinds of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained via command line after login to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and the parallel version is done via the statement<br />
<pre><br />
%NProcShared=N<br />
</pre><br />
in the ''Link 0 commands'' section before the ''route section'' at the beginning of the Gaussian input file. The '''number of cores''' requested from the queueing system (i.e. ''--ntasks-per-node=N'') '''must''' be identical to '''%NProcShared=N''' as specified in the Gaussian input file. The installed Gaussian binaries are shared-memory parallel only, therefore only single-node jobs make sense. Without ''%NProcShared'' Gaussian will use only one core by default.<br />
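For illustration, a minimal shared-memory parallel input file might look as follows (a sketch only: the route line, the resource values and the water geometry are illustrative placeholders, not recommended settings):<br />
<pre><br />
%NProcShared=4<br />
%Mem=4000MB<br />
#P B3LYP/6-31G(d) Opt<br />
<br />
Water geometry optimization (illustrative example)<br />
<br />
0 1<br />
O   0.000   0.000   0.000<br />
H   0.000   0.757   0.586<br />
H   0.000  -0.757  -0.586<br />
<br />
</pre><br />
With this input the queueing system request would have to include 4 cores (''--ntasks-per-node=4'') to match ''%NProcShared=4''.<br />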
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other modules.<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g16.C.01<br />
</pre><br />
to load version ''g16.C.01'' of Gaussian.<br />
<br><br />
<br />
'''In production jobs we strongly recommend always specifying the version when loading a module.'''<br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g16 < $GAUSSIAN_EXA_DIR/test0553-4core-parallel.com<br />
</pre><br />
In most cases running Gaussian requires setting up the command input file and redirecting that input into the command ''g16''.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://gaussian.com/man/ Gaussian manual]. In addition, the program [[Gaussview]] is a very good graphical user interface for constructing molecules and for setting up calculations. These calculation setups can be saved as Gaussian command files and then submitted to the cluster with the help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, scratch files of Gaussian are placed in GAUSS_SCRDIR, as displayed when loading the Gaussian module. In most cases the module load command of Gaussian sets GAUSS_SCRDIR to point to an optimal node-local file system. When running multiple Gaussian jobs together on one node you may want to add one more sub-directory level containing e.g. the job ID and job name for clarity, if the queueing system has not already done so.<br />
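Such a job-specific scratch sub-directory can be created with a few shell commands before starting ''g16'' (a sketch: GAUSS_SCRDIR is assumed to be set by the module, SLURM_JOB_ID and SLURM_JOB_NAME by the scheduler; the fallback values only keep the sketch runnable outside a batch job):<br />

```shell
# Append a job-specific sub-directory to the scratch path set by the module.
# The fallbacks after ':-' are placeholders for running outside a batch job.
: "${GAUSS_SCRDIR:=/tmp/${USER:-gaussian}}"
export GAUSS_SCRDIR="$GAUSS_SCRDIR/${SLURM_JOB_ID:-nojob}_${SLURM_JOB_NAME:-g16}"
mkdir -p "$GAUSS_SCRDIR"
```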
<br />
Predicting how much disk space a specific Gaussian calculation will need can be a difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally, try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
%MaxDisk=50000MB<br />
</pre><br />
to the ''Link 0 commands'' section of the Gaussian input file. But please be aware that (a) [http://gaussian.com/maxdisk/ Gaussian does not necessarily obey the specified value] and (b) you might force Gaussian to select a slower algorithm if you specify an inappropriate value.<br />
<br />
In any case please make sure that you request more node-local disk space from the queueing system than you have specified in the Gaussian input file. For information on how much node-local disk space is available on the cluster and how to request a certain amount of node-local disk space for a calculation from the queueing system, please consult the cluster-specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
<br />
Except for very short interactive test jobs, please never run Gaussian calculations in any globally mounted directory such as your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is as difficult as predicting the disk requirements. The strategies are very similar. So for a large new unknown system, start with smaller test systems and smaller basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the route section of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations exceed the Mem value by at most 2 GB. Therefore it is usually sufficient to request Mem+2GB from the queueing system.<br />
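The rule of thumb can be applied directly in the job script (a sketch; the 2 GB margin is the empirical value stated above, and the resulting value would be passed to the queueing system's memory request option):<br />

```shell
# Rule of thumb from above: request roughly %Mem + 2 GB from the queueing system.
GAUSS_MEM_MB=10000                      # matches %Mem=10000MB in the input
SLURM_MEM_MB=$((GAUSS_MEM_MB + 2048))   # add the ~2 GB safety margin
echo "request --mem=${SLURM_MEM_MB}mb from the queueing system"
```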
<br />
But please carefully monitor the output of Gaussian when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when the specified memory value is too low. So when the output indicates that with more memory the ''integrals could be kept in memory'' (just one example of such a message), the calculation will be faster when you assign more memory.<br />
<br />
In case of shared-memory parallel jobs the number of workers has only a minor influence on the memory consumption (maybe up to 10%). This is because all workers operate on one common data set.<br />
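The Link 0 commands discussed above go at the very top of the input file, before the route section. A minimal complete input might look like this (method and geometry are illustrative - a B3LYP/6-31G(d) single-point calculation on water; note the blank lines separating the sections, including the required final blank line):<br />
<pre><br />
%NProcShared=4<br />
%Mem=10000MB<br />
# B3LYP/6-31G(d)<br />
<br />
water single point (title line)<br />
<br />
0 1<br />
O   0.000   0.000   0.117<br />
H   0.000   0.757  -0.470<br />
H   0.000  -0.757  -0.470<br />
<br />
</pre><br />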
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks, SSDs are far more than 1000 times faster at serving random-I/O requests. Therefore some of the default strategies of Gaussian, e.g. recalculating some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two-centre integrals.<br />
<br />
So if you plan to run many huge calculations that do not fit into RAM, you may want to compare the execution time of a job that re-calculates the intermediate values whenever needed with that of a job that forces these values to be written to and read from the node-local SSDs. Depending on how much time it costs to re-calculate the intermediate values, using the SSDs can be much faster.<br />
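One way to run such a comparison is to prepare two otherwise identical inputs, one recalculating the integrals and one storing them on disk, and time both runs (a sketch; the file names are hypothetical, and the SCF storage keywords should be verified against the [http://gaussian.com/keywords/ keyword documentation] for your Gaussian version):<br />
<pre><br />
$ time g16 < job-direct.com > job-direct.log    # route contains SCF=Direct (recalculate)<br />
$ time g16 < job-conven.com > job-conven.log    # route contains SCF=Conven (store on disk/SSD)<br />
</pre><br />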
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Slurm example (hexanitroethane, C2N6O12) that runs a 4-core parallel single-point energy calculation using the B3LYP method and the 6-31g(df,pd) basis set. To submit the example, perform the following steps:<br />
<pre><br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/bwforcluster-gaussian-example.sbatch ./<br />
$ sbatch bwforcluster-gaussian-example.sbatch<br />
</pre><br />
The last step submits the example job script ''bwforcluster-gaussian-example.sbatch'' to the queueing system. Once started, all temporary files are kept below the directory [[Hardware_and_Architecture_(bwForCluster_JUSTUS_2)#Storage_Architecture|'''$SCRATCH''']], which is only visible on the compute node where the job is running. When the option '''--gres=scratch:nnn''' has been specified while submitting the job script, '''$SCRATCH''' points to the node-local SSDs; otherwise it points to a RAM disk. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the queueing system example script ''bwforcluster-gaussian-example.sbatch''.<br />
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts, we provide a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ module load chem/gaussian<br />
$ cp -v $GAUSSIAN_EXA_DIR/test0553-4core-parallel.com ./<br />
$ gauss_sub test0553-4core-parallel.com<br />
</pre><br />
<br />
== Caveat for Windows users ==<br />
<br />
If you have transferred a Gaussian input file from a Windows computer<br />
to Unix, make sure to convert the Windows line breaks (<CR>+<LF>)<br />
to Unix line breaks (only <LF>). Otherwise Gaussian will produce strange error messages.<br />
The typical Unix command for this direction is 'dos2unix' ('unix2dos' converts back). Example:<br />
<br />
<pre><br />
$ dos2unix test0553-4core-parallel.com<br />
</pre><br />
<br />
= Version-specific information =<br />
<br />
For information specific to version ''VERSION'', consult the module system with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8247Gaussian2021-01-26T11:10:56Z<p>C Mosch: </p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| Availability<br />
| [[:Category:bwForCluster_JUSTUS_2|bwForCluster JUSTUS 2]]<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/man/ Reference Manual]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| See [[Gaussview]]<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDF;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kind of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained via command line after login to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and parallel version is done via statement<br />
<pre><br />
%NProcShare=N<br />
</pre><br />
in section ''Link 0 commands'' before the ''route section'' at the beginning of the Gaussian input file. The '''number of cores''' requested from the queueing system (i.e. ''--ntasks-per-node=N'') '''must''' be identical to '''%NProcShared=N''' as specified in the Gaussian input file. The installed Gaussian binaries are shared-memory parallel only. Therefore only single node jobs do make sense. Without ''NProcShare'' Gaussian will use only one core by default.<br />
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g16.C.01<br />
</pre><br />
to load version ''g16.C.01'' of Gaussian.<br />
<br><br />
<br />
'''In production jobs we strongly recommend to always specify<br />
the version when loading a module.'''<br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g16 < $GAUSSIAN_EXA_DIR/test0553-4core-parallel.com<br />
</pre><br />
In most cases running Gaussian requires setting up the command input file and redirecting that input into command g16.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://gaussian.com/man/ Gaussian manual]. In addition the program [[Gaussview]] is a very good graphical user interface for constructing molecules and for setting up calculations. Finally these calculation setups can be saved as Gaussian command files and thereafter can be submitted to the cluster with help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, scratch files of Gaussian are placed in GAUSS_SCRDIR as displayed when loading the Gaussian module. In most cases the module load command of Gaussian should set the GAUSS_SCRDIR pointing to an optimal node-local file system. When running multiple Gaussian jobs together on one node a user may want to add one more sub-directory level containing e.g. job id and job name for clarity - if not done so already by the queueing system.<br />
<br />
Predicting how much disk space a specific Gaussian calculation can be a difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
%MaxDisk=50000MB<br />
</pre><br />
to the route section of the Gaussian input file. But please be aware that (a) [http://gaussian.com/maxdisk/ Gaussian does not necessarily obey the specified value]] and (b) you might force Gaussian to select a slower algorithm when specifying an inappropriate value.<br />
<br />
In any case please make sure that you request more node-local disk space from the queueing system then you have specified in the Gausian input file. For information on how much node-local disk space is available at the cluster and how to request a certain amount of node-local disk space for a calculation from the queueing system, please consult the cluster specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
<br />
Except for very short interactive test jobs please never run Gaussian calculations in any globally mounted directory like your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is as difficult as predicting the disk requirements. The strategies are very similar. So for a large new unknown system, start with smaller test systems and smaller basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the route section of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations that exceed the Mem value by at most by 2GB. Therefore it is usually sufficient to request Mem+2GB from the queueing system.<br />
<br />
But please carefully monitor the output of Gaussian when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when specifying too low memory values. So when the output is indicating that with more memory the ''integrals could be kept in memory'' (just an example for one of the messages), the calculation will be faster when assigning more memory.<br />
<br />
In case of shared-memory parallel jobs the number of workers has only minor influence on the memory consumption (maybe up to 10%). This is since all workers work together on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks SSD's are far more than 1000 times faster when serving random-IO requests. Therefore some of the default strategies of Gaussian, e.g. recalculate some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two centre integrals, etc.<br />
<br />
So if you plan to do many huge calculations that do not fit into the RAM, you may want to compare the execution time of a job that is re-calculating the intermediate values whenever needed and a job that forces these values to be written to and read from the node-local SSD's. Depending on how much time it costs to re-calculate the intermediate values, using the SSD's can be much faster.<br />
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Slurm example (Hexanitroethan C2N6O12) that runs a 4 core parallel single energy point calculation using method B3LYP and basis set 6-31g(df,pd). To submit the example do the following steps:<br />
<pre><br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/bwforcluster-gaussian-example.sbatch ./<br />
$ sbatch bwforcluster-gaussian-example.sbatch<br />
</pre><br />
The last step submits the job example script ''bwforcluster-gaussian-example.sbatch'' to the queueing system. Once started all temporary files are kept below directory [[Hardware_and_Architecture_(bwForCluster_JUSTUS_2)#Storage_Architecture|'''$SCRATCH''']] only visible on the compute node where the job is running. When option '''--gres=scratch:nnn''' has been specified while submitting the job script, then '''$SCRATCH''' points to the node-local SSDs. Otherwise (option '''--gres=scratch:nnn''' has not been specified) '''$SCRATCH''' points to a RAM disk. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the queueing system example script ''bwforcluster-gaussian-example.sbatch''.<br />
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts we have created a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ module load chem/gaussian<br />
$ cp -v $GAUSSIAN_EXA_DIR/test0553-4core-parallel.com ./<br />
$ gauss_sub test0553-4core-parallel.com<br />
</pre><br />
<br />
== Caveat for windows users ==<br />
<br />
If you have transferred the Gaussian input file from a Windows computer<br />
to Unix then make sure to convert the line breaks of Windows (<CR>+<LF>) <br />
to Unix (only <LF>). Otherwise Gaussian will write strange error messages.<br />
Typical Unix commands for that are: 'dos2unix' and 'unix2dos'. Example:<br />
<br />
<pre><br />
$ dos2unix test0553-4core-parallel.com<br />
</pre><br />
<br />
= Version-specific information =<br />
<br />
For specific information about version ''VERSION'' see the information available via the module system with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8246Gaussian2021-01-26T11:10:09Z<p>C Mosch: </p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| Availability<br />
| [[:Category:bwForCluster_JUSTUS_2|bwForCluster JUSTUS 2]]<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/man/ Reference Manual]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDF;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kind of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained via command line after login to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and parallel version is done via statement<br />
<pre><br />
%NProcShare=N<br />
</pre><br />
in section ''Link 0 commands'' before the ''route section'' at the beginning of the Gaussian input file. The '''number of cores''' requested from the queueing system (i.e. ''--ntasks-per-node=N'') '''must''' be identical to '''%NProcShared=N''' as specified in the Gaussian input file. The installed Gaussian binaries are shared-memory parallel only. Therefore only single node jobs do make sense. Without ''NProcShare'' Gaussian will use only one core by default.<br />
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g16.C.01<br />
</pre><br />
to load version ''g16.C.01'' of Gaussian.<br />
<br><br />
<br />
'''In production jobs we strongly recommend to always specify<br />
the version when loading a module.'''<br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g16 < $GAUSSIAN_EXA_DIR/test0553-4core-parallel.com<br />
</pre><br />
In most cases running Gaussian requires setting up the command input file and redirecting that input into command g16.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://gaussian.com/man/ Gaussian manual]. In addition the program [[Gaussview]] is a very good graphical user interface for constructing molecules and for setting up calculations. Finally these calculation setups can be saved as Gaussian command files and thereafter can be submitted to the cluster with help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, scratch files of Gaussian are placed in GAUSS_SCRDIR as displayed when loading the Gaussian module. In most cases the module load command of Gaussian should set the GAUSS_SCRDIR pointing to an optimal node-local file system. When running multiple Gaussian jobs together on one node a user may want to add one more sub-directory level containing e.g. job id and job name for clarity - if not done so already by the queueing system.<br />
<br />
Predicting how much disk space a specific Gaussian calculation can be a difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
%MaxDisk=50000MB<br />
</pre><br />
to the route section of the Gaussian input file. But please be aware that (a) [http://gaussian.com/maxdisk/ Gaussian does not necessarily obey the specified value]] and (b) you might force Gaussian to select a slower algorithm when specifying an inappropriate value.<br />
<br />
In any case please make sure that you request more node-local disk space from the queueing system then you have specified in the Gausian input file. For information on how much node-local disk space is available at the cluster and how to request a certain amount of node-local disk space for a calculation from the queueing system, please consult the cluster specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
<br />
Except for very short interactive test jobs please never run Gaussian calculations in any globally mounted directory like your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is as difficult as predicting the disk requirements. The strategies are very similar. So for a large new unknown system, start with smaller test systems and smaller basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the route section of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations that exceed the Mem value by at most by 2GB. Therefore it is usually sufficient to request Mem+2GB from the queueing system.<br />
<br />
But please carefully monitor the output of Gaussian when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when specifying too low memory values. So when the output is indicating that with more memory the ''integrals could be kept in memory'' (just an example for one of the messages), the calculation will be faster when assigning more memory.<br />
<br />
In case of shared-memory parallel jobs the number of workers has only minor influence on the memory consumption (maybe up to 10%). This is since all workers work together on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks SSD's are far more than 1000 times faster when serving random-IO requests. Therefore some of the default strategies of Gaussian, e.g. recalculate some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two centre integrals, etc.<br />
<br />
So if you plan to do many huge calculations that do not fit into the RAM, you may want to compare the execution time of a job that is re-calculating the intermediate values whenever needed and a job that forces these values to be written to and read from the node-local SSD's. Depending on how much time it costs to re-calculate the intermediate values, using the SSD's can be much faster.<br />
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Slurm example (Hexanitroethan C2N6O12) that runs a 4 core parallel single energy point calculation using method B3LYP and basis set 6-31g(df,pd). To submit the example do the following steps:<br />
<pre><br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/bwforcluster-gaussian-example.sbatch ./<br />
$ sbatch bwforcluster-gaussian-example.sbatch<br />
</pre><br />
The last step submits the job example script ''bwforcluster-gaussian-example.sbatch'' to the queueing system. Once started all temporary files are kept below directory [[Hardware_and_Architecture_(bwForCluster_JUSTUS_2)#Storage_Architecture|'''$SCRATCH''']] only visible on the compute node where the job is running. When option '''--gres=scratch:nnn''' has been specified while submitting the job script, then '''$SCRATCH''' points to the node-local SSDs. Otherwise (option '''--gres=scratch:nnn''' has not been specified) '''$SCRATCH''' points to a RAM disk. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the queueing system example script ''bwforcluster-gaussian-example.sbatch''.<br />
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts we have created a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ module load chem/gaussian<br />
$ cp -v $GAUSSIAN_EXA_DIR/test0553-4core-parallel.com ./<br />
$ gauss_sub test0553-4core-parallel.com<br />
</pre><br />
<br />
== Caveat for windows users ==<br />
<br />
If you have transferred the Gaussian input file from a Windows computer<br />
to Unix then make sure to convert the line breaks of Windows (<CR>+<LF>) <br />
to Unix (only <LF>). Otherwise Gaussian will write strange error messages.<br />
Typical Unix commands for that are: 'dos2unix' and 'unix2dos'. Example:<br />
<br />
<pre><br />
$ dos2unix test0553-4core-parallel.com<br />
</pre><br />
<br />
= Version-specific information =<br />
<br />
For specific information about version ''VERSION'' see the information available via the module system with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8245Gaussian2021-01-26T11:09:16Z<p>C Mosch: </p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| Availability<br />
| [[Category:BwForCluster_JUSTUS_2]]<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/man/ Reference Manual]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDF;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kind of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained via command line after login to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and parallel version is done via statement<br />
<pre><br />
%NProcShare=N<br />
</pre><br />
in section ''Link 0 commands'' before the ''route section'' at the beginning of the Gaussian input file. The '''number of cores''' requested from the queueing system (i.e. ''--ntasks-per-node=N'') '''must''' be identical to '''%NProcShared=N''' as specified in the Gaussian input file. The installed Gaussian binaries are shared-memory parallel only. Therefore only single node jobs do make sense. Without ''NProcShare'' Gaussian will use only one core by default.<br />
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g16.C.01<br />
</pre><br />
to load version ''g16.C.01'' of Gaussian.<br />
<br><br />
<br />
'''In production jobs we strongly recommend to always specify<br />
the version when loading a module.'''<br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g16 < $GAUSSIAN_EXA_DIR/test0553-4core-parallel.com<br />
</pre><br />
In most cases running Gaussian requires setting up the command input file and redirecting that input into command g16.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://gaussian.com/man/ Gaussian manual]. In addition the program [[Gaussview]] is a very good graphical user interface for constructing molecules and for setting up calculations. Finally these calculation setups can be saved as Gaussian command files and thereafter can be submitted to the cluster with help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, scratch files of Gaussian are placed in GAUSS_SCRDIR, as displayed when loading the Gaussian module. In most cases the module load command of Gaussian should set GAUSS_SCRDIR to point to an optimal node-local file system. When running multiple Gaussian jobs together on one node, a user may want to add one more sub-directory level containing e.g. the job id and job name for clarity - if this is not already done by the queueing system.<br />
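Such an extra sub-directory level can be created in the job script, for example (a sketch only; ''$SLURM_JOB_ID'' and ''$SLURM_JOB_NAME'' are set by Slurm inside a running job):<br />
<pre><br />
$ export GAUSS_SCRDIR="$GAUSS_SCRDIR/$SLURM_JOB_ID-$SLURM_JOB_NAME"<br />
$ mkdir -p "$GAUSS_SCRDIR"<br />
</pre><br />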
<br />
Predicting how much disk space a specific Gaussian calculation will need can be a difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally, try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
%MaxDisk=50000MB<br />
</pre><br />
to the route section of the Gaussian input file. But please be aware that (a) [http://gaussian.com/maxdisk/ Gaussian does not necessarily obey the specified value] and (b) you might force Gaussian to select a slower algorithm when specifying an inappropriate value.<br />
<br />
In any case please make sure that you request more node-local disk space from the queueing system than you have specified in the Gaussian input file. For information on how much node-local disk space is available at the cluster and how to request a certain amount of node-local disk space for a calculation from the queueing system, please consult the cluster-specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
<br />
Except for very short interactive test jobs please never run Gaussian calculations in any globally mounted directory like your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is as difficult as predicting the disk requirements. The strategies are very similar. So for a large new unknown system, start with smaller test systems and smaller basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the route section of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations that exceed the Mem value by at most 2GB. Therefore it is usually sufficient to request Mem+2GB from the queueing system.<br />
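The rule of thumb above translates into matching settings in the input file and the job script, for example (values are illustrative; ''--mem'' is the standard Slurm option for requesting memory per node):<br />
<pre><br />
%Mem=10000MB            (Gaussian input file)<br />
#SBATCH --mem=12000mb   (job script: Mem + 2GB safety margin)<br />
</pre><br />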
<br />
But please carefully monitor the output of Gaussian when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when the specified memory value is too low. So when the output indicates that with more memory the ''integrals could be kept in memory'' (just an example for one of the messages), the calculation will be faster when assigning more memory.<br />
<br />
In case of shared-memory parallel jobs the number of workers has only a minor influence on the memory consumption (maybe up to 10%). This is because all workers work together on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks, SSDs are far more than 1000 times faster when serving random-IO requests. Therefore some of the default strategies of Gaussian, e.g. recalculating some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two-centre integrals, etc.<br />
<br />
So if you plan to do many huge calculations that do not fit into the RAM, you may want to compare the execution time of a job that re-calculates the intermediate values whenever needed with that of a job that forces these values to be written to and read from the node-local SSDs. Depending on how much time it costs to re-calculate the intermediate values, using the SSDs can be much faster.<br />
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Slurm example (Hexanitroethane C2N6O12) that runs a 4-core parallel single-point energy calculation using the method B3LYP and the basis set 6-31g(df,pd). To submit the example perform the following steps:<br />
<pre><br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/bwforcluster-gaussian-example.sbatch ./<br />
$ sbatch bwforcluster-gaussian-example.sbatch<br />
</pre><br />
The last step submits the example job script ''bwforcluster-gaussian-example.sbatch'' to the queueing system. Once started, all temporary files are kept below the directory [[Hardware_and_Architecture_(bwForCluster_JUSTUS_2)#Storage_Architecture|'''$SCRATCH''']], which is only visible on the compute node where the job is running. When the option '''--gres=scratch:nnn''' has been specified while submitting the job script, '''$SCRATCH''' points to the node-local SSDs. Otherwise (option '''--gres=scratch:nnn''' has not been specified) '''$SCRATCH''' points to a RAM disk. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the queueing system example script ''bwforcluster-gaussian-example.sbatch''.<br />
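A job script header for this kind of example could look roughly as follows (a sketch only; the template shipped with the module is authoritative, and all resource values are illustrative):<br />
<pre><br />
#!/bin/bash<br />
#SBATCH --nodes=1<br />
#SBATCH --ntasks-per-node=4<br />
#SBATCH --time=01:00:00<br />
#SBATCH --gres=scratch:10<br />
module load chem/gaussian<br />
g16 < test0553-4core-parallel.com > test0553-4core-parallel.log<br />
</pre><br />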
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts, we have created a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ module load chem/gaussian<br />
$ cp -v $GAUSSIAN_EXA_DIR/test0553-4core-parallel.com ./<br />
$ gauss_sub test0553-4core-parallel.com<br />
</pre><br />
<br />
== Caveat for Windows users ==<br />
<br />
If you have transferred a Gaussian input file from a Windows computer<br />
to Unix, make sure to convert the Windows line breaks (<CR>+<LF>)<br />
to Unix ones (only <LF>). Otherwise Gaussian will print strange error messages.<br />
Typical Unix commands for this are 'dos2unix' and 'unix2dos'. Example:<br />
<br />
<pre><br />
$ dos2unix test0553-4core-parallel.com<br />
</pre><br />
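If ''dos2unix'' is not available, the standard ''tr'' command removes the carriage returns just as well (a sketch; the file names are placeholders):<br />

```shell
# Simulate a file transferred from Windows (CRLF line endings); file names are illustrative
printf 'line1\r\nline2\r\n' > input-windows.com
# Strip the carriage returns, writing a Unix (LF-only) copy of the input file
tr -d '\r' < input-windows.com > input-unix.com
```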
<br />
= Version-specific information =<br />
<br />
For specific information about a version ''VERSION'', query the module system with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as support contact information.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8244Gaussian2021-01-26T11:08:35Z<p>C Mosch: </p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| Availability<br />
| [[:Category:bwForCluster_JUSTUS_2|bwForCluster JUSTUS 2]]<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/man/ Reference Manual]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDFT;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kinds of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web pages.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of the versions currently available on the bwForCluster Chemistry can be obtained via the command line after logging in to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8242Gaussian2021-01-26T10:58:15Z<p>C Mosch: /* Parallel computing */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/man/ Reference Manual]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDF;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kind of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained via command line after login to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and the parallel version is done via the statement<br />
<pre><br />
%NProcShared=N<br />
</pre><br />
in the ''Link 0 commands'' section before the ''route section'' at the beginning of the Gaussian input file. The '''number of cores''' requested from the queueing system (i.e. ''--ntasks-per-node=N'') '''must''' be identical to '''%NProcShared=N''' as specified in the Gaussian input file. The installed Gaussian binaries are shared-memory parallel only, so only single-node jobs make sense. Without ''%NProcShared'' Gaussian will use only one core by default.<br />
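As a sketch, a complete input file with the Link 0 commands at the top could look like this (hypothetical water example; method, basis set and resource values are purely illustrative):

```
%NProcShared=8
%Mem=8000MB
%Chk=water.chk
# B3LYP/6-31G(d) Opt

Water geometry optimization (illustrative example)

0 1
O   0.000000   0.000000   0.117790
H   0.000000   0.755453  -0.471161
H   0.000000  -0.755453  -0.471161

```

A job using this file would then be submitted with ''--ntasks-per-node=8'' so that the core request matches %NProcShared.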
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with the command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g16.C.01<br />
</pre><br />
to load version ''g16.C.01'' of Gaussian.<br />
<br><br />
<br />
'''In production jobs we strongly recommend always specifying<br />
the version when loading a module.'''<br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g16 < $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com<br />
</pre><br />
In most cases running Gaussian means setting up the input file and redirecting it into the g16 command.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://gaussian.com/man/ Gaussian manual]. In addition, [[Gaussview]] provides a very good graphical user interface for building molecules and setting up calculations. These setups can be saved as Gaussian command files and then submitted to the cluster with the help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, scratch files of Gaussian are placed in GAUSS_SCRDIR, which is displayed when loading the Gaussian module. In most cases the module load command sets GAUSS_SCRDIR to an optimal node-local file system. When running multiple Gaussian jobs together on one node you may want to add one more sub-directory level containing e.g. the job ID and the job name for clarity, if the queueing system does not already do so.<br />
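A minimal sketch of such an extra sub-directory level (assuming GAUSS_SCRDIR is set by the module and the SLURM_* variables by the queueing system; the fallback values are only there so the snippet also works interactively):

```shell
# Append a per-job sub-directory to the scratch path and create it.
# GAUSS_SCRDIR normally comes from 'module load chem/gaussian';
# SLURM_JOB_ID and SLURM_JOB_NAME are set by Slurm inside a job.
export GAUSS_SCRDIR="${GAUSS_SCRDIR:-/tmp}/${SLURM_JOB_ID:-manual}.${SLURM_JOB_NAME:-gaussian}"
mkdir -p "$GAUSS_SCRDIR"
```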
<br />
Predicting how much disk space a specific Gaussian calculation needs can be a difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally, try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation by adding a statement like<br />
<pre><br />
%MaxDisk=50000MB<br />
</pre><br />
to the Gaussian input file. But please be aware that (a) [http://gaussian.com/maxdisk/ Gaussian does not necessarily obey the specified value] and (b) you might force Gaussian to select a slower algorithm by specifying an inappropriate value.<br />
<br />
In any case please make sure that you request more node-local disk space from the queueing system than you have specified in the Gaussian input file. For information on how much node-local disk space is available on the cluster and how to request a certain amount for a calculation, please consult the cluster-specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
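For example, pairing a 50 GB %MaxDisk with a clearly larger scratch request might look like the sketch below (the exact --gres=scratch syntax and its unit are cluster-specific assumptions; please verify them against your cluster's documentation):

```
#SBATCH --nodes=1
#SBATCH --gres=scratch:100   # request ~100 GB node-local scratch (unit is cluster-specific)
```

combined with %MaxDisk=50000MB in the Gaussian input file, so the queueing system grants noticeably more space than Gaussian may use.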
<br />
Except for very short interactive test jobs please never run Gaussian calculations in any globally mounted directory like your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is as difficult as predicting its disk requirements, and the strategies are very similar: for a large, unknown system, start with smaller test systems and smaller basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the ''Link 0'' section at the beginning of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations exceed the Mem value by at most 2 GB. Therefore it is usually sufficient to request Mem+2GB from the queueing system.<br />
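Applied to the example above, the pairing would be (a sketch; the exact --mem syntax may differ on your cluster):

```
# Gaussian input, Link 0 section:
%Mem=10000MB

# Matching Slurm request, Mem + 2 GB headroom:
#SBATCH --mem=12000mb
```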
<br />
But please carefully monitor the output of Gaussian when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when the specified memory is too low. So when the output indicates that with more memory the ''integrals could be kept in memory'' (just one example of such a message), the calculation will be faster when assigning more memory.<br />
<br />
In case of shared-memory parallel jobs the number of workers has only minor influence on the memory consumption (maybe up to 10%). This is because all workers work together on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks, SSDs are far more than 1000 times faster when serving random-IO requests. Therefore some of the default strategies of Gaussian, e.g. recalculating some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two-centre integrals, etc.<br />
<br />
So if you plan to do many huge calculations that do not fit into the RAM, you may want to compare the execution time of a job that re-calculates the intermediate values whenever needed with that of a job that forces these values to be written to and read from the node-local SSDs. Depending on how much time it costs to re-calculate the intermediate values, using the SSDs can be much faster.<br />
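One Gaussian option to experiment with here is ''SCF=Conventional'', which stores the two-electron integrals on disk instead of recomputing them in each SCF iteration. The keyword itself is standard Gaussian; whether it actually beats the default direct SCF on your SSDs is an assumption to be benchmarked, not a given. A route line using it might look like:

```
# B3LYP/6-31G(d) SCF=Conventional Opt
```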
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Slurm example (hexanitroethane, C2N6O12) that runs a 4-core parallel single-point energy calculation using the B3LYP method and the 6-31g(df,pd) basis set. To submit the example do the following steps:<br />
<pre><br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/bwforcluster-gaussian-example.sbatch ./<br />
$ sbatch bwforcluster-gaussian-example.sbatch<br />
</pre><br />
The last step submits the example job script ''bwforcluster-gaussian-example.sbatch'' to the queueing system. Once started, all temporary files are kept below the directory [[Hardware_and_Architecture_(bwForCluster_JUSTUS_2)#Storage_Architecture|'''$SCRATCH''']], which is only visible on the compute node where the job is running. If the option '''--gres=scratch:nnn''' has been specified when submitting the job script, '''$SCRATCH''' points to the node-local SSDs; otherwise '''$SCRATCH''' points to a RAM disk. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the example script ''bwforcluster-gaussian-example.sbatch''.<br />
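For orientation, a stripped-down job script could look like the sketch below. The module's own ''bwforcluster-gaussian-example.sbatch'' remains the authoritative template; file names and resource values here are purely illustrative:

```
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4      # must match %NProcShared=4 in myjob.com
#SBATCH --time=01:00:00
#SBATCH --gres=scratch:50        # node-local SSD scratch; without it $SCRATCH is a RAM disk

module load chem/gaussian/g16.C.01   # pin the version in production jobs

g16 < myjob.com > myjob.log          # run Gaussian with redirected input/output
```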
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts we have created a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ module load chem/gaussian<br />
$ cp -v $GAUSSIAN_EXA_DIR/test0553-4core-parallel.com ./<br />
$ gauss_sub test0553-4core-parallel.com<br />
</pre><br />
<br />
== Caveat for Windows users ==<br />
<br />
If you have transferred a Gaussian input file from a Windows computer<br />
to Unix, make sure to convert the Windows line breaks (<CR>+<LF>)<br />
to Unix line breaks (<LF> only). Otherwise Gaussian will emit strange error messages.<br />
Typical Unix commands for this are 'dos2unix' and 'unix2dos'. Example:<br />
<br />
<pre><br />
$ dos2unix test0553-4core-parallel.com<br />
</pre><br />
<br />
= Version-specific information =<br />
<br />
For specific information about a version ''VERSION'', consult the module system with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as information about the support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8241Gaussian2021-01-26T10:30:45Z<p>C Mosch: /* Caveat for windows users */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/man/ Reference Manual]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDF;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kind of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained via command line after login to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and parallel version is done via statement<br />
<pre><br />
%NProcShare=PPN<br />
</pre><br />
in section ''Link 0 commands'' before the ''route section'' at the beginning of the Gaussian input file. The '''number of cores''' requested from the queueing system '''must''' be identical to '''NProcShared''' as specified in the Gaussian input file. The installed Gaussian binaries are shared-memory parallel only. Therefore only single node jobs do make sense. Without ''NProcShare'' Gaussian will use only one core by default.<br />
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g16.C.01<br />
</pre><br />
to load version ''g16.C.01'' of Gaussian.<br />
<br><br />
<br />
'''In production jobs we strongly recommend to always specify<br />
the version when loading a module.'''<br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g16 < $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com<br />
</pre><br />
In most cases running Gaussian requires setting up the command input file and redirecting that input into command g16.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://gaussian.com/man/ Gaussian manual]. In addition the program [[Gaussview]] is a very good graphical user interface for constructing molecules and for setting up calculations. Finally these calculation setups can be saved as Gaussian command files and thereafter can be submitted to the cluster with help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, scratch files of Gaussian are placed in GAUSS_SCRDIR as displayed when loading the Gaussian module. In most cases the module load command of Gaussian should set the GAUSS_SCRDIR pointing to an optimal node-local file system. When running multiple Gaussian jobs together on one node a user may want to add one more sub-directory level containing e.g. job id and job name for clarity - if not done so already by the queueing system.<br />
<br />
Predicting how much disk space a specific Gaussian calculation can be a difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
%MaxDisk=50000MB<br />
</pre><br />
to the route section of the Gaussian input file. But please be aware that (a) [http://gaussian.com/maxdisk/ Gaussian does not necessarily obey the specified value]] and (b) you might force Gaussian to select a slower algorithm when specifying an inappropriate value.<br />
<br />
In any case please make sure that you request more node-local disk space from the queueing system then you have specified in the Gausian input file. For information on how much node-local disk space is available at the cluster and how to request a certain amount of node-local disk space for a calculation from the queueing system, please consult the cluster specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
<br />
Except for very short interactive test jobs please never run Gaussian calculations in any globally mounted directory like your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is as difficult as predicting the disk requirements. The strategies are very similar. So for a large new unknown system, start with smaller test systems and smaller basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the route section of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations that exceed the Mem value by at most by 2GB. Therefore it is usually sufficient to request Mem+2GB from the queueing system.<br />
<br />
But please carefully monitor the output of Gaussian when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when specifying too low memory values. So when the output is indicating that with more memory the ''integrals could be kept in memory'' (just an example for one of the messages), the calculation will be faster when assigning more memory.<br />
<br />
In case of shared-memory parallel jobs the number of workers has only minor influence on the memory consumption (maybe up to 10%). This is since all workers work together on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks SSD's are far more than 1000 times faster when serving random-IO requests. Therefore some of the default strategies of Gaussian, e.g. recalculate some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two centre integrals, etc.<br />
<br />
So if you plan to do many huge calculations that do not fit into the RAM, you may want to compare the execution time of a job that is re-calculating the intermediate values whenever needed and a job that forces these values to be written to and read from the node-local SSD's. Depending on how much time it costs to re-calculate the intermediate values, using the SSD's can be much faster.<br />
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Slurm example (Hexanitroethan C2N6O12) that runs a 4 core parallel single energy point calculation using method B3LYP and basis set 6-31g(df,pd). To submit the example do the following steps:<br />
<pre><br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/bwforcluster-gaussian-example.sbatch ./<br />
$ sbatch bwforcluster-gaussian-example.sbatch<br />
</pre><br />
The last step submits the job example script ''bwforcluster-gaussian-example.sbatch'' to the queueing system. Once started all temporary files are kept below directory [[Hardware_and_Architecture_(bwForCluster_JUSTUS_2)#Storage_Architecture|'''$SCRATCH''']] only visible on the compute node where the job is running. When option '''--gres=scratch:nnn''' has been specified while submitting the job script, then '''$SCRATCH''' points to the node-local SSDs. Otherwise (option '''--gres=scratch:nnn''' has not been specified) '''$SCRATCH''' points to a RAM disk. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the queueing system example script ''bwforcluster-gaussian-example.sbatch''.<br />
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts we have created a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ module load chem/gaussian<br />
$ cp -v $GAUSSIAN_EXA_DIR/test0553-4core-parallel.com ./<br />
$ gauss_sub test0553-4core-parallel.com<br />
</pre><br />
<br />
== Caveat for windows users ==<br />
<br />
If you have transferred the Gaussian input file from a Windows computer<br />
to Unix then make sure to convert the line breaks of Windows (<CR>+<LF>) <br />
to Unix (only <LF>). Otherwise Gaussian will write strange error messages.<br />
Typical Unix commands for that are: 'dos2unix' and 'unix2dos'. Example:<br />
<br />
<pre><br />
$ dos2unix test0553-4core-parallel.com<br />
</pre><br />
<br />
= Version-specific information =<br />
<br />
For specific information about version ''VERSION'' see the information available via the module system with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8240Gaussian2021-01-26T10:30:17Z<p>C Mosch: /* Direct submission of Gaussian command files */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/man/ Reference Manual]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDF;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kind of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained via command line after login to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and parallel version is done via statement<br />
<pre><br />
%NProcShare=PPN<br />
</pre><br />
in section ''Link 0 commands'' before the ''route section'' at the beginning of the Gaussian input file. The '''number of cores''' requested from the queueing system '''must''' be identical to '''NProcShared''' as specified in the Gaussian input file. The installed Gaussian binaries are shared-memory parallel only. Therefore only single node jobs do make sense. Without ''NProcShare'' Gaussian will use only one core by default.<br />
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g16.C.01<br />
</pre><br />
to load version ''g16.C.01'' of Gaussian.<br />
<br><br />
<br />
'''In production jobs we strongly recommend to always specify<br />
the version when loading a module.'''<br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g16 < $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com<br />
</pre><br />
In most cases running Gaussian requires setting up the command input file and redirecting that input into command g16.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://gaussian.com/man/ Gaussian manual]. In addition the program [[Gaussview]] is a very good graphical user interface for constructing molecules and for setting up calculations. Finally these calculation setups can be saved as Gaussian command files and thereafter can be submitted to the cluster with help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, scratch files of Gaussian are placed in GAUSS_SCRDIR as displayed when loading the Gaussian module. In most cases the module load command of Gaussian should set the GAUSS_SCRDIR pointing to an optimal node-local file system. When running multiple Gaussian jobs together on one node a user may want to add one more sub-directory level containing e.g. job id and job name for clarity - if not done so already by the queueing system.<br />
<br />
Predicting how much disk space a specific Gaussian calculation can be a difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
%MaxDisk=50000MB<br />
</pre><br />
to the route section of the Gaussian input file. But please be aware that (a) [http://gaussian.com/maxdisk/ Gaussian does not necessarily obey the specified value]] and (b) you might force Gaussian to select a slower algorithm when specifying an inappropriate value.<br />
<br />
In any case please make sure that you request more node-local disk space from the queueing system then you have specified in the Gausian input file. For information on how much node-local disk space is available at the cluster and how to request a certain amount of node-local disk space for a calculation from the queueing system, please consult the cluster specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
<br />
Except for very short interactive test jobs please never run Gaussian calculations in any globally mounted directory like your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is as difficult as predicting the disk requirements. The strategies are very similar. So for a large new unknown system, start with smaller test systems and smaller basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the route section of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations that exceed the Mem value by at most by 2GB. Therefore it is usually sufficient to request Mem+2GB from the queueing system.<br />
<br />
But please carefully monitor the output of Gaussian when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when specifying too low memory values. So when the output is indicating that with more memory the ''integrals could be kept in memory'' (just an example for one of the messages), the calculation will be faster when assigning more memory.<br />
<br />
In case of shared-memory parallel jobs the number of workers has only minor influence on the memory consumption (maybe up to 10%). This is since all workers work together on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks SSD's are far more than 1000 times faster when serving random-IO requests. Therefore some of the default strategies of Gaussian, e.g. recalculate some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two centre integrals, etc.<br />
<br />
So if you plan to do many huge calculations that do not fit into the RAM, you may want to compare the execution time of a job that is re-calculating the intermediate values whenever needed and a job that forces these values to be written to and read from the node-local SSD's. Depending on how much time it costs to re-calculate the intermediate values, using the SSD's can be much faster.<br />
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Slurm example (Hexanitroethan C2N6O12) that runs a 4 core parallel single energy point calculation using method B3LYP and basis set 6-31g(df,pd). To submit the example do the following steps:<br />
<pre><br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/bwforcluster-gaussian-example.sbatch ./<br />
$ sbatch bwforcluster-gaussian-example.sbatch<br />
</pre><br />
The last step submits the job example script ''bwforcluster-gaussian-example.sbatch'' to the queueing system. Once started all temporary files are kept below directory [[Hardware_and_Architecture_(bwForCluster_JUSTUS_2)#Storage_Architecture|'''$SCRATCH''']] only visible on the compute node where the job is running. When option '''--gres=scratch:nnn''' has been specified while submitting the job script, then '''$SCRATCH''' points to the node-local SSDs. Otherwise (option '''--gres=scratch:nnn''' has not been specified) '''$SCRATCH''' points to a RAM disk. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the queueing system example script ''bwforcluster-gaussian-example.sbatch''.<br />
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts we have created a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ module load chem/gaussian<br />
$ cp -v $GAUSSIAN_EXA_DIR/test0553-4core-parallel.com ./<br />
$ gauss_sub test0553-4core-parallel.com<br />
</pre><br />
<br />
== Caveat for windows users ==<br />
<br />
If you have transferred the Gaussian input file from a Windows computer<br />
to Unix then make sure to convert the line breaks of Windows (<CR>+<LF>) <br />
to Unix (only <LF>). Otherwise Gaussian will write strange error messages.<br />
Typical Unix commands for that are: 'dos2unix' and 'unix2dos'. Example:<br />
<br />
<pre><br />
$ dos2unix test0553-8core-parallel.com<br />
</pre><br />
<br />
= Version-specific information =<br />
<br />
For specific information about version ''VERSION'' see the information available via the module system with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8239Gaussian2021-01-26T10:28:18Z<p>C Mosch: /* Queueing system template provided by Gaussian module */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/man/ Reference Manual]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDFT;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kinds of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained via command line after login to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and parallel version is done via the statement<br />
<pre><br />
%NProcShared=PPN<br />
</pre><br />
in the ''Link 0 commands'' section before the ''route section'' at the beginning of the Gaussian input file. The '''number of cores''' requested from the queueing system '''must''' be identical to '''NProcShared''' as specified in the Gaussian input file. The installed Gaussian binaries are shared-memory parallel only, so only single-node jobs make sense. Without ''%NProcShared'' Gaussian will use only one core by default.<br />
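As a sketch of this layout, a shell heredoc can generate a minimal parallel input file whose ''%NProcShared'' matches a 4-core request from the queueing system. The file name ''water.com'' and the HF/6-31G(d) route line are illustrative placeholders, not taken from the module's examples:

```shell
# Illustrative sketch: write a minimal 4-core parallel Gaussian input file.
# 'water.com' and the HF/6-31G(d) single-point route are placeholders.
cat > water.com <<'EOF'
%NProcShared=4
%Mem=4000MB
# HF/6-31G(d) SP

Water single point

0 1
O   0.000000   0.000000   0.117300
H   0.000000   0.757200  -0.469200
H   0.000000  -0.757200  -0.469200

EOF
# The core count in the input must match the queueing system request:
grep -c '^%NProcShared=4' water.com
```

The `grep` at the end merely confirms that the Link 0 line is present; in a real job the same number would be passed to the queueing system as the core count.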
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with the command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g16.C.01<br />
</pre><br />
to load version ''g16.C.01'' of Gaussian.<br />
<br><br />
<br />
'''In production jobs we strongly recommend always specifying<br />
the version when loading a module.'''<br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g16 < $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com<br />
</pre><br />
In most cases, running Gaussian amounts to setting up a command input file and redirecting that input into the g16 command.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://gaussian.com/man/ Gaussian manual]. In addition, the program [[Gaussview]] is a very good graphical user interface for constructing molecules and for setting up calculations. Finally, these calculation setups can be saved as Gaussian command files and can thereafter be submitted to the cluster with the help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, scratch files of Gaussian are placed in GAUSS_SCRDIR, as displayed when loading the Gaussian module. In most cases the module load command of Gaussian should set GAUSS_SCRDIR to point to an optimal node-local file system. When running multiple Gaussian jobs together on one node, a user may want to add one more sub-directory level containing, e.g., the job id and job name for clarity - if the queueing system has not already done so.<br />
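The extra sub-directory level can be created with a few shell lines. In this sketch, GAUSS_SCRDIR and the Slurm variables are given placeholder values; inside a real job the module sets GAUSS_SCRDIR and Slurm sets SLURM_JOB_ID and SLURM_JOB_NAME:

```shell
# Sketch with placeholder values; inside a real job the module sets
# GAUSS_SCRDIR and Slurm sets SLURM_JOB_ID / SLURM_JOB_NAME.
GAUSS_SCRDIR=/tmp
SLURM_JOB_ID=12345
SLURM_JOB_NAME=testjob
# Add a job-specific sub-directory level below the scratch directory:
GAUSS_SCRDIR="$GAUSS_SCRDIR/${SLURM_JOB_ID}_${SLURM_JOB_NAME}"
export GAUSS_SCRDIR
mkdir -p "$GAUSS_SCRDIR"
echo "$GAUSS_SCRDIR"
```

Gaussian reads GAUSS_SCRDIR when it starts, so exporting the extended path before invoking g16 is sufficient.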
<br />
Predicting how much disk space a specific Gaussian calculation will need can be a difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally, try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
%MaxDisk=50000MB<br />
</pre><br />
to the route section of the Gaussian input file. But please be aware that (a) [http://gaussian.com/maxdisk/ Gaussian does not necessarily obey the specified value] and (b) you might force Gaussian to select a slower algorithm when specifying an inappropriate value.<br />
<br />
In any case please make sure that you request more node-local disk space from the queueing system than you have specified in the Gaussian input file. For information on how much node-local disk space is available at the cluster and how to request a certain amount of node-local disk space for a calculation from the queueing system, please consult the cluster-specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
<br />
Except for very short interactive test jobs please never run Gaussian calculations in any globally mounted directory like your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is as difficult as predicting the disk requirements. The strategies are very similar. So for a large new unknown system, start with smaller test systems and smaller basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the route section of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations exceed the Mem value by at most 2 GB. Therefore it is usually sufficient to request Mem+2 GB from the queueing system.<br />
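The Mem+2 GB rule of thumb is easy to script when generating job files. This sketch extracts the %Mem value (in MB) from an input file and adds the safety margin; the file name ''input.com'' is a placeholder created here only for the demonstration:

```shell
# Sketch: read %Mem (in MB) from a Gaussian input file and add a ~2 GB
# safety margin for the queueing system request.
# 'input.com' is a placeholder file created here for the demonstration.
printf '%%Mem=10000MB\n' > input.com
mem_mb=$(grep -i '^%mem=' input.com | tr -dc '0-9')
slurm_mem_mb=$((mem_mb + 2048))
# Memory to request from the queueing system, in MB:
echo "$slurm_mem_mb"
```

The computed value would then be passed to the queueing system as the per-node memory request.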
<br />
But please carefully monitor the output of Gaussian when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when specifying too low memory values. So when the output is indicating that with more memory the ''integrals could be kept in memory'' (just an example for one of the messages), the calculation will be faster when assigning more memory.<br />
<br />
In case of shared-memory parallel jobs the number of workers has only a minor influence on the memory consumption (maybe up to 10%). This is because all workers work together on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks, SSDs are far more than 1000 times faster when serving random-I/O requests. Therefore some of the default strategies of Gaussian, e.g. recalculating some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two-centre integrals, etc.<br />
<br />
So if you plan to do many huge calculations that do not fit into the RAM, you may want to compare the execution time of a job that re-calculates the intermediate values whenever needed with that of a job that forces these values to be written to and read from the node-local SSDs. Depending on how much time it costs to re-calculate the intermediate values, using the SSDs can be much faster.<br />
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Slurm example (Hexanitroethane C2N6O12) that runs a 4-core parallel single-point energy calculation using the method B3LYP and the basis set 6-31G(df,pd). To submit the example do the following steps:<br />
<pre><br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/bwforcluster-gaussian-example.sbatch ./<br />
$ sbatch bwforcluster-gaussian-example.sbatch<br />
</pre><br />
The last step submits the example job script ''bwforcluster-gaussian-example.sbatch'' to the queueing system. Once the job has started, all temporary files are kept below the directory [[Hardware_and_Architecture_(bwForCluster_JUSTUS_2)#Storage_Architecture|'''$SCRATCH''']], which is only visible on the compute node where the job is running. If the option '''--gres=scratch:nnn''' was specified when submitting the job script, '''$SCRATCH''' points to the node-local SSDs; otherwise it points to a RAM disk. Please read this ''local file system'' documentation '''carefully''', as well as the comments in the example script ''bwforcluster-gaussian-example.sbatch''.<br />
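The relevant job-script directives can be sketched as follows. All values are illustrative, not taken from the provided example script, and ''nnn'' must be replaced by the amount of scratch space in the units documented for the cluster:

```shell
#!/bin/bash
# Illustrative sbatch header (values are placeholders, not a tested recipe):
#SBATCH --nodes=1              # Gaussian is shared-memory parallel: one node
#SBATCH --cpus-per-task=4      # must match %NProcShared in the input file
#SBATCH --gres=scratch:nnn     # request node-local SSD scratch ($SCRATCH);
                               # omit this line to get a RAM disk instead

module load chem/gaussian
g16 < input.com > output.log   # 'input.com' is a placeholder name
```

For the authoritative directive set, consult the comments in ''bwforcluster-gaussian-example.sbatch'' itself.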
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts we have created a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com ./<br />
$ gauss_sub test0553-8core-parallel.com<br />
</pre><br />
<br />
== Caveat for Windows users ==<br />
<br />
If you have transferred a Gaussian input file from a Windows computer<br />
to Unix, make sure to convert the Windows line breaks (<CR>+<LF>)<br />
to Unix ones (only <LF>). Otherwise Gaussian will write strange error messages.<br />
Typical Unix commands for this are 'dos2unix' and 'unix2dos'. Example:<br />
<br />
<pre><br />
$ dos2unix test0553-8core-parallel.com<br />
</pre><br />
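Where dos2unix is not installed, the check and the conversion can be sketched with standard tools. The file ''input.com'' is a placeholder created here only for the demonstration:

```shell
# Sketch: detect and strip Windows line endings with portable tools.
# 'input.com' is a placeholder; a real file would come from your transfer.
printf 'line one\r\nline two\r\n' > input.com
cr=$(printf '\r')
# A carriage return anywhere in the file indicates Windows line endings:
grep -q "$cr" input.com && echo "CRLF found - conversion needed"
# tr removes the carriage returns where dos2unix is unavailable:
tr -d '\r' < input.com > input_unix.com
grep -q "$cr" input_unix.com || echo "clean"
```

Note that `tr` writes to a new file, whereas dos2unix converts in place.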
<br />
= Version-specific information =<br />
<br />
For information specific to version ''VERSION'', consult the module system with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8238Gaussian2021-01-26T10:27:34Z<p>C Mosch: /* Queueing system template provided by Gaussian module */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/man/ Reference Manual]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDF;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kind of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained via command line after login to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and parallel version is done via statement<br />
<pre><br />
%NProcShare=PPN<br />
</pre><br />
in section ''Link 0 commands'' before the ''route section'' at the beginning of the Gaussian input file. The '''number of cores''' requested from the queueing system '''must''' be identical to '''NProcShared''' as specified in the Gaussian input file. The installed Gaussian binaries are shared-memory parallel only. Therefore only single node jobs do make sense. Without ''NProcShare'' Gaussian will use only one core by default.<br />
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g16.C.01<br />
</pre><br />
to load version ''g16.C.01'' of Gaussian.<br />
<br><br />
<br />
'''In production jobs we strongly recommend to always specify<br />
the version when loading a module.'''<br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g16 < $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com<br />
</pre><br />
In most cases running Gaussian requires setting up the command input file and redirecting that input into command g16.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://gaussian.com/man/ Gaussian manual]. In addition the program [[Gaussview]] is a very good graphical user interface for constructing molecules and for setting up calculations. Finally these calculation setups can be saved as Gaussian command files and thereafter can be submitted to the cluster with help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, scratch files of Gaussian are placed in GAUSS_SCRDIR as displayed when loading the Gaussian module. In most cases the module load command of Gaussian should set the GAUSS_SCRDIR pointing to an optimal node-local file system. When running multiple Gaussian jobs together on one node a user may want to add one more sub-directory level containing e.g. job id and job name for clarity - if not done so already by the queueing system.<br />
<br />
Predicting how much disk space a specific Gaussian calculation can be a difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
%MaxDisk=50000MB<br />
</pre><br />
to the route section of the Gaussian input file. But please be aware that (a) [http://gaussian.com/maxdisk/ Gaussian does not necessarily obey the specified value]] and (b) you might force Gaussian to select a slower algorithm when specifying an inappropriate value.<br />
<br />
In any case please make sure that you request more node-local disk space from the queueing system then you have specified in the Gausian input file. For information on how much node-local disk space is available at the cluster and how to request a certain amount of node-local disk space for a calculation from the queueing system, please consult the cluster specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
<br />
Except for very short interactive test jobs please never run Gaussian calculations in any globally mounted directory like your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is as difficult as predicting the disk requirements. The strategies are very similar. So for a large new unknown system, start with smaller test systems and smaller basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the route section of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations that exceed the Mem value by at most by 2GB. Therefore it is usually sufficient to request Mem+2GB from the queueing system.<br />
<br />
But please carefully monitor the output of Gaussian when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when specifying too low memory values. So when the output is indicating that with more memory the ''integrals could be kept in memory'' (just an example for one of the messages), the calculation will be faster when assigning more memory.<br />
<br />
In case of shared-memory parallel jobs the number of workers has only minor influence on the memory consumption (maybe up to 10%). This is since all workers work together on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks SSD's are far more than 1000 times faster when serving random-IO requests. Therefore some of the default strategies of Gaussian, e.g. recalculate some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two centre integrals, etc.<br />
<br />
So if you plan to do many huge calculations that do not fit into the RAM, you may want to compare the execution time of a job that is re-calculating the intermediate values whenever needed and a job that forces these values to be written to and read from the node-local SSD's. Depending on how much time it costs to re-calculate the intermediate values, using the SSD's can be much faster.<br />
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Slurm example (Hexanitroethan C2N6O12) that runs a 4 core parallel single energy point calculation using method B3LYP and basis set 6-31g(df,pd). To submit the example do the following steps:<br />
<pre><br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/bwforcluster-gaussian-example.sbatch ./<br />
$ sbatch bwforcluster-gaussian-example.sbatch<br />
</pre><br />
The last step submits the job example script ''bwforcluster-gaussian-example.sbatch'' to the queueing system. Once started on a compute node, all temporary files are kept below directory [[Hardware_and_Architecture_(bwForCluster_JUSTUS_2)#Storage_Architecture|'''$SCRATCH''']] only visible on the compute node where the job is running. When option '''--gres=scratch:nnn''' has been specified while submitting the job script, then '''$SCRATCH''' points to the node-local SSDs. Otherwise (option '''--gres=scratch:nnn''' has not been specified) '''$SCRATCH''' points to a RAM disk. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the queueing system example script ''bwforcluster-gaussian-example.sbatch''.<br />
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts we have created a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com ./<br />
$ gauss_sub test0553-8core-parallel.com<br />
</pre><br />
<br />
== Caveat for windows users ==<br />
<br />
If you have transferred the Gaussian input file from a Windows computer<br />
to Unix then make sure to convert the line breaks of Windows (<CR>+<LF>) <br />
to Unix (only <LF>). Otherwise Gaussian will write strange error messages.<br />
Typical Unix commands for that are: 'dos2unix' and 'unix2dos'. Example:<br />
<br />
<pre><br />
$ dos2unix test0553-8core-parallel.com<br />
</pre><br />
<br />
= Version-specific information =<br />
<br />
For specific information about version ''VERSION'' see the information available via the module system with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8237Gaussian2021-01-26T10:26:42Z<p>C Mosch: /* Queueing system template provided by Gaussian module */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/man/ Reference Manual]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDF;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kind of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained via command line after login to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and parallel version is done via statement<br />
<pre><br />
%NProcShare=PPN<br />
</pre><br />
in section ''Link 0 commands'' before the ''route section'' at the beginning of the Gaussian input file. The '''number of cores''' requested from the queueing system '''must''' be identical to '''NProcShared''' as specified in the Gaussian input file. The installed Gaussian binaries are shared-memory parallel only. Therefore only single node jobs do make sense. Without ''NProcShare'' Gaussian will use only one core by default.<br />
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g16.C.01<br />
</pre><br />
to load version ''g16.C.01'' of Gaussian.<br />
<br><br />
<br />
'''In production jobs we strongly recommend to always specify<br />
the version when loading a module.'''<br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g16 < $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com<br />
</pre><br />
In most cases running Gaussian requires setting up the command input file and redirecting that input into command g16.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://gaussian.com/man/ Gaussian manual]. In addition the program [[Gaussview]] is a very good graphical user interface for constructing molecules and for setting up calculations. Finally these calculation setups can be saved as Gaussian command files and thereafter can be submitted to the cluster with help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, scratch files of Gaussian are placed in GAUSS_SCRDIR as displayed when loading the Gaussian module. In most cases the module load command of Gaussian should set the GAUSS_SCRDIR pointing to an optimal node-local file system. When running multiple Gaussian jobs together on one node a user may want to add one more sub-directory level containing e.g. job id and job name for clarity - if not done so already by the queueing system.<br />
<br />
Predicting how much disk space a specific Gaussian calculation can be a difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
%MaxDisk=50000MB<br />
</pre><br />
to the route section of the Gaussian input file. But please be aware that (a) [http://gaussian.com/maxdisk/ Gaussian does not necessarily obey the specified value]] and (b) you might force Gaussian to select a slower algorithm when specifying an inappropriate value.<br />
<br />
In any case please make sure that you request more node-local disk space from the queueing system then you have specified in the Gausian input file. For information on how much node-local disk space is available at the cluster and how to request a certain amount of node-local disk space for a calculation from the queueing system, please consult the cluster specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
<br />
Except for very short interactive test jobs please never run Gaussian calculations in any globally mounted directory like your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is as difficult as predicting the disk requirements. The strategies are very similar. So for a large new unknown system, start with smaller test systems and smaller basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the route section of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations that exceed the Mem value by at most by 2GB. Therefore it is usually sufficient to request Mem+2GB from the queueing system.<br />
<br />
But please carefully monitor the output of Gaussian when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when specifying too low memory values. So when the output is indicating that with more memory the ''integrals could be kept in memory'' (just an example for one of the messages), the calculation will be faster when assigning more memory.<br />
<br />
In case of shared-memory parallel jobs the number of workers has only minor influence on the memory consumption (maybe up to 10%). This is since all workers work together on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks SSD's are far more than 1000 times faster when serving random-IO requests. Therefore some of the default strategies of Gaussian, e.g. recalculate some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two centre integrals, etc.<br />
<br />
So if you plan to do many huge calculations that do not fit into the RAM, you may want to compare the execution time of a job that is re-calculating the intermediate values whenever needed and a job that forces these values to be written to and read from the node-local SSD's. Depending on how much time it costs to re-calculate the intermediate values, using the SSD's can be much faster.<br />
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Slurm example (Hexanitroethan C2N6O12) that runs a 4 core parallel single energy point calculation using method B3LYP and basis set 6-31g(df,pd). To submit the example do the following steps:<br />
<pre><br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/bwforcluster-gaussian-example.sbatch ./<br />
$ sbatch bwforcluster-gaussian-example.sbatch<br />
</pre><br />
The last step submits the job example script ''bwforcluster-gaussian-example.sbatch'' to the queueing system. Once started on a compute node, all temporary files are kept below directory [[Hardware_and_Architecture_(bwForCluster_JUSTUS_2)#Storage_Architecture|'''$SCRATCH''']] only visible on the particular compute node. When option '''--gres=scratch:nnn''' has been specified while submitting the job script, then '''$SCRATCH''' points to the node-local SSDs. Otherwise (option '''--gres=scratch:nnn''' has not been specified) '''$SCRATCH''' points to a RAM disk. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the queueing system example script ''bwforcluster-gaussian-example.sbatch''.<br />
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts we have created a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com ./<br />
$ gauss_sub test0553-8core-parallel.com<br />
</pre><br />
<br />
== Caveat for Windows users ==<br />
<br />
If you have transferred the Gaussian input file from a Windows computer<br />
to Unix then make sure to convert the line breaks of Windows (<CR>+<LF>) <br />
to Unix (only <LF>). Otherwise Gaussian will write strange error messages.<br />
Typical Unix commands for that are: 'dos2unix' and 'unix2dos'. Example:<br />
<br />
<pre><br />
$ dos2unix test0553-8core-parallel.com<br />
</pre><br />
<br />
= Version-specific information =<br />
<br />
For specific information about version ''VERSION'' see the information available via the module system with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8236Gaussian2021-01-26T10:24:51Z<p>C Mosch: /* Queueing system template provided by Gaussian module */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/man/ Reference Manual]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDFT;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kind of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained via command line after login to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and parallel version is done via the statement<br />
<pre><br />
%NProcShared=PPN<br />
</pre><br />
in the ''Link 0 commands'' section before the ''route section'' at the beginning of the Gaussian input file. The '''number of cores''' requested from the queueing system '''must''' be identical to the value of '''NProcShared''' in the Gaussian input file. The installed Gaussian binaries are shared-memory parallel only, so only single-node jobs make sense. Without ''NProcShared'', Gaussian will use only one core by default.<br />
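As a sketch of keeping the two values consistent, both the input file and the job script can be generated from a single shell variable (the file names, the water geometry and the route line are placeholders, not part of the module's examples):

```shell
# Keep %NProcShared and the Slurm core request in sync by deriving
# both from one variable (PPN). File names and geometry are made up.
PPN=8

cat > h2o.com <<EOF
%NProcShared=${PPN}
%Mem=4000MB
# B3LYP/6-31G(d) SP

water single point

0 1
O  0.000  0.000  0.000
H  0.000  0.757  0.587
H  0.000 -0.757  0.587

EOF

cat > h2o.sbatch <<EOF
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=${PPN}
module load chem/gaussian
g16 < h2o.com > h2o.log
EOF
```

Changing ''PPN'' then updates the core count in both files at once, so the two can never disagree.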
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g16.C.01<br />
</pre><br />
to load version ''g16.C.01'' of Gaussian.<br />
<br><br />
<br />
'''For production jobs we strongly recommend always specifying<br />
the version when loading a module.'''<br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g16 < $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com<br />
</pre><br />
In most cases running Gaussian consists of setting up a command input file and redirecting that input into the command ''g16''.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://gaussian.com/man/ Gaussian manual]. In addition, the program [[Gaussview]] is a very good graphical user interface for constructing molecules and for setting up calculations. Finally, these calculation setups can be saved as Gaussian command files and thereafter submitted to the cluster with the help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, Gaussian's scratch files are placed in GAUSS_SCRDIR, as displayed when loading the Gaussian module. In most cases the module load command sets GAUSS_SCRDIR to an optimal node-local file system. When running multiple Gaussian jobs together on one node, a user may want to add one more sub-directory level containing e.g. the job id and job name for clarity - if the queueing system has not done so already.<br />
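A minimal sketch of that extra sub-directory level (inside a real job Slurm sets ''SLURM_JOB_ID'' and ''SLURM_JOB_NAME'' itself; the fallback values below are dummies for illustration only):

```shell
# Give each job its own scratch sub-directory named after the Slurm
# job id and job name; the fallbacks are illustrative dummy values.
SLURM_JOB_ID="${SLURM_JOB_ID:-123456}"
SLURM_JOB_NAME="${SLURM_JOB_NAME:-testjob}"
export GAUSS_SCRDIR="${GAUSS_SCRDIR:-/tmp}/${SLURM_JOB_ID}_${SLURM_JOB_NAME}"
mkdir -p "$GAUSS_SCRDIR"
echo "$GAUSS_SCRDIR"
```

Run this after ''module load chem/gaussian'' inside the job script, so that Gaussian's scratch files of concurrent jobs cannot collide.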
<br />
Predicting how much disk space a specific Gaussian calculation needs can be a difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
%MaxDisk=50000MB<br />
</pre><br />
to the route section of the Gaussian input file. But please be aware that (a) [http://gaussian.com/maxdisk/ Gaussian does not necessarily obey the specified value] and (b) you might force Gaussian to select a slower algorithm when specifying an inappropriate value.<br />
<br />
In any case please make sure that you request more node-local disk space from the queueing system than you have specified in the Gaussian input file. For information on how much node-local disk space is available at the cluster and how to request a certain amount of node-local disk space for a calculation from the queueing system, please consult the cluster specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
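As a sketch of that headroom rule (the gres name ''scratch'' is taken from the example above; the 20% margin is an arbitrary assumption, not a recommendation):

```shell
# Request more node-local scratch from Slurm than %MaxDisk grants
# Gaussian; here a 20% safety margin on top of 50 GB. Both values
# are illustrative only.
maxdisk_gb=50
scratch_gb=$(( maxdisk_gb + maxdisk_gb / 5 ))
echo "sbatch --gres=scratch:${scratch_gb} myjob.sbatch"
```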
<br />
Except for very short interactive test jobs please never run Gaussian calculations in any globally mounted directory like your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is as difficult as predicting the disk requirements. The strategies are very similar. So for a large new unknown system, start with smaller test systems and smaller basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the route section of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations exceed the Mem value by at most 2 GB. Therefore it is usually sufficient to request Mem+2GB from the queueing system.<br />
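That rule of thumb can be scripted. The sketch below reads ''%Mem'' from a hypothetical input file and prints the matching Slurm request (file name and values are made up for illustration):

```shell
# Derive the Slurm memory request (%Mem + 2 GB safety margin) from
# the Link 0 section of a Gaussian input file. File name is made up.
printf '%%Mem=10000MB\n%%NProcShared=4\n' > job.com
mem_mb=$(sed -n 's/^%Mem=\([0-9][0-9]*\)MB$/\1/p' job.com)
slurm_mem_mb=$(( mem_mb + 2048 ))
echo "sbatch --mem=${slurm_mem_mb}M myjob.sbatch"
```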
<br />
But please carefully monitor the output of Gaussian when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when the specified memory value is too low. So when the output indicates that with more memory the ''integrals could be kept in memory'' (just one example of such a message), the calculation will run faster if you assign more memory.<br />
<br />
In case of shared-memory parallel jobs the number of workers has only a minor influence on memory consumption (maybe up to 10%). This is because all workers operate on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks, SSDs are more than 1000 times faster at serving random-I/O requests. Therefore some of Gaussian's default strategies, e.g. recalculating some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two-centre integrals.<br />
<br />
So if you plan to run many large calculations that do not fit into RAM, you may want to compare the execution time of a job that recalculates the intermediate values whenever needed with that of a job that forces these values to be written to and read from the node-local SSDs. Depending on how expensive recalculating the intermediate values is, using the SSDs can be much faster.<br />
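Such a comparison could look like the following session sketch (''SCF=Direct'' recomputes integrals and is the default, ''SCF=Conventional'' stores them in the scratch directory; the input file names are invented, and please verify these route-section keywords against the Gaussian keyword reference for your version):

```
$ time g16 < job_direct.com       > job_direct.log          # SCF=Direct: integrals recomputed
$ time g16 < job_conventional.com > job_conventional.log    # SCF=Conventional: integrals on SSD scratch
```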
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Slurm example (hexanitroethane, C2N6O12) that runs a 4-core parallel single-point energy calculation using the B3LYP method and the 6-31g(df,pd) basis set. To submit the example, perform the following steps:<br />
<pre><br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/bwforcluster-gaussian-example.sbatch ./<br />
$ sbatch bwforcluster-gaussian-example.sbatch<br />
</pre><br />
The last step submits the example job script ''bwforcluster-gaussian-example.sbatch'' to the queueing system. Once started on a compute node, all temporary files are kept below directory [[Hardware_and_Architecture_(bwForCluster_JUSTUS_2)#Storage_Architecture|'''$SCRATCH''']], which is only visible on that particular compute node. If you specified option '''--gres=scratch:nnn''' when submitting the job script, '''$SCRATCH''' points to a node-local SSD; otherwise '''$SCRATCH''' points to a RAM disk. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the queueing system example script ''bwforcluster-gaussian-example.sbatch''.<br />
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts we have created a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com ./<br />
$ gauss_sub test0553-8core-parallel.com<br />
</pre><br />
<br />
== Caveat for Windows users ==<br />
<br />
If you have transferred the Gaussian input file from a Windows computer<br />
to Unix then make sure to convert the line breaks of Windows (<CR>+<LF>) <br />
to Unix (only <LF>). Otherwise Gaussian will write strange error messages.<br />
Typical Unix commands for that are: 'dos2unix' and 'unix2dos'. Example:<br />
<br />
<pre><br />
$ dos2unix test0553-8core-parallel.com<br />
</pre><br />
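If ''dos2unix'' is not installed, a sed one-liner does the same job. The sketch below first builds a small CRLF file to demonstrate (the file name and its contents are invented for illustration):

```shell
# Create a sample input with Windows (CRLF) line endings:
printf '%%NProcShared=4\r\n# B3LYP/6-31G(d) SP\r\n' > crlf_example.com
# Strip the trailing CR from every line (GNU sed; equivalent to
# 'dos2unix crlf_example.com'):
sed -i 's/\r$//' crlf_example.com
```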
<br />
= Version-specific information =<br />
<br />
For specific information about version ''VERSION'' see the information available via the module system with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8235Gaussian2021-01-26T10:16:24Z<p>C Mosch: /* Queueing system template provided by Gaussian module */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/man/ Reference Manual]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDF;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kind of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained via command line after login to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and parallel version is done via statement<br />
<pre><br />
%NProcShare=PPN<br />
</pre><br />
in section ''Link 0 commands'' before the ''route section'' at the beginning of the Gaussian input file. The '''number of cores''' requested from the queueing system '''must''' be identical to '''NProcShared''' as specified in the Gaussian input file. The installed Gaussian binaries are shared-memory parallel only. Therefore only single node jobs do make sense. Without ''NProcShare'' Gaussian will use only one core by default.<br />
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g16.C.01<br />
</pre><br />
to load version ''g16.C.01'' of Gaussian.<br />
<br><br />
<br />
'''In production jobs we strongly recommend to always specify<br />
the version when loading a module.'''<br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g16 < $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com<br />
</pre><br />
In most cases running Gaussian requires setting up the command input file and redirecting that input into command g16.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://gaussian.com/man/ Gaussian manual]. In addition the program [[Gaussview]] is a very good graphical user interface for constructing molecules and for setting up calculations. Finally these calculation setups can be saved as Gaussian command files and thereafter can be submitted to the cluster with help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, scratch files of Gaussian are placed in GAUSS_SCRDIR as displayed when loading the Gaussian module. In most cases the module load command of Gaussian should set the GAUSS_SCRDIR pointing to an optimal node-local file system. When running multiple Gaussian jobs together on one node a user may want to add one more sub-directory level containing e.g. job id and job name for clarity - if not done so already by the queueing system.<br />
<br />
Predicting how much disk space a specific Gaussian calculation can be a difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
%MaxDisk=50000MB<br />
</pre><br />
to the route section of the Gaussian input file. But please be aware that (a) [http://gaussian.com/maxdisk/ Gaussian does not necessarily obey the specified value]] and (b) you might force Gaussian to select a slower algorithm when specifying an inappropriate value.<br />
<br />
In any case please make sure that you request more node-local disk space from the queueing system then you have specified in the Gausian input file. For information on how much node-local disk space is available at the cluster and how to request a certain amount of node-local disk space for a calculation from the queueing system, please consult the cluster specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
<br />
Except for very short interactive test jobs please never run Gaussian calculations in any globally mounted directory like your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is as difficult as predicting the disk requirements. The strategies are very similar. So for a large new unknown system, start with smaller test systems and smaller basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the route section of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations that exceed the Mem value by at most by 2GB. Therefore it is usually sufficient to request Mem+2GB from the queueing system.<br />
<br />
But please carefully monitor the output of Gaussian when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when specifying too low memory values. So when the output is indicating that with more memory the ''integrals could be kept in memory'' (just an example for one of the messages), the calculation will be faster when assigning more memory.<br />
<br />
In case of shared-memory parallel jobs the number of workers has only minor influence on the memory consumption (maybe up to 10%). This is since all workers work together on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks SSD's are far more than 1000 times faster when serving random-IO requests. Therefore some of the default strategies of Gaussian, e.g. recalculate some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two centre integrals, etc.<br />
<br />
So if you plan to do many huge calculations that do not fit into the RAM, you may want to compare the execution time of a job that is re-calculating the intermediate values whenever needed and a job that forces these values to be written to and read from the node-local SSD's. Depending on how much time it costs to re-calculate the intermediate values, using the SSD's can be much faster.<br />
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Slurm example (Hexanitroethan C2N6O12) that runs a 4 core parallel single energy point calculation using method B3LYP and basis set 6-31g(df,pd). To submit the example do the following steps:<br />
<pre><br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/bwforcluster-gaussian-example.sbatch ./<br />
$ sbatch bwforcluster-gaussian-example.sbatch<br />
</pre><br />
The last step submits the job example script ''bwforcluster-gaussian-example.sbatch'' to the queueing system. Once started on a compute node, all calculations will be done within a node-local directory on the [[Hardware_and_Architecture_(bwForCluster_JUSTUS_2)#Storage_Architecture|local disks ($SCRATCH)]] of that particular compute node. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the queueing system example script ''bwforcluster-gaussian-example.sbatch''.<br />
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts we have created a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com ./<br />
$ gauss_sub test0553-8core-parallel.com<br />
</pre><br />
<br />
== Caveat for windows users ==<br />
<br />
If you have transferred the Gaussian input file from a Windows computer<br />
to Unix then make sure to convert the line breaks of Windows (<CR>+<LF>) <br />
to Unix (only <LF>). Otherwise Gaussian will write strange error messages.<br />
Typical Unix commands for that are: 'dos2unix' and 'unix2dos'. Example:<br />
<br />
<pre><br />
$ dos2unix test0553-8core-parallel.com<br />
</pre><br />
<br />
= Version-specific information =<br />
<br />
For specific information about version ''VERSION'' see the information available via the module system with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8234Gaussian2021-01-26T10:16:04Z<p>C Mosch: /* Queueing system template provided by Gaussian module */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/man/ Reference Manual]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDF;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kind of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained via command line after login to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and parallel version is done via statement<br />
<pre><br />
%NProcShare=PPN<br />
</pre><br />
in section ''Link 0 commands'' before the ''route section'' at the beginning of the Gaussian input file. The '''number of cores''' requested from the queueing system '''must''' be identical to '''NProcShared''' as specified in the Gaussian input file. The installed Gaussian binaries are shared-memory parallel only. Therefore only single node jobs do make sense. Without ''NProcShare'' Gaussian will use only one core by default.<br />
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g16.C.01<br />
</pre><br />
to load version ''g16.C.01'' of Gaussian.<br />
<br><br />
<br />
'''In production jobs we strongly recommend to always specify<br />
the version when loading a module.'''<br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g16 < $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com<br />
</pre><br />
In most cases running Gaussian requires setting up the command input file and redirecting that input into command g16.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://gaussian.com/man/ Gaussian manual]. In addition the program [[Gaussview]] is a very good graphical user interface for constructing molecules and for setting up calculations. Finally these calculation setups can be saved as Gaussian command files and thereafter can be submitted to the cluster with help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, scratch files of Gaussian are placed in GAUSS_SCRDIR as displayed when loading the Gaussian module. In most cases the module load command of Gaussian should set the GAUSS_SCRDIR pointing to an optimal node-local file system. When running multiple Gaussian jobs together on one node a user may want to add one more sub-directory level containing e.g. job id and job name for clarity - if not done so already by the queueing system.<br />
<br />
Predicting how much disk space a specific Gaussian calculation can be a difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
%MaxDisk=50000MB<br />
</pre><br />
to the route section of the Gaussian input file. But please be aware that (a) [http://gaussian.com/maxdisk/ Gaussian does not necessarily obey the specified value]] and (b) you might force Gaussian to select a slower algorithm when specifying an inappropriate value.<br />
<br />
In any case please make sure that you request more node-local disk space from the queueing system then you have specified in the Gausian input file. For information on how much node-local disk space is available at the cluster and how to request a certain amount of node-local disk space for a calculation from the queueing system, please consult the cluster specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
<br />
Except for very short interactive test jobs please never run Gaussian calculations in any globally mounted directory like your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is as difficult as predicting the disk requirements. The strategies are very similar. So for a large new unknown system, start with smaller test systems and smaller basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the route section of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations that exceed the Mem value by at most by 2GB. Therefore it is usually sufficient to request Mem+2GB from the queueing system.<br />
<br />
But please carefully monitor the output of Gaussian when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when specifying too low memory values. So when the output is indicating that with more memory the ''integrals could be kept in memory'' (just an example for one of the messages), the calculation will be faster when assigning more memory.<br />
<br />
In case of shared-memory parallel jobs the number of workers has only minor influence on the memory consumption (maybe up to 10%). This is since all workers work together on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks SSD's are far more than 1000 times faster when serving random-IO requests. Therefore some of the default strategies of Gaussian, e.g. recalculate some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two centre integrals, etc.<br />
<br />
So if you plan to run many huge calculations that do not fit into RAM, you may want to compare the execution time of a job that recalculates the intermediate values whenever needed with that of a job that forces these values to be written to and read from the node-local SSDs. Depending on how much time it costs to recalculate the intermediate values, using the SSDs can be much faster.<br />
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Slurm example for hexanitroethane (C2N6O12) that runs a 4-core parallel single-point energy calculation using the B3LYP method and the 6-31g(df,pd) basis set. To submit the example, perform the following steps:<br />
<pre><br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/bwforcluster-gaussian-example.sbatch ./<br />
$ sbatch bwforcluster-gaussian-example.sbatch<br />
</pre><br />
The last step submits the job example script ''bwforcluster-gaussian-example.sbatch'' to the queueing system. Once started on a compute node, all calculations will be done within a node-local directory on the [[Hardware_and_Architecture_(bwForCluster_JUSTUS_2)#Storage_Architecture|local disks ($SCRATCH)]] of that particular compute node. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the queueing system example script ''bwforcluster-gaussian-example.sbatch''.<br />
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts we have created a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com ./<br />
$ gauss_sub test0553-8core-parallel.com<br />
</pre><br />
<br />
== Caveat for Windows users ==<br />
<br />
If you have transferred a Gaussian input file from a Windows computer to Unix, make sure to convert the Windows line breaks (<CR>+<LF>) to Unix line breaks (only <LF>). Otherwise Gaussian will produce strange error messages. Typical Unix commands for this conversion are 'dos2unix' and 'unix2dos'. Example:<br />
<br />
<pre><br />
$ dos2unix test0553-8core-parallel.com<br />
</pre><br />
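If dos2unix is not installed, the carriage returns can also be stripped with standard tools. The following sketch is illustrative only; the file name and its content are made up:<br />

```shell
# Illustrative sketch: detect Windows line endings (CR+LF) and strip the
# carriage returns with tr when dos2unix is not available. The file name
# and its content are made up.
printf '%%Mem=1000MB\r\n' > win.com
if grep -q "$(printf '\r')" win.com; then
    tr -d '\r' < win.com > unix.com   # same effect as dos2unix
    mv unix.com win.com
fi
if grep -q "$(printf '\r')" win.com; then echo "CR still present"; else echo "clean"; fi
```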
<br />
= Version-specific information =<br />
<br />
For specific information about a version ''VERSION'', consult the module system with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as information about the support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8233Gaussian2021-01-26T10:15:43Z<p>C Mosch: /* Queueing system template provided by Gaussian module */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/man/ Reference Manual]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDFT;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kinds of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained via command line after login to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and in shared-memory parallel mode. Switching between the serial and the parallel version is done via the statement<br />
<pre><br />
%NProcShared=PPN<br />
</pre><br />
in the ''Link 0 commands'' section before the ''route section'' at the beginning of the Gaussian input file. The '''number of cores''' requested from the queueing system '''must''' be identical to '''NProcShared''' as specified in the Gaussian input file. The installed Gaussian binaries are shared-memory parallel only, so only single-node jobs make sense. Without ''%NProcShared'' Gaussian will use only one core by default.<br />
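This required consistency between the input file and the job request can be checked with a few lines of shell. The following is an illustrative sketch, not a site-provided tool; the input file and the CPUS value are made-up examples:<br />

```shell
# Illustrative sketch: extract %NProcShared from a Gaussian input file
# and compare it with the core count that will be requested from Slurm.
# The input file and the CPUS value are made-up examples.
cat > input.com <<'EOF'
%NProcShared=4
%Mem=2000MB
#P B3LYP/6-31G(d) SP
EOF
NPROC=$(grep -i '^%nprocshared=' input.com | cut -d= -f2)
CPUS=4   # value that would go into: sbatch --cpus-per-task=$CPUS
if [ "$NPROC" = "$CPUS" ]; then
    echo "OK: $NPROC cores in input and job request"
else
    echo "mismatch: input wants $NPROC cores, job requests $CPUS" >&2
fi
```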
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with the command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g16.C.01<br />
</pre><br />
to load version ''g16.C.01'' of Gaussian.<br />
<br><br />
<br />
'''In production jobs we strongly recommend always specifying the version when loading a module.'''<br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g16 < $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com<br />
</pre><br />
In most cases running Gaussian means setting up the command input file and redirecting that input into the g16 command.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://gaussian.com/man/ Gaussian manual]. In addition, the program [[Gaussview]] is a very good graphical user interface for constructing molecules and setting up calculations. These calculation setups can be saved as Gaussian command files and then submitted to the cluster with the help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, Gaussian scratch files are placed in GAUSS_SCRDIR, which is displayed when loading the Gaussian module. In most cases the module load command sets GAUSS_SCRDIR to point to an optimal node-local file system. When running multiple Gaussian jobs together on one node, a user may want to add one more sub-directory level containing e.g. the job id and the job name for clarity - if the queueing system has not done so already.<br />
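Such an extra sub-directory level could be created as sketched below. This is an illustrative example only; SLURM_JOB_ID and SLURM_JOB_NAME are set by Slurm inside a batch job, and the fallback values merely let the sketch run standalone:<br />

```shell
# Illustrative sketch: add a per-job sub-directory below GAUSS_SCRDIR so
# that several Gaussian jobs on one node keep their scratch files apart.
# SLURM_JOB_ID and SLURM_JOB_NAME are set by Slurm inside a batch job;
# the fallback values only let the sketch run standalone.
JOB_ID="${SLURM_JOB_ID:-0}"
JOB_NAME="${SLURM_JOB_NAME:-test}"
export GAUSS_SCRDIR="${GAUSS_SCRDIR:-/tmp}/g16_${JOB_ID}_${JOB_NAME}"
mkdir -p "$GAUSS_SCRDIR"
echo "scratch directory: $GAUSS_SCRDIR"
```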
<br />
Predicting how much disk space a specific Gaussian calculation will need can be a difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of a Gaussian job). Finally, try to extrapolate to your desired final system and basis set.<br />
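While such a test job is running, its scratch usage can be observed with standard tools, for example as sketched below (illustrative only; GAUSS_SCRDIR is assumed to be set by the Gaussian module, with /tmp as a standalone fallback):<br />

```shell
# Illustrative sketch: observe the scratch usage of a running test job.
# GAUSS_SCRDIR is assumed to be set by the Gaussian module; /tmp is only
# a fallback so that the sketch also runs outside a job.
SCR="${GAUSS_SCRDIR:-/tmp}"
du -sh "$SCR" 2>/dev/null | awk '{print "scratch used:", $1}'
df -h "$SCR" | tail -n 1 | awk '{print "free on file system:", $4}'
```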
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
%MaxDisk=50000MB<br />
</pre><br />
to the route section of the Gaussian input file. But please be aware that (a) [http://gaussian.com/maxdisk/ Gaussian does not necessarily obey the specified value] and (b) you might force Gaussian to select a slower algorithm when specifying an inappropriate value.<br />
<br />
In any case, please make sure that you request more node-local disk space from the queueing system than you have specified in the Gaussian input file. For information on how much node-local disk space is available on the cluster and how to request a certain amount of it for a calculation, please consult the cluster-specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
<br />
Except for very short interactive test jobs, please never run Gaussian calculations in a globally mounted directory such as your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is as difficult as predicting the disk requirements, and the strategies are very similar: for a large, unknown system, start with smaller test systems and smaller basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the route section of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations exceed the Mem value by at most 2 GB, so it is usually sufficient to request Mem+2GB from the queueing system.<br />
<br />
But please monitor the output of Gaussian carefully when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when the specified memory is too low. So when the output indicates that with more memory the ''integrals could be kept in memory'' (just one example of such a message), the calculation will be faster if you assign more memory.<br />
<br />
In case of shared-memory parallel jobs the number of workers has only a minor influence on the memory consumption (maybe up to 10%), because all workers operate on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks, SSDs are far more than 1000 times faster when serving random I/O requests. Therefore some of Gaussian's default strategies, e.g. recalculating some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two-centre integrals.<br />
<br />
So if you plan to run many huge calculations that do not fit into RAM, you may want to compare the execution time of a job that recalculates the intermediate values whenever needed with that of a job that forces these values to be written to and read from the node-local SSDs. Depending on how much time it costs to recalculate the intermediate values, using the SSDs can be much faster.<br />
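One way to set up such a comparison is Gaussian's SCF keyword: SCF=Direct recomputes the two-electron integrals on the fly (the default in recent versions), while SCF=Conventional writes them to disk. The input fragment below is only a hypothetical illustration; please check the Gaussian keyword documentation for your version before relying on it:<br />

```
%Mem=10000MB
%NProcShared=4
#P B3LYP/6-31G(d) SCF=Conventional SP
```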
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Slurm example for hexanitroethane (C2N6O12) that runs a 4-core parallel single-point energy calculation using the B3LYP method and the 6-31g(df,pd) basis set. To submit the example, perform the following steps:<br />
<pre><br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/bwforcluster-gaussian-example.sbatch ./<br />
$ sbatch bwforcluster-gaussian-example.sbatch<br />
</pre><br />
The last step submits the job example script ''bwforcluster-gaussian-example.sbatch'' to the queueing system. Once started on a compute node, all calculations will be done within a node-local directory on the [[Hardware_and_Architecture_(bwForCluster_JUSTUS_2)#Storage_Architecture|local disks ($SCRATCH)]] of that particular compute node. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the queueing system example script ''bwforcluster-gaussian-example.sbatch''.<br />
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts we have created a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com ./<br />
$ gauss_sub test0553-8core-parallel.com<br />
</pre><br />
<br />
== Caveat for Windows users ==<br />
<br />
If you have transferred a Gaussian input file from a Windows computer to Unix, make sure to convert the Windows line breaks (<CR>+<LF>) to Unix line breaks (only <LF>). Otherwise Gaussian will produce strange error messages. Typical Unix commands for this conversion are 'dos2unix' and 'unix2dos'. Example:<br />
<br />
<pre><br />
$ dos2unix test0553-8core-parallel.com<br />
</pre><br />
<br />
= Version-specific information =<br />
<br />
For specific information about a version ''VERSION'', consult the module system with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as information about the support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8232Gaussian2021-01-26T10:15:26Z<p>C Mosch: /* Queueing system template provided by Gaussian module */</p>
C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8231Gaussian2021-01-26T10:12:17Z<p>C Mosch: /* Examples */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/man/ Reference Manual]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDFT;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kinds of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained via command line after login to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and in shared-memory parallel mode. Switching between the serial and the parallel version is done via the statement<br />
<pre><br />
%NProcShared=PPN<br />
</pre><br />
in the ''Link 0 commands'' section before the ''route section'' at the beginning of the Gaussian input file. The '''number of cores''' requested from the queueing system '''must''' be identical to '''NProcShared''' as specified in the Gaussian input file. The installed Gaussian binaries are shared-memory parallel only, so only single-node jobs make sense. Without ''%NProcShared'' Gaussian will use only one core by default.<br />
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with the command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g16.C.01<br />
</pre><br />
to load version ''g16.C.01'' of Gaussian.<br />
<br><br />
<br />
'''In production jobs we strongly recommend always specifying the version when loading a module.'''<br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g16 < $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com<br />
</pre><br />
In most cases running Gaussian means setting up the command input file and redirecting that input into the g16 command.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://gaussian.com/man/ Gaussian manual]. In addition, the program [[Gaussview]] is a very good graphical user interface for constructing molecules and setting up calculations. These calculation setups can be saved as Gaussian command files and then submitted to the cluster with the help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, Gaussian scratch files are placed in GAUSS_SCRDIR, which is displayed when loading the Gaussian module. In most cases the module load command sets GAUSS_SCRDIR to point to an optimal node-local file system. When running multiple Gaussian jobs together on one node, a user may want to add one more sub-directory level containing e.g. the job id and the job name for clarity - if the queueing system has not done so already.<br />
<br />
Predicting how much disk space a specific Gaussian calculation will need can be a difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of a Gaussian job). Finally, try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
%MaxDisk=50000MB<br />
</pre><br />
to the route section of the Gaussian input file. But please be aware that (a) [http://gaussian.com/maxdisk/ Gaussian does not necessarily obey the specified value] and (b) you might force Gaussian to select a slower algorithm when specifying an inappropriate value.<br />
<br />
In any case, please make sure that you request more node-local disk space from the queueing system than you have specified in the Gaussian input file. For information on how much node-local disk space is available on the cluster and how to request a certain amount of it for a calculation, please consult the cluster-specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
<br />
Except for very short interactive test jobs, please never run Gaussian calculations in a globally mounted directory such as your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is as difficult as predicting the disk requirements, and the strategies are very similar: for a large, unknown system, start with smaller test systems and smaller basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the route section of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations that exceed the Mem value by at most 2 GB. Therefore it is usually sufficient to request Mem+2GB from the queueing system.<br />
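<br />
For example (a hypothetical pairing of input file value and Slurm request; the <code>--mem</code> flag and its units may differ on your cluster):<br />
<pre><br />
# input file contains:  %Mem=10000MB<br />
# job script requests:  #SBATCH --mem=12000M   (Mem + 2 GB headroom)<br />
</pre><br />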
<br />
But please carefully monitor the output of Gaussian when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when too low memory values are specified. So when the output indicates that with more memory the ''integrals could be kept in memory'' (just one example of such a message), the calculation will be faster with more memory assigned.<br />
<br />
In case of shared-memory parallel jobs the number of workers has only minor influence on the memory consumption (maybe up to 10%). This is because all workers work together on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks, SSDs are far more than 1000 times faster when serving random I/O requests. Therefore some of the default strategies of Gaussian, e.g. recalculating some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two-centre integrals.<br />
<br />
So if you plan to do many huge calculations that do not fit into the RAM, you may want to compare the execution time of a job that re-calculates the intermediate values whenever needed with that of a job that forces these values to be written to and read from the node-local SSDs. Depending on how much time it costs to re-calculate the intermediate values, using the SSDs can be much faster.<br />
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple queueing system example of Hexanitroethane (C2N6O12) that runs a 4-core parallel single-point energy calculation using method B3LYP and basis set 6-31g(df,pd). To submit the example do the following steps:<br />
<pre><br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/bwforcluster-gaussian-example.sbatch ./<br />
$ sbatch bwforcluster-gaussian-example.sbatch<br />
</pre><br />
The last step submits the job example script ''bwforcluster-gaussian-example.sbatch'' to the queueing system. Once started on a compute node, all calculations will be done under a unique directory on the [[Batch_Jobs_-_bwForCluster_Chemistry_Features#Disk_Space_and_Resources|local file system ($TMPDIR)]] of that particular compute node. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the queueing system example script ''bwforcluster-gaussian-example.sbatch''.<br />
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts we have created a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com ./<br />
$ gauss_sub test0553-8core-parallel.com<br />
</pre><br />
<br />
== Caveat for Windows users ==<br />
<br />
If you have transferred the Gaussian input file from a Windows computer<br />
to Unix then make sure to convert the line breaks of Windows (<CR>+<LF>) <br />
to Unix (only <LF>). Otherwise Gaussian will write strange error messages.<br />
Typical Unix commands for that are: 'dos2unix' and 'unix2dos'. Example:<br />
<br />
<pre><br />
$ dos2unix test0553-8core-parallel.com<br />
</pre><br />
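<br />
If ''dos2unix'' is not installed, the same conversion can be done with sed (GNU sed shown; the in-place option behaves differently on BSD/macOS systems):<br />
<pre><br />
$ sed -i 's/\r$//' test0553-8core-parallel.com<br />
</pre><br />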
<br />
= Version-specific information =<br />
<br />
For specific information about version ''VERSION'' see the information available via the module system with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8230Gaussian2021-01-26T10:06:03Z<p>C Mosch: /* Memory usage */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/man/ Reference Manual]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDFT;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kind of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained via command line after login to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and parallel version is done via statement<br />
<pre><br />
%NProcShared=PPN<br />
</pre><br />
in section ''Link 0 commands'' before the ''route section'' at the beginning of the Gaussian input file. The '''number of cores''' requested from the queueing system '''must''' be identical to '''NProcShared''' as specified in the Gaussian input file. The installed Gaussian binaries are shared-memory parallel only. Therefore only single-node jobs make sense. Without ''%NProcShared'' Gaussian will use only one core by default.<br />
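<br />
A minimal sketch of such an input file (hypothetical molecule, title and settings, shown only to illustrate where the ''Link 0 commands'' go relative to the route section):<br />
<pre><br />
%NProcShared=8<br />
%Mem=8000MB<br />
#P B3LYP/6-31G(d) Opt<br />
<br />
water geometry optimization (example title line)<br />
<br />
0 1<br />
O   0.000000   0.000000   0.117300<br />
H   0.000000   0.757200  -0.469200<br />
H   0.000000  -0.757200  -0.469200<br />
<br />
</pre><br />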
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g16.C.01<br />
</pre><br />
to load version ''g16.C.01'' of Gaussian.<br />
<br><br />
<br />
'''In production jobs we strongly recommend always specifying<br />
the version when loading a module.'''<br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g16 < $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com<br />
</pre><br />
In most cases running Gaussian requires setting up the command input file and redirecting that input into the g16 command.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://gaussian.com/man/ Gaussian manual]. In addition, the program [[Gaussview]] is a very good graphical user interface for constructing molecules and for setting up calculations. Finally these calculation setups can be saved as Gaussian command files and thereafter submitted to the cluster with the help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, scratch files of Gaussian are placed in GAUSS_SCRDIR as displayed when loading the Gaussian module. In most cases the module load command of Gaussian should set GAUSS_SCRDIR to point to an optimal node-local file system. When running multiple Gaussian jobs together on one node a user may want to add one more sub-directory level containing e.g. job id and job name for clarity - if not done so already by the queueing system.<br />
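<br />
A hypothetical way to add such a job-specific sub-directory level (the job-id variable name depends on the queueing system; MOAB_JOBID is assumed here):<br />
<pre><br />
$ export GAUSS_SCRDIR="$GAUSS_SCRDIR/${MOAB_JOBID:-manual}_myjob"<br />
$ mkdir -p "$GAUSS_SCRDIR"<br />
</pre><br />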
<br />
Predicting how much disk space a specific Gaussian calculation needs can be a difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
%MaxDisk=50000MB<br />
</pre><br />
to the route section of the Gaussian input file. But please be aware that (a) [http://gaussian.com/maxdisk/ Gaussian does not necessarily obey the specified value] and (b) you might force Gaussian to select a slower algorithm when specifying an inappropriate value.<br />
<br />
In any case please make sure that you request more node-local disk space from the queueing system than you have specified in the Gaussian input file. For information on how much node-local disk space is available at the cluster and how to request a certain amount of node-local disk space for a calculation from the queueing system, please consult the cluster-specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
<br />
Except for very short interactive test jobs please never run Gaussian calculations in any globally mounted directory like your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is as difficult as predicting the disk requirements. The strategies are very similar. So for a large new unknown system, start with smaller test systems and smaller basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the route section of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations that exceed the Mem value by at most 2 GB. Therefore it is usually sufficient to request Mem+2GB from the queueing system.<br />
<br />
But please carefully monitor the output of Gaussian when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when too low memory values are specified. So when the output indicates that with more memory the ''integrals could be kept in memory'' (just one example of such a message), the calculation will be faster with more memory assigned.<br />
<br />
In case of shared-memory parallel jobs the number of workers has only minor influence on the memory consumption (maybe up to 10%). This is because all workers work together on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks, SSDs are far more than 1000 times faster when serving random I/O requests. Therefore some of the default strategies of Gaussian, e.g. recalculating some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two-centre integrals.<br />
<br />
So if you plan to do many huge calculations that do not fit into the RAM, you may want to compare the execution time of a job that re-calculates the intermediate values whenever needed with that of a job that forces these values to be written to and read from the node-local SSDs. Depending on how much time it costs to re-calculate the intermediate values, using the SSDs can be much faster.<br />
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Moab example of Hexanitroethane (C2N6O12) that runs an 8-core parallel single-point energy calculation using method B3LYP and basis set 6-31g(df,pd). To submit the example do the following steps:<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/{bwforcluster-gaussian-example.moab,test0553-*.com} ./<br />
$ msub bwforcluster-gaussian-example.moab<br />
</pre><br />
The last step submits the job example script ''bwforcluster-gaussian-example.moab'' to the queueing system. Once started on a compute node, all calculations will be done under a unique directory on the [[Batch_Jobs_-_bwForCluster_Chemistry_Features#Disk_Space_and_Resources|local file system ($TMPDIR)]] of that particular compute node. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the queueing system example script ''bwforcluster-gaussian-example.moab''.<br />
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts we have created a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com ./<br />
$ gauss_sub test0553-8core-parallel.com<br />
</pre><br />
<br />
== Caveat for Windows users ==<br />
<br />
If you have transferred the Gaussian input file from a Windows computer<br />
to Unix then make sure to convert the line breaks of Windows (<CR>+<LF>) <br />
to Unix (only <LF>). Otherwise Gaussian will write strange error messages.<br />
Typical Unix commands for that are: 'dos2unix' and 'unix2dos'. Example:<br />
<br />
<pre><br />
$ dos2unix test0553-8core-parallel.com<br />
</pre><br />
<br />
= Version-specific information =<br />
<br />
For specific information about version ''VERSION'' see the information available via the module system with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8229Gaussian2021-01-26T10:03:39Z<p>C Mosch: /* Memory usage */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/man/ Reference Manual]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDFT;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kind of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained via command line after login to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and parallel version is done via statement<br />
<pre><br />
%NProcShared=PPN<br />
</pre><br />
in section ''Link 0 commands'' before the ''route section'' at the beginning of the Gaussian input file. The '''number of cores''' requested from the queueing system '''must''' be identical to '''NProcShared''' as specified in the Gaussian input file. The installed Gaussian binaries are shared-memory parallel only. Therefore only single-node jobs make sense. Without ''%NProcShared'' Gaussian will use only one core by default.<br />
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g16.C.01<br />
</pre><br />
to load version ''g16.C.01'' of Gaussian.<br />
<br><br />
<br />
'''In production jobs we strongly recommend always specifying<br />
the version when loading a module.'''<br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g16 < $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com<br />
</pre><br />
In most cases running Gaussian requires setting up the command input file and redirecting that input into the g16 command.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://gaussian.com/man/ Gaussian manual]. In addition, the program [[Gaussview]] is a very good graphical user interface for constructing molecules and for setting up calculations. Finally these calculation setups can be saved as Gaussian command files and thereafter submitted to the cluster with the help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, scratch files of Gaussian are placed in GAUSS_SCRDIR as displayed when loading the Gaussian module. In most cases the module load command of Gaussian should set GAUSS_SCRDIR to point to an optimal node-local file system. When running multiple Gaussian jobs together on one node a user may want to add one more sub-directory level containing e.g. job id and job name for clarity - if not done so already by the queueing system.<br />
<br />
Predicting how much disk space a specific Gaussian calculation needs can be a difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
%MaxDisk=50000MB<br />
</pre><br />
to the route section of the Gaussian input file. But please be aware that (a) [http://gaussian.com/maxdisk/ Gaussian does not necessarily obey the specified value] and (b) you might force Gaussian to select a slower algorithm when specifying an inappropriate value.<br />
<br />
In any case please make sure that you request more node-local disk space from the queueing system than you have specified in the Gaussian input file. For information on how much node-local disk space is available at the cluster and how to request a certain amount of node-local disk space for a calculation from the queueing system, please consult the cluster-specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
<br />
Except for very short interactive test jobs please never run Gaussian calculations in any globally mounted directory like your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is as difficult as predicting the disk requirements. The strategies are very similar. So for a large new unknown system, start with smaller test systems and smaller basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the route section of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations that exceed the Mem value by at most 2 GB. Therefore it is usually sufficient to request Mem+2GB from the queueing system.<br />
<br />
But please carefully monitor the output of Gaussian when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when too low memory values are specified. So when the output indicates that with more memory e.g. the ''integrals'' could be kept in memory, the calculation might be much faster with more memory assigned.<br />
<br />
In case of shared-memory parallel jobs the number of workers has only minor influence on the memory consumption (maybe up to 10%). This is because all workers work together on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks, SSDs are far more than 1000 times faster when serving random I/O requests. Therefore some of the default strategies of Gaussian, e.g. recalculating some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two-centre integrals.<br />
<br />
So if you plan to do many huge calculations that do not fit into the RAM, you may want to compare the execution time of a job that re-calculates the intermediate values whenever needed with that of a job that forces these values to be written to and read from the node-local SSDs. Depending on how much time it costs to re-calculate the intermediate values, using the SSDs can be much faster.<br />
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Moab example of Hexanitroethane (C2N6O12) that runs an 8-core parallel single-point energy calculation using method B3LYP and basis set 6-31g(df,pd). To submit the example do the following steps:<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/{bwforcluster-gaussian-example.moab,test0553-*.com} ./<br />
$ msub bwforcluster-gaussian-example.moab<br />
</pre><br />
The last step submits the job example script ''bwforcluster-gaussian-example.moab'' to the queueing system. Once started on a compute node, all calculations will be done under a unique directory on the [[Batch_Jobs_-_bwForCluster_Chemistry_Features#Disk_Space_and_Resources|local file system ($TMPDIR)]] of that particular compute node. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the queueing system example script ''bwforcluster-gaussian-example.moab''.<br />
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts we have created a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com ./<br />
$ gauss_sub test0553-8core-parallel.com<br />
</pre><br />
<br />
== Caveat for Windows users ==<br />
<br />
If you have transferred the Gaussian input file from a Windows computer<br />
to Unix then make sure to convert the line breaks of Windows (<CR>+<LF>) <br />
to Unix (only <LF>). Otherwise Gaussian will write strange error messages.<br />
Typical Unix commands for that are: 'dos2unix' and 'unix2dos'. Example:<br />
<br />
<pre><br />
$ dos2unix test0553-8core-parallel.com<br />
</pre><br />
<br />
= Version-specific information =<br />
<br />
For specific information about version ''VERSION'' see the information available via the module system with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8228Gaussian2021-01-26T10:00:56Z<p>C Mosch: /* Disk usage */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/man/ Reference Manual]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDFT;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kind of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained via command line after login to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and parallel version is done via statement<br />
<pre><br />
%NProcShared=PPN<br />
</pre><br />
in section ''Link 0 commands'' before the ''route section'' at the beginning of the Gaussian input file. The '''number of cores''' requested from the queueing system '''must''' be identical to '''NProcShared''' as specified in the Gaussian input file. The installed Gaussian binaries are shared-memory parallel only. Therefore only single-node jobs make sense. Without ''%NProcShared'' Gaussian will use only one core by default.<br />
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g16.C.01<br />
</pre><br />
to load version ''g16.C.01'' of Gaussian.<br />
<br><br />
<br />
'''In production jobs we strongly recommend always specifying<br />
the version when loading a module.'''<br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g16 < $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com<br />
</pre><br />
In most cases running Gaussian requires setting up the command input file and redirecting that input into the g16 command.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://gaussian.com/man/ Gaussian manual]. In addition, the program [[Gaussview]] is a very good graphical user interface for constructing molecules and for setting up calculations. Finally these calculation setups can be saved as Gaussian command files and thereafter submitted to the cluster with the help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, scratch files of Gaussian are placed in GAUSS_SCRDIR as displayed when loading the Gaussian module. In most cases the module load command of Gaussian should set GAUSS_SCRDIR to point to an optimal node-local file system. When running multiple Gaussian jobs together on one node a user may want to add one more sub-directory level containing e.g. job id and job name for clarity - if not done so already by the queueing system.<br />
<br />
Predicting how much disk space a specific Gaussian calculation needs can be a difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
%MaxDisk=50000MB<br />
</pre><br />
to the route section of the Gaussian input file. But please be aware that (a) [http://gaussian.com/maxdisk/ Gaussian does not necessarily obey the specified value] and (b) you might force Gaussian to select a slower algorithm when specifying an inappropriate value.<br />
<br />
In any case please make sure that you request more node-local disk space from the queueing system than you have specified in the Gaussian input file. For information on how much node-local disk space is available at the cluster and how to request a certain amount of node-local disk space for a calculation from the queueing system, please consult the cluster-specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
<br />
Except for very short interactive test jobs, please never run Gaussian calculations in any globally mounted directory like your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is nearly as difficult as predicting the disk requirements, but the strategies can be very similar: start with small test systems and small basis sets, then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the route section of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations that exceed the Mem value by at most 2 GB. Therefore it is usually sufficient to request Mem+2GB from the queueing system.<br />
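Putting the memory and parallel settings together, an input file might begin as in the following sketch. The molecule, method and values are purely illustrative; consult the Gaussian manual for the exact keyword placement:<br />

```
%NProcShared=8
%Mem=10000MB
#P B3LYP/6-31G(d) SP

water single point (illustrative example only)

0 1
O   0.000000   0.000000   0.117300
H   0.000000   0.757200  -0.469200
H   0.000000  -0.757200  -0.469200

```

With this header you would request 8 cores and roughly 12 GB (Mem + 2 GB) of memory from the queueing system.<br />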
<br />
But please carefully monitor the output of Gaussian when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when the specified memory value is too low. So if the output indicates that with more memory, e.g., the ''integrals'' could be kept in memory, the calculation might be much faster with more memory assigned.<br />
<br />
In case of shared-memory parallel jobs the number of workers has only a minor influence on the memory consumption (maybe up to 10%). This is because all workers work together on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks, SSDs are far more than 1000 times faster when serving random-IO requests. Therefore some of the default strategies of Gaussian, e.g. recalculating some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two-centre integrals, etc.<br />
<br />
So if you plan to do many huge calculations that do not fit into the RAM, you may want to compare the execution time of a job that re-calculates the intermediate values whenever needed with that of a job that forces these values to be written to and read from the node-local SSDs. Depending on how much time it costs to re-calculate the intermediate values, using the SSDs can be much faster.<br />
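One concrete way to try this in Gaussian is the ''SCF=Conventional'' route keyword, which stores the two-electron integrals in scratch files instead of recomputing them (recent Gaussian versions default to a direct SCF). A hypothetical comparison would run two otherwise identical inputs whose route lines differ only in this keyword:<br />

```
#P B3LYP/6-31G(d) SP                   ! direct SCF (default): integrals recomputed
#P B3LYP/6-31G(d) SP SCF=Conventional  ! integrals stored in scratch files on the SSD
```

Compare the wall-clock times of both runs; whichever is faster for your molecule and basis set is the better choice.<br />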
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Moab example of hexanitroethane (C2N6O12) that runs an 8-core parallel single-point energy calculation using the method B3LYP and the basis set 6-31g(df,pd). To submit the example do the following steps:<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/{bwforcluster-gaussian-example.moab,test0553-*.com} ./<br />
$ msub bwforcluster-gaussian-example.moab<br />
</pre><br />
The last step submits the job example script ''bwforcluster-gaussian-example.moab'' to the queueing system. Once started on a compute node, all calculations will be done under a unique directory on the [[Batch_Jobs_-_bwForCluster_Chemistry_Features#Disk_Space_and_Resources|local file system ($TMPDIR)]] of that particular compute node. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the queueing system example script ''bwforcluster-gaussian-example.moab''.<br />
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts we have created a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com ./<br />
$ gauss_sub test0553-8core-parallel.com<br />
</pre><br />
<br />
== Caveat for Windows users ==<br />
<br />
If you have transferred the Gaussian input file from a Windows computer<br />
to Unix, make sure to convert the Windows line breaks (<CR>+<LF>)<br />
to Unix line breaks (only <LF>). Otherwise Gaussian will produce strange error messages.<br />
Typical Unix commands for this conversion are 'dos2unix' and 'unix2dos'. Example:<br />
<br />
<pre><br />
$ dos2unix test0553-8core-parallel.com<br />
</pre><br />
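If dos2unix is not available, the same conversion can be done with the standard tr command. The file names below are only examples:<br />

```shell
# Create a sample file with Windows (CR+LF) line endings, then strip the CR characters.
printf 'line one\r\nline two\r\n' > input_dos.com
tr -d '\r' < input_dos.com > input_unix.com
# input_unix.com now contains plain Unix (LF-only) line endings.
```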
<br />
= Version-specific information =<br />
<br />
For specific information about version ''VERSION'' see the information available via the module system with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as information about the support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8227Gaussian2021-01-26T09:58:54Z<p>C Mosch: /* Disk usage */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/man/ Reference Manual]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDF;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kind of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained via command line after login to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and parallel version is done via statement<br />
<pre><br />
%NProcShare=PPN<br />
</pre><br />
in section ''Link 0 commands'' before the ''route section'' at the beginning of the Gaussian input file. The '''number of cores''' requested from the queueing system '''must''' be identical to '''NProcShared''' as specified in the Gaussian input file. The installed Gaussian binaries are shared-memory parallel only. Therefore only single node jobs do make sense. Without ''NProcShare'' Gaussian will use only one core by default.<br />
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g16.C.01<br />
</pre><br />
to load version ''g16.C.01'' of Gaussian.<br />
<br><br />
<br />
'''In production jobs we strongly recommend to always specify<br />
the version when loading a module.'''<br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g16 < $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com<br />
</pre><br />
In most cases running Gaussian requires setting up the command input file and redirecting that input into command g16.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://gaussian.com/man/ Gaussian manual]. In addition the program [[Gaussview]] is a very good graphical user interface for constructing molecules and for setting up calculations. Finally these calculation setups can be saved as Gaussian command files and thereafter can be submitted to the cluster with help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, scratch files of Gaussian are placed in GAUSS_SCRDIR as displayed when loading the Gaussian module. In most cases the module load command of Gaussian should set the GAUSS_SCRDIR pointing to an optimal node-local file system. When running multiple Gaussian jobs together on one node a user may want to add one more sub-directory level containing e.g. job id and job name for clarity - if not done so already by the queueing system.<br />
<br />
Predicting how much disk space a specific Gaussian calculation can be a difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
%MaxDisk=50000MB<br />
</pre><br />
to the route section of the Gaussian input file. But please be aware that (a) [http://gaussian.com/maxdisk/ Gaussian does not necessarily obey the specified value]] and (b) you might force Gaussian to select a slower algorithm when specifying an inappropriate value.<br />
<br />
In any case please make sure that you request sufficient but not far too much node-local disk space from the queueing system. For information on how much node-local disk space is available at the cluster and how to request a certain amount of node-local disk space for a calculation from the queueing system, please consult the cluster specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
<br />
Except for very short interactive test jobs please never run Gaussian calculations in any globally mounted directory like your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is nearly as difficult as predicting the disk requirements. But the strategies can be very similar. So start with small test systems and small basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the route section of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations that exceed the Mem value by at most by 2GB. Therefore it is usually sufficient to request Mem+2GB from the queueing system.<br />
<br />
But please carefully monitor the output of Gaussian when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when specifying too low memory values. So when the output is indicating that with more memory e.g. the ''integrals'' could be kept in memory the calculation might be much faster when assigning more memory.<br />
<br />
In case of shared-memory parallel jobs the number of workers has only minor influence on the memory consumption (maybe up to 10%). This is since all workers work together on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks SSD's are far more than 1000 times faster when serving random-IO requests. Therefore some of the default strategies of Gaussian, e.g. recalculate some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two centre integrals, etc.<br />
<br />
So if you plan to do many huge calculations that do not fit into the RAM, you may want to compare the execution time of a job that is re-calculating the intermediate values whenever needed and a job that forces these values to be written to and read from the node-local SSD's. Depending on how much time it costs to re-calculate the intermediate values, using the SSD's can be much faster.<br />
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Moab example of Hexanitroethan (C2N6O12) that runs an 8 core parallel single energy point calculation using method B3LYP and basis set 6-31g(df,pd). To submit the example do the following steps:<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/{bwforcluster-gaussian-example.moab,test0553-*.com} ./<br />
$ msub bwforcluster-gaussian-example.moab<br />
</pre><br />
The last step submits the job example script ''bwforcluster-gaussian-example.moab'' to the queueing system. Once started on a compute node, all calculations will be done under an unique directory on the [[Batch_Jobs_-_bwForCluster_Chemistry_Features#Disk_Space_and_Resources|local file system ($TMPDIR)]] of that particular compute node. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the queueing system example script ''bwforcluster-gaussian-example.moab''.<br />
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts we have created a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com ./<br />
$ gauss_sub test0553-8core-parallel.com<br />
</pre><br />
<br />
== Caveat for windows users ==<br />
<br />
If you have transferred the Gaussian input file from a Windows computer<br />
to Unix then make sure to convert the line breaks of Windows (<CR>+<LF>) <br />
to Unix (only <LF>). Otherwise Gaussian will write strange error messages.<br />
Typical Unix commands for that are: 'dos2unix' and 'unix2dos'. Example:<br />
<br />
<pre><br />
$ dos2unix test0553-8core-parallel.com<br />
</pre><br />
<br />
= Version-specific information =<br />
<br />
For specific information about version ''VERSION'' see the information available via the module system with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8226Gaussian2021-01-26T09:57:16Z<p>C Mosch: /* Disk usage */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/man/ Reference Manual]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDF;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kind of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained via command line after login to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and parallel version is done via statement<br />
<pre><br />
%NProcShare=PPN<br />
</pre><br />
in section ''Link 0 commands'' before the ''route section'' at the beginning of the Gaussian input file. The '''number of cores''' requested from the queueing system '''must''' be identical to '''NProcShared''' as specified in the Gaussian input file. The installed Gaussian binaries are shared-memory parallel only. Therefore only single node jobs do make sense. Without ''NProcShare'' Gaussian will use only one core by default.<br />
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g16.C.01<br />
</pre><br />
to load version ''g16.C.01'' of Gaussian.<br />
<br><br />
<br />
'''In production jobs we strongly recommend to always specify<br />
the version when loading a module.'''<br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g16 < $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com<br />
</pre><br />
In most cases running Gaussian requires setting up the command input file and redirecting that input into command g16.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://gaussian.com/man/ Gaussian manual]. In addition the program [[Gaussview]] is a very good graphical user interface for constructing molecules and for setting up calculations. Finally these calculation setups can be saved as Gaussian command files and thereafter can be submitted to the cluster with help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, scratch files of Gaussian are placed in GAUSS_SCRDIR as displayed when loading the Gaussian module. In most cases the module load command of Gaussian should set the GAUSS_SCRDIR pointing to an optimal node-local file system. When running multiple Gaussian jobs together on one node a user may want to add one more sub-directory level containing e.g. job id and job name for clarity - if not done so already by the queueing system.<br />
<br />
Predicting how much disk space a specific Gaussian calculation can be a difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
%MaxDisk=50000MB<br />
</pre><br />
to the route section of the Gaussian input file. But please be aware that (a) [[http://www.gaussian.com/g_tech/g_ur/k_maxdisk.htm Gaussian does not necessarily obey the specified value]] and (b) you might force Gaussian to select a slower algorithm when specifying an inappropriate value.<br />
<br />
In any case please make sure that you request sufficient but not far too much node-local disk space from the queueing system. For information on how much node-local disk space is available at the cluster and how to request a certain amount of node-local disk space for a calculation from the queueing system, please consult the cluster specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
<br />
Except for very short interactive test jobs please never run Gaussian calculations in any globally mounted directory like your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is nearly as difficult as predicting the disk requirements. But the strategies can be very similar. So start with small test systems and small basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the route section of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations that exceed the Mem value by at most by 2GB. Therefore it is usually sufficient to request Mem+2GB from the queueing system.<br />
<br />
But please carefully monitor the output of Gaussian when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when specifying too low memory values. So when the output is indicating that with more memory e.g. the ''integrals'' could be kept in memory the calculation might be much faster when assigning more memory.<br />
<br />
In case of shared-memory parallel jobs the number of workers has only minor influence on the memory consumption (maybe up to 10%). This is since all workers work together on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks SSD's are far more than 1000 times faster when serving random-IO requests. Therefore some of the default strategies of Gaussian, e.g. recalculate some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two centre integrals, etc.<br />
<br />
So if you plan to do many huge calculations that do not fit into the RAM, you may want to compare the execution time of a job that is re-calculating the intermediate values whenever needed and a job that forces these values to be written to and read from the node-local SSD's. Depending on how much time it costs to re-calculate the intermediate values, using the SSD's can be much faster.<br />
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Moab example of Hexanitroethan (C2N6O12) that runs an 8 core parallel single energy point calculation using method B3LYP and basis set 6-31g(df,pd). To submit the example do the following steps:<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/{bwforcluster-gaussian-example.moab,test0553-*.com} ./<br />
$ msub bwforcluster-gaussian-example.moab<br />
</pre><br />
The last step submits the job example script ''bwforcluster-gaussian-example.moab'' to the queueing system. Once started on a compute node, all calculations will be done under an unique directory on the [[Batch_Jobs_-_bwForCluster_Chemistry_Features#Disk_Space_and_Resources|local file system ($TMPDIR)]] of that particular compute node. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the queueing system example script ''bwforcluster-gaussian-example.moab''.<br />
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts we have created a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com ./<br />
$ gauss_sub test0553-8core-parallel.com<br />
</pre><br />
<br />
== Caveat for windows users ==<br />
<br />
If you have transferred the Gaussian input file from a Windows computer<br />
to Unix then make sure to convert the line breaks of Windows (<CR>+<LF>) <br />
to Unix (only <LF>). Otherwise Gaussian will write strange error messages.<br />
Typical Unix commands for that are: 'dos2unix' and 'unix2dos'. Example:<br />
<br />
<pre><br />
$ dos2unix test0553-8core-parallel.com<br />
</pre><br />
<br />
= Version-specific information =<br />
<br />
For specific information about version ''VERSION'' see the information available via the module system with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8225Gaussian2021-01-26T09:55:05Z<p>C Mosch: </p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/man/ Reference Manual]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDF;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kind of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained via command line after login to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and parallel version is done via statement<br />
<pre><br />
%NProcShare=PPN<br />
</pre><br />
in section ''Link 0 commands'' before the ''route section'' at the beginning of the Gaussian input file. The '''number of cores''' requested from the queueing system '''must''' be identical to '''NProcShared''' as specified in the Gaussian input file. The installed Gaussian binaries are shared-memory parallel only. Therefore only single node jobs do make sense. Without ''NProcShare'' Gaussian will use only one core by default.<br />
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g16.C.01<br />
</pre><br />
to load version ''g16.C.01'' of Gaussian.<br />
<br><br />
<br />
'''In production jobs we strongly recommend to always specify<br />
the version when loading a module.'''<br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g16 < $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com<br />
</pre><br />
In most cases running Gaussian requires setting up the command input file and redirecting that input into command g16.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation on how to construct input files, see the [http://gaussian.com/man/ Gaussian manual]. In addition, [[Gaussview]] provides a very good graphical user interface for building molecules and setting up calculations. Such calculation setups can be saved as Gaussian command files and then submitted to the cluster with the help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, Gaussian places its scratch files in GAUSS_SCRDIR, as displayed when loading the Gaussian module. In most cases the module load command sets GAUSS_SCRDIR to point to an optimal node-local file system. When running multiple Gaussian jobs on one node, you may want to add one more sub-directory level containing e.g. the job id and job name for clarity, if the queueing system has not already done so.<br />
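Such an extra sub-directory level can be added in the job script before starting g16. A minimal sketch (PBS_JOBID and PBS_JOBNAME are Torque-style variables and may differ on your cluster; the /tmp fallback is purely for illustration):

```shell
# Sketch: derive a per-job scratch directory below GAUSS_SCRDIR so that
# several Gaussian jobs sharing a node do not mix their scratch files.
GAUSS_SCRDIR="${GAUSS_SCRDIR:-/tmp/${USER:-$(id -un)}}"   # fallback for illustration
JOB_ID="${PBS_JOBID:-manual}"                             # set by the queueing system
JOB_NAME="${PBS_JOBNAME:-g16}"
export GAUSS_SCRDIR="${GAUSS_SCRDIR}/${JOB_ID}.${JOB_NAME}"
mkdir -p "$GAUSS_SCRDIR"
echo "Gaussian scratch directory: $GAUSS_SCRDIR"
```

After this, g16 will place its scratch files below the job-specific directory.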
<br />
Predicting how much disk space a specific Gaussian calculation requires is a very difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while they are running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
%MaxDisk=50000MB<br />
</pre><br />
to the ''Link 0'' section of the Gaussian input file. But please be aware that (a) [http://www.gaussian.com/g_tech/g_ur/k_maxdisk.htm Gaussian does not necessarily obey the specified value] and (b) an inappropriate value might force Gaussian to select a slower algorithm.<br />
<br />
In any case please make sure that you request sufficient, but not excessive, node-local disk space from the queueing system. For information on how much node-local disk space is available at the cluster and how to request a certain amount for a calculation, please consult the cluster-specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
<br />
Except for very short interactive test jobs, please never run Gaussian calculations in a globally mounted directory such as your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is nearly as difficult as predicting the disk requirements. But the strategies can be very similar. So start with small test systems and small basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the ''Link 0'' section of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. In our experience calculations exceed the Mem value by at most 2 GB, so it is usually sufficient to request Mem+2GB from the queueing system.<br />
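As a sketch, a job with %Mem=10000MB would then request roughly 12 GB from the queueing system. With Moab the corresponding directives in the job script might look like this (the exact resource syntax varies between clusters, so check your cluster's queueing system documentation):

```
#MSUB -l nodes=1:ppn=8
#MSUB -l mem=12gb
#MSUB -l walltime=24:00:00
```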
<br />
But please carefully monitor the output of Gaussian when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when the specified memory is too low. So if the output indicates that with more memory e.g. the ''integrals'' could be kept in memory, the calculation might run much faster with a larger Mem value.<br />
<br />
In shared-memory parallel jobs the number of workers has only a minor influence on the memory consumption (maybe up to 10%), since all workers work together on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks, SSDs are far more than 1000 times faster at serving random-I/O requests. Therefore some of Gaussian's default strategies, e.g. recalculating some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two-centre integrals.<br />
<br />
So if you plan to run many huge calculations that do not fit into RAM, you may want to compare the execution time of a job that re-calculates the intermediate values whenever needed with that of a job that forces these values to be written to and read from the node-local SSDs. Depending on how much time it costs to re-calculate the intermediate values, using the SSDs can be much faster.<br />
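For SCF-type jobs such a comparison can be set up via the route section: the SCF=Direct keyword re-calculates the integrals on the fly (the default in recent Gaussian versions), while SCF=Conventional stores them in scratch files on disk. A sketch of the two route-line variants (method and basis set are placeholders):

```
#P B3LYP/6-31G(d) SCF=Direct        ! variant A: re-calculate integrals
#P B3LYP/6-31G(d) SCF=Conventional  ! variant B: store integrals on the node-local SSD
```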
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Moab example for hexanitroethane (C2N6O12) that runs an 8-core parallel single-point energy calculation using the B3LYP method and the 6-31g(df,pd) basis set. To submit the example, follow these steps:<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/{bwforcluster-gaussian-example.moab,test0553-*.com} ./<br />
$ msub bwforcluster-gaussian-example.moab<br />
</pre><br />
The last step submits the example job script ''bwforcluster-gaussian-example.moab'' to the queueing system. Once started on a compute node, all calculations will be done under a unique directory on the [[Batch_Jobs_-_bwForCluster_Chemistry_Features#Disk_Space_and_Resources|local file system ($TMPDIR)]] of that particular compute node. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the queueing system example script ''bwforcluster-gaussian-example.moab''.<br />
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts, we provide a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com ./<br />
$ gauss_sub test0553-8core-parallel.com<br />
</pre><br />
<br />
== Caveat for Windows users ==<br />
<br />
If you have transferred a Gaussian input file from a Windows computer
to Unix, make sure to convert the Windows line breaks (<CR>+<LF>)
to Unix line breaks (<LF> only). Otherwise Gaussian will produce strange error messages.
The typical Unix commands for this are 'dos2unix' and 'unix2dos'. Example:
<br />
<pre><br />
$ dos2unix test0553-8core-parallel.com<br />
</pre><br />
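If dos2unix is not available, the same conversion can be done with standard tools. A sketch that detects and strips carriage returns ('input.com' is a placeholder file name; the printf line only creates a sample file for illustration):

```shell
# Sketch: detect and strip Windows carriage returns (CR) from a Gaussian
# input file using only standard Unix tools.
printf '%%Mem=1000MB\r\n#P HF/STO-3G\r\n' > input.com   # sample file with CRLF endings
if grep -q "$(printf '\r')" input.com; then
    # tr(1) deletes every CR character; write to a temp file, then replace
    tr -d '\r' < input.com > input.com.unix && mv input.com.unix input.com
fi
grep -q "$(printf '\r')" input.com && echo "CR still present" || echo "line endings are clean"
```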
<br />
= Version-specific information =<br />
<br />
For specific information about a version ''VERSION'', consult the module system with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as the support contact information.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDF;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kind of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained via command line after login to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and the parallel version is done via the statement<br />
<pre><br />
%NProcShared=PPN<br />
</pre><br />
in the ''Link 0 commands'' section before the ''route section'' at the beginning of the Gaussian input file. The '''number of cores''' requested from the queueing system '''must''' be identical to '''NProcShared''' as specified in the Gaussian input file. The installed Gaussian binaries are shared-memory parallel only, so only single-node jobs make sense. Without ''NProcShared'' Gaussian will use only one core by default.<br />
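For illustration, the beginning of an input file for an 8-core shared-memory job could look like this (a minimal sketch with a placeholder route section and molecule, not a recommended setup):<br />
<pre><br />
%NProcShared=8<br />
# B3LYP/6-31G(d) Opt<br />
<br />
Example title line<br />
<br />
0 1<br />
...molecule specification...<br />
</pre><br />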
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with the command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g16.C.01<br />
</pre><br />
to load version ''g16.C.01'' of Gaussian.<br />
<br><br />
<br />
'''In production jobs we strongly recommend always specifying<br />
the version when loading a module.'''<br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g16 < $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com<br />
</pre><br />
In most cases running Gaussian requires setting up the command input file and piping that input into ''g16''.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://www.gaussian.com/g_tech/g_ur/g09help.htm Gaussian manual]. In addition, the program [[Gaussview]] is a very good graphical user interface for constructing molecules and for setting up calculations. These calculation setups can be saved as Gaussian command files and then submitted to the cluster with the help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, scratch files of Gaussian are placed in GAUSS_SCRDIR, as displayed when loading the Gaussian module. In most cases the module load command of Gaussian sets GAUSS_SCRDIR to point to an optimal node-local file system. When running multiple Gaussian jobs together on one node, a user may want to add one more sub-directory level containing, e.g., the job id and job name for clarity, if this is not already done by the queueing system.<br />
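Such an extra sub-directory level can be created in the job script before starting Gaussian (a sketch; the job id variable ''MOAB_JOBID'' is an assumption and may be named differently on your cluster):<br />
<pre><br />
$ export GAUSS_SCRDIR="$GAUSS_SCRDIR/job_${MOAB_JOBID}"<br />
$ mkdir -p "$GAUSS_SCRDIR"<br />
</pre><br />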
<br />
Predicting how much disk space a specific Gaussian calculation requires is a very difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
%MaxDisk=50000MB<br />
</pre><br />
to the route section of the Gaussian input file. But please be aware that (a) [http://www.gaussian.com/g_tech/g_ur/k_maxdisk.htm Gaussian does not necessarily obey the specified value] and (b) you might force Gaussian to select a slower algorithm when specifying an inappropriate value.<br />
<br />
In any case please make sure that you request sufficient but not far too much node-local disk space from the queueing system. For information on how much node-local disk space is available at the cluster and how to request a certain amount of node-local disk space for a calculation from the queueing system, please consult the cluster specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
<br />
Except for very short interactive test jobs please never run Gaussian calculations in any globally mounted directory like your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is nearly as difficult as predicting the disk requirements. But the strategies can be very similar. So start with small test systems and small basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the route section of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations that exceed the Mem value by at most 2 GB. Therefore it is usually sufficient to request Mem + 2 GB from the queueing system.<br />
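For example, for a job specifying ''%Mem=10000MB'' in its input file, one could request about 12 GB from the queueing system (a sketch; the exact resource syntax may differ on your cluster, so please check the local queueing system documentation):<br />
<pre><br />
$ msub -l nodes=1:ppn=8,mem=12gb bwforcluster-gaussian-example.moab<br />
</pre><br />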
<br />
But please carefully monitor the output of Gaussian when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when too little memory is specified. So if the output indicates that, e.g., the ''integrals'' could be kept in memory, the calculation might be much faster with more memory assigned.<br />
<br />
In case of shared-memory parallel jobs the number of workers has only a minor influence on the memory consumption (maybe up to 10%). This is because all workers operate on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks, SSDs are far more than 1000 times faster when serving random-I/O requests. Therefore some of Gaussian's default strategies, e.g. recalculating some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two-centre integrals.<br />
<br />
So if you plan to run many huge calculations that do not fit into the RAM, you may want to compare the execution time of a job that re-calculates the intermediate values whenever needed with that of a job that forces these values to be written to and read from the node-local SSDs. Depending on how much time it costs to re-calculate the intermediate values, using the SSDs can be much faster.<br />
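As a starting point for such a comparison, a conventional (disk-based) SCF can be requested in the route section, which stores the two-electron integrals on disk instead of recomputing them in each iteration (a sketch; please verify the keyword against the Gaussian documentation of your version):<br />
<pre><br />
# B3LYP/6-31G(d) SCF=NoDirect<br />
</pre><br />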
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Moab example of hexanitroethane (C2N6O12) that runs an 8-core parallel single-point energy calculation using the B3LYP method and the 6-31g(df,pd) basis set. To submit the example, perform the following steps:<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/{bwforcluster-gaussian-example.moab,test0553-*.com} ./<br />
$ msub bwforcluster-gaussian-example.moab<br />
</pre><br />
The last step submits the job example script ''bwforcluster-gaussian-example.moab'' to the queueing system. Once started on a compute node, all calculations will be done under a unique directory on the [[Batch_Jobs_-_bwForCluster_Chemistry_Features#Disk_Space_and_Resources|local file system ($TMPDIR)]] of that particular compute node. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the queueing system example script ''bwforcluster-gaussian-example.moab''.<br />
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts, we have created a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com ./<br />
$ gauss_sub test0553-8core-parallel.com<br />
</pre><br />
<br />
== Caveat for Windows users ==<br />
<br />
If you have transferred a Gaussian input file from a Windows computer<br />
to Unix, make sure to convert the Windows line breaks (<CR>+<LF>)<br />
to Unix line breaks (only <LF>). Otherwise Gaussian will produce strange error messages.<br />
Typical Unix commands for this are 'dos2unix' and 'unix2dos'. Example:<br />
<br />
<pre><br />
$ dos2unix test0553-8core-parallel.com<br />
</pre><br />
<br />
= Version-specific information =<br />
<br />
For information specific to version ''VERSION'', see the documentation available via the module system with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as information about the support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8221Gaussian2021-01-26T09:52:24Z<p>C Mosch: /* Loading the module */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDFT;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kinds of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained via command line after login to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and parallel version is done via the statement<br />
<pre><br />
%NProcShared=PPN<br />
</pre><br />
in the ''Link 0 commands'' section before the ''route section'' at the beginning of the Gaussian input file. The number of ''cores'' requested from the queueing system '''must''' be identical to the '''%NProcShared''' value specified in the Gaussian input file. The installed Gaussian binaries are shared-memory parallel only, so only single-node jobs make sense. Without a ''%NProcShared'' statement Gaussian will use only one core by default.<br />
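For illustration, the beginning of an input file for an 8-core run might look like this (a sketch; the molecule, method, basis set and memory value are placeholder assumptions, only the ''%NProcShared'' line is what this section prescribes):<br />

```
%NProcShared=8
%Mem=4000MB
# B3LYP/6-31G(d) SP

Water single point (illustrative example)

0 1
O   0.000000   0.000000   0.117300
H   0.000000   0.757200  -0.469200
H   0.000000  -0.757200  -0.469200
```

An 8-core input like this must then also request 8 cores from the queueing system.<br />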
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g09.D.01<br />
</pre><br />
to load version ''g09.D.01'' of Gaussian.<br />
<br><br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g09 < $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com<br />
</pre><br />
In most cases running Gaussian requires setting up the command input file and piping that input into g09.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://www.gaussian.com/g_tech/g_ur/g09help.htm Gaussian manual]. In addition, the program [[Gaussview]] is a very good graphical user interface for constructing molecules and for setting up calculations. These calculation setups can then be saved as Gaussian command files and submitted to the cluster with the help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, scratch files of Gaussian are placed in GAUSS_SCRDIR, as displayed when loading the Gaussian module. In most cases the module load command of Gaussian sets GAUSS_SCRDIR to point to an optimal node-local file system. When running multiple Gaussian jobs together on one node, you may want to add one more sub-directory level containing e.g. the job id and job name for clarity - if the queueing system does not do so already.<br />
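Such a per-job scratch sub-directory can be created with a few lines of shell (a sketch; ''MOAB_JOBID'' and the fallback values are assumptions - adapt them to your queueing system):<br />

```shell
#!/bin/sh
# Sketch: give each job its own scratch sub-directory below GAUSS_SCRDIR.
# GAUSS_SCRDIR is normally set by the Gaussian module and MOAB_JOBID by the
# queueing system; the fallback values below are only for demonstration.
: "${GAUSS_SCRDIR:=/tmp/gauss_scr_demo}"
: "${MOAB_JOBID:=$$}"
GAUSS_SCRDIR="${GAUSS_SCRDIR}/${MOAB_JOBID}_my_first_job"
mkdir -p "$GAUSS_SCRDIR"
export GAUSS_SCRDIR
echo "$GAUSS_SCRDIR"
```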
<br />
Predicting how much disk space a specific Gaussian calculation requires is a very difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
MaxDisk=50000MB<br />
</pre><br />
to the route section of the Gaussian input file. But please be aware that (a) [http://www.gaussian.com/g_tech/g_ur/k_maxdisk.htm Gaussian does not necessarily obey the specified value] and (b) an inappropriate value might force Gaussian to select a slower algorithm.<br />
<br />
In any case please make sure that you request sufficient but not far too much node-local disk space from the queueing system. For information on how much node-local disk space is available at the cluster and how to request a certain amount of node-local disk space for a calculation from the queueing system, please consult the cluster specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
<br />
Except for very short interactive test jobs please never run Gaussian calculations in any globally mounted directory like your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is nearly as difficult as predicting the disk requirements. But the strategies can be very similar. So start with small test systems and small basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the ''Link 0'' section (before the route section) of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations exceed the Mem value by at most 2GB. Therefore it is usually sufficient to request Mem+2GB from the queueing system.<br />
<br />
But please monitor the output of Gaussian carefully when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when the specified memory value is too low. So if the output indicates that, e.g., the ''integrals'' could be kept in memory, the calculation might be much faster with more memory assigned.<br />
<br />
In case of shared-memory parallel jobs the number of workers has only a minor influence on the memory consumption (maybe up to 10%), because all workers work together on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks, SSDs are far more than 1000 times faster when serving random-I/O requests. Therefore some of the default strategies of Gaussian, e.g. recalculating some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two-centre integrals.<br />
<br />
So if you plan to run many huge calculations that do not fit into RAM, you may want to compare the execution time of a job that re-calculates the intermediate values whenever needed with that of a job that forces these values to be written to and read from the node-local SSDs. Depending on how much time it costs to re-calculate the intermediate values, using the SSDs can be much faster.<br />
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Moab example for Hexanitroethane (C2N6O12) that runs an 8-core parallel single-point energy calculation using the B3LYP method and the 6-31g(df,pd) basis set. To submit the example do the following steps:<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/{bwforcluster-gaussian-example.moab,test0553-*.com} ./<br />
$ msub bwforcluster-gaussian-example.moab<br />
</pre><br />
The last step submits the example job script ''bwforcluster-gaussian-example.moab'' to the queueing system. Once started on a compute node, all calculations will be done under a unique directory on the [[Batch_Jobs_-_bwForCluster_Chemistry_Features#Disk_Space_and_Resources|local file system ($TMPDIR)]] of that particular compute node. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the queueing system example script ''bwforcluster-gaussian-example.moab''.<br />
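For orientation, a Moab job script for such a run typically has the following shape. This is an illustrative sketch, '''not''' the actual contents of ''bwforcluster-gaussian-example.moab''; all directives and values are assumptions to be adapted to your cluster:<br />

```
#!/bin/bash
#MSUB -N gaussian-example          # job name (illustrative)
#MSUB -l nodes=1:ppn=8             # must match %NProcShared in the input file
#MSUB -l walltime=00:30:00
#MSUB -l mem=12gb                  # %Mem plus ~2GB safety margin

module load chem/gaussian
cd $TMPDIR                                            # run on the node-local file system
cp $MOAB_SUBMITDIR/test0553-8core-parallel.com ./
g09 < test0553-8core-parallel.com > test0553.log
cp test0553.log $MOAB_SUBMITDIR/                      # copy results back
```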
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts we have created a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com ./<br />
$ gauss_sub test0553-8core-parallel.com<br />
</pre><br />
<br />
== Caveat for Windows users ==<br />
<br />
If you have transferred a Gaussian input file from a Windows computer<br />
to Unix, make sure to convert the Windows line breaks (<CR>+<LF>)<br />
to Unix line breaks (only <LF>). Otherwise Gaussian may emit confusing error messages.<br />
The typical Unix commands for this conversion are 'dos2unix' and 'unix2dos'. Example:<br />
<br />
<pre><br />
$ dos2unix test0553-8core-parallel.com<br />
</pre><br />
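If ''dos2unix'' is not installed, the same check and conversion can be done with standard tools (a sketch; the demo file name is just an example):<br />

```shell
#!/bin/sh
# Sketch: detect Windows line endings (CR characters) and strip them with tr,
# a portable fallback when dos2unix is not available.
printf 'line1\r\nline2\r\n' > demo_input.com   # demo file with CRLF endings
CR=$(printf '\r')
if grep -q "$CR" demo_input.com; then
    tr -d "$CR" < demo_input.com > demo_input.com.unix
    mv demo_input.com.unix demo_input.com
fi
```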
<br />
= Version-specific information =<br />
<br />
For specific information about a particular version ''VERSION'', consult the module system help with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8218Gaussian2021-01-26T09:46:54Z<p>C Mosch: /* Versions and Availability */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDFT;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kinds of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained via command line after login to the bwForCluster:<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and parallel version is done via the statement<br />
<pre><br />
%NProcShared=PPN<br />
</pre><br />
in the ''Link 0 commands'' section before the ''route section'' at the beginning of the Gaussian input file. ''PPN'' should be replaced by the number of parallel cores. This value '''must''' be identical to the ''ppn'' value specified when requesting resources from the queueing system. The installed Gaussian binaries are shared-memory parallel only, so only single-node jobs make sense. Without a ''%NProcShared'' statement the serial version of Gaussian is selected.<br />
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g09.D.01<br />
</pre><br />
to load version ''g09.D.01'' of Gaussian.<br />
<br><br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g09 < $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com<br />
</pre><br />
In most cases running Gaussian requires setting up the command input file and piping that input into g09.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://www.gaussian.com/g_tech/g_ur/g09help.htm Gaussian manual]. In addition, the program [[Gaussview]] is a very good graphical user interface for constructing molecules and for setting up calculations. These calculation setups can then be saved as Gaussian command files and submitted to the cluster with the help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, scratch files of Gaussian are placed in GAUSS_SCRDIR, as displayed when loading the Gaussian module. In most cases the module load command of Gaussian sets GAUSS_SCRDIR to point to an optimal node-local file system. When running multiple Gaussian jobs together on one node, you may want to add one more sub-directory level containing e.g. the job id and job name for clarity - if the queueing system does not do so already.<br />
<br />
Predicting how much disk space a specific Gaussian calculation requires is a very difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
MaxDisk=50000MB<br />
</pre><br />
to the route section of the Gaussian input file. But please be aware that (a) [http://www.gaussian.com/g_tech/g_ur/k_maxdisk.htm Gaussian does not necessarily obey the specified value] and (b) an inappropriate value might force Gaussian to select a slower algorithm.<br />
<br />
In any case please make sure that you request sufficient but not far too much node-local disk space from the queueing system. For information on how much node-local disk space is available at the cluster and how to request a certain amount of node-local disk space for a calculation from the queueing system, please consult the cluster specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
<br />
Except for very short interactive test jobs please never run Gaussian calculations in any globally mounted directory like your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is nearly as difficult as predicting the disk requirements. But the strategies can be very similar. So start with small test systems and small basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the ''Link 0'' section (before the route section) of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations exceed the Mem value by at most 2GB. Therefore it is usually sufficient to request Mem+2GB from the queueing system.<br />
<br />
But please monitor the output of Gaussian carefully when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when the specified memory value is too low. So if the output indicates that, e.g., the ''integrals'' could be kept in memory, the calculation might be much faster with more memory assigned.<br />
<br />
In case of shared-memory parallel jobs the number of workers has only a minor influence on the memory consumption (maybe up to 10%), because all workers work together on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks, SSDs are far more than 1000 times faster when serving random-I/O requests. Therefore some of the default strategies of Gaussian, e.g. recalculating some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two-centre integrals.<br />
<br />
So if you plan to run many huge calculations that do not fit into RAM, you may want to compare the execution time of a job that re-calculates the intermediate values whenever needed with that of a job that forces these values to be written to and read from the node-local SSDs. Depending on how much time it costs to re-calculate the intermediate values, using the SSDs can be much faster.<br />
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Moab example for Hexanitroethane (C2N6O12) that runs an 8-core parallel single-point energy calculation using the B3LYP method and the 6-31g(df,pd) basis set. To submit the example do the following steps:<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/{bwforcluster-gaussian-example.moab,test0553-*.com} ./<br />
$ msub bwforcluster-gaussian-example.moab<br />
</pre><br />
The last step submits the example job script ''bwforcluster-gaussian-example.moab'' to the queueing system. Once started on a compute node, all calculations will be done under a unique directory on the [[Batch_Jobs_-_bwForCluster_Chemistry_Features#Disk_Space_and_Resources|local file system ($TMPDIR)]] of that particular compute node. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the queueing system example script ''bwforcluster-gaussian-example.moab''.<br />
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts we have created a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com ./<br />
$ gauss_sub test0553-8core-parallel.com<br />
</pre><br />
<br />
== Caveat for Windows users ==<br />
<br />
If you have transferred a Gaussian input file from a Windows computer<br />
to Unix, make sure to convert the Windows line breaks (<CR>+<LF>)<br />
to Unix line breaks (only <LF>). Otherwise Gaussian may emit confusing error messages.<br />
The typical Unix commands for this conversion are 'dos2unix' and 'unix2dos'. Example:<br />
<br />
<pre><br />
$ dos2unix test0553-8core-parallel.com<br />
</pre><br />
<br />
= Version-specific information =<br />
<br />
For specific information about a particular version ''VERSION'', consult the module system help with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as information about support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8217Gaussian2021-01-26T09:45:56Z<p>C Mosch: /* Description */</p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDFT;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kinds of jobs.<br />
For more information on features please visit Gaussian's [http://gaussian.com/gaussian16/ ''Overview of Capabilities and Features''] and [http://gaussian.com/relnotes/ ''Release Notes''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained from the [https://cis-hpc.uni-konstanz.de/prod.cis/Justus/chem/gaussian Cluster Information System (CIS)]:<br />
{{#widget:Iframe<br />
|url=https://cis-hpc.uni-konstanz.de/prod.cis/Justus/chem/gaussian<br />
|width=99%<br />
|height=250<br />
|border=<br />
}}<br />
On the command line of a particular bwHPC cluster a list of all available versions is displayed by the command<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and parallel version is done via the statement<br />
<pre><br />
%NProcShared=PPN<br />
</pre><br />
in the ''Link 0 commands'' section before the ''route section'' at the beginning of the Gaussian input file. ''PPN'' should be replaced by the number of parallel cores. This value '''must''' be identical to the ''ppn'' value specified when requesting resources from the queueing system. The installed Gaussian binaries are shared-memory parallel only, so only single-node jobs make sense. Without a ''%NProcShared'' statement the serial version of Gaussian is selected.<br />
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g09.D.01<br />
</pre><br />
to load version ''g09.D.01'' of Gaussian.<br />
<br><br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g09 < $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com<br />
</pre><br />
In most cases running Gaussian requires setting up the command input file and piping that input into g09.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://www.gaussian.com/g_tech/g_ur/g09help.htm Gaussian manual]. In addition, the program [[Gaussview]] is a very good graphical user interface for constructing molecules and for setting up calculations. These calculation setups can then be saved as Gaussian command files and submitted to the cluster with the help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, scratch files of Gaussian are placed in GAUSS_SCRDIR, as displayed when loading the Gaussian module. In most cases the module load command of Gaussian sets GAUSS_SCRDIR to point to an optimal node-local file system. When running multiple Gaussian jobs together on one node, you may want to add one more sub-directory level containing e.g. the job id and job name for clarity - if the queueing system does not do so already.<br />
<br />
Predicting how much disk space a specific Gaussian calculation requires is a very difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
MaxDisk=50000MB<br />
</pre><br />
to the route section of the Gaussian input file. But please be aware that (a) [http://www.gaussian.com/g_tech/g_ur/k_maxdisk.htm Gaussian does not necessarily obey the specified value] and (b) an inappropriate value might force Gaussian to select a slower algorithm.<br />
<br />
In any case please make sure that you request sufficient but not far too much node-local disk space from the queueing system. For information on how much node-local disk space is available at the cluster and how to request a certain amount of node-local disk space for a calculation from the queueing system, please consult the cluster specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
<br />
Except for very short interactive test jobs please never run Gaussian calculations in any globally mounted directory like your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is nearly as difficult as predicting the disk requirements. But the strategies can be very similar. So start with small test systems and small basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the ''Link 0'' section (before the route section) of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations exceed the Mem value by at most 2GB. Therefore it is usually sufficient to request Mem+2GB from the queueing system.<br />
<br />
But please monitor the output of Gaussian carefully when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when the specified memory value is too low. So if the output indicates that, e.g., the ''integrals'' could be kept in memory, the calculation might be much faster with more memory assigned.<br />
<br />
In case of shared-memory parallel jobs the number of workers has only a minor influence on the memory consumption (maybe up to 10%), because all workers work together on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks, SSDs are far more than 1000 times faster when serving random-I/O requests. Therefore some of the default strategies of Gaussian, e.g. recalculating some values instead of storing them on disk, might not be optimal in all cases. Of course this is only relevant when there is not enough RAM to store the intermediate values, e.g. two-centre integrals.<br />
<br />
So if you plan to run many huge calculations that do not fit into RAM, you may want to compare the execution time of a job that re-calculates the intermediate values whenever needed with that of a job that forces these values to be written to and read from the node-local SSDs. Depending on how much time it costs to re-calculate the intermediate values, using the SSDs can be much faster.<br />
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Moab example for Hexanitroethane (C2N6O12) that runs an 8-core parallel single-point energy calculation using the B3LYP method and the 6-31g(df,pd) basis set. To submit the example do the following steps:<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/{bwforcluster-gaussian-example.moab,test0553-*.com} ./<br />
$ msub bwforcluster-gaussian-example.moab<br />
</pre><br />
The last step submits the example job script ''bwforcluster-gaussian-example.moab'' to the queueing system. Once started on a compute node, all calculations will be done under a unique directory on the [[Batch_Jobs_-_bwForCluster_Chemistry_Features#Disk_Space_and_Resources|local file system ($TMPDIR)]] of that particular compute node. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the queueing system example script ''bwforcluster-gaussian-example.moab''.<br />
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts we have created a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com ./<br />
$ gauss_sub test0553-8core-parallel.com<br />
</pre><br />
<br />
== Caveat for Windows users ==<br />
<br />
If you have transferred a Gaussian input file from a Windows computer<br />
to Unix, make sure to convert the Windows line breaks (<CR>+<LF>)<br />
to Unix line breaks (only <LF>). Otherwise Gaussian may emit confusing error messages.<br />
The typical Unix commands for this conversion are 'dos2unix' and 'unix2dos'. Example:<br />
<br />
<pre><br />
$ dos2unix test0553-8core-parallel.com<br />
</pre><br />
<br />
= Version-specific information =<br />
<br />
For information specific to a particular version ''VERSION'', consult the module system with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as information about the support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Moschhttps://wiki.bwhpc.de/wiki/index.php?title=Gaussian&diff=8216Gaussian2021-01-26T09:42:01Z<p>C Mosch: </p>
<hr />
<div>{| width=600px class="wikitable"<br />
|-<br />
! Description !! Content<br />
|-<br />
| module load<br />
| chem/gaussian<br />
|-<br />
| License<br />
| Commercial. See: [http://gaussian.com/pricing/ Pricing for Gaussian Products]<br />
|-<br />
| Citing<br />
| See [http://gaussian.com/citation/ Gaussian Citation]<br />
|-<br />
| Links<br />
| [http://www.gaussian.com Homepage]; [http://gaussian.com/running/ Running Gaussian]; [http://gaussian.com/keywords/ Keywords], [http://gaussian.com/iops/ IOps Reference]<br />
|-<br />
| Graphical Interface<br />
| Yes. See [[Gaussview]].<br />
|}<br />
<br />
= Description =<br />
'''Gaussian''' is a general purpose ''quantum chemistry'' software package for ''ab initio'' electronic structure calculations. It provides:<br />
* ground state calculations for methods such as HF, many DFT functionals, MP2/3/4 or CCSD(T);<br />
* basic excited state calculations such as TDHF or TDDFT;<br />
* coupled multi-shell QM/MM calculations (ONIOM);<br />
* geometry optimizations, transition state searches, molecular dynamics calculations;<br />
* property and spectra calculations such as IR, UV/VIS, Raman or CD; as well as<br />
* shared-memory parallel versions for almost all kinds of jobs.<br />
For more information on features please visit Gaussian's [http://www.gaussian.com/g_prod/g09b.htm ''Overview of Capabilities and Features''] web page.<br />
<br><br />
<br><br />
<br />
= Versions and Availability =<br />
A list of versions currently available on the bwForCluster Chemistry can be obtained from the [https://cis-hpc.uni-konstanz.de/prod.cis/Justus/chem/gaussian Cluster Information System (CIS)]:<br />
{{#widget:Iframe<br />
|url=https://cis-hpc.uni-konstanz.de/prod.cis/Justus/chem/gaussian<br />
|width=99%<br />
|height=250<br />
|border=<br />
}}<br />
On the command line of a particular bwHPC cluster, a list of all available versions is displayed by the command<br />
<pre><br />
$ module avail chem/gaussian<br />
</pre><br />
<br />
= Parallel computing =<br />
The binaries of the Gaussian module can run in serial and shared-memory parallel mode. Switching between the serial and the parallel version is done via the statement<br />
<pre><br />
%NProcShared=PPN<br />
</pre><br />
in the ''Link 0 commands'' section before the ''route section'' at the beginning of the Gaussian input file. ''PPN'' should be replaced by the number of parallel cores. This value '''must''' be identical to the ''ppn'' value specified when requesting resources from the queueing system. The installed Gaussian binaries are shared-memory parallel only; therefore only single-node jobs make sense. Without the ''%NProcShared'' statement the serial version of Gaussian is selected.<br />
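To show where these Link 0 statements sit, here is a minimal sketch of a complete input file for a shared-memory parallel job (the water geometry and the B3LYP/6-31G(d) route line are illustrative placeholders, not part of the module's example files):<br />
<pre><br />
%NProcShared=8<br />
%Mem=4000MB<br />
# B3LYP/6-31G(d) SP<br />
<br />
water molecule, single-point energy (placeholder title)<br />
<br />
0 1<br />
O   0.000000   0.000000   0.117300<br />
H   0.000000   0.757200  -0.469200<br />
H   0.000000  -0.757200  -0.469200<br />
<br />
</pre><br />
Note the blank lines separating the route section, the title line and the molecule specification; Gaussian input files are sensitive to these separators.<br />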
<br><br />
<br />
= Usage =<br />
<br />
== Loading the module ==<br />
<br />
You can load the default version of ''Gaussian'' with command:<br />
<pre><br />
$ module load chem/gaussian<br />
</pre><br />
The Gaussian module does not depend on any other module (no dependencies).<br />
<br />
If you wish to load a specific version you may do so by specifying the version explicitly, e.g.<br />
<pre><br />
$ module load chem/gaussian/g09.D.01<br />
</pre><br />
to load version ''g09.D.01'' of Gaussian.<br />
<br><br />
<br />
== Running Gaussian interactively ==<br />
After loading the Gaussian module you can run a quick interactive example by executing<br />
<pre><br />
$ time g09 < $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com<br />
</pre><br />
In most cases running Gaussian requires setting up the command input file and piping that input into g09.<br />
<br />
== Creating Gaussian input files ==<br />
<br />
For documentation about how to construct input files see the [http://www.gaussian.com/g_tech/g_ur/g09help.htm Gaussian manual]. In addition, the program [[Gaussview]] is a very good graphical user interface for constructing molecules and for setting up calculations. These calculation setups can be saved as Gaussian command files and then submitted to the cluster with the help of the queueing system examples below.<br />
<br><br />
<br />
== Disk usage ==<br />
<br />
By default, scratch files of Gaussian are placed in GAUSS_SCRDIR, as displayed when loading the Gaussian module. In most cases the module load command should set GAUSS_SCRDIR to point to an optimal node-local file system. When running multiple Gaussian jobs on one node, you may want to add one more sub-directory level containing e.g. the job id and job name for clarity, if the queueing system has not already done so.<br />
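Such an extra sub-directory level could be set up in a job script as in the following sketch (the ''MOAB_JOBID'' variable and the ''my_first_job'' name are assumptions; adapt them to your queueing system and job):<br />

```shell
# Sketch: give each Gaussian job its own scratch sub-directory.
# MOAB_JOBID and "my_first_job" are placeholders/assumptions.
BASE="${GAUSS_SCRDIR:-${TMPDIR:-/tmp}}"            # keep the module's default as base
GAUSS_SCRDIR="${BASE}/${MOAB_JOBID:-manual}-my_first_job"
mkdir -p "$GAUSS_SCRDIR"
export GAUSS_SCRDIR                                 # Gaussian reads this variable
echo "Gaussian scratch directory: $GAUSS_SCRDIR"
```

The fallback to ''manual'' keeps the snippet usable in interactive tests where no job id is set.<br />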
<br />
Predicting how much disk space a specific Gaussian calculation requires is a very difficult task. It requires experience with the methods, the basis sets, the calculated properties and the system you are investigating. The best advice is probably to start with small basis sets and small example systems, run such example calculations and observe their (hopefully small) disk usage while the job is running. Then read the Gaussian documentation about scaling behaviour and basis set sizes (the basis set size of the current calculation is printed at the beginning of the output of the Gaussian job). Finally try to extrapolate to your desired final system and basis set.<br />
<br />
You can also try to specify a fixed amount of disk space for a calculation. This is done by adding a statement like<br />
<pre><br />
%MaxDisk=50000MB<br />
</pre><br />
to the route section of the Gaussian input file. But please be aware that (a) [http://www.gaussian.com/g_tech/g_ur/k_maxdisk.htm Gaussian does not necessarily obey the specified value] and (b) you might force Gaussian to select a slower algorithm when specifying an inappropriate value.<br />
<br />
In any case please make sure that you request sufficient but not far too much node-local disk space from the queueing system. For information on how much node-local disk space is available at the cluster and how to request a certain amount of node-local disk space for a calculation from the queueing system, please consult the cluster specific queueing system documentation as well as the queueing system examples of the Gaussian module as described below.<br />
<br />
Except for very short interactive test jobs, please never run Gaussian calculations in any globally mounted directory such as your $HOME or $WORK directory.<br />
<br><br />
<br />
== Memory usage ==<br />
<br />
Predicting the memory requirements of a job is nearly as difficult as predicting the disk requirements. But the strategies can be very similar. So start with small test systems and small basis sets and then extrapolate.<br />
<br />
You may specify the memory for a calculation explicitly in the route section of the Gaussian input file, for example<br />
<pre><br />
%Mem=10000MB<br />
</pre><br />
Gaussian usually obeys this value rather well. We have seen calculations that exceed the Mem value by at most 2 GB. Therefore it is usually sufficient to request Mem + 2 GB from the queueing system.<br />
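This rule of thumb translates into a quick calculation when preparing the resource request (a sketch; the 10000 MB value matches the %Mem example above, while the 8 cores are an assumed ppn value):<br />

```shell
# Derive the queueing-system memory request from %Mem (rule of thumb: %Mem + 2 GB).
MEM_MB=10000                                 # value of %Mem in the input file
PPN=8                                        # cores, must match the parallel setup
TOTAL_MB=$((MEM_MB + 2048))                  # add the ~2 GB safety margin
PMEM_MB=$(( (TOTAL_MB + PPN - 1) / PPN ))    # per-core memory, rounded up
echo "total: ${TOTAL_MB} MB, per core: ${PMEM_MB} MB"
```

With these numbers the job would request 12048 MB in total, i.e. 1506 MB per core.<br />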
<br />
But please carefully monitor the output of Gaussian when restricting the memory in the input file. Gaussian automatically switches between algorithms (e.g. recalculating values instead of storing them) when too little memory is specified. So if the output indicates that, with more memory, e.g. the ''integrals'' could be kept in memory, the calculation might run much faster with a larger memory assignment.<br />
<br />
In the case of shared-memory parallel jobs the number of workers has only a minor influence on the memory consumption (maybe up to 10%), because all workers operate on one common data set.<br />
<br />
== Using SSD systems efficiently ==<br />
<br />
Compared with conventional disks, SSDs are far more than 1000 times faster when serving random-I/O requests. Therefore some of Gaussian's default strategies, e.g. recalculating some values instead of storing them on disk, might not be optimal in all cases. Of course, this is only relevant when there is not enough RAM to store the intermediate values (e.g. two-centre integrals).<br />
<br />
So if you plan to run many huge calculations that do not fit into RAM, you may want to compare the execution time of a job that re-calculates the intermediate values whenever needed with that of a job that forces these values to be written to and read from the node-local SSDs. Depending on how much time it costs to re-calculate the intermediate values, using the SSDs can be much faster.<br />
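One way to set up such a comparison is via the SCF keyword in the route section: Gaussian's default is a direct SCF that recomputes integrals, while ''SCF=Conven'' requests a conventional SCF that stores the integrals in scratch files. A sketch of the two route-section variants to benchmark against each other (the method and basis set are placeholders):<br />
<pre><br />
# B3LYP/6-31G(d) SP<br />
# B3LYP/6-31G(d) SP SCF=Conven<br />
</pre><br />
The first line uses the default direct SCF (integrals recomputed); the second keeps the integrals on disk. Compare the wall times of both runs on a node with local SSDs before committing to long production jobs.<br />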
<br />
= Examples =<br />
<br />
== Queueing system template provided by Gaussian module ==<br />
<br />
The Gaussian module provides a simple Moab example for hexanitroethane (C2N6O12) that runs an 8-core parallel single-point energy calculation using the B3LYP method and the 6-31g(df,pd) basis set. To submit the example, do the following steps:<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp -v ${GAUSSIAN_EXA_DIR}/{bwforcluster-gaussian-example.moab,test0553-*.com} ./<br />
$ msub bwforcluster-gaussian-example.moab<br />
</pre><br />
The last step submits the example job script ''bwforcluster-gaussian-example.moab'' to the queueing system. Once started on a compute node, all calculations will be done under a unique directory on the [[Batch_Jobs_-_bwForCluster_Chemistry_Features#Disk_Space_and_Resources|local file system ($TMPDIR)]] of that particular compute node. Please '''carefully''' read this ''local file system'' documentation as well as the comments in the queueing system example script ''bwforcluster-gaussian-example.moab''.<br />
<br><br />
<br />
== Direct submission of Gaussian command files ==<br />
<br />
For users who do not want to deal with queueing system scripts we have created a submit command that automatically creates and submits queueing system scripts for Gaussian. For example:<br />
<br />
<pre><br />
$ ws_allocate calc_repo 30; cd $(ws_find calc_repo)<br />
$ mkdir my_first_job; cd my_first_job<br />
$ module load chem/gaussian<br />
$ cp $GAUSSIAN_EXA_DIR/test0553-8core-parallel.com ./<br />
$ gauss_sub test0553-8core-parallel.com<br />
</pre><br />
<br />
== Caveat for Windows users ==<br />
<br />
If you have transferred a Gaussian input file from a Windows computer<br />
to Unix, make sure to convert the Windows line breaks (<CR>+<LF>)<br />
to Unix line breaks (<LF> only). Otherwise Gaussian will produce strange<br />
error messages. The typical Unix commands for this are 'dos2unix' and 'unix2dos'. Example:<br />
<br />
<pre><br />
$ dos2unix test0553-8core-parallel.com<br />
</pre><br />
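You can also check whether a file still carries Windows line breaks before running Gaussian; the 'file' utility reports this, and 'tr' offers a portable fallback when dos2unix is not installed (the file names below are placeholders):<br />

```shell
# Create a sample file with Windows (CRLF) line endings to illustrate detection.
printf 'header\r\nline2\r\n' > windows_style.com
file windows_style.com        # typically reports "... with CRLF line terminators"
# Portable conversion when dos2unix is unavailable: strip the CR characters.
tr -d '\r' < windows_style.com > unix_style.com
file unix_style.com           # now reported as plain text, no CRLF
```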
<br />
= Version-specific information =<br />
<br />
For information specific to a particular version ''VERSION'', consult the module system with the command<br />
<pre><br />
$ module help chem/gaussian/VERSION<br />
</pre><br />
'''Please read the local version-specific module help documentation''' before using the software. The module help contains links to additional documentation and resources as well as information about the support contact.<br />
<br><br />
<br />
----<br />
[[Category:Chemistry software]][[Category:bwForCluster_Chemistry]][[Category:BwForCluster_BinAC]]</div>C Mosch