JUSTUS2/Software/SIESTA
The main documentation is available via module help chem/siesta on the compute cluster.
Description | Content
---|---
module load | chem/siesta
Availability | bwForCluster JUSTUS 2
License | Free, open-source software, distributed under the GNU General Public License (GPL)
Citing | Publications
Links | Homepage, Documentation
Graphical Interface | No
Description
SIESTA (Spanish Initiative for Electronic Simulations with Thousands of Atoms) is both a method and its computer program implementation for performing electronic structure calculations and ab initio molecular dynamics simulations of molecules and solids.
Availability
SIESTA is available on selected bwHPC-Clusters. A complete list of versions currently installed on the bwHPC-Clusters can be obtained from the Cluster Information System (CIS).
In order to check which versions of SIESTA are installed on the compute cluster, run the following command:
$ module avail chem/siesta
License
SIESTA is free, open-source software released under the GNU General Public License (GPL). The SIESTA package in its entirety may be copied, modified or distributed according to the conditions described in its documentation.
Usage
Loading the module
You can load the default version of SIESTA with the following command:
$ module load chem/siesta
The module will try to load all modules it needs to function (e.g., compiler, MPI). If loading the module fails, check whether one of these modules is already loaded in a version other than the one required by SIESTA.
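If such a version conflict occurs, it is usually sufficient to list and purge the loaded modules and then load SIESTA again, for example:
$ module list                  # show currently loaded modules
$ module purge                 # unload all loaded modules
$ module load chem/siesta      # load the default SIESTA module again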
If you wish to load another (older) version of SIESTA, you can do so using
$ module load chem/siesta/<version>
with <version> specifying the desired version.
Please cite SIESTA in your publications according to the references.
Program Binaries
The binary siesta is the main program of the SIESTA package.
Usage: siesta [options] [< infile] [> outfile]
Example: siesta RUN.fdf > RUN.out
For information about how to construct SIESTA input files, please see the documentation.
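As a rough first orientation only (the geometry, labels, and file names below are illustrative placeholders, not taken from the official SIESTA examples), a minimal input for a single water molecule could be created and run as follows; note that matching pseudopotential files (e.g. O.psf and H.psf) must be present in the working directory:
$ cat > h2o.fdf << 'EOF'
SystemName          Water molecule    # descriptive name of the run
SystemLabel         h2o               # prefix for all output files
NumberOfAtoms       3
NumberOfSpecies     2

%block ChemicalSpeciesLabel
 1  8  O                              # species 1: atomic number 8, label O
 2  1  H                              # species 2: atomic number 1, label H
%endblock ChemicalSpeciesLabel

AtomicCoordinatesFormat  Ang
%block AtomicCoordinatesAndAtomicSpecies
 0.000   0.000   0.000   1
 0.757   0.586   0.000   2
-0.757   0.586   0.000   2
%endblock AtomicCoordinatesAndAtomicSpecies
EOF
$ siesta < h2o.fdf > h2o.out          # serial test run; main output goes to h2o.out
Further output files are written with the SystemLabel (here h2o) as prefix.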
Hints for using SIESTA
Parallel Execution
It is usually more efficient to run several smaller jobs with a reasonable speedup side by side than a single large job that scales poorly. Check for a reasonable speedup by increasing the number of workers step by step (1, 2, 4, ..., n cores). Do not forget to adjust the memory specification when changing the number of workers.
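One way to probe the scaling is to submit the same job several times with an increasing number of workers and compare the elapsed wall times, e.g. with the example script used below (assuming the script takes its core count from the Slurm allocation; the memory value is a placeholder):
$ for n in 1 2 4 8; do
      sbatch --ntasks=$n --mem-per-cpu=2G --job-name=scaling_$n bwhpc_siesta_h2o.sh   # command-line options override the #SBATCH values in the script
  done
$ sacct --name=scaling_1,scaling_2,scaling_4,scaling_8 --format=JobName,AllocCPUS,Elapsed   # compare the elapsed wall times of the runs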
Memory Management
It requires some experience to choose a good memory value. Monitoring a job interactively may help to estimate its memory consumption. Requesting 1 GB more than the observed requirement is typically a sufficient safety margin.
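Slurm's accounting can be used to check how much memory a running or finished job actually used, for example:
$ sacct -j <JOBID> --format=JobID,Elapsed,ReqMem,MaxRSS   # MaxRSS shows the peak memory actually used
$ seff <JOBID>                                            # compact CPU and memory efficiency summary (if installed)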
Runtime Specification
The wall-time limit is independent of the number of parallel workers. Selecting a good value requires some experience. In general, jobs with shorter wall times get higher priorities. It can help to run the job interactively for a short period of time to monitor its convergence. Furthermore, it may be advantageous to extrapolate the run time from smaller test jobs.
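The wall time is requested from the batch system at submission time and can be overridden on the command line; a short interactive session can be used to watch the convergence of a test run before committing to a long limit:
$ sbatch --time=24:00:00 bwhpc_siesta_h2o.sh     # request 24 hours of wall time (placeholder value, overrides the script)
$ srun --ntasks=4 --time=00:30:00 --pty bash     # short interactive test session (further options, e.g. a partition, may be required)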
Examples
As with all processes that require more than a few minutes to run, non-trivial compute jobs must be submitted to the cluster queuing system.
Example scripts are available in the directory $SIESTA_EXA_DIR:
$ module show chem/siesta        # show environment variables, which will be available after 'module load'
$ module load chem/siesta        # load module
$ ls $SIESTA_EXA_DIR             # show content of directory $SIESTA_EXA_DIR
$ cat $SIESTA_EXA_DIR/README     # show examples README
Run a first simple example job on JUSTUS 2:
$ module load chem/siesta                 # load module
$ WORKSPACE=`ws_allocate siesta 3`        # allocate workspace
$ cd $WORKSPACE                           # change to workspace
$ cp -a $SIESTA_HOME/bwhpc-examples .     # copy example files to workspace
$ cd bwhpc-examples/h2o                   # change to test directory
$ sbatch bwhpc_siesta_h2o.sh              # submit job
$ squeue                                  # obtain JOBID
$ scontrol show job <JOBID>               # check state of job
$ ls                                      # when the job has finished, the results will be visible in this directory
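For orientation, a minimal sketch of what such a batch script might contain is shown below; the resource values are placeholders and the actual bwhpc_siesta_h2o.sh in $SIESTA_EXA_DIR may differ in detail:
#!/bin/bash
#SBATCH --ntasks=4                   # number of MPI workers (placeholder)
#SBATCH --time=02:00:00              # wall-time limit (placeholder)
#SBATCH --mem-per-cpu=2G             # memory per worker (placeholder)
#SBATCH --job-name=siesta_h2o

module load chem/siesta              # load SIESTA together with its compiler and MPI dependencies

mpirun siesta < h2o.fdf > h2o.out    # run SIESTA in parallel on the allocated cores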