JUSTUS2/Software/Quantum ESPRESSO
{| class="wikitable"
! Description !! Content
|-
| module load || chem/quantum_espresso/7.1
|-
| Availability || [https://wiki.bwhpc.de/e/JUSTUS2 BwForCluster JUSTUS2]
|-
| License || Open-source software, distributed under the GNU General Public License (GPL)
|-
| Citing || As described here
|-
| Links || [https://www.quantum-espresso.org/ Homepage], [https://www.quantum-espresso.org/Doc/user_guide/ Documentation], [https://www.quantum-espresso.org/forum Forum]
|-
| Graphical Interface || No
|}
= Description =

'''Quantum ESPRESSO''' is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials.
= Availability =

Quantum ESPRESSO is available on selected bwHPC clusters. A complete list of the versions currently installed on the bwHPC clusters can be obtained from the [https://www.bwhpc.de/software.html Cluster Information System (CIS)].

To check which versions of Quantum ESPRESSO are installed on the compute cluster, run the following command:

<pre>
$ module avail chem/quantum_espresso
</pre>
= License =
Quantum ESPRESSO is free, open-source software released under the GNU General Public License (GPL). Anyone interested in using the Quantum ESPRESSO program suite must read the conditions described in its [https://www.quantum-espresso.org/project/manifesto manifesto].
= Parallelization =

Quantum ESPRESSO uses hybrid MPI/OpenMP parallelization. When configuring parallel jobs in the Slurm sbatch script, use appropriate directives such as --nodes, --ntasks-per-node, and --cpus-per-task. It is recommended to run benchmarks before submitting a large number of production jobs.

= Usage =

== Loading the module ==

The preferred way to load Quantum ESPRESSO (QE) is with the specific version included, i.e. 'module load chem/quantum_espresso/&lt;version&gt;':

<pre>
$ module load chem/quantum_espresso/7.1
</pre>

The default version, which may change over time, can be loaded simply with the command 'module load chem/quantum_espresso'.

The module also loads all additional modules necessary for the proper functioning of the program, so there is no need to load further modules separately; loading extra modules yourself may even be counterproductive.
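As a sketch of the Slurm settings mentioned under Parallelization, a hybrid MPI/OpenMP job script might be structured as follows. The node and task counts, the runtime, and the input/output file names are illustrative assumptions only, not recommended values; suitable settings depend on the system under study and should be determined by benchmarks:

<pre>
#!/bin/bash
#SBATCH --nodes=2                 # number of compute nodes
#SBATCH --ntasks-per-node=12      # MPI ranks per node
#SBATCH --cpus-per-task=4         # OpenMP threads per MPI rank
#SBATCH --time=02:00:00           # wall-time limit

module load chem/quantum_espresso/7.1

# Pass the Slurm CPU allocation on to OpenMP
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# -npool distributes k-points over groups of MPI ranks
mpirun pw.x -npool 2 -input pw.scf.in > pw.scf.out
</pre>

The -npool option (and the related -nimage, -nband, -ntg, and -ndiag options) of pw.x selects how the MPI ranks are split across Quantum ESPRESSO's parallelization levels (images, pools, band groups, task groups, linear-algebra group).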
== Example Script ==

A working example of a submission script for a parallel QE job is available, after loading the module, at $ESPRESSO_EXA_DIR/slurm_quantum_espresso-example.sbatch.
= Examples =

As with all processes that require more than a few minutes to run, non-trivial compute jobs must be submitted to the cluster queuing system.

Example scripts are available in the directory <span style="background:#edeae2;margin:2px;padding:1px;border:1px dotted #808080">$ESPRESSO_EXA_DIR</span>:

<pre>
$ module show chem/quantum_espresso   # show environment variables that will be available after 'module load'
$ module load chem/quantum_espresso   # load the module
$ ls $ESPRESSO_EXA_DIR                # show the contents of $ESPRESSO_EXA_DIR
</pre>
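The scripts in $ESPRESSO_EXA_DIR bring their own input files; for orientation, a minimal self-consistent-field (SCF) input for pw.x has roughly the following shape. The silicon structure, cutoff, and pseudopotential file name below are generic placeholders, not the values used by the bwHPC examples:

<pre>
&CONTROL
  calculation = 'scf'       ! self-consistent field run
  prefix      = 'si'
  outdir      = './tmp'     ! scratch directory for wave functions
  pseudo_dir  = './'        ! directory containing the pseudopotential file
/
&SYSTEM
  ibrav     = 2             ! fcc lattice
  celldm(1) = 10.2          ! lattice constant in Bohr
  nat       = 2
  ntyp      = 1
  ecutwfc   = 30.0          ! plane-wave cutoff in Ry
/
&ELECTRONS
  conv_thr = 1.0d-8
/
ATOMIC_SPECIES
  Si  28.086  Si.pbe-rrkjus.UPF
ATOMIC_POSITIONS alat
  Si 0.00 0.00 0.00
  Si 0.25 0.25 0.25
K_POINTS automatic
  4 4 4 0 0 0
</pre>

Such a file is passed to pw.x inside the submission script, e.g. via 'pw.x -input &lt;file&gt;'.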
Run a first simple example job on <font color=red>JUSTUS 2</font>:

<pre>
$ module load chem/quantum_espresso                     # load the module
$ WORKSPACE=`ws_allocate quantum_espresso 3`            # allocate a workspace
$ cd $WORKSPACE                                         # change to the workspace
$ cp -a $ESPRESSO_HOME/bwhpc-examples .                 # copy the example files to the workspace
$ cd bwhpc-examples                                     # change to the test directory
$ sbatch bwforcluster-quantum_espresso-example.sbatch   # submit the job
$ squeue                                                # obtain the JOBID
$ scontrol show job <JOBID>                             # check the state of the job
$ ls                                                    # when the job finishes, the results will be visible in this directory
</pre>

Quantum ESPRESSO implements several levels of MPI parallelization, distributing both calculations and data structures across processors. The processors are organized hierarchically into groups identified by different MPI communicators (see also the [https://www.quantum-espresso.org/Doc/user_guide/node20.html Quantum ESPRESSO User Guide]):

* world: the group of all processors (MPI_COMM_WORLD).
* images: processors divided into different "images", each corresponding to a distinct self-consistent or linear-response calculation, loosely coupled to the others.
* pools: each image can be subdivided into "pools", each responsible for a group of k-points.
* bands: each pool is further subdivided into "band groups", each handling a group of Kohn-Sham orbitals; this is especially useful for calculations with hybrid functionals.
* PW: orbitals in the plane-wave basis set, charges, and density are distributed across processors. Linear-algebra operations on PW grids are automatically parallelized. 3D FFTs transform electronic wave functions between reciprocal and real space, with the planes of the 3D grid in real space distributed across processors.
* tasks: FFTs on Kohn-Sham states are redistributed to "task" groups so that the 3D FFT can be parallelized efficiently when the number of processors exceeds the number of FFT planes.
* linear-algebra group: a further level of parallelization covers subspace diagonalization/iterative orthonormalization, organized on a square 2D grid of processors. This group performs parallel diagonalization using standard linear-algebra operations; the preferred options are ELPA and ScaLAPACK, with built-in algorithms available as alternatives.

Note that not all parallelization levels are implemented in all parts of the code.

= Useful links =

* [https://www.quantum-espresso.org/Doc/user_guide/ Documentation (English)]
* [https://www.quantum-espresso.org/forum Forum (English)]
* [https://www.quantum-espresso.org/resources/tutorials Tutorials (English)]
* [https://en.wikipedia.org/wiki/Quantum_ESPRESSO Wikipedia article (English)]
* [https://www.quantum-espresso.org/resources/tools Plugins for Quantum ESPRESSO (English)]
----
[[Category:Chemistry software]][[Category:BwForCluster_Chemistry]][[Category:BwForCluster_JUSTUS_2]] |