JUSTUS2/Software/Quantum ESPRESSO
The main documentation is available via the links listed in the table below.
| Description | Content |
|---|---|
| module load | chem/quantum_espresso/7.1 |
| License | Open-source software, distributed under the GNU General Public License (GPL). More... |
| Citing | As described here |
| Links | Homepage, Documentation, Main Portal to Forum |
| Graphical Interface | No |
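To make the software available in a job or interactive session, it can be loaded through the module system. A minimal sketch is shown below; the version string is taken from the table above, and other installed versions can be discovered with `module avail`:

```
# List the Quantum ESPRESSO modules installed on the cluster
$ module avail chem/quantum_espresso

# Load the version referenced in the table above
$ module load chem/quantum_espresso/7.1
```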
Description
Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials.
License
Quantum ESPRESSO is free, open-source software released under the GNU General Public License (GPL). Anyone interested in using the Quantum ESPRESSO program suite must read the conditions described in its manifesto.
Parallelization
Quantum ESPRESSO uses hybrid MPI/OpenMP parallelization. When configuring parallel jobs in the Slurm sbatch script, use appropriate directives such as --nodes, --ntasks-per-node, and --cpus-per-task. It is recommended to run benchmarks before submitting a large number of production jobs.
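A minimal sketch of a hybrid MPI/OpenMP sbatch script for pw.x is shown below. The node and task counts, walltime, and input/output file names are placeholders and should be adapted to (and benchmarked for) the actual calculation; whether srun or mpirun is the preferred launcher depends on the cluster configuration:

```
#!/bin/bash
#SBATCH --nodes=2                 # number of nodes (placeholder)
#SBATCH --ntasks-per-node=24      # MPI tasks per node (placeholder)
#SBATCH --cpus-per-task=2         # OpenMP threads per MPI task (placeholder)
#SBATCH --time=24:00:00           # walltime (placeholder)

# Load the Quantum ESPRESSO module (version from the table above)
module load chem/quantum_espresso/7.1

# Match the number of OpenMP threads to --cpus-per-task
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# Run the plane-wave code pw.x; input/output file names are placeholders
srun pw.x -input pw.scf.in > pw.scf.out
```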
Quantum ESPRESSO implements several levels of MPI parallelization, distributing both calculations and data structures across processors. These processors are organized hierarchically into different groups identified by various MPI communicators (see also the Quantum ESPRESSO User Guide):
- world: The group of all processors (MPI_COMM_WORLD).
- images: Processors divided into different "images," each corresponding to a distinct self-consistent or linear-response calculation, loosely coupled to others.
- pools: Each image can be subdivided into "pools," each responsible for a group of k-points.
- bands: Each pool is further subdivided into "band groups," each handling a group of Kohn-Sham orbitals, especially useful for calculations with hybrid functionals.
- PW: Orbitals in the PW basis set, charges, and density are distributed across processors. Linear-algebra operations on PW grids are automatically parallelized. 3D FFTs transform electronic wave functions between reciprocal and real space, with planes of the 3D grid distributed in real space to processors.
- tasks: FFTs on Kohn-Sham states are redistributed to "task" groups to allow efficient parallelization of the 3D FFT when the number of processors exceeds the number of FFT planes.
- linear-algebra group: A further level of parallelization involves subspace diagonalization/iterative orthonormalization, organized into a square 2D grid of processors. This group performs parallel diagonalization using standard linear algebra operations. Preferred options include ELPA and ScaLAPACK, with alternative built-in algorithms available.
Note that not all parallelization levels are implemented in all parts of the code; the sketch below shows how the main levels can be selected on the command line.
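As a hedged illustration (not a recommendation of specific values), the sizes of the parallelization groups can be chosen at run time via pw.x command-line options; the launcher, MPI task count, and group sizes below are placeholders and should be benchmarked for the system at hand:

```
# Hypothetical example: 64 MPI tasks for a pw.x run, split into
#   4 k-point pools (-nk), 2 band groups per pool (-nb),
#   2 FFT task groups (-nt), and a 2x2 linear-algebra group (-nd 4).
# Image parallelism (-ni) applies mainly to loosely coupled
# multi-calculation runs such as NEB (neb.x).
srun -n 64 pw.x -nk 4 -nb 2 -nt 2 -nd 4 -input pw.scf.in > pw.scf.out
```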