JUSTUS2/Software/Quantum ESPRESSO

The main documentation is available via module help chem/quantum_espresso on the cluster. Most application software modules provide working example batch scripts.
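
For example, assuming the standard environment-modules commands are available on the login nodes, the installed versions and the module help text can be listed as follows:

  module avail chem/quantum_espresso   # list installed versions
  module help chem/quantum_espresso    # print the main documentation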


  Description           Content
  Module load           module load chem/quantum_espresso/7.1
  Availability          BwForCluster JUSTUS2
  License               Open-source software, distributed under the GNU General Public License (GPL). More...
  Citing                As described here
  Links                 Homepage | Documentation | Main Portal to Forum
  Graphical Interface   No

1 Description

Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials.

2 License

Quantum ESPRESSO is free, open-source software released under the GNU General Public License (GPL). Anyone interested in using the Quantum ESPRESSO program suite must read the conditions described in its manifesto.

3 Parallelization

Quantum ESPRESSO uses hybrid MPI/OpenMP parallelization. When configuring parallel jobs in the Slurm sbatch script, use the appropriate directives such as --nodes, --ntasks-per-node, and --cpus-per-task. Run benchmarks before submitting a large number of production jobs.
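
The following is a minimal sketch of such a batch script. The node and task counts, the wall time, and the input file name (pw.in) are illustrative placeholders and should be adapted to your calculation:

  #!/bin/bash
  #SBATCH --nodes=2              # number of compute nodes
  #SBATCH --ntasks-per-node=24   # MPI tasks per node
  #SBATCH --cpus-per-task=2      # OpenMP threads per MPI task
  #SBATCH --time=02:00:00        # wall-time limit

  module load chem/quantum_espresso/7.1

  # Match the OpenMP thread count to the cores reserved per MPI task
  export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

  srun pw.x -in pw.in > pw.out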

Quantum ESPRESSO implements several levels of MPI parallelization, distributing both calculations and data structures across processors. These processors are organized hierarchically into groups identified by different MPI communicators (see the documentation linked above for details); an example of selecting these levels on the command line is sketched at the end of this section:

  • world: The group of all processors (MPI_COMM_WORLD).
  • images: Processors divided into different "images," each corresponding to a distinct self-consistent or linear-response calculation, loosely coupled to others.
  • pools: Each image can be subdivided into "pools," each responsible for a group of k-points.
  • bands: Each pool is further subdivided into "band groups," each handling a group of Kohn-Sham orbitals, especially useful for calculations with hybrid functionals.
  • PW: Orbitals in the plane-wave (PW) basis set and the charge density are distributed across processors. Linear-algebra operations on PW grids are automatically parallelized. 3D FFTs transform electronic wave functions between reciprocal and real space, with the real-space planes of the 3D grid distributed across processors.
  • tasks: FFTs on Kohn-Sham states are redistributed to "task" groups to allow efficient parallelization of the 3D FFT when the number of processors exceeds the number of FFT planes.
  • linear-algebra group: A further level of parallelization involves subspace diagonalization/iterative orthonormalization, organized into a square 2D grid of processors. This group performs parallel diagonalization using standard linear algebra operations. Preferred options include ELPA and ScaLAPACK, with alternative built-in algorithms available.

Note that not all parallelization levels are implemented in all parts of the code.
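
As a sketch, these parallelization levels are selected at run time through command-line options of the executables. The layout below (for a hypothetical run with 64 MPI tasks) is purely illustrative; the counts must divide the total number of tasks consistently:

  # 2 k-point pools, 2 band groups per pool, 2 FFT task groups,
  # and a 2x2 processor grid for parallel diagonalization (-ndiag 4)
  srun pw.x -npools 2 -nband 2 -ntg 2 -ndiag 4 -in pw.in > pw.out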