3.3 Parallelization levels

In QUANTUM ESPRESSO several MPI parallelization levels are implemented, in which both calculations and data structures are distributed across processors. Processors are organized in a hierarchy of groups, identified by different MPI communicators. The hierarchy of groups is as follows:

- world: the group of all processors in the run;
- images: processors can be divided into different "images", each corresponding to a different point in configuration space (e.g. a different image in a NEB calculation);
- pools: each image can be subdivided into "pools", each taking care of a group of k-points;
- band groups: each pool can be subdivided into "band groups", each taking care of a group of Kohn-Sham orbitals;
- task groups: processors can be divided into "task groups", each performing the 3D FFT on a subset of states;
- linear-algebra group: a subset of processors across which the diagonalization of the subspace Hamiltonian is distributed.

Note however that not all parallelization levels are implemented in all codes.

About communications

Images and pools are loosely coupled: inter-processor communication between different images and between different pools is modest. Processors within each pool are instead tightly coupled and communications are significant. This means that fast communication hardware is needed if your pool extends over more than a few processors on different nodes.

Choosing parameters

To control the number of processors in each group, use the command-line switches -nimage, -npools, -nband, -ntg, and -ndiag or -northo (shorthands, respectively: -ni, -nk, -nb, -nt, -nd). As an example, consider the following command line:
mpirun -np 4096 ./neb.x -ni 8 -nk 2 -nt 4 -nd 144 -i my.input
This executes a NEB calculation on 4096 processors, with 8 images (points in configuration space, in this case) running at the same time, each distributed across 512 processors. k-points are distributed across 2 pools of 256 processors each; the 3D FFT is performed using 4 task groups (64 processors each, so the 3D real-space grid is cut into 64 slices); and the diagonalization of the subspace Hamiltonian is distributed to a square grid of 144 processors (12x12).
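As a sanity check, the processor breakdown of the example above can be verified with a little shell arithmetic (the numbers are taken from the example command line; adjust them for your own job):

```shell
# Breakdown of the example: 4096 processors, 8 images, 2 pools, 4 task groups
NP=4096; NI=8; NK=2; NT=4

PER_IMAGE=$(( NP / NI ))        # processors per image
PER_POOL=$(( PER_IMAGE / NK )) # processors per k-point pool
PER_TG=$(( PER_POOL / NT ))    # processors per FFT task group

echo "per image: $PER_IMAGE"       # 512
echo "per pool: $PER_POOL"         # 256
echo "per task group: $PER_TG"     # 64
```

Note that each level divides the processors of the level above it, so the group sizes must divide evenly.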

Default values are -ni 1 -nk 1 -nt 1; -nd is set to 1 if ScaLAPACK is not compiled, otherwise it is set to the largest square integer smaller than or equal to half the number of processors of each pool.

Massively parallel calculations
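The default choice of -nd can be sketched as follows (the pool size of 256 is taken from the example above; note that the example overrode the default by passing -nd 144 explicitly):

```shell
# Sketch: default -nd = largest square integer not exceeding half the pool size
POOL_SIZE=256
HALF=$(( POOL_SIZE / 2 ))   # 128

# Find the largest N such that N*N <= HALF
N=1
while [ $(( (N + 1) * (N + 1) )) -le "$HALF" ]; do
  N=$(( N + 1 ))
done

echo "default -nd: $(( N * N )) (${N}x${N})"   # 121 (11x11) for a 256-processor pool
```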

For very large jobs (i.e. O(1000) atoms or more) or for very long jobs, to be run on massively parallel machines (e.g. IBM BlueGene), it is crucial to use all available parallelization levels effectively: linear algebra (requires compilation with ELPA and/or ScaLAPACK), "task groups" (requires the run-time option -nt N), and mixed MPI-OpenMP (requires OpenMP compilation: configure --enable-openmp). Without a judicious choice of parameters, large jobs will find a stumbling block in either memory or CPU requirements. Note that I/O may also become a limiting factor.
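A mixed MPI-OpenMP launch line might look like the following sketch (the binary name, rank count, and thread count are illustrative, not prescribed by the text; the command is only printed here, not executed):

```shell
# Hypothetical mixed MPI-OpenMP run: fewer MPI ranks, each spawning
# OpenMP threads. Requires a code built with configure --enable-openmp.
export OMP_NUM_THREADS=4   # OpenMP threads per MPI rank
NP=1024                    # MPI ranks: 1024 ranks x 4 threads = 4096 cores

CMD="mpirun -np $NP ./pw.x -nk 2 -nt 4 -nd 144 -i my.input"
echo "$CMD"
```

Reducing the number of MPI ranks in favor of threads also reduces the per-node memory footprint of replicated data, which is often what makes very large jobs fit at all.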

3.3.1 Understanding parallel I/O

In parallel execution, each processor has its own slice of data (Kohn-Sham orbitals, charge density, etc.) that has to be written to temporary files during the calculation, or to data files at the end of the calculation. This can be done in two different ways:

- distributed: each processor writes its own slice of data to its own file;
- collected: data are gathered and written to a single location in a portable format.

There is a third format, no longer used for final data but still used for scratch and restart files.

The directory for data is specified by the input variables outdir and prefix (the former can also be specified via the environment variable ESPRESSO_TMPDIR): outdir/. A copy of the pseudopotential files is also written there. If some processor cannot access the data directory, the pseudopotential files are read instead from the pseudopotential directory specified in the input data. Unpredictable results may follow if those files are not the same as those in the data directory!
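Setting the environment variable mentioned above from the shell might look like this (the path is purely illustrative; in practice it should point at a fast scratch file system):

```shell
# Point outdir at a scratch area via ESPRESSO_TMPDIR (illustrative path)
export ESPRESSO_TMPDIR=/tmp/qe_scratch
mkdir -p "$ESPRESSO_TMPDIR"
echo "$ESPRESSO_TMPDIR"
```

An outdir given explicitly in the input file takes precedence over the environment variable.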

IMPORTANT: Avoid I/O to network-mounted disks (via NFS) as much as you can! Ideally the scratch directory outdir should be a modern Parallel File System. If you do not have any, you can use local scratch disks (i.e. each node is physically connected to a disk and writes to it) but you may run into trouble anyway if you need to access your files that are scattered in an unpredictable way across disks residing on different nodes.

You can use the input variable disk_io to vary the amount of I/O performed by pw.x. Since v.5.1, the default value is disk_io='low': the code keeps wavefunctions in RAM rather than on disk during the calculation. Specify disk_io='medium' only if you have many k-points and run into trouble with memory; choose disk_io='none' if you do not need to keep the final data files.
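In the pw.x input file, disk_io is set in the &CONTROL namelist; a minimal fragment (other variables shown are illustrative placeholders) would look like:

```
&CONTROL
  prefix  = 'myjob'
  outdir  = '/tmp/qe_scratch'
  disk_io = 'low'
/
```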

Paolo Giannozzi 2019-04-18