Both AMD and Intel CPUs, 32-bit and 64-bit, are supported and work, both in 32-bit emulation and in 64-bit mode. 64-bit executables can address a much larger memory space than 32-bit executables, but there is no gain in speed. Beware: the default integer type on 64-bit machines is typically 32 bits long. You should be able to use 64-bit integers as well, but this is not guaranteed to work and will not give any advantage anyway.
It is usually convenient to create semi-statically linked executables (with only libc, libm, libpthread dynamically linked). If you want to produce a binary that runs on different machines, compile it on the oldest machine you have (i.e. the one with the oldest version of the operating system).
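A minimal sketch of how such a semi-static build might be set up with gfortran (the flags are standard gcc/gfortran options; whether they suffice depends on your toolchain, so treat this as an assumption to verify with ldd):

```shell
# Sketch: semi-statically link the Fortran runtime so that only libc,
# libm, libpthread remain dynamically linked (gfortran assumed).
./configure LDFLAGS="-static-libgfortran -static-libgcc"
make pw
# Verify which libraries are still dynamically linked:
ldd bin/pw.x
```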
Currently, configure supports, and QUANTUM ESPRESSO works with, not-too-old and not-too-buggy versions of gfortran, Intel (ifx, ifort), NVidia (nvfortran), AMD (AOCC v.5), ARM (armflang), Cray (ftn) compilers.
configure properly detects MKL libraries, as long as the $MKLROOT environment variable is set in the current shell. Normally this environment variable is set by sourcing the environment script provided by Intel.
By default the non-threaded version of MKL is linked, unless option configure --with-openmp is specified. In case of trouble, refer to the following web page to find the correct way to link MKL: http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor/.
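In practice the steps look something like the following sketch (the /opt/intel/oneapi prefix is the usual default install location, but yours may differ):

```shell
# Sketch: make MKL visible to configure by sourcing Intel's environment
# script, which sets MKLROOT (install prefix is an assumption).
source /opt/intel/oneapi/setvars.sh
echo $MKLROOT          # should print the MKL installation path
./configure            # links the non-threaded MKL
# or, for the threaded MKL:
./configure --with-openmp
```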
For parallel (MPI) execution on multiprocessor (SMP) machines, set the environment variable OMP_NUM_THREADS to 1 unless you know how to run MPI+OpenMP. See Sec.3 for more info on this and on the difference between MPI and OpenMP parallelization.
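A sketch of the two modes described above (process counts and the input file name are examples; pw.x accepts -inp to name the input file):

```shell
# Plain MPI run, OpenMP threading disabled:
export OMP_NUM_THREADS=1
mpirun -np 4 ./bin/pw.x -inp scf.in > scf.out

# Hybrid MPI+OpenMP - only if you know what you are doing:
# export OMP_NUM_THREADS=4
# mpirun -np 2 ./bin/pw.x -inp scf.in > scf.out
```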
``Recently I played around with some AMD EPYC cpus and the bad thing is that I also saw some strange numbers when using libflame/aocl 2.1. (...) Since version 2020 the MKL performs rather well when using AMD cpus, however, if you want to get the best performance you have to additionally set:
export MKL_DEBUG_CPU_TYPE=5
which gives an additional 10-20% speedup with MKL 2020, while for earlier versions the speedup is greater than 200%. [...] Another note: there seem to be problems using the FFTW interface of MKL with AMD cpus. To get around this problem, one has to additionally set
export MKL_CBWR=AUTO`` (Info by Tobias Klöffel, Feb. 2020)
Apart from such problems, QUANTUM ESPRESSO compiles and works on all non-buggy, properly configured hardware and software combinations. In some cases you may have to recompile MPI libraries: not all MPI installations contain support for the Fortran compiler of your choice (or for any Fortran compiler at all).
If QUANTUM ESPRESSO does not work for some reason on a PC cluster, first check whether it works in serial execution. A frequent problem with parallel execution is that QUANTUM ESPRESSO does not read from standard input, due to the configuration of the MPI libraries: see Sec.3.5. If you are dissatisfied with the performance in parallel execution, see Sec.3 and in particular Sec.3.5.
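The stdin problem can usually be sidestepped by passing the input file on the command line instead of redirecting it; a sketch (file names are examples):

```shell
# Serial run: check this works first.
pw.x < scf.in > scf.out

# Parallel run: -inp names the input file explicitly, avoiding the
# "does not read from standard input" problem under some MPI setups.
mpirun -np 4 pw.x -inp scf.in > scf.out
```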
Another option is Quantum Mobile: https://www.materialscloud.org/work/quantum-mobile.
If you prefer a native Windows build, you are welcome to try the various possibilities listed below and to report details in case of success.
Since February 2020 QUANTUM ESPRESSO can be compiled on MS-Windows 10 using PGI 19.10 Community Edition (freely downloadable). configure works with the bash script provided by PGI (the configure of FoX fails: use script install/build_fox_with_pgi.sh to manually compile FoX).
Another option: use MinGW/MSYS. Download the installer from https://osdn.net/projects/mingw/, then install MinGW, MSYS, gcc and gfortran. Start a shell window; run "./configure"; edit make.inc, uncommenting the second definition of TOPDIR (the first one introduces a final "/" that Windows doesn't like); run "make". Note that on some Windows versions the code fails when checking that tmp_dir is writable, for unclear reasons.
Another option is Cygwin, a UNIX environment which runs under Windows: see
http://www.cygwin.com/.
Mac OS-X machines with gfortran, and possibly other compilers as well, should in principle work, but "your mileage may vary", depending upon the specific software stack you are using. Parallel compilation with OpenMPI should also work.
Gfortran information and binaries for Mac OS-X here: http://hpc.sourceforge.net/.
If you get an error like
clang: error: no input files
make[1]: *** [laxlib.fh] Error 1
make: *** [libla] Error 1
redefine CPP as CPP=gcc -E in make.inc.
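The fix amounts to editing one line of the generated make.inc; a sketch of what the edit typically looks like (the commented-out value is an example of what configure may have produced):

```
# In make.inc, replace the preprocessor definition:
# CPP = cpp
CPP = gcc -E
```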
Mysterious crashes in zdotc are due to a known incompatibility of complex functions with some optimized BLAS. They should no longer be an issue, as all zdotc calls have been replaced in the current QUANTUM ESPRESSO version.
"I have had some success compiling pw.x on the newish apple hardware. Running run-tests-pw-parallel resulted in all but 3 tests passed (3 unknown). QE6.7 works out of the box:
./configure FC=mpif90 CC=mpicc CPP=cpp-11 BLAS_LIBS="-L/opt/homebrew/lib -lveclibfort" LIBDIRS=/opt/homebrew/lib
Current develop branch needed two changes:
Cray machines may be tricky: ''... despite what people can imagine, every CRAY machine deployed can have different environment. For example on the machine I usually use for tests [...] I do have to unload some modules to make QE running properly. On another CRAY [...] there is also Intel compiler as option and the system is slightly different compared to the other.'' (info by Filippo Spiga)
./configure ARCH=craype should work for recent Cray machines. This selects the ftn compiler, which typically wraps the crayftn compiler but may also use other ones, depending upon the site and personal environment. ftn v.15.0.1 and later should compile QE properly. Some compiler versions may however run into problems: e.g. ftn v.14.0.3 has trouble with files such as esm_stres_mod.f90, Modules/qexsd*.f90, PW/src/pw_restart_new.f90. In such cases, compile the offending files with reduced optimization, using -O0 or -O1 instead of the default
-O3,fp3 optimization.
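One blunt way to apply the reduced-optimization workaround is to lower the global Fortran optimization level in make.inc and rebuild; a sketch (this lowers optimization for everything, not just the offending files, and the exact flag string in your make.inc may differ):

```shell
# Sketch: replace the default Cray optimization flags with -O1 in the
# generated make.inc, then rebuild from scratch.
sed -i 's/-O3,fp3/-O1/' make.inc
make clean && make pw
```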
If you want to use the Intel compiler instead, try something like:
$ module swap PrgEnv-cray PrgEnv-intel
$ ./configure ARCH=craype [--enable-openmp --enable-parallel --with-scalapack]