Many problems in parallel execution stem from mixing up different MPI libraries and runtime environments. There are two major MPI implementations, OpenMPI and MPICH, each coming in various, not necessarily compatible, versions; plus vendor-specific implementations (e.g. Intel MPI). A parallel machine may have multiple parallel compilers (typically, mpif90 scripts calling different serial compilers), multiple MPI libraries, and multiple launchers for parallel codes (different versions of mpirun and/or mpiexec).
You have to figure out the proper combination of all of the above, which may require using the "module" command or manually setting environment variables and execution paths. What exactly has to be done depends upon the configuration of your machine. You should inquire with your system administrator or user support, if available; if not, YOU are the system administrator and user support, and YOU have to solve your problems.
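As an illustration only, on a machine that uses environment modules the steps might look like the following (the module names are purely hypothetical and site-specific; check what your machine actually provides):
   module avail                          # list the compiler/MPI modules installed on this machine
   module load intel/2023 intel-mpi/2023 # load a matching compiler + MPI pair (hypothetical names)
   which mpif90 mpirun                   # verify which parallel compiler and launcher are now in the path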
Please also note that while mysterious and irreproducible crashes in parallel execution may be due to QUANTUM ESPRESSO bugs, more often than not they are a consequence of buggy compilers or of buggy or miscompiled MPI libraries.
Some implementations of the MPI library have problems with input redirection in parallel. This typically shows up in the form of mysterious errors when reading data. If this happens, use the option -i (or -in, -inp, -input), followed by the input file name. Example:
   pw.x -i inputfile -nk 4 > outputfile
Of course the input file must be accessible by the processor that must read it (only one processor reads the input file and subsequently broadcasts its contents to all other processors).
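For instance, a complete parallel run that combines the MPI launcher with the -i option might look like this (the number of processes, the number of pools, and the file names are placeholders to adapt to your case):
   mpirun -np 8 pw.x -i inputfile -nk 4 > outputfile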
Apparently the LSF implementation of MPI libraries manages to ignore or to confuse even the -i/in/inp/input mechanism that is present in all QUANTUM ESPRESSO codes. In this case, use the -i option of mpirun.lsf to provide an input file.
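A tentative sketch, assuming mpirun.lsf accepts the input file through its own -i option as described above (the exact syntax and option placement may differ on your system; check the site documentation):
   mpirun.lsf -i inputfile pw.x -nk 4 > outputfile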
QUANTUM ESPRESSO cannot control where MPI processes and OpenMP threads execute: this is the job of the operating system and of the MPI launcher. All you can control is the number of MPI processes (with mpirun) and of OpenMP threads per MPI process (with the environment variable OMP_NUM_THREADS). If you are out of luck and out of better ideas, just set OMP_NUM_THREADS=1.
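If instead you do want to try a hybrid MPI+OpenMP run, a minimal sketch (with arbitrary example counts, to be tuned to your hardware) is:
   export OMP_NUM_THREADS=4                       # 4 OpenMP threads per MPI process
   mpirun -np 8 pw.x -i inputfile > outputfile    # 8 MPI processes, 8x4=32 cores in total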