Quantum ESPRESSO (QE) is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials.
Presentation
The introduction presentation video and slides are attached to this page.
Downloading and compiling QE
You have to register on the Quantum ESPRESSO download page to download Quantum ESPRESSO v7.1; after registering, fetch the release pack:

```
wget https://www.quantum-espresso.org/rdmdownload-download/488/v7-1/2d5cbaa760022a30a83d648217f89560/qe-7.1-ReleasePack.tar.gz
```
Sample build script for Fritz:
```
tar xfp qe-7.1-ReleasePack.tar.gz
cd qe-7.1

module load mkl/2022.1.0
module load git/2.35.2 m4/1.4.19

# Pick one MPI flavour; if both assignments are left in, the second one wins.
MPI=intelmpi-2021.7.0
MPI=openmpi-4.1.2-intel2021.4.0

if [[ "$MPI" =~ ^intel ]]; then
    module load intel/2022.1.0
    export I_MPI_CC=icc
    export I_MPI_CXX=icpc
    export I_MPI_FC=ifort
    export I_MPI_F90=ifort
    COMP="CC=mpiicc CXX=mpiicpc FC=mpiifort F90=mpiifort MPIF90=mpiifort"
    SCA="--with-scalapack=intel"
elif [[ "$MPI" =~ ^openmpi ]]; then
    export OMPI_MPICC=icc
    export OMPI_MPICXX=icpc
    export OMPI_MPIFC=ifort
    export OMPI_MPIF90=ifort
    COMP="CC=mpicc CXX=mpicxx FC=mpif90 F90=mpif90"
fi

# Turn the first "-" into "/" to get the module name, e.g. intelmpi-2021.7.0 -> intelmpi/2021.7.0.
module load $(echo $MPI | sed -e "s/\-/\//")

./configure --enable-parallel --prefix=$PWD/../qe-7.1-$MPI \
    --enable-openmp \
    $SCA $COMP

make -j 32 cp pw
make install
```
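After `make install`, a quick sanity check that the binaries exist and are linked against the intended MPI and MKL can save a failed batch job later. This is a minimal sketch for the Intel MPI variant, assuming the module names from the build script above and a dynamically linked build; adjust the names for the Open MPI variant.

```
# Load the same toolchain that was used for the build (names as in the build script above).
module load mkl/2022.1.0 intel/2022.1.0 intelmpi/2021.7.0

# Install prefix chosen by the build script.
INSDIR=$PWD/../qe-7.1-intelmpi-2021.7.0

# Check that pw.x and cp.x were installed ...
ls "$INSDIR"/bin/pw.x "$INSDIR"/bin/cp.x

# ... and that they resolve against the expected MPI and MKL libraries.
ldd "$INSDIR"/bin/pw.x | grep -Ei 'mpi|mkl'
```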
Sample build script for PSC:
Download HPC-X as described in the "ISC23 SCC Getting Started with Bridges-2 Cluster" guide.
```
tar xfp qe-7.1-ReleasePack.tar.gz
cd qe-7.1

source /jet/packages/intel/oneapi/compiler/2022.1.0/env/vars.sh
source /jet/packages/intel/oneapi/mkl/2022.1.0/env/vars.sh

# Pick one MPI flavour; if both assignments are left in, the second one wins.
MPI=intelmpi
MPI=hpcx

if [[ "$MPI" =~ ^intel ]]; then
    source /jet/packages/intel/oneapi/mpi/2021.6.0/env/vars.sh
    COMP="CC=mpiicc CXX=mpiicpc FC=mpiifort F90=mpiifort MPIF90=mpiifort"
    SCA="--with-scalapack=intel"
elif [[ "$MPI" =~ ^hpcx ]]; then
    module use $HOME/hpcx-2.13.1/modulefiles
    module load hpcx
    export OMPI_MPICC=icc
    export OMPI_MPICXX=icpc
    export OMPI_MPIFC=ifort
    export OMPI_MPIF90=ifort
    COMP="CC=mpicc CXX=mpicxx FC=mpif90 F90=mpif90"
fi

which mpicc

BASE=$PWD
INSDIR=$BASE/../qe-7.1-$MPI

./configure --enable-parallel --prefix=$INSDIR \
    --enable-openmp \
    CFLAGS="-O3" \
    FCFLAGS="-O3" \
    F90FLAGS="-O3" \
    $SCA $COMP

make -j 32 cp pw 2>&1 | tee $BASE/build-${MPI}.log
make install | tee -a $BASE/build-${MPI}.log
```
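At run time the same compiler, MKL, and MPI environment has to be set up before `pw.x` or `cp.x` is launched. A minimal sketch for the HPC-X variant, assuming the same oneAPI paths and user-local HPC-X install used in the build script above; the install prefix below is a placeholder.

```
# Compiler and MKL runtime, same oneAPI installation used for the build.
source /jet/packages/intel/oneapi/compiler/2022.1.0/env/vars.sh
source /jet/packages/intel/oneapi/mkl/2022.1.0/env/vars.sh

# HPC-X (Open MPI) from the user-local install assumed in the build script.
module use $HOME/hpcx-2.13.1/modulefiles
module load hpcx

# Installed binaries from the HPC-X build; adjust the path to your own install prefix.
export PATH=/path/to/qe-7.1-hpcx/bin:$PATH
which pw.x
```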
Running PWscf
Download the QE benchmarks:
```
git clone https://github.com/QEF/benchmarks.git
```
To test the build, practice with one of the small benchmarks, AUSURF112:
```
cd benchmarks/AUSURF112
mpirun -np <NPROC> <MPI FLAGS> pw.x -inp ausurf.in
```
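The benchmark runs for this task go through Slurm rather than an interactive mpirun. The following is only a sketch of a 4-node, full-PPN batch script for the Intel MPI build on Fritz; the tasks-per-node value, walltime, module names, and paths are assumptions that must be adapted to the cluster and to your own install.

```
#!/bin/bash -l
#SBATCH --job-name=qe-ausurf
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=72   # full PPN; adjust to the actual cores per node
#SBATCH --time=01:00:00

# Toolchain used for the build (module names as in the Fritz build script above).
module load mkl/2022.1.0 intel/2022.1.0 intelmpi/2021.7.0

# Pure-MPI run: keep OpenMP at one thread unless hybrid runs are tested deliberately.
export OMP_NUM_THREADS=1

# Placeholder paths: point them at your benchmark copy and install prefix.
cd /path/to/benchmarks/AUSURF112
mpirun -np $SLURM_NTASKS /path/to/qe-7.1-intelmpi-2021.7.0/bin/pw.x -inp ausurf.in > ausurf.out
```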
A sample 4-node run (160 MPI ranks) and the resulting output:

```
mpirun -np 160 -x UCX_NET_DEVICES=mlx5_0:1 -x UCX_LOG_LEVEL=error pw.x -inp ausurf.in

     Program PWSCF v.7.1 starts on 12Sep2022 at 20:12:15

     This program is part of the open-source Quantum ESPRESSO suite
     for quantum simulation of materials; please cite
         "P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
         "P. Giannozzi et al., J. Phys.:Condens. Matter 29 465901 (2017);
         "P. Giannozzi et al., J. Chem. Phys. 152 154105 (2020);
          URL http://www.quantum-espresso.org",
     in publications or presentations arising from this work. More details at
     http://www.quantum-espresso.org/quote

     Parallel version (MPI), running on   160 processors

     MPI processes distributed on     4 nodes
     166131 MiB available memory on the printing compute node when the environment starts

     Reading input from ausurf.in
Warning: card &CELL ignored
Warning: card CELL_DYNAMICS = 'NONE', ignored
Warning: card / ignored

     Current dimensions of program PWSCF are:
     Max number of different atomic species (ntypx) = 10
     Max number of k-points (npk) =  40000
     Max angular momentum in pseudopotentials (lmaxx) =  4

     ...

     General routines
     calbec       :     21.09s CPU     22.57s WALL (     168 calls)
     fft          :      1.25s CPU      1.45s WALL (     296 calls)
     ffts         :      0.09s CPU      0.12s WALL (      44 calls)
     fftw         :     40.23s CPU     40.97s WALL (  100076 calls)
     interpolate  :      0.11s CPU      0.15s WALL (      22 calls)

     Parallel routines

     PWSCF        :   3m27.78s CPU   3m58.71s WALL

   This run was terminated on:  20:16:14  12Sep2022

=------------------------------------------------------------------------------=
   JOB DONE.
=------------------------------------------------------------------------------=
```
We will look at the final wall-clock time (WALL) reported on the PWSCF line near the end of the output.
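For comparing runs it is convenient to pull that line out of each output file. A small sketch, assuming the output was redirected to `ausurf.out` as in the batch script above:

```
# The total run time is reported on the line starting with "PWSCF" near the end of the output.
grep -E '^ *PWSCF *:' ausurf.out
#      PWSCF        :   3m27.78s CPU   3m58.71s WALL
```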
Task and submission
Use QE 7.1 for the benchmarks:
Use the following 2 inputs:
(The two input files are attached to this page.)
Profile the two given inputs:
Use any of the remote clusters to run an MPI profile (with IPM or any other MPI profiler) over 4 nodes at full PPN on the given inputs (see the IPM sketch after this task list). Submit the profile as a PDF to the team's folder.
Add to your presentation the three main MPI calls being used and the total MPI time spent.
Run QE CP with the given inputs on both the PSC Bridges-2 and FAU Fritz CPU clusters.
Submit the results to the team's folder, 4-node runs only (4 results, 2 per cluster).
Visualize the input on any of the clusters by creating a figure or video out of the input file, using any method for the visualization (see the XCrySDen sketch after this task list). If the team has a Twitter account, tag the figure/video with the team name or university name and publish it on Twitter with the following tags: #ISC23 #ISC23_SCC @QuantumESPRESSO
Select one cluster and experiment with 1, 2, and 4 node runs. Add to your presentation a scalability graph based on your results and any conclusions you drew. No need to submit those results; just show your work in your presentation for the interview.
Bonus task - run QE CP on GPUs, either on the FAU Alex cluster (A100 partition) or on PSC (V100 GPUs). Use only 4 GPUs on a single node for the run. Submit the results to the team's folder.
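For the profiling task, one common approach is IPM loaded via LD_PRELOAD, which needs no recompilation. This is only a sketch under the assumption that an IPM build (and its `ipm_parse` tool) is available on the cluster; the library path and output-file name are placeholders, and the `-x` flags are Open MPI / HPC-X style (Intel MPI uses `-genv` instead).

```
# Profile a 4-node, full-PPN run with IPM preloaded into every MPI rank.
# The path to libipm.so is a placeholder; use your own IPM installation.
export IPM_REPORT=full     # print the per-call MPI summary at the end of the run
export IPM_LOG=full        # write a log that ipm_parse can turn into a report

mpirun -np $SLURM_NTASKS \
       -x LD_PRELOAD=/path/to/ipm/lib/libipm.so \
       -x IPM_REPORT -x IPM_LOG \
       pw.x -inp ausurf.in > ausurf.out

# Post-process the XML log written by IPM (file name varies) into an HTML report,
# then export it to PDF for submission.
ipm_parse -html <ipm_xml_file>
```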
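For the visualization item, one possibility, assuming XCrySDen is available (or can be installed) on the cluster and that the input is in PWscf format, is to open the input file directly and export an image from the GUI; any other visualization tool is equally acceptable.

```
# XCrySDen reads PWscf-format input via --pwi; an X-forwarded or remote-desktop
# session is required. The module name is an assumption and may differ per cluster.
module load xcrysden
xcrysden --pwi ausurf.in    # then export an image of the structure from the GUI
```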
Submission and Presentation:
- Submit all the build scripts, standard output, logs, and Slurm run scripts used to the team's folder.
- No need to submit the output data or source codes.
- Prepare slides for the team’s interview based on your work for this application.