Quantum ESPRESSO (QE) is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials.
Presentation
Here are the introduction presentation video and slides (attached to this page).
Downloading and compiling QE
You have to register at https://www.quantum-espresso.org to download Quantum ESPRESSO v7.1, then fetch the release tarball:

wget https://www.quantum-espresso.org/rdmdownload-download/488/v7-1/2d5cbaa760022a30a83d648217f89560/qe-7.1-ReleasePack.tar.gz
Sample build script for Fritz:
tar xfp qe-7.1-ReleasePack.tar.gz
cd qe-7.1

module load mkl/2022.1.0
module load git/2.35.2 m4/1.4.19

# pick one MPI stack (the second assignment wins if both are left in)
MPI=intelmpi-2021.7.0
MPI=openmpi-4.1.2-intel2021.4.0

if [[ "$MPI" =~ ^intel ]]; then
    module load intel/2022.1.0
    export I_MPI_CC=icc
    export I_MPI_CXX=icpc
    export I_MPI_FC=ifort
    export I_MPI_F90=ifort
    COMP="CC=mpiicc CXX=mpiicpc FC=mpiifort F90=mpiifort MPIF90=mpiifort"
    SCA="--with-scalapack=intel"
elif [[ "$MPI" =~ ^openmpi ]]; then
    export OMPI_MPICC=icc
    export OMPI_MPICXX=icpc
    export OMPI_MPIFC=ifort
    export OMPI_MPIF90=ifort
    COMP="CC=mpicc CXX=mpicxx FC=mpif90 F90=mpif90"
fi

# turn e.g. "intelmpi-2021.7.0" into the module name "intelmpi/2021.7.0" (first '-' becomes '/')
module load $(echo $MPI | sed -e "s/\-/\//")

./configure --enable-parallel --prefix=$PWD/../qe-7.1-$MPI \
    --enable-openmp \
    $SCA $COMP

make -j 32 cp pw
make install
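When running QE later (e.g. from a batch job), the same modules used at build time have to be loaded again. A minimal sketch for the intelmpi build above; the module names come from the build script, and the install prefix assumes the tarball was unpacked in $HOME:

# runtime environment for the intelmpi build of QE 7.1 on Fritz
module load mkl/2022.1.0 intel/2022.1.0 intelmpi/2021.7.0
# assumption: qe-7.1 was unpacked in $HOME, so --prefix resolved to $HOME/qe-7.1-intelmpi-2021.7.0
export PATH=$HOME/qe-7.1-intelmpi-2021.7.0/bin:$PATH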
Sample build script for PSC:
Download HPC-X as described on the "ISC23 SCC Getting Started with Bridges-2 Cluster" page.
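The PSC build script below expects the HPC-X modulefiles under $HOME/hpcx-2.13.1/modulefiles. A minimal sketch of unpacking the download to that location; the archive name is hypothetical and depends on the exact HPC-X 2.13.1 package you obtained:

# hypothetical archive name -- substitute the HPC-X 2.13.1 package you actually downloaded
HPCX_TARBALL=hpcx-v2.13.1-x86_64.tbz
cd $HOME
tar xf $HPCX_TARBALL
# link the extracted directory (tarball name without .tbz) to the path the build script uses
ln -sfn ${HPCX_TARBALL%.tbz} hpcx-2.13.1
module use $HOME/hpcx-2.13.1/modulefiles
module avail hpcx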
tar xfp qe-7.1-ReleasePack.tar.gz
cd qe-7.1

source /jet/packages/intel/oneapi/compiler/2022.1.0/env/vars.sh
source /jet/packages/intel/oneapi/mkl/2022.1.0/env/vars.sh

# pick one MPI stack (the second assignment wins if both are left in)
MPI=intelmpi
MPI=hpcx

if [[ "$MPI" =~ ^intel ]]; then
    source /jet/packages/intel/oneapi/mpi/2021.6.0/env/vars.sh
    COMP="CC=mpiicc CXX=mpiicpc FC=mpiifort F90=mpiifort MPIF90=mpiifort"
    SCA="--with-scalapack=intel"
elif [[ "$MPI" =~ ^hpcx ]]; then
    module use $HOME/hpcx-2.13.1/modulefiles
    module load hpcx
    export OMPI_MPICC=icc
    export OMPI_MPICXX=icpc
    export OMPI_MPIFC=ifort
    export OMPI_MPIF90=ifort
    COMP="CC=mpicc CXX=mpicxx FC=mpif90 F90=mpif90"
fi

which mpicc

BASE=$PWD
INSDIR=$BASE/../qe-7.1-$MPI

./configure --enable-parallel --prefix=$INSDIR \
    --enable-openmp \
    CFLAGS=" -O3 " \
    FCFLAGS=" -O3 " \
    F90FLAGS="-O3 " \
    $SCA $COMP

make -j 32 cp pw 2>&1 | tee $BASE/build-${MPI}.log
make install | tee -a $BASE/build-${MPI}.log
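As on Fritz, batch jobs on Bridges-2 need the same environment that was set up at build time. A minimal sketch for the hpcx build above; the source lines are taken from the build script, and the install prefix assumes the tarball was unpacked in $HOME:

# runtime environment for the hpcx build of QE 7.1 on Bridges-2
source /jet/packages/intel/oneapi/compiler/2022.1.0/env/vars.sh
source /jet/packages/intel/oneapi/mkl/2022.1.0/env/vars.sh
module use $HOME/hpcx-2.13.1/modulefiles
module load hpcx
# assumption: qe-7.1 was unpacked in $HOME, so --prefix resolved to $HOME/qe-7.1-hpcx
export PATH=$HOME/qe-7.1-hpcx/bin:$PATH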
Running PWscf
Download QE benchmarks
git clone https://github.com/QEF/benchmarks.git
We will look for the final wall-clock time (WALL) reported at the end of the run.
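For illustration, a 4-node, full-PPN PWscf run on Fritz might look like the sketch below. The partition name, the 72 tasks per node, and the AUSURF112 case are assumptions to be checked against the Fritz documentation and the cloned benchmarks repository; the module names and install prefix follow the Fritz build script above:

#!/bin/bash
#SBATCH --job-name=qe-ausurf
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=72          # assumption: 72 cores per Fritz node
#SBATCH --partition=multinode         # assumption: check the Fritz docs for the right partition
#SBATCH --time=01:00:00

module load mkl/2022.1.0 intel/2022.1.0 intelmpi/2021.7.0

cd benchmarks/AUSURF112               # assumption: one of the cases in the cloned repository
srun $HOME/qe-7.1-intelmpi-2021.7.0/bin/pw.x -in ausurf.in > ausurf.out

# the figure of interest is the final WALL time
grep WALL ausurf.out | tail -1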
Task and submission
Use the 2 input files attached to this page.
Profile the 2 given inputs:
Use any of the remote clusters to run an MPI profile (such as an IPM profile or any other profiler) over 4 nodes, full PPN, to profile the given inputs. Submit the profile as a PDF to the team's folder.
Add to your presentation the 3 main MPI calls that are used and the MPI time consumed.
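A minimal sketch of collecting such a profile with IPM by preloading its library inside the job script above; the library path is hypothetical (site installations differ), and any other MPI profiler works just as well:

# hypothetical IPM location -- point LD_PRELOAD at your site's (or your own) libipm.so
export LD_PRELOAD=$HOME/ipm/lib/libipm.so
export IPM_REPORT=full                # print the full per-call breakdown at MPI_Finalize

srun $HOME/qe-7.1-intelmpi-2021.7.0/bin/pw.x -in ausurf.in > ausurf-ipm.out
# the IPM summary at the end of the job output lists the top MPI calls and the total MPI time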
Run QE CP with the given inputs on both the PSC Bridges-2 and FAU Fritz CPU clusters, and submit four-node runs.
Submit the results to the team's folder, 4-node runs only (4 results, 2 per cluster).
Select one cluster and experiment with 1-, 2-, and 4-node runs. Add to your presentation a scalability graph based on your results and any conclusions you reached. No need to submit those results, just show your work in your presentation for the interview.
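One way to drive such a scan is a small wrapper around the job script sketched earlier; run_qe.sh is a hypothetical name for that script, assumed to write its output to ausurf-${SLURM_NNODES}nodes.out:

# submit the same job at 1, 2 and 4 nodes
for N in 1 2 4; do
    sbatch --nodes=$N run_qe.sh
done

# once the jobs have finished, pull out the final WALL time of each run for the graph
for N in 1 2 4; do
    grep WALL ausurf-${N}nodes.out | tail -1
done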
Bonus task - run QE CP on the PSC cluster using V100 GPUs. Use only 4 GPUs on a single node for the run. Submit the results to the team's folder.
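A sketch of the resource request for such a run; the partition name and GPU specifier are assumptions to be verified against the Bridges-2 user guide, a separate GPU-enabled QE build (installed under $HOME/qe-7.1-gpu here) is assumed, and cp.in stands in for the given input:

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --partition=GPU               # assumption: name of the Bridges-2 V100 partition
#SBATCH --gres=gpu:4                  # 4 GPUs on a single node
#SBATCH --ntasks-per-node=4           # one MPI rank per GPU
#SBATCH --time=01:00:00

# assumption: GPU-enabled QE 7.1 built separately and installed under $HOME/qe-7.1-gpu
mpirun -np 4 $HOME/qe-7.1-gpu/bin/cp.x -in cp.in > cp-gpu.out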
Submission and Presentation:
- Submit all the build scripts, standard output, logs and run scripts to the team’s folder.
- No need to submit the output data or source codes.
- Prepare slides for the team’s interview based on your work for this application.