Getting Started with Quantum ESPRESSO for ISC23 SCC

 

 

Overview

Quantum ESPRESSO (QE) is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials.

 

Presentation

Here are the introduction presentation video and slides:

Downloading and compiling QE

You have to register at https://www.quantum-espresso.org/download-page to download Quantum ESPRESSO v7.1.

Sample build script for Fritz:

tar xfp qe-7.1-ReleasePack.tar.gz
cd qe-7.1
module load mkl/2022.1.0
module load git/2.35.2 m4/1.4.19
# Select one MPI: the later assignment wins, so comment out the one you do not want
MPI=intelmpi-2021.7.0
MPI=openmpi-4.1.2-intel2021.4.0
if [[ "$MPI" =~ ^intel ]]; then
  module load intel/2022.1.0
  export I_MPI_CC=icc
  export I_MPI_CXX=icpc
  export I_MPI_FC=ifort
  export I_MPI_F90=ifort
  COMP="CC=mpiicc CXX=mpiicpc FC=mpiifort F90=mpiifort MPIF90=mpiifort"
  SCA="--with-scalapack=intel"
elif [[ "$MPI" =~ ^openmpi ]]; then
  export OMPI_MPICC=icc
  export OMPI_MPICXX=icpc
  export OMPI_MPIFC=ifort
  export OMPI_MPIF90=ifort
  COMP="CC=mpicc CXX=mpicxx FC=mpif90 F90=mpif90"
fi
# Convert e.g. openmpi-4.1.2-intel2021.4.0 into the module name openmpi/4.1.2-intel2021.4.0
module load $(echo $MPI | sed -e "s/\-/\//")
./configure --enable-parallel --prefix=$PWD/../qe-7.1-$MPI \
  --enable-openmp \
  $SCA $COMP
# Build the CP (cp.x) and PWscf (pw.x) codes, then install them under the prefix
make -j 32 cp pw
make install

Sample build script for PSC:

Download HPC-X as described in the ISC23 SCC "Getting Started with Bridges-2 Cluster" guide.

tar xfp qe-7.1-ReleasePack.tar.gz
cd qe-7.1
source /jet/packages/intel/oneapi/compiler/2022.1.0/env/vars.sh
source /jet/packages/intel/oneapi/mkl/2022.1.0/env/vars.sh
# Select one MPI: the later assignment wins, so comment out the one you do not want
MPI=intelmpi
MPI=hpcx
if [[ "$MPI" =~ ^intel ]]; then
  source /jet/packages/intel/oneapi/mpi/2021.6.0/env/vars.sh
  COMP="CC=mpiicc CXX=mpiicpc FC=mpiifort F90=mpiifort MPIF90=mpiifort"
  SCA="--with-scalapack=intel"
elif [[ "$MPI" =~ ^hpcx ]]; then
  module use $HOME/hpcx-2.13.1/modulefiles
  module load hpcx
  export OMPI_MPICC=icc
  export OMPI_MPICXX=icpc
  export OMPI_MPIFC=ifort
  export OMPI_MPIF90=ifort
  COMP="CC=mpicc CXX=mpicxx FC=mpif90 F90=mpif90"
fi
# Confirm the expected MPI compiler wrapper is first in PATH
which mpicc
BASE=$PWD
INSDIR=$BASE/../qe-7.1-$MPI
./configure --enable-parallel --prefix=$INSDIR \
  --enable-openmp \
  CFLAGS="-O3" \
  FCFLAGS="-O3" \
  F90FLAGS="-O3" \
  $SCA $COMP
# Build CP (cp.x) and PWscf (pw.x), logging the build output
make -j 32 cp pw 2>&1 | tee $BASE/build-${MPI}.log
make install | tee -a $BASE/build-${MPI}.log
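On either cluster, a quick sanity check after make install is to list the installed binaries (the install prefix comes from the build scripts above):

ls ../qe-7.1-$MPI/bin    # pw.x and cp.x should be among the installed executables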

Running PWscf

Download the QE benchmarks:

git clone https://github.com/QEF/benchmarks.git

Practice with one of the small benchmarks, "AUSURF112"; a sample batch script is sketched below.
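A minimal Slurm batch script for a four-node AUSURF112 run on Fritz might look like the sketch below. The install prefix, benchmark path, and pool count are assumptions; adjust them to your own layout:

#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=72   # full PPN; Fritz nodes have 72 cores, adjust elsewhere
#SBATCH --time=00:30:00

module load mkl/2022.1.0 intel/2022.1.0 intelmpi/2021.7.0

export OMP_NUM_THREADS=1       # pure-MPI run

# Install prefix and benchmark checkout are assumptions
QE=$HOME/qe-7.1-intelmpi-2021.7.0/bin
cd $HOME/benchmarks/AUSURF112

# -npool splits the k-points across pools of MPI ranks
srun $QE/pw.x -npool 2 -input ausurf.in > ausurf.out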

Sample output

 

We will look for the final wall-clock time (WALL) reported at the end of the run.
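In the pw.x output the grand total appears on the line labeled with the program name, just before the run terminates; the numbers below are illustrative only:

     PWSCF        :   4m32.48s CPU   4m55.82s WALL

A quick way to pull it out of a log (output filename assumed from the run script above):

grep WALL ausurf.out | tail -1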

Task and submission

 

Use QE 7.1 for the benchmarks:

Use the following two inputs:

  1. Profile the two given inputs.
    Use any of the remote clusters to run an MPI profile (with IPM or any other profiler) over 4 nodes at full PPN on each given input; a minimal IPM sketch follows at the end of this section.

    • Submit the profile as a PDF to the team's folder.

    • Add to your presentation the three main MPI calls being used and the time spent in MPI.

  2. Run CP (cp.x) with the given inputs on both the PSC Bridges-2 and FAU Fritz CPU clusters, as four-node runs.

    1. Submit the results to the team's folder, 4-node runs only (4 results, 2 per cluster).

    2. Select one cluster and experiment with 1-, 2-, and 4-node runs. Add to your presentation a scalability graph based on your results and any conclusions you reached. There is no need to submit those results; just show your work in your presentation for the interview.

  3. Bonus task: run CP on the PSC cluster using V100 GPUs. Use only 4 GPUs on a single node for the run. Submit the results to the team's folder; a hedged GPU run sketch follows at the end of this section.

  4. Submission and Presentation:
    - Submit all build scripts, standard output, logs, and run scripts to the team's folder.
    - There is no need to submit the output data or source code.
    - Prepare slides for the team's interview based on your work on this application.
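For the profiling task, a minimal IPM sketch: preload the IPM library so it intercepts the MPI calls of the unmodified binary, without recompiling. The library path is an assumption; point it at your own IPM build:

export IPM_REPORT=full                     # print the full MPI breakdown at MPI_Finalize
export LD_PRELOAD=$HOME/ipm/lib/libipm.so  # library path is an assumption
srun $QE/pw.x -npool 2 -input ausurf.in > ausurf-ipm.out

IPM also writes an XML log; ipm_parse -html can turn it into an HTML report that you can export to PDF for submission.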
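For the bonus task you need a CUDA-enabled build of QE 7.1 (built with the NVIDIA HPC SDK compilers and configure's CUDA options). Below is a hedged single-node sketch for four V100s on Bridges-2; the partition name, GPU specifier, install path, and input filename are assumptions to check against the Bridges-2 user guide:

#!/bin/bash
#SBATCH -p GPU-shared
#SBATCH --gpus=v100-32:4      # request syntax per the Bridges-2 docs; verify locally
#SBATCH -N 1
#SBATCH --ntasks-per-node=4   # one MPI rank per GPU is the usual QE-GPU mapping
#SBATCH --time=00:30:00

# Install path and input filename are placeholders
mpirun -np 4 $HOME/qe-7.1-gpu/bin/cp.x -input cp.in > cp-gpu.out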