Overview

Quantum ESPRESSO (QE) is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials.

Presentation

Here is the Introduction presentation video and slides:


View file: ISC23 SCC - Quantum ESPRESSO.pdf

Downloading and compiling QE

You have to register at https://www.quantum-espresso.org/download-page to download Quantum ESPRESSO v7.1.


Code Block
tar xfp qe-7.1-ReleasePack.tar.gz
cd qe-7.1

module load intel/2022.3.1 compiler mkl
module load git/2.35.2 m4/1.4.19

# Choose one MPI (the last assignment wins):
MPI=impi-2021.7.0
MPI=hpcx-2.14.0
module load $(echo $MPI | sed -e "s/\-/\//")
if [[ "$MPI" =~ ^impi ]]; then
        export I_MPI_CC=icc
        export I_MPI_CXX=icpc
        export I_MPI_FC=ifort
        export I_MPI_F90=ifort
        COMP="CC=mpiicc CXX=mpiicpc FC=mpiifort F90=mpiifort MPIF90=mpiifort"
        SCA="--with-scalapack=intel"
elif [[ "$MPI" =~ ^(openmpi|hpcx) ]]; then   # HPC-X ships Open MPI, so the OMPI_* wrappers apply
        export OMPI_MPICC=icc
        export OMPI_MPICXX=icpc
        export OMPI_MPIFC=ifort
        export OMPI_MPIF90=ifort
        COMP="CC=mpicc CXX=mpicxx FC=mpif90 F90=mpif90"
fi

./configure --enable-parallel --prefix=$PWD/../qe-7.1-$MPI \
        --enable-openmp \
        $SCA $COMP

make -j 32 cp pw 
make install 

Running PWscf (Example)

Download QE benchmarks

Code Block
git clone https://github.com/QEF/benchmarks.git

Practice with one of the small benchmarks, “AUSURF112” (not part of the online task, just for practice):

Code Block
cd benchmarks/AUSURF112
mpirun -np <NPROC> <MPI FLAGS> pw.x -inp ausurf.in
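On a batch-scheduled cluster the same run would typically go through a job script. The sketch below assumes Slurm and the HPC-X build from above; the module names, core count, and install path are assumptions — adjust them to your cluster.

```shell
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=32   # full PPN; set to your nodes' core count
#SBATCH --time=01:00:00
#SBATCH --job-name=qe-ausurf

# Assumed module names, matching the build section above
module load intel/2022.3.1 compiler mkl
module load hpcx/2.14.0

export OMP_NUM_THREADS=1
# Assumed install path, from the --prefix used in configure
mpirun -np $SLURM_NTASKS ../../qe-7.1-hpcx-2.14.0/bin/pw.x -inp ausurf.in | tee ausurf.out
```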


We will look for the final wallclock time (WALL).
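As a quick sketch, the WALL value can be pulled out of the final timing line with awk. The sample line below is a made-up example of the format pw.x prints; in practice, grep your real output file.

```shell
# Made-up sample of the final timing line printed by pw.x; in practice use:
#   grep 'PWSCF.*WALL' ausurf.out
line="     PWSCF        :   5m12.34s CPU     5m30.12s WALL"

# The WALL time is the field immediately before the literal word "WALL"
wall=$(echo "$line" | awk '{for (i = 1; i <= NF; i++) if ($i == "WALL") print $(i-1)}')
echo "$wall"
```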

Task and submission

Use QE 7.1 for the benchmark:


View file: supercell_11layer.zip

  1. Profile the given input
    Use any of the remote clusters to run an MPI profile (with IPM or any other profiler) of the given input over 4 nodes at full PPN.

    • Submit the profile as PDF to the team's folder.

    • Add to your presentation the 3 main MPI calls being used, plus the total MPI time.

  2. Run CP with the given input.

    1. Submit the results to the team's folder.

    2. Experiment with runs on 1, 2, 4, and more nodes. Add to your presentation a scalability graph based on your results and any conclusions you reached. No need to submit those results; just show your work in your presentation for the interview.

  3. Bonus task - run CP using GPUs with the given input.

  4. Submission and Presentation:
    - Submit all the build scripts, standard output, logs, and run scripts to the team’s folder.
    - No need to submit the output data or source code.
    - Prepare slides for the team’s interview based on your work for this application.
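To turn the multi-node runs into a scalability graph, speedup and parallel efficiency can be computed from the measured WALL times. The sketch below uses placeholder numbers, not real measurements — substitute your own timings.

```shell
# Placeholder wall-clock times in seconds for 1-, 2- and 4-node runs;
# replace with your measured WALL values.
results="1 1200
2 660
4 380"

# speedup(n) = T(1 node) / T(n nodes); efficiency(n) = speedup(n) / n
summary=$(echo "$results" | awk 'NR == 1 {t1 = $2}
        {s = t1 / $2; printf "%d nodes: speedup %.2f, efficiency %.2f\n", $1, s, s / $1}')
echo "$summary"
```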