
Quantum ESPRESSO (QE) is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials.

Presentation

Here are the introduction presentation video and slides:

  • Video: https://www.youtube.com/watch?v=QioPJUL4038
  • Slides: ISC23 SCC - Quantum ESPRESSO.pdf

Downloading and compiling QE

You have to register at the Quantum ESPRESSO download page to get Quantum ESPRESSO v7.1, then fetch the release pack:

Code Block
wget https://www.quantum-espresso.org/rdmdownload-download/488/v7-1/2d5cbaa760022a30a83d648217f89560/qe-7.1-ReleasePack.tar.gz

Sample build script for Fritz:

Code Block
tar xfp qe-7.1-ReleasePack.tar.gz
cd qe-7.1

module load mkl/2022.1.0
module load git/2.35.2 m4/1.4.19

# two MPI stacks are available; the last assignment wins (comment one out to switch)
MPI=intelmpi-2021.7.0
MPI=openmpi-4.1.2-intel2021.4.0
if [[ "$MPI" =~ ^intel ]]; then
        module load intel/2022.1.0
        export I_MPI_CC=icc
        export I_MPI_CXX=icpc
        export I_MPI_FC=ifort
        export I_MPI_F90=ifort
        COMP="CC=mpiicc CXX=mpiicpc FC=mpiifort F90=mpiifort MPIF90=mpiifort"
        SCA="--with-scalapack=intel"
elif [[ "$MPI" =~ ^openmpi ]]; then
        export OMPI_MPICC=icc
        export OMPI_MPICXX=icpc
        export OMPI_MPIFC=ifort
        export OMPI_MPIF90=ifort
        COMP="CC=mpicc CXX=mpicxx FC=mpif90 F90=mpif90"
fi
# turn e.g. "intelmpi-2021.7.0" into the module name "intelmpi/2021.7.0"
module load $(echo $MPI | sed -e "s/\-/\//")

./configure --enable-parallel --prefix=$PWD/../qe-7.1-$MPI \
        --enable-openmp \
        $SCA $COMP

make -j 32 cp pw        # build only the CP and PWscf packages
make install 
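Once installed, here is a minimal Slurm job sketch for a 4-node pw.x run on Fritz; the 72 tasks per node and the exact module names are assumptions to check against the cluster documentation:

Code Block
#!/bin/bash -l
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=72    # assumption: full PPN on a Fritz node
#SBATCH --time=01:00:00

module load mkl/2022.1.0 intel/2022.1.0 intelmpi/2021.7.0

export OMP_NUM_THREADS=1
# adjust the binary path to wherever --prefix pointed during the build
srun $HOME/qe-7.1-intelmpi-2021.7.0/bin/pw.x -in pw.in > pw.out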

Sample build script for PSC:

Download HPC-X from the ISC23 SCC Getting Started with Bridges-2 Cluster page.
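A sketch of unpacking it so that $HOME/hpcx-2.13.1/modulefiles exists, as the build script expects; the tarball name is an assumption and varies by version and OS:

Code Block
tar xf hpcx-v2.13.1-*-x86_64.tbz -C $HOME
mv $HOME/hpcx-v2.13.1-* $HOME/hpcx-2.13.1

The build script itself: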

Code Block
tar xfp qe-7.1-ReleasePack.tar.gz
cd qe-7.1

source /jet/packages/intel/oneapi/compiler/2022.1.0/env/vars.sh
source /jet/packages/intel/oneapi/mkl/2022.1.0/env/vars.sh

# two MPI stacks are available; the last assignment wins (comment one out to switch)
MPI=intelmpi
MPI=hpcx

if [[ "$MPI" =~ 

...

^intel ]]; then
        source /jet/packages/intel/oneapi/mpi/2021.6.0/env/vars.sh
        COMP="CC=mpiicc CXX=mpiicpc FC=mpiifort F90=mpiifort MPIF90=mpiifort"
        SCA="--with-scalapack=intel"
elif [[ "$MPI" =~ 

...

^hpcx ]]; then
        module use $HOME/hpcx-2.13.1/modulefiles
        module load hpcx
        export OMPI_MPICC=icc
        export OMPI_MPICXX=icpc
        export OMPI_MPIFC=ifort
        export OMPI_MPIF90=ifort
        COMP="CC=mpicc CXX=mpicxx FC=mpif90 F90=mpif90"
fi

which mpicc     # sanity check: confirm which MPI compiler wrapper is in use

BASE=$PWD
INSDIR=$BASE/../qe-7.1-$MPI

./configure --enable-parallel --prefix=$INSDIR \
        --enable-openmp \
        CFLAGS="  -O3 

...

" \
        FCFLAGS=" -O3

...

 " \
        F90FLAGS="-O3

...

 " \
        $SCA $COMP

make -j 32 cp pw 2>&1 | tee $BASE/build-${MPI}.log
make install | tee -a $BASE/build-${MPI}.log
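A matching Slurm sketch for a 4-node run on Bridges-2; the RM partition name and 128 tasks per node are assumptions based on the standard RM node type, so verify them against the cluster documentation:

Code Block
#!/bin/bash
#SBATCH -p RM
#SBATCH -N 4
#SBATCH --ntasks-per-node=128   # assumption: full PPN on an RM node
#SBATCH -t 01:00:00

module use $HOME/hpcx-2.13.1/modulefiles
module load hpcx

export OMP_NUM_THREADS=1
# adjust the binary path to your $INSDIR from the build
mpirun -np $SLURM_NTASKS $HOME/qe-7.1-hpcx/bin/pw.x -in pw.in > pw.out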

Running PWscf

Download the QE benchmarks:

Code Block
git clone https://github.com/QEF/benchmarks.git


We will look for the final wallclock time (WALL).
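For example, the total is on the final WALL line of the timing report (the label is PWSCF for pw.x and CP for cp.x); assuming the run's output was redirected to pw.out:

Code Block
grep WALL pw.out | tail -1
# prints something like:     PWSCF        :   ...s CPU   ...s WALL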

Task and submission


Use the following 2 inputs:

  • supercell_11layer.zip
  • CP-W256.zip
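A trivial sketch to unpack them in the run directory:

Code Block
unzip supercell_11layer.zip
unzip CP-W256.zip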

  1. Profile the 2 given inputs.
    Use any of the remote clusters to run an MPI profile (such as an IPM profile or any other profiler) over 4 nodes, full PPN, to profile the given inputs; see the IPM sketch at the end of this section.

    • Submit the profile as PDF to the team's folder.

    • Add to your presentation the 3 main MPI calls being used, plus the time spent in MPI.

  2. Run QE CP with the given inputs on both the PSC Bridges-2 and FAU Fritz CPU clusters, and submit the four-node runs.

    1. Submit the results to the team's folder, 4 node runs only (4 results, 2 per cluster).

    2. Select one cluster and experiment with 1, 2, and 4 node runs. Add to your presentation a scalability graph based on your results and any conclusions you came up with. No need to submit those results; just show your work in your presentation for the interview.

  3. Bonus task - run QE CP on the PSC cluster using V100 GPUs. Use only 4 GPUs on a single node for the run. Submit the results to the team's folder.

  4. Submission and Presentation:
    - Submit all the build scripts, standard output, logs and run scripts to the team’s folder.
    - No need to submit the output data or source codes.
    - Prepare slides for the team’s interview based on your work for this application.
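For the profiling task above, here is a minimal sketch of collecting an IPM profile via LD_PRELOAD; the libipm.so install path is an assumption, and any other MPI profiler is equally acceptable:

Code Block
# assumption: IPM built and installed under $HOME/ipm
export IPM_REPORT=full              # print the full per-call summary
export IPM_LOG=full                 # write the full XML log for ipm_parse
LD_PRELOAD=$HOME/ipm/lib/libipm.so \
    srun ./pw.x -in pw.in > pw.out
# convert the XML log written by IPM into an HTML report,
# then print it to PDF for submission
ipm_parse -html <xml-file-written-by-ipm>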