...

Building and Running example

Download the source code from https://github.com/OrderN/CONQUEST-release.git.

Code Block
git clone https://github.com/OrderN/CONQUEST-release.git

Download libxc 6.2.2 from https://gitlab.com/libxc/libxc/-/tree/6.2.2?ref_type=tags.
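For example, the tarball can be fetched from the command line; the URL below assumes GitLab's standard archive layout for the 6.2.2 tag, so verify it against the page above:

Code Block
# Assumed GitLab archive URL for the 6.2.2 tag
wget https://gitlab.com/libxc/libxc/-/archive/6.2.2/libxc-6.2.2.tar.gz
tar xzf libxc-6.2.2.tar.gz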

...

Build libxc:

Code Block
# Load Intel compilers and MPI modules
cd libxc-6.2.2
./configure --prefix=<path> CC=mpiicc FC=mpiifort
make
make install 

...

Build Conquest:

Code Block
# Load Intel compilers and MPI modules
cd CONQUEST-release/src/system
# Use one of the example system-specific makefiles, such as system.kathleen.make, and edit it for the XC lib and include paths, and the FFT & BLAS libraries.
cp system.kathleen.make system.make
# Add the correct OpenMP flag (-qopenmp for Intel) to the compile and link arguments
# Set MULT_KERN to ompGemm
cd ..
make
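The makefile edits can also be scripted from a build script. The following is a minimal, untested sketch: the variable names (COMPFLAGS, LINKFLAGS, XC_COMPFLAGS, XC_LIB, MULT_KERN) follow the example system.*.make files shipped with CONQUEST, and the libxc link line is an assumption, so check both against your release before relying on it.

Code Block
# Assumed libxc install prefix from the libxc step above
LIBXC=<path>
cd CONQUEST-release/src/system
cp system.kathleen.make system.make
# Point the XC variables at the libxc install
sed -i "s|^XC_COMPFLAGS.*|XC_COMPFLAGS = -I$LIBXC/include|" system.make
sed -i "s|^XC_LIB.*|XC_LIB = -L$LIBXC/lib -lxcf90 -lxc|" system.make
# Select the OpenMP matrix multiplication kernel
sed -i "s|^MULT_KERN.*|MULT_KERN = ompGemm|" system.make
# Prepend the Intel OpenMP flag to the compile and link arguments
sed -i "s|^COMPFLAGS *=|COMPFLAGS = -qopenmp|" system.make
sed -i "s|^LINKFLAGS *=|LINKFLAGS = -qopenmp|" system.make
cd .. && make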

Sample build script for libxc and Conquest:

Code Block
#!/bin/bash
BASE=$PWD
source /opt/intel/compiler/2023.2.1/env/vars.sh

rm -rf libxc-6.2.2
tar xfp libxc-6.2.2.tar.gz
cd libxc-6.2.2

# Select the MPI stack; the later assignment takes effect
MPI=impi-2021.10.0
MPI=hpcx-2.18

if [[ "$MPI" =~ ^impi ]]; then
        source /opt/intel/mpi/2021.10.0/env/vars.sh
        export MPIFC=mpiifort
        export CC=mpiicc
        export FC=$MPIFC
elif [[ "$MPI" =~ ^hpcx ]]; then
        module use <path>/$MPI/modulefiles
        module load hpcx
        export OMPI_FC=ifort
        export OMPI_F90=ifort
        export MPIFC=mpif90
        export CC=mpicc
        export FC=mpif90
fi

./configure --prefix=$BASE/libxc-6.2.2-$MPI
make -j 16 install

...

Code Block
#!/bin/bash
BASE=$PWD
source /opt/intel/compiler/2023.2.1/env/vars.sh

# Select the MPI stack; the later assignment takes effect
export MPI=hpcx-2.18
export MPI=impi-2021.10.0
if [[ "$MPI" =~ ^impi ]]; then
        source /opt/intel/mpi/2021.10.0/env/vars.sh
        export MPIFC=mpiifort
        export WHICHMPI=intelmpi
elif [[ "$MPI" =~ ^hpcx ]]; then
        module use <path>/$MPI/modulefiles
        module load hpcx
        export OMPI_FC=ifort
        export MPIFC=mpif90
        export WHICHMPI=openmpi
fi
cd src
make clean
make -j 16
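A quick post-build sanity check (a sketch only; the executable name Conquest and its location are assumptions based on the usual CONQUEST layout, so adjust the path to your tree):

Code Block
# Confirm the executable exists and which MPI it was linked against
ls -l ../bin/Conquest
ldd ../bin/Conquest | grep -i mpi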

...

Tasks & Submissions

Input:

View file: BulkSiDoped4096LargeWaterCell.zip

The virtual task involves performing linear scaling calculations on samples of bulk silicon with different numbers of atoms. Weak scaling in Conquest is examined by keeping the number of atoms per MPI process fixed and scaling the number of processes with the system size (number of atoms). You have been provided with three inputs: 512 atoms (si_444.xtl), 1728 atoms (si_666.xtl) and 4096 atoms (si_888.xtl). The minimum number of atoms per MPI process is 8; the maximum will be dictated by memory limitations. The simplest way to examine weak scaling is to keep the product of MPI processes and OpenMP threads per process constant and vary the system size. You might also explore the effect of under-populating nodes where that is possible.
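For reference, a single run with a given MPI × OpenMP split might look like the sketch below. It is illustrative only: the launcher and executable path depend on your system and scheduler, and the coordinate file (e.g. si_444.xtl) is assumed to be selected via IO.Coordinates in a Conquest_input file present in the working directory.

Code Block
# Example split: 64 MPI processes x 2 OpenMP threads for the 512-atom input
# (8 atoms per MPI process); adapt the launcher to your scheduler
export OMP_NUM_THREADS=2
mpirun -np 64 ./Conquest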

The smaller inputs are only for practice, not for submissions. The only input for submission is si_888.xtl.

  1. Find the best balance between OpenMP threads and MPI processes, and investigate the weak scaling (with the other inputs) as the MPI/OpenMP balance is changed. Present your work in the team’s interview presentation.

  2. Run CONQUEST on 4 nodes and submit the results to the team’s folder (with any number of processes per node you choose).

  3. Run an IPM profile (or any other MPI profiler) on 4 nodes, find the 3 most used MPI calls, and show your work in the team’s interview presentation (one way to enable IPM is sketched after this list).

  4. Run the application on 1, 2 and 4 nodes (for the si_888.xtl input) and present a strong scaling graph in the team’s interview presentation.

  5. Submit your best result (standard output), makefiles and build scripts. Do not submit binary output files or multiple results.
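For task 3, IPM is typically attached by preloading its library at run time. The sketch below shows one way to do this; the install path, process count and report level are assumptions to adapt to your site's IPM installation.

Code Block
# Assumed IPM install path; IPM_REPORT=full prints a summary (including the
# most-used MPI calls) when the run finishes
export IPM_REPORT=full
export LD_PRELOAD=<path-to-ipm>/lib/libipm.so
export OMP_NUM_THREADS=2
mpirun -np 256 ./Conquest      # e.g. 4 nodes x 64 processes per node
unset LD_PRELOAD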