Getting started with Conquest for ISC24 SCC (In-Person)

 

Overview

CONQUEST is a DFT code designed for large-scale calculations, with excellent parallelisation. It provides an exact diagonalisation approach for systems from 1 to 10,000+ atoms, and brings the possibility of linear scaling calculations on over 1,000,000 atoms. In this task, you will be using the linear scaling approach, which can show perfect weak scaling on thousands of cores.

 

Note: This page may change until the competition starts, so make sure to check back for updates until the opening ceremony.

 

Conquest presentation to the teams:

 


Building and Running example

Download the source code from the GitHub repository OrderN/CONQUEST-release (the full public release of the large-scale and linear-scaling DFT code CONQUEST):

git clone https://github.com/OrderN/CONQUEST-release.git

Download libxc 6.2.2 from https://gitlab.com/libxc/libxc/-/tree/6.2.2?ref_type=tags.
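For example, the release tarball can be fetched and unpacked from the GitLab tag above; the archive URL and file name below follow GitLab's standard pattern and are assumptions, so adjust them if you download the tarball another way.

# Fetch and unpack libxc 6.2.2 (archive URL assumed from the GitLab tag above)
wget https://gitlab.com/libxc/libxc/-/archive/6.2.2/libxc-6.2.2.tar.gz
tar xf libxc-6.2.2.tar.gz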

 

Prerequisites

  • FFTW/MKL package

  • SCALAPACK
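On most clusters these are provided by an Intel oneAPI toolchain, since MKL bundles FFTW3 wrappers, BLAS/LAPACK and ScaLAPACK. A minimal environment check might look like the following sketch; the module names are hypothetical, so adapt them to your system.

# Hypothetical module names -- adapt to your cluster
module load intel-compilers intel-mpi mkl
# MKL supplies FFTW3 wrappers, BLAS/LAPACK and ScaLAPACK
echo $MKLROOT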

 

Build libxc:

# Load Intel compilers and MPI modules
cd libxc-6.2.2
./configure --prefix=<path> CC=mpiicc FC=mpiifort
make
make install

 

Build Conquest:

# Load Intel compilers and MPI modules
cd CONQUEST-release/src/system
# Use one of the example system-specific makefiles, such as system.kathleen.make,
# and edit it for the XC lib and include paths, and the FFT & BLAS libraries.
cp system.kathleen.make system.make
# Add the correct flag (-qopenmp for Intel) for OpenMP to the compile and link arguments
# Set MULT_KERN to ompGemm
cd ..
make

 

Sample build scripts for libxc and Conquest:

#!/bin/bash
BASE=$PWD
source /opt/intel/compiler/2023.2.1/env/vars.sh
rm -rf libxc-6.2.2
tar xfp libxc-6.2.2.tar.gz
cd libxc-6.2.2
# Select the MPI flavour: Intel MPI (impi) or HPC-X (hpcx)
MPI=impi-2021.10.0
if [[ "$MPI" =~ ^impi ]]; then
    source /opt/intel/mpi/2021.10.0/env/vars.sh
    export MPIFC=mpiifort
    export CC=mpiicc
    export FC=$MPIFC
elif [[ "$MPI" =~ ^hpcx ]]; then
    module use <path>/$MPI/modulefiles
    module load hpcx
    export OMPI_FC=ifort
    export OMPI_F90=ifort
    export MPIFC=mpif90
    export CC=mpicc
    export FC=mpif90
fi
./configure --prefix=$BASE/libxc-6.2.2-$MPI
make -j 16 install

Modify src/system/system.make:

# This is a system.make file for the UCL Kathleen machine. See
# https://www.rc.ucl.ac.uk/docs/Clusters/Kathleen/ for details

# Set compilers
FC=$(MPIFC)

# OpenMP flags
# Set this to "OMPFLAGS= " if compiling without openmp
# Set this to "OMPFLAGS= -qopenmp" if compiling with openmp
OMPFLAGS= -qopenmp
# Set this to "OMP_DUMMY = DUMMY" if compiling without openmp
# Set this to "OMP_DUMMY = " if compiling with openmp
OMP_DUMMY =

# Set BLAS and LAPACK libraries
# MacOS X
# BLAS= -lvecLibFort
# Intel MKL use the Intel tool
# Generic
#BLAS= -llapack -lblas

# LibXC compatibility
# Choose LibXC version: v4 (deprecated) or v5/6 (v5 and v6 have the same interface)
# XC_LIBRARY = LibXC_v4
XC_DIR = <path>/libxc-6.2.2
XC_LIBRARY = LibXC_v5
XC_LIB = -L$(XC_DIR)/lib -lxcf90 -lxc
XC_COMPFLAGS = -I$(XC_DIR)/include

# Set FFT library
FFT_LIB=-lmkl_rt
FFT_OBJ=fft_fftw3.o

# Full library call; remove scalapack if using dummy diag module
# If using OpenMPI, use -lscalapack-openmpi instead.
LIBS= -qmkl=sequential -lmkl_scalapack_lp64 -lmkl_blacs_$(WHICHMPI)_lp64 -liomp5 $(XC_LIB)

# Compilation flags
# NB for gcc10 you need to add -fallow-argument-mismatch
COMPFLAGS= -xHOST -O3 -g $(OMPFLAGS) $(XC_COMPFLAGS)

# Linking flags

# Matrix multiplication kernel type
MULT_KERN = ompGemm

# Use dummy DiagModule or not
DIAG_DUMMY =

Build Conquest:

#!/bin/bash
BASE=$PWD
source /opt/intel/compiler/2023.2.1/env/vars.sh
# Select the MPI flavour: Intel MPI (impi) or HPC-X (hpcx)
export MPI=impi-2021.10.0
if [[ "$MPI" =~ ^impi ]]; then
    source /opt/intel/mpi/2021.10.0/env/vars.sh
    export MPIFC=mpiifort
    export WHICHMPI=intelmpi
elif [[ "$MPI" =~ ^hpcx ]]; then
    module use <path>/$MPI/modulefiles
    module load hpcx
    export OMPI_FC=ifort
    export MPIFC=mpif90
    export WHICHMPI=openmpi
fi
cd src
make clean
make -j 16

Build error:

If you encounter a Fortran module error with pseudo_tm_info.f90, modify src/Makefile as shown below.

pseudo_tm_info.o: pseudo_tm_info.f90
	$(FC) $(XC_COMPFLAGS) -c $<

Running Conquest:

You will need to set the number of OpenMP threads per process as well as the number of MPI processes.

export OMP_NUM_THREADS=XX
mpirun -np YY path/to/Conquest
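A minimal batch script might look like the sketch below, assuming a Slurm scheduler and that the input files (Conquest_input and the coordinate/ion files) sit in the submission directory; the node, task and thread counts are placeholders to tune for your runs.

#!/bin/bash
#SBATCH --nodes=2               # placeholder values -- tune for your runs
#SBATCH --ntasks=32
#SBATCH --cpus-per-task=4

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
mpirun -np $SLURM_NTASKS path/to/Conquest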

 

The application metric is the wall-clock time reported as “Total run time”.
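To pull the metric out of a finished run, something like the following works, assuming the default output file name Conquest_out (use your redirected standard-output file if you named it differently):

grep "Total run time" Conquest_out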

Tasks & Submissions

Input:

  1. Run CONQUEST using the above input.

  2. Submit your best result (the standard output), your makefiles, and your build scripts; a packaging sketch follows this list. Do not submit binary output files or multiple results.
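One way to bundle the required files is sketched below; the file names are hypothetical examples, so include whatever output, makefiles and scripts you actually produced.

# Hypothetical file names -- replace with your own
tar czf conquest_submission.tar.gz Conquest_out system.make build_libxc.sh build_conquest.sh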