Overview
CONQUEST is a DFT code designed for large-scale calculations, with excellent parallelisation. It offers an exact diagonalisation approach for systems from 1 to 10,000+ atoms, and makes linear scaling calculations possible on over 1,000,000 atoms. In this task you will be using the linear scaling approach, which can show essentially perfect weak scaling on thousands of cores.
Note: This page may change until the competition starts, so be sure to check back before the opening ceremony.
Building and Running Example
Download v1.2 from https://github.com/OrderN/CONQUEST-release.git.
wget https://github.com/OrderN/CONQUEST-release/releases/download/v1.2/CONQUEST-release-1.2.tar.gz
Download libxc 6.2.2 from https://www.tddft.org/programs/libxc/download/.
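For example, once both archives are in the current directory, unpack them as follows (the libxc tarball name is an assumption based on the version above):
# Unpack both source trees
tar xzf CONQUEST-release-1.2.tar.gz
tar xzf libxc-6.2.2.tar.gz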
Prerequisites
FFTW/MKL package
ScaLAPACK
Build libxc:
# Load Intel compiler and MPI modules
cd libxc-6.2.2
./configure --prefix=<path> CC=mpicc FC=mpif90
make
make install
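The "Load Intel compiler and MPI modules" step is site-specific; on many Intel-based clusters it might look like the following (module names are an assumption, check your site's module list):
module load intel intel-mpi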
Build Conquest:
# Load Intel compiler and MPI modules
cd CONQUEST-release/src
# Edit system.make for the XC library and include paths, and the FFT & BLAS libraries
# Add the correct OpenMP flag (-qopenmp for Intel) to the compile and link arguments
# Set MULT_KERN to ompGemm
make
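As a rough guide, the system.make edits might look like the sketch below. The variable names follow the template shipped in src/system.make; the paths are placeholders and the MKL/FFTW choices are assumptions for an Intel toolchain, not a drop-in file:
# Sketch of system.make settings (Intel toolchain assumed)
FC = mpif90
COMPFLAGS = -O3 -qopenmp $(XC_COMPFLAGS)
LINKFLAGS = -qopenmp
XC_LIB = -L<path>/lib -lxcf90 -lxc    # libxc built in the previous step
XC_COMPFLAGS = -I<path>/include
FFT_LIB = -lfftw3                     # or MKL's FFTW interface
BLAS = -lmkl_rt                       # BLAS/LAPACK; MKL shown as an assumption
MULT_KERN = ompGemm                   # OpenMP-threaded multiply kernel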
Running Conquest:
You will need to set the number of threads per process for OpenMP as well as the number of MPI processes.
export OMP_NUM_THREADS=XX
mpirun -np YY path/to/Conquest
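On a Slurm system (the scheduler choice is an assumption), a hybrid launch on one node with, say, 16 MPI processes of 4 threads each might look like this sketch:
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --cpus-per-task=4
# One OpenMP thread per allocated core of each MPI rank
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
mpirun -np $SLURM_NTASKS path/to/Conquest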
The application metric is the wall-clock time reported as “Total run time” in the CONQUEST output.
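CONQUEST writes its main output to a file named Conquest_out by default; assuming that name, the metric can be pulled out with:
grep "Total run time" Conquest_out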
Tasks & Submissions
Input:
The virtual task involves performing linear scaling calculations on samples of bulk silicon with different numbers of atoms. CONQUEST shows weak scaling when the number of atoms per MPI process is kept fixed and the number of processes is scaled with the system size (number of atoms). You have been provided with three inputs: 512 atoms (si_444.xtl), 1728 atoms (si_666.xtl), and 4096 atoms (si_888.xtl). The minimum number of atoms per MPI process is 8; the maximum will be dictated by memory limitations. The simplest way to examine weak scaling is to keep the product of MPI processes and OpenMP threads per process constant and vary the system size. You might also explore the effect of under-populating nodes where that is possible.
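For example, at 64 atoms per MPI process the three inputs map to 8, 27, and 64 processes (512/64, 1728/64, 4096/64). A sketch of such a sweep, assuming the coordinate file is selected via the IO.Coordinates flag in Conquest_input:
# Weak scaling at a fixed 64 atoms per MPI process
for run in "si_444.xtl 8" "si_666.xtl 27" "si_888.xtl 64"; do
  set -- $run    # $1 = coordinate file, $2 = MPI processes
  sed -i "s|^IO.Coordinates.*|IO.Coordinates $1|" Conquest_input
  mpirun -np $2 path/to/Conquest
  cp Conquest_out Conquest_out.$1    # keep each run's output
done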
Find the best balance between OpenMP threads and MPI processes, and show your work in the team’s interview presentation. Investigate how the weak scaling changes as the MPI/OpenMP balance is varied.
Run CONQUEST with si_888.xtl on 4 nodes and submit the results to the team’s folder.
Run an IPM profile (or any other MPI profiler) on 4 nodes, find the 3 most used MPI calls, and show your work in the team’s interview presentation (see the profiling sketch after this list).
Run the application on 1, 2, and 4 nodes (for the si_888.xtl input) and present a strong scaling graph in the team’s interview presentation.
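For the profiling task, one common way to attach IPM without relinking is to preload its library; the library path and environment variables below are assumptions, so check your site’s IPM installation:
# Preload IPM so it intercepts MPI calls (mpirun is assumed to forward the environment)
export IPM_REPORT=full
export LD_PRELOAD=/path/to/libipm.so
mpirun -np YY path/to/Conquest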