Overview
OpenMX (Open source package for Material eXplorer) is a software package for nano-scale material simulations based on density functional theory (DFT), norm-conserving pseudopotentials, and pseudo-atomic localized basis functions.
Website: https://www.openmx-square.org/index.html
Build
After extracting the files from the source3.962.tar.gz tarball, first create a directory called work and then go into the source3.962 directory. Edit the makefile for the compilers and MPI on the system you will be using. For example, on the HPC-AI Advisory Council’s Rome cluster, using Intel 2023.2 compilers, Intel MKL, and HPC-X 2.19, the following shows the differences between the modified makefile and the original one. Note that -xHOST is replaced by -march=core-avx2 because the Rome nodes have AMD EPYC CPUs, MKLROOT is taken from the environment rather than hard-coded, and openmx is linked with the Fortran compiler (hence -nofor-main), which makes the explicit MPI Fortran and Intel runtime libraries in LIB unnecessary:
[gerardo@login02 source3.962]$ diff makefile makefile_orig
9,13c9,12
< ## MKLROOT = /opt/intel/mkl
< CC = mpicc -O3 -march=core-avx2 -ip -no-prec-div -qopenmp -diag-disable=10441 -I${MKLROOT}/include/fftw
< FC = mpif90 -O3 -march=core-avx2 -ip -no-prec-div -qopenmp
< ## LIB= -L${MKLROOT}/lib/intel64 -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lmkl_blacs_openmpi_lp64 -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lifcore -liomp5 -lpthread -lm -ldl
< LIB= -L${MKLROOT}/lib/intel64 -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lmkl_blacs_openmpi_lp64
---
> MKLROOT = /opt/intel/mkl
> CC = mpicc -O3 -xHOST -ip -no-prec-div -qopenmp -I/opt/intel/mkl/include/fftw/
> FC = mpif90 -O3 -xHOST -ip -no-prec-div -qopenmp
> LIB= -L${MKLROOT}/lib/intel64 -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lmkl_blacs_openmpi_lp64 -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lifcore -liomp5 -lpthread -lm -ldl
250,251c249
< 	$(FC) $(OBJS) $(LIB) -lm -nofor-main -o openmx
< ## 	$(CC) $(OBJS) $(LIB) -lm -o openmx
---
> 	$(CC) $(OBJS) $(LIB) -lm -o openmx
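For the makefile above to work, the matching toolchain has to be loaded first. A minimal sketch, assuming typical module names (the exact names are assumptions and vary from site to site):

# Load the toolchain used above; check `module avail` for the real names
module load intel/2023.2
module load mkl
module load hpcx/2.19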
To build, just execute make install. If everything goes well, there will be an executable called openmx in the ../work directory you created earlier.
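Putting the whole sequence together, a minimal sketch (assuming the tarball sits in the current directory):

tar xzf source3.962.tar.gz     # unpack; creates the source3.962 directory
mkdir work                     # the binary will be installed here
cd source3.962
# edit the makefile as shown above, then:
make install
ls ../work/openmx              # the executable should now exist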
Tasks and Submissions
You can practice building and running with a tiny input, Methane, as shown below.
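A minimal smoke test, assuming the Methane.dat input that ships with the OpenMX distribution (the exact file and data paths are assumptions for your tree):

cd ../work
# Methane.dat is the small test input from the OpenMX distribution;
# make sure its DATA.PATH keyword points at the pseudopotential data directory
mpirun -np 2 ./openmx Methane.dat -nt 2
# Output files are named after the System.Name keyword in the input
# (met.out etc. for the stock Methane.dat)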
Run the application on 4 CPU nodes with the NVC input and submit the results in the team’s folder. Try varying the number of OpenMP threads per MPI rank to find your optimal run; a sketch of such a job script is shown below.
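A minimal Slurm sketch for the 4-node hybrid MPI+OpenMP run. The partition name, the ranks-per-node/threads-per-rank split, and NVC.dat as a stand-in for the actual NVC input file are all assumptions to adapt:

#!/bin/bash
#SBATCH -N 4                    # the task asks for 4 CPU nodes
#SBATCH --ntasks-per-node=32    # MPI ranks per node (assumption; tune together with threads)
#SBATCH --cpus-per-task=4       # OpenMP threads per rank (assumption; tune)
#SBATCH -p rome                 # partition name is an assumption

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
cd work
# openmx's -nt flag sets the OpenMP thread count per MPI rank;
# NVC.dat is a placeholder for the actual NVC input file
mpirun -np $SLURM_NTASKS ./openmx NVC.dat -nt $OMP_NUM_THREADS

Keep ranks-per-node times threads-per-rank equal to the physical cores per node, and compare the elapsed times reported at the end of the .out file across the different splits.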
Run an MPI profiler to profile the application. Which three MPI calls are used the most? Include your results in the team's interview PPT slides.
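One lightweight option is IPM, preloaded inside the job script sketched above (the IPM installation path is an assumption; mpiP and similar LD_PRELOAD profilers work the same way):

# Preload IPM so it intercepts and times every MPI call (path is an assumption)
export IPM_REPORT=full
mpirun -np $SLURM_NTASKS -x LD_PRELOAD=/path/to/ipm/lib/libipm.so ./openmx NVC.dat -nt $OMP_NUM_THREADS
# At MPI_Finalize, IPM prints a summary ranking the MPI routines by time,
# from which the three most-used calls can be read off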
Visualize the results and create a short video that demonstrates the given input, using ParaView or any other suitable tool. Include it in the team’s interview PPT slides. If you have an X (Twitter) account, please tag the video/photo with your team name and the hashtags “#ISC25 #ISC25_SCC” prior to the team interview.