Getting Started with OpenMX for ISC25 SCC

Overview

OpenMX (Open source package for Material eXplorer) is a software package for nano-scale material simulations based on density functional theory (DFT), norm-conserving pseudopotentials, and pseudo-atomic localized basis functions.

Website: https://www.openmx-square.org/index.html

 


Build

Download version 3.9 and keep its data folder, DFT_DATA19. The bundled source directory will be replaced by the 3.962 source in the next step, so it can be removed.

wget https://www.openmx-square.org/openmx3.9.tar.gz
tar xfp openmx3.9.tar.gz
cd openmx3.9
rm -rf source
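Before going further, it can be worth confirming that the database folder survived the cleanup. A minimal check (the PAO and VPS subdirectory names follow the OpenMX manual):

# Confirm the database folder is intact after removing the old source
ls -d DFT_DATA19
ls DFT_DATA19        # expect subdirectories such as PAO and VPS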

Download and extract the version 3.962 source, source3.962.tar.gz.

tar xfp source3.962.tar.gz
cd source3.962

Note: for the competition, use version 3.962, not v3.9.9.

 

Edit the makefile for the compilers and MPI on the system you will be using. The following shows the differences between the modified makefile and the original one:

$ diff makefile makefile.orig
9,12c9,12
< # MKLROOT = /opt/intel/mkl
< CC = mpicc -O3 -march=core-avx2 -ip -no-prec-div -qopenmp -I${MKLROOT}/include/fftw -diag-disable=10441
< FC = mpif90 -O3 -march=core-avx2 -ip -no-prec-div -qopenmp -diag-disable=10441
< LIB= -mkl -lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64
---
> MKLROOT = /opt/intel/mkl
> CC = mpicc -O3 -xHOST -ip -no-prec-div -qopenmp -I/opt/intel/mkl/include/fftw/
> FC = mpif90 -O3 -xHOST -ip -no-prec-div -qopenmp
> LIB= -L${MKLROOT}/lib/intel64 -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lmkl_blacs_openmpi_lp64 -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lifcore -liomp5 -lpthread -lm -ldl
249c249
<	$(FC) $(OBJS) $(LIB) -lm -nofor-main -o openmx
---
>	$(CC) $(OBJS) $(LIB) -lm -o openmx
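In other words, lines 9-12 of the edited makefile (the < side of the diff) end up as follows. This assumes MKLROOT is exported by your Intel module rather than hard-coded, which is why the first line is commented out:

# MKLROOT = /opt/intel/mkl
CC = mpicc -O3 -march=core-avx2 -ip -no-prec-div -qopenmp -I${MKLROOT}/include/fftw -diag-disable=10441
FC = mpif90 -O3 -march=core-avx2 -ip -no-prec-div -qopenmp -diag-disable=10441
LIB= -mkl -lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64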

To build openmx using HPC-X and the Intel compilers on PSC:

module load intel-oneapi/2023.2.1
module load intel-compiler/2023.2.1
module use /ocean/projects/cis240152p/shared/hpcx-2.22-icc/modulefiles
module load hpcx
export OMPI_CC=icc
export OMPI_FC=ifort
make install

If everything goes well, there will be an executable called openmx in the ../work directory. (See also Installation tips.)
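A quick way to confirm the build picked up MKL and your MPI (a sketch; library paths and names will differ per system):

# Check the executable exists and that MKL/MPI libraries resolve
ls -l ../work/openmx
ldd ../work/openmx | grep -Ei 'mkl|mpi'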

You can build your own Open MPI with the latest Intel compilers or use any other MPI.
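If you go that route, a minimal sketch of an Open MPI build with the Intel classic compilers (the version and install prefix here are illustrative assumptions):

# Hypothetical Open MPI build; pick your own version and prefix
wget https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.6.tar.gz
tar xf openmpi-4.1.6.tar.gz
cd openmpi-4.1.6
./configure CC=icc CXX=icpc FC=ifort --prefix=$HOME/sw/openmpi-intel
make -j 8 install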

You will also need the DFT_DATA19 directory, which contains the database needed by OpenMX, at the same level as your work and source3.962 directories.
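The resulting layout looks like this:

openmx3.9/
├── DFT_DATA19/    # PAO/VPS database kept from openmx3.9.tar.gz
├── source3.962/   # patched source you compiled
└── work/          # openmx executable lands here after make install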

Sample output:

The number of threads in each node for OpenMP parallelization is 1.

*******************************************************
*******************************************************
 Welcome to OpenMX   Ver. 3.9.23
 Copyright (C), 2002-2019, T. Ozaki
 OpenMX comes with ABSOLUTELY NO WARRANTY.
 This is free software, and you are welcome to
 redistribute it under the constitution of the GNU-GPL.
*******************************************************
*******************************************************

Automatic determination of Kerker_factor:  1.072808181156

<Input_std>  Your input file was normally read.
<Input_std>  The system includes 3 species and 64 atoms.

*******************************************************
                     PAO and VPS
*******************************************************

<SetPara_DFT>  PAOs of species C were normally found.
<SetPara_DFT>  PAOs of species N were normally found.
...
 The SCF was achieved at MD= 1
*******************************************************
             MD or geometry opt. at MD = 1
*******************************************************

outputting data on grids to files...

***********************************************************
***********************************************************
               Computational Time (second)
***********************************************************
***********************************************************

                               Min_ID   Min_Time   Max_ID   Max_Time
   Total Computational Time =     387    271.764        0    272.818
   readfile                 =       8      2.722      344      2.733
   truncation               =     105      3.859        6      4.736
   MD_pac                   =     507      0.003       27      0.006
...

Tasks and Submissions

You can practice building and running with a tiny input, Methane, below.

cd ../work
# Copy the input here
mpirun -np <# procs> <mpi flags> ./openmx Methane.dat
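For instance, a hybrid MPI/OpenMP test run could look like this (rank and thread counts are placeholders to tune; the -nt flag sets the OpenMP thread count, per the OpenMX manual):

export OMP_NUM_THREADS=2
mpirun -np 8 ./openmx Methane.dat -nt 2 | tee methane.std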
  1. Run the application on 4 CPU nodes with the NVC input and submit the results in the team’s folder. Try varying the number of OpenMP threads per MPI rank to find your optimal run. Submit the standard outputs.

  2. Run an MPI profiler to profile the application. Which three MPI calls are used the most? Include your results in the team's interview ppt slides (see the profiling sketch after this list).

 

  3. Visualize the results and create a short video that demonstrates the given input via OpenMX Viewer (https://www.openmx-square.org/viewer/, just drag and drop). Include it in the team’s interview ppt slides. If you have an X (Twitter) account, please tag the video/photo with your team name and the hashtags “#ISC25 #ISC25_SCC” prior to the team interview.
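For the profiling task, one hedged option is an LD_PRELOAD-based profiler such as mpiP; the library path, rank/thread counts, and the NVC input file name below are all assumptions to adapt to your setup (HPC-X also ships its own profiling tools):

# Hypothetical mpiP run; point LD_PRELOAD at your libmpiP.so
export LD_PRELOAD=/path/to/libmpiP.so
mpirun -np 32 ./openmx NVC.dat -nt 4
unset LD_PRELOAD
# mpiP writes a *.mpiP text report with time spent per MPI call;
# the top three entries answer the interview question.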