Overview
POT3D is a Fortran code that computes potential field solutions to approximate the solar coronal magnetic field using observed photospheric magnetic fields as a boundary condition. It can be used to generate potential field source surface (PFSS), potential field current sheet (PFCS), and open field (OF) models. It has been (and continues to be) used for numerous studies of coronal structure and dynamics. The code is highly parallelized using MPI and is GPU-accelerated using Fortran standard parallelism (do concurrent) and OpenACC, along with an option to use the NVIDIA cuSparse library. The HDF5 file format is used for input/output.
Presentation
(Presentation slides and the attached file are embedded on the original page.)
Downloading and compiling POT3D
Code Block
git clone https://github.com/predsci/POT3D
There are sample build scripts in the build_examples directory. We chose to start with build_examples/build_cpu_mpi-only_intel_ubuntu20.04.sh (this is for CPU-only runs).

Copy and rename build_examples/build_cpu_mpi-only_intel_ubuntu20.04.sh to the POT3D package root directory:

cp ./build_examples/build_cpu_mpi-only_intel_ubuntu20.04.sh ./rebuild.sh

Build and/or load the HDF5 library, then modify rebuild.sh to point to the location of the HDF5 library (see the sketch below).

Execute the build script:

./rebuild.sh
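As a minimal sketch of the rebuild.sh edit, assuming HDF5 is installed under a hypothetical prefix $HOME/opt/hdf5 (the variable names follow the sample scripts shown below; the exact library flags can be version dependent):

Code Block
# Hypothetical HDF5 install prefix -- adjust to your own HDF5 location:
HDF5_ROOT="$HOME/opt/hdf5"

# Location of local hdf5 installed with same compiler being used for POT3D:
HDF5_INCLUDE_DIR="$HDF5_ROOT/include"
HDF5_LIB_DIR="$HDF5_ROOT/lib"

# Fortran HDF5 library flags (these can be version dependent):
HDF5_LIB_FLAGS="-lhdf5_fortran -lhdf5_hl_fortran -lhdf5 -lhdf5_hl"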
Sample build script for Fritz:
We used modules for MPI and HDF5 on Fritz.
Code Block
module load intel/2022
# MPI=openmpi-4.1.2-intel2021.4.0
MPI=intelmpi-2021.7.0

if [[ "$MPI" =~ ^intel ]]; then
    module load hdf5/1.10.7-impi-intel
    export I_MPI_F90=ifort
else
    module load hdf5/1.10.7-ompi-intel
    export OMPI_MPIF90=ifort
fi

#################################################################
# Location of local hdf5 installed with same compiler being used for POT3D:
HDF5_INCLUDE_DIR="$HDF5_ROOT/include"
HDF5_LIB_DIR="$HDF5_ROOT/lib"

# Fortran HDF5 library flags (these can be version dependent):
HDF5_LIB_FLAGS="-lhdf5_fortran -lhdf5_hl_fortran -lhdf5 -lhdf5_hl"
...
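Before running ./rebuild.sh with this script, it can help to confirm that the HDF5 module actually exports HDF5_ROOT. A quick check, assuming the module names used above (they may differ on your system):

Code Block
# Module names are those assumed in the script above and may differ on your system:
module load intel/2022 hdf5/1.10.7-impi-intel
echo "$HDF5_ROOT"                 # should print the HDF5 install prefix
ls "$HDF5_ROOT"/include/*.mod     # Fortran module files must be present
./rebuild.sh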
Sample build script for PSC:
Download HPC-X as described in the ISC23 SCC "Getting Started with Bridges-2 Cluster" guide.
Download and install HDF5 before building POT3D.
Code Block
source /jet/packages/intel/oneapi/compiler/2022.1.0/env/vars.sh
source /jet/packages/intel/oneapi/mkl/2022.1.0/env/vars.sh

# MPI=intelmpi
MPI=hpcx

if [[ "$MPI" =~ ^hpcx ]]; then
    module use $HOME/hpcx-2.13.1/modulefiles
    module load hpcx
    export OMPI_MPICC=icc
    export OMPI_MPICXX=icpc
    export OMPI_MPIFC=ifort
    export OMPI_MPIF90=ifort
    export FC=mpif90
else
    source /jet/packages/intel/oneapi/mpi/2021.6.0/env/vars.sh
    export FC=mpiifort
fi

#################################################################
# Location of local hdf5 installed with same compiler being used for POT3D:
HDF5_INCLUDE_DIR="$HDF5_ROOT/include"
HDF5_LIB_DIR="$HDF5_ROOT/lib"

# Fortran HDF5 library flags (these can be version dependent):
HDF5_LIB_FLAGS="-lhdf5_fortran -lhdf5_hl_fortran -lhdf5 -lhdf5_hl"
...
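Since HDF5 has to be installed manually on Bridges-2 before building POT3D, a minimal sketch of a parallel HDF5 build with the same compiler/MPI stack might look like the following. The HDF5 version and the install prefix $HOME/opt/hdf5 are assumptions; adjust them to whatever you downloaded:

Code Block
# Build HDF5 with the MPI wrappers that will also build POT3D (paths/versions are hypothetical):
tar xf hdf5-1.12.1.tar.gz && cd hdf5-1.12.1
CC=mpicc FC=mpif90 ./configure --prefix=$HOME/opt/hdf5 \
    --enable-fortran --enable-parallel
make -j 8 && make install
# Then point HDF5_ROOT (and hence HDF5_INCLUDE_DIR/HDF5_LIB_DIR in rebuild.sh) at $HOME/opt/hdf5.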
Running an Example
The following commands will run a test case with NP
MPI ranks and validate that the code is working.
Code Block
NP=1
cd testsuite
POT3D_HOME=$PWD/..
TEST="small"

cp ${POT3D_HOME}/testsuite/${TEST}/input/* ${POT3D_HOME}/testsuite/${TEST}/run/
cd ${POT3D_HOME}/testsuite/${TEST}/run

echo "Running POT3D with $NP MPI rank..."
mpirun -np $NP ${POT3D_HOME}/bin/pot3d > pot3d.log
echo "Done!"

# Get runtime:
runtime=($(tail -n 5 timing.out | head -n 1))
echo "Wall clock time: ${runtime[6]} seconds"
echo " "

# Validate run:
${POT3D_HOME}/scripts/pot3d_validation.sh pot3d.out ${POT3D_HOME}/testsuite/${TEST}/validation/pot3d.out
Task and Submission
Use the input under the testsuite/isc2023 folder.

Run POT3D with the isc2023 input on both the PSC Bridges-2 and FAU Fritz CPU clusters using 4 nodes. Experiment with the number of ranks per socket / NUMA domain to get the best results (a batch-script sketch follows the expected output below).

Your job should converge in 25112 iterations and print output like the following:

Code Block
### The CG solver has converged.
Iteration: 25112 Residual: 9.972489313728662E-13
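As a starting point, a SLURM batch script for the 4-node CPU run might look like the sketch below. Everything cluster-specific here is an assumption to be adapted: the partition name, the 72 cores per Fritz node, the POT3D install path, and the mpirun mapping options (shown in Open MPI / HPC-X syntax; Intel MPI uses different options).

Code Block
#!/bin/bash
#SBATCH --job-name=pot3d_isc2023
#SBATCH --nodes=4                 # the task requires 4 nodes
#SBATCH --ntasks-per-node=72      # assumption: 72 cores per Fritz node; adjust per cluster
#SBATCH --time=02:00:00           # hypothetical wall-time limit
#SBATCH --partition=multinode     # hypothetical partition name

POT3D_HOME=$HOME/POT3D            # hypothetical install location

# Assumes pot3d.dat and the isc2023 input files were copied into the submit directory.
# Experiment with rank placement, e.g. ranks per socket (Open MPI / HPC-X syntax):
mpirun -np $SLURM_NTASKS --map-by ppr:36:socket --bind-to core \
    ${POT3D_HOME}/bin/pot3d > pot3d.log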
Profile a run.
Use any of the remote clusters to run an MPI profile (with IPM or any other profiler) of a run with the isc2023 input using 4 nodes with full PPN (processes per node). Submit the profile as a PDF to the team's folder. In your presentation, also indicate the three main MPI calls that are used, their run times, and the total MPI time for the test.
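For example, with IPM the profile can often be collected without recompiling by preloading the IPM library. The library path below is a placeholder; IPM has to be available or built on the cluster first:

Code Block
# Collect an MPI profile with IPM via LD_PRELOAD (path to libipm.so is hypothetical):
export IPM_REPORT=full            # print a full per-call MPI summary at the end of the run
export IPM_LOG=full               # also write an XML log for post-processing
export LD_PRELOAD=$HOME/ipm/lib/libipm.so

mpirun -np $SLURM_NTASKS ${POT3D_HOME}/bin/pot3d > pot3d.log

# Turn the XML log into an HTML report (which can then be exported to PDF);
# the exact log file name depends on the IPM configuration:
ipm_parse -html *.ipm.xml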
Bonus task: Run POT3D on a GPU partition (the Alex A100 partition or the PSC Bridges-2 V100 partition).
Use only 4 GPUs for the run; it is recommended to use one rank per GPU.
Submit the results to the team's folder.
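A sketch of the corresponding GPU batch request, assuming a SLURM cluster with an a100 partition and one MPI rank per GPU (the partition and gres names are assumptions):

Code Block
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --partition=a100          # hypothetical GPU partition name
#SBATCH --gres=gpu:a100:4         # request 4 GPUs, as required by the bonus task
#SBATCH --ntasks-per-node=4       # one MPI rank per GPU, as recommended
#SBATCH --time=01:00:00

POT3D_HOME=$HOME/POT3D            # hypothetical location of the nvfortran build (see the note below)

mpirun -np 4 ${POT3D_HOME}/bin/pot3d > pot3d.log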
NOTE: To compile and run POT3D with the nvfortran compiler, you must load and/or build the HDF5 library compiled with nvfortran. The code is known to work with HDF5 1.8.21 (http://portal.hdfgroup.org/display/support/HDF5+1.8.21). An example build script for POT3D with the NVIDIA compiler can be found in build_examples/build_gpu_nv22.3_ubuntu20.04.sh. Note that you do NOT need to enable the cusparse option because the test case is not set up to use the algorithm that requires cusparse. Therefore, if linking cusparse is causing difficulties, you can change the build script line POT3D_CUSPARSE=1 to POT3D_CUSPARSE=0.
Submission and Presentation:
- Submit all your build scripts, run scripts, inputs, and output text files (pot3d.dat, pot3d.out, timing.out, etc.).
- Do not submit the output HDF5 data or the source code.
- Prepare slides for the team’s interview based on your work for this application.