
Overview

...

POT3D is a Fortran code that computes potential field solutions to approximate the solar coronal magnetic field using observed photospheric magnetic fields as a boundary condition. It can be used to generate potential field source surface (PFSS), potential field current sheet (PFCS), and open field (OF) models. It has been (and continues to be) used for numerous studies of coronal structure and dynamics. The code is highly parallelized using MPI and is GPU-accelerated using Fortran standard parallelism (do concurrent) and OpenACC, along with an option to use the NVIDIA cuSparse library. The HDF5 file format is used for input/output.

Presentation

Presentation video: https://www.youtube.com/watch?v=o-gRv3kVXfY

Attached file: ISC_SCC_POT3D.pdf

Downloading and compiling POT3D

Code Block
git clone https://github.com/predsci/POT3D

There are sample build scripts in the build_examples directory.

We chose to start with build_examples/build_cpu_mpi-only_intel_ubuntu20.04.sh (this is for CPU-only runs).

  1. Copy and rename build_examples/build_cpu_mpi-only_intel_ubuntu20.04.sh to the POT3D package root directory:
    cp ./build_examples/build_cpu_mpi-only_intel_ubuntu20.04.sh ./rebuild.sh

  2. Build and/or load the HDF5 library.

  3. Modify rebuild.sh to point to the location of the HDF5 library.

  4. Execute the build script:
    ./rebuild.sh
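As a concrete sketch of the HDF5 edit, these are the kinds of variables the build script expects to be pointed at your install. The /opt/hdf5 path here is an assumption for illustration; the flag list matches the sample scripts below and can be version dependent.

```shell
#!/bin/bash
# Point the build at a local HDF5 install.
# NOTE: /opt/hdf5 is an assumed path -- substitute your own HDF5 location.
HDF5_ROOT="/opt/hdf5"
HDF5_INCLUDE_DIR="$HDF5_ROOT/include"
HDF5_LIB_DIR="$HDF5_ROOT/lib"
# Fortran HDF5 library flags (these can be version dependent):
HDF5_LIB_FLAGS="-lhdf5_fortran -lhdf5_hl_fortran -lhdf5 -lhdf5_hl"
# Show the include/link line the compiler wrapper will receive:
echo "-I$HDF5_INCLUDE_DIR -L$HDF5_LIB_DIR $HDF5_LIB_FLAGS"
```

The important point is that the include directory, library directory, and library flags must all come from the same HDF5 build, compiled with the same compiler you use for POT3D.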

Sample build script for Fritz:

...

Code Block
# MPI=openmpi-4.1.2-intel2021.4.0
MPI=intelmpi-2021.7.0
if [[ "$MPI" =~ ^intel ]]; then
        module load hdf5/1.10.7-impi-intel
        export I_MPI_F90=ifort
else
        module load hdf5/1.10.7-ompi-intel
        export OMPI_MPIF90=ifort
fi
#################################################################

# Location of local hdf5 installed with same compiler being used for POT3D:
HDF5_INCLUDE_DIR="$HDF5_ROOT/include"
HDF5_LIB_DIR="$HDF5_ROOT/lib"
# Fortran HDF5 library flags (these can be version dependent):
HDF5_LIB_FLAGS="-lhdf5_fortran -lhdf5_hl_fortran -lhdf5 -lhdf5_hl"
...

Sample build script for PSC:

Download HPC-X as described on the ISC23 SCC "Getting Started with Bridges-2 Cluster" page.

Download and install HDF5 before building POT3D.

Code Block
source /jet/packages/intel/oneapi/compiler/2022.1.0/env/vars.sh
source /jet/packages/intel/oneapi/mkl/2022.1.0/env/vars.sh

# MPI=intelmpi
MPI=hpcx
if [[ "$MPI" =~ ^hpcx ]]; then
        module use $HOME/hpcx-2.13.1/modulefiles
        module load hpcx
        export OMPI_MPICC=icc
        export OMPI_MPICXX=icpc
        export OMPI_MPIFC=ifort
        export OMPI_MPIF90=ifort
        export FC=mpif90
else
        source /jet/packages/intel/oneapi/mpi/2021.6.0/env/vars.sh
        export FC=mpiifort
fi
#################################################################

# Location of local hdf5 installed with same compiler being used for POT3D:
HDF5_INCLUDE_DIR="$HDF5_ROOT/include"
HDF5_LIB_DIR="$HDF5_ROOT/lib"
# Fortran HDF5 library flags (these can be version dependent):
HDF5_LIB_FLAGS="-lhdf5_fortran -lhdf5_hl_fortran -lhdf5 -lhdf5_hl"
...

Running an Example

The following commands will run a test case with NP MPI ranks and validate that the code is working.

Code Block
NP=1

cd testsuite

...


POT3D_HOME=$PWD/..
TEST="small"

cp ${POT3D_HOME}/testsuite/${TEST}/input/* ${POT3D_HOME}/testsuite/${TEST}/run/
cd ${POT3D_HOME}/testsuite/${TEST}/run

echo "Running POT3D with $NP MPI rank..."

...

mpirun -np $NP ${POT3D_HOME}/bin/pot3d > pot3d.log
echo "Done!"

# Get runtime:
runtime=($(tail -n 5 timing.out | head -n 1))
echo "Wall clock time: ${runtime[6]} seconds"
echo " "

# Validate run:
${POT3D_HOME}/scripts/pot3d_validation.sh pot3d.out ${POT3D_HOME}/testsuite/${TEST}/validation/pot3d.out
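The wall clock extraction in the run script above takes the fifth line from the end of timing.out and prints its seventh whitespace-separated field. A self-contained sketch of that indexing on a mocked timing.out (the real file's exact layout is assumed here for illustration):

```shell
#!/bin/bash
# Mock a 5-line timing.out whose 5th-from-last line carries the wall clock
# time in its 7th field. The real file's layout may differ; the tail/head
# and array indexing match the run script (bash arrays are 0-indexed).
cat > timing.out <<'EOF'
Wall clock time of entire program: 12.34
---
---
---
---
EOF

runtime=($(tail -n 5 timing.out | head -n 1))
echo "Wall clock time: ${runtime[6]} seconds"
```

If a POT3D version shifts that line or field, the indices in the run script need the same adjustment.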

Task and Submission

  1. Run POT3D with the isc2023 input (under the testsuite/isc2023 folder) on both the PSC Bridges-2 and FAU Fritz CPU clusters using 4 nodes.

    Experiment with the number of ranks per socket/NUMA domain to get the best results. Your job should converge at 25112 steps and print output like the following:

    Code Block
      ### The CG solver has converged.
     Iteration:    25112   Residual:   9.972489313728662E-13

  2. Profile a run.

    1. Use any of the remote clusters to run an MPI profile (such as an IPM profile or any other profiler) for a run using 4 nodes with full PPN.

    2. Submit the profile as a PDF to the team's folder.

    3. In your presentation, indicate the 3 main MPI calls that are being used and their run times, as well as the total MPI time for the test.

  3. Bonus task: Run POT3D on the FAU Alex cluster using the A100 GPU partition.

    1. Use only 4 GPUs for the run. It is recommended to use one rank per GPU.

    2. Submit the results to the team's folder.

    3. NOTE: To compile and run POT3D with the nvfortran compiler, you must load and/or build the HDF5 library compiled with nvfortran. The code is known to work with HDF5 1.8.21 (http://portal.hdfgroup.org/display/support/HDF5+1.8.21).

    4. An example build script for POT3D with the NVIDIA compiler can be found in build_examples/build_gpu_nv22.3_ubuntu20.04.sh.

    5. Note that you do NOT need to enable the cuSparse option, because the test case is not set up to use the algorithm that requires cuSparse. Therefore, if linking cuSparse is causing difficulties, you can change the build script line POT3D_CUSPARSE=1 to POT3D_CUSPARSE=0.

  4. Submission and Presentation:
    - Submit all your build scripts, run scripts, inputs, and output text files (pot3d.dat, pot3d.out, timing.out, etc.)
    - Do not submit the output HDF5 data or the source code.
    - Prepare slides for the team's interview based on your work for this application.
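A minimal Slurm batch sketch for the 4-node CPU run is below. The ranks per node, walltime, module name, and POT3D location are assumptions for illustration; adapt them to the full PPN and module environment of Fritz or Bridges-2 and to your own build.

```shell
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=72   # assumption: set to the cluster's full PPN
#SBATCH --time=01:00:00
#SBATCH --job-name=pot3d_isc2023

# Module name copied from the Fritz build script above; load your own HDF5 elsewhere.
module load hdf5/1.10.7-impi-intel

POT3D_HOME=$HOME/POT3D          # assumption: where you cloned the repository
cd ${POT3D_HOME}/testsuite/isc2023/run

mpirun -np $SLURM_NTASKS ${POT3D_HOME}/bin/pot3d > pot3d.log
```

When experimenting with rank placement, vary --ntasks-per-node (and your MPI library's binding options) rather than --nodes, since the task fixes the node count at 4.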

...