Getting Started with SeisSol for ISC25 SCC

Overview

SeisSol is a software package for simulating elastic wave propagation and dynamic rupture, based on the arbitrary high-order derivative discontinuous Galerkin (ADER-DG) method. SeisSol can use (an)isotropic elastic, viscoelastic, and viscoplastic material models to approximate realistic geological subsurface properties.

Website: https://seissol.org/

Documentation: https://seissol.readthedocs.io/en/latest/

 

Presentation

The accompanying presentation slides are available as a PDF download.

We will be using v1.3.1 for the competition.

git clone --recursive -b v1.3.1 --depth 1 https://github.com/SeisSol/SeisSol.git
cd SeisSol
git submodule update --recursive

Build

Building SeisSol

A guide to installing SeisSol is given in our documentation under https://seissol.readthedocs.io/en/latest/build-overview.html . To summarize, you will need to install the dependencies used in the sample build script below: an MPI implementation, HDF5, NetCDF, yaml-cpp, Lua, easi, ParMETIS, Eigen, and a matrix kernel generator such as LIBXSMM or PSpaMM (ASAGI is optional).

For a GPU build, you will additionally need to install the toolkit for your GPU vendor (CUDA, ROCm, or oneAPI); see the documentation for the full list of GPU-specific dependencies.

For SeisSol itself, you will need to set the target CPU architecture, since our matrix kernel generators (e.g. LIBXSMM, PSpaMM) usually require that information. This is done via the HOST_ARCH parameter. On x86 architectures without AVX512 support, choose hsw, rome, or milan as the host architecture; with AVX512 support, choose skx or bergamo. On ARM machines, we provide the targets neon, sve128, sve256, and sve512 for the corresponding SIMD widths. Setting HOST_ARCH enables code generation for the respective SIMD instructions, but you should also select your exact CPU via the compiler's -mcpu parameter.
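
As an illustration, a CMake configure line for an SVE-capable ARM CPU could look as follows; the -mcpu value here is an assumption, so substitute your actual core:

cmake -DHOST_ARCH=sve256 -DCMAKE_CXX_FLAGS="-mcpu=neoverse-v1" -DCMAKE_BUILD_TYPE=Release ..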

For SeisSol GPU builds, you will also need to set the DEVICE_ARCH and DEVICE_BACKEND parameters. The DEVICE_BACKEND component corresponds to either cuda, hip, or oneapi (depending on your vendor); and the DEVICE_ARCH is set to sm_XY for NVIDIA, gfxXYZW for AMD, or pvc for Intel GPUs.
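
As a sketch for a GPU build (assuming an NVIDIA GPU of compute capability 9.0; adjust both parameters to your hardware and vendor):

cmake -DDEVICE_BACKEND=cuda -DDEVICE_ARCH=sm_90 -DHOST_ARCH=hsw -DCMAKE_BUILD_TYPE=Release ..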

 

As an alternative to the manual setup, we provide a Spack package which installs SeisSol directly (search for seissol at https://packages.spack.io/ ).
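
A minimal sketch, assuming your Spack version is recent enough to ship the seissol package:

spack install seissol
spack load seissol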

For older Spack installations which do not yet contain the SeisSol package, you can instead install most of the dependencies via the Spack environment provided at https://github.com/SeisSol/seissol-spack-aid , and compile SeisSol yourself.

Sample build script for PSC cluster

Here is a sample build script with all the dependencies, using HPC-X and the Intel 2023.2 compilers. You may use these prebuilt libraries or build your own.

First, install PSpaMM:

module load anaconda3
git clone https://github.com/SeisSol/PSpaMM.git
cd PSpaMM
pip install .
module unload anaconda3

Build SeisSol:

module load intel-oneapi/2023.2.1
module load intel-compiler/2023.2.1
module use /ocean/projects/cis240152p/shared/hpcx-2.22-icc/modulefiles
module load hpcx-mt
module use /ocean/projects/cis240152p/shared/ISC25/SeisSol/modfiles
module load hdf5/1.14.1-i232h220
module load netcdf/4.9.2-i232h220
module load yaml-cpp/0.6.3-i232 asagi/1.0-i232h220 lua/5.3.6-i232h220 easi/1.0-i232h220
module load parmetis/4.0.3-i232h220
module load eigen/3.4.0-i232

export PATH=/opt/packages/anaconda3-2024.10-1/bin:$PATH
export CC=mpicc
export CXX=mpicxx
export FC=mpif90
export OMPI_CC=icc
export OMPI_CXX=icpc
export OMPI_FC=ifort
export CXXFLAGS="-diag-disable=10441"
export CFLAGS="-diag-disable=10441"
export LDFLAGS=-lGKlib

cd SeisSol
mkdir build-release
cd build-release
cmake -DNUMA_AWARE_PINNING=ON -DCMAKE_BUILD_TYPE=Release -DHOST_ARCH=rome -DASAGI=OFF -DPRECISION=double -DORDER=4 -DGEMM_TOOLS_LIST=PSpaMM,Eigen ..
make -j 16

Testing Your Installation

Once the build has finished, you will find two binaries in the build-release directory:

  • SeisSol_Release_ARCH

  • SeisSol_proxy_Release_ARCH

To test your installation, you can do two things:

  • Run SeisSol_proxy_Release_ARCH 100000 100 all. This command gives you an idealized performance figure and runs through all kernels once, thereby also verifying that they execute without error.

  • Run SeisSol_Release_ARCH with a parameter file. We provide reference values for some test scenarios in the https://github.com/SeisSol/precomputed-seissol repository, against which you can compare your installation. See the description below on how to do that.
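
For example, a quick sanity check with the proxy, invoked from the build directory (the element and timestep counts follow the first bullet above; ARCH depends on your build configuration):

cd build-release
./SeisSol_proxy_Release_ARCH 100000 100 all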

Pinning and Performance Considerations

For optimal performance, we recommend using one process per NUMA domain. If you use the GPU version, you will need one process per GPU.

Furthermore, we recommend keeping one CPU core per process free for the so-called communication thread, which advances MPI communication and I/O in the background instead of doing compute work. To do so, you will need to set some environment variables, similar to the following snippet:

export SEISSOL_COMMTHREAD=1   # Test with both 1 and 0 for the test run; keep whichever gives higher performance.
CPUS_PER_TASK=16              # Guidance only; adjust this to your node and job layout.
CPU_HYPERTHREADING=1
NUM_CORES=$(expr $CPUS_PER_TASK / $CPU_HYPERTHREADING)
NUM_COMPUTE_CORES=$(expr $NUM_CORES - $SEISSOL_COMMTHREAD)
export OMP_NUM_THREADS="$(expr $NUM_COMPUTE_CORES \* $CPU_HYPERTHREADING)"
export OMP_PLACES="cores($NUM_COMPUTE_CORES)"
export OMP_PROC_BIND=spread
export MP_SINGLE_THREAD=no
unset KMP_AFFINITY
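
To realize the one-process-per-NUMA-domain placement with an Open MPI based stack such as HPC-X, a launch line along the following lines can serve as a starting point; the mapping flags are an assumption, so verify them against your site's launcher and scheduler:

mpirun -n $NUM_RANKS --map-by ppr:1:numa:pe=$NUM_CORES --report-bindings $PATH_TO_SEISSOL_BIN/SeisSol_Release_ARCH parameters.par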

Test Run

After successfully compiling SeisSol, the next step is to test the installation. For that, clone the pre-computed solutions by running

git clone https://github.com/SeisSol/precomputed-seissol.git

This repository contains a collection of pre-computed scenarios, used for testing whenever a new feature is implemented. Here, we will run the tpv33 test case; hence, navigate to the tpv33 directory in the cloned precomputed-seissol repository, as shown below.
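
Assuming you cloned the repository into your current working directory:

cd precomputed-seissol/tpv33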

Inside, you will find the mesh file required to simulate the case. To run SeisSol, simply set the pinning variables as described in the previous section. Then, run SeisSol with the desired communicator size:

mpirun -n $NUM_RANKS $PATH_TO_SEISSOL_BIN/SeisSol_Release_ARCH parameters.par

This will start the simulation; SeisSol prints some progress information to the terminal while running.

Once the simulation is done, the output folder should contain the results of the simulation. To verify the correctness of your installation, you can compare these results with the pre-computed reference. For that, either use ParaView to visualize one of the XDMF files in both scenarios, or use viewrec (located in the SeisSol repository under postprocessing/visualization/receiver/bin) to visualize the point receivers (the txt files), and compare the results for both output folders.
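
For instance, the fault output can be opened directly in ParaView; note that the file name below is hypothetical, since it depends on the OutputFile prefix configured in parameters.par:

paraview output/tpv33-fault.xdmf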

For background information, the description of the scenario can be found here: https://strike.scec.org/cvws/download/TPV33_Description_v04.pdf .

Tasks and Submissions

  1. Run the application on 4 CPU nodes and submit the results.
    Experiment with SEISSOL_COMMTHREAD=0 and SEISSOL_COMMTHREAD=1. Show the differences in the team presentation and submit the best results to the team folder.

    The simulation files are placed here:

  2. Run an MPI profiler to profile the application. Which three MPI calls are used the most? Present your work in the team interview presentation slides. (One possible profiling setup is sketched after this list.)

 

  3. Visualize the results and create a short video showing the simulation via ParaView or any other tool, using this input (EndTime = 150).
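
For the profiling task, one possible approach is to preload the mpiP profiler at run time. This assumes mpiP is installed on your system ($MPIP_ROOT below is a hypothetical path); any other MPI profiler works just as well:

# Hypothetical install location; point this at your mpiP build.
export LD_PRELOAD=$MPIP_ROOT/lib/libmpiP.so
mpirun -n $NUM_RANKS $PATH_TO_SEISSOL_BIN/SeisSol_Release_ARCH parameters.par
# mpiP writes a *.mpiP text report; its aggregate-time table ranks MPI calls by total time.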

 
