
Overview

SeisSol is a software package for simulating elastic wave propagation and dynamic rupture based on the arbitrary high-order accurate derivative discontinuous Galerkin (ADER-DG) method. SeisSol can use (an)isotropic elastic, viscoelastic, and viscoplastic materials to approximate realistic geological subsurface properties.

Website: https://seissol.org/

Documentation: https://seissol.readthedocs.io/en/latest/

Build

Building SeisSol

The list of dependencies needed to install and compile SeisSol can be found under https://seissol.readthedocs.io/en/latest/compiling-seissol.html. Note that the installation into the home directory described there is outdated; we suggest setting an environment variable SEISSOL_PREFIX to a local location and passing -DCMAKE_INSTALL_PREFIX=$SEISSOL_PREFIX to CMake when installing each dependency (a sketch follows the Spack note below). In particular, you will need to install:

For GPUs, you will also need:

Alternatively, we provide a Spack environment under https://github.com/SeisSol/seissol-spack-aid .
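As a minimal sketch of the prefix setup suggested above (the install location is a placeholder, and the additional search-path exports are the usual complements rather than a strict requirement):

export SEISSOL_PREFIX=/path/to/seissol-prefix   # placeholder location; adapt to your system
export PATH=$SEISSOL_PREFIX/bin:$PATH
export LD_LIBRARY_PATH=$SEISSOL_PREFIX/lib:$LD_LIBRARY_PATH
export CMAKE_PREFIX_PATH=$SEISSOL_PREFIX:$CMAKE_PREFIX_PATH

# then, for every dependency built with CMake:
cmake .. -DCMAKE_INSTALL_PREFIX=$SEISSOL_PREFIX
make -j install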

For SeisSol itself, you will need to set the target architecture, since our matrix kernel generators require that information. This is done via the HOST_ARCH parameter. On x86 architectures without AVX512 support, choose hsw, rome, or milan as the host architecture; with AVX512 support, choose skx or bergamo. On ARM machines, we provide the generic targets neon, sve128, sve256, and sve512, corresponding to 128-bit NEON and to 128-, 256-, and 512-bit SVE vectors, respectively. Setting these enables instruction generation for the respective SIMD instruction set. You should also set your CPU using the -mcpu compiler parameter.

For GPUs, you will also need to set the DEVICE_ARCH and DEVICE_BACKEND parameters; a configuration sketch is shown below.
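A configuration sketch, assuming a CUDA-capable GPU and a Haswell-class host CPU (the architecture and backend values here are examples only; pick the ones matching your system and consult the documentation for the full list of options):

cmake .. -DCMAKE_BUILD_TYPE=Release \
         -DCMAKE_INSTALL_PREFIX=$SEISSOL_PREFIX \
         -DHOST_ARCH=hsw \
         -DDEVICE_BACKEND=cuda \
         -DDEVICE_ARCH=sm_80
make -j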

Testing Your Installation

Once the build is finished, you will obtain two binaries:

  • SeisSol_Release_ARCH

  • SeisSol_proxy_Release_ARCH

You can test your installation in two steps:

  • Run SeisSol_proxy_Release_ARCH 100000 100 all. This command runs through all kernels once and gives you an idealized performance figure (a concrete invocation is sketched after this list).

  • Run SeisSol_Release_ARCH with a parameter file. We provide reference values for all scenarios in the https://github.com/SeisSol/precomputed-seissol repository, which you can use to verify your installation.
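For example, for a build with HOST_ARCH set to hsw, the proxy call could look as follows (the exact binary name depends on your configuration):

./SeisSol_proxy_Release_hsw 100000 100 all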

Pinning and Performance Considerations

For optimal performance, we recommend using one process per NUMA domain. If you use the GPU version, you will need one process per GPU.

Furthermore, we recommend keeping one CPU core free per process for the so-called communication thread, which advances MPI communication and I/O while the remaining cores compute. To do so, set environment variables similar to the following snippet:

export SEISSOL_COMMTHREAD=1   # test with both 1 and 0; keep whichever gives higher performance
CPUS_PER_TASK=16              # logical CPUs per MPI rank; guidance only, adjust to your job layout
CPU_HYPERTHREADING=1          # hardware threads per core
NUM_CORES=$(expr $CPUS_PER_TASK / $CPU_HYPERTHREADING)
NUM_COMPUTE_CORES=$(expr $NUM_CORES - $SEISSOL_COMMTHREAD)   # reserve one core for the communication thread
export OMP_NUM_THREADS="$(expr $NUM_COMPUTE_CORES \* $CPU_HYPERTHREADING)"
export OMP_PLACES="cores($NUM_COMPUTE_CORES)"
export OMP_PROC_BIND=spread
export MP_SINGLE_THREAD=no
unset KMP_AFFINITY
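As a sketch of how this maps onto a batch job, assuming a SLURM-based system and nodes with two NUMA domains of 16 cores each (placeholder values; adapt them to your machine):

#SBATCH --nodes=4
#SBATCH --ntasks-per-node=2    # one MPI rank per NUMA domain
#SBATCH --cpus-per-task=16     # matches CPUS_PER_TASK above
mpirun -n 8 $PATH_TO_SEISSOL_BIN/SeisSol_Release_ARCH parameters.par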

Test Run

After successfully compiling SeisSol, we will test it with a small test case. Clone the precomputed-seissol repository:

git clone https://github.com/SeisSol/precomputed-seissol.git

This repository contains a collection of pre-computed scenarios used for testing whenever a new feature is implemented. Navigate to the tpv33 directory inside it; the description of the scenario is available at https://strike.scec.org/cvws/download/TPV33_Description_v04.pdf.
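For example:

cd precomputed-seissol/tpv33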

Inside, you will find the mesh file required to simulate this case. Execute the compiled SeisSol binary with the appropriate MPI and OpenMP settings:

mpirun -n $NUM_RANKS $PATH_TO_SEISSOL_BIN/SeisSol_Release_ARCH parameters.par

This will start the simulation. Once the simulation is done, the folder output contains the output produced by the run. You can compare the result with the pre-computed folder: either use ParaView to visualise one of the xdmf files in both scenarios, or use viewrec (https://github.com/SeisSol/SeisSol/tree/master/postprocessing/visualization/receiver/bin) to visualise the pick-point receivers in both scenarios.

Tasks and Submissions

  1. Run the application on 4 CPU nodes and submit the results.
    Experiment with SEISSOL_COMMTHREAD=0 and SEISSOL_COMMTHREAD=1. Show the differences in the team presentation and submit the best results to the team folder.

    The simulation files are placed here:

  2. Run an MPI profiler to profile the application. Which three MPI calls are used the most? Present your work in the team interview presentation slides.

  3. Visualize the results and create a short video demonstrating the run via ParaView or any other tool, using this input (EndTime 150).
