...
SeisSol is a software package for simulating elastic wave propagation and dynamic rupture, based on the arbitrary high-order accurate derivative discontinuous Galerkin (ADER-DG) method. SeisSol can use (an)isotropic elastic, viscoelastic, and viscoplastic materials to approximate realistic geological subsurface properties.
Website: https://seissol.org/
...
The list of dependencies needed to install and compile can be found at https://seissol.readthedocs.io/en/latest/compiling-seissol.html (note: the installation into the home directory described there is outdated; we suggest setting an environment variable SEISSOL_PREFIX
to a local location and then passing -DCMAKE_INSTALL_PREFIX=$SEISSOL_PREFIX
to CMake when installing all the dependencies). In particular, you will need to install:
Python
MPI
HDF5
NetCDF
yaml-cpp
easi https://github.com/SeisSol/easi
Lua
Eigen
ParMETIS
libxsmm
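With the SEISSOL_PREFIX approach mentioned above, each dependency is configured and installed into the same local prefix. A minimal sketch for one CMake-based dependency, using easi as the example (the exact configure flags vary per package, and the prefix path here is only an assumption):

```shell
# Assumption: a writable local directory used as the common install prefix.
export SEISSOL_PREFIX=$HOME/seissol-env
export PATH=$SEISSOL_PREFIX/bin:$PATH
export CMAKE_PREFIX_PATH=$SEISSOL_PREFIX:$CMAKE_PREFIX_PATH

# Example for a CMake-based dependency such as easi;
# repeat analogously for the other packages.
git clone https://github.com/SeisSol/easi.git
cmake -S easi -B easi-build -DCMAKE_INSTALL_PREFIX=$SEISSOL_PREFIX
cmake --build easi-build -j
cmake --install easi-build
```

Installing everything into one prefix keeps CMAKE_PREFIX_PATH simple when configuring SeisSol itself later.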
...
For SeisSol itself, you will need to set the architecture you are going to use, since our matrix kernel generators usually require that information. That can be done via the HOST_ARCH
parameter. On x86 architectures without AVX512 support, choose either hsw
, rome
, or milan
as host architecture. With AVX512 support, choose skx
or bergamo
. On ARM machines, we have the dummy targets neon
, sve128
, sve256
, and sve512
for the respective SIMD widths. Setting these will enable instruction generation for the corresponding SIMD architecture. You should also set your CPU by using the -mcpu
compiler parameter.
For GPUs, you will also need to set the DEVICE_ARCH
and DEVICE_BACKEND
parameters.
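Putting the above together, a configure step might look like the following sketch. The source/build paths and the hsw target are example assumptions; swap in the architecture of your machine, and add the device parameters only for GPU builds:

```shell
# Assumes SEISSOL_PREFIX points at the prefix where the dependencies were installed.
cmake -S SeisSol -B build \
  -DCMAKE_BUILD_TYPE=Release \
  -DHOST_ARCH=hsw \
  -DCMAKE_PREFIX_PATH=$SEISSOL_PREFIX \
  -DCMAKE_INSTALL_PREFIX=$SEISSOL_PREFIX
# For a GPU build you would additionally pass values for, e.g.,
#   -DDEVICE_BACKEND=... -DDEVICE_ARCH=...
# matching your accelerator.
cmake --build build -j
```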
Testing Your Installation
...
To test your installation, you can do two things:
Run
SeisSol_proxy_Release_ARCH 100000 100 all
. This command will give you an idealized performance figure, and it also runs through all kernels once, thereby verifying that they complete without error.
Run
SeisSol_Release_ARCH
with a parameter file. We provide reference values for some test scenarios in the https://github.com/SeisSol/precomputed-seissol repository, which you can compare your installation against. See the description below on how to do that.
Pinning and Performance Considerations
...
Furthermore, we recommend keeping one CPU core per process free, to be used for the so-called communication thread: it will be used to advance MPI communication and I/O instead. To do so, you will need to set some environment variables, similar to the following snippet:
Code Block
export SEISSOL_COMMTHREAD=1 # Test the run with both 1 and 0; use whichever gives higher performance.
CPUS_PER_TASK=16            # Guidance value; set to the hardware threads per task on your machine.
CPU_HYPERTHREADING=2        # Hardware threads per core; set to 1 if hyperthreading is disabled.
NUM_CORES=$(expr $CPUS_PER_TASK / $CPU_HYPERTHREADING)
NUM_COMPUTE_CORES=$(expr $NUM_CORES - $SEISSOL_COMMTHREAD)
export OMP_NUM_THREADS="$(expr $NUM_COMPUTE_CORES \* $CPU_HYPERTHREADING)"
export OMP_PLACES="cores($NUM_COMPUTE_CORES)"
export OMP_PROC_BIND=spread
export MP_SINGLE_THREAD=no
unset KMP_AFFINITY
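As a worked example of the arithmetic in that snippet: with 16 hardware threads per task, 2-way hyperthreading, and one core reserved for the communication thread, you get 8 physical cores, 7 compute cores, and 14 OpenMP threads (the values here are illustrative, not a recommendation):

```shell
CPUS_PER_TASK=16
CPU_HYPERTHREADING=2
SEISSOL_COMMTHREAD=1
NUM_CORES=$(expr $CPUS_PER_TASK / $CPU_HYPERTHREADING)            # 16 / 2 = 8 physical cores
NUM_COMPUTE_CORES=$(expr $NUM_CORES - $SEISSOL_COMMTHREAD)        # 8 - 1 = 7 compute cores
OMP_NUM_THREADS=$(expr $NUM_COMPUTE_CORES \* $CPU_HYPERTHREADING) # 7 * 2 = 14 OpenMP threads
echo "cores=$NUM_CORES compute=$NUM_COMPUTE_CORES threads=$OMP_NUM_THREADS"
```

With these values, OMP_PLACES becomes cores(7), i.e. the OpenMP threads are placed on the 7 compute cores while the remaining core stays free for the communication thread.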
...
Test Run
After having successfully compiled SeisSol, the next step is to test the installation. For that, clone the pre-computed solutions by running
Code Block
git clone https://github.com/SeisSol/precomputed-seissol.git
This directory has a collection of pre-computed scenarios for testing whenever a new feature is implemented. Here, we will run the tpv33 test case; hence navigate to the tpv33 directory in the downloaded precomputed-seissol
repository.
Inside, you will find the mesh file required to simulate the case. To run SeisSol, simply set the pinning variables as described in the previous section. Then, run SeisSol with the desired communicator size:
mpirun -n $NUM_RANKS $PATH_TO_SEISSOL_BIN/SeisSol_Release_ARCH parameters.par
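On a cluster, the pinning variables and the launch command are typically wrapped in a batch script. A minimal SLURM sketch, assuming 4 nodes with one rank per node; the scheduler flags, core counts, and $PATH_TO_SEISSOL_BIN placeholder are assumptions to adapt to your system:

```shell
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=16
#SBATCH --time=01:00:00

# Pinning setup as described in the previous section.
export SEISSOL_COMMTHREAD=1
CPUS_PER_TASK=$SLURM_CPUS_PER_TASK
CPU_HYPERTHREADING=2
NUM_CORES=$(expr $CPUS_PER_TASK / $CPU_HYPERTHREADING)
NUM_COMPUTE_CORES=$(expr $NUM_CORES - $SEISSOL_COMMTHREAD)
export OMP_NUM_THREADS="$(expr $NUM_COMPUTE_CORES \* $CPU_HYPERTHREADING)"
export OMP_PLACES="cores($NUM_COMPUTE_CORES)"
export OMP_PROC_BIND=spread
unset KMP_AFFINITY

# $PATH_TO_SEISSOL_BIN is a placeholder for your installation directory.
srun $PATH_TO_SEISSOL_BIN/SeisSol_Release_ARCH parameters.par
```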
This will start the simulation, and SeisSol will print some information to the terminal while running.
Once the simulation is done, the folder output
should contain the results of the simulation. To verify the correctness of your installation, you can compare these results with the pre-computed folder. For that, either use ParaView to visualize one of the XDMF files in both scenarios, or use viewrec (https://github.com/SeisSol/SeisSol/tree/master/postprocessing/visualization/receiver/bin) to visualize the point receivers (the txt files), and compare the results for both output folders.
For background information, the description of the scenario can be found here: https://strike.scec.org/cvws/download/TPV33_Description_v04.pdf .
Tasks and Submissions
Run the application on 4 CPU nodes and submit the results.
Experiment with SEISSOL_COMMTHREAD=0
and SEISSOL_COMMTHREAD=1
. Show the differences in the team presentation, and submit the best results to the team folder.
The simulation files are placed here: Turkey (for Benchmark).zip
Run an MPI profiler to profile the application
...
. Which 3 MPI calls are used the most?
...
Present your work in the
...
team interview ppt slides.
Visualize the results, and create a short video demonstrating the given input via ParaView or any other tool, with this input (
EndTime = 150
).
View file
<TBD>
...