...

1. Download (clone) SWIFT from the GitLab site, checking out the specific version shown in the example below.

Code Block
$ git clone https://gitlab.cosma.dur.ac.uk/swift/swiftsim.git && cd swiftsim
$ git checkout 3d44fb65ea39b9f7a2a99525f15c4cd464045c38


2. Change to the main swiftsim directory. There you will find an INSTALL.swift file with instructions for building.


3. Disable snapshot dumps: edit src/engine.c and examples/main.c to comment out the calls to engine_dump_snapshot() (two lines in engine.c, one line in main.c). This disables the two large snapshot dumps (one at the beginning of the run and one at the end), as well as any others in between.

This is done because, for the purposes of the competition, we are interested in the computational portion of the code, not in the large file outputs it would otherwise perform.
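The edit can also be scripted with sed. A minimal sketch, shown on a stand-in file since the real targets are src/engine.c and examples/main.c in your clone; the exact form of the calls there is an assumption:

```shell
# Stand-in source file with a snapshot-dump call (hypothetical contents;
# the real calls live in src/engine.c and examples/main.c).
printf '  engine_dump_snapshot(e);\n  engine_step(e);\n' > demo_engine.c

# Wrap each engine_dump_snapshot() call in a C comment, leaving other
# lines untouched; run the same sed on the real files in your clone.
sed -i 's|^\( *\)\(engine_dump_snapshot(.*;\)|\1/* \2 */|' demo_engine.c

cat demo_engine.c
```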

...

  • Compiler
  • MPI
  • HDF5 library (to read the large input file; version 1.8.x with x ≥ 17 is fine)
  • GSL (GNU Scientific Library, without which SWIFT will not be able to perform cosmological time integration)
  • FFTW (3.3.x, x ≥ 6 is fine)
  • Metis or ParMetis (to optimize load balance between MPI tasks)
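As a quick sanity check before configuring, you can look for the usual front-end commands of these packages on your PATH. The command names below are typical defaults and may differ on your cluster (for instance, you may first need to load environment modules):

```shell
# Typical front-end commands for the prerequisites above (assumed names):
# mpicc (MPI), h5cc (HDF5), gsl-config (GSL), fftw-wisdom (FFTW).
for tool in mpicc h5cc gsl-config fftw-wisdom; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found:   $tool"
  else
    echo "missing: $tool"
  fi
done | tee toolcheck.txt
```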

Configure and Build SWIFT


1. Run ./autogen.sh (only the first time)

2. Configure. The --with-tbbmalloc option is recommended on Xeon-based clusters.

Code Block
./configure --with-metis --with-tbbmalloc --with-fftw=/path/to/fftw


3. make

...

2. Get the initial conditions, a ~30 GB file named EAGLE_ICs_50.hdf5, by running ./getICs.sh (this will only need to be done once).

3. Edit eagle_50.yml to change the value of dt_max from 1.e-2 to 1.e-5. This is done to increase the computational load and provide better scaling.
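If you prefer to script the change, a sed one-liner works. The sketch below runs on a throwaway copy, since the exact indentation of the dt_max line in eagle_50.yml is an assumption:

```shell
# Throwaway stand-in for the relevant part of eagle_50.yml (assumed layout).
printf 'TimeIntegration:\n  dt_max:     1.e-2\n' > eagle_50_demo.yml

# Shrink the maximum time-step from 1.e-2 to 1.e-5; on the real file,
# replace eagle_50_demo.yml with eagle_50.yml.
sed -i 's/\(dt_max: *\)1\.e-2/\11.e-5/' eagle_50_demo.yml

grep dt_max eagle_50_demo.yml
```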

...

Note that SWIFT runs best with only a few MPI tasks per node. Test a few different numbers of tasks per node, such as 1, 2, and 4; performance will vary.


5. The threading model used by SWIFT is not OpenMP. You must tell SWIFT explicitly how many threads to use per MPI task, through the --threads=N option, where N is the desired number of threads.

6. Your command line must include the --cosmology, --hydro, --self-gravity, and --stars options, all of which relate to the physics aspects of the simulation.

...

Code Block
mpirun -np 16 ../../swift_mpi --cosmology --hydro --self-gravity --stars --threads=16 -n 64 eagle_50.yml

(The above line assumes a scheduler and a scheduler-aware MPI build that knows to spawn only two MPI tasks per node. Failing that, a host file would need to be provided.)
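If you do need the host-file route, the sketch below assumes Open MPI's hostfile syntax and hypothetical node names; "slots=2" caps the launcher at two tasks per node:

```shell
# Hypothetical node names; adjust to your cluster. "slots=2" limits the
# launcher to two MPI tasks per node (Open MPI hostfile syntax).
cat > hostfile.txt <<'EOF'
node001 slots=2
node002 slots=2
EOF

# Example launch using the host file (same SWIFT options as above):
# mpirun --hostfile hostfile.txt -np 4 ../../swift_mpi --cosmology \
#   --hydro --self-gravity --stars --threads=16 -n 64 eagle_50.yml
```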


8. There are other SWIFT options you may find useful. Run examples/swift --help to find out what other options are available.

...

Code Block
awk 'BEGIN{tot=0} {tot += $11} END {print 64*3600000/tot}' < timesteps_XXX.txt


Competition

At the start of the actual competition, you will be provided with an alternate set of input files (initial conditions and parameter file), which are not publicly available.  The EAGLE_50 test case above is just for practice.