GROMACS is a versatile package for performing molecular dynamics, i.e., simulating the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have many complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions, many groups also use it for research on non-biological systems.

Some test cases to start with can be found here:

ftp://ftp.gromacs.org/pub/benchmarks/gmxbench-3.0.tar.gz
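
For example, to fetch and unpack the benchmark suite:

wget ftp://ftp.gromacs.org/pub/benchmarks/gmxbench-3.0.tar.gz
tar xfz gmxbench-3.0.tar.gz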

You can use any GROMACS version; more details to come.

For more information about GROMACS, refer to the documentation at http://manual.gromacs.org/documentation/current/index.html.

In the ISC20 competition we will use two input files:

Both are available to download via the teams' Box account and must be run with -nsteps 100000.

Running GROMACS

To build GROMACS 2020.2 with CPU support only:

wget http://ftp.gromacs.org/pub/gromacs/gromacs-2020.2.tar.gz
tar xfz gromacs-2020.2.tar.gz
cd gromacs-2020.2
# <load compilers and MPI>
mkdir build
cd build
cmake .. -DGMX_FFT_LIBRARY=mkl -DMKL_LIBRARIES=-mkl -DMKL_INCLUDE_DIR=$MKLROOT/include \
        -DGMX_SIMD=AVX2_256 \
        -DGMX_MPI=ON \
        -DGMX_BUILD_MDRUN_ONLY=ON \
        -DBUILD_SHARED_LIBS=ON \
        -DCMAKE_INSTALL_PREFIX=<install path> \
        -DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx
make -j 16 install
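
The <load compilers and MPI> step is site-specific. As a sketch, on a cluster that uses environment modules it might look like the following (module names are placeholders; check module avail on your system):

module load intel mkl openmpi
which mpicc     # confirm the MPI compiler wrapper is on the PATH
echo $MKLROOT   # confirm MKL is visible, as the cmake line above expects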

To build with GPU support (requires the CUDA toolkit):

# <load compilers, MPI and CUDA>
# Configure in a separate, clean build directory.
cmake .. -DGMX_FFT_LIBRARY=mkl -DMKL_LIBRARIES=-mkl -DMKL_INCLUDE_DIR=$MKLROOT/include \
        -DGMX_SIMD=AVX2_256 \
        -DGMX_MPI=ON \
        -DGMX_GPU=ON \
        -DGMX_BUILD_MDRUN_ONLY=ON \
        -DBUILD_SHARED_LIBS=ON \
        -DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx \
        -DCMAKE_INSTALL_PREFIX=<install path>
make -j 16 install
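
After make install finishes, you can confirm that GPU support was actually compiled in; the version banner printed by the binary reports the GPU backend (the path is whatever you set as the install prefix):

<install path>/bin/mdrun_mpi -version | grep -i "GPU support"

It should report CUDA.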

To run GROMACS on CPUs only:

mpirun <mpi flags> mdrun_mpi -v -s <inp> -nsteps 100000 -noconfout
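
As an illustrative example only, on two 40-core nodes you might launch 40 ranks with two OpenMP threads each (the input file name and all counts are placeholders to tune for your hardware):

mpirun -np 40 mdrun_mpi -v -s benchmark.tpr -nsteps 100000 -noconfout -ntomp 2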

To run GROMACS with GPUs:

mpirun <mpi flags> mdrun_mpi -v -s <inp> -nsteps 100000 -noconfout -nb gpu -pin on
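
As a sketch for a node with 4 GPUs and 40 cores, one common mapping is one rank per GPU, with the remaining cores filled by OpenMP threads (all values are examples; GROMACS prints its chosen task assignment in md.log):

mpirun -np 4 mdrun_mpi -v -s benchmark.tpr -nsteps 100000 -noconfout \
        -nb gpu -pin on -ntomp 10

With GROMACS 2020 you can also experiment with -pme gpu to offload the PME work.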

Sample output (from a short 1000-step test run):

                      :-) GROMACS - mdrun_mpi, 2020.2 (-:

                            GROMACS is written by:
     Emile Apol      Rossen Apostolov      Paul Bauer     Herman J.C. Berendsen
...

Back Off! I just backed up ener.edr to ./#ener.edr.1#
starting mdrun ''
1000 steps,      2.0 ps.
step 0
vol 0.90  imb F 10% step 100, remaining wall clock time:    76 s
vol 0.96  imb F  2% step 200, remaining wall clock time:    66 s
vol 0.96  imb F  2% step 300, remaining wall clock time:    57 s
vol 0.94  imb F  2% step 400, remaining wall clock time:    49 s
vol 0.95  imb F  1% step 500, remaining wall clock time:    41 s
vol 0.96  imb F  1% step 600, remaining wall clock time:    32 s
vol 0.97  imb F  2% step 700, remaining wall clock time:    24 s
vol 0.96  imb F  1% step 800, remaining wall clock time:    16 s
vol 0.97  imb F  2% step 900, remaining wall clock time:     8 s
vol 0.97  imb F  1%
Writing final coordinates.
step 1000, remaining wall clock time:     0 s


Dynamic load balancing report:
 DLB was turned on during the run due to measured imbalance.
 Average load imbalance: 2.6%.
 The balanceable part of the MD step is 80%, load imbalance is computed from this.
 Part of the total run time spent waiting due to load imbalance: 2.1%.
 Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0 % Y 0 % Z 0 %


NOTE: 7 % of the run time was spent communicating energies,
      you might want to increase some nst* mdp options

               Core t (s)   Wall t (s)        (%)
       Time:     4246.049       88.475     4799.1
                 (ns/day)    (hour/ns)
Performance:        1.955       12.276
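
The Wall t (s) and Performance (ns/day) values requested in the submission below also appear at the end of md.log; a quick way to pull them out:

grep -A1 "Wall t" md.log
grep "Performance:" md.log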

Submission:

Your submission should include:
- A brief explanation of how GROMACS was configured, compiled and deployed
- The Makefile
- The job submission script (PBS; see the sketch after this list)
- stdout, md.log and ener.edr

The output should include Wall t (s) and Performance (ns/day).
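
A minimal PBS script along these lines is one possible starting point (job name, resource requests, modules and the input file name are all placeholders to adapt):

#!/bin/bash
#PBS -N gromacs-bench
#PBS -l nodes=4:ppn=40
#PBS -l walltime=01:00:00
#PBS -j oe

cd $PBS_O_WORKDIR
# <load compilers and MPI, matching the build environment>
mpirun mdrun_mpi -v -s benchmark.tpr -nsteps 100000 -noconfout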