Tinker-HP is a CPU-based, double-precision, parallel package dedicated to long polarizable molecular dynamics simulations and to polarizable QM/MM. Tinker-HP is an evolution of the popular Tinker package that preserves its simplicity of use while adding new capabilities, allowing very long molecular dynamics simulations to be performed on modern supercomputers that use thousands of cores. The Tinker-HP approach offers various strategies: domain decomposition techniques for periodic boundary conditions in the framework of the O(N log N) Smooth Particle Mesh Ewald method, or polarizable continuum solvation simulations through the new-generation ddCOSMO approach. Tinker-HP provides a high-performance, scalable computing environment for polarizable force fields, giving access to large systems of up to millions of atoms.
Running Tinker-HP
To Download Tinker-HP v1.2
wget --no-check-certificate https://tinker-hp.ip2ct.upmc.fr/docrestreint.api/473/8c9b2151cab3a0d442df84ff4228b8dd2fe8b97f/tgz/tinker-hp.v1.2_isc20_competition.tgz
Basic installation instructions can be found in the readme PDF inside the archive.
A basic installation would be:
./configure ; make -j16 ; make install ; cd example ; ./ubiquitin2.run
More tuning options can be found in the PDF.
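For reference, here is a minimal end-to-end sketch of downloading, building, and smoke-testing. The extracted directory name and the FC=mpif90 configure override are assumptions; check the PDF for the options that match your compilers.

$ tar xzf tinker-hp.v1.2_isc20_competition.tgz
$ cd tinker-hp.v1.2_isc20_competition        # extracted directory name may differ
$ ./configure FC=mpif90                      # assumed MPI Fortran wrapper; see the PDF
$ make -j16 && make install
$ cd example && ./ubiquitin2.run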
Output example:
$ ./ubiquitin2.run

    ######################################################################
  ##########################################################################
 ###                                                                      ###
 ###       Tinker-HP --- Software Tools for Molecular Design              ###
 ##                                                                        ##
 ##                    Version 1.2   November 2019                         ##
 ##                                                                        ##
 ##       Copyright (c) Washington University in Saint Louis (WU)          ##
 ##                     The University of Texas at Austin                  ##
 ##                     Sorbonne Universites, UPMC (Sorbonne)              ##
 ##                              1990-2019                                 ##
 ###                       All Rights Reserved                            ###
 ###                                                                      ###
  ##########################################################################
    ######################################################################

 License Number : ISC20_Competition

 Cite this work as :

   Tinker-HP: a Massively Parallel Molecular Dynamics Package for Multiscale
   Simulations of Large Complex Systems with Advanced Polarizable Force Fields.

   Louis Lagardère, Luc-Henri Jolly, Filippo Lipparini, Felix Aviat,
   Benjamin Stamm, Zhifeng F. Jing, Matthew Harger, Hedieh Torabifard,
   G. Andrés Cisneros, Michael J. Schnieders, Nohad Gresh, Yvon Maday,
   Pengyu Y. Ren, Jay W. Ponder and Jean-Philip Piquemal,

   Chem. Sci., 2018, 9, 956-972, doi: 10.1039/c7sc04531j

 3D Domain Decomposition
 Nx =  2  Ny =  2  Nz =  2

 In auto-tuning mode......
 factors:  1  2  4  8
 processor grid  1 by  8  time=  1.399874687194824E-003
 processor grid  2 by  4  time=  1.043856143951416E-003
 processor grid  4 by  2  time=  7.894933223724365E-004
 processor grid  8 by  1  time=  5.473494529724121E-004
 the best processor grid is probably  8 by  1

 ***** Using the generic FFT engine *****

 Smooth Particle Mesh Ewald Parameters :

    Ewald Coefficient      Charge Grid Dimensions      B-Spline Order

          0.5446               72   54   54                  5

 Random Number Generator Initialized with SEED :  12345

 3D Domain Decomposition
 Nx =  2  Ny =  2  Nz =  2

 Langevin Molecular Dynamics Trajectory via BAOAB-RESPA Algorithm

    MD Step      E Total   E Potential     E Kinetic       Temp       Pres

          1   -18699.3485   -27470.3061     8770.9576     302.23    -264.38
          2   -18704.1001   -27478.8942     8774.7941     302.36    -397.49
          3   -18693.6690   -27442.7636     8749.0946     301.47    -184.74
          4   -18691.5564   -27548.0349     8856.4785     305.17    -607.14
          5   -18686.6435   -27364.3032     8677.6597     299.01    -887.84
          6   -18682.8234   -27485.3114     8802.4879     303.31    -927.21
          7   -18682.7029   -27413.0837     8730.3808     300.83    -727.35
          8   -18689.4472   -27494.0409     8804.5937     303.39    -294.13
          9   -18697.8714   -27455.9405     8758.0691     301.78    -566.06
         10   -18694.3349   -27448.8444     8754.5095     301.66    -810.07
Every 100 steps the run prints a performance summary; those are the numbers we will be looking at (a quick way to extract them is sketched after the example below).
For example:
$ mpirun -np 128 ../bin/dynamic dhfr2 100 2.0 1.0 1
 . . .
         90   -45380.8811   -65461.3484    20080.4673     285.96    3054.55
         91   -45435.5987   -65583.1772    20147.5786     286.91     659.26
         92   -45465.7005   -65390.3946    19924.6941     283.74   -2520.50
         93   -45437.1749   -65531.6736    20094.4987     286.16   -1574.08
         94   -45394.5420   -65552.9020    20158.3601     287.07    1712.30
         95   -45405.9554   -65476.6808    20070.7254     285.82    2301.81
         96   -45453.3783   -65455.4998    20002.1215     284.84    -752.93
         97   -45457.6634   -65466.9811    20009.3177     284.94   -2445.11
         98   -45411.0548   -65549.8385    20138.7837     286.79     -28.50
         99   -45383.0850   -65536.3735    20153.2885     286.99    2666.46
        100   -45415.7848   -65439.5538    20023.7689     285.15    1510.28

 Average Values for the last   100 out of      100 Dynamics Steps

 Simulation Time              0.2000 Picosecond
 Total Energy            -45429.8585 Kcal/mole   (+/-  49.7167)
 Potential Energy        -66811.9073 Kcal/mole   (+/-1590.9857)
 Kinetic Energy           21382.0487 Kcal/mole   (+/-1582.4144)
 Temperature                  304.49 Kelvin      (+/-   22.53)
 Pressure                    1544.79 Atmosphere  (+/- 3162.73)
 Density                      0.9957 Grams/cc    (+/-  0.0000)

 Ave time for reneig                   =    0.003384641651064
 Ave time for positions comm           =    0.007003445550799
 Ave time for param                    =    0.002978316862136
 Ave time for forces comm              =    0.008176138140261
 Ave time for reciprocal forces comm   =    0.001912602446973
 Ave time for real space forces comm   =    0.000000000000000
 Ave time for bonded forces            =    0.000733544230461
 Ave time for non bonded forces        =    0.071196460388601
 Ave time for neighbor lists           =    0.002845110893250
 Ave time for real space (permanent)   =    0.002853646297008
 Ave time for real space (polar)       =    0.001480445172638
 Ave time for fill grid (permanent)    =    0.001730768475682
 Ave time for ffts (permanent)         =    0.002861753590405
 Ave time for scalar prod (permanent)  =    0.000063080061227
 Ave time for extract grid (permanent) =    0.000671311207116
 Ave time for rec-rec comm (permanent) =    0.004985433053225
 Ave time for recip space (permanent)  =    0.003469009678811
 Ave time for recip space (polar)      =    0.002791905663908

 Time for 100 Steps:     11.2295
 Ave. Time per step:      0.1123

 ns per day:      1.5388
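A quick, hedged way to pull just those summary numbers out of a run (assumes the run's stdout was captured to run.log, e.g. with a redirect or tee):

$ grep -E "Time for 100 Steps|Ave. Time per step|ns per day" run.log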
Notes
1. If MPI multi-threading causes issues at run time, consider changing MPI_INIT_THREAD to MPI_INIT so that threading support is not requested. The relevant calls can be located with grep:
$ grep -i mpi_init *
analyze.f:        call MPI_INIT(ierr)
analyze.f:c       call MPI_INIT_THREAD(MPI_THREAD_MULTIPLE,nthreadsupport,ierr)
bar.f:            call MPI_INIT(ierr)
bar.f:c           call MPI_INIT_THREAD(MPI_THREAD_MULTIPLE,nthreadsupport,ierr)
dynamic.f:        call MPI_INIT(ierr)
dynamic.f:c       call MPI_INIT_THREAD(MPI_THREAD_MULTIPLE,nthreadsupport,ierr)
minimize.f:       call MPI_INIT(ierr)
minimize.f:c      call MPI_INIT_THREAD(MPI_THREAD_MULTIPLE,nthreadsupport,ierr)
testgrad.f:       call MPI_INIT(ierr)
testgrad.f:c      call MPI_INIT_THREAD(MPI_THREAD_MULTIPLE,nthreadsupport,ierr)
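As the grep shows, in this tarball the plain MPI_INIT calls are already active and the threaded variants are commented out (a 'c' in column 1 of fixed-form Fortran). If your copy still has MPI_INIT_THREAD active, the following is a hedged sed sketch to flip the calls; the column layout is an assumption, so check the exact indentation in your sources and re-run the grep afterwards:

$ sed -i -e 's/^      call MPI_INIT_THREAD/c     call MPI_INIT_THREAD/' \
         -e 's/^c.*call MPI_INIT(ierr)/      call MPI_INIT(ierr)/' \
         analyze.f bar.f dynamic.f minimize.f testgrad.f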
Tasks
1. Build and Run Tinker-HP
Build and run the Tinker-HP application. You may build it however you like and use any MPI implementation. Test it on the examples supplied with the source code.
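For instance, a hedged multi-node smoke test (the --hostfile option is Open MPI syntax, hosts is a placeholder file name, and the numeric arguments simply mirror the commands used elsewhere in this document; adapt all of these to your launcher and system):

$ mpirun -np 128 --hostfile hosts ../bin/dynamic ubiquitin2 100 2.0 1.0 2 300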
2. papain and protease_dimer inputs
The input files for the competition will be some COVID-19 virus proteins and STMV (the complete Satellite Tobacco Mosaic Virus), which is much larger.
Analyze the papain and protease_dimer inputs from COVID-19 and the stmv input by running the following mpirun commands.
For papain and protease_dimer use steps=1000; for stmv use steps=100.
$ mpirun -np 128 dynamic papain 1000 2.0 1.0 2 300
...
$ mpirun -np 128 dynamic protease_dimer 1000 2.0 1.0 2 300
...
$ mpirun -np 128 ../bin/dynamic stmv 100 2.0 1.0 2 300
...
Submit the stdout and the .arc trajectory file to us (we will send details later on how to submit).
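One hedged way to capture what has to be submitted (tee keeps stdout on screen while also writing it to a file; the log file name is a placeholder, and the same pattern applies to the other inputs):

$ mpirun -np 128 dynamic papain 1000 2.0 1.0 2 300 | tee papain.stdout
$ ls papain.arc papain.stdout          # expected trajectory archive plus captured stdout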
3. Visualisation
Run the papain and protease_dimer benchmarks with the following input parameters: steps=10000 and sf=0.1 (the trajectory save interval, in ps). The command below shows papain; run protease_dimer the same way.
$ mpirun -np 128 dynamic papain 10000 2.0 0.1 2 300
Use the *.arc file as an input to VMD or any other tool to visualize the protein movement.
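A minimal loading sketch for VMD (it ships a Tinker molfile reader; if the .arc extension is not autodetected, choose the Tinker file type in the load dialog):

$ vmd papain.arc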
See the following VMD tutorial for more documentation on how to extract images and video.
https://www.ks.uiuc.edu/Training/Tutorials/vmd/tutorial-html/node2.html
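Once you have rendered frames from VMD (for example via its Movie Maker plugin), here is a hedged ffmpeg sketch to stitch them into a shareable video; the frame naming scheme, frame rate, and output name are all assumptions:

$ ffmpeg -framerate 24 -i frame.%04d.ppm -pix_fmt yuv420p protease_dimer.mp4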
Here is an example of the COVID-19 protease, which looks like a heart.
4. Socialize your work
Add your team name or photo to the video and tweet about it using the handles: #ISC20 ISC20_SCC @TINKERtoolsMD. In case you don't have a team Twitter account, submit the video to us.