...

Code Block
/usr/bin/time mpirun -np 256 $MPIFLAGS flutas
 the used processor grid is           16  by           16
 Padded ALLTOALL optimisation on
 ************************************************
 *** Beginning of simulation (TWO-PHASE mode) ***
 ************************************************
 *** Initial condition succesfully set ***
 dtmax =   3.322388020223433E-003 dt =   1.661194010111717E-003
 *** Calculation loop starts now ***
...

 *** Fim ***

 OUT:initial  :      6.335s (        1 calls)
 STEP         :     14.630s (     1000 calls)
 VOF          :      9.309s (     1000 calls)
 RK           :      0.545s (     1000 calls)
 SOLVER       :      1.264s (     1000 calls)
 CORREC       :      0.588s (     1000 calls)
 POSTPROC     :      0.117s (     1000 calls)
 OUT:iout0d   :      0.005s (        2 calls)
 OUT:iout1d   :      0.000s (        1 calls)
 OUT:iout3d   :      4.267s (        1 calls)
 OUT:isave    :      4.277s (        1 calls)
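
The per-section timers printed at the end of the run (STEP appears to cover the full time-stepping loop, with VOF, RK, SOLVER, and CORREC as sub-steps) are a convenient basis for the scaling comparison in the tasks below. As a minimal sketch, assuming a log saved under the hypothetical name flutas_1node.log in the format shown above, the STEP wall time can be pulled out with:

Code Block
# Hypothetical log name; adjust to your own runs.
grep ' STEP ' flutas_1node.log | awk '{print $3}' | tr -d 's'
# prints: 14.630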

Task and submission

Use

...

this input, two_layer_rb.

View file: two_layer_rb.zip

  1. Profile the input
    Use any of the remote clusters to profile the given input with an MPI profiler (such as IPM or any other), running on 4 nodes at full PPN; a job-script sketch follows after this list.

  2. Submit the profile as a PDF to the team's folder.

    • Add to your presentation the three main MPI calls used and the total time spent in MPI.

  3. Run FluTAS on both the PSC Bridges-2 and FAU Fritz CPU clusters for single-node, two-node, and four-node runs (see the scaling-run sketch after this list).

    • Submit the results to the team's folder.

    • Add to your presentation a scalability graph based on your results and any conclusions you drew from them.

  4. Submission and presentation:

    • Submit all the build scripts, run scripts, and stdout/logs.

    • Do not submit the output data or the source code.

    • Prepare slides for the team's interview based on your work on this application.
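
For step 1, a minimal Slurm sketch of a 4-node, full-PPN profiling run with IPM attached via LD_PRELOAD, so FluTAS does not need to be rebuilt. The module name, the IPM library path, and the 64 tasks per node are assumptions to be adapted to the chosen cluster:

Code Block
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=64   # assumed full PPN; match the cluster's cores per node
#SBATCH --time=01:00:00
#SBATCH --job-name=flutas_ipm

# Assumed modules/paths; adjust to the cluster's MPI stack and IPM install.
module load openmpi
export LD_PRELOAD=/path/to/libipm.so   # hypothetical IPM install location
export IPM_REPORT=full                 # print the full MPI breakdown at exit
export IPM_LOG=full                    # write the log used to generate the PDF report

srun ./flutas > flutas_ipm_4node.log 2>&1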
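For step 3, a sketch of the single-, two-, and four-node runs on one of the two clusters, again with an assumed 64 cores per node. The sample output above shows a 16 by 16 processor grid for 256 ranks, so check that the decomposition in the input file stays consistent with the total rank count at each scale. The STEP times extracted as in the earlier snippet give one reasonable y-axis for the scalability graph:

Code Block
#!/bin/bash
# Assumed PPN; set to the actual cores per node on Bridges-2 / Fritz.
PPN=64
for NODES in 1 2 4; do
    NP=$((NODES * PPN))
    mpirun -np $NP ./flutas > flutas_${NODES}node.log 2>&1
done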

...