...

Code Block
git clone https://github.com/Multiphysics-Flow-Solvers/FluTAS.git

...
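
Since the build and run scripts are submitted later, it helps to record which commit was actually built; for example:

Code Block
cd FluTAS
git rev-parse --short HEAD   # note this hash in the submitted build logs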

Build FFTW library

...
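
On a system without a suitable fftw module, a from-source build might look like the sketch below; the version, URL, compilers, and install prefix are assumptions, and the resulting prefix is what the FFTW path in src/targets/target.generic-intel should point to.

Code Block
wget https://www.fftw.org/fftw-3.3.10.tar.gz
tar xzf fftw-3.3.10.tar.gz
cd fftw-3.3.10
./configure --prefix=$HOME/opt/fftw-3.3.10 CC=icc F77=ifort   # compiler choice is an assumption
make -j 16
make install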

Sample build script for Fritz

Code Block
cd FluTAS
# Edit the FC name and the FFTW path in src/targets/target.generic-intel with your fftw path
# MPI=intelmpi-2021.7.0
MPI=openmpi-4.1.2-intel2021.4.0
if [[ "$MPI" =~ ^intel ]]; then
        module load intel/2022.1.0
        module load fftw/3.3.10-impi
        export CXX=mpicxx
        export FC=mpif90
        export I_MPI_F90=ifort
elif [[ "$MPI" =~ ^openmpi ]]; then
        module load fftw/3.3.10-ompi
        export OMPI_MPIF90=ifort
fi
module load $(echo $MPI | sed -e "s/\-/\//")

make ARCH=generic-intel APP=two_phase_ht DO_DBG=0 -j 16
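
Before submitting jobs, it is worth checking that the binary exists and is linked against the intended MPI and FFTW; a quick sanity check (the binary location under src/ is an assumption):

Code Block
ls -l src/flutas*                                   # the run example below expects flutas.two_phase_ht
ldd src/flutas.two_phase_ht | grep -iE 'mpi|fftw'   # confirm which MPI and FFTW libraries are picked up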

Running Example

Code Block
# Copy one of the examples
mkdir run
cp -r FluTAS/examples/two_phase_ht/coarse_two_layer_rb run
cd run/coarse_two_layer_rb
# Edit the following line in dns.in to change the process grid. The values below work with 256 cores (a 16x16 grid).
16 16                                 ! dims(1:2)

EXE=<path>/flutas.two_phase_ht
/usr/bin/time mpirun -np 256 $MPIFLAGS $EXE

 flutas
 the used processor grid is           16  by           16
 Padded ALLTOALL optimisation on
 ************************************************
 *** Beginning of simulation (TWO-PHASE mode) ***
 ************************************************
 *** Initial condition succesfully set ***
 dtmax =   3.322388020223433E-003 dt =   1.661194010111717E-003
 *** Calculation loop starts now ***
...

 *** Fim ***

 OUT:initial  :      6.335s (        1 calls)
 STEP         :     14.630s (     1000 calls)
 VOF          :      9.309s (     1000 calls)
 RK           :      0.545s (     1000 calls)
 SOLVER       :      1.264s (     1000 calls)
 CORREC       :      0.588s (     1000 calls)
 POSTPROC     :      0.117s (     1000 calls)
 OUT:iout0d   :      0.005s (        2 calls)
 OUT:iout1d   :      0.000s (        1 calls)
 OUT:iout3d   :      4.267s (        1 calls)
 OUT:isave    :      4.277s (        1 calls)

Task and submission

  1. Use the input under <TBD>.

  2. Profile the input
    Use any of the remote clusters to run an MPI profile (e.g., an IPM profile or any other profiler) of the given input over 4 nodes with full PPN; a sample batch/profiling script is sketched after this list.

    • Submit the profile as a PDF to the team's folder.

    • Add to your presentation the three main MPI calls being used and the total MPI time spent in them.

  3. Run FluTAS on both the PSC Bridges-2 and FAU Fritz CPU clusters.

  4. Submission and Presentation:
    - Submit all the build scripts, run scripts and stdout/logs.
    - Do not submit the output data or the source code.
    - Prepare slides for the team's interview based on your work on this application.
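
For the profiling step, a batch script along the following lines can produce the 4-node, full-PPN run with IPM preloaded; the Slurm options, module names, IPM install path, and cores-per-node values are assumptions to adapt to the chosen cluster.

Code Block
#!/bin/bash
#SBATCH --job-name=flutas-ipm
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=72        # full PPN: 72 on Fritz, 128 on Bridges-2 RM
#SBATCH --time=01:00:00

module load openmpi/4.1.2-intel2021.4.0   # same toolchain as the build script
module load fftw/3.3.10-ompi

EXE=<path>/flutas.two_phase_ht                  # path left as in the run example above
export LD_PRELOAD=<ipm_install>/lib/libipm.so   # preload IPM so every rank is profiled
export IPM_REPORT=full

# Set dims(1:2) in dns.in to match nodes * ntasks-per-node before submitting.
srun $EXE
# Post-process the IPM XML log (e.g. with ipm_parse -html) and export the report as PDF.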

...