Overview

FluTAS (Fluid Transport Accelerated Solver) is an open-source code targeting multiphase fluid dynamics simulations. Its key feature is the ability to run efficiently on both many-CPU and many-GPU systems within a single, unified framework.
The framework is also designed to be modular, so that it can be extended in a sustainable manner.

Downloading and compiling FluTAS

  • Download the application:

git clone https://github.com/Multiphysics-Flow-Solvers/FluTAS.git

Sample build script for Fritz

cd FluTAS/src
# Edit the compiler name (FC) and the FFTW path in targets/target.generic-intel.

# MPI=intelmpi-2021.7.0
MPI=openmpi-4.1.2-intel2021.4.0
if [[ "$MPI" =~ ^intel ]]; then
        module load intel/2022.1.0
        module load fftw/3.3.10-impi
        export I_MPI_F90=ifort
elif [[ "$MPI" =~ ^openmpi ]]; then
        module load fftw/3.3.10-ompi
        export OMPI_MPIF90=ifort
fi
module load $(echo $MPI | sed -e "s/\-/\//")

make ARCH=generic-intel APP=two_phase_ht
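
The variable names inside targets/target.generic-intel can differ between FluTAS versions, so the following is only a sketch of how to locate the compiler and FFTW settings mentioned in the comment above; the grep patterns and the module name are assumptions to be adjusted to what is actually installed.

# Sketch only: locate the compiler (FC) and FFTW settings in the target file.
grep -n -i -E 'FC|FFTW' targets/target.generic-intel
# Find the FFTW installation prefix provided by the fftw module on Fritz,
# then point the FFTW path in the target file at that prefix.
module show fftw/3.3.10-ompi 2>&1 | grep -i -E 'ROOT|HOME|LIBRARY_PATH'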

Running an example

cd FluTAS/examples/two_phase_ht/coarse_two_layer_rb
# Edit the following line in dns.in to set the process grid; dims(1)*dims(2)
# must equal the number of MPI ranks. The setting below works with 256 cores.
16 16                                 ! dims(1:2)

/usr/bin/time mpirun -np 256 $MPIFLAGS flutas
 the used processor grid is           16  by           16
 Padded ALLTOALL optimisation on
 ************************************************
 *** Beginning of simulation (TWO-PHASE mode) ***
 ************************************************
 *** Initial condition succesfully set ***
 dtmax =   3.322388020223433E-003 dt =   1.661194010111717E-003
 *** Calculation loop starts now ***
...

 *** Fim ***

 OUT:initial  :      6.335s (        1 calls)
 STEP         :     14.630s (     1000 calls)
 VOF          :      9.309s (     1000 calls)
 RK           :      0.545s (     1000 calls)
 SOLVER       :      1.264s (     1000 calls)
 CORREC       :      0.588s (     1000 calls)
 POSTPROC     :      0.117s (     1000 calls)
 OUT:iout0d   :      0.005s (        2 calls)
 OUT:iout1d   :      0.000s (        1 calls)
 OUT:iout3d   :      4.267s (        1 calls)
 OUT:isave    :      4.277s (        1 calls)
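
For batch runs on Fritz, a minimal SLURM script along the lines below should work. The walltime and the 64 tasks per node are assumptions (chosen so that 4 nodes give the 256 ranks of the 16 x 16 grid above); check the site documentation and adjust before submitting.

#!/bin/bash -l
#SBATCH --job-name=flutas_rb
#SBATCH --nodes=4                  # assumption: 4 CPU nodes
#SBATCH --ntasks-per-node=64       # assumption: 4*64 = 256 ranks = dims(1)*dims(2)
#SBATCH --time=00:30:00

# Load the same modules used for the build (OpenMPI + Intel variant shown above).
module load fftw/3.3.10-ompi
module load openmpi/4.1.2-intel2021.4.0

cd FluTAS/examples/two_phase_ht/coarse_two_layer_rb
# The flutas binary built in FluTAS/src must be available here (copy it or add it to PATH).
srun flutas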

Task and submission

  1. Use the input under <TBD>.

  2. Profile the input
    Use any of the remote clusters to run an MPI profiler (such as IPM, or any other profiler) on the given input over 4 nodes with full PPN. A hedged IPM sketch is given after this list.

    • Submit the resulting profile as a PDF to the team's folder.

    • Add to your presentation the three dominant MPI calls and the total time spent in MPI.

  3. Run FluTAS on both the PSC Bridges-2 and FAU Fritz CPU clusters.

  4. Submission and Presentation:
    - Submit all build scripts, run scripts, and stdout/logs.
    - Do not submit the output data or the source code.
    - Prepare slides for the team's interview based on your work on this application.
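
If IPM is used for the profile, it is typically attached via LD_PRELOAD without recompiling. The library path below is a placeholder and the installation details vary per cluster, so treat this as a sketch rather than a site-specific recipe.

# Sketch only: attach IPM to the run via LD_PRELOAD (library path is a placeholder).
export IPM_REPORT=full          # print the full MPI summary at the end of the run
export IPM_LOG=full             # also write the XML log used to generate the report
export LD_PRELOAD=/path/to/ipm/lib/libipm.so

mpirun -np 256 flutas > flutas_ipm.log

# Turn the XML log written by IPM into an HTML report, then export it as a PDF.
ipm_parse -html <ipm_xml_file>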
