...
FluTAS (Fluid Transport Accelerated Solver) is an open-source code targeting multiphase fluid dynamics simulations. Its key feature is the ability to run efficiently on both many-CPU and many-GPU systems within a single, unified framework.
The framework is also designed to be modular, so that it can be extended in a sustainable manner.
Building the Application
…
Running Example
…
Task and submission
...
Presentation
Here is the introduction presentation (embedded video):
Slides: (attached file)
Downloading and compiling FluTAS
Download the application:
git clone https://github.com/Multiphysics-Flow-Solvers/FluTAS.git
Sample build script for Fritz:
cd FluTAS/src
# Edit FC name and FFTW path in targets/target.generic-intel.
# MPI=intelmpi-2021.7.0
MPI=openmpi-4.1.2-intel2021.4.0
if [[ "$MPI" =~ ^intel ]]; then
module load intel/2022.1.0
module load fftw/3.3.10-impi
export I_MPI_F90=ifort
elif [[ "$MPI" =~ ^openmpi ]]; then
module load fftw/3.3.10-ompi
export OMPI_MPIF90=ifort
fi
module load $(echo $MPI | sed -e "s/\-/\//")
make ARCH=generic-intel APP=two_phase_ht DO_DBG=0 DO_POSTPROC=0
Sample build script for PSC:
Download HPC-X as described in "ISC23 SCC Getting Started with Bridges-2 Cluster".
cd FluTAS/src
# Edit FC name and FFTW path in targets/target.generic-intel.
source /jet/packages/intel/oneapi/compiler/2022.1.0/env/vars.sh
source /jet/packages/intel/oneapi/mkl/2022.1.0/env/vars.sh
# Choose one MPI stack (the last uncommented assignment wins):
#MPI=hpcx
MPI=intelmpi
if [[ "$MPI" =~ ^hpcx ]]; then
module use $HOME/hpcx-2.13.1/modulefiles
module load hpcx
export OMPI_MPICC=icc
export OMPI_MPICXX=icpc
export OMPI_MPIFC=ifort
export OMPI_MPIF90=ifort
export FC=mpif90
else
source /jet/packages/intel/oneapi/mpi/2021.6.0/env/vars.sh
export FC=mpiifort
fi
export FFTW_HOME=<path>/fftw-3.3.10-$MPI
make ARCH=generic-intel APP=two_phase_ht DO_DBG=0 DO_POSTPROC=0
Running Example
Before you start, make sure the process grid in dns.in matches your MPI rank count.
The example below works with 256 cores (a 16 x 16 grid):
$ cd FluTAS/examples/two_phase_ht/coarse_two_layer_rb
$ grep dims dns.in
16 16                  ! dims(1:2)
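For other core counts, the two dims values must multiply to the total number of MPI ranks. The snippet below sketches how to switch to a 12 x 12 grid for a 144-rank run; it edits a stand-in one-line dns.in fragment (the real file has more lines, with the same `! dims(1:2)` comment marker assumed from the example above):

```shell
# Illustration: the dims(1:2) line in dns.in sets the 2-D process grid;
# its product must equal the MPI rank count passed to mpirun -np.
# Create a stand-in dns.in fragment for the demonstration.
printf '16 16                  ! dims(1:2)\n' > dns.in
# Switch to a 12 x 12 grid for a 144-rank run:
PX=12; PY=12
sed -i "s/^[0-9][0-9]* [0-9][0-9]*\( *! dims(1:2)\)/$PX $PY\1/" dns.in
grep 'dims' dns.in   # -> 12 12                  ! dims(1:2)
```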
Example:
/usr/bin/time mpirun -np 256 $MPIFLAGS flutas
the used processor grid is 16 by 16
Padded ALLTOALL optimisation on
************************************************
*** Beginning of simulation (TWO-PHASE mode) ***
************************************************
*** Initial condition succesfully set ***
dtmax = 3.322388020223433E-003 dt = 1.661194010111717E-003
*** Calculation loop starts now ***
...
*** Fim ***
OUT:initial : 6.335s ( 1 calls)
STEP : 14.630s ( 1000 calls)
VOF : 9.309s ( 1000 calls)
RK : 0.545s ( 1000 calls)
SOLVER : 1.264s ( 1000 calls)
CORREC : 0.588s ( 1000 calls)
POSTPROC : 0.117s ( 1000 calls)
OUT:iout0d : 0.005s ( 2 calls)
OUT:iout1d : 0.000s ( 1 calls)
OUT:iout3d : 4.267s ( 1 calls)
OUT:isave : 4.277s ( 1 calls)
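On a batch system, a four-node run can be wrapped in a job script along the lines of the sketch below. The partition name, cores per node, and module names are assumptions to adapt per cluster (e.g. 72 cores per node gives 288 ranks, which matches a 16 x 18 grid in dns.in):

```shell
#!/bin/bash
#SBATCH --job-name=flutas
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=72   # assumption: 72 cores/node; adjust to the cluster
#SBATCH --time=01:00:00
#SBATCH --partition=multinode  # assumption: substitute the cluster's partition name

# Assumption: the Open MPI build from the Fritz build script above.
module load intel/2022.1.0 openmpi/4.1.2-intel2021.4.0 fftw/3.3.10-ompi

cd $HOME/FluTAS/examples/two_phase_ht/coarse_two_layer_rb
# dims(1:2) in dns.in must multiply to the rank count (4*72 = 288 = 16*18 here).
srun --ntasks=$SLURM_NTASKS ../../../src/flutas
```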
Task and submission
Use the attached input file, two_layer_rb.
Profile the input
Use any of the remote clusters to run an MPI profile (IPM or any other profiler) over 4 nodes at full PPN on the given input. Submit the profile as a PDF to the team's folder.
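With IPM, a profile can be collected by preloading the library at run time; a sketch is below (the IPM install path, rank count, and log-file name are assumptions for your setup):

```shell
# Collect an IPM profile of a 4-node, full-PPN run (paths are assumptions).
export IPM_DIR=$HOME/ipm            # wherever IPM was installed
export IPM_REPORT=full              # print the full per-call breakdown at exit
export IPM_LOG=full                 # also write the XML log for ipm_parse
LD_PRELOAD=$IPM_DIR/lib/libipm.so \
    mpirun -np 288 ./flutas
# Turn the XML log (name varies by job) into an HTML report, then export to PDF:
ipm_parse -html <logfile>.ipm.xml
```

The per-call breakdown in the report is where the three dominant MPI calls and the total MPI time for your presentation come from.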
Add to your presentation the three main MPI calls used and the total MPI time consumed.
Run FluTAS on both the PSC Bridges-2 and FAU Fritz CPU clusters with four-node runs.
Submit the results to the team's folder (4-node runs only, one result per cluster).
Experiment with 1-, 2-, and 4-node runs. Add to your presentation a scalability graph based on your results, together with any conclusions you reached. There is no need to submit these results; just show your work in the slides for the team interview.
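For the scalability graph, speedup and parallel efficiency follow directly from the wall-clock times: S(n) = T(1)/T(n) and E(n) = S(n)/n. A small awk sketch, using made-up timings as placeholders for your measured STEP times from the FluTAS summary:

```shell
# Hypothetical wall-clock times (seconds) for 1-, 2-, and 4-node runs;
# replace with your measured values.
cat > times.txt <<'EOF'
1 120.0
2 63.2
4 34.5
EOF
# Speedup S(n) = T(1)/T(n); efficiency E(n) = S(n)/n.
awk 'NR==1 {t1=$2} {printf "%d nodes: speedup %.2f, efficiency %.0f%%\n", $1, t1/$2, 100*t1/($2*$1)}' times.txt
```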
Submission and Presentation:
- Submit all the build scripts, run scripts and stdout/logs.
- Do not submit the output data or the source code.
- Prepare slides for the team's interview based on your work on this application.