Overview
FluTAS (Fluid Transport Accelerated Solver) is an open-source code targeting multiphase fluid dynamics simulations. The key feature of FluTAS is the ability to efficiently run both on many-CPUs and many-GPUs in a single and unified framework.
This framework is also designed to be modular so that it can be extended in a sustainable manner.
Presentation
Here is the introduction presentation:
Slides:
Downloading and compiling FluTAS
Download the application:
git clone https://github.com/Multiphysics-Flow-Solvers/FluTAS.git
Sample build script:
cd FluTAS/src
# Edit FC name and FFTW path in targets/target.generic-intel.
# MPI=intelmpi-2021.7.0
MPI=hpcx/2.15.0
if [[ "$MPI" =~ ^intel ]]; then
    module load intel/2022.1.0
    module load fftw/3.3.10-impi
    export I_MPI_F90=ifort
elif [[ "$MPI" =~ ^openmpi ]]; then
    module load fftw/3.3.10-ompi
    export OMPI_MPIF90=ifort
fi
module load $(echo $MPI | sed -e "s/\-/\//")
make ARCH=generic-intel APP=two_phase_ht DO_DBG=0 DO_POSTPROC=0
Running Example
Before you start, make sure to change the process grid in dns.in:
The example below will work with 256 cores (16x16); a sketch for adjusting dims to other core counts follows it:
$ cd FluTAS/examples/two_phase_ht/coarse_two_layer_rb
$ grep dims dns.in
16 16            ! dims(1:2)
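If you run with a different core count, dims(1:2) must equal your MPI process grid (dims(1)*dims(2) = total ranks). Below is a minimal helper sketch that rewrites the dims line, assuming dns.in keeps the "N M ! dims(1:2)" layout shown above; the grids listed are illustrative only and not part of the original instructions.

#!/bin/bash
# Hypothetical helper: rewrite the dims(1:2) line in dns.in so that
# dims(1)*dims(2) matches the total number of MPI ranks you plan to use.
# The factorizations below are examples only -- pick ones that suit your runs.
set -euo pipefail

NP=${1:?usage: $0 <total MPI ranks>}   # e.g. 256
case "$NP" in
  64)  DIMS="8 8"   ;;
  128) DIMS="16 8"  ;;
  256) DIMS="16 16" ;;
  512) DIMS="32 16" ;;
  *) echo "add a factorization for $NP ranks" >&2; exit 1 ;;
esac

# Replace the line tagged '! dims(1:2)' in place, keeping the comment.
sed -i "s/^.*! dims(1:2).*$/${DIMS}            ! dims(1:2)/" dns.in
grep dims dns.in

Run it from the case directory (the one containing dns.in) before launching flutas.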
Example:
$ /usr/bin/time mpirun -np 256 $MPIFLAGS flutas
the used processor grid is 16 by 16
Padded ALLTOALL optimisation on
************************************************
*** Beginning of simulation (TWO-PHASE mode) ***
************************************************
*** Initial condition succesfully set ***
dtmax = 3.322388020223433E-003 dt = 1.661194010111717E-003
*** Calculation loop starts now ***
...
*** Fim ***
OUT:initial :  6.335s (    1 calls)
STEP        : 14.630s ( 1000 calls)
VOF         :  9.309s ( 1000 calls)
RK          :  0.545s ( 1000 calls)
SOLVER      :  1.264s ( 1000 calls)
CORREC      :  0.588s ( 1000 calls)
POSTPROC    :  0.117s ( 1000 calls)
OUT:iout0d  :  0.005s (    2 calls)
OUT:iout1d  :  0.000s (    1 calls)
OUT:iout3d  :  4.267s (    1 calls)
OUT:isave   :  4.277s (    1 calls)
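For the scaling runs later on, it can help to pull this end-of-run timer summary out of each log automatically. A minimal sketch, assuming logs are saved under a hypothetical flutas_<N>nodes.out naming scheme and that the timer lines keep the format shown above:

#!/bin/bash
# Collect the end-of-run timer summary (STEP, VOF, SOLVER, ...) from FluTAS
# stdout logs into a CSV for plotting. The log-file glob and the node-count
# extraction are assumptions -- adjust them to however you name your logs.
echo "nodes,timer,seconds"
for log in flutas_*nodes.out; do
  nodes=$(echo "$log" | sed -E 's/[^0-9]*([0-9]+)nodes.*/\1/')
  # Timer lines look like:  STEP        : 14.630s ( 1000 calls)
  awk -v n="$nodes" '/: +[0-9.]+s +\(/ {
      t = $1              # timer name, e.g. STEP, VOF, SOLVER
      s = $3              # elapsed time, e.g. 14.630s
      sub(/s$/, "", s)    # drop the trailing "s" unit
      printf "%s,%s,%s\n", n, t, s
  }' "$log"
done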
Task and submission
Use the two_layer_rb input.
Profile the input
Use any of the remote clusters to run an MPI profile (e.g. an IPM profile, or any other profiler) of the given input over 4 nodes at full PPN; a hedged example job is sketched below. Submit the profile as a PDF to the team's folder.
Add to your presentation the 3 main MPI calls being used and the total MPI time.
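One convenient way to get the MPI profile is IPM, preloaded at run time. Below is a hedged sketch of a 4-node, full-PPN profiling job, assuming a Slurm cluster, the hpcx MPI from the build script above, and a site-provided ipm module or a known path to libipm.so; all module names, paths, and the PPN value are assumptions to adapt to your site.

#!/bin/bash
#SBATCH --job-name=flutas_ipm
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=64     # full PPN; set to the cores per node on your cluster
#SBATCH --time=01:00:00

module load hpcx/2.15.0          # same MPI used for the build
module load ipm                  # site-specific; or point LD_PRELOAD at your libipm.so

# Preload IPM so it intercepts the MPI calls, and ask for the full report.
export LD_PRELOAD=${IPM_LIB:-/path/to/libipm.so}   # placeholder path
export IPM_REPORT=full
export IPM_LOG=full

cd FluTAS/examples/two_phase_ht/coarse_two_layer_rb
mpirun -np $SLURM_NTASKS flutas

# Post-process the IPM XML log into an HTML report, then export/print it to PDF
# for submission. The XML file name depends on the IPM version and settings,
# so adjust the argument below to the file IPM actually wrote.
ipm_parse -html ./*ipm*.xml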
Run FluTAS
Experiment with 1, 2, 4+ node runs. Add to your presentation a scalability graph based on your results and any conclusions you reached. No need to submit the results here; just show your work in the slides for the team interview.
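A minimal sketch of the scaling sweep, assuming Slurm, 64 cores per node, and the hpcx module from the build script (all site-specific assumptions); remember that dims(1:2) in dns.in must match each rank count, e.g. via the helper sketched earlier.

#!/bin/bash
# Submit one job per node count; each job runs the same case at full PPN.
# PPN, walltime, and module names are assumptions -- adapt them to your cluster.
PPN=64
CASE=FluTAS/examples/two_phase_ht/coarse_two_layer_rb

for NODES in 1 2 4 8; do
  NP=$((NODES * PPN))
  sbatch --nodes=$NODES --ntasks-per-node=$PPN --time=01:00:00 \
         --job-name=flutas_${NODES}n \
         --output=flutas_${NODES}nodes.out \
         --wrap="module load hpcx/2.15.0 && cd $CASE && \
                 /usr/bin/time mpirun -np $NP \$MPIFLAGS flutas"
done
# Note: dims(1:2) in dns.in must equal the process grid for each NP, so either
# keep one copy of the case per node count or edit dns.in between runs.

The per-node-count log names match the glob used in the timer-extraction sketch above, so the scalability data can be tabulated directly from the run outputs.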
Submission and Presentation:
- Submit all the build scripts, run scripts and stdout/logs.
- Do not submit the output binary data or the source code.
- Prepare slides for the team’s interview based on your work on this application.