
Overview

code_saturne is a free, open-source, multi-purpose software package developed primarily by EDF for computational fluid dynamics (CFD) applications. It mainly solves the Navier-Stokes equations with scalar transport for 2D, 2D-axisymmetric, and 3D flows, whether steady or unsteady, laminar or turbulent, incompressible or weakly compressible, isothermal or not, using the Finite Volume Method (FVM). A newer discretisation based on the Compatible Discrete Operator (CDO) approach is available for some other physics. A highly parallel coupling library (PLE, Parallel Location and Exchange) is also included in the distribution to couple code_saturne with other software handling different physics, for example conjugate heat transfer and structural mechanics. In the incompressible solver, the pressure is solved with an integrated Algebraic Multigrid (AMG) algorithm, while the velocity components and scalars are computed with conjugate gradient, Gauss-Seidel, Jacobi, or other Krylov solvers.
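For reference, in the incompressible, constant-property case the equations solved are the momentum and continuity equations together with a convection-diffusion equation for each transported scalar (with velocity u, pressure p, density ρ, kinematic viscosity ν, body force f, scalar diffusivity D and source S):

$$
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\frac{1}{\rho}\nabla p + \nu\,\nabla^2\mathbf{u} + \mathbf{f},
\qquad \nabla\cdot\mathbf{u} = 0,
\qquad \frac{\partial \varphi}{\partial t} + \mathbf{u}\cdot\nabla\varphi = \nabla\cdot\!\left(D_\varphi\,\nabla\varphi\right) + S_\varphi .
$$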

Website: https://www.code-saturne.org/cms/web/

Building version 8.3.0

Version 8.3.0 of code_saturne is used. It is built from its GitHub repository (cloned with the commands below). After cloning the repository and generating the configure script with the bootstrap step, a simple installer, InstallHPC.sh, is made available for this version of the code and for HPC machines ONLY. Note that this install is tailored for HPC machines, where the GUI is NOT built. On local machines or laptops the GUI should be built, which is done by changing the “--disable-gui” option to “--enable-gui” (a sketch of this variant is given after the configure snippet below).

This version assumes that the following modules are loaded:

module load intel/2024.0 compiler hpcx/2.19
module load python/3.10
export OMPI_CC=icc
export OMPI_CXX=icpc
export OMPI_FC=ifort
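Before running the installer, a quick sanity check of the environment can save time. The commands below are standard module/MPI tooling; their exact output depends on the system:

module list                  # confirm intel/2024.0, hpcx/2.19 and python/3.10 are loaded
which mpicc mpicxx mpifort   # MPI compiler wrappers provided by HPC-X
mpicc --version              # reports the underlying compiler selected via OMPI_CC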

The end of the InstallHPC.sh script reads:

$KERSRC/configure    \
--enable-openmp      \
--disable-shared     \
--disable-gui        \
--prefix=$KEROPT     \
CC=mpicc FC=ifx CXX=mpicxx CFLAGS="-O3" FCFLAGS="-O3"

make -j 12
make install

$KEROPT/bin/code_saturne

cd $INSTALLPATH
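For a laptop or workstation build with the GUI enabled, a minimal sketch of the corresponding configure call is shown below, assuming the same $KERSRC source and $KEROPT install variables as in InstallHPC.sh; the compilers and flags are simply carried over from the HPC snippet and may need adapting:

$KERSRC/configure    \
--enable-openmp      \
--disable-shared     \
--enable-gui         \
--prefix=$KEROPT     \
CC=mpicc FC=ifx CXX=mpicxx CFLAGS="-O3" FCFLAGS="-O3"

Note that the GUI build typically requires a Python environment with PyQt available, which is one reason it is disabled on HPC machines.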

The code is built as follows:

git clone https://github.com/code-saturne/code_saturne.git
cd code_saturne
git switch v8.3
./sbin/bootstrap
cd ../
./InstallHPC.sh

If the installation is successful, it should create code_saturne/arch/Linux/bin/code_saturne along with the libraries needed for the code to work properly, and running the executable shows:

./code_saturne/arch/Linux/bin/code_saturne
Usage: ./code_saturne/arch/Linux/bin/code_saturne <topic>

Topics:
  help
  studymanager
  smgr
  bdiff
  bdump
  compile
  config
  cplgui
  create
  gui
  parametric
  studymanagergui
  smgrgui
  trackcvg
  update
  up
  info
  run
  submit
  symbol2line

Options:
  -h, --help  show this help message and exit

Running the Tiny test case

The tiny test case tutorial (to be used to check that code_saturne is properly installed on a laptop, as it relies on the GUI) has been tailored for version 8.3.0 of code_saturne and is to be found here. The meshes it requires can be downloaded from here.

Checking that the code runs correctly on an HPC machine is also possible, using 1 MPI task (or 2 tasks, although this will not be efficient); to do so, the whole code_saturne Study has to be copied from the laptop to the HPC machine and the job submitted through the queuing system.
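As an illustration only, copying a study directory from a laptop to Bridges-2 could look like the following; the study name, username, login hostname, and target directory are placeholders or assumptions to adapt to your account:

scp -r MY_TINY_STUDY username@bridges2.psc.edu:/ocean/projects/cis240152p/<your_directory>/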

Note: The tiny input is just for practice purposes; there is no need to submit anything.

Tasks and Submissions

(Input for the task will be published later on)

The input will be in this location on Bridges-2:

/ocean/projects/cis240152p/shared/ISC25/code_saturne
  1. Run the application on 4 CPU nodes and submit the results. Experiment with the MPI and OpenMP settings to find the best results (a sample batch script sketch is given after this list).

  2. Run an MPI profiler to profile the application: which 3 MPI calls are used the most? Present your work in the team's interview presentation slides.

  3. Visualize the results and create a short video that demonstrates the given input, using ParaView or any other tool.

  4. Bonus: Experiment with GPU runs over 4 GPUs on a single node. Show your work in the team's presentation.
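For task 1, a minimal Slurm batch sketch is given below, assuming 4 CPU nodes on Bridges-2. The partition, account, per-node rank/thread split, paths, and the launch options are assumptions or placeholders to be checked and adapted (for example against the output of "code_saturne run --help" and the case's run.cfg):

#!/bin/bash
#SBATCH --job-name=cs_bench
#SBATCH --nodes=4                    # task 1: 4 CPU nodes
#SBATCH --ntasks-per-node=32         # MPI ranks per node (assumption: tune this)
#SBATCH --cpus-per-task=4            # OpenMP threads per rank (assumption: tune this)
#SBATCH --time=01:00:00
#SBATCH --partition=RM               # assumption: regular-memory partition on Bridges-2
#SBATCH --account=cis240152p         # assumption: taken from the input path above

module load intel/2024.0 compiler hpcx/2.19
module load python/3.10

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# Placeholders: adapt the study/case layout and the install path to your setup.
cd /path/to/STUDY/CASE/DATA
/path/to/code_saturne/arch/Linux/bin/code_saturne run --nprocs $SLURM_NTASKS

Experimenting with MPI and OpenMP then amounts to varying --ntasks-per-node and --cpus-per-task (keeping their product equal to the number of cores per node) and comparing run times.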
