Overview
code_saturne is a free, open-source, multi-purpose software package developed primarily by EDF for computational fluid dynamics (CFD) applications. It mainly solves the Navier-Stokes equations with scalar transport for 2D, 2D-axisymmetric, and 3D flows, whether steady or unsteady, laminar or turbulent, incompressible or weakly compressible, isothermal or not, using the Finite Volume Method (FVM). A newer discretisation based on the Compatible Discrete Operator (CDO) approach can be used for some other physics. A highly parallel coupling library (Parallel Location and Exchange - PLE) is also available in the distribution to couple code_saturne with other software handling different physics, such as conjugate heat transfer and structural mechanics. In the incompressible solver, the pressure is solved using an integrated Algebraic Multigrid (AMG) algorithm, and the velocity components and scalars are computed by conjugate gradient methods or Gauss-Seidel/Jacobi/Krylov solvers.
Website: https://www.code-saturne.org/cms/web/
Building version 8.3.0
Version 8.3.0 of code_saturne is used. It is built from its GitHub repository, to be found here. After cloning the repository and creating the configure file, a simple installer script is used:
(attachment: InstallHPC.sh)
This version assumes that the following modules are loaded:
module load intel/2024.0 compiler mkl hpcx/2.19
module load python/3.10
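A quick way to check that the environment is set up correctly is to list the loaded modules:
module list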
The GUI is disabled by default in the script; to build it, change “--disable-gui” to “--enable-gui”.
Assuming the ifc/ifx compilers are used, the end of the InstallHPC.sh script reads:
$CODESRC/configure \
  --enable-openmp \
  --disable-debug \
  --disable-shared \
  --disable-gui \
  --prefix=$CODEOPT \
  CC="mpicc" FC="mpif90" CXX="mpicxx" \
  CFLAGS="-O2" FCFLAGS="-O2" CXXFLAGS="-O2"
cd $CODEBUILD
make -j 8 && make install
cd $INSTALLPATH
$CODEOPT/bin/code_saturne
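For reference, here is a minimal sketch of how the path variables used by the script could be set beforehand; the variable names come from the script above, but the values are assumptions to be adapted to your own directory layout:
# Hypothetical layout - adapt all paths to your own folders
export CODESRC=$HOME/code_saturne                   # cloned sources (assumption)
export CODEBUILD=$CODESRC/build                     # separate build directory (assumption)
export CODEOPT=$HOME/Y_P/code_saturne/arch/Linux    # install prefix (assumption)
export INSTALLPATH=$HOME/Y_P                        # installation folder (assumption)
mkdir -p $CODEBUILD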
The code is installed in the folder of your choice (called here Y_P) and is built as follows:
git clone https://github.com/code-saturne/code_saturne.git
cd code_saturne
git switch v8.3
./sbin/bootstrap
cd ../
./InstallHPC.sh
If the installation is successful, it should create code_saturne/arch/Linux/bin/code_saturne and the other libraries needed for the code to work properly, and running the executable shows:
./code_saturne/arch/Linux/bin/code_saturne
Usage: ./code_saturne/arch/Linux/bin/code_saturne <topic>
Topics: |
help
studymanager
smgr
bdiff
bdump
compile
config
cplgui
create
gui
parametric
studymanagergui
smgrgui
trackcvg
update
up
info
run
submit
symbol2line
Options:
-h, --help show this help message and exit
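As an example of these topics, a new study and case skeleton can be created with the create command; the study and case names below are placeholders, and the exact options should be checked with code_saturne create --help:
./code_saturne/arch/Linux/bin/code_saturne create --study MY_STUDY CASE1   # placeholder names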
Running the Tiny test case
The tiny test case tutorial (used to check that code_saturne is properly installed on a laptop, as it relies on the GUI) has been tailored for version 8.3.0 of code_saturne, and is to be found here in the two attached files:
(attachments: tiny test case tutorial files)
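Since the tutorial relies on the GUI, the laptop installation must have been built with --enable-gui; the GUI is then launched with the gui topic from the usage list above:
./code_saturne/arch/Linux/bin/code_saturne gui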
Checking that the code runs fine on an HPC machine is also possible, using 1 MPI task (or 2 tasks, though this will not be efficient); the whole code_saturne study has to be copied across from the laptop to the HPC machine, and the job submitted through the queuing system, as sketched below.
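A minimal Slurm batch script for such a check might look as follows; the job name, time limit, and study path are assumptions, and the number of MPI tasks is taken from the batch allocation:
#!/bin/bash
#SBATCH --job-name=cs_tiny                # assumption
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=2
#SBATCH --time=00:30:00                   # assumption
module load intel/2024.0 compiler mkl hpcx/2.19
module load python/3.10
cd $HOME/TINY_STUDY/CASE1                 # path to the copied study/case (assumption)
$CODEOPT/bin/code_saturne run             # CODEOPT as defined during the installation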
Note: the tiny input is just for practice purposes; there is no need to submit anything.
Tasks and Submissions
(Input for the task will be published later on.) The input will be in this location on Bridges-2:
/ocean/projects/cis240152p/shared/ISC25/code_saturne
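For example, the input can first be copied to a personal working directory (the destination path is an assumption):
cp -r /ocean/projects/cis240152p/shared/ISC25/code_saturne $HOME/isc25_cs   # destination path is an assumption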
Run the application on 4 CPU nodes and submit the results. Experiment with the MPI/OpenMP balance to find the best performance, for instance as sketched below.
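One way to explore the MPI/OpenMP balance is to vary the number of threads per rank while keeping the total core count fixed; the 128 cores per node and the sweep values below are assumptions, and job_cs.sh stands for a batch script like the one sketched earlier:
# Hypothetical sweep over 4 nodes, assuming 128 cores per node
for t in 1 2 4 8; do
  export OMP_NUM_THREADS=$t
  sbatch --nodes=4 --ntasks=$((4*128/t)) --cpus-per-task=$t job_cs.sh   # ranks x threads = cores
done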
Run an MPI profiler on the application: which 3 MPI calls are used the most? Present your work in the team's interview presentation slides.
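One lightweight approach, assuming the mpiP profiling library is available on the machine, is to preload it so that it intercepts the MPI calls and writes a report listing the time spent per call:
export LD_PRELOAD=/path/to/libmpiP.so     # path to mpiP is an assumption
$CODEOPT/bin/code_saturne run             # the resulting .mpiP report ranks the MPI calls
unset LD_PRELOAD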
Visualize the results and create a short video that demonstrates the given input, using ParaView or any other tool.
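One possible workflow is to render the animation frames off-screen with ParaView's pvbatch and assemble them into a video with ffmpeg; render_frames.py is a hypothetical script that writes frame_0000.png, frame_0001.png, and so on:
pvbatch render_frames.py                                       # hypothetical ParaView batch script
ffmpeg -r 25 -i frame_%04d.png -c:v libx264 -pix_fmt yuv420p tiny_case.mp4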
<TBD>
Bonus: Experiment with GPU runs over 4 GPUs on a single node. Show your work in the team's presentation.
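Recent code_saturne versions provide CUDA acceleration for parts of the solver; a GPU-enabled build would add a CUDA option at configure time, along the lines of the sketch below (the exact flag and module name are assumptions to be confirmed with ./configure --help and module avail):
module load cuda                          # module name is an assumption
$CODESRC/configure --enable-cuda ...      # flag to be confirmed; plus the options shown above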