Getting Started with code_saturne for ISC25 SCC

Overview

code_saturne is a free, open-source, multi-purpose software package developed primarily by EDF for computational fluid dynamics (CFD) applications. It mainly solves the Navier-Stokes equations with scalar transport for 2D, 2D-axisymmetric, and 3D flows, whether steady or unsteady, laminar or turbulent, incompressible or weakly compressible, isothermal or not, using the Finite Volume Method (FVM). A newer discretisation based on the Compatible Discrete Operator (CDO) approach can be used for some other physics. A highly parallel coupling library (Parallel Locator Exchange - PLE) is also included in the distribution to couple code_saturne with software solving different physics, such as conjugate heat transfer and structural mechanics. In the incompressible solver, the pressure is solved with an integrated Algebraic Multi-Grid algorithm, and the velocity components and scalars are computed with conjugate gradient methods or Gauss-Seidel/Jacobi/Krylov solvers.

Website: https://www.code-saturne.org

 

Presentation

 

PDF:

 

Building version 8.3.1 from a tar.gz file

Version 8.3.1 of code_saturne is used. It is built from an adaptation of the GitHub repository and is to be downloaded here. A simple installer is provided for this version of the code. Note that this installation script is tailored for HPC machines ONLY, where the GUI is NOT built. On local machines or laptops the GUI should be built; this is done by changing the line "--disable-gui" to "--enable-gui" in the installer.

The source code and InstallHPC.sh are also available under /ocean/projects/cis240152p/shared/ISC25/code_saturne/SOURCE_CODE

 

Building code_saturne on PSC:

cd <path to build directory>
tar xfp /ocean/projects/cis240152p/shared/ISC25/code_saturne/SOURCE_CODE/code_saturne-8.3.1_ISC25.tar.gz
# Copy InstallHPC.sh to the current directory and edit as shown below.
cp /ocean/projects/cis240152p/shared/ISC25/code_saturne/SOURCE_CODE/InstallHPC.sh .

Sample InstallHPC.sh script with HPC-X:

...
mkdir -p $CODEBUILD
cd $CODEBUILD
module use /ocean/projects/cis240152p/shared/hpcx-2.22-gcc/modulefiles
module load hpcx
$CODESRC/configure \
  --enable-openmp \
  --disable-debug \
  --disable-shared \
  --disable-gui \
  --prefix=$CODEOPT \
  CC="mpicc" FC="mpif90" CXX="mpicxx" \
  CFLAGS="-O2" FCFLAGS="-O2" CXXFLAGS="-O2"
cd $CODEBUILD
make -j 8 && make install
cd $INSTALLATION
$CODEOPT/bin/code_saturne

The code is built as follows:

./InstallHPC.sh

If the installation is successful, it creates the code_saturne executable and the libraries needed for the code to work properly, and displays:

Usage: ./code_saturne-8.3.1/arch/Linux/bin/code_saturne <topic>

Topics:
  help
  studymanager
  smgr
  bdiff
  bdump
  compile
  config
  cplgui
  create
  gui
  parametric
  studymanagergui
  smgrgui
  trackcvg
  update
  up
  info
  run
  submit
  symbol2line

Options:
  -h, --help  show this help message and exit

 

Running the Tiny test case

The tiny test case tutorial (used to check that code_saturne is properly installed on a laptop, as it relies on the GUI) has been tailored for version 8.3.1 of code_saturne and is to be found here. It requires the following meshes to be downloaded from here:

It is also possible to check that the code runs correctly on an HPC machine, using 1 MPI task (or 2 tasks, though this will not be efficient); the whole code_saturne Study has to be copied across from the laptop to the HPC machine, and the job submitted through the queuing system.
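As a sketch of that submission step, a minimal Slurm batch script for a single-task run might look like the following. The job name, account, partition, and study/case paths are placeholders, not values prescribed by this page:

```shell
#!/bin/bash
#SBATCH --job-name=cs_tiny       # placeholder job name
#SBATCH --nodes=1
#SBATCH --ntasks=1               # 1 MPI task, as suggested above
#SBATCH --time=00:30:00
#SBATCH --account=cis240152p     # placeholder account
#SBATCH --partition=RM           # placeholder partition

# Run from the case's DATA directory of the copied Study;
# "code_saturne run" initializes and executes the case.
cd TINY_STUDY/CASE1/DATA         # placeholder study/case path
code_saturne run --nprocs 1
```

Submit it with `sbatch` from the HPC machine once the Study has been copied across.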

Note: The tiny input is for practice only, to check that the code is installed properly; it is not part of the assessment.

Tasks and Submissions

Prepare the simulation by following .

The MESH files to be used will be under this location on BRIDGES-2:

/ocean/projects/cis240152p/shared/ISC25/code_saturne/INPUT_FILES/KITCHEN/MESH
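As a hedged sketch of preparing a Study against these meshes (the study and case names KITCHEN/CASE1 are assumptions for illustration, not prescribed by this page):

```shell
# Create a study/case skeleton with code_saturne's "create" command.
code_saturne create -s KITCHEN -c CASE1

# Link the shared mesh files into the study's MESH directory so the
# setup can reference them without duplicating large files.
ln -s /ocean/projects/cis240152p/shared/ISC25/code_saturne/INPUT_FILES/KITCHEN/MESH/* KITCHEN/MESH/
```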

 

  1. Run the application on 4 CPU nodes, using MPI only and submit the results (run_solver.log, timer_stats.csv and performance.log files).

  2. Visualize the results on the laptop (NOT on BRIDGES-2); still on the laptop, create a short video of the given case with ParaView, submit it, and also show it in the team’s presentation.

  3. By changing the setup.xml file on the laptop using the GUI (NOT on BRIDGES-2), disable as much writing to disk as possible (postprocessing, restart/checkpointing, output to the run_solver.log/listing file, etc.), and compare the timings with the original timings (Task 1). Show this comparison in the team’s presentation.

  4. Experiment with MPI and OpenMP to find the best timings using the .xml file from Task 3. Show this comparison on the team’s presentation.

  5. Bonus: Experiment with GPU runs on 1 GPU (single node). Show your work in the team’s presentation.
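For Task 4, a hedged sketch of a hybrid MPI + OpenMP job script fragment follows. The node and task counts are examples only, to be tuned for the comparison; the build above used --enable-openmp, so the solver honors OMP_NUM_THREADS:

```shell
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=32      # MPI ranks per node (example value)
#SBATCH --cpus-per-task=4         # OpenMP threads per rank (example value)

# Let each MPI rank spawn one OpenMP thread per allocated core.
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PLACES=cores           # pin threads to cores
export OMP_PROC_BIND=close        # keep a rank's threads near each other

# --nprocs sets the number of MPI ranks for the run.
code_saturne run --nprocs $SLURM_NTASKS
```

Varying --ntasks-per-node and --cpus-per-task (keeping their product equal to the cores per node) gives the MPI/OpenMP trade-off curve requested in Task 4.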

 
