Building and running NEMO 4.0

What is NEMO?

NEMO, which stands for Nucleus for European Modelling of the Ocean, is a state-of-the-art modelling framework for research activities and forecasting services in ocean and climate sciences, developed in a sustainable way by a European consortium. (https://www.nemo-ocean.eu/)

These notes apply to NEMO version 4.0.

Installing NEMO

Instructions for installing and running NEMO can be found at https://forge.ipsl.jussieu.fr/nemo/chrome/site/doc/NEMO/guide/html/install.html

The first thing to do is to set up the software environment: compilers, MPI and support libraries. When building to run on the HPC-AI Advisory Council’s clusters, the requisite HDF5, NetCDF and FCM libraries are made available using environment modules.

module purge
module load intel/2019.5.281 hpcx/2.5.0
module load hdf5/1.10.4-i195h250 netcdf/4.6.2-i195h250 fcm/2017.10.0

One additional library required by NEMO is XIOS; the Installation Guide recommends using version 2.5 rather than the latest available version. Instructions for building and testing XIOS are found here. First, check out a copy of XIOS version 2.5 (the command below extracts it to a new directory called XIOS).

svn co http://forge.ipsl.jussieu.fr/ioserver/svn/XIOS/branchs/xios-2.5 XIOS
cd XIOS

Under XIOS there is an “arch” subdirectory containing files that determine how XIOS is built on various systems; you have to create "arch/arch-YOUR_ARCH.fcm" and "arch/arch-YOUR_ARCH.path" files for your intended system, typically based on the existing arch-*.* files. We have chosen to build XIOS and NEMO for the Lab’s Helios cluster, which has Intel Xeon Gold 6138 (Skylake) processors. Here are the contents of the two architecture-related files:

arch/arch-skl_hpcx.fcm

################################################################################
###################                Projet XIOS                ##################
################################################################################

%CCOMPILER      mpicc
%FCOMPILER      mpif90
%LINKER         mpif90 -nofor-main

%BASE_CFLAGS    -diag-disable 1125 -diag-disable 279
%PROD_CFLAGS    -O3 -D BOOST_DISABLE_ASSERTS
%DEV_CFLAGS     -g -traceback
%DEBUG_CFLAGS   -DBZ_DEBUG -g -traceback -fno-inline

%BASE_FFLAGS    -D__NONE__
%PROD_FFLAGS    -O3 -xCORE-AVX512
%DEV_FFLAGS     -g -O2 -traceback
%DEBUG_FFLAGS   -g -traceback

%BASE_INC       -D__NONE__
%BASE_LD        -lstdc++

%CPP            mpicc -EP
%FPP            cpp -P
%MAKE           make

arch/arch-skl_hpcx.path

NETCDF_INCDIR="-I/$NETCDF_DIR/include"
NETCDF_LIBDIR="-L/$NETCDF_DIR/lib"
NETCDF_LIB="-lnetcdff -lnetcdf"

MPI_INCDIR=""
MPI_LIBDIR=""
MPI_LIB=""

HDF5_INCDIR="-I/$HDF5_DIR/include"
HDF5_LIBDIR="-L/$HDF5_DIR/lib"
HDF5_LIB="-lhdf5_hl -lhdf5 -lhdf5 -lz"

OASIS_INCDIR=""
OASIS_LIBDIR=""
OASIS_LIB=""

The environment variables NETCDF_DIR and HDF5_DIR are set up by the second ‘module load’ command above. Building XIOS is done using a script that comes with it:
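The command is not reproduced above; XIOS ships with a make_xios build script, so a plausible invocation for the skl_hpcx arch files created above (the --job count is illustrative) would be:

# Build an optimised XIOS using the skl_hpcx arch files; --job sets the number of parallel build jobs
./make_xios --prod --arch skl_hpcx --job 8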

We are now ready to continue with the installation of NEMO.

As with XIOS, NEMO has an arch directory with files that determine how it is built on various platforms. You have to create your "arch/arch-YOUR_ARCH.fcm" file, and again it is advisable to base it on one of the existing arch-*.fcm files. For our build, intended to run on the Lab’s Helios cluster, here are the contents of the corresponding arch-*.fcm file.

arch-skl_hpcx.fcm
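The original file contents are not reproduced here; the following is only a sketch of what such an arch file for the Intel compilers with HPC-X MPI might contain, modelled on the arch files shipped with NEMO. All paths and flags below are assumptions to be adapted to your own site.

# Sketch of an arch/arch-skl_hpcx.fcm file for NEMO (illustrative values; adjust paths and flags)
%NCDF_HOME           $NETCDF_DIR
%HDF5_HOME           $HDF5_DIR
%XIOS_HOME           /path/to/your/XIOS

%NCDF_INC            -I%NCDF_HOME/include
%NCDF_LIB            -L%NCDF_HOME/lib -lnetcdff -lnetcdf -L%HDF5_HOME/lib -lhdf5_hl -lhdf5 -lz
%XIOS_INC            -I%XIOS_HOME/inc
%XIOS_LIB            -L%XIOS_HOME/lib -lxios -lstdc++

%CPP                 cpp -Dkey_nosignedzero
%FC                  mpif90 -c -cpp
%FCFLAGS             -i4 -r8 -O3 -xCORE-AVX512 -fp-model precise
%FFLAGS              %FCFLAGS
%LD                  mpif90
%LDFLAGS             -lstdc++
%FPPFLAGS            -P -traditional
%AR                  ar
%ARFLAGS             rs
%MK                  make
%USER_INC            %XIOS_INC %NCDF_INC
%USER_LIB            %XIOS_LIB %NCDF_LIB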

You will need to adjust the path on the line that starts with %XIOS_HOME to point to your own build directory for XIOS.

NEMO can be built to run in several different configurations; we will use GYRE_PISCES, which is listed along with several other reference configurations here. GYRE_PISCES involves three components and does not require any additional datasets to be downloaded. Here is the command to build NEMO with the above architecture file and the GYRE_PISCES configuration; the resulting build is created in the cfgs directory, in a subdirectory whose name is given as the argument to the -n option of the build command:
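The command itself is not reproduced above; with the arch name and configuration directory used in these notes (the -j job count is illustrative), it would be along the lines of:

# Build the GYRE_PISCES reference configuration with the skl_hpcx arch file into cfgs/hpcx_gyre_pisces
./makenemo -m skl_hpcx -r GYRE_PISCES -n hpcx_gyre_pisces -j 8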

The results of the build are under cfgs/hpcx_gyre_pisces, and the place to run NEMO is the EXP00 subdirectory within it; EXP00 already contains a set of input files specific to GYRE_PISCES as well as symbolic links to other input files shared by various configurations.

The executable, nemo.exe, is found under cfgs/hpcx_gyre_pisces/BLD/bin, but there will already be a symlink called nemo under EXP00, too.

Running NEMO

To run NEMO we need to be in the experiment directory:
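For the build above that is (relative to the top of the NEMO source tree):

cd cfgs/hpcx_gyre_pisces/EXP00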

There is a configuration file, namelist_cfg, that provides several parameters for the actual test or benchmark runs. As it stands, the problem size is very small and the model can be run on a very small number of processors. For our purposes, the variables we need to change to make the testing more interesting are nn_GYRE (which determines the size of the simulation domain) and ln_bench. Save a copy of the original file:
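For example (the name of the backup copy is arbitrary):

cp namelist_cfg namelist_cfg.orig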

Edit namelist_cfg with your favorite editor to change nn_GYRE from 1 to 25, and ln_bench from .false. to .true.
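After the edit, the &namusr_def block of namelist_cfg should look roughly like the sketch below (layout and comments are indicative; only nn_GYRE and ln_bench change, and any other entries in the block stay as they were):

&namusr_def    !  GYRE user defined namelist
   nn_GYRE     =   25       !  resolution multiplier: sets the size of the simulation domain
   ln_bench    = .true.     !  =T run in benchmark mode
/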

The most advisable way to run NEMO on the Helios cluster is to use a batch script. The scheduler in use on the Lab clusters is Slurm, so here is the Slurm script for running NEMO.
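The script itself is not reproduced above; a minimal sketch, assuming the same module set used for the build and purely illustrative job-name, node and rank counts, is:

#!/bin/bash
#SBATCH --job-name=nemo_gyre          # illustrative job name
#SBATCH --nodes=4                     # illustrative node count
#SBATCH --ntasks-per-node=40          # adjust to the cores available per node
#SBATCH --time=01:00:00

# Same environment as used for the build
module purge
module load intel/2019.5.281 hpcx/2.5.0
module load hdf5/1.10.4-i195h250 netcdf/4.6.2-i195h250 fcm/2017.10.0

# Launch from the experiment directory (cfgs/hpcx_gyre_pisces/EXP00)
mpirun ./nemo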

One thing to note is that the domain decomposition used by NEMO does not allow it to be run with an arbitrary number of MPI ranks. When a job is started with an unsupported number of ranks, it fails and NEMO suggests a list of valid rank counts near the failing choice:

The error message will appear in the ocean.output file; an example follows: