Overview
Neko is a portable framework for high-order spectral element flow simulations. Written in modern Fortran, Neko adopts an object-oriented approach, allowing multi-tier abstractions of the solver stack and supporting various hardware backends, ranging from general-purpose processors and CUDA/HIP-enabled accelerators to SX-Aurora vector processors. Neko has its roots in the spectral element code Nek5000 from UChicago/ANL, from which much of the naming, code structure and numerical methods have been adopted.
For more information, please visit https://neko.cfd.
Note: The page may be changed until the competition starts; make sure to follow up until the opening ceremony.
Neko presentation to the teams:
Presentation file:
Building and Running example
Download Neko v0.8.0-rc1 from https://github.com/ExtremeFLOW/neko/archive/refs/tags/v0.8.0-rc1.tar.gz.
To build Neko, you will need a Fortran compiler supporting the Fortran-08 standard, autotools, pkg-config, a working MPI installation supporting the Fortran 2008 bindings (mpi_f08), BLAS/LAPACK, and JSON-Fortran. Detailed installation instructions can be found in the Neko manual.
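For example, the release can be fetched and unpacked as follows (a minimal sketch; the pkg-config module name for JSON-Fortran is an assumption and the sanity checks are optional):

# Fetch and unpack the Neko v0.8.0-rc1 release
wget https://github.com/ExtremeFLOW/neko/archive/refs/tags/v0.8.0-rc1.tar.gz
tar xzf v0.8.0-rc1.tar.gz
cd neko-0.8.0-rc1

# Optional sanity checks of the toolchain (the "json-fortran" module name is an assumption)
mpiifort -v
pkg-config --modversion json-fortran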
Sample build script on PSC:
#!/bin/bash
# Set up the Intel oneAPI compiler and MPI environments
source /jet/packages/oneapi/v2023.2.0/compiler/2023.2.1/env/vars.sh
source /jet/packages/oneapi/v2023.2.0//mpi/2021.10.0/env/vars.sh

# Build with the Intel MPI compiler wrappers
export MPIFC=mpiifort
export CC=mpiicc
export FC=$MPIFC

./regen.sh
./configure --prefix=<path>
make
make install
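If JSON-Fortran (or other dependencies) live in non-standard locations, configure must be able to find them; one common approach, assuming pkg-config discovery and an illustrative install path, is to extend PKG_CONFIG_PATH before running ./configure:

# Illustrative: point pkg-config at a local JSON-Fortran installation
export PKG_CONFIG_PATH=<path to json-fortran>/lib/pkgconfig:$PKG_CONFIG_PATH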
Note: Intel MPI is recommended; OpenMPI may not be stable, but you are welcome to try it.
Sample run script: (Update)
#!/bin/bash
#SBATCH -p RM
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=128
#SBATCH -J neko
#SBATCH --time=4:00:00
#SBATCH --exclusive

module purge
source /jet/packages/oneapi/v2023.2.0/compiler/2023.2.1/env/vars.sh
source /jet/packages/oneapi/v2023.2.0/mkl/2023.2.0/env/vars.sh
HCA=mlx5_0:1
source /jet/packages/oneapi/v2023.2.0//mpi/2021.10.0/env/vars.sh

# Select the UCX transport (set USE_UCX=0 to fall back to the non-mlx provider)
USE_UCX=1
MPIFLAGS=""
if [ $USE_UCX -ne 0 ]; then
    MPIFLAGS+="-genv USE_UCX=$USE_UCX "
    MPIFLAGS+="-genv UCX_NET_DEVICES ${HCA} "
    MPIFLAGS+="-genv FI_PROVIDER=mlx "
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<path to ucx>/lib
else
    MPIFLAGS+="-genv FI_PROVIDER=^mlx "
fi

# Build the case-specific executable from the user file and run the TGV case
cd tgv
<path to neko>/bin/makeneko ${TEST}.f90
mpirun -np <# procs> $MPIFLAGS ./neko tgv_Re1600.case
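Assuming the script above is saved as run.sh (the file name is illustrative), it can be submitted and monitored with the standard Slurm commands:

sbatch run.sh        # submit the job to the scheduler
squeue -u $USER      # check its status in the queue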
Sample output: (Update)
   _  __ ____ __ __  ____
  / |/ / / __/ / //_/ / __ \
 /    / / _/  / ,<   / /_/ /
/_/|_/ /___/ /_/|_|  \____/

(version: 0.6.1)
(build: 2023-09-14 on x86_64-pc-linux-gnu using intel)

-------Job Information--------
Start time: 18:44 / 2023-09-18
Running on: 128 MPI ranks
CPU type  : Intel(R) Xeon(R) Platinum 8280 CPU @ 2.70GHz
Bcknd type: CPU
Real type : double precision

-------------Case-------------
Reading case file tgv.case

...

--------Postprocessing--------

--------Writer output---------
File name      : field.fld
Output number  : 2
Writing at time: 2.000500
Output time (s): 2.501281

Normal end.
GPU support:
To compile Neko with GPU support, please follow the relevant instructions in the manual (Compiling Neko for NVIDIA GPUs or Compiling Neko for AMD GPUs).
Note: A Neko installation can only support one backend. Thus, to run both CPU and GPU experiments, two different builds and installations are necessary.
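As a rough illustration, a second GPU-enabled build might look like the sketch below (the --with-cuda flag and all paths are placeholders to be checked against the manual; an AMD build would use the corresponding HIP option instead):

#!/bin/bash
# Sketch of a separate CUDA-enabled installation (paths are placeholders; verify the
# configure options against "Compiling Neko for NVIDIA GPUs" in the manual)
export MPIFC=mpiifort
export CC=mpiicc
export FC=$MPIFC
./regen.sh
./configure --prefix=<path to GPU install> --with-cuda=<path to cuda>
make
make install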
Tasks & Submissions
Run Neko with the given input (neko tgv_Re1600.case) on 4 CPU nodes and submit the results to the team’s folder (standard output, field0.f00000, field0.f00001 and field0.nek5000).
Note: the small input is for you to play around with.
Run Neko with the given input on 4 GPUs (V100) and submit the results to the team’s folder (see the instructions for GPU support above, and the additional information provided with the input file).
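For the GPU task, a common pattern is one MPI rank per GPU, i.e. 4 ranks for 4 V100s; the lines below are only a sketch, and the exact Slurm flags, GPU binding and launch command depend on the system and on the additional information provided with the input:

# Sketch only: launch with one MPI rank per V100, using the GPU-enabled build
cd tgv
<path to gpu neko>/bin/makeneko ${TEST}.f90
mpirun -np 4 ./neko tgv_Re1600.case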
Note: You are allowed to experiment with different linear solvers in Neko (see the manual) to achieve the fastest runtime. However, not all of them are guaranteed to work for the given case or to support all hardware backends in Neko.
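For orientation, the solver settings live in the JSON case file; a hypothetical fragment is shown below (the key names and values are assumptions based on our reading of the manual and may differ between Neko versions, so verify against the case-file documentation before use):

"fluid": {
    "velocity_solver": {
        "type": "cg",
        "preconditioner": "jacobi",
        "absolute_tolerance": 1e-8
    },
    "pressure_solver": {
        "type": "gmres",
        "preconditioner": "hsmg",
        "absolute_tolerance": 1e-5
    }
}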