
Overview

Neko is a portable framework for high-order spectral element flow simulations. Written in modern Fortran, Neko adopts an object-oriented approach, allowing multi-tier abstractions of the solver stack and facilitating various hardware backends, ranging from general-purpose processors and CUDA- and HIP-enabled accelerators to SX-Aurora vector processors. Neko has its roots in the spectral element code Nek5000 from UChicago/ANL, from which much of the naming, code structure, and numerical methods are adopted.

For more information, please visit https://neko.cfd.

Building and Running example

Download Neko v0.8.0-rc1, https://github.com/ExtremeFLOW/neko/archive/refs/tags/v0.8.0-rc1.tar.gz.

To build Neko, you will need a Fortran compiler supporting the Fortran-08 standard, autotools, pkg-config, a working MPI installation supporting the Fortran 2008 bindings (mpi_f08), BLAS/LAPACK and JSON-Fortran. Detailed installation instructions can be found in the Neko manual.

Note: For the in-person event, a Fortran compiler supporting the entire Fortran 2008 standard is required; compilers with only partial support cannot be used. Accordingly, it is not allowed to apply any of the patches located in patches/ or to modify src/common/signal.f90.
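Before building, it can help to confirm the required tools are on the PATH. A minimal sanity check (the tool names below assume an Intel MPI toolchain, as in the sample scripts; adjust to your site's modules):

```shell
#!/bin/bash
# Quick sanity check of build prerequisites (tool names are examples;
# adjust to your site's toolchain/modules)
for tool in mpiifort mpiicc cmake pkg-config autoreconf; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found:   $tool"
  else
    echo "MISSING: $tool"
  fi
done
```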

Sample build script:

Build json-fortran:

git clone --depth=1 https://github.com/jacobwilliams/json-fortran.git
cd json-fortran && mkdir b && cd b
cmake -DCMAKE_INSTALL_PREFIX=/path/to/installation -DUSE_GNU_INSTALL_CONVENTION=ON ..
make install
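After installing, one can verify that pkg-config can locate json-fortran before configuring Neko. A small check, assuming the install prefix from the cmake example above (the lib64 subdirectory may be lib on some systems):

```shell
# Check that json-fortran is discoverable via pkg-config;
# the prefix mirrors the cmake example above (may be lib/ instead of lib64/)
export PKG_CONFIG_PATH=/path/to/installation/lib64/pkgconfig:${PKG_CONFIG_PATH}
if pkg-config --exists json-fortran; then
  echo "json-fortran $(pkg-config --modversion json-fortran) found"
else
  echo "json-fortran not found; check PKG_CONFIG_PATH"
fi
```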

Build Neko:

#!/bin/bash
# Load Intel Compilers and MPI libraries.
export MPIFC=mpiifort
export CC=mpiicc
export FC=$MPIFC
export PKG_CONFIG_PATH=/path/to/jsonfortran/lib64/pkgconfig:${PKG_CONFIG_PATH}

./regen.sh
./configure --prefix=<path>
make
make install

Sample run script:

# Unpack tgv.zip
cd tgv
# Load Intel Compilers and MPI libraries.
export MPIFC=mpiifort
export FC=$MPIFC

export LD_LIBRARY_PATH=<path>/json-fortran/lib64:$LD_LIBRARY_PATH
<path>/neko/bin/makeneko  ${TEST}.f90
mpirun -np <NPROC> -genv USE_UCX=1 -genv UCX_NET_DEVICES mlx5_0:1 -genv FI_PROVIDER=mlx ./neko tgv_Re1600.case

Sample output:

    _  __  ____  __ __  ____
   / |/ / / __/ / //_/ / __ \
  /    / / _/  / ,<   / /_/ /
 /_/|_/ /___/ /_/|_|  \____/

 (version: 0.8.0-rc1)
 (build: 2024-04-10 on x86_64-pc-linux-gnu using cray)


 -------Job Information--------
 Start time: 10:41 / 2024-04-10
 Running on: 256 MPI ranks
 CPU type  : AMD EPYC 7742 64-Core Processor
 Bcknd type: CPU
 Real type : double precision

 -------------Case-------------
 Reading case file tgv_Re1600.case

   -------------Mesh-------------
   Reading a binary Neko file 32768.nmsh
...
   -----Material properties------
   Read non-dimensional values:
...
   -----Starting simulation------
   T  : [  0.0000000E+00,  0.2000000E+01)
   dt :    0.5000000E-03
...
   ----------------------------------------------------------------
   t =   0.0000000E+00                                  [   0.00% ]
   ----------------------------------------------------------------
   Time-step:      1
    CFL:  0.3970812E-01 dt:  0.5000000E-03

...

   ----------------------------------------------------------------
   t =   0.2683500E+01                                  [  26.84% ]
   ----------------------------------------------------------------
   Time-step:   5368
    CFL:  0.4915199E-01 dt:  0.5000000E-03

       ------------Fluid-------------
       Projection Pressure
       Proj. vec.:   Orig. residual:
                 2     0.4020491E-05
       Pressure
       Iterations:   Start residual:     Final residual:
                 1     0.2061696E-06       0.8489571E-07
       X-Velocity
       Iterations:   Start residual:     Final residual:
                 2     0.8948043E-03       0.1355196E-08
       Y-Velocity
       Iterations:   Start residual:     Final residual:
                 2     0.8948043E-03       0.1355196E-08
       Z-Velocity
       Iterations:   Start residual:     Final residual:
                 2     0.8240080E-03       0.1270342E-08
       Elapsed time (s):  0.8980917E+03 Step time:  0.1604918E+00

       --------Postprocessing--------
   ! stop at job limit >>>
   ! saving checkpoint >>>
   Normal end.

GPU support

To compile Neko with GPU support, please follow the instructions in the manual (either Compiling Neko for NVIDIA GPUs or Compiling Neko for AMD GPUs).

Note: A Neko installation can only support one backend. Thus, to run both CPU and GPU experiments, two different builds and installations are necessary.
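A common pattern for keeping both backends available is two builds from the same source tree with different install prefixes. A sketch, assuming a CUDA system (the prefixes and the GPU configure flag are illustrative; consult the manual for the exact options on your machine):

```shell
#!/bin/bash
# Sketch: separate CPU and GPU builds of the same Neko source tree.
# Prefixes and the GPU configure flag are illustrative; see the manual.
cd neko
./regen.sh
./configure --prefix=$HOME/neko-cpu
make && make install
make distclean
./configure --prefix=$HOME/neko-gpu --with-cuda=/usr/local/cuda
make && make install
```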

Task & Submission

The aim of this task is to compute as many time steps as possible for the given flow case within a time limit of at most 15 minutes.

  1. Run Neko with the given input (neko tgv_Re1600.case) on either CPUs or GPUs.

  2. Submit your best result (standard output). Do not submit binary output files or multiple results.
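When comparing runs, the last completed time step can be pulled straight from the standard output (the log format is as in the sample output above; the log filename here is an example):

```shell
# Print the last "Time-step:" number reported in a Neko log
# (log filename is an example)
awk '/Time-step:/ { n = $2 } END { print n }' neko_run.log
```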

Note: You are allowed to experiment with different linear solvers in Neko (see the manual) to achieve the fastest runtime. However, not all of them are guaranteed to work for the given case or to support all hardware backends in Neko. You are also allowed to experiment with different configuration options when building Neko, e.g., enabling device-aware MPI. However, using single precision is not allowed.
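For orientation, solver choices live in the JSON case file. A hypothetical fragment of the fluid section showing where the solver type and preconditioner are set (the keys follow the case-file reference in the Neko manual; the values shown are only illustrative, not tuned recommendations for this case):

```json
"fluid": {
    "velocity_solver": {
        "type": "cg",
        "preconditioner": "jacobi",
        "absolute_tolerance": 1e-8,
        "max_iterations": 800
    },
    "pressure_solver": {
        "type": "gmres",
        "preconditioner": "hsmg",
        "projection_space_size": 20,
        "absolute_tolerance": 1e-7,
        "max_iterations": 800
    }
}
```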
