Getting Started with WRF-ARW

The Weather Research and Forecasting (WRF) Model is a mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs. It features multiple dynamical cores, a 3-dimensional variational (3DVAR) data assimilation system, and a software architecture that allows for computational parallelism and system extensibility. WRF is suitable for a broad spectrum of applications across scales ranging from meters to thousands of kilometers.


Getting Started

 

1. Clone the WRF Git repository

For the 4.0 branch, use:

$ git clone https://github.com/wrf-model/WRF.git ...

 

For the 3.9.1.1 branch, use:

$ git clone --branch V3.9.1.1 https://github.com/NCAR/WRFV3.git ...

 

2. Load prerequisite modules

$ module load intel/2019.1.144 hpcx/2.4.0 hdf5/1.10.4-hpcx-2.4.0 netcdf/4.6.2-hpcx-2.4.0

 

3. Add environment variables
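WRF's build system locates its I/O libraries through environment variables. A minimal sketch, assuming the modules above export NETCDF_DIR and HDF5_DIR (those variable names are assumptions; substitute the paths your site's modules actually provide):

$ export NETCDF=$NETCDF_DIR                  # root of the netCDF installation (required by configure)
$ export HDF5=$HDF5_DIR                      # root of the HDF5 installation
$ export WRFIO_NCD_LARGE_FILE_SUPPORT=1      # allow netCDF output files larger than 2 GB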

 

4. Configure
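Run the configuration script from the top-level WRF directory; it presents a menu of compiler and parallelism options:

$ ./configure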

 

There are four ways to configure WRF for any given platform (see the list below):

  • serial - to be run on a single processor

  • smpar - to be run with OpenMP on a single node

  • dmpar - to be run with MPI over one or more nodes

  • dm+sm - to be run as a hybrid of MPI across nodes and OpenMP within each node

 

You need to choose the desired build from the menu (e.g., 66 for Intel Broadwell) and then choose the domain nesting option (e.g., 1).

The nesting options are as follows:

  • basic: any nested domains are fixed (default)

  • preset moves: a nested domain may move according to fixed, predefined parameters

  • vortex-following: the nested domain follows the storm center, for hurricane forecasting

 

After configuring, you may edit the generated configure.wrf file to adjust the build as needed.

For AMD nodes:

  • Remove -xHost, which does not work on AMD processors

  • Change -xCORE-AVX2 to -march=core-avx2

 

For Skylake nodes:

  • -xCORE-AVX2 may need to be changed to -xCORE-AVX512 for Intel Skylake hosts

 

Other:

  • In configure.wrf, remove the “-f90=$(SFC)” option from the mpif90 command and the “-cc=$(SCC)” option from the mpicc command; HPC-X (Open MPI) does not need those options.
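As a sketch, after removing those options the relevant lines in configure.wrf would look roughly like this (the stock contents vary by WRF version and compile target, so treat this as illustrative; leave any other flags on those lines as they are):

DM_FC = mpif90
DM_CC = mpicc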

 

5. Compile
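The compile script takes the case type as its target; em_real is the target for real-data runs (an assumption here, since the target depends on your workflow). Saving the log helps when diagnosing build failures:

$ ./compile em_real 2>&1 | tee compile.log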

 

Note: if you compiled before, you need to clean the old compilation by running ./clean -a. Since that also removes configure.wrf, save a copy beforehand and rename it back (or rerun the configure step) before compiling again.

 

6. Check the run directory

 

For input, make sure you have:

  • Input file: wrfinput_d0* (at least one) or a restart file wrfrst*

  • Boundary file: wrfbdy_d0* (at least one)

  • namelist.input
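A quick way to verify these files are in place (assuming you launch from the default run/ directory):

$ cd run && ls -l wrfinput_d0* wrfbdy_d0* namelist.input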

 

7. Check the progress by running tail -f on rsl.out.0000
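For example:

$ tail -f rsl.out.0000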

Each timing line reports the wall-clock cost of one model step; in this example, every 15 seconds of weather forecast takes about 1.7 seconds of wall-clock time.
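The timing lines follow this general form (the values below are illustrative, not actual output):

Timing for main: time 2019-01-01_00:00:15 on domain 1: 1.70000 elapsed seconds
Timing for main: time 2019-01-01_00:00:30 on domain 1: 1.68000 elapsed seconds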

 

 

8. For actual forecast output and video creation, you need to enable history output in the namelist.input file.
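A minimal sketch of the relevant entries in the &time_control section of namelist.input (history_interval and frames_per_outfile are standard WRF namelist variables; the values here are illustrative):

&time_control
 history_interval   = 60,    ! minutes of model time between history writes
 frames_per_outfile = 1,     ! time levels per wrfout file
/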

Performance Measurements

 

As WRF is heavy in I/O (reading and optionally writing large files), there are two ways to measure performance here.

1. Use the Linux time command to measure the total time of the mpirun command, which includes both I/O time and compute time. For example:
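A sketch, assuming a 32-rank run (adjust -np and any MPI options to your job):

$ time mpirun -np 32 ./wrf.exe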

 

Check the real row of the output for the total wall-clock time of the application.

 

2. In case you are not interested in the I/O time, but only in the compute time, you can sum all the timing rows in the rsl.out.0000 file. For example:
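One way to total the elapsed seconds, assuming the "Timing for main" line format shown above (verify the field index against your own output before relying on it):

$ grep "Timing for main" rsl.out.0000 | awk '{sum += $9} END {print sum, "compute seconds"}'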

 

A run command could be:
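A sketch of one possible invocation on an HPC-X (Open MPI) cluster; the rank count, binding options, and the UCX_NET_DEVICES value are assumptions to adapt to your system:

$ mpirun -np 64 --bind-to core --map-by core -x UCX_NET_DEVICES=mlx5_0:1 ./wrf.exe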