...
Copy build_examples/build_cpu_mpi-only_intel_ubuntu20.04.sh to the repository root directory as rebuild.sh:
```bash
cp build_examples/build_cpu_mpi-only_intel_ubuntu20.04.sh rebuild.sh
```
Build HDF5 and edit rebuild.sh to point to your HDF5 installation.
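A minimal sketch of a local HDF5 build, assuming the source tarball has already been downloaded and an install prefix of $HOME/opt/hdf5 (the version and prefix are our placeholders, not prescribed by POT3D):

```bash
# Build HDF5 with the same Intel compilers used for POT3D.
tar xf hdf5-1.12.1.tar.gz
cd hdf5-1.12.1
CC=icc FC=ifort ./configure --prefix=$HOME/opt/hdf5 --enable-fortran
make -j 8 && make install
```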
For example, change FC="mpif90" to FC="${MPIFC}" so the build uses the MPI Fortran wrapper selected by your environment.
Execute the build script.
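For instance (the bin/ location of the resulting binary matches the run example further below, but verify against your copy of the script):

```bash
chmod +x rebuild.sh   # the copied example script may not be executable
./rebuild.sh
ls bin/pot3d          # the build is expected to place the binary in bin/
```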
Sample build script for Fritz:
We used modules for MPI and HDF5 on Fritz.
```bash
# Compiler and math-library modules (alternative name seen: compiler/2022.1.0):
module load intel/2022
module load mkl/2022.0.2

# Select the MPI implementation:
# MPI=openmpi-4.1.2
# MPI=impi-2021.6.0
MPI=intelmpi-2021.7.0

if [[ "$MPI" =~ ^intel ]]; then
    export MPIFC=mpiifort
elif [[ "$MPI" =~ hpcx ]]; then
    module load hpcx/2.13.0-intel2021.4.0
    export OMPI_MPICC=icc
    export OMPI_MPICXX=icpc
    export OMPI_MPIFC=ifort
    export OMPI_MPIF90=ifort
    export MPIFC=mpif90
else
    export OMPI_MPICC=icc
    export OMPI_MPICXX=icpc
    export OMPI_MPIFC=ifort
    export OMPI_MPIF90=ifort
    export MPIFC=mpif90
fi

# HDF5 module built with the matching compiler/MPI combination
# (older alternatives: hdf5/1.10.7-impi-intel, hdf5/1.10.7-ompi-intel):
module load hdf5/1.12.1-$MPI

#################################################################
# Location of local hdf5 installed with same compiler being used for POT3D:
HDF5_INCLUDE_DIR="$HDF5_ROOT/include"
HDF5_LIB_DIR="$HDF5_ROOT/lib"
# Fortran HDF5 library flags (these can be version dependent):
HDF5_LIB_FLAGS="-lhdf5_fortran -lhdf5_hl_fortran -lhdf5 -lhdf5_hl"
...
```
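Assuming the snippet above is saved as load_env_fritz.sh (a name we chose), source it in the same shell before building so the loaded module environment persists:

```bash
source ./load_env_fritz.sh   # 'source' keeps the modules loaded in this shell
./rebuild.sh
```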
Running Example
```bash
# NP (number of MPI ranks) and MPIFLAGS (extra mpirun options) are assumed
# to be set in the environment before this script runs.
cd testsuite
TEST=small
POT3D_HOME=$PWD/..

cp ${POT3D_HOME}/testsuite/${TEST}/input/* ${POT3D_HOME}/testsuite/${TEST}/run/
cd ${POT3D_HOME}/testsuite/${TEST}/run

echo "Running POT3D with $NP MPI ranks..."
CMD="mpirun -np $NP $MPIFLAGS $POT3D_HOME/bin/pot3d"
/usr/bin/time $CMD > pot3d.out
echo "Done!"

runtime=($(tail -n 5 timing.out | head -n 1))
echo "Wall clock time: ${runtime[6]} seconds"
echo " "

# Validate run:
${POT3D_HOME}/scripts/pot3d_validation.sh pot3d.out ${POT3D_HOME}/testsuite/${TEST}/validation/pot3d.out
```
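Assuming the snippet above is saved as run_test.sh (our name), run it from the POT3D root directory, for example with 8 ranks:

```bash
export NP=8          # number of MPI ranks
export MPIFLAGS=""   # extra mpirun options, if any
bash ./run_test.sh
```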
...
Profile the input
Use any of the remote clusters to run an MPI profile (with IPM or any other MPI profiler) on 4 nodes at full PPN with the given input. Submit the profile as a PDF to the team's folder.
In your presentation, include the three most-used MPI calls and the total time spent in MPI.
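A minimal IPM sketch, assuming the cluster provides IPM as a module with a preloadable library (the module name and IPM_ROOT path are assumptions; adapt them to the cluster):

```bash
module load ipm                          # assumption: an IPM module exists
export IPM_REPORT=full                   # request the full per-call breakdown
export IPM_LOG=full                      # write the full XML log for post-processing
LD_PRELOAD=$IPM_ROOT/lib/libipm.so \
  mpirun -np $NP ./bin/pot3d > pot3d.out # NP = 4 nodes x full PPN
ipm_parse -html *.ipm.xml                # convert the XML log into an HTML report
```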
Run POT3D with the isc2023 input on both the PSC Bridges-2 and FAU Fritz CPU clusters.
Your job should converge and print output like the following:

```
 ### The CG solver has converged.
 Iteration:    25112    Residual:   9.972489313728662E-13
```
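A minimal Slurm batch sketch for a 4-node, full-PPN run; the cores-per-node value and the module-setup helper are assumptions and differ between Fritz and Bridges-2:

```bash
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=72   # assumption: full PPN on Fritz; use the node's core count on Bridges-2
#SBATCH --time=01:00:00
#SBATCH --job-name=pot3d-isc2023

source ./load_env.sh           # hypothetical helper that loads the MPI/HDF5 modules
# Run from a directory containing the isc2023 input (pot3d.dat etc.):
srun $POT3D_HOME/bin/pot3d > pot3d.out
```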
Run POT3D on the FAU Alex cluster using the A100 GPU partition. Use only 4 GPUs for the run. Submit the results to the team's folder.
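A minimal Slurm sketch for the 4-GPU run, assuming one MPI rank per GPU and a partition named a100 (both assumptions; check Alex's documentation):

```bash
#!/bin/bash
#SBATCH --partition=a100       # assumption: the A100 partition name on Alex
#SBATCH --nodes=1
#SBATCH --gres=gpu:a100:4      # request 4 GPUs
#SBATCH --ntasks-per-node=4    # one MPI rank per GPU
#SBATCH --time=01:00:00

mpirun -np 4 $POT3D_HOME/bin/pot3d > pot3d.out
```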
Submission and Presentation:
- Submit all build scripts, run scripts, and inputs and outputs (pot3d.dat, pot3d.out, timing.out).
- Do not submit the large solution data files or the source code.
- Prepare slides for the team's interview based on your work on this application.
...