...
To compile Elmer, follow this wiki page.
Build Procedure
Important: Please note that the ElmerCSC wiki page has the latest compilation options and procedures.
https://github.com/ElmerCSC/elmerfem/wiki/Compilation
1. Retrieve the source and unpack it. Here we clone the whole repository:
...
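The elided commands presumably perform a standard clone; a minimal sketch, with the repository URL inferred from the wiki link above and the scc20 tag taken from the note below:
Code Block:
$ git clone https://github.com/ElmerCSC/elmerfem.git
$ cd elmerfem
$ git checkout scc20   # competition tag, see note below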
For the student cluster competition, we have tagged a particular version (scc20) to be used, which can be downloaded directly from https://
...
...
2. Create a build directory.
...
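A minimal sketch of this step, using the elmer-build directory name assumed later in the configure step:
Code Block:
$ mkdir elmer-build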
3. Create a file with the compilation options, e.g. for the GNU compilers:
Code Block:
SET(CMAKE_INSTALL_PREFIX "$ENV{PWD}/../elmer-install" CACHE PATH "")
SET(CMAKE_C_COMPILER "gcc" CACHE STRING "")
SET(CMAKE_CXX_COMPILER "g++" CACHE STRING "")
SET(CMAKE_Fortran_COMPILER "gfortran" CACHE STRING "")
SET(WITH_ElmerIce TRUE CACHE BOOL "")
SET(WITH_MPI TRUE CACHE BOOL "")
SET(WITH_OpenMP TRUE CACHE BOOL "")
SET(WITH_LUA TRUE CACHE BOOL "")
Here you can also choose the installation directory for Elmer by modifying the CMAKE_INSTALL_PREFIX variable. Save the options to a file, e.g. elmer-opts.cmake, which is passed to cmake in the configure step below.
For the Intel compilers, the options might instead look like:
Code Block:
SET(CMAKE_INSTALL_PREFIX "../elmer-install" CACHE PATH "")
SET(CMAKE_C_COMPILER "icc" CACHE STRING "")
SET(CMAKE_CXX_COMPILER "icpc" CACHE STRING "")
SET(CMAKE_Fortran_COMPILER "ifort" CACHE STRING "")
SET(WITH_ElmerIce TRUE CACHE BOOL "")
SET(WITH_MPI TRUE CACHE BOOL "")
SET(WITH_OpenMP TRUE CACHE BOOL "")
SET(WITH_LUA TRUE CACHE BOOL "")
4. Load the relevant modules, e.g.
...
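Module names are site-specific; on a GNU/Open MPI system this step might look like the following (the module names here are assumptions, not from the original):
Code Block:
$ module load gcc openmpi cmake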
5. Run CMake to configure the build
Code Block:
$ cd elmer-build
# For Elmer use:
$ cmake -C ../elmer-opts.cmake ../elmerfem
The cmake tool will now report any missing libraries. The build configuration can be further edited using e.g. the ccmake tool or the cmake-gui application.
6. Build and install
Code Block:
$ make -j8 install
7. Add the Elmer bin directory to your environment
Code Block:
$ export PATH=$PATH:<elmer-install>/bin
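To verify that the solver now resolves from the updated PATH:
Code Block:
$ which ElmerSolver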
Testing Elmer
Basic/Quick tests
Go to the build directory and run ctest -L quick or ctest -L fast for some basic tests.
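For example, from the build directory:
Code Block:
$ cd elmer-build
$ ctest -L quick   # or: ctest -L fast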
Serial Testing of Elmer (Basic)
1. Download the LID3D.tgz file and unpack it. You should see three problem sizes (LID3D_extrude_20k, _50k and _100k) as well as the SIF file.
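A minimal unpacking sketch, assuming a standard gzipped tarball that extracts into the current directory:
Code Block:
$ tar xzf LID3D.tgz
After unpacking, the directory should contain: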
Code Block:
$ ls
ELMERSOLVER_STARTINFO LID3D.sif LID3D_extrude_100k LID3D_extrude_20k LID3D_extrude_50k README runelmer.sh
2. Check which mesh database the SIF file points to (you can change it).
Code Block:
$ grep "Mesh DB" LID3D.sif
Mesh DB "." "LID3D_extrude_20k"
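To switch to a different mesh, edit the Mesh DB line in any editor, or e.g. with sed (a sketch; the 50k mesh here is just an example target):
Code Block:
$ sed -i 's/LID3D_extrude_20k/LID3D_extrude_50k/' LID3D.sif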
3. Verify that ELMERSOLVER_STARTINFO points to the LID3D.sif file.
Code Block:
$ cat ELMERSOLVER_STARTINFO
LID3D.sif
Alternatively, one can launch ElmerSolver with the SIF file as an argument (this also works in parallel), in which case ELMERSOLVER_STARTINFO does not need to be filled in:
Code Block:
$ ElmerSolver LID3D.sif
4. First, try running ElmerSolver serially (no MPI) and look for SOLVER TOTAL TIME(CPU,REAL); in this case it was ~370 s using one CPU core.
Code Block:
$ ElmerSolver
...
VtuOutputSolver: All done for now
ResultOutputSolver: -------------------------------------
Loading user function library: [ResultOutputSolve]...[ResultOutputSolver_post]
ReloadInputFile: Realoading input file
LoadInputFile: Loading input file:
ElmerSolver: *** Elmer Solver: ALL DONE ***
ElmerSolver: The end
SOLVER TOTAL TIME(CPU,REAL): 369.29 370.85
ELMER SOLVER FINISHED AT: 2019/08/08 11:56:35
Parallel Testing of Elmer (MPI)
To test Elmer with MPI, you first need to partition the problem according to the number of MPI ranks you will use. To do that, use ElmerGrid (compiled along with ElmerSolver).
1. Assume you wish to start by running the problem on 4 ranks. Run ElmerGrid with the following parameters:
Code Block:
$ ls
ELMERSOLVER_STARTINFO LID3D.sif LID3D_extrude_100k LID3D_extrude_20k LID3D_extrude_50k README runelmer.sh
$ ls LID3D_extrude_20k/
mesh.boundary mesh.elements mesh.header mesh.nodes
# The first two parameters should be 2 2 (Elmer mesh database in and out);
# after the -metiskway flag, give the number of partitions. E.g.
$ ElmerGrid 2 2 LID3D_extrude_20k/ -metiskway 4
2. Verify that you get the partition folder partitioning.4
Code Block:
$ ls LID3D_extrude_20k/
mesh.boundary mesh.elements mesh.header mesh.nodes partitioning.4
3. Run the MPI job, using -np 4 for our example. Look for SOLVER TOTAL TIME(CPU,REAL); in this case it was ~112 s, a ~3.3x speedup over the serial run (370.85 s / 112.70 s ≈ 3.3), reasonably close to the ideal 4x.
Code Block:
$ mpirun -np 4 ElmerSolver
...
VtuOutputSolver: Saving variable: pressure
VtuOutputSolver: Saving variable: velocity
VtuOutputSolver: Writing elemental fields
AscBinWriteFree: Terminating buffered ascii/binary writing
VtuOutputSolver: All done for now
ResultOutputSolver: -------------------------------------
Loading user function library: [ResultOutputSolve]...[ResultOutputSolver_post]
ReloadInputFile: Realoading input file
LoadInputFile: Loading input file:
ElmerSolver: *** Elmer Solver: ALL DONE ***
ElmerSolver: The end
SOLVER TOTAL TIME(CPU,REAL): 111.82 112.70
ELMER SOLVER FINISHED AT: 2019/08/08 15:54:39
4. Here is another example, using the 100k problem on 4 Thor nodes (128 cores in total).
Make sure to edit the sif file to point to the 100k database.
Code Block:
$ grep "Mesh DB" LID3D.sif
Mesh DB "." "LID3D_extrude_100k"
Run ElmerGrid to create the 128-partition mesh.
Code Block:
$ ElmerGrid 2 2 LID3D_extrude_100k/ -metiskway 128
...
Allocate 4 nodes and run the MPI job.
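If the cluster uses a Slurm-style scheduler, the allocation might look like this sketch (the scheduler and its flags are assumptions, not from the original; 4 nodes x 32 cores gives the 128 ranks used below):
Code Block:
$ salloc -N 4 --ntasks-per-node=32
Then launch the job: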
Code Block:
$ mpirun -np 128 ElmerSolver
...
VtuOutputSolver: Writing elemental fields
AscBinWriteFree: Terminating buffered ascii/binary writing
VtuOutputSolver: All done for now
ResultOutputSolver: -------------------------------------
Loading user function library: [ResultOutputSolve]...[ResultOutputSolver_post]
ReloadInputFile: Realoading input file
LoadInputFile: Loading input file:
ElmerSolver: *** Elmer Solver: ALL DONE ***
ElmerSolver: The end
SOLVER TOTAL TIME(CPU,REAL): 34.49 45.64
ELMER SOLVER FINISHED AT: 2019/08/08 16:01:36