Elmer is open-source finite element software for multiphysical problems. It is developed mainly, but not exclusively, by CSC – IT Center for Science.
Elmer/Ice is an open-source finite element add-on to Elmer for modelling ice sheets, glaciers and ice flow.
References
To compile Elmer, follow the ElmerCSC wiki.
Build Procedure
Important: the ElmerCSC wiki page has the latest compilation options and procedures:
https://github.com/ElmerCSC/elmerfem/wiki/Compilation
1. Retrieve the source. Here we clone the whole repository:
$ git clone https://www.github.com/ElmerCSC/elmerfem
2. Create a build directory:
$ mkdir elmer-build
3. Elmer's build system is based on CMake, which strongly encourages out-of-source builds.
Create a pre-cache file for CMake, saved as e.g. elmer-opts.cmake, containing something like this:
SET(CMAKE_INSTALL_PREFIX "$ENV{PWD}/../elmer-install" CACHE PATH "")
SET(CMAKE_C_COMPILER "gcc" CACHE STRING "")
SET(CMAKE_CXX_COMPILER "g++" CACHE STRING "")
SET(CMAKE_Fortran_COMPILER "gfortran" CACHE STRING "")
SET(WITH_ElmerIce TRUE CACHE BOOL "")
SET(WITH_MPI TRUE CACHE BOOL "")
SET(WITH_OpenMP TRUE CACHE BOOL "")
SET(WITH_LUA TRUE CACHE BOOL "")
Here you can also choose the installation directory for Elmer by modifying the CMAKE_INSTALL_PREFIX variable.
If you are using the Intel compilers, set elmer-opts.cmake as follows:
SET(CMAKE_INSTALL_PREFIX "../elmer-install" CACHE PATH "")
SET(CMAKE_C_COMPILER "icc" CACHE STRING "")
SET(CMAKE_CXX_COMPILER "icpc" CACHE STRING "")
SET(CMAKE_Fortran_COMPILER "ifort" CACHE STRING "")
SET(WITH_ElmerIce TRUE CACHE BOOL "")
SET(WITH_MPI TRUE CACHE BOOL "")
SET(WITH_OpenMP TRUE CACHE BOOL "")
SET(WITH_LUA TRUE CACHE BOOL "")
4. Load the relevant modules, e.g.:
$ module list
Currently Loaded Modulefiles:
  1) cmake/3.13.4   2) intel/2018.5.274   3) hpcx/2.4.0   4) mkl/2018.5.274
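On this system, the modules listed above can be loaded with a command along these lines (module names and versions are site-specific; adjust to your cluster):
$ module load cmake/3.13.4 intel/2018.5.274 hpcx/2.4.0 mkl/2018.5.274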
5. Generate build scripts
$ cd elmer-build
For Elmer use:
$ cmake -C ../elmer-opts.cmake ../elmerfem
cmake will now report whether any required libraries are missing. The build configuration can be edited further with e.g. the ccmake tool or the cmake-gui application.
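For example, to inspect or tweak the configuration interactively from the build directory (assuming these standard CMake tools are installed on your system):
$ ccmake .        # curses-based interface
$ cmake-gui .     # graphical interface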
6. Build and install
$ make -j8 install
7. Add the Elmer bin directory to your environment
$ export PATH=$PATH:<elmer-install>/bin
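To make this persistent across sessions, you can append it to your shell profile; a minimal sketch assuming Elmer was installed to $HOME/elmer-install (adjust to your actual CMAKE_INSTALL_PREFIX):
$ echo 'export PATH=$PATH:$HOME/elmer-install/bin' >> ~/.bashrc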
Testing Elmer
Basic/Quick tests
Go to the build directory and run ctest -L quick or ctest -L fast for some basic tests.
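For example, from the build directory (ctest's -j flag runs the tests in parallel):
$ cd elmer-build
$ ctest -L quick -j 8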
Serial Testing Elmer (Basic)
1. Download the LID3D.tgz file and extract it. You should see three mesh databases (LID3D_extrude_20k, _50k and _100k) as well as the sif file.
$ ls
ELMERSOLVER_STARTINFO  LID3D.sif  LID3D_extrude_100k  LID3D_extrude_20k  LID3D_extrude_50k  README  runelmer.sh
2. Check which mesh database the sif file points to (you can change it; a one-liner for this is shown below):
$ grep "Mesh DB" LID3D.sif Mesh DB "." "LID3D_extrude_20k"
3. Verify that ELMERSOLVER_STARTINFO points to the LID3D.sif file:
$ cat ELMERSOLVER_STARTINFO
LID3D.sif
4. First, run ElmerSolver serially (no MPI). Look for SOLVER TOTAL TIME(CPU,REAL); in this case it was ~370 s using one CPU core.
$ ElmerSolver
...
VtuOutputSolver: All done for now
ResultOutputSolver: -------------------------------------
Loading user function library: [ResultOutputSolve]...[ResultOutputSolver_post]
ReloadInputFile: Realoading input file
LoadInputFile: Loading input file:
ElmerSolver: *** Elmer Solver: ALL DONE ***
ElmerSolver: The end
SOLVER TOTAL TIME(CPU,REAL):       369.29      370.85
ELMER SOLVER FINISHED AT: 2019/08/08 11:56:35
Parallel Testing Elmer (MPI)
To test Elmer with MPI, you first need to partition the mesh into as many pieces as you have MPI ranks. This is done with ElmerGrid (built together with your ElmerSolver).
1. Assume you want to start by running the problem on 4 ranks. Run ElmerGrid with the following parameters:
$ ls
ELMERSOLVER_STARTINFO  LID3D.sif  LID3D_extrude_100k  LID3D_extrude_20k  LID3D_extrude_50k  README  runelmer.sh
$ ls LID3D_extrude_20k/
mesh.boundary  mesh.elements  mesh.header  mesh.nodes
The first two parameters should be 2 2 (read and write the Elmer mesh-database format); after the -metiskway flag, state the number of ranks. E.g.
$ ElmerGrid 2 2 LID3D_extrude_20k/ -metiskway 4
2. Verify that you now have the partition directory partitioning.4:
$ ls LID3D_extrude_20k/
mesh.boundary  mesh.elements  mesh.header  mesh.nodes  partitioning.4
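If you plan to test several rank counts, you can pre-generate all the partitionings in one go; a small sketch (the rank counts here are chosen for illustration):
$ for np in 2 4 8 16; do ElmerGrid 2 2 LID3D_extrude_20k/ -metiskway $np; done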
3. Run the MPI job, using -np 4 for our example. Look for SOLVER TOTAL TIME(CPU,REAL); in this case it was ~112 s, about 3.3x faster than the serial run and close to the ideal 4x speedup we expect.
$ mpirun -np 4 ElmerSolver
...
VtuOutputSolver: Saving variable: pressure
VtuOutputSolver: Saving variable: velocity
VtuOutputSolver: Writing elemental fields
AscBinWriteFree: Terminating buffered ascii/binary writing
VtuOutputSolver: All done for now
ResultOutputSolver: -------------------------------------
Loading user function library: [ResultOutputSolve]...[ResultOutputSolver_post]
ReloadInputFile: Realoading input file
LoadInputFile: Loading input file:
ElmerSolver: *** Elmer Solver: ALL DONE ***
ElmerSolver: The end
SOLVER TOTAL TIME(CPU,REAL):       111.82      112.70
ELMER SOLVER FINISHED AT: 2019/08/08 15:54:39
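As a quick sanity check, the speedup can be computed from the REAL times reported above:
$ echo "scale=2; 370.85/112.70" | bc
3.29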
4. Here is another example, running the 100k problem on 128 cores across 4 Thor nodes.
Make sure to edit the sif file to point to the 100k database:
$ grep "Mesh DB" LID3D.sif
  Mesh DB "." "LID3D_extrude_100k"
Run ElmerGrid to create the 128-way partitioning:
$ ElmerGrid 2 2 LID3D_extrude_100k/ -metiskway 128
...
Allocate 4 nodes and run the MPI job:
$ mpirun -np 128 ElmerSolver
...
VtuOutputSolver: Writing elemental fields
AscBinWriteFree: Terminating buffered ascii/binary writing
VtuOutputSolver: All done for now
ResultOutputSolver: -------------------------------------
Loading user function library: [ResultOutputSolve]...[ResultOutputSolver_post]
ReloadInputFile: Realoading input file
LoadInputFile: Loading input file:
ElmerSolver: *** Elmer Solver: ALL DONE ***
ElmerSolver: The end
SOLVER TOTAL TIME(CPU,REAL):        34.49       45.64
ELMER SOLVER FINISHED AT: 2019/08/08 16:01:36
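A 4-node job like this would normally be submitted through the batch scheduler. Below is a minimal sketch of a job script, assuming Slurm and 32 cores per Thor node; the module versions, install path, and script name are illustrative and should match your own setup:
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks=128
# Load the same toolchain used to build Elmer (site-specific versions)
module load cmake/3.13.4 intel/2018.5.274 hpcx/2.4.0 mkl/2018.5.274
# Make ElmerSolver visible (adjust to your CMAKE_INSTALL_PREFIX)
export PATH=$PATH:$HOME/elmer-install/bin
mpirun -np 128 ElmerSolver
Submit it from the directory containing ELMERSOLVER_STARTINFO, e.g. $ sbatch run_elmer.sbatch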