...
```
mpirun -np 160 -x UCX_NET_DEVICES=mlx5_0:1 -x UCX_LOG_LEVEL=error pw.x -inp ausurf.in

     Program PWSCF v.7.1 starts on 12Sep2022 at 20:12:15

     This program is part of the open-source Quantum ESPRESSO suite
     for quantum simulation of materials; please cite
         "P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
         "P. Giannozzi et al., J. Phys.:Condens. Matter 29 465901 (2017);
         "P. Giannozzi et al., J. Chem. Phys. 152 154105 (2020);
          URL http://www.quantum-espresso.org",
     in publications or presentations arising from this work. More details at
     http://www.quantum-espresso.org/quote

     Parallel version (MPI), running on 160 processors

     MPI processes distributed on 4 nodes
     166131 MiB available memory on the printing compute node when the environment starts

     Reading input from ausurf.in
Warning: card &CELL ignored
Warning: card CELL_DYNAMICS = 'NONE', ignored
Warning: card / ignored

     Current dimensions of program PWSCF are:
     Max number of different atomic species (ntypx) = 10
     Max number of k-points (npk) = 40000
     Max angular momentum in pseudopotentials (lmaxx) = 4
     ...

     General routines
     calbec      :     21.09s CPU     22.57s WALL (     168 calls)
     fft         :      1.25s CPU      1.45s WALL (     296 calls)
     ffts        :      0.09s CPU      0.12s WALL (      44 calls)
     fftw        :     40.23s CPU     40.97s WALL (  100076 calls)
     interpolate :      0.11s CPU      0.15s WALL (      22 calls)

     Parallel routines

     PWSCF       :   3m27.78s CPU   3m58.71s WALL

   This run was terminated on:  20:16:14  12Sep2022

=------------------------------------------------------------------------------=
   JOB DONE.
=------------------------------------------------------------------------------=
```
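For reference, a run like the one above can be launched with a batch script along these lines. This is a minimal sketch: the module names, core count per node, and time limit are assumptions to adapt to your cluster.

```shell
#!/bin/bash
#SBATCH --job-name=qe-ausurf
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=40      # full PPN; adjust to the node's core count
#SBATCH --time=01:00:00

# Module names are assumptions -- use whatever your site provides.
module load quantum-espresso openmpi

# 4 nodes x 40 ranks = 160 MPI processes, matching the log above.
mpirun -np 160 \
       -x UCX_NET_DEVICES=mlx5_0:1 \
       -x UCX_LOG_LEVEL=error \
       pw.x -inp ausurf.in | tee ausurf.out
```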
Task and submission
...
Profile the given input
Use any of the remote clusters to run an MPI profile (for example with IPM, or any other profiler) over 4 nodes at full PPN for the given input. Submit the profile as a PDF to the team's folder.
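One way to collect the MPI profile is IPM's `LD_PRELOAD` interposition, which needs no rebuild of QE. The sketch below assumes an IPM module (or a locally built `libipm.so`) is available; the module name and library path are placeholders.

```shell
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=40      # full PPN; adjust to the node's core count

module load ipm                   # assumed module name

# Interpose IPM on every rank and request a full per-call report.
export IPM_REPORT=full
export IPM_LOG=full
mpirun -np 160 -x LD_PRELOAD=$IPM_ROOT/lib/libipm.so \
       pw.x -inp ausurf.in

# Post-process the XML log into an HTML report (ipm_parse ships with IPM);
# print the report to PDF for the team's folder.
ipm_parse -html *.ipm.xml
```

The HTML report's communication breakdown gives the per-call MPI times directly, which feeds the next task.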
In your presentation, include the three main MPI calls used and the total time spent in MPI.
Run QE with the given input on both the PSC Bridges-2 and FAU Fritz CPU clusters, and submit the results to the team's folder.
Visualize the input on any of the clusters by creating a figure or video from the input file, using any visualization method. If the team has a Twitter account, tag the figure/video with the team name or university name and publish it on Twitter with the following tags: #ISC23 #ISC23_SCC @QuantumESPRESSO
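One lightweight option for the visualization is to render the structure straight from the pw.x input with ASE. This is a sketch assuming Python 3 and the `ase` package are installed on the cluster; the rotation string and output name are free choices.

```shell
# Render the structure from the pw.x input to a PNG with ASE
# (assumes `pip install --user ase` has been run on the login node).
python3 - <<'EOF'
from ase.io import read, write

# pw.x inputs are read with ASE's espresso-in format
atoms = read('ausurf.in', format='espresso-in')
write('ausurf.png', atoms, rotation='10z,-80x')  # rotation is a free choice
EOF

# Interactive alternative: XCrySDen can open pw.x inputs directly:
#   xcrysden --pwi ausurf.in
```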
Bonus task - run QE on the Alex cluster using the A100 GPU partition. Use only 4 GPUs for the run. Submit the results to the team's folder.
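The GPU build of QE is typically driven with one MPI rank per GPU. A sketch for the bonus run follows; the partition name, GRES string, and module name are assumptions to check against Alex's documentation.

```shell
#!/bin/bash
#SBATCH --partition=a100           # assumed name of Alex's A100 partition
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4        # one MPI rank per GPU
#SBATCH --gres=gpu:a100:4          # the 4 GPUs allowed by the task
#SBATCH --time=01:00:00

module load quantum-espresso-gpu   # assumed module name

# 4 ranks, 4 GPUs; each rank binds to one device in the GPU build.
mpirun -np 4 pw.x -inp ausurf.in | tee ausurf_gpu.out
```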
Submission and Presentation:
- Submit all the build scripts and Slurm scripts used to the team's folder.
- Prepare slides for the team’s interview based on your work for this application.