ISC23 SCC Getting Started with FAU Clusters

 

Follow these procedures to set up an account for the FAU clusters.

 

1. Try to log in to the FAU HPC portal with your university credentials (use the “Another institution” button).

If attributes are missing, you will need to contact your local IT support responsible for eduGAIN (see eduGAIN organizations).

If you cannot find your university, please let us know as soon as possible so we can handle it.

We will collect one email address per team to create the account, and the invitation will be sent to that address. The address must be a university email that can log in via eduGAIN.

 

2. After you have been assigned to the ISC SCC project, you need to add an SSH key to be able to access the clusters. To do so, log in to the portal and click the “User” tab in the top area.

Inside the “User” tab, you can manage project invitations, view your account details, and upload SSH keys.

You will need to upload your 4096-bit RSA public SSH key to the portal. You can upload multiple keys per team/user.

$ ssh-keygen -t rsa -b 4096 ...
$ cat .ssh/id_rsa_4k.pub

 

3. Once we have granted access, you will need to log in to the hpc-mover.rrze.uni-erlangen.de server on port 22322 with your assigned username (found in the HPC portal).

Set up your SSH config file to make this easier:

$ cat .ssh/config
Host fau
    HostName hpc-mover.rrze.uni-erlangen.de
    Port 22322
    User <user id (b154dc##)>
    IdentityFile ~/.ssh/id_rsa_4k
    IdentitiesOnly yes
    PasswordAuthentication no
    PreferredAuthentications publickey

 

4. Log in to the FAU clusters. Make sure you are on a machine with an IP address you provided to us.

$ ssh fau
======================================================================
*** Welcome at NHR@FAU ***

You are on a dedicated login node for ISC2023 - SCC.

The hardware is identical to Fritz, however, the software installation
is different. Thus, much of the information from
https://hpc.fau.de/systems-services/documentation-instructions/clusters/fritz-cluster/
does not apply to ISC2023 - SCC !!!

See https://hpcadvisorycouncil.atlassian.net/wiki/spaces/HPCWORKS/pages/2977628161/ISC23+SCC+Getting+Started+with+FAU+Clusters
instead for software related topics.
======================================================================
Last login: <DATE> from <IP>
<userid>@f0101:~$

 

5. You can see available nodes using Slurm:
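For example, the standard Slurm commands sinfo and squeue can be used (a minimal sketch; the partition names and node counts they report are specific to this system):

$ sinfo            # list partitions and node states
$ squeue -u $USER  # list your own queued and running jobs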

The number of total nodes might change throughout the competition.
The time limit for jobs is 6 hours and the default allocation time (without the --time flag) is 1 hour.

 

6. On both the head node and the compute nodes, the Modules package is available. You can print available modules with module avail, load/unload modules with module load <pkg> and module unload <pkg>, respectively, and list loaded modules with module list:
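For example (<pkg> is a placeholder for an actual module name, as above):

$ module avail          # show all available modules
$ module load <pkg>     # load a module into the environment
$ module list           # show currently loaded modules
$ module unload <pkg>   # remove a module from the environment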

 

7. When submitting a job, you can specify the time with the --time option.
If you want to use performance counters (e.g., using PAPI or LIKWID), it is necessary to specify the hwperf constraint (-C hwperf).
For a fixed clock frequency, you may use the option
--cpu-freq=<MIN_CLOCK_kHZ>-<MAX_CLOCK_kHZ>[:<GOVERNOR>]. Note that this frequency (range) is only applied if you run your binary within the job using the srun command as a wrapper.
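For example, the following sketch pins the cores to 2.0 GHz (the frequency value and binary name are placeholders; a governor can be appended as shown in the syntax above):

$ srun --cpu-freq=2000000-2000000 ./my_application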

For example, for a two-hour, 4-node interactive job, you can use this command:
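A possible sketch using salloc, the standard Slurm command for interactive allocations (any partition or account options that may be required on this system are omitted):

$ salloc --nodes=4 --time=02:00:00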

For more information about batch processing, see https://hpc.fau.de/systems-services/documentation-instructions/batch-processing/.
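For reference, a minimal batch script sketch (node count, time, module name, and binary are placeholders; any partition or account options that may be required are omitted):

$ cat job.sh
#!/bin/bash
# Request 4 nodes for 2 hours; hwperf is only needed for performance counters.
#SBATCH --nodes=4
#SBATCH --time=02:00:00
#SBATCH --constraint=hwperf

module load <pkg>          # load the required modules
srun ./my_application      # run the binary through srun (applies --cpu-freq, if set)

$ sbatch job.sh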

More info: