Getting Started with ROMEO Supercomputer - ISC26 SCC

This year, teams will have access to the ROMEO supercomputing center at the University of Reims Champagne-Ardenne (URCA).

ROMEO is equipped with 232 NVIDIA GH200 Grace Hopper Superchips, 25,000 CPU cores, NVIDIA Quantum-2 InfiniBand networking, and GPFS storage.

 

Getting Access

Each team captain will receive a username and password before the competition starts, which can be used to connect to the login nodes.

There are 4 login nodes:

  • romeo1.univ-reims.fr

  • romeo2.univ-reims.fr

  • romeo3.univ-reims.fr

  • romeo4.univ-reims.fr

 

For example:

$ ssh r250119-u1@romeo1.univ-reims.fr
Warning: Permanently added 'romeo1.univ-reims.fr' (ECDSA) to the list of known hosts.
r250119-u1@romeo1.univ-reims.fr's password:
[r250119-u1@romeo1 ~]$

 

More info can be found here: https://romeo.univ-reims.fr/documentation/ressources/romeo_2025/se_connecter

 

Use Slurm to check node availability, for example:

[r250119-u1@romeo1 ~]$ sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
long         up 30-00:00:0      3 drain* romeo-a[004,020,029]
long         up 30-00:00:0     44    mix romeo-a[001-003,005-019,021-028,030-040],romeo-c[002-008]
long         up 30-00:00:0      3  alloc romeo-c[001,009-010]
long         up 30-00:00:0     14   idle romeo-c[011-020,101-104]
short        up 1-00:00:00      4 drain* romeo-a[004,020,029,056]
short        up 1-00:00:00     63    mix romeo-a[001-003,005-019,021-028,030-055],romeo-c[002-008,024-027]
short        up 1-00:00:00      4  alloc romeo-c[001,009-010,023]
short        up 1-00:00:00     26   idle romeo-c[011-020,028-039,101-104]
short        up 1-00:00:00      2   down romeo-c[021-022]
instant*     up    1:00:00      4 drain* romeo-a[004,020,029,056]
instant*     up    1:00:00     65    mix romeo-a[001-003,005-019,021-028,030-055,057-058],romeo-c[002-008,024-027]
instant*     up    1:00:00      4  alloc romeo-c[001,009-010,023]
instant*     up    1:00:00     27   idle romeo-c[011-020,028-040,101-104]
instant*     up    1:00:00      2   down romeo-c[021-022]

 

Note: Use the short (24h) or instant (1h) partitions only.
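
To list just the partitions you are allowed to use, sinfo output can be narrowed down; a minimal sketch (the column selection here is only an example, not part of the official instructions):

```shell
# Show only the short and instant partitions.
# -p limits output to the listed partitions; -o selects columns:
#   %P partition, %a availability, %l time limit, %D node count, %t state
sinfo -p short,instant -o "%P %a %l %D %t"
```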

Allocating Nodes

To allocate nodes, use the following flags, for example:

 

  • -p short (partition)

  • -N 1 (number of nodes)

  • --account=r250119 (this is the project used for ISC26 SCC)

  • --time (the time needed)

  • --constraint=armgpu (for the GH200 GPU cluster)

  • --gpus-per-node=1

 

Once granted, you can ssh to the node(s).

 

For example:

 

[r250119-u1@romeo1 ~]$ salloc -p short -N 1 --account=r250119 --time=1:00:00 --mem=1G --constraint=armgpu --gpus-per-node=1
salloc: Granted job allocation 128708
salloc: Waiting for resource configuration
salloc: Nodes romeo-a005 are ready for job
[r250119-u1@romeo1 ~]$ ssh romeo-a005
The authenticity of host 'romeo-a005 (172.1.1.95)' can't be established.
ED25519 key fingerprint is SHA256:onoPA9lqIw3LByOyMoH7UKiGGnil7KjZu7VpsbRrcTg.
This host key is known by the following other names/addresses:
    ~/.ssh/known_hosts:1: romeo-a032
    ~/.ssh/known_hosts:4: romeo-a001
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'romeo-a005' (ED25519) to the list of known hosts.
[r250119-u1@romeo-a005 ~]$
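
The same flags also work as directives in a batch script submitted with sbatch; a minimal sketch (the nvidia-smi payload is just a placeholder workload, not part of the official instructions):

```shell
#!/bin/bash
#SBATCH -p short                 # partition (short = 24h limit)
#SBATCH -N 1                     # number of nodes
#SBATCH --account=r250119        # ISC26 SCC project account
#SBATCH --time=1:00:00           # wall-clock limit
#SBATCH --constraint=armgpu      # GH200 GPU cluster
#SBATCH --gpus-per-node=1        # one GPU per node

# Placeholder workload: report the allocated GPU
nvidia-smi
```

Submit the script with `sbatch job.sh` and follow your queued jobs with `squeue --me`.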

 

Check the available GPU:

[r250119-u1@romeo-a005 ~]$ nvidia-smi
Tue Jul 22 20:30:53 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.03              Driver Version: 560.35.03      CUDA Version: 12.6     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GH200 120GB             On  |   00000019:01:00.0 Off |                    0 |
| N/A   46C    P0             92W /  900W |       5MiB /  97871MiB |      0%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
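
For scripted checks, nvidia-smi also has a machine-readable query mode; for example:

```shell
# CSV query of the GPU name, total memory, and driver version,
# convenient for logging or sanity checks in job scripts
nvidia-smi --query-gpu=name,memory.total,driver_version --format=csv
```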

 

Check the network status:

[r250119-u1@romeo-a005 ~]$ ibstat
CA 'mlx5_0'
        CA type: MT4129
        Number of ports: 1
        Firmware version: 28.36.2020
        Hardware version: 0
        Node GUID: 0xc470bd03000be07c
        System image GUID: 0xc470bd03000be07c
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 200
                Base lid: 55
                LMC: 0
                SM lid: 18
                Capability mask: 0xa651e848
                Port GUID: 0xc470bd03000be07c
                Link layer: InfiniBand
CA 'mlx5_1'
        CA type: MT4129
        Number of ports: 1
        Firmware version: 28.36.2020
        Hardware version: 0
        Node GUID: 0xc470bd03000be078
        System image GUID: 0xc470bd03000be078
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 200
                Base lid: 109
                LMC: 0
                SM lid: 18
                Capability mask: 0xa651e848
                Port GUID: 0xc470bd03000be078
                Link layer: InfiniBand
CA 'mlx5_2'
        CA type: MT4129
        Number of ports: 1
        Firmware version: 28.36.2020
        Hardware version: 0
        Node GUID: 0xc470bd03000be070
        System image GUID: 0xc470bd03000be070
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 200
                Base lid: 108
                LMC: 0
                SM lid: 18
                Capability mask: 0xa651e848
                Port GUID: 0xc470bd03000be070
                Link layer: InfiniBand
CA 'mlx5_3'
        CA type: MT4129
        Number of ports: 1
        Firmware version: 28.36.2020
        Hardware version: 0
        Node GUID: 0xc470bd03000be074
        System image GUID: 0xc470bd03000be074
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 200
                Base lid: 116
                LMC: 0
                SM lid: 18
                Capability mask: 0xa651e848
                Port GUID: 0xc470bd03000be074
                Link layer: InfiniBand
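
To actually exercise the fabric between two allocated nodes, the standard perftest tools can be used if they are installed on the system (the node name and the choice of mlx5_0 below are examples, not part of the official instructions):

```shell
# On the first node (server side), listen on HCA mlx5_0:
ib_write_bw -d mlx5_0

# On the second node (client side), connect to the server node
# and run an RDMA write bandwidth test over the same HCA:
ib_write_bw -d mlx5_0 romeo-a005
```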

 

Check the CPU:

[r250119-u1@romeo-a005 ~]$ lscpu
Architecture:           aarch64
  CPU op-mode(s):       64-bit
  Byte Order:           Little Endian
CPU(s):                 288
  On-line CPU(s) list:  0-287
Vendor ID:              ARM
  Model name:           Neoverse-V2
    Model:              0
    Thread(s) per core: 1
    Core(s) per socket: 72
    Socket(s):          4
    Stepping:           r0p0
    Frequency boost:    disabled
    CPU(s) scaling MHz: 93%
    CPU max MHz:        3465.0000
    CPU min MHz:        81.0000
    BogoMIPS:           2000.00
    Flags:              fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs sb dcpodp sve2 sveaes svepmull svebitperm svesha3 svesm4 flagm2 frint svei8mm svebf16 i8mm bf16 dgh
Caches (sum of all):
  L1d:                  18 MiB (288 instances)
  L1i:                  18 MiB (288 instances)
  L2:                   288 MiB (288 instances)
  L3:                   456 MiB (4 instances)
NUMA:
  NUMA node(s):         36
  NUMA node0 CPU(s):    0-71
  NUMA node1 CPU(s):    72-143
  NUMA node2 CPU(s):    144-215
  NUMA node3 CPU(s):    216-287
  NUMA node4 CPU(s):
  NUMA node5 CPU(s):
  NUMA node6 CPU(s):
  NUMA node7 CPU(s):
  NUMA node8 CPU(s):
  NUMA node9 CPU(s):
  NUMA node10 CPU(s):
  NUMA node11 CPU(s):
  NUMA node12 CPU(s):
  NUMA node13 CPU(s):
  NUMA node14 CPU(s):
  NUMA node15 CPU(s):
  NUMA node16 CPU(s):
  NUMA node17 CPU(s):
  NUMA node18 CPU(s):
  NUMA node19 CPU(s):
  NUMA node20 CPU(s):
  NUMA node21 CPU(s):
  NUMA node22 CPU(s):
  NUMA node23 CPU(s):
  NUMA node24 CPU(s):
  NUMA node25 CPU(s):
  NUMA node26 CPU(s):
  NUMA node27 CPU(s):
  NUMA node28 CPU(s):
  NUMA node29 CPU(s):
  NUMA node30 CPU(s):
  NUMA node31 CPU(s):
  NUMA node32 CPU(s):
  NUMA node33 CPU(s):
  NUMA node34 CPU(s):
  NUMA node35 CPU(s):
Vulnerabilities:
  Gather data sampling:   Not affected
  Itlb multihit:          Not affected
  L1tf:                   Not affected
  Mds:                    Not affected
  Meltdown:               Not affected
  Mmio stale data:        Not affected
  Reg file data sampling: Not affected
  Retbleed:               Not affected
  Spec rstack overflow:   Not affected
  Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl
  Spectre v1:             Mitigation; __user pointer sanitization
  Spectre v2:             Not affected
  Srbds:                  Not affected
  Tsx async abort:        Not affected
[r250119-u1@romeo-a005 ~]$
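
Note that only NUMA nodes 0-3 carry CPU cores (72 per Grace socket); the CPU-less nodes likely correspond to GPU memory on the GH200 superchips. When a process should stay on one socket, it can be pinned with numactl; a minimal sketch (`./my_app` is a placeholder binary):

```shell
# Bind execution and memory allocation to NUMA node 0 (CPUs 0-71),
# keeping the process local to one Grace socket
numactl --cpunodebind=0 --membind=0 ./my_app
```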

Useful links: