PCI Switch, CPU and GPU Direct server topology
Understanding server topology is essential for performance benchmarking.
The nvidia-smi topo command shows how the GPUs, adapters, and CPUs in a server are connected.
We will discuss several common layouts here and give examples for each.
The GPU and HCA are connected on the same PCI switch
In the following example, GPU0 is connected to the mlx5_2 device via a single PCI switch, marked PIX.
Traffic from the GPU does not reach the CPU; it passes through the PCI switch directly to the adapter. This is the GPUDirect path.
$ nvidia-smi topo -m
        GPU0    GPU1    GPU2    GPU3    mlx5_0  mlx5_1  mlx5_2  mlx5_3  CPU Affinity
GPU0    X       PIX     PIX     PIX     NODE    NODE    PIX     PIX     0-19
GPU1    PIX     X       PIX     PIX     NODE    NODE    PIX     PIX     0-19
GPU2    PIX     PIX     X       PIX     NODE    NODE    PIX     PIX     0-19
GPU3    PIX     PIX     PIX     X       NODE    NODE    PIX     PIX     0-19
mlx5_0  NODE    NODE    NODE    NODE    X       PIX     NODE    NODE
mlx5_1  NODE    NODE    NODE    NODE    PIX     X       NODE    NODE
mlx5_2  PIX     PIX     PIX     PIX     NODE    NODE    X       PIX
mlx5_3  PIX     PIX     PIX     PIX     NODE    NODE    PIX     X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
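Picking the right HCA for a given GPU can also be scripted. The sketch below hardcodes GPU0's row from the matrix above (so it runs anywhere) and prints the mlx5 devices that are PIX to it:

```shell
# Find the HCAs that share a PCI switch (PIX) with GPU0, parsing the
# GPU0 row of the nvidia-smi topo -m matrix shown above.
gpu0_row='GPU0 X PIX PIX PIX NODE NODE PIX PIX'
cols='GPU0 GPU1 GPU2 GPU3 mlx5_0 mlx5_1 mlx5_2 mlx5_3'

echo "$gpu0_row" | awk -v cols="$cols" '{
    split(cols, c, " ")
    for (i = 2; i <= NF; i++)
        if ($i == "PIX" && c[i-1] ~ /^mlx5/)
            print c[i-1]      # HCA behind the same PCI switch as GPU0
}'
```

For the matrix above this prints mlx5_2 and mlx5_3, the two adapters suitable for GPUDirect from GPU0.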
In this case the adapter is a ConnectX-6 HDR100. When running a basic RDMA Read or Write device-to-device test, you should expect line rate.
To run this test:
- Compile perftest with CUDA support.
- Make sure the nv_peer_mem module is loaded.
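A typical preparation sequence might look like the following (the repository URL and CUDA header path are assumptions for this sketch; adjust them to your environment):

```shell
# Build perftest with CUDA support (paths are assumptions; adjust as needed)
git clone https://github.com/linux-rdma/perftest.git
cd perftest
./autogen.sh
./configure CUDA_H_PATH=/usr/local/cuda/include/cuda.h
make

# Load the GPUDirect RDMA kernel module and confirm it is present
sudo modprobe nv_peer_mem
lsmod | grep nv_peer_mem
```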
$ ./ib_write_bw -a -d mlx5_2 -i 1 --report_gbits -F -n 10000 tessa001 --use_cuda
initializing CUDA
There are 4 devices supporting CUDA, picking first...
[pid = 169102, dev = 0] device name = [Tesla T4]
creating CUDA Ctx
making it the current CUDA Ctx
cuMemAlloc() of a 16777216 bytes GPU buffer
allocated GPU buffer address at 00007f4207000000 pointer=0x7f4207000000
---------------------------------------------------------------------------------------
RDMA_Write BW Test
Dual-port : OFF Device : mlx5_2
Number of qps : 1 Transport type : IB
Connection type : RC Using SRQ : OFF
TX depth : 128
CQ Moderation : 100
Mtu : 4096[B]
Link type : IB
Max inline data : 0[B]
rdma_cm QPs : OFF
Data ex. method : Ethernet
---------------------------------------------------------------------------------------
local address: LID 0xbd QPN 0x008e PSN 0x93ed50 RKey 0x00895c VAddr 0x007f4207800000
remote address: LID 0xa8 QPN 0x008e PSN 0x3dd06e RKey 0x00781b VAddr 0x007f45bf800000
---------------------------------------------------------------------------------------
#bytes #iterations BW peak[Gb/sec] BW average[Gb/sec] MsgRate[Mpps]
2 10000 0.11 0.11 6.659070
4 10000 0.22 0.21 6.705317
8 10000 0.43 0.42 6.633025
16 10000 0.82 0.79 6.188507
...
1048576 10000 98.95 96.75 0.011534
2097152 10000 98.26 96.77 0.005768
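At the large message sizes this run is effectively at line rate; a quick back-of-the-envelope check against the 100 Gb/s HDR100 link speed:

```shell
# Compare the measured average BW (2 MiB messages, from the output above)
# against the HDR100 line rate of 100 Gb/s.
measured=96.77
line_rate=100
awk -v m="$measured" -v l="$line_rate" \
    'BEGIN { printf "efficiency: %.1f%% of line rate\n", 100 * m / l }'
```

This prints an efficiency of 96.8%, i.e. the link is saturated.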
The GPU and HCA are connected on the NUMA root complex
This is another server, from the OPS cluster, where the GPU is connected to the adapter via the host bridge root complex (PHB). In this example, traffic flows from GPU memory through the CPU host bridge and on to the adapter.
$ nvidia-smi topo -m
        GPU0    mlx5_0  mlx5_1  CPU Affinity
GPU0    X       PHB     PHB     0-9
mlx5_0  PHB     X       PIX
mlx5_1  PHB     PIX     X
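To confirm that the GPU and the HCA hang off the same host bridge, you can inspect the PCIe tree and the sysfs NUMA attributes (the vendor grep patterns and device name are examples; output depends on the machine):

```shell
# Show the PCIe device tree; in the PHB case the GPU and the HCA appear
# under the same root complex but behind different bridges
lspci -tv | grep -Ei 'nvidia|mellanox'

# Each PCI device also reports which NUMA node it belongs to via sysfs
cat /sys/class/infiniband/mlx5_0/device/numa_node
```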
The GPU and HCA are connected on two different NUMA nodes
In this example, traffic from GPU memory crosses between two NUMA nodes to reach the adapter (SYS).
This is the least desirable server architecture when GPUs and performance are required.
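If you are stuck with this layout, pinning the benchmark to the NUMA node that hosts the HCA at least avoids extra cross-socket hops for the CPU side of the test (the node number and device name below are assumptions; check them with the sysfs query shown earlier):

```shell
# Run the bandwidth test bound to NUMA node 0, assuming the HCA lives there
numactl --cpunodebind=0 --membind=0 \
    ./ib_write_bw -a -d mlx5_0 -i 1 --report_gbits -F -n 10000 tessa001 --use_cuda
```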
Useful commands
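The commands used throughout this post, collected in one place (device names are examples; substitute your own):

```shell
nvidia-smi topo -m                                  # GPU/HCA topology matrix
lspci -tv                                           # PCIe device tree
lsmod | grep nv_peer_mem                            # GPUDirect RDMA module loaded?
cat /sys/class/infiniband/mlx5_0/device/numa_node   # NUMA node of an HCA
ibstat mlx5_0                                       # adapter state and link rate
```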