DRBD for bandwidth-intensive replication

Hello

There are two modern servers connected by a single ConnectX-7 200 Gb/s link used for DRBD replication.
On both servers the DRBD disks are backed by 8 x PCIe 5.0 CM7-V NVMe SSDs in an md raid0, so the storage is definitely not the bottleneck.
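As a quick sanity check on the ceilings involved (the link speed is from the setup above; no per-drive throughput figure is assumed here), a 200 Gb/s link can move at most 25 GB/s on the wire, which is well below what an 8-drive PCIe 5.0 raid0 can stream, so the NIC is the hard upper limit:

```shell
# Rough ceiling arithmetic for the setup described above.
link_gbps=200                      # ConnectX-7 link speed in Gb/s
link_gbytes=$((link_gbps / 8))     # at most 25 GB/s of payload on the wire
echo "link ceiling: ${link_gbytes} GB/s"
```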

The highest replication bandwidth I have achieved is ~35 Gb/s, with the following resource config:

[root@memverge4 drbd.d]# cat test.res
resource test {

disk {
        c-plan-ahead 0;
        resync-rate 4G;
        c-max-rate 4G;
        c-min-rate 2G;
        al-extents 65536;
#        c-fill-target 512M;
     }

  volume 1 {
    device      /dev/drbd1;
    disk        /dev/vg_r0/lvol0;
    meta-disk   internal;
  }

  on memverge3 {
    node-id   27;
  }
  on memverge4 {
    node-id   28;
  }

net
    {
        transport tcp;
        protocol  C;
        sndbuf-size 64M;
        rcvbuf-size 64M;
        max-buffers 128K;
        max-epoch-size 16K;
        timeout 90;
        ping-timeout 10;
        ping-int 15;
        connect-int 15;
#       verify-alg crc32c;
    }
connection
    {
        path
        {
            host memverge3 address 1.1.1.3:7900;
            host memverge4 address 1.1.1.4:7900;
        }
    }

}
[root@memverge4 drbd.d]#

With “transport rdma” the result is even slightly worse, ~30 Gb/s.

When I tried to set c-max-rate = resync-rate = 5G, I got the following error:

[root@memverge4 drbd.d]# drbdadm adjust all
drbd.d/test.res:6: Parse error: while parsing value ('5G')
for c-max-rate. Value is too big.
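A hedged sketch of why 5G is rejected, assuming the documented range for c-max-rate in the drbd.conf(5) man page (250..4194304, in KiB/s): "4G" is already the hard upper bound, so "5G" overflows the allowed range. Note also that 4 GiB/s is only about 34 Gb/s on the wire, which is suspiciously close to the ~35 Gb/s plateau observed above:

```shell
# c-max-rate is given in KiB/s; 4194304 KiB/s ("4G") is assumed to be
# the documented maximum, per the drbd.conf(5) man page for DRBD 9.
max_kib=$((4 * 1024 * 1024))                  # 4194304 KiB/s = 4 GiB/s
requested_kib=$((5 * 1024 * 1024))            # "5G" -> 5242880 KiB/s, out of range
gbits=$((max_kib * 1024 * 8 / 1000000000))    # 4 GiB/s in Gb/s (integer)
echo "max=${max_kib} KiB/s requested=${requested_kib} KiB/s cap=~${gbits} Gb/s"
```

If that reading is right, the configured 4G cap itself may be what is limiting the resync rate, independent of any network tuning.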

Anton