DRBD for bandwidth-intensive replication

Hello

There are two modern servers with a single ConnectX-7 200 Gb/s link for DRBD replication.
On both servers the DRBD disks are backed by 8 x PCIe 5.0 CM7-V NVMe SSDs configured in md RAID 0, so they are definitely not a bottleneck.
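
For context, the backing device on each server is md RAID 0 across the eight NVMe drives with LVM on top, roughly along these lines (device names and chunk size here are placeholders, not the exact commands I used):

mdadm --create /dev/md0 --level=0 --raid-devices=8 --chunk=64K /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1
pvcreate /dev/md0
vgcreate vg_r0 /dev/md0
lvcreate -L 256G -n lvol0 vg_r0   # backing LV for the DRBD volume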

The highest replication bandwidth I achieved was ~35 Gb/s, with the following resource config:

[root@memverge4 drbd.d]# cat test.res
resource test {

disk {
        c-plan-ahead 0;
        resync-rate 4G;
        c-max-rate 4G;
        c-min-rate 2G;
        al-extents 65536;
#        c-fill-target 512M;
     }

  volume 1 {
    device      /dev/drbd1;
    disk        /dev/vg_r0/lvol0;
    meta-disk   internal;
  }

  on memverge3 {
    node-id   27;
  }
  on memverge4 {
    node-id   28;
  }

net
    {
        transport tcp;
        protocol  C;
        sndbuf-size 64M;
        rcvbuf-size 64M;
        max-buffers 128K;
        max-epoch-size 16K;
        timeout 90;
        ping-timeout 10;
        ping-int 15;
        connect-int 15;
#       verify-alg crc32c;
    }
connection
    {
        path
        {
            host memverge3 address 1.1.1.3:7900;
            host memverge4 address 1.1.1.4:7900;
        }
    }

}
[root@memverge4 drbd.d]#

With "transport rdma" it is even slightly worse, ~30 Gb/s.

When I tried to set c-max-rate=resync-rate=5G, I got the following error:

[root@memverge4 drbd.d]# drbdadm adjust all
drbd.d/test.res:6: Parse error: while parsing value ('5G')
for c-max-rate. Value is too big.

Anton

When I added a second volume (located on the same physical disks) to the test resource,

  volume 2 {
    device      /dev/drbd2;
    disk        /dev/vg_r0/lvol1;
    meta-disk   internal;

I can’t exceed 39 Gb/s for the test resource, no matter whether I use TCP or RDMA.

However, when I created two resources (test and test1) with only one volume in each, I got 69 Gb/s when syncing both resources simultaneously.
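
For reference, test1 was essentially a copy of test with its own device minor, backing LV, and TCP port; roughly like this (the LV name and port below are placeholders, not the exact ones I used):

resource test1 {
  volume 1 {
    device      /dev/drbd3;
    disk        /dev/vg_r0/lvol2;   # placeholder LV name
    meta-disk   internal;
  }
  on memverge3 { node-id 27; }
  on memverge4 { node-id 28; }
  net        { transport tcp; protocol C; }
  connection {
    path {
      host memverge3 address 1.1.1.3:7901;   # port must differ from the test resource
      host memverge4 address 1.1.1.4:7901;
    }
  }
}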

So how can I increase the initial replication bandwidth far beyond ~4 GB/s for a single resource, using TCP or RDMA?

What are the sizes of the volumes you used in each of these tests? With a large enough volume, writes will be limited by the activity log, which causes a hit to performance. This is also why you may have seen better performance syncing multiple volumes simultaneously.
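
The activity log size is governed by al-extents in the disk section; if it helps, you can check the effective values with something like this (assuming the resource is named test):

drbdsetup show test | grep al-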

Initially I tested with a 1 TB volume, but today I used two logical volumes of 256 GB each.

Something is limiting single-resource replication to ~4 GByte/s, even with modern hardware: PCIe 5.0 NVMe SSDs, 200/400 Gb/s per-port networking, etc. This is a huge limit, and it applies to both the TCP and RDMA transports. Splitting a single large volume into N smaller volumes and configuring N separate resource files, one per smaller volume, just to achieve 4 GByte/s x N bandwidth is not the best way…

How are you performing your testing? If you are using FIO, could you share the options you are using with it?

The local write speed is also good to know here. You can isolate the peer by running drbdadm disconnect on the Primary node and repeating the test (after clearing any buffers/caches as relevant to the tests being run). That way we can get a baseline of the write speed of the workload where DRBD is not replicating over the network, and compare that speed to when the peers are connected.

To confirm whether the activity log is involved in the performance you are seeing, you can turn it off in a test environment:

disk {
al-updates no;
}
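
After adding that, something along these lines should apply it to the running resource and let you confirm it took effect (a sketch, assuming the resource is named test):

drbdadm adjust test
drbdsetup show test | grep al-updates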

The c-max-rate option only affects the resynchronization rate, not regular write replication, so changing that value would not be applicable in this case.
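
For the resync side, the usual knobs are the dynamic resync controller options in the disk section; a sketch of what that could look like (the values are placeholders that need tuning for your hardware, not recommendations):

disk {
    c-plan-ahead  20;     # enable the dynamic resync controller (tenths of a second)
    c-fill-target 1M;     # amount of in-flight resync data to aim for
    c-max-rate    4096M;  # upper bound for the controller
}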

How are you performing your testing? If you are using FIO, could you share the options you are using with it?

On the primary (memverge3) I run “drbdadm invalidate-remote test” and then check the network traffic between the servers and/or “iostat” on the secondary (memverge4).
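
Concretely, something like this on the primary (the monitoring commands are just what I use; exact flags may differ between versions):

drbdadm invalidate-remote test
watch -n1 'drbdsetup status test --verbose --statistics'
# and on the secondary (memverge4):
iostat -xm 1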

Ok, I applied

disk {
al-updates no;
}

but I still can’t exceed 35 Gb/s using a single 256 GB volume, for both the TCP and RDMA transports.

The local write speed is also good to know here. You can isolate the peer by running drbdadm disconnect on the Primary node and repeating the test (after clearing any buffers/caches as relevant to the tests being run). That way we can get a baseline of the write speed of the workload where DRBD is not replicating over the network, and compare that speed to when the peers are connected.

[root@memverge3 drbd.d]# drbdadm status test
test role:Primary
  volume:1 disk:UpToDate open:no
  memverge4 role:Secondary
    volume:1 peer-disk:UpToDate

[root@memverge3 drbd.d]#
[root@memverge3 drbd.d]# drbdadm disconnect test
[root@memverge3 drbd.d]# drbdadm status test
test role:Primary
  volume:1 disk:UpToDate open:no
  memverge4 connection:StandAlone

[root@memverge3 drbd.d]#

Next, on the secondary (memverge4), I ran fio twice: first against the LV device, then against the DRBD device.

  volume 1 {
    device      /dev/drbd1;
    disk        /dev/vg_r0/lvol0;
    meta-disk   internal;

[root@memverge4 anton]# fio --name=test --rw=write --bs=128k --filename=/dev/vg_r0/lvol0 --direct=1 --numjobs=1 --iodepth=8 --exitall --group_reporting --ioengine=libaio --runtime=30 --time_based=1
test: (g=0): rw=write, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
fio-3.41-55-g3a4c1
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=25.5GiB/s][w=209k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=16617: Wed Dec 17 13:50:00 2025
  write: IOPS=208k, BW=25.4GiB/s (27.3GB/s)(762GiB/30001msec)
    slat (nsec): min=2383, max=482284, avg=3224.62, stdev=969.12
    clat (usec): min=10, max=1484, avg=35.08, stdev=27.98
     lat (usec): min=17, max=1487, avg=38.30, stdev=27.99



[root@memverge4 anton]# fio --name=test --rw=write --bs=128k --filename=/dev/drbd1 --direct=1 --numjobs=1 --iodepth=8 --exitall --group_reporting --ioengine=libaio --runtime=30 --time_based=1
test: (g=0): rw=write, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
fio-3.41-55-g3a4c1
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=19.5GiB/s][w=160k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=16678: Wed Dec 17 13:50:43 2025
  write: IOPS=163k, BW=19.9GiB/s (21.3GB/s)(596GiB/30001msec)
    slat (nsec): min=1513, max=15399k, avg=5698.29, stdev=7793.88
    clat (usec): min=10, max=15520, avg=43.33, stdev=27.67
     lat (usec): min=19, max=15535, avg=49.03, stdev=28.84

In both cases, far beyond 4 GB/s.
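
If it is useful, I can also scale the same test out across several jobs to rule out a single submitting thread being the limit; a sketch (job count and per-job offset chosen arbitrarily):

fio --name=test --rw=write --bs=128k --filename=/dev/drbd1 --direct=1 --numjobs=4 --offset_increment=64g --iodepth=8 --exitall --group_reporting --ioengine=libaio --runtime=30 --time_based=1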

Anton

Regarding the ‘invalidate-remote’ command, I wouldn’t suggest relying on data from that method of testing; invalidating the peer results in a resync operation, which is treated differently than synchronous replication.

For the isolation testing, you would be running the same test on the same device: once while DRBD is connected to the peer, and once while it is not. So you would use FIO to write to your device on the node that is Primary (with the Secondary connected and UpToDate), then disconnect DRBD and run FIO again on the very same device you wrote to before. That would provide appropriate statistics you can compare, based on the outputs of FIO.
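
In other words, roughly this sequence on the Primary (a sketch using the resource and device names from your config, not the exact commands from the article):

# with memverge4 connected and UpToDate:
fio --name=connected --rw=write --bs=128k --filename=/dev/drbd1 --direct=1 --numjobs=1 --iodepth=8 --ioengine=libaio --runtime=30 --time_based=1
# isolate the peer and repeat the identical run:
drbdadm disconnect test
fio --name=standalone --rw=write --bs=128k --filename=/dev/drbd1 --direct=1 --numjobs=1 --iodepth=8 --ioengine=libaio --runtime=30 --time_based=1
# reconnect afterwards; the blocks written while disconnected will resync:
drbdadm connect test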

This knowledgebase article provides detailed information on how to do this:

Regarding the ‘invalidate-remote’ command, I wouldn’t suggest relying on data from that method of testing; invalidating the peer results in a resync operation, which is treated differently than synchronous replication.

Thank you for clarifying. I think I need this too, so how can I exceed 4 GB/s for the resync operation as well, if modern hardware (disks, network) and the configuration allow it?

For the isolation testing, you would be running the same test on the same device: once while DRBD is connected to the peer, and once while it is not. So you would use FIO to write to your device on the node that is Primary (with the Secondary connected and UpToDate), then disconnect DRBD and run FIO again on the very same device you wrote to before. That would provide appropriate statistics you can compare, based on the outputs of FIO.

Ok, on primary (memverge3),

  volume 1 {
    device      /dev/drbd1;
    disk        /dev/vg_r0/lvol0;
    meta-disk   internal;

Here are the results:

[root@memverge3 anton]# drbdadm status test
test role:Primary
  volume:1 disk:UpToDate open:no
  memverge4 role:Secondary
    volume:1 peer-disk:UpToDate

[root@memverge3 anton]# fio --name=test --rw=write --bs=128k --filename=/dev/drbd1 --direct=1 --numjobs=1 --iodepth=8 --exitall --group_reporting --ioengine=libaio --runtime=30 --time_based=1
test: (g=0): rw=write, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
fio-3.41-55-g3a4c1
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=3765MiB/s][w=30.1k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=13997: Thu Dec 18 14:07:01 2025
  write: IOPS=30.6k, BW=3826MiB/s (4012MB/s)(112GiB/30001msec)
    slat (usec): min=5, max=115, avg= 8.51, stdev= 1.14
    clat (usec): min=79, max=922, avg=252.70, stdev=26.74
     lat (usec): min=87, max=931, avg=261.21, stdev=26.72



[root@memverge3 anton]# drbdadm disconnect test
[root@memverge3 anton]# drbdadm status test
test role:Primary
  volume:1 disk:UpToDate open:no
  memverge4 connection:StandAlone

[root@memverge3 anton]# fio --name=test --rw=write --bs=128k --filename=/dev/drbd1 --direct=1 --numjobs=1 --iodepth=8 --exitall --group_reporting --ioengine=libaio --runtime=30 --time_based=1
test: (g=0): rw=write, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
fio-3.41-55-g3a4c1
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=21.1GiB/s][w=173k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=14062: Thu Dec 18 14:07:45 2025
  write: IOPS=166k, BW=20.3GiB/s (21.8GB/s)(610GiB/30001msec)
    slat (nsec): min=1502, max=936645, avg=5495.71, stdev=3017.01
    clat (usec): min=12, max=2520, avg=42.40, stdev=35.34
     lat (usec): min=19, max=2523, avg=47.90, stdev=35.50

Using iperf3, I checked the TCP bandwidth of the single 200 Gb/s link between the two directly connected servers. Even with one TCP stream I got ~100 Gb/s, with two TCP streams ~150 Gb/s, and with three TCP streams ~200 Gb/s:

[root@memverge4 ~]# iperf3 -c 1.1.1.3 -P1
Connecting to host 1.1.1.3, port 5201
[  5] local 1.1.1.4 port 40464 connected to 1.1.1.3 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  11.1 GBytes  94.9 Gbits/sec    0   4.07 MBytes
[  5]   1.00-2.00   sec  10.5 GBytes  89.9 Gbits/sec    0   4.07 MBytes
[  5]   2.00-3.00   sec  11.2 GBytes  95.9 Gbits/sec    0   4.07 MBytes
[  5]   3.00-4.00   sec  11.3 GBytes  97.1 Gbits/sec    0   4.07 MBytes
[  5]   4.00-5.00   sec  11.1 GBytes  95.0 Gbits/sec    0   4.07 MBytes
[  5]   5.00-6.00   sec  11.4 GBytes  98.3 Gbits/sec    0   4.07 MBytes
[  5]   6.00-7.00   sec  11.1 GBytes  95.6 Gbits/sec    0   4.07 MBytes
[  5]   7.00-8.00   sec  11.2 GBytes  96.4 Gbits/sec    0   4.07 MBytes
[  5]   8.00-9.00   sec  11.2 GBytes  96.6 Gbits/sec    0   4.07 MBytes
[  5]   9.00-10.00  sec  11.2 GBytes  96.3 Gbits/sec    0   4.07 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   111 GBytes  95.6 Gbits/sec    0             sender
[  5]   0.00-10.00  sec   111 GBytes  95.6 Gbits/sec                  receiver

iperf Done.
[root@memverge4 ~]# iperf3 -c 1.1.1.3 -P2
Connecting to host 1.1.1.3, port 5201
[  5] local 1.1.1.4 port 42410 connected to 1.1.1.3 port 5201
[  7] local 1.1.1.4 port 42420 connected to 1.1.1.3 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  8.30 GBytes  71.3 Gbits/sec    0   4.05 MBytes
[  7]   0.00-1.00   sec  9.68 GBytes  83.1 Gbits/sec    0   4.09 MBytes
[SUM]   0.00-1.00   sec  18.0 GBytes   154 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   1.00-2.00   sec  7.68 GBytes  65.9 Gbits/sec    0   4.05 MBytes
[  7]   1.00-2.00   sec  9.88 GBytes  84.9 Gbits/sec    0   4.09 MBytes
[SUM]   1.00-2.00   sec  17.6 GBytes   151 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   2.00-3.00   sec  7.65 GBytes  65.7 Gbits/sec    0   4.05 MBytes
[  7]   2.00-3.00   sec  9.79 GBytes  84.1 Gbits/sec    0   4.09 MBytes
[SUM]   2.00-3.00   sec  17.4 GBytes   150 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   3.00-4.00   sec  7.93 GBytes  68.2 Gbits/sec    0   4.05 MBytes
[  7]   3.00-4.00   sec  10.1 GBytes  86.5 Gbits/sec    0   4.09 MBytes
[SUM]   3.00-4.00   sec  18.0 GBytes   155 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   4.00-5.00   sec  7.57 GBytes  65.0 Gbits/sec    0   4.05 MBytes
[  7]   4.00-5.00   sec  9.70 GBytes  83.3 Gbits/sec    0   4.09 MBytes
[SUM]   4.00-5.00   sec  17.3 GBytes   148 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   5.00-6.00   sec  8.83 GBytes  75.8 Gbits/sec    0   4.05 MBytes
[  7]   5.00-6.00   sec  8.77 GBytes  75.3 Gbits/sec    0   4.09 MBytes
[SUM]   5.00-6.00   sec  17.6 GBytes   151 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   6.00-7.00   sec  8.94 GBytes  76.8 Gbits/sec    0   4.05 MBytes
[  7]   6.00-7.00   sec  8.92 GBytes  76.6 Gbits/sec    0   4.09 MBytes
[SUM]   6.00-7.00   sec  17.9 GBytes   153 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   7.00-8.00   sec  8.81 GBytes  75.7 Gbits/sec    0   4.05 MBytes
[  7]   7.00-8.00   sec  8.57 GBytes  73.6 Gbits/sec    0   4.09 MBytes
[SUM]   7.00-8.00   sec  17.4 GBytes   149 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   8.00-9.00   sec  7.85 GBytes  67.4 Gbits/sec    0   4.05 MBytes
[  7]   8.00-9.00   sec  9.77 GBytes  84.0 Gbits/sec    0   4.09 MBytes
[SUM]   8.00-9.00   sec  17.6 GBytes   151 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   9.00-10.00  sec  8.20 GBytes  70.5 Gbits/sec    0   4.05 MBytes
[  7]   9.00-10.00  sec  9.87 GBytes  84.7 Gbits/sec    0   4.09 MBytes
[SUM]   9.00-10.00  sec  18.1 GBytes   155 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  81.8 GBytes  70.2 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  81.8 GBytes  70.2 Gbits/sec                  receiver
[  7]   0.00-10.00  sec  95.0 GBytes  81.6 Gbits/sec    0             sender
[  7]   0.00-10.00  sec  95.0 GBytes  81.6 Gbits/sec                  receiver
[SUM]   0.00-10.00  sec   177 GBytes   152 Gbits/sec    0             sender
[SUM]   0.00-10.00  sec   177 GBytes   152 Gbits/sec                  receiver

iperf Done.
[root@memverge4 ~]# iperf3 -c 1.1.1.3 -P3
Connecting to host 1.1.1.3, port 5201
[  5] local 1.1.1.4 port 46890 connected to 1.1.1.3 port 5201
[  7] local 1.1.1.4 port 46898 connected to 1.1.1.3 port 5201
[  9] local 1.1.1.4 port 46912 connected to 1.1.1.3 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  7.72 GBytes  66.2 Gbits/sec    0   4.13 MBytes
[  7]   0.00-1.00   sec  6.85 GBytes  58.8 Gbits/sec    0   4.04 MBytes
[  9]   0.00-1.00   sec  7.44 GBytes  63.9 Gbits/sec    0   4.12 MBytes
[SUM]   0.00-1.00   sec  22.0 GBytes   189 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   1.00-2.00   sec  7.72 GBytes  66.3 Gbits/sec    0   4.13 MBytes
[  7]   1.00-2.00   sec  6.65 GBytes  57.2 Gbits/sec    0   4.04 MBytes
[  9]   1.00-2.00   sec  8.17 GBytes  70.2 Gbits/sec    0   4.12 MBytes
[SUM]   1.00-2.00   sec  22.5 GBytes   194 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   2.00-3.00   sec  7.85 GBytes  67.4 Gbits/sec    0   4.13 MBytes
[  7]   2.00-3.00   sec  7.25 GBytes  62.3 Gbits/sec    0   4.04 MBytes
[  9]   2.00-3.00   sec  7.79 GBytes  66.9 Gbits/sec    0   4.12 MBytes
[SUM]   2.00-3.00   sec  22.9 GBytes   197 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   3.00-4.00   sec  7.62 GBytes  65.5 Gbits/sec    0   4.13 MBytes
[  7]   3.00-4.00   sec  7.78 GBytes  66.8 Gbits/sec    0   4.04 MBytes
[  9]   3.00-4.00   sec  7.32 GBytes  62.9 Gbits/sec    0   4.12 MBytes
[SUM]   3.00-4.00   sec  22.7 GBytes   195 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   4.00-5.00   sec  7.85 GBytes  67.5 Gbits/sec    0   4.13 MBytes
[  7]   4.00-5.00   sec  7.72 GBytes  66.3 Gbits/sec    0   4.04 MBytes
[  9]   4.00-5.00   sec  6.99 GBytes  60.0 Gbits/sec    0   4.12 MBytes
[SUM]   4.00-5.00   sec  22.6 GBytes   194 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   5.00-6.00   sec  7.39 GBytes  63.5 Gbits/sec    0   4.13 MBytes
[  7]   5.00-6.00   sec  7.71 GBytes  66.3 Gbits/sec    0   4.04 MBytes
[  9]   5.00-6.00   sec  7.05 GBytes  60.6 Gbits/sec    0   4.12 MBytes
[SUM]   5.00-6.00   sec  22.2 GBytes   190 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   6.00-7.00   sec  7.75 GBytes  66.6 Gbits/sec    0   4.13 MBytes
[  7]   6.00-7.00   sec  7.94 GBytes  68.2 Gbits/sec    0   4.04 MBytes
[  9]   6.00-7.00   sec  7.14 GBytes  61.3 Gbits/sec    0   4.12 MBytes
[SUM]   6.00-7.00   sec  22.8 GBytes   196 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   7.00-8.00   sec  7.71 GBytes  66.2 Gbits/sec    0   4.13 MBytes
[  7]   7.00-8.00   sec  7.92 GBytes  68.1 Gbits/sec    0   4.04 MBytes
[  9]   7.00-8.00   sec  7.35 GBytes  63.1 Gbits/sec    0   4.12 MBytes
[SUM]   7.00-8.00   sec  23.0 GBytes   197 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   8.00-9.00   sec  7.70 GBytes  66.2 Gbits/sec    0   4.13 MBytes
[  7]   8.00-9.00   sec  7.68 GBytes  66.0 Gbits/sec    0   4.04 MBytes
[  9]   8.00-9.00   sec  7.66 GBytes  65.8 Gbits/sec    0   4.12 MBytes
[SUM]   8.00-9.00   sec  23.0 GBytes   198 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   9.00-10.00  sec  7.71 GBytes  66.2 Gbits/sec    0   4.13 MBytes
[  7]   9.00-10.00  sec  7.66 GBytes  65.8 Gbits/sec    0   4.04 MBytes
[  9]   9.00-10.00  sec  7.68 GBytes  66.0 Gbits/sec    0   4.12 MBytes
[SUM]   9.00-10.00  sec  23.0 GBytes   198 Gbits/sec    0
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  77.0 GBytes  66.2 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  77.0 GBytes  66.2 Gbits/sec                  receiver
[  7]   0.00-10.00  sec  75.2 GBytes  64.6 Gbits/sec    0             sender
[  7]   0.00-10.00  sec  75.2 GBytes  64.6 Gbits/sec                  receiver
[  9]   0.00-10.00  sec  74.6 GBytes  64.1 Gbits/sec    0             sender
[  9]   0.00-10.00  sec  74.6 GBytes  64.1 Gbits/sec                  receiver
[SUM]   0.00-10.00  sec   227 GBytes   195 Gbits/sec    0             sender
[SUM]   0.00-10.00  sec   227 GBytes   195 Gbits/sec                  receiver

iperf Done.
[root@memverge4 ~]#

The iperf3 results are the same if I swap the sender and receiver.

Anton

I also tried DRBD 9.3 (--bitmap-block-size=64k), but it is still the same: I can’t exceed ~4 GB/s for the initial sync (drbdadm invalidate-remote) or for regular replication (fio). I also tried different transport modes, with the same result for TCP and RDMA. I don’t know what else to try… Could it be that ~4 GB/s is hardcoded somewhere in the DRBD code?

I even tried protocol A (asynchronous replication); it gave only a +500 MB/s improvement, i.e. 4.5 GB/s total using TCP, and lower with RDMA.
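
That was just this change in the net section, with the rest left as in the config above (shown here as a partial snippet):

net {
    protocol A;   # asynchronous replication; was C before
}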

[root@memverge3 ~]# cat /etc/drbd.d/test.res | grep -i rate
        resync-rate 4096M;
        c-max-rate 4096M;
        c-min-rate 2G;
[root@memverge3 ~]# drbdadm adjust test
[root@memverge3 ~]#
[root@memverge3 ~]# vi /etc/drbd.d/test.res
[root@memverge3 ~]# cat /etc/drbd.d/test.res | grep -i rate
        resync-rate 4097M;
        c-max-rate 4097M;
        c-min-rate 2G;
[root@memverge3 ~]# drbdadm adjust test
drbd.d/test.res:6: Parse error: while parsing value ('4097M')
for c-max-rate. Value is too big.