Hello Linbit / DRBD community,
I'm new to this forum and would like to share something with you. We have been running DRBD without issue on Debian 12 with the following configuration:
resource "r0" {
    disk {
        al-extents 10000;
    }
    net {
        protocol A;
        verify-alg md5;
        connect-int 60;
        max-buffers 32k;
    }
    volume 0 {
        device minor 0;
        disk "/dev/vg0/drbd-r0-0";
        meta-disk "/dev/vg0/drbd-r0-0-md";
        disk {
            disk-flushes no;
            md-flushes no;
        }
    }
    volume 1 {
        device minor 1;
        disk "/dev/vg0/drbd-r0-1";
        meta-disk "/dev/vg0/drbd-r0-1-md";
        disk {
            resync-after r0/0;
            disk-flushes no;
            md-flushes no;
        }
    }
    on "storage-1" {
        node-id 1;
    }
    on "storage-2" {
        node-id 2;
    }
    on "storage-4" {
        node-id 3;
    }
    connection {
        host storage-1 address *.*.*.*:30012;
        host storage-2 address *.*.*.*:30012;
    }
    connection {
        host storage-1 address *.*.*.*:30013 via proxy on drbd-proxy-main {
            inside *.*.*.*:31013;
            outside *.*.*.*:32013;
            options {
                memlimit 2G;
            }
        }
        host storage-4 address *.*.*.*:30013 via proxy on storage-4 {
            inside *.*.*.*:31013;
            outside *.*.*.*:32013;
            options {
                memlimit 2G;
            }
        }
        net {
            cram-hmac-alg sha1;
            shared-secret "****";
            allow-remote-read no;
        }
        volume 0 {
            disk {
                c-plan-ahead 20;
                c-fill-target 6M;
            }
        }
        volume 1 {
            disk {
                c-plan-ahead 20;
                c-fill-target 6M;
            }
        }
    }
    connection {
        host storage-2 address *.*.*.*:30023 via proxy on drbd-proxy-main {
            inside *.*.*.*:31023;
            outside *.*.*.*:32023;
            options {
                memlimit 2G;
            }
        }
        host storage-4 address *.*.*.*:30023 via proxy on storage-4 {
            inside *.*.*.*:31023;
            outside *.*.*.*:32023;
            options {
                memlimit 2G;
            }
        }
        net {
            cram-hmac-alg sha1;
            shared-secret "****";
            allow-remote-read no;
        }
        volume 0 {
            disk {
                c-plan-ahead 20;
                c-fill-target 6M;
            }
        }
        volume 1 {
            disk {
                c-plan-ahead 20;
                c-fill-target 6M;
            }
        }
    }
    options {
        cpu-mask FFFFFF;
    }
}
The filesystems are mostly btrfs on top of LVM, on top of dm-crypt, on top of DRBD, on top of LVM, on top of BBWC RAID. We have one fast SSD RAID and one slow RAID, which is why the resource has two volumes.
We upgraded the systems to Debian 13 a few weeks ago, and since then we have occasionally seen errors like the following:
drbd r0 storage-2: BAD! BarrierAck #47682 received with n_writes=4, expected n_writes=7!
We wonder whether there are compatibility issues between DRBD and this kernel version (6.12) or btrfs (6.14), or whether there is something we should adjust in our configuration, in particular enabling or disabling drain/flushes/barriers.
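For reference, the first diagnostic change we are considering is simply restoring the DRBD defaults for flushes on both volumes (we currently disable them because of the battery-backed write cache), i.e. something like:

```
resource "r0" {
    volume 0 {
        disk {
            # "yes" is the DRBD default; we run with flushes disabled
            # only because the RAID controller has a BBWC.
            disk-flushes yes;
            md-flushes yes;
        }
    }
    # (same change for volume 1)
}
```

which we would apply with `drbdadm adjust r0` — but we are unsure whether flushes are related to the BarrierAck mismatch at all, hence this post.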
Thank you in advance for your time.