Remove old drbd setup

Hi all

I’m re-setting up drbd from scratch on Rocky 9.5/drbd 9.2.12

[root@memverge ~]# drbdadm create-md ha-nfs
You want me to create a v09 style flexible-size internal meta data block.
There appears to be a v09 flexible-size internal meta data block
already in place on /dev/md0 at byte offset 4800557084672

Do you really want to overwrite the existing meta-data?
[need to type 'yes' to confirm] yes

md_offset 4800557084672
al_offset 4800557051904
bm_offset 4800264048640

Found some data

==> This might destroy existing data! <==

Do you want to proceed?
[need to type 'yes' to confirm] yes

initializing activity log
initializing bitmap (279 MB) to all zero
Writing meta data...
New drbd meta data block successfully created.
[root@memverge ~]# drbdadm up ha-nfs

--== Thank you for participating in the global usage survey ==--
The server's response is:

you are the 2441th user to install this version
ha-nfs: Failure: (162) Invalid configuration request
additional info from kernel:
Connection for peer node id 13 already exists
Command 'drbdsetup new-peer ha-nfs 13 --_name=memverge2 --fencing=resource-and-stonith --connect-int=15 --ping-int=15 --ping-timeout=10 --timeout=90 --max-epoch-size=20000 --max-buffers=80K --rcvbuf-size=10M --sndbuf-size=10M --protocol=C --transport=tcp' terminated with exit code 10
drbdadm: new-peer ha-nfs: skipped due to earlier error
[root@memverge ~]#

So what is this "Connection for peer node id 13 already exists" error, and how do I check for it / remove it?

Below is the current ha-nfs.res:

[root@memverge ~]# more /etc/drbd.d/ha-nfs.res
resource ha-nfs {
    disk {
        c-plan-ahead 0;
        resync-rate 32M;
        al-extents 6007;
    }

    volume 29 {
        device /dev/drbd0;
        disk /dev/md0;
        meta-disk internal;
    }

    on memverge {
        address 10.72.14.152:7900;
        node-id 12;
    }

    on memverge2 {
        address 10.72.14.154:7900;
        node-id 13;
    }

    on qs {
        volume 29 {
            disk none;
        }
        address 10.72.14.156:7900;
        node-id 14;
    }

    connection-mesh {
        hosts memverge memverge2 qs;
    }

    net {
        transport tcp;
        protocol C;
        sndbuf-size 10M;
        rcvbuf-size 10M;
        max-buffers 80K;
        max-epoch-size 20000;
        timeout 90;
        ping-timeout 10;
        ping-int 15;
        connect-int 15;
        fencing resource-and-stonith;
    }

    handlers {
        fence-peer "/usr/lib/drbd/crm-fence-peer.9.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.9.sh";
    }

    connection {
        path {
            host "memverge" address ipv4 192.168.0.6:7900;
            host "memverge2" address ipv4 192.168.0.8:7900;
        }
        path {
            host "memverge" address ipv4 1.1.1.6:7900;
            host "memverge2" address ipv4 1.1.1.8:7900;
        }
        net {
            transport tcp;
            protocol C;
            sndbuf-size 10M;
            rcvbuf-size 10M;
            max-buffers 80K;
            max-epoch-size 20000;
            timeout 90;
            ping-timeout 10;
            ping-int 15;
            connect-int 15;
            fencing resource-and-stonith;
        }
    }

    options {
        auto-promote            yes;
        auto-promote-timeout    200;
        quorum majority;
        on-no-quorum suspend-io;
        on-no-data-accessible suspend-io;
        quorum-minimum-redundancy 1;
        on-suspended-primary-outdated force-secondary;
    }
}
[root@memverge ~]#

This error is common when the DRBD resource is already either connected or listening for a connection. You can check the current connection status with drbdadm status. If the connection is in the Connecting or Connected state, you will get the error above when running drbdadm up.
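For example, on the node that reported the failure (using the resource name from your config; the exact output will vary), the following commands show both the summarized state and the kernel's current runtime view of the resource, including any peer connections it already holds:

    drbdadm status ha-nfs              # resource role, disk state, and per-peer connection state
    drbdsetup status ha-nfs --verbose  # the same information, queried directly from the kernel, with more detail
    drbdsetup show ha-nfs              # runtime configuration currently loaded in the kernel, peers included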

The strange thing here is that the resource must have been down in order to create and update the metadata. Regardless, if you are trying to reconcile an already-up resource with its configuration, or apply any configuration changes, the preferred approach is to run drbdadm adjust.
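A minimal sketch of that, assuming the resource name from your config:

    drbdadm -d adjust ha-nfs   # dry run: print the drbdsetup commands adjust would issue, without applying them
    drbdadm adjust ha-nfs      # reconcile the kernel state with /etc/drbd.d/ha-nfs.res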

Additionally, you can tear down any existing connections with drbdadm disconnect.
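For instance, to fully tear down the kernel state for this resource and bring it back up from the configuration file, something along these lines should work (this should not touch the backing device's data or the metadata you just created):

    drbdadm disconnect ha-nfs   # drop any existing peer connections
    drbdadm down ha-nfs         # remove the resource (volumes and peers) from the kernel
    drbdadm up ha-nfs           # re-create it from the on-disk configuration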
