Adding a new LVM logical volume to an existing DRBD config

Hi

I have a DRBD resource config file with several LVM logical volumes.

After adding a new LVM logical volume to the existing DRBD resource, what should I do next?

[root@memverge anton]# drbdadm adjust ha-nfs
No valid meta data found
[root@memverge anton]#
[root@memverge anton]# drbdadm adjust all
No valid meta data found

Anton

Hello Anton,

Could you share your DRBD resource file here? I'd be interested to see the resource configuration file as it was before the change, as well as the file you are currently attempting to load with the drbdadm adjust command.

The output of lsblk may also be helpful here, so I'd invite you to share that as well.

Here is the DRBD resource file:

volume 29 {
device /dev/drbd1;
disk /dev/block_nfs_vg/ha_nfs_internal_lv;
meta-disk internal;
}
volume 30 {
device /dev/drbd2;
disk /dev/block_nfs_vg/ha_nfs_exports_lv;
meta-disk internal;
}
volume 31 {
device /dev/drbd3;
disk /dev/block_nfs_vg/ha_block_exports_lv;
meta-disk internal;
}

Volumes 29 and 30 are already configured and used for NFS, and drbdadm shows:

[root@memverge anton]# drbdadm status
ha-nfs role:Primary
volume:29 disk:UpToDate
volume:30 disk:UpToDate
memverge2 role:Secondary
volume:29 peer-disk:UpToDate
volume:30 peer-disk:UpToDate

I want to add volume 31 as an iSCSI target for export to initiators.

It looks like some parts of the resource file are missing here. Could you share the entire file, including the sections where the hostnames and node IDs are configured? Issues or changes there could produce an error like the one you are seeing, so this would be necessary to check.

For reference, I've linked a multi-volume DRBD resource file example from the UG, showing what we would expect to see: (here)

If you could also provide the output of modinfo -d drbd and drbdsetup show, that would give us your currently loaded module version as well as the resource configuration currently running in memory, which we could compare against.

Also, are you getting any messages in your system journal from drbd when you attempt this drbdadm adjust? DRBD is sometimes more verbose in the kernel log files, so checking that may give you a better idea of what the current issue may be.
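For example (assuming a systemd-based system; adjust the filter to taste), you could follow the kernel messages in one terminal while running the adjust in another:

# follow kernel messages from the journal, filtering for DRBD
journalctl -k -f | grep -i drbd

# or follow the kernel ring buffer directly
dmesg -w | grep -i drbd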

Ok, here is the entire file:

[root@memverge anton]# cat /etc/drbd.d/ha-nfs.res
resource ha-nfs {

options {
auto-promote no;
}

handlers {
fence-peer "/usr/lib/drbd/crm-fence-peer.9.sh";
after-resync-target "/usr/lib/drbd/crm-unfence-peer.9.sh";
}

disk {
c-plan-ahead 0;
resync-rate 32M;
al-extents 6007;
}

volume 29 {
device /dev/drbd1;
disk /dev/block_nfs_vg/ha_nfs_internal_lv;
meta-disk internal;
}
volume 30 {
device /dev/drbd2;
disk /dev/block_nfs_vg/ha_nfs_exports_lv;
meta-disk internal;
}
volume 31 {
device /dev/drbd3;
disk /dev/block_nfs_vg/ha_block_exports_lv;
meta-disk internal;
}

on memverge {
address 10.72.14.152:7900;
node-id 27;
}
on memverge2 {
address 10.72.14.154:7900;
node-id 28;
}

connection-mesh {
hosts memverge memverge2;
}

net
{
transport tcp;
protocol C;
sndbuf-size 10M;
rcvbuf-size 10M;
max-buffers 80K;
max-epoch-size 20000;
timeout 90;
ping-timeout 10;
ping-int 15;
connect-int 15;
fencing resource-and-stonith;
}

connection
{
path
{
host memverge address 192.168.0.6:7900;
host memverge2 address 192.168.0.8:7900;
}
path
{
host memverge address 1.1.1.6:7900;
host memverge2 address 1.1.1.8:7900;
}
net
{
transport tcp;
protocol C;
sndbuf-size 10M;
rcvbuf-size 10M;
max-buffers 80K;
max-epoch-size 20000;
timeout 90;
ping-timeout 10;
ping-int 15;
connect-int 15;
fencing resource-and-stonith;
}
}

}
[root@memverge anton]#

[root@memverge anton]# modinfo -d drbd
drbd - Distributed Replicated Block Device v9.2.12
[root@memverge anton]#
[root@memverge anton]# drbdsetup show
resource "ha-nfs" {
options {
auto-promote no;
}
_this_host {
node-id 27;
volume 29 {
device minor 1;
disk "/dev/block_nfs_vg/ha_nfs_internal_lv";
meta-disk internal;
disk {
al-extents 6007;
}
}
volume 30 {
device minor 2;
disk "/dev/block_nfs_vg/ha_nfs_exports_lv";
meta-disk internal;
disk {
al-extents 6007;
}
}
volume 31 {
device minor 3;
}
}
connection {
_peer_node_id 28;
path {
_this_host ipv4 192.168.0.6:7900;
_remote_host ipv4 192.168.0.8:7900;
}
path {
_this_host ipv4 1.1.1.6:7900;
_remote_host ipv4 1.1.1.8:7900;
}
net {
transport "tcp";
timeout 90; # 1/10 seconds
max-epoch-size 20000;
connect-int 15; # seconds
ping-int 15; # seconds
sndbuf-size 10485760; # bytes
rcvbuf-size 10485760; # bytes
ping-timeout 10; # 1/10 seconds
fencing resource-and-stonith;
max-buffers 81920;
_name "memverge2";
}
volume 29 {
disk {
resync-rate 32768k; # bytes/second
c-plan-ahead 0; # 1/10 seconds
}
}
volume 30 {
disk {
resync-rate 32768k; # bytes/second
c-plan-ahead 0; # 1/10 seconds
}
}
volume 31 {
disk {
resync-rate 32768k; # bytes/second
c-plan-ahead 0; # 1/10 seconds
}
}
}
}

[root@memverge anton]#

Also, are you getting any messages in your system journal from drbd when you attempt this drbdadm adjust?

[root@memverge anton]# drbdadm adjust all
No valid meta data found
[root@memverge anton]# dmesg
[ 1040.903634] drbd ha-nfs/31 drbd3 memverge2: pdsk( DUnknown -> Diskless ) repl( Off -> Established ) [peer-state]
[root@memverge anton]#
[root@memverge anton]# drbdadm status
ha-nfs role:Secondary
volume:29 disk:UpToDate
volume:30 disk:UpToDate
volume:31 disk:Diskless
memverge2 role:Secondary
volume:29 peer-disk:UpToDate
volume:30 peer-disk:UpToDate
volume:31 peer-disk:Diskless

Now I have started thinking that maybe for iSCSI I need to create a new DRBD resource file rather than add the iSCSI volume to the existing NFS DRBD resource config, but I would prefer to stay with a single DRBD resource config for both the NFS and iSCSI volumes.

What do you think? What are the best practices if I want to have both NFS and iSCSI?

Update: I just found out how to add a new volume to the existing DRBD resource config.

[root@memverge ~]# drbdadm create-md ha-nfs/31
md_offset 1000001761280
al_offset 1000001728512
bm_offset 999940689920

Found some data

==> This might destroy existing data! <==

Do you want to proceed?
[need to type 'yes' to confirm] yes

initializing activity log
initializing bitmap (58 MB) to all zero
Writing meta data...
New drbd meta data block successfully created.
success
[root@memverge ~]#

[root@memverge ~]# drbdadm up ha-nfs/31
up operates on whole resources, but you specified a specific volume!
[root@memverge ~]#
[root@memverge ~]# drbdadm adjust all

But a few seconds later, my Primary was switched to Secondary.

[root@memverge ~]# drbdadm status
ha-nfs role:Primary
volume:29 disk:UpToDate
volume:30 disk:UpToDate
volume:31 disk:Inconsistent
memverge2 role:Secondary
volume:29 peer-disk:UpToDate
volume:30 peer-disk:UpToDate
volume:31 peer-disk:Inconsistent

[root@memverge ~]# drbdadm status
ha-nfs role:Secondary
volume:29 disk:UpToDate
volume:30 disk:UpToDate
volume:31 disk:Inconsistent
memverge2 role:Secondary
volume:29 peer-disk:UpToDate
volume:30 peer-disk:UpToDate
volume:31 peer-disk:Inconsistent

And the cluster's NFS service was stopped.

So I'm missing something when adding a new volume to an existing DRBD resource config.

Anton

Hello Anton,

To answer your earlier question: the best practice is to use separate resources for these different services, rather than a single resource with additional volumes. You can keep the LVM layout of separate logical volumes from a single volume group as the backing storage, but separate resources allow for easier administration and troubleshooting, since each DRBD resource can be handled individually and is more clearly distinct in your log files.
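For illustration only, a separate resource for the iSCSI volume could look roughly like the sketch below, reusing a logical volume from the same volume group as backing storage. The resource name, device minor, and port here are placeholders I've made up, not values from your system:

resource ha-iscsi {
    volume 0 {
        device /dev/drbd10;              # any unused minor number
        disk /dev/block_nfs_vg/ha_block_exports_lv;
        meta-disk internal;
    }
    on memverge {
        address 10.72.14.152:7901;       # each resource needs its own TCP port
        node-id 27;
    }
    on memverge2 {
        address 10.72.14.154:7901;
        node-id 28;
    }
    connection-mesh {
        hosts memverge memverge2;
    }
}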

Based on the outputs shared, it could be that the LVM volume was not recognized as space that could be written to, because data or filesystem signatures already existed somewhere DRBD intended to write. Now that the metadata has been created there, it is harder to determine whether that was the case. The warning from DRBD about existing data should be taken seriously; proceed with writing metadata over it only when you are certain what DRBD is detecting and that you do not need it.
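If you run into this again on a fresh logical volume, one way to see what DRBD might be detecting (just a suggestion, not a required step) is to list any existing signatures on the backing device before creating the metadata:

# list filesystem/RAID signatures without modifying anything (-n = no-act)
wipefs -n /dev/block_nfs_vg/ha_block_exports_lv

# show any filesystem type/UUID that blkid can identify on the device
blkid /dev/block_nfs_vg/ha_block_exports_lv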

Ok, thank you for your answer.

So I made one DRBD resource config file for NFS and another DRBD resource config file for iSCSI.

Interestingly, with two DRBD resource config files the ports must be different. For example, in the DRBD/NFS resource config file I use port 7900, and in the DRBD/iSCSI resource config file I use port 7901.

But the root question is still open: how do I add a new volume to an existing DRBD resource config?

Right now I have one iSCSI volume, but later I will want to add a second iSCSI volume to the DRBD/iSCSI resource config file.

The procedure is unclear.
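For what it's worth, here is a rough sketch of the usual steps for adding a volume to an existing resource, pieced together from what worked above. The LV name, size, and volume number are placeholders, and the resource name assumes the hypothetical ha-iscsi resource sketched earlier; if the resource is managed by Pacemaker (the crm-fence-peer handler suggests it is), it may be safest to do this with the cluster in maintenance mode.

# 1) create the new backing LV on both nodes (name and size are placeholders)
lvcreate -L 100G -n ha_iscsi_lv2 block_nfs_vg

# 2) add a matching volume section to the resource file on both nodes, e.g.
#    volume 1 { device /dev/drbd11; disk /dev/block_nfs_vg/ha_iscsi_lv2; meta-disk internal; }

# 3) initialize DRBD metadata for just that volume, on both nodes
drbdadm create-md ha-iscsi/1

# 4) load the new volume into the running resource, on both nodes
drbdadm adjust ha-iscsi

# 5) the new volume will come up Inconsistent on both sides until an initial
#    sync is started (or deliberately skipped); see the DRBD User's Guide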