Best place to layer DRBD on disks/RAID/LVM-RAID for active/passive NFS and iSCSI

Hello everyone

I've set up a test environment for DRBD in my homelab with 2 VMs, but I want to build two physical machines. I'm still reading up on 2 vs 3 nodes and how the setup differs, so please forgive my lack of knowledge.

My main question is this: what is best practice for where to layer in DRBD? I want to serve both iSCSI and NFS, so my first thought was Linux mdraid as a mirror of two HDDs or SSDs, DRBD on top of that, then LVM on top of DRBD, with a volume for NFS shares and dedicated volumes for iSCSI. Confusion set in when I wondered whether I'm over-complicating things by using Linux mdraid on the drives, or whether I'm better off using LVM to combine the drives and specify redundancy in LVM, but then how would I layer in DRBD?
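Roughly the stack I was picturing, just as a sketch (device names, hostnames, addresses and sizes here are placeholders, not an actual config):

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

then a DRBD resource backed by the mirror, something like:

resource "r0" {
  device minor 0;
  disk "/dev/md0";
  meta-disk internal;
  on "node-a" {
    address 192.168.1.11:7789;
  }
  on "node-b" {
    address 192.168.1.12:7789;
  }
}

and LVM on top of the replicated device:

# pvcreate /dev/drbd0
# vgcreate storage /dev/drbd0
# lvcreate -L 100G -n nfs_0 storage
# lvcreate -L 50G -n iscsi_0 storage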

I'm trying to learn storage architecture. I was going to use Ceph, but the cost of the nodes needed for testing is high for me, and then I set up an active/passive NFS share and it was amazing.

I'm planning to use 2 ZimaBlades with 10 Gb NICs running Ubuntu LTS and to automate the setup with Ansible. I'm not sure if I'm forgetting anything, but I would like to know what the community thinks.
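For the Ansible side I was only picturing something small, along these lines (just a sketch; the package list and the r0.res.j2 template name are placeholders I made up):

- name: Install DRBD userland tools and mdadm
  ansible.builtin.apt:
    name:
      - drbd-utils
      - mdadm
    state: present

- name: Deploy the DRBD resource definition
  ansible.builtin.template:
    src: r0.res.j2
    dest: /etc/drbd.d/r0.res
    owner: root
    group: root
    mode: "0640"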

Thank you

I would personally use LVM underneath DRBD for RAID to keep things simple.

For example, create a volume group named storage using four (or more) physical volumes:

# vgcreate storage /dev/sdb /dev/sdc /dev/sdd /dev/sde

Then, you can create the logical volumes using RAID options:

# lvcreate --type raid5 -i 3 -L 10G -n iscsi_0 storage
# lvcreate --type raid5 -i 3 -L 10G -n iscsi_1 storage
... 
# lvcreate --type raid5 -i 3 -L 10G -n nfs_0 storage
# lvcreate --type raid5 -i 3 -L 10G -n nfs_1 storage
...

And then use those LVs as the backing devices for DRBD:

resource "iscsi_0" {
  device minor 0;
  disk "/dev/storage/iscsi_0";
  meta-disk internal;
  on "alice" {
    address   10.1.1.31:7789;
  }
  on "bob" {
    address   10.1.1.32:7789;
  }
}

resource "iscsi_1" {
...

Another thing to consider is using external metadata, as opposed to the default internal metadata, for the DRBD volumes, placed on some other equally fast or faster block device. DRBD writes its metadata in 4K writes, which would likely cause lots of read-modify-write cycles on RAID with parity.

You can use LVM here too, just without striping:

# vgcreate metadata /dev/sdf
# lvcreate -L 4M -n iscsi_0_md metadata
...

And the metadata definition in the DRBD device configuration would look like this:

meta-disk /dev/metadata/iscsi_0_md;
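
Once the resource files (and, if you go that route, the external metadata LVs) exist on both nodes, the bring-up is roughly the usual sequence, run on both nodes:

# drbdadm create-md iscsi_0
# drbdadm up iscsi_0

and then, on one node only, force it primary to kick off the initial sync:

# drbdadm primary --force iscsi_0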

Hi @kermat,

Thanks for the reply, and apologies for getting back to you late.

Using DRBD on top of LVM volumes is something I hadn't considered.

I was aware that LVM can provide redundancy by specifying a RAID level.

But I'm starting to think iSCSI is unnecessary for my use case.

For VMs, my main hypervisor is XCP-ng, plus a few Proxmox nodes.

Everything works with NFS, and on the XCP-ng side the current Storage Management API versions only support thick provisioning on iSCSI, while I get thin provisioning on NFS.

In terms of simplicity for me: I'm not that familiar with LVM, so I might be better off using Linux md on 2 HDDs, layering DRBD on top, and then a simple XFS file system.
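i.e. once the md mirror and the DRBD resource on top of it are up, on the primary node it would just be something like (device name and mount point are placeholders):

# mkfs.xfs /dev/drbd0
# mount /dev/drbd0 /srv/nfs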

I need to keep researching, though, and weighing whether I need iSCSI, as I don't want an over-complicated storage system. I already run 2 central TrueNAS SCALE systems, but patching them means downtime. At least with an active/passive setup I can patch one, reboot, and failover is quick.