Evaluating DRBD Performance with NVMe & ZFS-Based RAID10 – Seeking Advice

Hello,

I have three Proxmox servers currently running GlusterFS in a 3-way replica setup. I’m considering switching to DRBD, but I have a few questions before making the move.

Hardware Configuration:

  • Each server has:
    • 10 x 2TB HDDs in RAID10 (a ZFS pool of striped mirrors)
    • 1 x NVMe SSD (1.92TB, Samsung)
    • 2 x 10G NICs in an LACP bond with MTU 9000 (jumbo frames enabled)

My Plan:

1. For the NVMe disk:
  • DRBD directly on the NVMe device, with XFS on top (see the resource sketch after this list).
2. For the HDD array:
  • ZFS zvol → DRBD → XFS (see the setup sketch after this list).
  • The reason for using ZFS for RAID10 instead of hardware RAID is the flexibility to upgrade disks: for example, I can swap the 2TB drives for 4TB ones one mirror at a time and let the pool grow, without destroying and rebuilding the array.
  • With hardware RAID, this would typically mean wiping and reinitializing the entire RAID group.
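
To make the plan concrete, here is a rough sketch of the two resource definitions I have in mind, rendered from a small Python script so I can keep the three nodes consistent. The hostnames (pve1/pve2/pve3), replication IPs, ports, pool/zvol names, and the choice of protocol C are my own placeholders and assumptions, not settings I have validated.

```python
#!/usr/bin/env python3
"""Render DRBD 9 resource files for the two planned stacks.

Hostnames, IPs, ports and device paths are placeholders/assumptions.
"""

NODES = {  # assumed node names and replication-network IPs
    "pve1": "10.10.10.1",
    "pve2": "10.10.10.2",
    "pve3": "10.10.10.3",
}

RESOURCES = [
    # (resource name, DRBD minor, backing device, TCP port)
    ("r0-nvme", 0, "/dev/nvme0n1", 7788),             # DRBD directly on the NVMe
    ("r1-hdd",  1, "/dev/zvol/hddpool/drbd1", 7789),  # DRBD on a ZFS zvol
]

TEMPLATE = """resource {name} {{
  device    /dev/drbd{minor};
  disk      {disk};
  meta-disk internal;

  net {{
    protocol C;  # synchronous, like the current 3-way Gluster replica
  }}

{on_sections}
  connection-mesh {{
    hosts {hosts};
  }}
}}
"""

def render(name, minor, disk, port):
    on_sections = ""
    for node_id, (host, ip) in enumerate(NODES.items()):
        on_sections += (
            f"  on {host} {{\n"
            f"    address {ip}:{port};\n"
            f"    node-id {node_id};\n"
            f"  }}\n"
        )
    return TEMPLATE.format(name=name, minor=minor, disk=disk,
                           on_sections=on_sections, hosts=" ".join(NODES))

if __name__ == "__main__":
    for res in RESOURCES:
        print(f"# /etc/drbd.d/{res[0]}.res")
        print(render(*res))
```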
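
And this is roughly how I picture the layering itself for the HDD resource on each node (zvol → DRBD → XFS). Pool, zvol and resource names match the placeholders above; the zvol size and volblocksize are guesses I still intend to test, and the filesystem would of course only be created once, on the node that is promoted first.

```python
#!/usr/bin/env python3
"""Sketch of the zvol -> DRBD -> XFS layering (placeholder names/sizes)."""
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. On every node: carve a zvol out of the striped-mirror HDD pool.
run(["zfs", "create", "-V", "8T", "-o", "volblocksize=16k", "hddpool/drbd1"])

# 2. On every node: initialise DRBD metadata and bring the resource up
#    (using the r1-hdd resource file from the sketch above).
run(["drbdadm", "create-md", "r1-hdd"])
run(["drbdadm", "up", "r1-hdd"])

# 3. On ONE node only: force the initial promotion and put XFS on top.
run(["drbdadm", "primary", "--force", "r1-hdd"])
run(["mkfs.xfs", "/dev/drbd1"])
```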

Questions:

  1. NVMe performance impact – Does anyone have experience running DRBD directly on NVMe disks? How much performance loss should I expect in terms of IOPS, latency, and bandwidth? I plan to measure the overhead myself with fio (see the sketch after this list), but real-world numbers would be very welcome.
  2. ZFS zvol → DRBD → XFS – Does this approach seem too unconventional, or is it viable?
  3. What DRBD settings would you recommend for optimal performance in this setup (3-way replication over the 2 x 10G LACP bond)?
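
For question 1, my plan is to quantify the overhead myself by running identical fio jobs against the raw NVMe namespace and against the DRBD device layered on it, and to post the numbers here. The sketch below shows what I have in mind; the device paths, job parameters and fio 3.x JSON field names are my assumptions, and the writes are destructive to whatever is on those devices.

```python
#!/usr/bin/env python3
"""Compare raw NVMe vs. DRBD-on-NVMe with identical fio jobs.

WARNING: destructive raw-device writes. Paths and parameters are
placeholders; JSON field names assume fio 3.x output.
"""
import json
import subprocess

TARGETS = {
    "raw-nvme":  "/dev/nvme0n1",  # baseline
    "drbd-nvme": "/dev/drbd0",    # same disk behind DRBD (protocol C)
}

JOBS = [
    ("randwrite-4k", ["--rw=randwrite", "--bs=4k", "--iodepth=32", "--numjobs=4"]),
    ("seqwrite-1m",  ["--rw=write", "--bs=1M", "--iodepth=8", "--numjobs=1"]),
]

def run_fio(name, device, extra):
    cmd = [
        "fio", f"--name={name}", f"--filename={device}",
        "--ioengine=libaio", "--direct=1", "--runtime=60",
        "--time_based", "--group_reporting", "--output-format=json",
    ] + extra
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    job = json.loads(out)["jobs"][0]
    return {
        "write_iops": round(job["write"]["iops"]),
        "write_bw_MiBps": round(job["write"]["bw"] / 1024),  # fio reports KiB/s
        "clat_p99_us": job["write"]["clat_ns"]["percentile"]["99.000000"] / 1000,
    }

if __name__ == "__main__":
    for job_name, extra in JOBS:
        for label, dev in TARGETS.items():
            print(label, job_name, run_fio(job_name, dev, extra))
```

That should let me compare IOPS, bandwidth, and 99th-percentile latency directly between the two layers and share before/after numbers in this thread.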

If any additional details are needed, I’d be happy to provide them.

Best regards,
Alex