NFS export using linstor-gateway/drbd-reactor

Hello experts,

I have multiple questions regarding configuration and usage of linstor components.

The goal is to set up eight file servers data0x (x=1-8), organized in four pairs (dataa, datab, datac, datad) that each export a file system, plus at least one controller with combined controller/satellite functionality. All machines run Rocky Linux 9.5.

The idea is to use a combination of drbd, drbd-reactor, linstor-controller, linstor-satellite, linstor-gateway, and linstor-gui to administer the exports. The corresponding packages have all been compiled and installed.

I created the nodes, partitions/storage pools (LVM-thin), XFS file systems on all DRBD devices, a resource group ‘data’, a volume group ‘data’, and four resources dataa, datab, datac, datad.
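
For reference, the storage side was set up roughly along these lines; the node name, IP, storage pool and thin pool names below are placeholders rather than my exact commands:

linstor node create data01 10.162.248.11
linstor storage-pool create lvmthin data01 pool_data vg_data/thin_data
linstor resource-group create data --storage-pool pool_data --place-count 2
linstor volume-group create data
linstor resource-group spawn-resources data dataa 24T
mkfs.xfs /dev/drbd1000   # on the node where dataa is currently Primary

(and analogously for the other nodes and for datab, datac, datad)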

The node/resource lists all look OK, both on the command line (“linstor node/resource list” and “drbd-reactorctl status”) and in the linstor-gui.

At this point I am unsure how to continue. With ‘drbd-reactorctl edit dataa’ I can configure a promoter:

[[promoter]]
id = "dataa"
[promoter.resources.dataa]
start = [
    "ocf:heartbeat:portblock portblock action=block ip=10.162.248.48 portno=2049 protocol=tcp",
    """ocf:heartbeat:Filesystem fs device=/dev/drbd1000 directory=/srv/reactor-exports/dataa fstype=xfs run_fsck=no""",
    """ocf:heartbeat:nfsserver nfsserver nfs_ip=10.162.248.48 nfs_server_scope=10.162.248.48 nfs_shared_infodir=/srv/ha/dataa/nfsinfo""",
    """ocf:heartbeat:exportfs exportfs clientspec=10.162.248.0/255.255.254.0 directory=/srv/reactor-exports/dataa fsid=0 options='rw,no_all_squash,root_squash,anonuid=0,anongid=0'""",
    "ocf:heartbeat:IPaddr2 virtual_ip cidr_netmask=23 ip=10.162.248.48",
    """ocf:heartbeat:portblock portunblock action=unblock ip=10.162.248.48 portno=2049 protocol=tcp tickle_dir=/srv/ha/dataa/tickle""",
]

This activates the NFS export on dataa (with data01 as the primary node); however, the export cannot be mounted because port 2049 seems to be blocked.
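
My first (unverified) guess is that the rule inserted by the portblock agent is never removed again, so on the active node I would check whether a firewall rule for 2049 is still in place and whether anything is listening on that port:

iptables -nvL INPUT | grep 2049   # leftover rule from the portblock agent?
ss -tlnp | grep 2049              # is anything listening on 2049 at all?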

‘linstor-gateway check-health --iscsi-backends lio’ reports that the status of all components except the NFS server is OK. When I stop the NFS server, the NFS status becomes OK, but the server is automatically started again shortly afterwards (probably by linstor-satellite), which again produces an error in the health check. I don’t see a problem with an active NFS server, so why does the health check complain?
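
The automatic restart should be visible in the journal; this is how I would check what keeps starting nfs-server again (assuming the standard nfs-server.service unit on Rocky 9):

systemctl status nfs-server
journalctl -u nfs-server --since "10 minutes ago"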

I installed and enabled linstor-gateway on all participating machines, which may not make sense, since only the controller responds on port 3370, which linstor-gateway apparently accesses.
Consequently, ‘linstor-gateway nfs list’ fails on the pure satellites and returns an empty list on the controller, while ‘linstor-gateway nfs list -c http://mylinstorcontroller:8080’ works on all machines.
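
I assume the gateway server itself could be pointed at the controller in a similar way, perhaps via the LS_CONTROLLERS environment variable in a systemd drop-in, but this is only a guess on my part (unit and variable names unverified):

mkdir -p /etc/systemd/system/linstor-gateway.service.d
cat > /etc/systemd/system/linstor-gateway.service.d/controller.conf <<'EOF'
[Service]
Environment=LS_CONTROLLERS=http://mylinstorcontroller:3370
EOF
systemctl daemon-reload
systemctl restart linstor-gateway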

To populate the NFS list, a command along the lines of

linstor-gateway nfs create dataa 10.162.248.48/32 24T --allowed-ips=10.162.248.0/23 --filesystem xfs --resource-group=data

is probably needed, but that would collide with the promoter configuration already created via drbd-reactorctl.
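
Before trying that, I would like to see which promoter snippets drbd-reactor already knows about, to understand what such a collision would look like; I assume something along these lines lists them (the snippet directory may differ for self-compiled packages):

drbd-reactorctl ls
ls /etc/drbd-reactor.d/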

My questions are:

  • How, and with which tool (drbd-reactorctl or linstor-gateway), should the NFS export be configured?

  • Is linstor-gateway usable for my use case at all? Its help text says that only one NFS resource can exist in a cluster, but I have more than one resource.

  • If yes, where should linstor-gateway run? Only on the controller?

  • Is it possible to allow more than one subnet to access the NFS export? In that case I would presumably need multiple client specifications in the promoter configuration; a rough sketch of what I mean follows after this list.

  • How can I configure the controller URL for linstor-gateway permanently, instead of passing -c every time?

  • Is there a way to configure stonith fencing with drbd-reactor?

  • Does the linstor controller device have to be a combined linstor-controller and linstor-satellite?

  • Is there an easy way to switch primary/secondary roles without a restart?
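
Regarding the multi-subnet question above, I imagine the promoter configuration would need one exportfs entry per subnet, roughly like this (untested; the second subnet is only an example):

    """ocf:heartbeat:exportfs exportfs_a clientspec=10.162.248.0/255.255.254.0 directory=/srv/reactor-exports/dataa fsid=0 options='rw,no_all_squash,root_squash,anonuid=0,anongid=0'""",
    """ocf:heartbeat:exportfs exportfs_b clientspec=10.163.0.0/255.255.0.0 directory=/srv/reactor-exports/dataa fsid=0 options='rw,no_all_squash,root_squash,anonuid=0,anongid=0'""",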

Many thanks in advance for your time.

I am giving up on linstor/drbd. Free support is obviously not available, and the prices for a subscription are exorbitant, far too high for my use case (more than 20k€ for ~90 TB).