Quorum configuration problems

I have 2 physical storage nodes running AlmaLinux with ZFS, and a 3rd one that is a virtual machine, also running AlmaLinux.

A cluster is configured with the 3 nodes: the 2 physical ones are satellites and the 3rd (the VM) is combined and diskless.

What is the proper way to configure the 3rd node to provide quorum for the satellites,
and the resource on it to be a TieBreaker?

[root@pod3 thsadmin]# linstor n l
╭─────────────────────────────────────────────────────────╮
┊ Node ┊ NodeType  ┊ Addresses                   ┊ State  ┊
╞═════════════════════════════════════════════════════════╡
┊ pod1 ┊ SATELLITE ┊ 172.31.253.111:3366 (PLAIN) ┊ Online ┊
┊ pod2 ┊ SATELLITE ┊ 172.31.253.112:3366 (PLAIN) ┊ Online ┊
┊ pod3 ┊ COMBINED  ┊ 172.31.253.113:3366 (PLAIN) ┊ Online ┊
╰─────────────────────────────────────────────────────────╯

[root@pod3 thsadmin]# linstor storage-pool list
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool          ┊ Node ┊ Driver   ┊ PoolName ┊ FreeCapacity ┊ TotalCapacity ┊ CanSnapshots ┊ State ┊ SharedName                ┊
╞═════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ DfltDisklessStorPool ┊ pod1 ┊ DISKLESS ┊          ┊              ┊               ┊ False        ┊ Ok    ┊ pod1;DfltDisklessStorPool ┊
┊ DfltDisklessStorPool ┊ pod2 ┊ DISKLESS ┊          ┊              ┊               ┊ False        ┊ Ok    ┊ pod2;DfltDisklessStorPool ┊
┊ DfltDisklessStorPool ┊ pod3 ┊ DISKLESS ┊          ┊              ┊               ┊ False        ┊ Ok    ┊ pod3;DfltDisklessStorPool ┊
┊ zfs_stg_pool         ┊ pod1 ┊ ZFS      ┊ zpool1   ┊     3.48 TiB ┊     16.36 TiB ┊ True         ┊ Ok    ┊ pod1;zfs_stg_pool         ┊
┊ zfs_stg_pool         ┊ pod2 ┊ ZFS      ┊ zpool1   ┊     3.48 TiB ┊     16.36 TiB ┊ True         ┊ Ok    ┊ pod2;zfs_stg_pool         ┊
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

[root@pod3 thsadmin]# linstor r l
╭────────────────────────────────────────────────────────────────────────────────────────╮
┊ ResourceName ┊ Node ┊ Layers       ┊ Usage  ┊ Conns ┊      State ┊ CreatedOn           ┊
╞════════════════════════════════════════════════════════════════════════════════════════╡
┊ tgt0         ┊ pod1 ┊ DRBD,STORAGE ┊ Unused ┊ Ok    ┊   UpToDate ┊ 2025-09-10 15:12:22 ┊
┊ tgt0         ┊ pod2 ┊ DRBD,STORAGE ┊ InUse  ┊ Ok    ┊   UpToDate ┊ 2025-09-10 15:12:23 ┊
┊ tgt0         ┊ pod3 ┊ DRBD,STORAGE ┊ Unused ┊ Ok    ┊ TieBreaker ┊ 2025-09-10 15:13:16 ┊
┊ tgt1         ┊ pod1 ┊ DRBD,STORAGE ┊ Unused ┊ Ok    ┊   UpToDate ┊ 2025-09-10 15:25:53 ┊
┊ tgt1         ┊ pod2 ┊ DRBD,STORAGE ┊ InUse  ┊ Ok    ┊   UpToDate ┊ 2025-09-10 15:25:55 ┊
┊ tgt1         ┊ pod3 ┊ DRBD,STORAGE ┊ Unused ┊ Ok    ┊ TieBreaker ┊ 2025-09-10 15:27:20 ┊
┊ tgt2         ┊ pod1 ┊ DRBD,STORAGE ┊ Unused ┊ Ok    ┊   UpToDate ┊ 2025-09-10 15:26:31 ┊
┊ tgt2         ┊ pod2 ┊ DRBD,STORAGE ┊ Unused ┊ Ok    ┊   UpToDate ┊ 2025-09-10 15:26:33 ┊
┊ tgt2         ┊ pod3 ┊ DRBD,STORAGE ┊ InUse  ┊ Ok    ┊   Diskless ┊ 2025-09-10 15:27:25 ┊

We don't want tgt2 to go to pod3.


[root@pod3 thsadmin]# linstor rd list-properties tgt2
╭────────────────────────────────────────────────────────────────────────────╮
┊ Key                                                      ┊ Value           ┊
╞════════════════════════════════════════════════════════════════════════════╡
┊ DrbdOptions/Resource/auto-promote                        ┊ no              ┊
┊ DrbdOptions/Resource/on-no-quorum                        ┊ suspend-io      ┊
┊ DrbdOptions/Resource/on-suspended-primary-outdated       ┊ force-secondary ┊
┊ DrbdOptions/Resource/quorum                              ┊ majority        ┊
┊ DrbdOptions/auto-add-quorum-tiebreaker                   ┊ true            ┊
┊ DrbdOptions/auto-verify-alg                              ┊ crct10dif       ┊
┊ DrbdPrimarySetOn                                         ┊ POD1            ┊
┊ files/etc/drbd-reactor.d/linstor-gateway-iscsi-tgt2.toml ┊ True            ┊

All resource definitions are the same.

[root@pod3 thsadmin]# linstor node list-properties pod1
╭──────────────────────────────────────────────╮
┊ Key                                ┊ Value   ┊
╞══════════════════════════════════════════════╡
┊ AutoplaceTarget                    ┊ true    ┊
┊ CurStltConnName                    ┊ default ┊
┊ DrbdOptions/AutoEvictAllowEviction ┊ false   ┊
┊ NodeUname                          ┊ pod1    ┊
╰──────────────────────────────────────────────╯
[root@pod3 thsadmin]# linstor node list-properties pod2
╭──────────────────────────────────────────────╮
┊ Key                                ┊ Value   ┊
╞══════════════════════════════════════════════╡
┊ AutoplaceTarget                    ┊ true    ┊
┊ CurStltConnName                    ┊ default ┊
┊ DrbdOptions/AutoEvictAllowEviction ┊ false   ┊
┊ NodeUname                          ┊ pod2    ┊
╰──────────────────────────────────────────────╯
[root@pod3 thsadmin]# linstor node list-properties pod3
╭──────────────────────────────────────────────╮
┊ Key                                ┊ Value   ┊
╞══════════════════════════════════════════════╡
┊ AutoplaceTarget                    ┊ false   ┊
┊ CurStltConnName                    ┊ default ┊
┊ DrbdOptions/AutoEvictAllowEviction ┊ false   ┊
┊ NodeUname                          ┊ pod3    ┊

If you disable DRBD Reactor on the diskless node, DRBD Reactor will not attempt to start services there.

systemctl disable drbd-reactor.service --now
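
To double-check afterwards, a couple of commands along these lines should be enough (the --resources filter on resource list is an assumption on my part; a plain linstor r l works just as well):

systemctl is-enabled drbd-reactor.service
linstor resource list --resources tgt2

Once the device is no longer promoted on pod3, tgt2 there should show up as Unused again, and pod1 or pod2 can take over the Primary role.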

Alternatively, you could configure a node preference on the promoter resources so that pod1 and pod2 are preferred over pod3, but node preferences probably shouldn’t be your first choice here.
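
For reference, such a preference would go into the promoter section of the reactor configuration. A minimal sketch, assuming a drbd-reactor version that supports the preferred-nodes option, and keeping in mind that linstor-gateway-iscsi-tgt2.toml is generated by LINSTOR Gateway, so manual edits to it may be overwritten:

[[promoter]]
[promoter.resources.tgt2]
# ... start/runner entries as generated by LINSTOR Gateway ...
preferred-nodes = ["pod1", "pod2"]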

Thank you for the advice, I will try it tomorrow.

Hello,

Thank you for the advice, it actually worked.

I just have a few follow-up questions.
With drbd-reactor.service disabled on pod3, will that affect the HA of the cluster or the quorum?
What will happen if, for example, pod1 and pod3 (the TieBreaker) go down? Will pod2 automatically become Primary? Or is there no HA once 2 nodes are down?

DRBD Reactor relies on the quorum of each DRBD device it manages. As long as the DRBD devices that DRBD Reactor is managing have quorum on a node, DRBD Reactor will be able to have that node take over the Primary role.
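
You can check the quorum state of a device directly on a node; the exact output fields here are from memory of DRBD 9, and tgt0 is just an example resource:

drbdsetup status tgt0 --verbose --statistics

The volume line should include quorum:yes for as long as that node still has quorum for the device.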

In a case where two nodes go down unexpectedly, there would not be quorum, and therefore the remaining node would not be able to take over. This makes sense, since from the cluster's perspective, the single remaining node could be the one that became disconnected from the rest of the cluster.

In a case where two nodes are gracefully shut down, the DRBD device on the remaining node would maintain quorum because of DRBD’s “last man standing” behavior.
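
If you want to watch this while taking nodes down one at a time, the DRBD event stream is handy (assuming DRBD 9's drbdsetup events2, with tgt0 again just as an example):

drbdsetup events2 --now tgt0

Shutting the other two nodes down gracefully should leave quorum:yes on the survivor thanks to last man standing, whereas losing two peers unexpectedly would flip it to quorum:no and DRBD Reactor would not promote there.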

Thank you very much for the detailed explanation!