DRBD/Pacemaker integration question

Hello

I have a two-node active/standby cluster based on Rocky Linux 9.6 and kmod-drbd9x-9.2.14-1.el9_6.elrepo.
There are two services (ha-nfs, ha-iscsi) which must always run together on the same cluster node.

If I reboot the active cluster node memverge2, the ha-nfs resource successfully switches to the standby node memverge, but the ha-iscsi resource fails.

Looking for the difference in the logs.

Resource ha-nfs:

Jun 7 09:50:13 memverge pacemaker-controld[2780]: notice: Requesting local execution of notify operation for ha-nfs on memverge
Jun 7 09:50:13 memverge pacemaker-controld[2780]: notice: Result of notify operation for ha-nfs on memverge: ok
Jun 7 09:50:13 memverge kernel: drbd ha-nfs: Preparing remote state change 1671163414: 28->all role( Secondary )
Jun 7 09:50:13 memverge kernel: drbd ha-nfs memverge2: Committing remote state change 1671163414 (primary_nodes=0)
Jun 7 09:50:13 memverge kernel: drbd ha-nfs memverge2: peer( Primary → Secondary ) [remote]
Jun 7 09:50:13 memverge kernel: drbd ha-nfs/29 drbd1: Enabling local AL-updates
Jun 7 09:50:13 memverge kernel: drbd ha-nfs/30 drbd2: Enabling local AL-updates
Jun 7 09:50:13 memverge pacemaker-controld[2780]: notice: Requesting local execution of notify operation for ha-nfs on memverge
Jun 7 09:50:13 memverge pacemaker-controld[2780]: notice: Result of notify operation for ha-nfs on memverge: ok
Jun 7 09:50:13 memverge pacemaker-controld[2780]: notice: Requesting local execution of notify operation for ha-nfs on memverge
Jun 7 09:50:13 memverge pacemaker-controld[2780]: notice: Result of notify operation for ha-nfs on memverge: ok
Jun 7 09:50:13 memverge kernel: drbd ha-nfs: Preparing remote state change 374911319: 28->27 conn( Disconnecting )
Jun 7 09:50:13 memverge kernel: drbd ha-nfs memverge2: Committing remote state change 374911319 (primary_nodes=0)
Jun 7 09:50:13 memverge kernel: drbd ha-nfs memverge2: conn( Connected → TearDown ) peer( Secondary → Unknown ) [remote]
Jun 7 09:50:13 memverge kernel: drbd ha-nfs/29 drbd1 memverge2: pdsk( UpToDate → DUnknown ) repl( Established → Off ) [remote]
Jun 7 09:50:13 memverge kernel: drbd ha-nfs/30 drbd2 memverge2: pdsk( UpToDate → DUnknown ) repl( Established → Off ) [remote]
Jun 7 09:50:13 memverge kernel: drbd ha-nfs memverge2: Terminating sender thread
Jun 7 09:50:13 memverge kernel: drbd ha-nfs memverge2: Starting sender thread (peer-node-id 28)
Jun 7 09:50:13 memverge kernel: drbd ha-nfs memverge2: Connection closed
Jun 7 09:50:13 memverge kernel: drbd ha-nfs memverge2: helper command: /sbin/drbdadm disconnected
Jun 7 09:50:13 memverge kernel: drbd ha-nfs memverge2: helper command: /sbin/drbdadm disconnected exit code 0
Jun 7 09:50:13 memverge kernel: drbd ha-nfs memverge2: conn( TearDown → Unconnected ) [disconnected]
Jun 7 09:50:13 memverge kernel: drbd ha-nfs memverge2: Restarting receiver thread
Jun 7 09:50:13 memverge kernel: drbd ha-nfs memverge2: conn( Unconnected → Connecting ) [connecting]
Jun 7 09:50:13 memverge pacemaker-attrd[2777]: notice: Setting master-ha-nfs[memverge2] in instance_attributes: 10000 → (unset)
Jun 7 09:50:13 memverge pacemaker-controld[2780]: notice: Requesting local execution of notify operation for ha-nfs on memverge
Jun 7 09:50:13 memverge pacemaker-attrd[2777]: notice: Setting master-ha-nfs[memverge] in instance_attributes: 10000 → 1000
Jun 7 09:50:13 memverge pacemaker-controld[2780]: notice: Result of notify operation for ha-nfs on memverge: ok
Jun 7 09:50:13 memverge pacemaker-controld[2780]: notice: Requesting local execution of notify operation for ha-nfs on memverge
Jun 7 09:50:14 memverge pacemaker-controld[2780]: notice: Result of notify operation for ha-nfs on memverge: ok
Jun 7 09:50:14 memverge pacemaker-controld[2780]: notice: Requesting local execution of promote operation for ha-nfs on memverge
Jun 7 09:50:14 memverge kernel: drbd ha-nfs memverge2: helper command: /sbin/drbdadm fence-peer
Jun 7 09:50:14 memverge crm-fence-peer.9.sh[4985]: DRBD_BACKING_DEV_29=/dev/block_nfs_vg/ha_nfs_internal_lv DRBD_BACKING_DEV_30=/dev/block_nfs_vg/ha_nfs_exports_lv DRBD_CONF=/etc/drbd.conf DRBD_CSTATE=Connecting DRBD_LL_DISK=/dev/block_nfs_vg/ha_nfs_internal_lv\ /dev/block_nfs_vg/ha_nfs_exports_lv DRBD_MINOR=1\ 2 DRBD_MINOR_29=1 DRBD_MINOR_30=2 DRBD_MY_ADDRESS=192.168.0.6 DRBD_MY_AF=ipv4 DRBD_MY_NODE_ID=27 DRBD_NODE_ID_27=memverge DRBD_NODE_ID_28=memverge2 DRBD_PEER_ADDRESS=192.168.0.8 DRBD_PEER_AF=ipv4 DRBD_PEER_NODE_ID=28 DRBD_RESOURCE=ha-nfs DRBD_VOLUME=29\ 30 UP_TO_DATE_NODES=0x08000000 /usr/lib/drbd/crm-fence-peer.9.sh
Jun 7 09:50:14 memverge crm-fence-peer.9.sh[4985]: INFO peers are reachable, my disk is UpToDate UpToDate: placed constraint 'drbd-fence-by-handler-ha-nfs-ha-nfs-clone'
Jun 7 09:50:14 memverge kernel: drbd ha-nfs memverge2: helper command: /sbin/drbdadm fence-peer exit code 4 (0x400)
Jun 7 09:50:14 memverge kernel: drbd ha-nfs memverge2: fence-peer helper returned 4 (peer was fenced)
Jun 7 09:50:14 memverge kernel: drbd ha-nfs/29 drbd1 memverge2: pdsk( DUnknown → Outdated ) [primary]
Jun 7 09:50:14 memverge kernel: drbd ha-nfs/30 drbd2 memverge2: pdsk( DUnknown → Outdated ) [primary]
Jun 7 09:50:14 memverge kernel: drbd ha-nfs: Preparing cluster-wide state change 700209656: 27->all role( Primary )
Jun 7 09:50:14 memverge kernel: drbd ha-nfs: Committing cluster-wide state change 700209656 (0ms)
Jun 7 09:50:14 memverge kernel: drbd ha-nfs: role( Secondary → Primary ) [primary]

Resource ha-iscsi; it looks like Pacemaker waited until node memverge2 had booted and then promoted the resource there:

Jun 7 09:50:13 memverge pacemaker-controld[2780]: notice: Requesting local execution of notify operation for ha-iscsi on memverge
Jun 7 09:50:13 memverge pacemaker-controld[2780]: notice: Result of notify operation for ha-iscsi on memverge: ok
Jun 7 09:50:13 memverge kernel: drbd ha-iscsi: Preparing remote state change 155406647: 28->all role( Secondary )
Jun 7 09:50:13 memverge kernel: drbd ha-iscsi memverge2: Committing remote state change 155406647 (primary_nodes=0)
Jun 7 09:50:13 memverge kernel: drbd ha-iscsi memverge2: peer( Primary → Secondary ) [remote]
Jun 7 09:50:13 memverge kernel: drbd ha-iscsi/31 drbd3: Enabling local AL-updates
Jun 7 09:50:13 memverge pacemaker-controld[2780]: notice: Requesting local execution of notify operation for ha-iscsi on memverge
Jun 7 09:50:13 memverge pacemaker-controld[2780]: notice: Result of notify operation for ha-iscsi on memverge: ok
Jun 7 09:50:13 memverge pacemaker-controld[2780]: notice: Requesting local execution of notify operation for ha-iscsi on memverge
Jun 7 09:50:13 memverge pacemaker-controld[2780]: notice: Result of notify operation for ha-iscsi on memverge: ok
Jun 7 09:50:13 memverge kernel: drbd ha-iscsi: Preparing remote state change 2424298786: 28->27 conn( Disconnecting )
Jun 7 09:50:13 memverge kernel: drbd ha-iscsi memverge2: Committing remote state change 2424298786 (primary_nodes=0)
Jun 7 09:50:13 memverge kernel: drbd ha-iscsi memverge2: conn( Connected → TearDown ) peer( Secondary → Unknown ) [remote]
Jun 7 09:50:13 memverge kernel: drbd ha-iscsi/31 drbd3 memverge2: pdsk( UpToDate → DUnknown ) repl( Established → Off ) [remote]
Jun 7 09:50:13 memverge kernel: drbd ha-iscsi memverge2: Terminating sender thread
Jun 7 09:50:13 memverge kernel: drbd ha-iscsi memverge2: Starting sender thread (peer-node-id 28)
Jun 7 09:50:13 memverge kernel: drbd ha-iscsi memverge2: Connection closed
Jun 7 09:50:13 memverge kernel: drbd ha-iscsi memverge2: helper command: /sbin/drbdadm disconnected
Jun 7 09:50:13 memverge kernel: drbd ha-iscsi memverge2: helper command: /sbin/drbdadm disconnected exit code 0
Jun 7 09:50:13 memverge kernel: drbd ha-iscsi memverge2: conn( TearDown → Unconnected ) [disconnected]
Jun 7 09:50:13 memverge kernel: drbd ha-iscsi memverge2: Restarting receiver thread
Jun 7 09:50:13 memverge kernel: drbd ha-iscsi memverge2: conn( Unconnected → Connecting ) [connecting]
Jun 7 09:50:13 memverge pacemaker-attrd[2777]: notice: Setting master-ha-iscsi[memverge2] in instance_attributes: 10000 → (unset)
Jun 7 09:50:13 memverge pacemaker-controld[2780]: notice: Requesting local execution of notify operation for ha-iscsi on memverge
Jun 7 09:50:13 memverge pacemaker-attrd[2777]: notice: Setting master-ha-iscsi[memverge] in instance_attributes: 10000 → 1000
Jun 7 09:50:13 memverge pacemaker-controld[2780]: notice: Result of notify operation for ha-iscsi on memverge: ok
Jun 7 09:53:25 memverge pacemaker-schedulerd[2779]: notice: Actions: Start ha-iscsi:1 ( memverge2 )
Jun 7 09:53:25 memverge pacemaker-controld[2780]: notice: Initiating monitor operation ha-iscsi:1_monitor_0 on memverge2
Jun 7 09:53:25 memverge pacemaker-controld[2780]: notice: Initiating notify operation ha-iscsi_pre_notify_start_0 locally on memverge
Jun 7 09:53:25 memverge pacemaker-controld[2780]: notice: Requesting local execution of notify operation for ha-iscsi on memverge
Jun 7 09:53:25 memverge pacemaker-controld[2780]: notice: Result of notify operation for ha-iscsi on memverge: ok
Jun 7 09:53:25 memverge pacemaker-controld[2780]: notice: Initiating start operation ha-iscsi:1_start_0 on memverge2
Jun 7 09:53:27 memverge kernel: drbd ha-iscsi memverge2: Handshake to peer 28 successful: Agreed network protocol version 122
Jun 7 09:53:27 memverge kernel: drbd ha-iscsi memverge2: Feature flags enabled on protocol level: 0x7f TRIM THIN_RESYNC WRITE_SAME WRITE_ZEROES RESYNC_DAGTAG
Jun 7 09:53:27 memverge kernel: drbd ha-iscsi: Preparing cluster-wide state change 1709502368: 27->28 role( Secondary ) conn( Connected )
Jun 7 09:53:27 memverge kernel: drbd ha-iscsi/31 drbd3 memverge2: drbd_sync_handshake:
Jun 7 09:53:27 memverge kernel: drbd ha-iscsi/31 drbd3 memverge2: self A0B1026CDF591CD6:0000000000000000:29A475E429B9542C:5C9280E42D30ABB6 bits:0 flags:120
Jun 7 09:53:27 memverge kernel: drbd ha-iscsi/31 drbd3 memverge2: peer A0B1026CDF591CD6:0000000000000000:66B01940CA59D348:CB1BE80494B0304E bits:0 flags:1020
Jun 7 09:53:27 memverge kernel: drbd ha-iscsi/31 drbd3 memverge2: uuid_compare()=no-sync by rule=lost-quorum
Jun 7 09:53:27 memverge kernel: drbd ha-iscsi: State change 1709502368: primary_nodes=0, weak_nodes=0
Jun 7 09:53:27 memverge kernel: drbd ha-iscsi: Committing cluster-wide state change 1709502368 (14ms)
Jun 7 09:53:27 memverge kernel: drbd ha-iscsi memverge2: conn( Connecting → Connected ) peer( Unknown → Secondary ) [connected]
Jun 7 09:53:27 memverge kernel: drbd ha-iscsi/31 drbd3 memverge2: pdsk( DUnknown → Consistent ) repl( Off → Established ) [connected]
Jun 7 09:53:27 memverge kernel: drbd ha-iscsi/31 drbd3 memverge2: cleared bm UUID and bitmap A0B1026CDF591CD6:0000000000000000:29A475E429B9542C:5C9280E42D30ABB6
Jun 7 09:53:27 memverge kernel: drbd ha-iscsi/31 drbd3 memverge2: pdsk( Consistent → UpToDate ) [peer-state]
Jun 7 09:53:27 memverge kernel: drbd ha-iscsi memverge2: helper command: /sbin/drbdadm unfence-peer
Jun 7 09:53:27 memverge kernel: drbd ha-iscsi memverge2: helper command: /sbin/drbdadm unfence-peer exit code 0
Jun 7 09:53:27 memverge pacemaker-attrd[2777]: notice: Setting master-ha-iscsi[memverge2] in instance_attributes: (unset) → 10000
Jun 7 09:53:27 memverge pacemaker-controld[2780]: notice: Transition 2 aborted by status-28-master-ha-iscsi doing create master-ha-iscsi=10000: Transient attribute change
Jun 7 09:53:27 memverge pacemaker-controld[2780]: notice: Initiating notify operation ha-iscsi_post_notify_start_0 locally on memverge
Jun 7 09:53:27 memverge pacemaker-controld[2780]: notice: Requesting local execution of notify operation for ha-iscsi on memverge
Jun 7 09:53:27 memverge pacemaker-controld[2780]: notice: Initiating notify operation ha-iscsi:1_post_notify_start_0 on memverge2
Jun 7 09:53:27 memverge pacemaker-controld[2780]: notice: Result of notify operation for ha-iscsi on memverge: ok
Jun 7 09:53:49 memverge pacemaker-schedulerd[2779]: notice: Actions: Promote ha-iscsi:0 ( Unpromoted → Promoted memverge2 )

Finally, drbdadm status shows:

[root@memverge anton]# drbdadm status
ha-iscsi role:Secondary
  volume:31 disk:UpToDate
  memverge2 role:Primary
    volume:31 peer-disk:UpToDate

ha-nfs role:Primary
  volume:29 disk:UpToDate
  volume:30 disk:UpToDate
  memverge2 role:Secondary
    volume:29 peer-disk:UpToDate
    volume:30 peer-disk:UpToDate

As a result, resource ha-iscsi failed because it can't be started on the same cluster node as resource ha-nfs.

Any ideas why there is a difference in behavior between the ha-nfs and ha-iscsi resources?

Anton

I had to copy and paste the log lines out of this message into a text editor to read them. Please use code blocks (“preformatted text”, the “</>” option in the toolbar) for logs and shell output in the future.

I suspect the reason the ha-iscsi resource didn’t fail over is going to be logged, but just isn’t in the snippets included. Was the reboot of memverge2 graceful? If so, did all the services stop cleanly? If so (or if it was a hard reboot), there should be somewhere in the logs where memverge attempts to start ha-iscsi but possibly fails.

I suspect the reason here is going to be more Pacemaker-related than DRBD-related. As such, I would pay closer attention to the Pacemaker logs. It might also be related to DRBD’s resource-level fencing (which interfaces with Pacemaker), so pay attention to the crm-fence-peer.9.sh log lines as well.
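
For example, something along these lines should surface the relevant entries on each node (assuming, as in your snippets, everything is logged to /var/log/messages):

grep -E 'pacemaker-schedulerd|pacemaker-controld|crm-fence-peer' /var/log/messages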


I had to copy and paste the log lines out of this message into a text editor to read them.
Please use code blocks (“preformatted text”, the “</>” option in the toolbar) for logs and shell output in the future.

OK, sorry about that.

Was the reboot of memverge2 graceful? If so, did all the services stop cleanly?
If so (or if it was a hard reboot), there should be somewhere in the logs where memverge attempts to start ha-iscsi but possibly fails.

I just typed the command “reboot” and pressed Enter.

For the ha-nfs resource there is this record in the logs:

memverge pacemaker-controld[2780]: notice: Requesting local execution of promote operation for ha-nfs on memverge

However, there is no such record for the ha-iscsi resource.

I decided to update the cluster to Rocky Linux 10.0 and Pacemaker 3.0.

If the issue persists, I’ll let you know.

Which is better: keeping two DRBD resource files (one for iSCSI, one for NFS), or keeping all DRBD resources in a single file?

Anton

Are you implying that the upgrade has resolved this issue?

It should make no difference, but the standard practice is a separate file for each resource.
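
For what it’s worth, the stock /etc/drbd.conf shipped with drbd-utils typically just includes every .res file under /etc/drbd.d/, so separate per-resource files are picked up automatically:

include "drbd.d/global_common.conf";
include "drbd.d/*.res";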

Are you implying that the upgrade has resolved this issue?

My apologies for the big delay in responding; I had power issues in my lab.

Rocky Linux 10.0 doesn’t support an upgrade from 9.6, so I created the cluster from scratch on 10.0.

No, it didn’t solve my issue.
For better troubleshooting and understanding, I decided to go back to basics and created the simplest possible cluster:

Full List of Resources:
  * ipmi-fence-memverge (stonith:fence_ipmilan): Started memverge2
  * ipmi-fence-memverge2 (stonith:fence_ipmilan): Started memverge
  * Clone Set: ha-nfs-clone [ha-nfs] (promotable):
    * ha-nfs (ocf:linbit:drbd): Promoted memverge
    * ha-nfs (ocf:linbit:drbd): Unpromoted memverge2

I rebooted memverge (with the reboot command), and ha-nfs was promoted to memverge2:

Full List of Resources:
  * ipmi-fence-memverge (stonith:fence_ipmilan): Started memverge2
  * ipmi-fence-memverge2 (stonith:fence_ipmilan): Started memverge
  * Clone Set: ha-nfs-clone [ha-nfs] (promotable):
    * ha-nfs (ocf:linbit:drbd): Unpromoted memverge
    * ha-nfs (ocf:linbit:drbd): Promoted memverge2

but when I rebooted memverge2, the cluster did not promote ha-nfs back to memverge:

Node List:
  * Node memverge (27): online, feature set 3.20.0
  * Node memverge2 (28): OFFLINE

Full List of Resources:
  * ipmi-fence-memverge (stonith:fence_ipmilan): Stopped
  * ipmi-fence-memverge2 (stonith:fence_ipmilan): Started memverge
  * Clone Set: ha-nfs-clone [ha-nfs] (promotable):
    * ha-nfs (ocf:linbit:drbd): Unpromoted memverge
    * ha-nfs (ocf:linbit:drbd): Stopped

Anton

What do the logs tell us this time around?

Additionally, please make sure the DRBD resources are connected and UpToDate on both nodes before trying to fail back to the original node. You can check with a quick drbdadm status.

OK, I repeated everything with logs. Initial state:

Full List of Resources:
  * ipmi-fence-memverge (stonith:fence_ipmilan):         Started memverge2
  * ipmi-fence-memverge2        (stonith:fence_ipmilan):         Started memverge
  * Clone Set: ha-nfs-clone [ha-nfs] (promotable):
    * ha-nfs    (ocf:linbit:drbd):       Promoted memverge
    * ha-nfs    (ocf:linbit:drbd):       Unpromoted memverge2
[root@memverge ~]# drbdadm status
ha-nfs role:Primary
  volume:29 disk:UpToDate open:no
  volume:30 disk:UpToDate open:no
  memverge2 role:Secondary
    volume:29 peer-disk:UpToDate
    volume:30 peer-disk:UpToDate

Reboot memverge (with the reboot command):

[root@memverge2 ~]# dmesg|grep -i ha-nfs
[230813.487834] drbd ha-nfs: Preparing remote state change 3098631950: 27->all role( Secondary )
[230813.511688] drbd ha-nfs memverge: Committing remote state change 3098631950 (primary_nodes=0)
[230813.512404] drbd ha-nfs memverge: peer( Primary -> Secondary ) [remote]
[230813.593645] drbd ha-nfs: Preparing remote state change 1833838261: 27->28 conn( Disconnecting )
[230813.617385] drbd ha-nfs memverge: Committing remote state change 1833838261 (primary_nodes=0)
[230813.617613] drbd ha-nfs memverge: conn( Connected -> TearDown ) peer( Secondary -> Unknown ) [remote]
[230813.617835] drbd ha-nfs/29 drbd1 memverge: pdsk( UpToDate -> DUnknown ) repl( Established -> Off ) [remote]
[230813.618058] drbd ha-nfs/30 drbd2 memverge: pdsk( UpToDate -> DUnknown ) repl( Established -> Off ) [remote]
[230813.641795] drbd ha-nfs memverge: Terminating sender thread
[230813.642043] drbd ha-nfs memverge: Starting sender thread (peer-node-id 27)
[230813.680803] drbd ha-nfs memverge: Connection closed
[230813.681034] drbd ha-nfs memverge: helper command: /sbin/drbdadm disconnected
[230813.704037] drbd ha-nfs memverge: helper command: /sbin/drbdadm disconnected exit code 0
[230813.704263] drbd ha-nfs memverge: conn( TearDown -> Unconnected ) [disconnected]
[230813.704483] drbd ha-nfs memverge: Restarting receiver thread
[230813.704700] drbd ha-nfs memverge: conn( Unconnected -> Connecting ) [connecting]
[230813.870784] drbd ha-nfs memverge: helper command: /sbin/drbdadm fence-peer
[230814.002038] drbd ha-nfs memverge: helper command: /sbin/drbdadm fence-peer exit code 4 (0x400)
[230814.002301] drbd ha-nfs memverge: fence-peer helper returned 4 (peer was fenced)
[230814.002529] drbd ha-nfs/29 drbd1 memverge: pdsk( DUnknown -> Outdated ) [primary]
[230814.002736] drbd ha-nfs/30 drbd2 memverge: pdsk( DUnknown -> Outdated ) [primary]
[230814.002947] drbd ha-nfs: Preparing cluster-wide state change 2552277009: 28->all role( Primary )
[230814.003143] drbd ha-nfs: Committing cluster-wide state change 2552277009 (0ms)
[230814.003340] drbd ha-nfs: role( Secondary -> Primary ) [primary]
[root@memverge2 ~]# cat /var/log/messages|grep -i ha-nfs
Jun 24 06:36:06 memverge2 pacemaker-controld[385166]: notice: Requesting local execution of notify operation for ha-nfs on memverge2
Jun 24 06:36:06 memverge2 pacemaker-controld[385166]: notice: Result of notify operation for ha-nfs on memverge2: OK
Jun 24 06:36:06 memverge2 kernel: drbd ha-nfs: Preparing remote state change 3098631950: 27->all role( Secondary )
Jun 24 06:36:06 memverge2 pacemaker-controld[385166]: notice: Requesting local execution of notify operation for ha-nfs on memverge2
Jun 24 06:36:06 memverge2 kernel: drbd ha-nfs memverge: Committing remote state change 3098631950 (primary_nodes=0)
Jun 24 06:36:06 memverge2 kernel: drbd ha-nfs memverge: peer( Primary -> Secondary ) [remote]
Jun 24 06:36:06 memverge2 pacemaker-controld[385166]: notice: Result of notify operation for ha-nfs on memverge2: OK
Jun 24 06:36:06 memverge2 pacemaker-controld[385166]: notice: Requesting local execution of notify operation for ha-nfs on memverge2
Jun 24 06:36:06 memverge2 pacemaker-controld[385166]: notice: Result of notify operation for ha-nfs on memverge2: OK
Jun 24 06:36:06 memverge2 kernel: drbd ha-nfs: Preparing remote state change 1833838261: 27->28 conn( Disconnecting )
Jun 24 06:36:06 memverge2 kernel: drbd ha-nfs memverge: Committing remote state change 1833838261 (primary_nodes=0)
Jun 24 06:36:06 memverge2 kernel: drbd ha-nfs memverge: conn( Connected -> TearDown ) peer( Secondary -> Unknown ) [remote]
Jun 24 06:36:06 memverge2 kernel: drbd ha-nfs/29 drbd1 memverge: pdsk( UpToDate -> DUnknown ) repl( Established -> Off ) [remote]
Jun 24 06:36:06 memverge2 kernel: drbd ha-nfs/30 drbd2 memverge: pdsk( UpToDate -> DUnknown ) repl( Established -> Off ) [remote]
Jun 24 06:36:06 memverge2 kernel: drbd ha-nfs memverge: Terminating sender thread
Jun 24 06:36:06 memverge2 kernel: drbd ha-nfs memverge: Starting sender thread (peer-node-id 27)
Jun 24 06:36:06 memverge2 kernel: drbd ha-nfs memverge: Connection closed
Jun 24 06:36:06 memverge2 kernel: drbd ha-nfs memverge: helper command: /sbin/drbdadm disconnected
Jun 24 06:36:06 memverge2 kernel: drbd ha-nfs memverge: helper command: /sbin/drbdadm disconnected exit code 0
Jun 24 06:36:06 memverge2 kernel: drbd ha-nfs memverge: conn( TearDown -> Unconnected ) [disconnected]
Jun 24 06:36:06 memverge2 kernel: drbd ha-nfs memverge: Restarting receiver thread
Jun 24 06:36:06 memverge2 kernel: drbd ha-nfs memverge: conn( Unconnected -> Connecting ) [connecting]
Jun 24 06:36:06 memverge2 pacemaker-attrd[385164]: notice: Setting master-ha-nfs[memverge] in instance_attributes: 10000 -> (unset)
Jun 24 06:36:06 memverge2 pacemaker-controld[385166]: notice: Requesting local execution of notify operation for ha-nfs on memverge2
Jun 24 06:36:06 memverge2 pacemaker-attrd[385164]: notice: Setting master-ha-nfs[memverge2] in instance_attributes: 10000 -> 1000
Jun 24 06:36:06 memverge2 pacemaker-controld[385166]: notice: Result of notify operation for ha-nfs on memverge2: OK
Jun 24 06:36:06 memverge2 pacemaker-controld[385166]: notice: Requesting local execution of notify operation for ha-nfs on memverge2
Jun 24 06:36:06 memverge2 pacemaker-controld[385166]: notice: Result of notify operation for ha-nfs on memverge2: OK
Jun 24 06:36:06 memverge2 pacemaker-controld[385166]: notice: Requesting local execution of promote operation for ha-nfs on memverge2
Jun 24 06:36:06 memverge2 kernel: drbd ha-nfs memverge: helper command: /sbin/drbdadm fence-peer
Jun 24 06:36:06 memverge2 crm-fence-peer.9.sh[385904]: DRBD_BACKING_DEV_29=/dev/block_nfs_vg/ha_nfs_internal_lv DRBD_BACKING_DEV_30=/dev/block_nfs_vg/ha_nfs_exports_lv DRBD_CONF=/etc/drbd.conf DRBD_CSTATE=Connecting DRBD_LL_DISK=/dev/block_nfs_vg/ha_nfs_internal_lv\ /dev/block_nfs_vg/ha_nfs_exports_lv DRBD_MINOR=1\ 2 DRBD_MINOR_29=1 DRBD_MINOR_30=2 DRBD_MY_ADDRESS=192.168.0.8 DRBD_MY_AF=ipv4 DRBD_MY_NODE_ID=28 DRBD_NODE_ID_27=memverge DRBD_NODE_ID_28=memverge2 DRBD_PEER_ADDRESS=192.168.0.6 DRBD_PEER_AF=ipv4 DRBD_PEER_NODE_ID=27 DRBD_RESOURCE=ha-nfs DRBD_VOLUME=29\ 30 UP_TO_DATE_NODES=0x10000000 /usr/lib/drbd/crm-fence-peer.9.sh
Jun 24 06:36:06 memverge2 crm-fence-peer.9.sh[385904]: INFO peers are reachable, my disk is UpToDate UpToDate: placed constraint 'drbd-fence-by-handler-ha-nfs-ha-nfs-clone'
Jun 24 06:36:06 memverge2 kernel: drbd ha-nfs memverge: helper command: /sbin/drbdadm fence-peer exit code 4 (0x400)
Jun 24 06:36:06 memverge2 kernel: drbd ha-nfs memverge: fence-peer helper returned 4 (peer was fenced)
Jun 24 06:36:06 memverge2 kernel: drbd ha-nfs/29 drbd1 memverge: pdsk( DUnknown -> Outdated ) [primary]
Jun 24 06:36:06 memverge2 kernel: drbd ha-nfs/30 drbd2 memverge: pdsk( DUnknown -> Outdated ) [primary]
Jun 24 06:36:06 memverge2 kernel: drbd ha-nfs: Preparing cluster-wide state change 2552277009: 28->all role( Primary )
Jun 24 06:36:06 memverge2 kernel: drbd ha-nfs: Committing cluster-wide state change 2552277009 (0ms)
Jun 24 06:36:06 memverge2 kernel: drbd ha-nfs: role( Secondary -> Primary ) [primary]
Jun 24 06:36:06 memverge2 pacemaker-controld[385166]: notice: Result of promote operation for ha-nfs on memverge2: OK
Jun 24 06:36:06 memverge2 pacemaker-controld[385166]: notice: Requesting local execution of notify operation for ha-nfs on memverge2
Jun 24 06:36:06 memverge2 pacemaker-attrd[385164]: notice: Setting master-ha-nfs[memverge2] in instance_attributes: 1000 -> 10000
Jun 24 06:36:06 memverge2 pacemaker-controld[385166]: notice: Result of notify operation for ha-nfs on memverge2: OK
Jun 24 06:36:06 memverge2 pacemaker-controld[385166]: notice: Initiating monitor operation ha-nfs_monitor_29000 locally on memverge2
Jun 24 06:36:06 memverge2 pacemaker-controld[385166]: notice: Requesting local execution of monitor operation for ha-nfs on memverge2
Jun 24 06:36:06 memverge2 pacemaker-controld[385166]: notice: Result of monitor operation for ha-nfs on memverge2: Promoted
[root@memverge2 ~]# drbdadm status
ha-nfs role:Primary
  volume:29 disk:UpToDate open:no
  volume:30 disk:UpToDate open:no
  memverge connection:Connecting

So far so good.

A few minutes later, after memverge booted:

[root@memverge2 ~]# drbdadm status
ha-nfs role:Primary
  volume:29 disk:UpToDate open:no
  volume:30 disk:UpToDate open:no
  memverge role:Secondary
    volume:29 peer-disk:UpToDate
    volume:30 peer-disk:UpToDate

Reboot memverge2 (with the reboot command):

[root@memverge ~]# dmesg|grep -i ha-nfs
[  251.822037] drbd ha-nfs: Preparing remote state change 1244207805: 28->all role( Secondary )
[  251.845712] drbd ha-nfs memverge2: Committing remote state change 1244207805 (primary_nodes=0)
[  251.846451] drbd ha-nfs memverge2: peer( Primary -> Secondary ) [remote]
[  251.950430] drbd ha-nfs: Preparing remote state change 1354069447: 28->27 conn( Disconnecting )
[  251.974278] drbd ha-nfs memverge2: Committing remote state change 1354069447 (primary_nodes=0)
[  251.974620] drbd ha-nfs memverge2: conn( Connected -> TearDown ) peer( Secondary -> Unknown ) [remote]
[  251.974953] drbd ha-nfs/29 drbd1 memverge2: pdsk( UpToDate -> DUnknown ) repl( Established -> Off ) [remote]
[  251.975290] drbd ha-nfs/30 drbd2 memverge2: pdsk( UpToDate -> DUnknown ) repl( Established -> Off ) [remote]
[  251.987572] drbd ha-nfs memverge2: meta connection shut down by peer.
[  251.999912] drbd ha-nfs memverge2: Terminating sender thread
[  252.000334] drbd ha-nfs memverge2: Starting sender thread (peer-node-id 28)
[  252.037154] drbd ha-nfs memverge2: Connection closed
[  252.037491] drbd ha-nfs memverge2: helper command: /sbin/drbdadm disconnected
[  252.060821] drbd ha-nfs memverge2: helper command: /sbin/drbdadm disconnected exit code 0
[  252.061131] drbd ha-nfs memverge2: conn( TearDown -> Unconnected ) [disconnected]
[  252.061442] drbd ha-nfs memverge2: Restarting receiver thread
[  252.061744] drbd ha-nfs memverge2: conn( Unconnected -> Connecting ) [connecting]
[root@memverge ~]# cat /var/log/messages|grep -i ha-nfs
Jun 24 06:43:04 memverge pacemaker-controld[2940]: notice: Requesting local execution of notify operation for ha-nfs on memverge
Jun 24 06:43:04 memverge pacemaker-controld[2940]: notice: Result of notify operation for ha-nfs on memverge: OK
Jun 24 06:43:04 memverge kernel: drbd ha-nfs: Preparing remote state change 1244207805: 28->all role( Secondary )
Jun 24 06:43:04 memverge kernel: drbd ha-nfs memverge2: Committing remote state change 1244207805 (primary_nodes=0)
Jun 24 06:43:04 memverge kernel: drbd ha-nfs memverge2: peer( Primary -> Secondary ) [remote]
Jun 24 06:43:04 memverge pacemaker-controld[2940]: notice: Requesting local execution of notify operation for ha-nfs on memverge
Jun 24 06:43:04 memverge pacemaker-controld[2940]: notice: Result of notify operation for ha-nfs on memverge: OK
Jun 24 06:43:04 memverge pacemaker-controld[2940]: notice: Requesting local execution of notify operation for ha-nfs on memverge
Jun 24 06:43:04 memverge pacemaker-controld[2940]: notice: Result of notify operation for ha-nfs on memverge: OK
Jun 24 06:43:04 memverge kernel: drbd ha-nfs: Preparing remote state change 1354069447: 28->27 conn( Disconnecting )
Jun 24 06:43:05 memverge kernel: drbd ha-nfs memverge2: Committing remote state change 1354069447 (primary_nodes=0)
Jun 24 06:43:05 memverge kernel: drbd ha-nfs memverge2: conn( Connected -> TearDown ) peer( Secondary -> Unknown ) [remote]
Jun 24 06:43:05 memverge kernel: drbd ha-nfs/29 drbd1 memverge2: pdsk( UpToDate -> DUnknown ) repl( Established -> Off ) [remote]
Jun 24 06:43:05 memverge kernel: drbd ha-nfs/30 drbd2 memverge2: pdsk( UpToDate -> DUnknown ) repl( Established -> Off ) [remote]
Jun 24 06:43:05 memverge kernel: drbd ha-nfs memverge2: meta connection shut down by peer.
Jun 24 06:43:05 memverge kernel: drbd ha-nfs memverge2: Terminating sender thread
Jun 24 06:43:05 memverge kernel: drbd ha-nfs memverge2: Starting sender thread (peer-node-id 28)
Jun 24 06:43:05 memverge kernel: drbd ha-nfs memverge2: Connection closed
Jun 24 06:43:05 memverge kernel: drbd ha-nfs memverge2: helper command: /sbin/drbdadm disconnected
Jun 24 06:43:05 memverge kernel: drbd ha-nfs memverge2: helper command: /sbin/drbdadm disconnected exit code 0
Jun 24 06:43:05 memverge kernel: drbd ha-nfs memverge2: conn( TearDown -> Unconnected ) [disconnected]
Jun 24 06:43:05 memverge kernel: drbd ha-nfs memverge2: Restarting receiver thread
Jun 24 06:43:05 memverge kernel: drbd ha-nfs memverge2: conn( Unconnected -> Connecting ) [connecting]
Jun 24 06:43:05 memverge pacemaker-attrd[2937]: notice: Setting master-ha-nfs[memverge2] in instance_attributes: 10000 -> (unset)
Jun 24 06:43:05 memverge pacemaker-controld[2940]: notice: Requesting local execution of notify operation for ha-nfs on memverge
Jun 24 06:43:05 memverge pacemaker-attrd[2937]: notice: Setting master-ha-nfs[memverge] in instance_attributes: 10000 -> 1000
Jun 24 06:43:05 memverge pacemaker-controld[2940]: notice: Result of notify operation for ha-nfs on memverge: OK

I noticed that during the switch memverge → memverge2 there are the following records in /var/log/messages:

Jun 24 06:36:06 memverge2 pacemaker-controld[385166]: notice: Result of notify operation for ha-nfs on memverge2: OK
Jun 24 06:36:06 memverge2 pacemaker-controld[385166]: notice: Requesting local execution of notify operation for ha-nfs on memverge2
Jun 24 06:36:06 memverge2 pacemaker-controld[385166]: notice: Result of notify operation for ha-nfs on memverge2: OK
Jun 24 06:36:06 memverge2 pacemaker-controld[385166]: notice: Requesting local execution of promote operation for ha-nfs on memverge2

But there are no such records during the failback memverge2 → memverge, only this record:

Jun 24 06:43:05 memverge pacemaker-controld[2940]: notice: Result of notify operation for ha-nfs on memverge: OK

So there is no “Requesting local execution of promote operation for ha-nfs on memverge” record.

Anton

You clearly have resource-level fencing configured here. Do you have both a fence-peer and an unfence-peer handler set in the DRBD configuration? I have a hunch that perhaps you’re missing the unfence-peer handler.

If you check the CIB with either crm configure show or pcs config, do you see any strange location constraints?
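
Depending on the pcs version, something like the following should list all constraints together with their IDs (on older releases it may be pcs constraint --full):

pcs constraint config --full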

You clearly have resource-level fencing configured here. Do you have both a fence-peer and an unfence-peer handler set in the DRBD configuration? I have a hunch that perhaps you’re missing the unfence-peer handler.

In the ha-nfs.res file on both cluster nodes I have:

handlers {
    fence-peer "/usr/lib/drbd/crm-fence-peer.9.sh";
    after-resync-target "/usr/lib/drbd/crm-unfence-peer.9.sh";
}

[root@memverge ~]# ll /usr/lib/drbd/crm-fence-peer.9.sh
-rwxr-xr-x 1 root root 48363 May 22 03:00 /usr/lib/drbd/crm-fence-peer.9.sh
[root@memverge ~]# ll /usr/lib/drbd/crm-unfence-peer.9.sh
lrwxrwxrwx 1 root root 19 May 22 03:00 /usr/lib/drbd/crm-unfence-peer.9.sh -> crm-fence-peer.9.sh

If you check the CIB with either crm configure show or pcs config, do you see any strange location constraints?

When I do:

pcs resource create ha-nfs ocf:linbit:drbd \
    drbd_resource=ha-nfs \
    op monitor timeout=30 interval=31 role=Unpromoted \
    op monitor timeout=30 interval=30 role=Promoted

I got the following errors:

[root@memverge ~]# cat /var/log/messages
Jun 25 14:47:16 memverge pacemaker-controld[2941]: notice: State transition S_IDLE -> S_POLICY_ENGINE
Jun 25 14:47:16 memverge pacemaker-fenced[2935]: notice: On loss of quorum: Ignore
Jun 25 14:47:16 memverge pacemaker-schedulerd[2940]: notice: On loss of quorum: Ignore
Jun 25 14:47:16 memverge pacemaker-schedulerd[2940]: notice: Actions: Start      ha-nfs                  (              memverge )
Jun 25 14:47:16 memverge pacemaker-schedulerd[2940]: notice: Calculated transition 19, saving inputs in /var/lib/pacemaker/pengine/pe-input-414.bz2
Jun 25 14:47:16 memverge pacemaker-controld[2941]: notice: Initiating monitor operation ha-nfs_monitor_0 on memverge2
Jun 25 14:47:16 memverge pacemaker-controld[2941]: notice: Initiating monitor operation ha-nfs_monitor_0 locally on memverge
Jun 25 14:47:16 memverge pacemaker-controld[2941]: notice: Requesting local execution of probe operation for ha-nfs on memverge
Jun 25 14:47:16 memverge pacemaker-controld[2941]: notice: Result of probe operation for ha-nfs on memverge: Not running
Jun 25 14:47:16 memverge pacemaker-controld[2941]: notice: Initiating start operation ha-nfs_start_0 locally on memverge
Jun 25 14:47:16 memverge pacemaker-controld[2941]: notice: Requesting local execution of start operation for ha-nfs on memverge
Jun 25 14:47:16 memverge drbd(ha-nfs)[14871]: ERROR: you really should enable notify when using this RA (or set ignore_missing_notifications=true)
Jun 25 14:47:16 memverge pacemaker-controld[2941]: notice: Result of start operation for ha-nfs on memverge: Not configured (you really should enable notify when using this RA (or set ignore_missing_notifications=true))
Jun 25 14:47:16 memverge pacemaker-controld[2941]: notice: ha-nfs_start_0@memverge output [ ocf-exit-reason:you really should enable notify when using this RA (or set ignore_missing_notifications=true) ]
Jun 25 14:47:16 memverge pacemaker-controld[2941]: notice: Transition 19 aborted by operation ha-nfs_start_0 'modify' on memverge: Event failed
Jun 25 14:47:16 memverge pacemaker-controld[2941]: notice: Transition 19 action 9 (ha-nfs_start_0 on memverge): expected 'OK' but got 'Not configured'
Jun 25 14:47:16 memverge pacemaker-controld[2941]: notice: Transition 19 (Complete=3, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-414.bz2): Complete
Jun 25 14:47:16 memverge pacemaker-attrd[2939]: notice: Setting last-failure-ha-nfs#start_0[memverge] in instance_attributes: (unset) -> 1750852036
Jun 25 14:47:16 memverge pacemaker-attrd[2939]: notice: Setting fail-count-ha-nfs#start_0[memverge] in instance_attributes: (unset) -> INFINITY
Jun 25 14:47:16 memverge pacemaker-schedulerd[2940]: notice: On loss of quorum: Ignore
Jun 25 14:47:16 memverge pacemaker-schedulerd[2940]: warning: Unexpected result (Not configured: you really should enable notify when using this RA (or set ignore_missing_notifications=true)) was recorded for start of ha-nfs on memverge at Jun 25 14:47:16 2025
Jun 25 14:47:16 memverge pacemaker-schedulerd[2940]: error: Preventing ha-nfs from restarting anywhere because of fatal failure (Not configured: you really should enable notify when using this RA (or set ignore_missing_notifications=true))
Jun 25 14:47:16 memverge pacemaker-schedulerd[2940]: warning: Unexpected result (Not configured: you really should enable notify when using this RA (or set ignore_missing_notifications=true)) was recorded for start of ha-nfs on memverge at Jun 25 14:47:16 2025
Jun 25 14:47:16 memverge pacemaker-schedulerd[2940]: error: Preventing ha-nfs from restarting anywhere because of fatal failure (Not configured: you really should enable notify when using this RA (or set ignore_missing_notifications=true))
Jun 25 14:47:16 memverge pacemaker-schedulerd[2940]: notice: Actions: Stop       ha-nfs                  (              memverge )  due to node availability
Jun 25 14:47:16 memverge pacemaker-schedulerd[2940]: error: Calculated transition 20 (with errors), saving inputs in /var/lib/pacemaker/pengine/pe-error-146.bz2
Jun 25 14:47:16 memverge pacemaker-schedulerd[2940]: notice: On loss of quorum: Ignore
Jun 25 14:47:16 memverge pacemaker-schedulerd[2940]: warning: Unexpected result (Not configured: you really should enable notify when using this RA (or set ignore_missing_notifications=true)) was recorded for start of ha-nfs on memverge at Jun 25 14:47:16 2025
Jun 25 14:47:16 memverge pacemaker-schedulerd[2940]: error: Preventing ha-nfs from restarting anywhere because of fatal failure (Not configured: you really should enable notify when using this RA (or set ignore_missing_notifications=true))
Jun 25 14:47:16 memverge pacemaker-schedulerd[2940]: warning: Unexpected result (Not configured: you really should enable notify when using this RA (or set ignore_missing_notifications=true)) was recorded for start of ha-nfs on memverge at Jun 25 14:47:16 2025
Jun 25 14:47:16 memverge pacemaker-schedulerd[2940]: error: Preventing ha-nfs from restarting anywhere because of fatal failure (Not configured: you really should enable notify when using this RA (or set ignore_missing_notifications=true))
Jun 25 14:47:16 memverge pacemaker-schedulerd[2940]: warning: ha-nfs cannot run on memverge due to reaching migration threshold (clean up resource to allow again)
Jun 25 14:47:16 memverge pacemaker-schedulerd[2940]: notice: Actions: Stop       ha-nfs                  (              memverge )  due to node availability
Jun 25 14:47:16 memverge pacemaker-schedulerd[2940]: error: Calculated transition 21 (with errors), saving inputs in /var/lib/pacemaker/pengine/pe-error-147.bz2
Jun 25 14:47:16 memverge pacemaker-controld[2941]: notice: Initiating stop operation ha-nfs_stop_0 locally on memverge
Jun 25 14:47:16 memverge pacemaker-controld[2941]: notice: Requesting local execution of stop operation for ha-nfs on memverge
Jun 25 14:47:16 memverge kernel: drbd: loading out-of-tree module taints kernel.
Jun 25 14:47:16 memverge kernel: drbd: module verification failed: signature and/or required key missing - tainting kernel
Jun 25 14:47:16 memverge kernel: drbd: initialized. Version: 9.2.14 (api:2/proto:118-123)
Jun 25 14:47:16 memverge kernel: drbd: GIT-hash: a1e7c10e591a844b327da120d169df7da7c933b7 build by mockbuild@bb9ac76aec2f426389d49f0aa6258039, 2025-06-07 13:21:34
Jun 25 14:47:16 memverge kernel: drbd: registered as block device major 147
Jun 25 14:47:16 memverge pacemaker-controld[2941]: notice: Result of stop operation for ha-nfs on memverge: OK
Jun 25 14:47:16 memverge pacemaker-controld[2941]: notice: Transition 21 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-error-147.bz2): Complete
Jun 25 14:47:16 memverge pacemaker-controld[2941]: notice: State transition S_TRANSITION_ENGINE -> S_IDLE

After running:

pcs resource promotable ha-nfs \
    meta promoted-max=1 promoted-node-max=1 \
    clone-node-max=1 clone-max=2 notify=true

I got new messages:

Jun 25 14:57:08 memverge pacemaker-fenced[2935]: notice: On loss of quorum: Ignore
Jun 25 14:57:08 memverge pacemaker-controld[2941]: notice: Populating nodes and starting an election after cib_diff_notify event triggered by cibadmin
Jun 25 14:57:08 memverge pacemaker-attrd[2939]: notice: Updating all attributes after cib_diff_notify event triggered by cibadmin
Jun 25 14:57:08 memverge pacemaker-controld[2941]: notice: State transition S_IDLE -> S_ELECTION
Jun 25 14:57:08 memverge pacemaker-controld[2941]: notice: State transition S_ELECTION -> S_INTEGRATION
Jun 25 14:57:08 memverge pacemaker-schedulerd[2940]: notice: On loss of quorum: Ignore
Jun 25 14:57:08 memverge pacemaker-schedulerd[2940]: warning: Unexpected result (Not configured: you really should enable notify when using this RA (or set ignore_missing_notifications=true)) was recorded for start of ha-nfs:0 on memverge at Jun 25 14:47:16 2025
Jun 25 14:57:08 memverge pacemaker-schedulerd[2940]: error: Preventing ha-nfs-clone from restarting anywhere because of fatal failure (Not configured: you really should enable notify when using this RA (or set ignore_missing_notifications=true))
Jun 25 14:57:08 memverge pacemaker-schedulerd[2940]: warning: ha-nfs-clone cannot run on memverge due to reaching migration threshold (clean up resource to allow again)
Jun 25 14:57:08 memverge pacemaker-schedulerd[2940]: warning: ha-nfs-clone cannot run on memverge due to reaching migration threshold (clean up resource to allow again)
Jun 25 14:57:08 memverge pacemaker-schedulerd[2940]: error: Calculated transition 22 (with errors), saving inputs in /var/lib/pacemaker/pengine/pe-error-148.bz2
Jun 25 14:57:08 memverge pacemaker-controld[2941]: notice: Transition 22 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-error-148.bz2): Complete
Jun 25 14:57:08 memverge pacemaker-controld[2941]: notice: State transition S_TRANSITION_ENGINE -> S_IDLE
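
I suppose the fatal failure came from the resource being started once as a plain (non-clone) resource before notify was enabled, which the linbit:drbd RA treats as a fatal configuration error; that is why a cleanup was needed afterwards. Creating the resource and the promotable clone in a single call should avoid that window entirely (a sketch, assuming current pcs syntax):

pcs resource create ha-nfs ocf:linbit:drbd \
    drbd_resource=ha-nfs \
    op monitor timeout=30 interval=31 role=Unpromoted \
    op monitor timeout=30 interval=30 role=Promoted \
    promotable meta promoted-max=1 promoted-node-max=1 \
    clone-node-max=1 clone-max=2 notify=true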

And only after running:

pcs resource cleanup && pcs resource refresh

did I see:

Full List of Resources:
  * ipmi-fence-memverge (stonith:fence_ipmilan):         Started memverge2
  * ipmi-fence-memverge2        (stonith:fence_ipmilan):         Started memverge
  * Clone Set: ha-nfs-clone [ha-nfs] (promotable):
    * ha-nfs    (ocf:linbit:drbd):       Promoted memverge
    * ha-nfs    (ocf:linbit:drbd):       Unpromoted memverge2

[root@memverge ~]# drbdadm status
ha-nfs role:Primary
  volume:29 disk:UpToDate open:no
  volume:30 disk:UpToDate open:no
  memverge2 role:Secondary
    volume:29 peer-disk:UpToDate
    volume:30 peer-disk:UpToDate

After rebooting memverge, on the other node:

[root@memverge2 ~]# drbdadm status
ha-nfs role:Primary
  volume:29 disk:UpToDate open:no
  volume:30 disk:UpToDate open:no
  memverge connection:Connecting

[root@memverge2 ~]# pcs config
Cluster Name: cluster_anton
Corosync Nodes:
 memverge memverge2
Pacemaker Nodes:
 memverge memverge2

Resources:
  Clone: ha-nfs-clone
    Meta Attributes: ha-nfs-clone-meta_attributes
      clone-max=2
      clone-node-max=1
      notify=true
      promotable=true
      promoted-max=1
      promoted-node-max=1
    Resource: ha-nfs (class=ocf provider=linbit type=drbd)
      Attributes: ha-nfs-instance_attributes
        drbd_resource=ha-nfs
      Operations:
        demote: ha-nfs-demote-interval-0s
          interval=0s timeout=90
        monitor: ha-nfs-monitor-interval-31
          interval=31 timeout=30 role=Unpromoted
        monitor: ha-nfs-monitor-interval-30
          interval=30 timeout=30 role=Promoted
        notify: ha-nfs-notify-interval-0s
          interval=0s timeout=90
        promote: ha-nfs-promote-interval-0s
          interval=0s timeout=90
        reload: ha-nfs-reload-interval-0s
          interval=0s timeout=30
        start: ha-nfs-start-interval-0s
          interval=0s timeout=240
        stop: ha-nfs-stop-interval-0s
          interval=0s timeout=100

Stonith Devices:
  Resource: ipmi-fence-memverge (class=stonith type=fence_ipmilan)
    Attributes: ipmi-fence-memverge-instance_attributes
      ip=10.72.14.151
      lanplus=1
      password=hpinvent
      pcmk_host_list=memverge
      power_timeout=60
      power_wait=4
      username=hpadmin
    Operations:
      monitor: ipmi-fence-memverge-monitor-interval-60
        interval=60 timeout=60
      start: ipmi-fence-memverge-start-interval-0s
        interval=0s timeout=60
  Resource: ipmi-fence-memverge2 (class=stonith type=fence_ipmilan)
    Attributes: ipmi-fence-memverge2-instance_attributes
      ip=10.72.14.153
      lanplus=1
      password=hpinvent
      pcmk_host_list=memverge2
      power_timeout=60
      power_wait=4
      username=hpadmin
    Operations:
      monitor: ipmi-fence-memverge2-monitor-interval-60
        interval=60 timeout=60
      start: ipmi-fence-memverge2-start-interval-0s
        interval=0s timeout=60

Location Constraints:
  resource 'ipmi-fence-memverge' avoids node 'memverge' with score INFINITY (id: location-ipmi-fence-memverge-memverge--INFINITY)
  resource 'ipmi-fence-memverge2' avoids node 'memverge2' with score INFINITY (id: location-ipmi-fence-memverge2-memverge2--INFINITY)
  resource 'ha-nfs-clone' (id: drbd-fence-by-handler-ha-nfs-ha-nfs-clone)
    Rules:
      Rule: role=Promoted score=-INFINITY (id: drbd-fence-by-handler-ha-nfs-rule-ha-nfs-clone)
        Expression: #uname ne memverge2 (id: drbd-fence-by-handler-ha-nfs-expr-28-ha-nfs-clone)

Resources Defaults:
  Meta Attrs: build-resource-defaults
    resource-stickiness=1 (id: build-resource-stickiness)

Cluster Properties: cib-bootstrap-options
  cluster-infrastructure=corosync
  cluster-name=cluster_anton
  dc-version=3.0.0-5.el10-fec8a9c
  have-watchdog=false
  last-lrm-refresh=1750852821
  no-quorum-policy=ignore
  stonith-action=off
  stonith-enabled=true
[root@memverge2 ~]#

I rebooted memverge2, and there was no failback to memverge, as we already saw.

Maybe if I remove the record below, the cluster will be able to fail back?

Expression: #uname ne memverge2 (id: drbd-fence-by-handler-ha-nfs-expr-28-ha-nfs-clone)

Anton

This is likely the problem right here. This was how we implemented fencing handlers previously, by abusing the after-resync-target handler. However, with recent DRBD versions we no longer do this. You will want to use the unfence-peer handler instead. See the user’s guide here for an example: https://linbit.com/drbd-user-guide/drbd-guide-9_0-en/#s-pacemaker-fencing-cib
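
Per the linked guide, the handlers section would then look something like this (the unfence-peer script ships in the same package, as your symlink shows; the guide pairs this with an appropriate fencing policy in the resource configuration):

handlers {
    fence-peer "/usr/lib/drbd/crm-fence-peer.9.sh";
    unfence-peer "/usr/lib/drbd/crm-unfence-peer.9.sh";
}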

That is the “strange location constraint” I was talking about in my previous post. It basically says “never promote on a node whose uname is not equal to memverge2”. You can tell this was set by the fencing handler from its name: drbd-fence-by-handler…
This would perfectly explain the behavior you observe. The constraint should be removed automatically by the unfence-peer handler, but you currently don’t have one configured.
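
If a stale constraint is ever left behind, it can presumably also be removed by hand using the ID shown in your pcs config output, with something like:

pcs constraint remove drbd-fence-by-handler-ha-nfs-ha-nfs-clone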


OMG, what else can I say…

I spent so many sleepless nights on this.

I made the correction, and now it works perfectly!

Thank you, Devin!!
