Resources StandAlone after PVE update

After the last PVE update (plus drbd/linstor from the enterprise repo) and an orderly reboot via the web interface, all resources were in StandAlone mode. I had to run “drbdadm connect” on each of them. Worse, three of the volumes wouldn’t come up at all, failing with errors such as:

open(/dev/linstor_DRBD_back/pm-75d160d2_00000) failed: No such file or directory

I had to manually activate the LVs and run drbdadm adjust … to get them back up:

# lvchange --setactivationskip n linstor_DRBD_back/pm-75d160d2_00000
  Logical volume linstor_DRBD_back/pm-75d160d2_00000 changed.
# lvchange -ay linstor_DRBD_back/pm-75d160d2_00000
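Since I had to repeat that for several volumes, I ended up sketching a small helper. This is just my own workaround, not an official LINSTOR procedure: it reads saved `lvs --noheadings -o lv_name,lv_attr` output and prints the lvchange commands for any inactive LV (the fifth lv_attr character is the state flag, 'a' when active). The VG name and flags come from the commands above; the function name and the dry-run approach are my own.

```shell
#!/bin/sh
# Print the lvchange commands needed to reactivate inactive LVs in a VG.
# Input: `lvs --noheadings -o lv_name,lv_attr <vg>` on stdin.
reactivate_cmds() {
  vg="$1"
  while read -r lv attr; do
    # lv_attr position 5 is the state flag: 'a' = active, '-' = inactive
    state=$(printf '%s' "$attr" | cut -c5)
    if [ "$state" != "a" ]; then
      echo "lvchange --setactivationskip n $vg/$lv"
      echo "lvchange -ay $vg/$lv"
    fi
  done
}

# Example with saved lvs output: only the inactive LV gets commands
printf 'pm-75d160d2_00000 -wi-------\npm-05949a29_00000 -wi-a-----\n' \
  | reactivate_cmds linstor_DRBD_back
```

I pipe the output through a shell once it looks right, then run drbdadm adjust on the affected resources.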

The log files show this before the reboot (for each resource):

Apr 14 21:09:26 pve2 kernel: drbd pm-05949a29/0 drbd1041: Would lose quorum, but using tiebreaker logic to keep
Apr 14 21:09:26 pve2 kernel: drbd pm-05949a29: Preparing cluster-wide state change 1115886162: 1->0 conn( Disconnecting )
Apr 14 21:09:26 pve2 kernel: drbd pm-05949a29: State change 1115886162: primary_nodes=0, weak_nodes=0
Apr 14 21:09:26 pve2 kernel: drbd pm-05949a29: Committing cluster-wide state change 1115886162 (0ms)
Apr 14 21:09:26 pve2 kernel: drbd pm-05949a29 pve1: conn( Connected → Disconnecting ) peer( Secondary → Unknown ) [down]
Apr 14 21:09:26 pve2 kernel: drbd pm-05949a29/0 drbd1041 pve1: pdsk( UpToDate → DUnknown ) repl( Established → Off ) [down]
Apr 14 21:09:26 pve2 kernel: drbd pm-05949a29 pve1: Terminating sender thread
Apr 14 21:09:26 pve2 kernel: drbd pm-05949a29 pve1: Starting sender thread (peer-node-id 0)
Apr 14 21:09:26 pve2 kernel: drbd pm-05949a29 pve1: Connection closed
Apr 14 21:09:26 pve2 kernel: drbd pm-05949a29 pve1: helper command: /sbin/drbdadm disconnected
Apr 14 21:09:26 pve2 kernel: drbd pm-05949a29 pve1: helper command: /sbin/drbdadm disconnected exit code 0
Apr 14 21:09:26 pve2 kernel: drbd pm-05949a29 pve1: conn( Disconnecting → StandAlone ) [disconnected]
Apr 14 21:09:26 pve2 kernel: drbd pm-05949a29 pve1: Terminating receiver thread
Apr 14 21:09:26 pve2 kernel: drbd pm-05949a29 pve1: Terminating sender thread
Apr 14 21:09:26 pve2 kernel: drbd pm-05949a29: Preparing cluster-wide state change 1443733156: 1->2 conn( Disconnecting )
Apr 14 21:09:26 pve2 kernel: drbd pm-05949a29: State change 1443733156: primary_nodes=0, weak_nodes=0
Apr 14 21:09:26 pve2 kernel: drbd pm-05949a29 pve3: Cluster is now split
Apr 14 21:09:26 pve2 kernel: drbd pm-05949a29: Committing cluster-wide state change 1443733156 (0ms)
Apr 14 21:09:26 pve2 kernel: drbd pm-05949a29: susp-io( no → quorum ) [down]
Apr 14 21:09:26 pve2 kernel: drbd pm-05949a29 pve3: conn( Connected → Disconnecting ) peer( Secondary → Unknown ) [down]
Apr 14 21:09:26 pve2 kernel: drbd pm-05949a29/0 drbd1041: quorum( yes → no ) [down]
Apr 14 21:09:26 pve2 kernel: drbd pm-05949a29/0 drbd1041 pve3: pdsk( Diskless → DUnknown ) repl( Established → Off ) [down]
Apr 14 21:09:26 pve2 kernel: drbd pm-05949a29 pve3: Terminating sender thread
Apr 14 21:09:26 pve2 kernel: drbd pm-05949a29 pve3: Starting sender thread (peer-node-id 2)
Apr 14 21:09:27 pve2 kernel: drbd pm-05949a29 pve3: Connection closed
Apr 14 21:09:27 pve2 kernel: drbd pm-05949a29 pve3: helper command: /sbin/drbdadm disconnected
Apr 14 21:09:27 pve2 kernel: drbd pm-05949a29 pve3: helper command: /sbin/drbdadm disconnected exit code 0
Apr 14 21:09:27 pve2 kernel: drbd pm-05949a29 pve3: conn( Disconnecting → StandAlone ) [disconnected]
Apr 14 21:09:27 pve2 kernel: drbd pm-05949a29 pve3: Terminating receiver thread
Apr 14 21:09:27 pve2 kernel: drbd pm-05949a29 pve3: Terminating sender thread
Apr 14 21:09:27 pve2 kernel: drbd pm-05949a29/0 drbd1041: disk( UpToDate → Detaching ) [down]
Apr 14 21:09:27 pve2 kernel: drbd pm-05949a29/0 drbd1041: disk( Detaching → Diskless ) [go-diskless]
Apr 14 21:09:27 pve2 kernel: drbd pm-05949a29/0 drbd1041: Freeing bitmap of size 7172 KiB

Did I do something wrong there?