3-node cluster / some vdisks stay in status Inconsistent

Fully patched Proxmox VE 9.1.6.

Latest DRBD release.

Two nodes are fully synced; one node has some Inconsistent vdisks:

drbdadm status all

pm-479d5da8 role:Primary
  disk:UpToDate open:yes
  pveAMD-AI role:Secondary
    peer-disk:UpToDate
  pveAMD02 role:Secondary
    replication:SyncSource peer-disk:Inconsistent done:4.00

pm-8b1faeab role:Primary
  disk:UpToDate open:yes
  pveAMD-AI role:Secondary
    peer-disk:UpToDate
  pveAMD02 role:Secondary
    peer-disk:UpToDate

pm-8f4a8d6b role:Primary
  disk:UpToDate open:yes
  pveAMD-AI role:Secondary
    peer-disk:UpToDate
  pveAMD02 role:Secondary
    replication:SyncSource peer-disk:Inconsistent done:1.93


drbdadm adjust all

drbdadm -V
DRBDADM_BUILDTAG=GIT-hash:\ d10b5f53cdf6a445d6fc02cfc2477a129f4e7e83\ build\ by\ @buildsystem\,\ 2026-03-11\ 12:53:05
DRBDADM_API_VERSION=2
DRBD_KERNEL_VERSION_CODE=0x090300
DRBD_KERNEL_VERSION=9.3.0
DRBDADM_VERSION_CODE=0x092101
DRBDADM_VERSION=9.33.1

linstor -v
linstor-client 1.27.1; GIT-hash: 9c57f040eb3834500db508e4f04d361d006cb6b5

Simply no sync progress for these two vdisks.
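To spot which resources are still stuck without reading the whole dump, a small awk filter over the `drbdadm status all` output works. This is a sketch fed from a sample of the output above; on a live node you would pipe `drbdadm status all` into it directly:

```shell
# Sample of the status output from above (on a real node, replace the
# variable with: drbdadm status all | awk '...')
status='pm-479d5da8 role:Primary
  pveAMD02 role:Secondary
    replication:SyncSource peer-disk:Inconsistent done:4.00
pm-8b1faeab role:Primary
  pveAMD02 role:Secondary
    peer-disk:UpToDate
pm-8f4a8d6b role:Primary
  pveAMD02 role:Secondary
    replication:SyncSource peer-disk:Inconsistent done:1.93'

printf '%s\n' "$status" | awk '
  /^[^ ]/ { res = $1 }                     # unindented line = resource name
  /peer-disk:Inconsistent/ { print res }   # resource still has a syncing peer
' | sort -u
```

This prints only `pm-479d5da8` and `pm-8f4a8d6b`, the two resources with an Inconsistent peer.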

Any suggestions?

Thanks and happy weekend.

I’ve found the workaround… you have to manually down/up the resource on the corresponding node:

drbdadm down pm-479d5da8

drbdadm up pm-479d5da8

drbdadm status pm-479d5da8

pm-479d5da8 role:Secondary
  disk:UpToDate open:no
  pveAMD01 role:Primary
    peer-disk:UpToDate
  pveAMD02 role:Secondary
    peer-disk:UpToDate
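For more than one stuck resource, the down/up workaround can be wrapped in a loop. A minimal sketch, assuming the two resource names from the status output above; it must run on the node whose disks are Inconsistent, while each resource is Secondary there and its vdisk is not in use. The `DRY_RUN` guard is my own addition (it only prints the commands), not a drbdadm feature:

```shell
# Bounce each stuck DRBD resource (assumed names from the status above).
# DRY_RUN=1 (the default here) only prints what would be executed;
# set DRY_RUN=0 to actually run drbdadm.
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" = 1 ] && echo "would run: $*" || "$@"; }

for res in pm-479d5da8 pm-8f4a8d6b; do
    run drbdadm down "$res"
    run drbdadm up "$res"
    run drbdadm status "$res"
done
```

After the up, the resource reconnects and the resync should resume from the activity log rather than staying stalled.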