Proxmox snapshot backups of containers sometimes fail

My setup has two storage nodes, each with 2x NVMe drives mirrored with mdadm, LVM on top of the mirror, and DRBD (managed by LINSTOR) on top of the LVM. In addition, a VM running on TrueNAS serves as a diskless witness.
The two storage nodes are connected with a dedicated 25Gb link.

However, container backups sometimes fail. It seems to happen when both hypervisors start a backup at the same time, and it's usually not the same container that fails. After a failure I have to manually remove the snapshot and its related resources.
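For reference, the manual cleanup amounts to something like this sketch (the resource, snapshot, and CT names are taken from this particular failure, and `linstor_vg` is my volume group; I haven't scripted it exactly this way — with `RUN` unset it only prints the commands):

```shell
#!/bin/sh
# Manual cleanup sketch after a failed vzdump snapshot backup.
# Names below are from this failure; adjust for your cluster.
RES=pm-b5d0916f
SNAP=vzdump
CT=106

# Dry run by default: print each command instead of executing it.
run() {
    if [ "${RUN:-0}" = 1 ]; then
        "$@"
    else
        echo "would run: $*"
    fi
}

# Drop the LINSTOR snapshot; this also removes the temporary
# snap_<resource>_vzdump DRBD resource on both nodes and the witness.
run linstor snapshot delete "$RES" "$SNAP"

# Verify nothing was left behind in LINSTOR or on the thin pool.
run linstor snapshot list
run lvs linstor_vg

# Clear the stale 'vzdump' snapshot entry from the container config.
run pct delsnapshot "$CT" "$SNAP" --force
```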

Any ideas?

LINSTOR controller version: 1.33.1
Kernel: 6.17.2-2-pve

Error from Proxmox:

INFO: starting new backup job: vzdump 106 --mode snapshot --storage nas-backup --notes-template '{{guestname}}' --notification-mode notification-system --node hypervisor1 --compress zstd --remove 1
INFO: Starting Backup of VM 106 (lxc)
INFO: Backup started at 2026-01-07 20:39:10
INFO: status = running
INFO: CT Name: nextcloud
INFO: including mount point rootfs ('/') in backup
INFO: excluding bind mount point mp0 ('/mnt/data') from backup (not a volume)
INFO: excluding bind mount point mp1 ('/mnt/uploadtemp') from backup (not a volume)
INFO: found old vzdump snapshot (force removal)
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: create storage snapshot 'vzdump'
mount: /mnt/vzsnap0: fsconfig() failed: /dev/drbd1019: Can't open blockdev.
dmesg(1) may have more information after failed mount system call.
umount: /mnt/vzsnap0/: not mounted.
command 'umount -l -d /mnt/vzsnap0/' failed: exit code 32
ERROR: Backup of VM 106 failed - command 'mount -o ro,noload /dev/drbd1019 /mnt/vzsnap0//' failed: exit code 32
INFO: Failed at 2026-01-07 20:39:27
INFO: Backup job finished with errors

Error log from the hypervisor where the backup failed (reverse chronological order, newest entries first):

Jan 07 20:39:37 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019 hypervisor2: Began resync as SyncSource (will sync 14680064 KB [3670016 bits set]).
Jan 07 20:39:37 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019 hypervisor2: pdsk( Outdated → Inconsistent ) repl( WFBitMapS → SyncSource ) replication( yes → no ) [receive-b>
Jan 07 20:39:37 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019 hypervisor2: helper command: /sbin/drbdadm before-resync-source exit code 0
Jan 07 20:39:37 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019 hypervisor2: helper command: /sbin/drbdadm before-resync-source
Jan 07 20:39:37 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019 hypervisor2: receive bitmap stats [Bytes(packets)]: plain 0(0), RLE 24(1), total 24; compression: 100.0%
Jan 07 20:39:37 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019 hypervisor2: send bitmap stats [Bytes(packets)]: plain 0(0), RLE 24(1), total 24; compression: 100.0%
Jan 07 20:39:37 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019 hypervisor2: pdsk( Consistent → Outdated ) [peer-state]
Jan 07 20:39:37 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019 hypervisor2: pdsk( DUnknown → Consistent ) repl( Off → WFBitMapS ) [connected]
Jan 07 20:39:37 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019: quorum( no → yes ) [connected]
Jan 07 20:39:37 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump hypervisor2: conn( Connecting → Connected ) peer( Unknown → Secondary ) [connected]
Jan 07 20:39:37 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump: Committing cluster-wide state change 1782363112 (23ms)
Jan 07 20:39:37 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump: State change 1782363112: primary_nodes=0, weak_nodes=0
Jan 07 20:39:37 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019 hypervisor2: uuid_compare()=source-if-both-failed by rule=both-off
Jan 07 20:39:37 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019 hypervisor2: peer C1B6BD4A543399DC:0000000000000000:0000000000000000:0000000000000000 bits:1750016 flags:1020
Jan 07 20:39:37 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019 hypervisor2: self C1B6BD4A543399DC:0000000000000000:0000000000000000:0000000000000000 bits:3670016 flags:22
Jan 07 20:39:37 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019 hypervisor2: drbd_sync_handshake:
Jan 07 20:39:37 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump: Preparing cluster-wide state change 1782363112: 0->1 role( Secondary ) conn( Connected )
Jan 07 20:39:37 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump hypervisor2: Peer authenticated using 20 bytes HMAC
Jan 07 20:39:37 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump hypervisor2: Feature flags enabled on protocol level: 0x1ff TRIM THIN_RESYNC WRITE_SAME WRITE_ZEROES RESYNC_DAGTAG
Jan 07 20:39:37 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump hypervisor2: Handshake to peer 1 successful: Agreed network protocol version 123
Jan 07 20:39:27 hypervisor1 pvedaemon[2641647]: INFO: Backup job finished with errors
Jan 07 20:39:27 hypervisor1 pvedaemon[2641647]: ERROR: Backup of VM 106 failed - command 'mount -o ro,noload /dev/drbd1019 /mnt/vzsnap0//' failed: exit code 32
Jan 07 20:39:27 hypervisor1 pvedaemon[2641647]: command 'umount -l -d /mnt/vzsnap0/' failed: exit code 32
Jan 07 20:39:27 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:27.204 [DeviceManager] INFO LINSTOR/Satellite/3c8fd2 SYSTEM - Begin DeviceManager cycle 106
Jan 07 20:39:27 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:27.204 [DeviceManager] INFO LINSTOR/Satellite/ SYSTEM - End DeviceManager cycle 105
Jan 07 20:39:27 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:27.143 [DeviceManager] INFO LINSTOR/Satellite/952b0b SYSTEM - Resource 'snap_pm-b5d0916f_vzdump' [DRBD] adjusted.
Jan 07 20:39:25 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump hypervisor2: conn( Unconnected → Connecting ) [connecting]
Jan 07 20:39:25 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump hypervisor2: Restarting receiver thread
Jan 07 20:39:25 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump hypervisor2: conn( NetworkFailure → Unconnected ) [disconnected]
Jan 07 20:39:25 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump hypervisor2: helper command: /sbin/drbdadm disconnected exit code 0
Jan 07 20:39:25 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump hypervisor2: helper command: /sbin/drbdadm disconnected
Jan 07 20:39:25 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump hypervisor2: Connection closed
Jan 07 20:39:25 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump hypervisor2: Starting sender thread (peer-node-id 1)
Jan 07 20:39:25 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump hypervisor2: Terminating sender thread
Jan 07 20:39:25 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump hypervisor2: Failure to connect: Interrupted state change (-21); retrying
Jan 07 20:39:25 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump: Aborting cluster-wide state change 884033480 (19ms) rv = -21
Jan 07 20:39:25 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump hypervisor2: meta connection shut down by peer.
Jan 07 20:39:25 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump hypervisor2: conn( Connecting → NetworkFailure )
Jan 07 20:39:25 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump hypervisor2: sock was shut down by peer
Jan 07 20:39:25 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump hypervisor2: meta connection shut down by peer.
Jan 07 20:39:25 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump: Preparing cluster-wide state change 884033480: 0->1 role( Secondary ) conn( Connected )
Jan 07 20:39:25 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump hypervisor2: Peer authenticated using 20 bytes HMAC
Jan 07 20:39:25 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump hypervisor2: Feature flags enabled on protocol level: 0x1ff TRIM THIN_RESYNC WRITE_SAME WRITE_ZEROES RESYNC_DAGTAG
Jan 07 20:39:25 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump hypervisor2: Handshake to peer 1 successful: Agreed network protocol version 123
Jan 07 20:39:16 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:16.540 [DeviceManager] INFO LINSTOR/Satellite/952b0b SYSTEM - Begin DeviceManager cycle 105
Jan 07 20:39:16 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:16.540 [DeviceManager] INFO LINSTOR/Satellite/ SYSTEM - End DeviceManager cycle 104
Jan 07 20:39:16 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:16.484 [MainWorkerPool-12] INFO LINSTOR/Satellite/6d995a SYSTEM - Storage pool 'pve-storage' for node 'hypervisor1' updated.
Jan 07 20:39:16 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:16.484 [DeviceManager] INFO LINSTOR/Satellite/f42f14 SYSTEM - Begin DeviceManager cycle 104
Jan 07 20:39:16 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:16.484 [DeviceManager] INFO LINSTOR/Satellite/ SYSTEM - End DeviceManager cycle 103
Jan 07 20:39:16 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:16.483 [MainWorkerPool-11] INFO LINSTOR/Satellite/6fac8f SYSTEM - Storage pool 'pve-storage' for node 'hypervisor2' updated.
Jan 07 20:39:16 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:16.482 [MainWorkerPool-8] INFO LINSTOR/Satellite/001f93 SYSTEM - SpaceInfo: pve-storage → 458506245/742039552
Jan 07 20:39:16 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:16.419 [MainWorkerPool-8] INFO LINSTOR/Satellite/001f93 SYSTEM - SpaceInfo: DfltDisklessStorPool → 9223372036854775807/922>
Jan 07 20:39:12 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump qdevice: Committing remote state change 260165721 (primary_nodes=0)
Jan 07 20:39:12 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump: Preparing remote state change 260165721: 1->2 role( Secondary ) conn( Connected )
Jan 07 20:39:12 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019 qdevice: pdsk( DUnknown → Diskless ) repl( Off → Established ) [connected]
Jan 07 20:39:12 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump qdevice: conn( Connecting → Connected ) peer( Unknown → Secondary ) [connected]
Jan 07 20:39:12 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump: Committing cluster-wide state change 296415547 (21ms)
Jan 07 20:39:12 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump: State change 296415547: primary_nodes=0, weak_nodes=0
Jan 07 20:39:12 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019 qdevice: peer's exposed UUID: 0000000000000000
Jan 07 20:39:12 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019 qdevice: self C1B6BD4A543399DC:0000000000000000:0000000000000000:0000000000000000 bits:0 flags:0
Jan 07 20:39:12 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump: Preparing cluster-wide state change 296415547: 0->2 role( Secondary ) conn( Connected )
Jan 07 20:39:12 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump qdevice: Peer authenticated using 20 bytes HMAC
Jan 07 20:39:12 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump qdevice: Feature flags enabled on protocol level: 0x1ff TRIM THIN_RESYNC WRITE_SAME WRITE_ZEROES RESYNC_DAGTAG
Jan 07 20:39:12 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump qdevice: Handshake to peer 2 successful: Agreed network protocol version 123
Jan 07 20:39:12 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:12.057 [DeviceManager] INFO LINSTOR/Satellite/b6d228 SYSTEM - Begin DeviceManager cycle 103
Jan 07 20:39:12 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:12.057 [DeviceManager] INFO LINSTOR/Satellite/ SYSTEM - End DeviceManager cycle 102
Jan 07 20:39:12 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:12.052 [DeviceManager] INFO LINSTOR/Satellite/86027d SYSTEM - Resource 'snap_pm-b5d0916f_vzdump' [DRBD] adjusted.
Jan 07 20:39:12 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:12.039 [DeviceManager] INFO LINSTOR/Satellite/86027d SYSTEM - Begin DeviceManager cycle 102
Jan 07 20:39:12 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:12.039 [DeviceManager] INFO LINSTOR/Satellite/ SYSTEM - End DeviceManager cycle 101
Jan 07 20:39:11 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump qdevice: conn( Unconnected → Connecting ) [connecting]
Jan 07 20:39:11 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump qdevice: Starting receiver thread (peer-node-id 2)
Jan 07 20:39:11 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump hypervisor2: conn( Unconnected → Connecting ) [connecting]
Jan 07 20:39:11 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:11.913 [DeviceManager] INFO LINSTOR/Satellite/f73358 SYSTEM - Resource 'snap_pm-b5d0916f_vzdump' [DRBD] adjusted.
Jan 07 20:39:11 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump qdevice: conn( StandAlone → Unconnected ) [connect]
Jan 07 20:39:11 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump hypervisor2: Starting receiver thread (peer-node-id 1)
Jan 07 20:39:11 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump hypervisor2: conn( StandAlone → Unconnected ) [connect]
Jan 07 20:39:11 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019: Setting exposed data uuid: C1B6BD4A543399DC
Jan 07 20:39:11 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019: attached to current UUID: C1B6BD4A543399DC
Jan 07 20:39:11 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019: disk( Attaching → UpToDate ) [attach]
Jan 07 20:39:11 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019: size = 14 GB (14680984 KB)
Jan 07 20:39:11 hypervisor1 kernel: drbd1019: detected capacity change from 0 to 29361968
Jan 07 20:39:11 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019: resync bitmap: bits=3670246 bits_4k=3670246 words=401436 pages=785
Jan 07 20:39:11 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019: drbd_bm_resize called with capacity == 29361968
Jan 07 20:39:11 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump: Method to ensure write ordering: flush
Jan 07 20:39:11 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019: Maximum number of peer devices = 7
Jan 07 20:39:11 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019: disk( Diskless → Attaching ) [attach]
Jan 07 20:39:11 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump/0 drbd1019: meta-data IO uses: blk-bio
Jan 07 20:39:11 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump qdevice: Starting sender thread (peer-node-id 2)
Jan 07 20:39:11 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump hypervisor2: Starting sender thread (peer-node-id 1)
Jan 07 20:39:11 hypervisor1 kernel: drbd snap_pm-b5d0916f_vzdump: Starting worker thread (node-id 0)
Jan 07 20:39:11 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:11.874 [DeviceManager] INFO LINSTOR/Satellite/f73358 SYSTEM - DRBD regenerated resource file: /var/lib/linstor.d/snap_pm-b5>
Jan 07 20:39:11 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:11.871 [DeviceManager] INFO LINSTOR/Satellite/f73358 SYSTEM - Volume number 0 of resource 'snap_pm-b5d0916f_vzdump' [LVM-Th>
Jan 07 20:39:11 hypervisor1 dmeventd[979]: Monitoring thin pool linstor_vg-thinpool-tpool.
Jan 07 20:39:11 hypervisor1 dmeventd[979]: No longer monitoring thin pool linstor_vg-thinpool-tpool.
Jan 07 20:39:11 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:11.658 [DeviceManager] INFO LINSTOR/Satellite/f73358 SYSTEM - Begin DeviceManager cycle 101
Jan 07 20:39:11 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:11.658 [DeviceManager] INFO LINSTOR/Satellite/ SYSTEM - End DeviceManager cycle 100
Jan 07 20:39:11 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:11.653 [DeviceManager] INFO LINSTOR/Satellite/f09f31 SYSTEM - Resource 'pm-b5d0916f' [DRBD] adjusted.
Jan 07 20:39:11 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:11.642 [MainWorkerPool-1] INFO LINSTOR/Satellite/318524 SYSTEM - Snapshot 'snap_pm-b5d0916f_vzdump' of resource 'pm-b5d0916>
Jan 07 20:39:11 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:11.571 [DeviceManager] INFO LINSTOR/Satellite/f09f31 SYSTEM - Begin DeviceManager cycle 100
Jan 07 20:39:11 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:11.571 [DeviceManager] INFO LINSTOR/Satellite/ SYSTEM - End DeviceManager cycle 99
Jan 07 20:39:11 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:11.566 [DeviceManager] INFO LINSTOR/Satellite/8d9e26 SYSTEM - Resource 'pm-b5d0916f' [DRBD] adjusted.
Jan 07 20:39:11 hypervisor1 kernel: drbd pm-b5d0916f: susp-io( user → no ) [resume-io]
Jan 07 20:39:11 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:11.548 [MainWorkerPool-6] INFO LINSTOR/Satellite/fcbfd7 SYSTEM - Snapshot 'snap_pm-b5d0916f_vzdump' of resource 'pm-b5d0916>
Jan 07 20:39:11 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:11.544 [DeviceManager] INFO LINSTOR/Satellite/8d9e26 SYSTEM - Begin DeviceManager cycle 99
Jan 07 20:39:11 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:11.544 [DeviceManager] INFO LINSTOR/Satellite/ SYSTEM - End DeviceManager cycle 98
Jan 07 20:39:11 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:11.443 [DeviceManager] INFO LINSTOR/Satellite/537707 SYSTEM - Snapshot [LVM-Thin] with name 'snap_pm-b5d0916f_vzdump' of re>
Jan 07 20:39:11 hypervisor1 dmeventd[979]: Monitoring thin pool linstor_vg-thinpool-tpool.
Jan 07 20:39:11 hypervisor1 dmeventd[979]: No longer monitoring thin pool linstor_vg-thinpool-tpool.
Jan 07 20:39:11 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:11.314 [DeviceManager] INFO LINSTOR/Satellite/537707 SYSTEM - Resource 'pm-b5d0916f' [DRBD] adjusted.
Jan 07 20:39:11 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:11.303 [MainWorkerPool-3] INFO LINSTOR/Satellite/de03aa SYSTEM - Snapshot 'snap_pm-b5d0916f_vzdump' of resource 'pm-b5d0916>
Jan 07 20:39:11 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:11.301 [DeviceManager] INFO LINSTOR/Satellite/537707 SYSTEM - Begin DeviceManager cycle 98
Jan 07 20:39:11 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:11.300 [DeviceManager] INFO LINSTOR/Satellite/ SYSTEM - End DeviceManager cycle 97
Jan 07 20:39:11 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:11.243 [DeviceManager] INFO LINSTOR/Satellite/f80f19 SYSTEM - Resource 'pm-b5d0916f' [DRBD] adjusted.
Jan 07 20:39:11 hypervisor1 kernel: drbd pm-b5d0916f: susp-io( no → user ) [suspend-io]
Jan 07 20:39:11 hypervisor1 Satellite[3554249]: 2026-01-07 20:39:11.123 [MainWorkerPool-16] INFO LINSTOR/Satellite/05a4c2 SYSTEM - Snapshot 'snap_pm-b5d0916f_vzdump' of resource 'pm-b5d091>
Jan 07 20:39:10 hypervisor1 pvedaemon[2641647]: INFO: Starting Backup of VM 106 (lxc)