Hi there,
as you can see in the thread “Migrating Proxmox-VM(s) fails with 'Wrong medium type'”, I’m kind of struggling with a small Proxmox/DRBD cluster.
Yesterday I ran updates on all nodes just to make sure everything is up to date, hoping that the mentioned problem might solve itself.
Before the update I wanted to hibernate all the VMs on node1, where they were running … but that didn’t work!
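For reference, this is roughly the call I used for the hibernation attempt (assuming the standard suspend-to-disk invocation; 1xx stands for a VM id):

qm suspend 1xx --todisk 1    # hibernate: write the VM state to a vmstate volume and stop the VM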
Then I shut all the VMs down, ran updates and rebooted.
Afterwards I had to run “drbdadm primary vm-1xx-disk-1” on all the disks that are in use (DRBD-replicated to two other nodes).
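To see the role before and after promoting, drbdadm status is the standard way to inspect a resource:

drbdadm status vm-1xx-disk-1    # shows this node's role (Primary/Secondary) and the peers' state
drbdadm primary vm-1xx-disk-1   # promote this node to Primary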
And now … the disks reside in ‘datapool’ with a 00000 suffix added to their names, plus there’s a …state_suspende$DATE_00000 volume for each of those disks.
The disks are defined to be in ‘drbdstorage’, where they are also visible, but node1 uses them from ‘datapool’, which it treats as local storage.
So migrating to the other nodes works now … but it copies the disks over the network to the target node instead of relying on the underlying DRBD device (in ‘drbdstorage’).
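For completeness, this is how I compared the different views (pvesm list and qm config are the standard Proxmox tools; 1xx again stands for a VM id, and ‘datapool’ is assumed to also be defined as a Proxmox storage, which it apparently is):

zfs list -r datapool            # raw ZFS view: shows the 00000 volumes and the state_suspende… volumes
pvesm list drbdstorage          # what Proxmox sees on the DRBD storage
pvesm list datapool             # what Proxmox sees on the local ZFS pool
qm config 1xx | grep -i disk    # which storage each VM disk actually references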
What is that?
And how can I get rid of it?
Seems like I’m a bit lost with all those pools, storages, groups and so on.
Setting it up worked like a charm, but now the VM disks seem to have ‘moved’ somehow and behave as if they were on local storage instead of on DRBD storage.
Can anybody please show me a way out of this …?
Best regards
Matthias
PS:
datapool:
linstor storage-pool create zfs nodeX LinstorData datapool    # storage pool "LinstorData" on nodeX, backed by the ZFS pool "datapool"
linstor resource make-available --diskful nodeX vm-10x-disk-1    # place a diskful replica of the resource on nodeX
etc.
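And to double-check how LINSTOR itself sees things (standard listing commands of the linstor client):

linstor storage-pool list    # storage pools per node
linstor resource list        # resources and their state per node
linstor volume list          # volumes incl. the backing device names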
/etc/pve/storage.cfg:
drbd: drbdstorage
content images, rootdir
controller 10.10.10.1,10.10.10.2,10.10.10.3
resourcegroup pve-rg
preferlocal yes
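And for an overview of everything that is defined as a storage (pvesm status is the standard overview command):

pvesm status    # lists every configured storage, its type and whether it is active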