New resource becomes Standalone when using LUKS and mixing Storage Drivers

Hello everyone, I’m evaluating Linstor as a solution for running compute workloads on local disks while replicating them to a storage layer.

At this moment, I have 4 compute nodes and 1 storage node. Each compute node has 1 disk on LVM for VM creation, while the storage node runs on ZFS for long term persistence.

My goal is to have the compute nodes access their storage locally, while ensuring replication to the backup node for long-term retention. Next, I’d also like to add backup routines to off-site object storage in the cloud, which also means making sure encryption is enabled on the disks.
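For the off-site part, the rough plan is to use LINSTOR’s S3 remotes and shipped backups, something along these lines (the remote name, endpoint, bucket and credentials below are just placeholders):

linstor remote create s3 offsite s3.eu-central-1.amazonaws.com my-bucket eu-central-1 <access-key> <secret-key>
linstor backup create offsite <resource-name>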

In my tests so far, I’ve been able to create simple storage resources across the two drivers after allowing them to be mixed (AllowMixingStoragePoolDriver true). But I’m not able to do the same once LUKS is added, likely due to mismatching sector sizes/extent sizes or something similar.

So far, I’ve created 2 storage pools: backup with the ZFS_THIN driver, and compute with the LVM_THIN driver.
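For reference, they were created roughly like this (one compute node shown; the backing dataset/thin pool names match the listing below):

linstor storage-pool create zfsthin omv backup nas
linstor storage-pool create lvmthin romulus compute pve/data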

root@romulus:~# linstor storage-pool list -p
+-----------------------------------------------------------------------------------------------------------------------------------------------+
| StoragePool          | Node      | Driver   | PoolName | FreeCapacity | TotalCapacity | CanSnapshots | State | SharedName                     |
|===============================================================================================================================================|
| DfltDisklessStorPool | omv       | DISKLESS |          |              |               | False        | Ok    | omv;DfltDisklessStorPool       |
| DfltDisklessStorPool | romulus   | DISKLESS |          |              |               | False        | Ok    | romulus;DfltDisklessStorPool   |
| DfltDisklessStorPool | rotterdam | DISKLESS |          |              |               | False        | Ok    | rotterdam;DfltDisklessStorPool |
| DfltDisklessStorPool | ryzen     | DISKLESS |          |              |               | False        | Ok    | ryzen;DfltDisklessStorPool     |
| DfltDisklessStorPool | tiny      | DISKLESS |          |              |               | False        | Ok    | tiny;DfltDisklessStorPool      |
| backup               | omv       | ZFS_THIN | nas      |     5.17 TiB |      7.27 TiB | True         | Ok    | omv;backup                     |
| compute              | romulus   | LVM_THIN | pve/data |   264.72 GiB |    348.82 GiB | True         | Ok    | romulus;compute                |
| compute              | rotterdam | LVM_THIN | pve/data |   348.61 GiB |    348.82 GiB | True         | Ok    | rotterdam;compute              |
| compute              | ryzen     | LVM_THIN | pve/data |     1.60 TiB |      1.67 TiB | True         | Ok    | ryzen;compute                  |
| compute              | tiny      | LVM_THIN | pve/data |   319.97 GiB |    319.97 GiB | True         | Ok    | tiny;compute                   |
+-----------------------------------------------------------------------------------------------------------------------------------------------+

Next, I’ve created the resource group to use both storage pools:

linstor resource-group create nomad --storage-pool backup compute --place-count 2 -l DRBD,LUKS,STORAGE
linstor resource-group set-property nomad AllowMixingStoragePoolDriver  true
LINSTOR ==> rg list -p
+---------------------------------------------------------------------------------+
| ResourceGroup  | SelectFilter                            | VlmNrs | Description |
|=================================================================================|
| DfltRscGrp     | PlaceCount: 2                           |        |             |
|---------------------------------------------------------------------------------|
| linstor-db-grp | PlaceCount: 3                           | 0      |             |
|---------------------------------------------------------------------------------|
| nomad          | PlaceCount: 2                           |        |             |
|                | StoragePool(s): compute, backup         |        |             |
|                | DisklessOnRemaining: False              |        |             |
|                | LayerStack: ['DRBD', 'LUKS', 'STORAGE'] |        |             |
|---------------------------------------------------------------------------------|
| proxmox        | PlaceCount: 2                           | 0      |             |
|                | StoragePool(s): compute, backup         |        |             |
+---------------------------------------------------------------------------------+

resource-group query-size-info nomad
╭──────────────────────────────────────────────────────────────╮
┊ MaxVolumeSize ┊ AvailableSize ┊ Capacity ┊ Next Spawn Result ┊
╞══════════════════════════════════════════════════════════════╡
┊     31.96 TiB ┊      2.51 TiB ┊ 2.66 TiB ┊ backup on omv     ┊
┊               ┊               ┊          ┊ compute on ryzen  ┊
╰──────────────────────────────────────────────────────────────╯

Spawning a resource based on this configuration does create the disks across the two storage pools, but the resource fails immediately with a StandAlone error:

LINSTOR ==> rg spawn nomad a-encrypted 5Gib -l DRBD,LUKS,STORAGE
LINSTOR ==> resource list -r a-encrypted
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ ResourceName ┊ Node  ┊ Layers            ┊ Usage  ┊ Conns             ┊      State ┊ CreatedOn           ┊
╞══════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ a-encrypted  ┊ omv   ┊ DRBD,LUKS,STORAGE ┊ Unused ┊ StandAlone(ryzen) ┊   Outdated ┊ 2026-01-14 11:06:04 ┊
┊ a-encrypted  ┊ ryzen ┊ DRBD,LUKS,STORAGE ┊ Unused ┊ StandAlone(omv)   ┊   UpToDate ┊ 2026-01-14 11:06:01 ┊
┊ a-encrypted  ┊ tiny  ┊ DRBD,STORAGE      ┊ Unused ┊ Ok                ┊ TieBreaker ┊ 2026-01-14 11:05:51 ┊
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────╯
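On the affected nodes themselves, the same StandAlone state can be inspected directly with DRBD’s own tooling, for example:

drbdadm status a-encrypted
drbdsetup status a-encrypted --verbose --statistics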

Creating a plain resource with mixed drivers, without LUKS, actually seems to work

LINSTOR ==> rg spawn nomad a-plain 5Gib -l DRBD,STORAGE
LINSTOR ==> resource list -r a-plain
╭───────────────────────────────────────────────────────────────────────────────────────────╮
┊ ResourceName ┊ Node    ┊ Layers       ┊ Usage  ┊ Conns ┊      State ┊ CreatedOn           ┊
╞═══════════════════════════════════════════════════════════════════════════════════════════╡
┊ a-plain      ┊ omv     ┊ DRBD,STORAGE ┊ Unused ┊ Ok    ┊   UpToDate ┊ 2026-01-14 11:08:44 ┊
┊ a-plain      ┊ romulus ┊ DRBD,STORAGE ┊ Unused ┊ Ok    ┊ TieBreaker ┊ 2026-01-14 11:08:40 ┊
┊ a-plain      ┊ ryzen   ┊ DRBD,STORAGE ┊ Unused ┊ Ok    ┊   UpToDate ┊ 2026-01-14 11:08:40 ┊
╰───────────────────────────────────────────────────────────────────────────────────────────╯

Looking at dmesg on the nodes, these are the logs from creating the LUKS volume that goes out of sync.

From the ZFS node

[303001.501572] drbd a-encrypted: Starting worker thread (node-id 0)
[303001.524393] drbd a-encrypted ryzen: Starting sender thread (peer-node-id 1)
[303001.540059] drbd a-encrypted tiny: Starting sender thread (peer-node-id 2)
[303001.646442] drbd a-encrypted/0 drbd1019: meta-data IO uses: blk-bio
[303001.650803] drbd a-encrypted/0 drbd1019: disk( Diskless -> Attaching ) [attach]
[303001.650830] drbd a-encrypted/0 drbd1019: Maximum number of peer devices = 7
[303001.651123] drbd a-encrypted: Method to ensure write ordering: flush
[303001.651161] drbd a-encrypted/0 drbd1019: drbd_bm_resize called with capacity == 10458872
[303001.655182] drbd a-encrypted/0 drbd1019: resync bitmap: bits=1307359 bits_4k=1307359 words=142996 pages=280
[303001.655197] drbd a-encrypted/0 drbd1019: size = 5107 MB (5229436 KB)
[303001.684153] drbd a-encrypted/0 drbd1019: bitmap READ of 280 pages took 20 ms
[303001.686357] drbd a-encrypted/0 drbd1019: recounting of set bits took additional 4ms
[303001.686392] drbd a-encrypted/0 drbd1019: disk( Attaching -> Inconsistent ) [attach]
[303001.686401] drbd a-encrypted/0 drbd1019: attached to current UUID: 0000000000000004
[303001.712765] drbd a-encrypted ryzen: conn( StandAlone -> Unconnected ) [connect]
[303001.716188] drbd a-encrypted tiny: conn( StandAlone -> Unconnected ) [connect]
[303001.727844] drbd a-encrypted ryzen: Starting receiver thread (peer-node-id 1)
[303001.730340] drbd a-encrypted tiny: Starting receiver thread (peer-node-id 2)
[303001.730410] drbd a-encrypted tiny: conn( Unconnected -> Connecting ) [connecting]
[303001.730667] drbd a-encrypted ryzen: conn( Unconnected -> Connecting ) [connecting]
[303002.240663] drbd a-encrypted tiny: Handshake to peer 2 successful: Agreed network protocol version 123
[303002.240687] drbd a-encrypted tiny: Feature flags enabled on protocol level: 0x1ff TRIM THIN_RESYNC WRITE_SAME WRITE_ZEROES RESYNC_DAGTAG
[303002.240717] drbd a-encrypted ryzen: Handshake to peer 1 successful: Agreed network protocol version 123
[303002.240741] drbd a-encrypted ryzen: Feature flags enabled on protocol level: 0x1ff TRIM THIN_RESYNC WRITE_SAME WRITE_ZEROES RESYNC_DAGTAG
[303002.241745] drbd a-encrypted tiny: Peer authenticated using 20 bytes HMAC
[303002.241851] drbd a-encrypted ryzen: Peer authenticated using 20 bytes HMAC
[303002.244628] drbd a-encrypted: Preparing cluster-wide state change 3153436799: 0->1 role( Secondary ) conn( Connected )
[303002.264641] drbd a-encrypted/0 drbd1019 ryzen: drbd_sync_handshake:
[303002.264663] drbd a-encrypted/0 drbd1019 ryzen: self 0000000000000004:0000000000000000:DC68FCE9F561873C:0000000000000000 bits:0 flags:24
[303002.264678] drbd a-encrypted/0 drbd1019 ryzen: peer 15C8116C05BB675E:DC68FCE9F561873D:0000000000000000:0000000000000000 bits:0 flags:1020
[303002.264691] drbd a-encrypted/0 drbd1019 ryzen: uuid_compare()=target-set-bitmap by rule=just-created-self
[303002.264702] drbd a-encrypted/0 drbd1019 ryzen: Setting and writing the whole bitmap, fresh node
[303002.283914] drbd a-encrypted/0 drbd1019: bitmap WRITE of 280 pages took 16 ms
[303002.284017] drbd a-encrypted: Declined by peer ryzen (id: 1), see the kernel log there
[303002.284062] drbd a-encrypted: Aborting cluster-wide state change 3153436799 (36ms) rv = -10
[303002.284164] drbd a-encrypted ryzen: conn( Connecting -> Disconnecting ) [connect-failed]
[303002.284170] drbd a-encrypted/0 drbd1019: disk( Inconsistent -> Outdated ) [connect-failed]
[303002.286713] drbd a-encrypted ryzen: Terminating sender thread
[303002.286772] drbd a-encrypted ryzen: Starting sender thread (peer-node-id 1)
[303002.296474] drbd a-encrypted: Preparing cluster-wide state change 2076488740: 0->2 role( Secondary ) conn( Connected )
[303002.344349] drbd a-encrypted ryzen: Connection closed
[303002.344392] drbd a-encrypted ryzen: helper command: /sbin/drbdadm disconnected
[303002.348467] drbd a-encrypted/0 drbd1019 tiny: self 0000000000000004:0000000000000000:DC68FCE9F561873C:0000000000000000 bits:0 flags:0
[303002.348494] drbd a-encrypted/0 drbd1019 tiny: peer's exposed UUID: 15C8116C05BB675E
[303002.348531] drbd a-encrypted: State change 2076488740: primary_nodes=0, weak_nodes=0
[303002.348543] drbd a-encrypted: Committing cluster-wide state change 2076488740 (52ms)
[303002.348660] drbd a-encrypted tiny: conn( Connecting -> Connected ) peer( Unknown -> Secondary ) [connected]
[303002.348672] drbd a-encrypted/0 drbd1019 tiny: pdsk( DUnknown -> Diskless ) repl( Off -> Established ) [connected]
[303002.351735] drbd a-encrypted ryzen: helper command: /sbin/drbdadm disconnected exit code 0
[303002.351828] drbd a-encrypted ryzen: conn( Disconnecting -> StandAlone ) [disconnected]
[303002.351848] drbd a-encrypted ryzen: Terminating receiver thread
[303016.993769] drbd a-encrypted ryzen: conn( StandAlone -> Unconnected ) [connect]
[303016.993891] drbd a-encrypted ryzen: Starting receiver thread (peer-node-id 1)
[303016.994145] drbd a-encrypted ryzen: conn( Unconnected -> Connecting ) [connecting]
[303017.505346] drbd a-encrypted ryzen: Handshake to peer 1 successful: Agreed network protocol version 123
[303017.505369] drbd a-encrypted ryzen: Feature flags enabled on protocol level: 0x1ff TRIM THIN_RESYNC WRITE_SAME WRITE_ZEROES RESYNC_DAGTAG
[303017.506955] drbd a-encrypted ryzen: Peer authenticated using 20 bytes HMAC
[303017.509083] drbd a-encrypted: Preparing cluster-wide state change 342049303: 0->1 role( Secondary ) conn( Connected )
[303017.529196] drbd a-encrypted/0 drbd1019 ryzen: Peer sent bogus sizes, disconnecting
[303017.529286] drbd a-encrypted/0 drbd1019 ryzen: drbd_sync_handshake:
[303017.529295] drbd a-encrypted/0 drbd1019 ryzen: self 0000000000000004:0000000000000000:DC68FCE9F561873C:0000000000000000 bits:1307359 flags:20
[303017.529309] drbd a-encrypted/0 drbd1019 ryzen: peer 15C8116C05BB675E:DC68FCE9F561873D:0000000000000000:0000000000000000 bits:1311454 flags:1020
[303017.529323] drbd a-encrypted/0 drbd1019 ryzen: uuid_compare()=target-set-bitmap by rule=just-created-self
[303017.529334] drbd a-encrypted/0 drbd1019 ryzen: Setting and writing the whole bitmap, fresh node
[303017.529995] drbd a-encrypted: Declined by peer ryzen (id: 1), see the kernel log there
[303017.530029] drbd a-encrypted: Aborting cluster-wide state change 342049303 (20ms) rv = -10
[303017.530204] drbd a-encrypted ryzen: conn( Connecting -> Disconnecting ) [connect-failed]
[303017.530912] drbd a-encrypted ryzen: Terminating sender thread
[303017.531000] drbd a-encrypted ryzen: Starting sender thread (peer-node-id 1)
[303017.563675] drbd a-encrypted ryzen: Connection closed
[303017.563725] drbd a-encrypted ryzen: helper command: /sbin/drbdadm disconnected
[303017.567904] drbd a-encrypted ryzen: helper command: /sbin/drbdadm disconnected exit code 0
[303017.567999] drbd a-encrypted ryzen: conn( Disconnecting -> StandAlone ) [disconnected]
[303017.568016] drbd a-encrypted ryzen: Terminating receiver thread

From the LVM node

[325022.351965] drbd a-encrypted: Starting worker thread (node-id 1)
[325022.354138] drbd a-encrypted omv: Starting sender thread (peer-node-id 0)
[325022.354893] drbd a-encrypted tiny: Starting sender thread (peer-node-id 2)
[325022.364434] drbd a-encrypted/0 drbd1019: meta-data IO uses: blk-bio
[325022.364578] drbd a-encrypted/0 drbd1019: disk( Diskless -> Attaching ) [attach]
[325022.364587] drbd a-encrypted/0 drbd1019: Maximum number of peer devices = 7
[325022.364762] drbd a-encrypted: Method to ensure write ordering: flush
[325022.364770] drbd a-encrypted/0 drbd1019: drbd_bm_resize called with capacity == 10491632
[325022.365208] drbd a-encrypted/0 drbd1019: resync bitmap: bits=1311454 bits_4k=1311454 words=143444 pages=281
[325022.365215] drbd a-encrypted/0 drbd1019: size = 5123 MB (5245816 KB)
[325022.366834] drbd a-encrypted/0 drbd1019: disk( Attaching -> Inconsistent ) [attach]
[325022.366837] drbd a-encrypted/0 drbd1019: attached to current UUID: 0000000000000004
[325022.368655] drbd a-encrypted omv: conn( StandAlone -> Unconnected ) [connect]
[325022.369315] drbd a-encrypted tiny: conn( StandAlone -> Unconnected ) [connect]
[325022.369447] drbd a-encrypted omv: Starting receiver thread (peer-node-id 0)
[325022.369490] drbd a-encrypted omv: conn( Unconnected -> Connecting ) [connecting]
[325022.369498] drbd a-encrypted tiny: Starting receiver thread (peer-node-id 2)
[325022.369531] drbd a-encrypted tiny: conn( Unconnected -> Connecting ) [connecting]
[325022.547466] drbd a-encrypted: Preparing cluster-wide state change 608941056: 1->all role( Primary ) disk( UpToDate )
[325022.547470] drbd a-encrypted: Committing cluster-wide state change 608941056 (0ms)
[325022.547474] drbd a-encrypted: role( Secondary -> Primary ) [primary]
[325022.547477] drbd a-encrypted/0 drbd1019: disk( Inconsistent -> UpToDate ) quorum( no -> yes ) [primary]
[325022.547479] drbd a-encrypted/0 drbd1019 tiny: pdsk( DUnknown -> Outdated ) [primary]
[325022.547481] drbd a-encrypted/0 drbd1019 omv: pdsk( DUnknown -> Outdated ) [primary]
[325022.547533] drbd a-encrypted/0 drbd1019: persisting effective size = 5123 MB (5245816 KB)
[325022.548407] drbd a-encrypted: Forced to consider local data as UpToDate!
[325022.548410] drbd a-encrypted: Forced to consider peers as Outdated!
[325022.550072] drbd a-encrypted/0 drbd1019: new current UUID: 15C8116C05BB675F weak: FFFFFFFFFFFFFFFD
[325022.553188] drbd a-encrypted: Preparing cluster-wide state change 3382306153: 1->all role( Secondary )
[325022.553193] drbd a-encrypted: Committing cluster-wide state change 3382306153 (0ms)
[325022.553199] drbd a-encrypted: role( Primary -> Secondary ) [secondary]
[325022.876012] drbd a-encrypted tiny: Handshake to peer 2 successful: Agreed network protocol version 123
[325022.876020] drbd a-encrypted tiny: Feature flags enabled on protocol level: 0x1ff TRIM THIN_RESYNC WRITE_SAME WRITE_ZEROES RESYNC_DAGTAG
[325022.876853] drbd a-encrypted tiny: Peer authenticated using 20 bytes HMAC
[325022.877157] drbd a-encrypted: Preparing cluster-wide state change 3910019162: 1->2 role( Secondary ) conn( Connected )
[325022.885829] drbd a-encrypted/0 drbd1019 tiny: self 15C8116C05BB675E:DC68FCE9F561873D:0000000000000000:0000000000000000 bits:0 flags:0
[325022.885836] drbd a-encrypted/0 drbd1019 tiny: peer's exposed UUID: 0000000000000000
[325022.885848] drbd a-encrypted: State change 3910019162: primary_nodes=0, weak_nodes=0
[325022.885852] drbd a-encrypted: Committing cluster-wide state change 3910019162 (8ms)
[325022.885872] drbd a-encrypted tiny: conn( Connecting -> Connected ) peer( Unknown -> Secondary ) [connected]
[325022.885876] drbd a-encrypted/0 drbd1019 tiny: pdsk( Outdated -> Diskless ) repl( Off -> Established ) [connected]
[325026.389115] drbd a-encrypted omv: Handshake to peer 0 successful: Agreed network protocol version 123
[325026.389124] drbd a-encrypted omv: Feature flags enabled on protocol level: 0x1ff TRIM THIN_RESYNC WRITE_SAME WRITE_ZEROES RESYNC_DAGTAG
[325026.390144] drbd a-encrypted omv: Peer authenticated using 20 bytes HMAC
[325026.393047] drbd a-encrypted: Preparing remote state change 3153436799: 0->1 role( Secondary ) conn( Connected )
[325026.401122] drbd a-encrypted/0 drbd1019 omv: The peer's disk size is too small! (10458872 < 10491632 sectors)
[325026.401149] drbd a-encrypted/0 drbd1019 omv: drbd_sync_handshake:
[325026.401152] drbd a-encrypted/0 drbd1019 omv: self 15C8116C05BB675E:DC68FCE9F561873D:0000000000000000:0000000000000000 bits:0 flags:20
[325026.401157] drbd a-encrypted/0 drbd1019 omv: peer 0000000000000004:0000000000000000:0000000000000000:0000000000000000 bits:0 flags:24
[325026.401161] drbd a-encrypted/0 drbd1019 omv: uuid_compare()=source-set-bitmap by rule=just-created-peer
[325026.401164] drbd a-encrypted/0 drbd1019 omv: Setting and writing one bitmap slot, after drbd_sync_handshake
[325026.432424] drbd a-encrypted omv: Aborting remote state change 3153436799
[325026.432466] drbd a-encrypted omv: conn( Connecting -> Disconnecting ) [receive-disconnect]
[325026.432526] drbd a-encrypted omv: Terminating sender thread
[325026.432532] drbd a-encrypted omv: Starting sender thread (peer-node-id 0)
[325026.443098] drbd a-encrypted omv: Connection closed
[325026.443109] drbd a-encrypted omv: helper command: /sbin/drbdadm disconnected
[325026.444639] drbd a-encrypted omv: helper command: /sbin/drbdadm disconnected exit code 0
[325026.444666] drbd a-encrypted omv: conn( Disconnecting -> StandAlone ) [disconnected]
[325026.444673] drbd a-encrypted omv: Terminating receiver thread
[325026.445080] drbd a-encrypted: Preparing remote state change 2076488740: 0->2 role( Secondary ) conn( Connected )
[325026.448369] drbd a-encrypted/0 drbd1019 tiny: The peer's disk size is too small! (10458872 < 10491632 sectors)
[325026.497454] drbd a-encrypted tiny: Committing remote state change 2076488740 (primary_nodes=0)
[325041.093022] drbd a-encrypted omv: conn( StandAlone -> Unconnected ) [connect]
[325041.093068] drbd a-encrypted omv: Starting receiver thread (peer-node-id 0)
[325041.093133] drbd a-encrypted omv: conn( Unconnected -> Connecting ) [connecting]
[325041.653577] drbd a-encrypted omv: Handshake to peer 0 successful: Agreed network protocol version 123
[325041.653584] drbd a-encrypted omv: Feature flags enabled on protocol level: 0x1ff TRIM THIN_RESYNC WRITE_SAME WRITE_ZEROES RESYNC_DAGTAG
[325041.655700] drbd a-encrypted omv: Peer authenticated using 20 bytes HMAC
[325041.657265] drbd a-encrypted: Preparing remote state change 342049303: 0->1 role( Secondary ) conn( Connected )
[325041.666090] drbd a-encrypted/0 drbd1019 omv: The peer's disk size is too small! (10458872 < 10491632 sectors)
[325041.666127] drbd a-encrypted/0 drbd1019 omv: drbd_sync_handshake:
[325041.666131] drbd a-encrypted/0 drbd1019 omv: self 15C8116C05BB675E:DC68FCE9F561873D:0000000000000000:0000000000000000 bits:1311454 flags:20
[325041.666135] drbd a-encrypted/0 drbd1019 omv: peer 0000000000000004:0000000000000000:DC68FCE9F561873C:0000000000000000 bits:1307359 flags:20
[325041.666141] drbd a-encrypted/0 drbd1019 omv: uuid_compare()=source-set-bitmap by rule=just-created-peer
[325041.666144] drbd a-encrypted/0 drbd1019 omv: Setting and writing one bitmap slot, after drbd_sync_handshake
[325041.677819] drbd a-encrypted omv: Aborting remote state change 342049303
[325041.678061] drbd a-encrypted omv: conn( Connecting -> Disconnecting ) [receive-disconnect]
[325041.678141] drbd a-encrypted omv: Terminating sender thread
[325041.678148] drbd a-encrypted omv: Starting sender thread (peer-node-id 0)
[325041.682956] drbd a-encrypted omv: Connection closed
[325041.682965] drbd a-encrypted omv: helper command: /sbin/drbdadm disconnected
[325041.684293] drbd a-encrypted omv: helper command: /sbin/drbdadm disconnected exit code 0
[325041.684316] drbd a-encrypted omv: conn( Disconnecting -> StandAlone ) [disconnected]
[325041.684322] drbd a-encrypted omv: Terminating receiver thread
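If I read the two dmesg outputs right, the underlying problem seems to be simply that the DRBD backing devices end up with different usable sizes on the two drivers:

# ZFS node: 10458872 sectors = 5229436 KiB
# LVM node: 10491632 sectors = 5245816 KiB
# difference: 10491632 - 10458872 = 32760 sectors * 512 B = 16380 KiB

That is just under 16 MiB, suspiciously close to both a default LUKS2 header and four 4 MiB LVM extents. Whichever layer is responsible, DRBD ends up refusing to pair two lower devices of different sizes, hence the "Peer sent bogus sizes" and "peer's disk size is too small" rejections above.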

Questions

  • Is it actually possible to run LUKS across different storage drivers, or am I combining unsupported features?
  • Is there any other suggestion on how to achieve encrypted backups to an off-site location with Linstor backups, without using the LUKS layer?

Thanks to anyone who can help me figure out these feature-integration and deployment-design hurdles.

Testing it further, this seems to be an issue in the LUKS layer implementation, not an intrinsic issue in DRBD.

Using the same setup, I’ve been able to manually encrypt a DRBD,STORAGE resource and have it synced without causing the StandAlone state. Unfortunately, this also means losing the automatic luksOpen operation that Linstor provides when accessing the drives, which makes the CSI integration not viable.
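For context, the automatic unlocking I’d be giving up is the one driven by the controller’s master passphrase, i.e. the usual:

linstor encryption create-passphrase    # set once on the controller
linstor encryption enter-passphrase     # re-enter after a controller restart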

To reproduce it, I’ve created a resource without LUKS, using my mixed-storage-driver resource group. This allocated the resource as expected, exposing it as a device on both the ZFS node and the LVM node.

Then, following the typical LUKS and mkfs volume creation, I was able to write a file on the ZFS node. After unmounting and closing the LUKS container, I was able to luksOpen it on the LVM node and read the file there.

# controller
linstor resource-group spawn plain-w-encrypted 1Gib -l DRBD,STORAGE
# ZFS node
cryptsetup luksFormat  /dev/drbd/by-res/plain-w-encrypted/0
cryptsetup luksOpen  /dev/drbd/by-res/plain-w-encrypted/0 storage
mkfs.xfs /dev/mapper/storage

mkdir mountpoint
mount /dev/mapper/storage mountpoint
echo a > mountpoint/hello.txt

umount mountpoint
cryptsetup luksClose storage
# LVM node
cryptsetup luksOpen  /dev/drbd/by-res/plain-w-encrypted/0 storage
mkdir mountpoint
mount /dev/mapper/storage mountpoint
cat mountpoint/hello.txt

umount mountpoint
cryptsetup luksClose storage

There seems to be an issue in the LUKS layer implementation during initial creation and initial sync. My guess is that the layer luksFormats the first two backing devices independently, so each ends up with different random data in its header, causing a split-brain scenario.
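One way to check this guess would be to compare the LUKS2 headers and raw sizes of the backing devices directly on the two diskful nodes (the device path below is a placeholder for the LVM/ZFS backing volume LINSTOR creates underneath DRBD):

cryptsetup luksDump /dev/<backing-device> | grep -i -A2 offset
blockdev --getsize64 /dev/<backing-device>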

It seems I would be able to use LUKS manually, but the automated volume coordination feature is then lost.

There seem to be more oddities in the LUKS layer implementation. To check whether the order of creation can work around the resource creation issues, I tried creating the resource with a single placement and then adding it to a node with a different storage driver.

To validate this, I tried creating the resource on each storage driver first and then adding the second disk. Here it is with ZFS first:

linstor rg spawn nomad enc-zfs-first 1Gib -l DRBD,LUKS,STORAGE --place-count 1 --storage-pool backup
# wait for resource creation completion
linstor resource create ryzen enc-zfs-first --storage-pool compute

This worked, but it triggered a full sync of the ZFS thin volume, leaving about 16 MiB allocated on ZFS and the full 1 GiB on LVM. This might be an acceptable workaround for smaller sizes, but it is not effective for larger volumes, as it effectively copies random data.

LINSTOR ==> r list-volumes -r enc-zfs-first
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ Resource      ┊ Node    ┊ StoragePool          ┊ VolNr ┊ MinorNr ┊ DeviceName    ┊ Allocated ┊ InUse  ┊      State ┊ Repl           ┊
╞═════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ enc-zfs-first ┊ omv     ┊ backup               ┊     0 ┊    1020 ┊ /dev/drbd1020 ┊ 16.39 MiB ┊ Unused ┊   UpToDate ┊ Established(2) ┊
┊ enc-zfs-first ┊ romulus ┊ DfltDisklessStorPool ┊     0 ┊    1020 ┊ /dev/drbd1020 ┊           ┊ Unused ┊ TieBreaker ┊ Established(2) ┊
┊ enc-zfs-first ┊ ryzen   ┊ compute              ┊     0 ┊    1020 ┊ /dev/drbd1020 ┊  1.00 GiB ┊ Unused ┊   UpToDate ┊ Established(2) ┊
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

This does not happen if I create the plain storage volume with the manual cryptsetup luksFormat on top. The disk usage is minimal in this case. Here it is for comparison:

LINSTOR ==> r list-volumes -r plain-w-encrypted
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ Resource          ┊ Node  ┊ StoragePool          ┊ VolNr ┊ MinorNr ┊ DeviceName    ┊ Allocated ┊ InUse  ┊      State ┊ Repl           ┊
╞═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ plain-w-encrypted ┊ omv   ┊ backup               ┊     0 ┊    1019 ┊ /dev/drbd1019 ┊    76 KiB ┊ Unused ┊   UpToDate ┊ Established(2) ┊
┊ plain-w-encrypted ┊ ryzen ┊ compute              ┊     0 ┊    1019 ┊ /dev/drbd1019 ┊   421 KiB ┊ Unused ┊   UpToDate ┊ Established(2) ┊
┊ plain-w-encrypted ┊ tiny  ┊ DfltDisklessStorPool ┊     0 ┊    1019 ┊ /dev/drbd1019 ┊           ┊ Unused ┊ TieBreaker ┊ Established(2) ┊
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
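For what it’s worth, this can also be cross-checked on the backends with the native tools (the _00000 suffix is LINSTOR’s volume naming; the dataset/VG names below are from my pools):

# ZFS node
zfs list nas/plain-w-encrypted_00000
# LVM node
lvs -o lv_name,lv_size,data_percent pve/plain-w-encrypted_00000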

Despite the slow full copy, the resulting volume is usable from the LVM machine, with the LUKS open fully automated:

# LVM machine
mkfs.xfs /dev/drbd/by-res/enc-zfs-first/0
mkdir -p mountpoint
mount /dev/drbd/by-res/enc-zfs-first/0 mountpoint
echo hello > mountpoint/hello.txt
umount mountpoint
# ZFS machine
mount /dev/drbd/by-res/enc-zfs-first/0 mountpoint
echo hello > mountpoint/hello.txt
umount mountpoint

As a second test, I’ve tried the other way around, starting with a single LVM volume and adding the ZFS node later.

linstor rg spawn nomad enc-lvm-first 1Gib -l DRBD,LUKS,STORAGE --place-count 1 --storage-pool compute
# wait for resource creation completion
linstor resource create omv enc-lvm-first --storage-pool backup

After a while, the allocated sizes end up much closer, but still different, as expected due to the different sector and extent sizes between the drivers. This led to an immediate split brain with the StandAlone state, leaving the volumes unusable.

LINSTOR ==> r list-volumes -r enc-lvm-first
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ Resource      ┊ Node      ┊ StoragePool          ┊ VolNr ┊ MinorNr ┊ DeviceName    ┊  Allocated ┊ InUse  ┊        State ┊ Repl           ┊
╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ enc-lvm-first ┊ omv       ┊ backup               ┊     0 ┊    1022 ┊ /dev/drbd1022 ┊  16.39 MiB ┊ Unused ┊ Inconsistent ┊ Established(1) ┊
┊ enc-lvm-first ┊ rotterdam ┊ DfltDisklessStorPool ┊     0 ┊    1022 ┊ /dev/drbd1022 ┊            ┊ Unused ┊   TieBreaker ┊ Established(2) ┊
┊ enc-lvm-first ┊ ryzen     ┊ compute              ┊     0 ┊    1022 ┊ /dev/drbd1022 ┊ 104.61 MiB ┊ Unused ┊     UpToDate ┊ Established(1) ┊
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

Seeing how the manual process of creating the LUKS volume works as expected on top of DRBD storage, it seems that the LUKS layer coordination/implementation is what prevents the different storage drivers from working together. I’ll try to report this as a GitHub issue later when I can, but I thought I’d post my research so far in case someone else has ideas on how to mitigate, work around, or even fix this.

Thanks for the detail on your progress here. While I can’t speak to whether or not using LUKS with mixed storage pools is supported right now, I’d be interested to see what the Controller logs look like during the tests with it, as well as whether any LINSTOR error reports are generated alongside that. You might need to increase your log verbosity temporarily; here is how to do that:
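Something along these lines should work, assuming the standard linstor.toml / linstor_satellite.toml locations (adjust the paths if your controller runs in a container):

# /etc/linstor/linstor.toml on the controller,
# /etc/linstor/linstor_satellite.toml on the satellites
[logging]
  level = "TRACE"

# restart the services to pick up the change
systemctl restart linstor-controller
systemctl restart linstor-satellite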

You can find both of these easily in the sos-report by running something like this: linstor sos-report download --since 4h

Browsing the repo, I came across this commit, which introduces a new whitelisted property to configure LUKS format/open arguments. Unfortunately, the new property does not yet seem to be propagated across linstor-server, linstor-client, or linstor-gui.

rg set-property nomad StorDriver/LukscreateOptions "a"

Description:
    Invalid property key: StorDriver/LukscreateOptions
Cause:
    The key 'StorDriver/LukscreateOptions' is not whitelisted.
Details:
    Resource group: nomad
Show reports:
    linstor error-reports show 696A7515-00000-000009

ERROR:
Description:
    Invalid property key: StorDriver/LukscreateOptions
Cause:
    The key 'StorDriver/LukscreateOptions' is not whitelisted.
Details:
    Resource group: nomad
Show reports:
    linstor error-reports show 696A7515-00000-000007

LINSTOR ==>  error-reports show 696A7515-00000-000007
ERROR REPORT 696A7515-00000-000007

============================================================

Application:                        LINBIT® LINSTOR
Module:                             Controller
Version:                            1.32.3
Build ID:                           853b9e1be82f5ab63628327887645cb3e7236c89
Build time:                         2026-01-12T13:37:52+00:00
Error time:                         2026-01-16 18:16:05
Node:                               4946cc403c95
Thread:                             grizzly-http-server-27
Access context information

Identity:                           PUBLIC
Role:                               PUBLIC
Domain:                             PUBLIC

Peer:                               RestClient(172.17.0.1; 'PythonLinstor/1.27.1 (API1.0.4): Client 1.27.1')

============================================================

Reported error:
===============

Category:                           RuntimeException
Class name:                         ApiRcException
Class canonical name:               com.linbit.linstor.core.apicallhandler.response.ApiRcException
Generated at:                       Method 'fillProperties', Source file 'CtrlPropsHelper.java', Line #629

Error message:                      Invalid property key: StorDriver/LukscreateOptions

Error context:
        Invalid property key: StorDriver/LukscreateOptions
Call backtrace:

    Method                                   Native Class:Line number
    fillProperties                           N      com.linbit.linstor.core.apicallhandler.controller.CtrlPropsHelper:629
    modifyInTransaction                      N      com.linbit.linstor.core.apicallhandler.controller.CtrlRscGrpApiCallHandler:546
    lambda$modify$3                          N      com.linbit.linstor.core.apicallhandler.controller.CtrlRscGrpApiCallHandler:465
    doInScope                                N      com.linbit.linstor.core.apicallhandler.ScopeRunner:178
    lambda$fluxInScope$0                     N      com.linbit.linstor.core.apicallhandler.ScopeRunner:101
    call                                     N      reactor.core.publisher.MonoCallable:72
    trySubscribeScalarMap                    N      reactor.core.publisher.FluxFlatMap:128
    subscribeOrReturn                        N      reactor.core.publisher.MonoFlatMapMany:49
    subscribe                                N      reactor.core.publisher.Flux:8833
    onNext                                   N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:196
    request                                  N      reactor.core.publisher.Operators$ScalarSubscription:2570
    onSubscribe                              N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:141
    subscribe                                N      reactor.core.publisher.MonoJust:55
    subscribe                                N      reactor.core.publisher.MonoDeferContextual:55
    subscribe                                N      reactor.core.publisher.Flux:8848
    onNext                                   N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:196
    request                                  N      reactor.core.publisher.Operators$ScalarSubscription:2570
    onSubscribe                              N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:141
    subscribe                                N      reactor.core.publisher.MonoJust:55
    subscribe                                N      reactor.core.publisher.MonoDeferContextual:55
    subscribe                                N      reactor.core.publisher.InternalMonoOperator:76
    subscribe                                N      reactor.core.publisher.MonoUsing:102
    subscribe                                N      reactor.core.publisher.Mono:4576
    subscribeWith                            N      reactor.core.publisher.Mono:4642
    subscribe                                N      reactor.core.publisher.Mono:4542
    subscribe                                N      reactor.core.publisher.Mono:4478
    subscribe                                N      reactor.core.publisher.Mono:4450
    doFlux                                   N      com.linbit.linstor.api.rest.v1.RequestHelper:345
    modifyResourceGroup                      N      com.linbit.linstor.api.rest.v1.ResourceGroups:230
    invoke0                                  Y      jdk.internal.reflect.NativeMethodAccessorImpl:unknown
    invoke                                   N      jdk.internal.reflect.NativeMethodAccessorImpl:77
    invoke                                   N      jdk.internal.reflect.DelegatingMethodAccessorImpl:43
    invoke                                   N      java.lang.reflect.Method:569
    lambda$static$0                          N      org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory:52
    run                                      N      org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1:146
    invoke                                   N      org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher:189
    doDispatch                               N      org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$VoidOutInvoker:159
    dispatch                                 N      org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher:93
    invoke                                   N      org.glassfish.jersey.server.model.ResourceMethodInvoker:478
    apply                                    N      org.glassfish.jersey.server.model.ResourceMethodInvoker:400
    apply                                    N      org.glassfish.jersey.server.model.ResourceMethodInvoker:81
    run                                      N      org.glassfish.jersey.server.ServerRuntime$1:256
    call                                     N      org.glassfish.jersey.internal.Errors$1:248
    call                                     N      org.glassfish.jersey.internal.Errors$1:244
    process                                  N      org.glassfish.jersey.internal.Errors:292
    process                                  N      org.glassfish.jersey.internal.Errors:274
    process                                  N      org.glassfish.jersey.internal.Errors:244
    runInScope                               N      org.glassfish.jersey.process.internal.RequestScope:265
    process                                  N      org.glassfish.jersey.server.ServerRuntime:235
    handle                                   N      org.glassfish.jersey.server.ApplicationHandler:684
    service                                  N      org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer:356
    run                                      N      org.glassfish.grizzly.http.server.HttpHandler$1:190
    doWork                                   N      org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker:535
    run                                      N      org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker:515
    run                                      N      java.lang.Thread:840


END OF ERROR REPORT.

I also came across PR#472, which introduces a fixed offset size to ensure consistent offsets on different nodes. I tried a custom build with this PR included, but spawning a resource failed due to a process-synchronization issue.

With the custom build, I hit an error where the .res file was not created fast enough for the drbdadm process to start:

rg spawn nomad x-enc-lvm 1Gib -l DRBD,LUKS,STORAGE --place-count 1 --storage-pool compute

SUCCESS:
    Volume definition with number '0' successfully  created in resource definition 'x-enc-lvm'.
SUCCESS:
Description:
    New resource definition 'x-enc-lvm' created.
Details:
    Resource definition 'x-enc-lvm' UUID is: 366f2c72-7fdb-4f7f-a283-921f795c0ef0
SUCCESS:
    Successfully set property key(s): StorPoolName
SUCCESS:
Description:
    Resource 'x-enc-lvm' successfully autoplaced on 1 nodes
Details:
    Used nodes (storage pool name): 'tiny (compute)'
INFO:
    Updated x-enc-lvm DRBD auto verify algorithm to 'sha512'
ERROR:
    (tiny) Failed to create meta-data for DRBD volume x-enc-lvm/0
Show reports:
    linstor error-reports show 696A7581-63103-000004


r list
╭──────────────────────────────────────────────────────────────────────────────────────────╮
┊ ResourceName ┊ Node ┊ Layers            ┊ Usage  ┊ Conns ┊   State ┊ CreatedOn           ┊
╞══════════════════════════════════════════════════════════════════════════════════════════╡
┊ x-enc-lvm    ┊ tiny ┊ DRBD,LUKS,STORAGE ┊ Unused ┊       ┊ Unknown ┊ 2026-01-16 18:31:08 ┊
╰──────────────────────────────────────────────────────────────────────────────────────────╯
/etc/drbd.d/linstor-resources.res:1: no match for include pattern '/var/lib/linstor.d/*.res'.
no resources defined!
LINSTOR ==> error-reports show 696A7581-63103-000004
ERROR REPORT 696A7581-63103-000004

============================================================

Application:                        LINBIT® LINSTOR
Module:                             Satellite
Version:                            1.32.3
Build ID:                           853b9e1be82f5ab63628327887645cb3e7236c89
Build time:                         2026-01-12T13:37:52+00:00
Error time:                         2026-01-16 18:25:23
Node:                               tiny
Thread:                             DeviceManager

============================================================

Reported error:
===============

Category:                           LinStorException
Class name:                         VolumeException
Class canonical name:               com.linbit.linstor.core.devmgr.exceptions.VolumeException
Generated at:                       Method 'createMetaData', Source file 'DrbdLayer.java', Line #1369

Error message:                      Failed to create meta-data for DRBD volume x-enc-lvm/0

Error context:
        An error occurred while processing resource 'Node: 'tiny', Rsc: 'x-enc-lvm''
ErrorContext:


Call backtrace:

    Method                                   Native Class:Line number
    createMetaData                           N      com.linbit.linstor.layer.drbd.DrbdLayer:1369
    adjustDrbd                               N      com.linbit.linstor.layer.drbd.DrbdLayer:622
    processResource                          N      com.linbit.linstor.layer.drbd.DrbdLayer:281
    lambda$processResource$1                 N      com.linbit.linstor.core.devmgr.DeviceHandlerImpl:1352
    processGeneric                           N      com.linbit.linstor.core.devmgr.DeviceHandlerImpl:1395
    processResource                          N      com.linbit.linstor.core.devmgr.DeviceHandlerImpl:1348
    processResources                         N      com.linbit.linstor.core.devmgr.DeviceHandlerImpl:382
    dispatchResources                        N      com.linbit.linstor.core.devmgr.DeviceHandlerImpl:224
    dispatchResources                        N      com.linbit.linstor.core.devmgr.DeviceManagerImpl:333
    phaseDispatchDeviceHandlers              N      com.linbit.linstor.core.devmgr.DeviceManagerImpl:1139
    devMgrLoop                               N      com.linbit.linstor.core.devmgr.DeviceManagerImpl:778
    run                                      N      com.linbit.linstor.core.devmgr.DeviceManagerImpl:674
    run                                      N      java.lang.Thread:840

Caused by:
==========

Category:                           LinStorException
Class name:                         ExtCmdFailedException
Class canonical name:               com.linbit.extproc.ExtCmdFailedException
Generated at:                       Method 'execute', Source file 'DrbdAdm.java', Line #797

Error message:                      The external command 'drbdadm' exited with error code 1


ErrorContext:
  Description: Execution of the external command 'drbdadm' failed.
  Cause:       The external command exited with error code 1.
  Correction:  - Check whether the external program is operating properly.
- Check whether the command line is correct.
  Contact a system administrator or a developer if the command line is no longer valid
  for the installed version of the external program.
  Details:     The full command line executed was:
drbdadm -vvv --max-peers 7 -- --force create-md x-enc-lvm/0

The external command sent the following output data:


The external command sent the following error information:
/etc/drbd.d/linstor-resources.res:1: no match for include pattern '/var/lib/linstor.d/*.res'.
no resources defined!




Call backtrace:

    Method                                   Native Class:Line number
    execute                                  N      com.linbit.linstor.layer.drbd.utils.DrbdAdm:797
    simpleAdmCommand                         N      com.linbit.linstor.layer.drbd.utils.DrbdAdm:759
    createMd                                 N      com.linbit.linstor.layer.drbd.utils.DrbdAdm:386
    createMetaData                           N      com.linbit.linstor.layer.drbd.DrbdLayer:1331
    adjustDrbd                               N      com.linbit.linstor.layer.drbd.DrbdLayer:622
    processResource                          N      com.linbit.linstor.layer.drbd.DrbdLayer:281
    lambda$processResource$1                 N      com.linbit.linstor.core.devmgr.DeviceHandlerImpl:1352
    processGeneric                           N      com.linbit.linstor.core.devmgr.DeviceHandlerImpl:1395
    processResource                          N      com.linbit.linstor.core.devmgr.DeviceHandlerImpl:1348
    processResources                         N      com.linbit.linstor.core.devmgr.DeviceHandlerImpl:382
    dispatchResources                        N      com.linbit.linstor.core.devmgr.DeviceHandlerImpl:224
    dispatchResources                        N      com.linbit.linstor.core.devmgr.DeviceManagerImpl:333
    phaseDispatchDeviceHandlers              N      com.linbit.linstor.core.devmgr.DeviceManagerImpl:1139
    devMgrLoop                               N      com.linbit.linstor.core.devmgr.DeviceManagerImpl:778
    run                                      N      com.linbit.linstor.core.devmgr.DeviceManagerImpl:674
    run                                      N      java.lang.Thread:840


END OF ERROR REPORT.


2026-01-16 18:30:24.238 [MainWorkerPool-2] INFO  LINSTOR/Satellite/000002 SYSTEM - SpaceInfo: compute -> 335511552/335511552
2026-01-16 18:30:24.238 [MainWorkerPool-2] INFO  LINSTOR/Satellite/000002 SYSTEM - SpaceInfo: DfltDisklessStorPool -> 9223372036854775807/9223372036854775807
2026-01-16 18:30:24.239 [MainWorkerPool-2] INFO  LINSTOR/Satellite/000002 SYSTEM - FullSync sending response 3
2026-01-16 18:30:40.316 [MainWorkerPool-3] INFO  LINSTOR/Satellite/000003 SYSTEM - SpaceInfo: compute -> 335511552/335511552
2026-01-16 18:30:40.317 [MainWorkerPool-3] INFO  LINSTOR/Satellite/000003 SYSTEM - SpaceInfo: DfltDisklessStorPool -> 9223372036854775807/9223372036854775807
2026-01-16 18:30:40.450 [DeviceManager] INFO  LINSTOR/Satellite/c5edc2 SYSTEM - Aligning x-enc-lvm/0 size from 1065224 KiB to 1069056 KiB to be a multiple of extent size 4096 KiB (from Storage Pool)
2026-01-16 18:30:40.583 [DeviceManager] INFO  LINSTOR/Satellite/c5edc2 SYSTEM - Volume number 0 of resource 'x-enc-lvm' [LVM-Thin] created
2026-01-16 18:30:52.774 [DeviceManager] ERROR LINSTOR/Satellite/c5edc2 SYSTEM - Failed to create meta-data for DRBD volume x-enc-lvm/0 [Report number 696A83BB-63103-000000]

2026-01-16 18:30:52.905 [DeviceManager] INFO  LINSTOR/Satellite/ SYSTEM - End DeviceManager cycle 2
2026-01-16 18:30:52.905 [DeviceManager] INFO  LINSTOR/Satellite/256ead SYSTEM - Begin DeviceManager cycle 3
2026-01-16 18:30:53.652 [MainWorkerPool-7] INFO  LINSTOR/Satellite/000006 SYSTEM - SpaceInfo: compute -> 335478000/335511552
2026-01-16 18:30:53.652 [MainWorkerPool-7] INFO  LINSTOR/Satellite/000006 SYSTEM - SpaceInfo: DfltDisklessStorPool -> 9223372036854775807/9223372036854775807
2026-01-16 18:31:08.580 [DeviceManager] INFO  LINSTOR/Satellite/256ead SYSTEM - Aligning /dev/pve/x-enc-lvm_00000 size from 1065224 KiB to 1069056 KiB to be a multiple of extent size 4096 KiB (from Storage Pool)
2026-01-16 18:31:08.668 [DeviceManager] INFO  LINSTOR/Satellite/256ead SYSTEM - DRBD regenerated resource file: /var/lib/linstor.d/x-enc-lvm.res
2026-01-16 18:31:08.712 [DeviceManager] INFO  LINSTOR/Satellite/256ead SYSTEM - DRBD meta data created for x-enc-lvm/0
2026-01-16 18:31:08.719 [DeviceManager] INFO  LINSTOR/Satellite/256ead SYSTEM - DRBD skipping initial sync for x-enc-lvm/0
2026-01-16 18:31:08.723 [DeviceManager] INFO  LINSTOR/Satellite/256ead SYSTEM - Resource 'x-enc-lvm' [DRBD] adjusted.
2026-01-16 18:31:08.789 [DeviceManager] INFO  LINSTOR/Satellite/ SYSTEM - End DeviceManager cycle 3
2026-01-16 18:31:08.789 [MainWorkerPool-4] INFO  LINSTOR/Satellite/431642 SYSTEM - Primary Resource x-enc-lvm
2026-01-16 18:31:08.789 [MainWorkerPool-4] INFO  LINSTOR/Satellite/431642 SYSTEM - Primary bool set on Resource x-enc-lvm
2026-01-16 18:31:08.789 [DeviceManager] INFO  LINSTOR/Satellite/94878c SYSTEM - Begin DeviceManager cycle 4
2026-01-16 18:31:08.795 [DeviceManager] INFO  LINSTOR/Satellite/94878c SYSTEM - Aligning /dev/pve/x-enc-lvm_00000 size from 1065224 KiB to 1069056 KiB to be a multiple of extent size 4096 KiB (from Storage Pool)
2026-01-16 18:31:08.840 [DeviceManager] ERROR LINSTOR/Satellite/94878c SYSTEM - Failed to set resource 'x-enc-lvm' to primary [Report number 696A83BB-63103-000001]

2026-01-16 18:31:08.843 [DeviceManager] INFO  LINSTOR/Satellite/ SYSTEM - End DeviceManager cycle 4
2026-01-16 18:31:08.844 [DeviceManager] INFO  LINSTOR/Satellite/f16b64 SYSTEM - Begin DeviceManager cycle 5
2026-01-16 18:31:23.580 [DeviceManager] INFO  LINSTOR/Satellite/f16b64 SYSTEM - Aligning /dev/pve/x-enc-lvm_00000 size from 1065224 KiB to 1069056 KiB to be a multiple of extent size 4096 KiB (from Storage Pool)
2026-01-16 18:31:23.698 [DeviceManager] INFO  LINSTOR/Satellite/f16b64 SYSTEM - Resource 'x-enc-lvm' [DRBD] adjusted.
2026-01-16 18:31:23.739 [DeviceManager] INFO  LINSTOR/Satellite/ SYSTEM - End DeviceManager cycle 5
2026-01-16 18:31:23.739 [DeviceManager] INFO  LINSTOR/Satellite/813d5b SYSTEM - Begin DeviceManager cycle 6
2026-01-16 18:31:31.944 [MainWorkerPool-1] INFO  LINSTOR/Satellite/00000b SYSTEM - SpaceInfo: compute -> 335478000/335511552
2026-01-16 18:31:31.944 [MainWorkerPool-1] INFO  LINSTOR/Satellite/00000b SYSTEM - SpaceInfo: DfltDisklessStorPool -> 9223372036854775807/9223372036854775807

If I manually call drbdadm up $res, it connects with status OK, but it also failed similarly on the second node. I’ve tried serving /var/lib/linstor.d/ from tmpfs as well as from the container storage, but there seems to be a missing await somewhere on the server during creation on the latest release.
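For reference, the manual recovery was just the stock drbdadm calls against the LINSTOR-generated res file:

drbdadm up x-enc-lvm
drbdadm status x-enc-lvm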

I guess that if the StorDriver/Luks* properties were whitelisted, I could try setting a fixed --sector-size flag, since cryptsetup introspects the underlying storage device during creation, per its help text:


       --sector-size bytes
           Set encryption sector size for use with LUKS2 device type. It
           must be a power of two and in the 512 - 4096 bytes range.

           The encryption sector size is set based on the underlying data
           device if not specified explicitly. For native 4096-byte
           physical sector devices, it is set to 4096 bytes. For
           4096/512e (4096-byte physical sector size with 512-byte sector
           emulation), it is set to 4096 bytes. For drives reporting only
           a 512-byte physical sector size, it is set to 512 bytes. If
           the data device is a regular file (container), it is set to
           4096 bytes. 
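For comparison, the manual equivalent on top of a plain DRBD resource (as in the earlier test) would be to pin the value explicitly, for example:

cryptsetup luksFormat --type luks2 --sector-size 512 /dev/drbd/by-res/plain-w-encrypted/0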

I’ll try to fetch the logs with increased log levels later, but it seems the current release lacks the tuning options needed to pin consistent values across storage drivers.

I’ve tried moving the LUKS layer last, to simulate the manual attempt, but as expected and mentioned in the docs, STORAGE must be the last layer.


LINSTOR ==> rg spawn nomad experiment-luks-last-layer-lvm-first 1Gib -l DRBD,STORAGE,LUKS --place-count 1 --storage-pool compute
ERROR:
Description:
    The layer stack [DRBD, STORAGE, LUKS, STORAGE] is invalid
Details:
    Resource group: nomad
Show reports:
    linstor error-reports show 696A860E-00000-000000

I’ve collected the logs using TRACE. Is there an email address I can send them to, to avoid leaking any PII from the cluster, such as public IPv6 addresses?

I’m running further tests, trying a workaround based on the behaviours observed so far.

Given that, by creating the volume on ZFS first, I’m able to access it over LVM, with the only downside being that it effectively becomes “thick” storage by copying unused bytes, I’ve tried the following:

  • Create a Resource Definition only on backup ZFS storage
    • Make it small, 50MiB
  • Add a diskful resource on a compute LVM node
  • Let it copy 50MiB (36MiB)
  • Change size to 1GiB

This way, the resulting volume size on LVM is roughly just the header sizes plus the files modified on the ZFS side. Testing it further, I resized again to see how it behaves when adding a 3rd diskful volume:

  • Resize from 1GiB to 2GiB
    • No additional copy happens on ZFS or LVM disks
  • Add a diskful volume on a 3rd node on LVM
    • Node 3 copies the full 2 GiB, node 2 copies ~30 MiB, node 1 has ~16 MiB allocated

rg spawn nomad test-resize 30Mib --storage-pool backup --place-count 1
r make-available romulus test-resize --diskful
vd set-size test-resize 0 1GiB
vd set-size test-resize 0 2GiB
r make-available tiny test-resize --diskful

Resulting usage:


LINSTOR ==> r lv -r test-resize
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ Resource    ┊ Node      ┊ StoragePool          ┊ VolNr ┊ MinorNr ┊ DeviceName    ┊ Allocated ┊ InUse  ┊    State ┊ Repl           ┊
╞═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ test-resize ┊ omv       ┊ backup               ┊     0 ┊    1004 ┊ /dev/drbd1004 ┊ 16.93 MiB ┊ Unused ┊ UpToDate ┊ Established(4) ┊
┊ test-resize ┊ romulus   ┊ compute              ┊     0 ┊    1004 ┊ /dev/drbd1004 ┊ 31.02 MiB ┊ Unused ┊ UpToDate ┊ Established(4) ┊
┊ test-resize ┊ rotterdam ┊ DfltDisklessStorPool ┊     0 ┊    1004 ┊ /dev/drbd1004 ┊           ┊ Unused ┊ Diskless ┊ Established(3) ┊
┊ test-resize ┊ ryzen     ┊ DfltDisklessStorPool ┊     0 ┊    1004 ┊ /dev/drbd1004 ┊           ┊ Unused ┊ Diskless ┊ Established(3) ┊
┊ test-resize ┊ tiny      ┊ compute              ┊     0 ┊    1004 ┊ /dev/drbd1004 ┊  2.00 GiB ┊ Unused ┊ UpToDate ┊ Established(4) ┊
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

So it seems like LUKS is working, but it has a few oddities when the underlying storage drivers are different. I’ll try running it like this for a while to see if I can observe further issues, but starting with a small volume for at least 2 copies should work for my (non-production) use case.

I wouldn’t recommend relying on this myself, though.