Different AllocationGranularity on my nodes

I am running a 3-node Proxmox cluster with LINSTOR/DRBD.

Recently, I upgraded one server's disks from HDD to SSD. For this I recreated the ZFS zpool on that node, and the new pool now reports a different AllocationGranularity, presumably due to a more recent ZFS version.
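For what it's worth, the ZFS versions can be compared by running this on each node (the version numbers in the comment are only an illustration of the kind of difference I mean, not my actual output):

zfs version
# e.g. zfs-2.2.x on the rebuilt node vs. an older zfs-2.1.x on the other two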

The command “linstor -m sp l” shows:
"StorDriver/internal/AllocationGranularity": "16" on node 1
"StorDriver/internal/AllocationGranularity": "8" on nodes 2 and 3

At the same time, I have been using StorDriver/ZfscreateOptions = “-b 64k” on my resource group from the beginning, which sets the volblocksize to 64k.
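For completeness, this is roughly how such a property is set; the resource group name “rg_drbd” below is just a placeholder, not necessarily my actual one:

linstor resource-group set-property rg_drbd StorDriver/ZfscreateOptions "-b 64k"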

Currently I am not able to migrate resources from nodes 2 and 3 to node 1, which in itself is not really a problem; I can think of various ways to fix it.

However, my question is: what makes the most sense? I assume I should recreate the zpools on nodes 2 and 3 so that they also report AllocationGranularity = 16, but I'm wondering how that interacts with my volblocksize of 64k. Are there other/better options?

Any ideas on this will help me :slight_smile:
Thanks in advance.


linstor -m storage-pool list
[
  [
    {
      "storage_pool_name": "DfltDisklessStorPool",
      "node_name": "pve1",
      "provider_kind": "DISKLESS",
      "static_traits": {
        "SupportsSnapshots": "false"
      },
      "free_capacity": 9223372036854775807,
      "total_capacity": 9223372036854775807,
      "free_space_mgr_name": "pve1;DfltDisklessStorPool",
      "uuid": "0d9b1f16-c3a2-499c-a23e-fc20e6b157e0",
      "supports_snapshots": false,
      "external_locking": false
    },
    {
      "storage_pool_name": "DfltDisklessStorPool",
      "node_name": "pve2",
      "provider_kind": "DISKLESS",
      "static_traits": {
        "SupportsSnapshots": "false"
      },
      "free_capacity": 9223372036854775807,
      "total_capacity": 9223372036854775807,
      "free_space_mgr_name": "pve2;DfltDisklessStorPool",
      "uuid": "dcd4d766-9b50-4d86-85b7-4714aa967196",
      "supports_snapshots": false,
      "external_locking": false
    },
    {
      "storage_pool_name": "DfltDisklessStorPool",
      "node_name": "pve3",
      "provider_kind": "DISKLESS",
      "static_traits": {
        "SupportsSnapshots": "false"
      },
      "free_capacity": 9223372036854775807,
      "total_capacity": 9223372036854775807,
      "free_space_mgr_name": "pve3;DfltDisklessStorPool",
      "uuid": "d52a9bb1-f562-4dec-b326-21acf54c194d",
      "supports_snapshots": false,
      "external_locking": false
    },
    {
      "storage_pool_name": "drbd_disk",
      "node_name": "pve1",
      "provider_kind": "ZFS",
      "props": {
        "StorDriver/StorPoolName": "zpool_disk_drbd",
        "StorDriver/internal/AllocationGranularity": "16"
      },
      "static_traits": {
        "Provisioning": "Fat",
        "SupportsSnapshots": "true"
      },
      "free_capacity": 7344034195,
      "total_capacity": 9361686528,
      "free_space_mgr_name": "pve1;drbd_disk",
      "uuid": "c448a177-5185-44fb-89ff-e81ade460277",
      "supports_snapshots": true,
      "external_locking": false
    },
    {
      "storage_pool_name": "drbd_disk",
      "node_name": "pve2",
      "provider_kind": "ZFS",
      "props": {
        "StorDriver/StorPoolName": "zpool_disk_drbd",
        "StorDriver/internal/AllocationGranularity": "8"
      },
      "static_traits": {
        "Provisioning": "Fat",
        "SupportsSnapshots": "true"
      },
      "free_capacity": 4304483389,
      "total_capacity": 9361686528,
      "free_space_mgr_name": "pve2;drbd_disk",
      "uuid": "cb73c1f4-baae-4a59-90c1-9cfa8ff9a934",
      "supports_snapshots": true,
      "external_locking": false
    },
    {
      "storage_pool_name": "drbd_disk",
      "node_name": "pve3",
      "provider_kind": "ZFS",
      "props": {
        "StorDriver/StorPoolName": "zpool_disk_drbd",
        "StorDriver/internal/AllocationGranularity": "8"
      },
      "static_traits": {
        "Provisioning": "Fat",
        "SupportsSnapshots": "true"
      },
      "free_capacity": 4304441461,
      "total_capacity": 9361686528,
      "free_space_mgr_name": "pve3;drbd_disk",
      "uuid": "f0805868-e47d-43dc-a44c-9e9ed3460df2",
      "supports_snapshots": true,
      "external_locking": false
    }
  ]
]

Hi there,

From my understanding, the AllocationGranularity property reflects the default volblocksize used when creating a new zvol on a system with ZFS. That default recently changed from 8K to 16K, which shows up in the property after upgrading your node and creating new storage pools.
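If you want to verify the current default on a node, one way is to create a throwaway zvol without specifying -b and check what volblocksize it gets (pool name taken from your output above, the zvol name is just an example):

zfs create -V 1G zpool_disk_drbd/volblocksize_test
zfs get -H -o value volblocksize zpool_disk_drbd/volblocksize_test
zfs destroy zpool_disk_drbd/volblocksize_test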

Because you're overriding the volblocksize to 64K, the values above are somewhat irrelevant for your actual data, but unfortunately LINSTOR will still refuse to “mix” storage pools that report different allocation granularities. As for the best volblocksize value to use, that's going to depend on your workloads.

Because you're upgrading (and recreating) your storage pools, it might be best to use new names for the storage pools and resource groups you're recreating in LINSTOR. If it's possible to back up Proxmox VMs to separate storage (like NFS), using Proxmox's backup and restore functionality should be an easy way to migrate VMs from the old storage pool to the new one backed by SSDs.
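As a rough sketch only: assuming an NFS backup storage named “nfs-backup” and a new LINSTOR-backed Proxmox storage named “linstor-ssd” (both names and the VM ID 100 are made up for the example), the per-VM move could look something like this:

vzdump 100 --storage nfs-backup --mode snapshot
# restore onto the new storage, using the archive path that vzdump printed
qmrestore /mnt/pve/nfs-backup/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 --storage linstor-ssd --force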

Hi,

thanks for your reply. That is exactly what I did. :slight_smile:
