Unable to Freshly Reinstall Linstor

Hi everyone,

I’m new to Linstor.

I managed to successfully configure a three-node LINSTOR cluster for Proxmox VE by following these instructions: How to Setup LINSTOR on Proxmox VE - LINBIT

However, after an experimental temporary deployment, I’ve been unable to completely purge LINSTOR so that I can install it fresh again.

I couldn’t find any documentation or support online on how to do this. Is such information available, and if not, can someone please explain how it’s done properly?

Thank you in advance.

Hi linstor, welcome to the LINBIT forums. Clever username. :wink:

What is the exact issue you have on a fresh re-install? Is there a specific error or something you could share?

If I had to guess, I would suspect leftover LVM2 signatures on the disks are causing the trouble. You should be able to remove those with wipefs --all.
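For example, here is a safe way to see what wipefs --all does, demonstrated on a file-backed scratch image rather than a real disk (the /tmp path is just for illustration; on an actual node you would target the physical device instead):

```shell
# Demonstrate wipefs on a file-backed scratch image so nothing real is
# touched; on a real node you would target the physical device (e.g.
# /dev/sdb) instead of /tmp/scratch.img.
truncate -s 1M /tmp/scratch.img
mkswap /tmp/scratch.img >/dev/null   # leave a swap signature behind as a stand-in
wipefs /tmp/scratch.img              # lists the signatures wipefs can see
wipefs --all /tmp/scratch.img        # strips every signature it recognizes
wipefs /tmp/scratch.img              # prints nothing once the device is clean
```

Running wipefs with no options first is a good habit: it only lists what it found, so you can double-check you have the right device before passing --all.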


Hi @Devin,

Thanks for getting back to me rather quickly.

Funny name, isn’t it? Couldn’t believe it hadn’t been taken before.

Anyway, the problem I’m facing and can’t seem to fix, which is why I’m trying to reset LINSTOR to its defaults, is that none of the satellites will properly join the cluster. They appear as “connected” but never as “online”.

In the LINSTOR GUI, I see the following for each node:

I did wipe the hard drives that were part of a ZFS pool and then tried to add them to LINSTOR, but that didn’t work out either:

The controller is trying to (re-) establish a connection to the satellite. The controller stored the changes and as soon the satellite is connected, it will receive this update.

The LINSTOR GUI also had an error log entry stating:

URL: http://10.1.14.13:3370/v1/nodes/pve3/net-interfaces

Timestamp: 2/14/2025, 12:36:31 PM

Message: Node 'pve3' not found.

Cause: The specified node 'pve3' could not be found in the database

Adding the first node:

root@pve3:~# linstor node create pve3 10.1.14.13  --node-type=Combined
SUCCESS:
Description:
    New node 'pve3' registered.
Details:
    Node 'pve3' UUID is: fd4e473c-115d-44a8-471e-158613f41123
SUCCESS:
Description:
    Node 'pve3' authenticated
Details:
    Supported storage providers: [diskless, lvm, lvm_thin, zfs, zfs_thin, file, file_thin, remote_spdk, ebs_init, ebs_target]
    Supported resource layers  : [drbd, luks, nvme, writecache, cache, bcache, storage]
    Unsupported storage providers:
        SPDK: IO exception occured when running 'rpc.py spdk_get_version': Cannot run program "rpc.py": error=2, No such file or directory
        EXOS: IO exception occured when running 'lsscsi --version': Cannot run program "lsscsi": error=2, No such file or directory
              '/bin/bash -c 'cat /sys/class/sas_phy/*/sas_address'' returned with exit code 1
              '/bin/bash -c 'cat /sys/class/sas_device/end_device-*/sas_address'' returned with exit code 1
        STORAGE_SPACES: This tool does not exist on the Linux platform.
        STORAGE_SPACES_THIN: This tool does not exist on the Linux platform.
root@pve3:~#

Adding the second node (pve4):

root@pve3:~# linstor node create pve4 10.1.14.14 --node-type=Combined
SUCCESS:
Description:
    New node 'pve4' registered.
Details:
    Node 'pve4' UUID is: e6ae1372-41ba-46ae-b686-cee334027d93
root@pve3:~#

On both nodes, directory /var/log/linstor-satellite/ only contains error-report.mv.db

And journalctl only shows attempts by pve3 to reach pve4.
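As a generic sanity check (not something from the LINBIT guide), I also probed from the controller whether a satellite’s LINSTOR port is reachable at all. This is a rough sketch: 3366 is LINSTOR’s default plain-text satellite port, and the address is pve4’s from the transcripts above.

```shell
# Probe the satellite port from the controller using bash's built-in
# /dev/tcp, so no extra tools are needed. 3366 is LINSTOR's default
# plain-text satellite port; substitute your satellite's address.
SATELLITE=10.1.14.14
PORT=3366
if timeout 3 bash -c "exec 3<>/dev/tcp/$SATELLITE/$PORT" 2>/dev/null; then
    echo "port $PORT on $SATELLITE is reachable"
else
    echo "port $PORT on $SATELLITE is NOT reachable"
fi
```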

I also tried adding node pve2, but it behaves identically to pve4.

Any advice is appreciated.

Thank you in advance!

Hi @Devin,

Good news. It turns out the problem all along was that the interface I had chosen was using jumbo frames. After switching to a different interface with an MTU of 1500, all satellites suddenly came online and began to function correctly. I’ve since set up HA for my LINSTOR environment and, after simulating a failure, I can confirm everything is working well.

That said, I’d still appreciate some expert advice here.
Are jumbo frames preferred for the LINSTOR storage network, or are they unnecessary? And in either case, is a dedicated network card for the storage network still recommended? I think it might be better to stick with one 100 Gbit CX5 network card and thereby free up a PCIe slot, reduce network complexity, and cut power consumption and hardware cost.

Thanks in advance.

Are jumbo frames preferred for the LINSTOR storage network, or are they unnecessary? And in either case, is a dedicated network card for the storage network still recommended? I think it might be better to stick with one 100 Gbit CX5 network card and thereby free up a PCIe slot, reduce network complexity, and cut power consumption and hardware cost.

Jumbo frames are not required, but I would suggest enabling them as long as all the devices on the network path support them.
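For anyone hitting a similar MTU mismatch later, here is a rough sketch of how to verify it. The loopback interface is used only so the example runs anywhere; substitute the NIC that carries your LINSTOR traffic.

```shell
# Read an interface's MTU from sysfs. "lo" is used here only so the
# example is runnable; substitute the NIC carrying LINSTOR traffic.
IFACE=lo
echo "MTU on $IFACE: $(cat /sys/class/net/$IFACE/mtu)"

# To confirm jumbo frames survive the whole path between two nodes, ping
# a peer with the Don't Fragment bit set and a payload sized for a
# 9000-byte frame (9000 minus 28 bytes of IP + ICMP headers):
#   ping -c 3 -M do -s 8972 10.1.14.14
# If this fails while a plain ping works, something on the path is not
# passing jumbo frames.
```

The ping test is the important part: every switch and NIC on the path must support the larger frame size, and a single device with a 1500-byte MTU is enough to break the connection in exactly the "connected but not online" way described above.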

Dedicated devices/networks for storage are also not a requirement.