Hi @Devin,
Thanks for getting back to me rather quickly.
Funny name, isn’t it? Couldn’t believe it hadn’t been taken before.
Anyway, the problem I’m facing and can’t seem to fix, and the reason I’m attempting to reset LINSTOR back to defaults, is that none of the satellites will properly join the cluster: in the LINSTOR GUI every node appears as “Connected” but never as “Online”.
I also wiped the hard drives that were previously part of a ZFS pool and then tried to add them to LINSTOR (roughly the commands I used are shown below), but that didn’t work out either; the GUI just reports:
“The controller is trying to (re-) establish a connection to the satellite. The controller stored the changes and as soon the satellite is connected, it will receive this update.”
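For what it’s worth, the wipe and re-add attempt went roughly like this (pool and device names are from memory, so treat them as approximate):

root@pve3:~# zpool destroy tank
root@pve3:~# wipefs -a /dev/sdb /dev/sdc
root@pve3:~# zpool create linstorpool /dev/sdb /dev/sdc
root@pve3:~# linstor storage-pool create zfs pve3 pool_zfs linstorpool

The last step is what produces the message above, as far as I can tell.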
The LINSTOR GUI also had an error log entry stating:
URL: http://10.1.14.13:3370/v1/nodes/pve3/net-interfaces
Timestamp: 2/14/2025, 12:36:31 PM
Message: Node 'pve3' not found.
Cause: The specified node 'pve3' could not be found in the database
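If the full error reports would help, I believe I can pull them from the controller with something like:

root@pve3:~# linstor error-reports list
root@pve3:~# linstor error-reports show <report-id>

Just let me know which ones you’d want to see.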
Adding the first node (pve3):
root@pve3:~# linstor node create pve3 10.1.14.13 --node-type=Combined
SUCCESS:
Description:
New node 'pve3' registered.
Details:
Node 'pve3' UUID is: fd4e473c-115d-44a8-471e-158613f41123
SUCCESS:
Description:
Node 'pve3' authenticated
Details:
Supported storage providers: [diskless, lvm, lvm_thin, zfs, zfs_thin, file, file_thin, remote_spdk, ebs_init, ebs_target]
Supported resource layers : [drbd, luks, nvme, writecache, cache, bcache, storage]
Unsupported storage providers:
SPDK: IO exception occured when running 'rpc.py spdk_get_version': Cannot run program "rpc.py": error=2, No such file or directory
EXOS: IO exception occured when running 'lsscsi --version': Cannot run program "lsscsi": error=2, No such file or directory
'/bin/bash -c 'cat /sys/class/sas_phy/*/sas_address'' returned with exit code 1
'/bin/bash -c 'cat /sys/class/sas_device/end_device-*/sas_address'' returned with exit code 1
STORAGE_SPACES: This tool does not exist on the Linux platform.
STORAGE_SPACES_THIN: This tool does not exist on the Linux platform.
root@pve3:~#
Adding the second node (pve4):
root@pve3:~# linstor node create pve4 10.1.14.14 --node-type=Combined
SUCCESS:
Description:
New node 'pve4' registered.
Details:
Node 'pve4' UUID is: e6ae1372-41ba-46ae-b686-cee334027d93
root@pve3:~#
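Note that, unlike the pve3 output above, the pve4 registration never gets a second SUCCESS block with “Node 'pve4' authenticated”. For completeness, the checks I’ve been running afterwards are along these lines (exact invocations from memory):

root@pve3:~# linstor node list
root@pve4:~# systemctl status linstor-satellite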
On both nodes, the directory /var/log/linstor-satellite/ contains nothing but error-report.mv.db, and journalctl only shows attempts by pve3 to reach pve4.
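Since the nodes register but never come online, one thing I still want to rule out is plain network reachability on the satellite port. Assuming the satellite still listens on 3366 by default, I’d check it with something like:

root@pve4:~# ss -tlnp | grep 3366
root@pve3:~# nc -zv 10.1.14.14 3366

(port number taken from the defaults as I understand them; correct me if that’s wrong).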
I also tried adding node pve2, but it behaves exactly the same as pve4.
Any advice is appreciated.
Thank you in advance!