I tried using LINSTOR for Proxmox with ZFS as the backend. I am using NVMe disks, but I am still getting very low performance. I have 6 Proxmox nodes: 3 compute nodes (diskless), 2 storage nodes with NVMe disks, and 1 node acting as the controller. Each node is connected via a 10G network interface.
I am getting around 1200 Mbps read and 300 Mbps write.
Can you help me with some optimization to get the maximum write performance?
I have created ZFS-backed shared storage using LINSTOR. The data is replicated between the 2 Proxmox storage nodes, each of which has 2 x 3.84 TB NVMe drives. The VMs are created on the other 3 Proxmox nodes, which are diskless. I am trying to achieve maximum NVMe performance using the ZFS shared storage with compression.
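For reference, this is roughly how the storage was set up; the node, pool, and resource-group names below are placeholders rather than my exact configuration:

```
# On each of the two storage nodes: a LINSTOR storage pool backed by the existing ZFS pool
linstor storage-pool create zfs pve-storage1 nvme_pool nvmepool
linstor storage-pool create zfs pve-storage2 nvme_pool nvmepool

# A resource group that places two replicas on those NVMe-backed pools
linstor resource-group create nvme_rg --storage-pool nvme_pool --place-count 2
linstor volume-group create nvme_rg
```

On the Proxmox side there is a `drbd:` entry in /etc/pve/storage.cfg whose `resourcegroup` points at `nvme_rg`, so the diskless compute nodes attach to the replicas over DRBD.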
Currently, on a single VM created on the shared pool, I am getting around 1200 Mbps read and 300 Mbps write.
I want to achieve at least 1000 Mbps write speed.
I don't know where I am going wrong, but could you point me to a guide that would help me get maximum performance out of a ZFS shared pool with LINSTOR and Proxmox?
Are you able to get that kind of speed from the same VMs when their virtual disks are placed directly on the ZFS storage, without DRBD or LINSTOR? I ask because in all the testing I've seen, ZFS is not nearly as performant as thin LVM or traditional LVM.
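If you haven't benchmarked that yet, something along these lines inside the same VM, run first with its disk on the LINSTOR-backed storage and then on a plain local ZFS dataset, would show how much of the gap comes from DRBD and how much from ZFS itself (the file path and sizes are just examples):

```
# Sequential write, 1M blocks, direct I/O to keep the guest page cache out of the numbers
fio --name=seq-write --filename=/root/fio-test --size=8G \
    --rw=write --bs=1M --ioengine=libaio --direct=1 \
    --iodepth=16 --numjobs=1 --runtime=60 --time_based --group_reporting

# Same thing for sequential reads
fio --name=seq-read --filename=/root/fio-test --size=8G \
    --rw=read --bs=1M --ioengine=libaio --direct=1 \
    --iodepth=16 --numjobs=1 --runtime=60 --time_based --group_reporting
```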
That said, DRBD will always add some amount of overhead to writes, since each write needs to be sent over the network to a peer, which means more latency.
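If the raw ZFS numbers look fine and the drop really does come from replication, the usual DRBD knobs can be set through LINSTOR at the resource-group level. The flag names below mirror DRBD's own option names and the values are only starting points, so double-check them against `linstor resource-group drbd-options --help` for your client version:

```
# Larger DRBD buffers / activity log can help sustained writes over a 10G link
# (options set on the resource group are inherited by resources created from it)
linstor resource-group drbd-options nvme_rg --max-buffers 8192 --al-extents 6433

# Check the replication state and the options a resource actually ended up with
linstor resource list
drbdadm status
```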