Dear community, please help me figure out this issue.
Right now I have a task to build cold storage, and for this purpose I decided to use an HDD pool consisting of Toshiba SAS 1.2 TB 2.5" 10K RPM drives with 128 MB cache.
I assembled this pool without any SSD cache/tier devices.
When I started performance testing, my monitoring showed w_await (average write latency) of 100–150 milliseconds; at the very start of the load it even spikes to about 1000 ms, then settles at a stable 100–150 ms.
I’m testing with the command:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest --filename=testfio --bs=32k --iodepth=10 --size=8G --readwrite=randrw --rwmixread=75
Performance reaches around 300 read IOPS and 100 write IOPS, which is roughly what I expected.
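For context, this is roughly how I watch the latency while fio is running (a minimal sketch; `sdb` is a placeholder for the backing device of the DRBD resource, adjust to your layout):

```shell
# Extended per-device stats every second; the w_await column is the
# average write latency in milliseconds (requires the sysstat package).
# /dev/sdb is a placeholder for the HDD backing the DRBD resource.
iostat -x 1 /dev/sdb
```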
But the latency is what bothers me.
Previously we used the same HDDs in a traditional SAN/storage array with RAID 5 pools. There, even under peak load, latency never exceeded 8–10 milliseconds.
In the case of LINSTOR + DRBD, even if I just start an idle MinIO instance on these disks (practically no load), the latency immediately jumps to around 25 milliseconds.
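In case the configuration matters, the resource runs with essentially default settings; this is how I inspect what DRBD is actually using (a sketch; `r0` is a placeholder for the resource name LINSTOR created):

```shell
# Show the effective runtime options for the resource, including the
# replication protocol (A/B/C) and the peer connection parameters.
drbdsetup show r0

# Alternatively, dump the parsed configuration file for the resource.
drbdadm dump r0
```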
Could you please tell me:
- Is this normal behavior for DRBD without a cache on an HDD-only pool?
- Or did I misconfigure something?
It would also be great if someone could suggest how to fix or significantly improve this latency.
Thanks in advance!