Error: unknown systemd FreezerState

Running drbd-reactorctl status returns an error message when using LINSTOR HA.

I followed the documentation:

https://linbit.com/drbd-user-guide/linstor-guide-1_0-cn/#s-linstor_ha

I am not sure whether this error is what caused linstor-gateway to fail to create the iSCSI target.

Here is the linstor-gateway output (command: linstor-gateway iscsi create iqn.2025-01.rcr.test:ss 172.17.0.0/8 2G -r iso_res_group --loglevel debug):

DEBU[0000] {"iqn":"iqn.2025-01.rcr.test:info","resource_group":"iso_res_group","volumes":[{"number":1,"size_kib":2097152,"file_system_root_owner":{"User":"","Group":""}}],"service_ips":["172.17.0.0/8"],"status":{"state":"Unknown","service":"Stopped","primary":"","nodes":null,"volumes":null},"gross_size":false,"implementation":""} 
DEBU[0000] curl -X 'POST' -d '{"iqn":"iqn.2025-01.rcr.test:info","resource_group":"iso_res_group","volumes":[{"number":1,"size_kib":2097152,"file_system_root_owner":{"User":"","Group":""}}],"service_ips":["172.17.0.0/8"],"status":{"state":"Unknown","service":"Stopped","primary":"","nodes":null,"volumes":null},"gross_size":false,"implementation":""}
' -H 'Accept: application/json' -H 'Content-Type: application/json' -H 'User-Agent: linstor-gateway/1.7.0-g6e676b4f35e3e2b90cffb32637e44e16ae3c0559' 'http://localhost:8080/api/v2/iscsi' 
DEBU[0000] Status code not within 200 to 400, but 400 (Bad Request) 
ERRO[0000] failed to create iscsi resource: failed to retrieve existing configs: failed to fetch file list: Get "http://localhost:3370/v1/files?content=true&limit=0&offset=0": dial tcp [::1]:3370: connect: connection refused 

And the journalctl -xe log:

Looks like the linstor-controller is not actually running.
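If it helps to narrow that down, a quick check on each node (assuming the default REST port 3370) would be something like:

systemctl status linstor-controller.service
ss -tlnp | grep 3370
curl -s http://localhost:3370/v1/controller/version    # should return JSON if the controller is up

The "connection refused" on port 3370 in the gateway log is consistent with no controller listening on that node.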

The error from drbd-reactorctl is interesting. Can you show the output of:

systemctl show --property=FreezerState drbd-promote@linstor_db.service
systemctl show --property=FreezerState var-lib-linstor.mount
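Both can also be queried in one call, and since FreezerState is a relatively recent systemd property (introduced around v246, if I remember correctly), the systemd version itself may be relevant here:

systemctl --version
systemctl show --property=FreezerState drbd-promote@linstor_db.service var-lib-linstor.mount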

The linstor-controller is running on node2.

And I also checked drbd-promote@linstor_db.service:

And var-lib-linstor.mount

For the gateway error: you need to add all possible controller URLs to the /etc/linstor-gateway/linstor-gateway.toml file:

[linstor]
controllers = ["10.10.1.1", "10.10.1.2", "10.10.1.3"]

(Use the correct DNS names/IP addresses for your nodes), then restart the linstor-gateway service.
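Something along these lines should confirm whether the gateway can now reach a controller (check-health is available in recent linstor-gateway versions; substitute your own controller addresses):

systemctl restart linstor-gateway.service
linstor-gateway check-health
linstor --controllers 10.10.1.1,10.10.1.2,10.10.1.3 node list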

Still the same error. Regardless of whether I add /etc/linstor-gateway/linstor-gateway.toml or not, I can see the corresponding requests in the linstor-controller log, but the final result is failure.

Have you seen this error?

Jan 10 16:09:43 node1 ocf-rs-wrapper[5496]: Jan 10 16:09:43 INFO: Running start for /dev/drbd/by-res/ss/0 on /srv/ha/internal/ss
Jan 10 16:09:43 node1 ocf-rs-wrapper[5496]: Jan 10 16:09:43 ERROR: There is one or more mounts mounted under /srv/ha/internal/ss.
Jan 10 16:09:43 node1 ocf-rs-wrapper[5492]: ERROR [ocf_rs_wrapper] Filesystem:fs_cluster_private_ss,s-a-m,start: FAILED with exit code 6

Is there already something mounted in /srv/ha/internal/ss?
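For example, on the node where the start failed (the resource agent presumably checks the kernel mount table, so /proc/mounts is the thing to look at):

findmnt -R /srv/ha/internal/ss        # lists anything mounted at or below that path
grep /srv/ha/internal /proc/mounts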

Nothing, I can't see anything under /srv.

I found that there are two ExecStop lines in the [Service] section of drbd-promote@linstor_db.service. Is this the reason for the abnormal output of drbd-reactorctl?
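For reference, the full unit together with any drop-in files, which is where a second ExecStop could come from (as far as I understand, drbd-reactor configures these units via drop-ins), can be dumped with:

systemctl cat drbd-promote@linstor_db.service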

I'm noticing that the journal output you shared earlier is from node1:

Jan 10 16:09:26 node1 ocf-rs-wrapper[5362]: Jan 10 16:09:26 INFO: Running start for /dev/drbd/by-res/ss/0 on /srv/ha/internal/ss
Jan 10 16:09:26 node1 ocf-rs-wrapper[5362]: Jan 10 16:09:26 ERROR: There is one or more mounts mounted under /srv/ha/internal/ss.
Jan 10 16:09:26 node1 ocf-rs-wrapper[5358]: ERROR [ocf_rs_wrapper] Filesystem:fs_cluster_private_ss,s-a-m,start: FAILED with exit code 6
Jan 10 16:09:26 node1 systemd[1]: ocf.rs@fs_cluster_private_ss.service: Main process exited, code=exited, status=6/NOTCONFIGURED

But the screenshot you shared for /srv is from node2.

Based on this output, it seems that promotion was attempted on node1 (which would necessarily preclude promotion on node2), but I am curious what might be mounted under /srv/ha/internal/ss on node1.

The results on node1 and node2 are the same. Sorry, I forgot to post the node1 results earlier.
[screenshot: node1 output]

If you need more logs or other information, feel free to contact me anytime.