LINSTOR Gateway: connecting an iSCSI target and NFS export to Linux, VMware, and Windows via linstor-gateway

Hi Linbit team,

We are having problems connecting to Hyper-V and VMware.
We are deploying a LINSTOR cluster as storage for Hyper-V and VMware via linstor-gateway. We have completed all of the linstor-gateway requirements: the packages are installed (targetcli, drbd-reactor, drbd9, linstor-satellite, linstor-controller, ...) and the cluster, storage pool, resource group, and resources are set up. We have also successfully created the iSCSI target, a virtual IP for the HA iSCSI target, and the resources (see attached image).

Can you see the iSCSI target from a different host on the network:

iscsiadm --mode discovery --type sendtargets --portal 10.0.15.13
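
If discovery does return a target record, the follow-up login would be along these lines (the IQN below is only a placeholder; use the one reported by linstor-gateway iscsi list):

iscsiadm --mode node --targetname iqn.2019-08.com.linbit:example --portal 10.0.15.13:3260 --login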

We ran the command above, but it does not succeed; it reports "iscsiadm: connect to 10.0.15.13 timed out". All firewall rules are open.
We ran ss -napt | grep 3260, and the result is "LISTEN 0 256 10.0.15.13:3260 0.0.0.0:*".
We checked 10.0.8.13:8080 and got "404 page not found".
We don’t understand what’s wrong with the configuration.

Can you ping the VIP (service IP) from another host?

Are your subnets configured correctly? It looks like you are using a 10.0.15.13/24 subnet on the VIP but then you mention a different subnet here:

We check 10.0.8.13:8080, result “404 page not found”
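
A quick, generic way to compare the addressing on both ends and see which interface the client actually uses to reach the VIP (plain iproute2 commands, nothing LINSTOR-specific) is to run the following on the gateway node and on the client:

ip -br addr show
ip route get 10.0.15.13

The route lookup on the client should leave through the interface you expect, on the same 10.0.15.0/24 subnet as the VIP.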

I'm sorry, it is 10.0.15.13, not 10.0.8.13; I typed it incorrectly in my comment. But the result is still the same.
I pinged the VIP from another host and the ping works, and that host is on the same subnet as the LINSTOR hosts. The VIP address (10.0.15.13) appears on the network card of the LINSTOR controller host.

It looks like iSCSI is listening based on your ss -napt output, but maybe something else is failing to start. Are any of the DRBD Reactor managed services failing to start?

drbd-reactorctl status
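
If drbd-reactorctl reports the resource as started, it can still be worth looking at the underlying systemd units and the drbd-reactor journal. The resource name below is a placeholder, and the promoter plugin typically groups its services under a drbd-services@<resource>.target unit (adjust if your version names it differently):

systemctl status 'drbd-services@<resource>.target'
journalctl -u drbd-reactor --since "15min ago"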

Hi kermat, drbd-reactorctl is stable. Please wait, I will rebuild the lab and send you the status.

Hi,
drbd-reactorctl seems to work normally.
We rebuilt the lab and collected the results below.
Our lab:


We installed linstor-gateway with the following commands:

wget -O /usr/sbin/linstor-gateway https://github.com/LINBIT/linstor-gateway/releases/latest/download/linstor-gateway-linux-amd64

chmod +x /usr/sbin/linstor-gateway

Please help us handle this issue.

drbd-reactor status:
(screenshot attached)

linstor-gateway iscsi list

That all looks good.
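
One more thing worth running on each node, if your linstor-gateway build includes the subcommand, is its built-in prerequisite check, which flags missing components such as targetcli or drbd-reactor:

linstor-gateway check-health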

You will have to test the end-to-end connectivity between the target and initiator. I usually use netcat for simple tests.

On pve01:

nc -l -p 3261 -s 10.0.15.15

On the client system:

nc 10.0.15.15 3261

See if you can send data between the systems by typing into either system's console and pressing Enter. If data goes both ways, it's not a connectivity issue.
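
If the interactive test is hard to capture in screenshots, a non-interactive variant (assuming your netcat build supports -z for a zero-I/O connection test) gives a simple pass/fail against the real iSCSI port:

nc -z -v 10.0.15.15 3260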

I ran the commands on pve01 and the client (pve5); the result is shown in the attached screenshot:


The data does not go through.

If you have the firewalls open, you should be able to type something into the buffers.

Like this:
(screenshot of a working netcat exchange)

Is that working for you? It wasn’t clear from your screenshots.


Hi,
we checked, the firewall is open (see attachment).

Thank you, but you did not use the Virtual IP address (10.0.15.15) which is what you need to use when connecting the iSCSI initiator to the iSCSI target.

Does that also work?

If I use the virtual IP 10.0.15.15, it still works.

🤔 That looks almost okay to me… what are those special characters on the "client side", I wonder?

Could there be bad MTU settings that are allowing some smaller things through but failing on larger ones? Could try testing MTU sizes using this shell function to “ping” the VIP from a client:

$ mtu_discover_using_ping() ( target=$1 ; i=0; good=1;  bad=${2:-15000}; mtu=${3:-1400}; lmtu=$good; while (( $bad - $good > 1 )); do let i+=1; if ping -w1 -i 0.1 -c2 -M do -s $mtu $1 &>/dev/null; then good=$mtu; else bad=$mtu; fi; lmtu=$mtu; mtu=$(( (good + bad)/2 )); printf "i:%u,\t""mtu:%u,\t""bad:%6u,\t""good:%6u,\t""diff:%6d\n" $i $mtu $bad $good $(( bad-good )); done >&2 ; echo >&2 "found in $i iterations using: ping -w1 -i0.1 -c2 -M do -s \$mtu $target" ; echo MTU=$mtu )
$ mtu_discover_using_ping 10.0.15.15
i:1,	mtu:8200,	bad: 15000,	good:  1400,	diff: 13600
i:2,	mtu:4800,	bad:  8200,	good:  1400,	diff:  6800
i:3,	mtu:3100,	bad:  4800,	good:  1400,	diff:  3400
i:4,	mtu:2250,	bad:  3100,	good:  1400,	diff:  1700
i:5,	mtu:1825,	bad:  2250,	good:  1400,	diff:   850
i:6,	mtu:1612,	bad:  1825,	good:  1400,	diff:   425
i:7,	mtu:1506,	bad:  1612,	good:  1400,	diff:   212
i:8,	mtu:1453,	bad:  1506,	good:  1400,	diff:   106
i:9,	mtu:1479,	bad:  1506,	good:  1453,	diff:    53
i:10,	mtu:1466,	bad:  1479,	good:  1453,	diff:    26
i:11,	mtu:1472,	bad:  1479,	good:  1466,	diff:    13
i:12,	mtu:1475,	bad:  1479,	good:  1472,	diff:     7
i:13,	mtu:1473,	bad:  1475,	good:  1472,	diff:     3
i:14,	mtu:1472,	bad:  1473,	good:  1472,	diff:     1
found in 14 iterations using: ping -w1 -i0.1 -c2 -M do -s $mtu 10.0.15.15
MTU=1472
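
Note that the value printed is the ICMP payload size; adding the 28 bytes of IP and ICMP headers gives the path MTU, so 1472 here corresponds to a normal 1500-byte Ethernet path. If the search had stopped at a smaller value, say a payload of 1372, temporarily lowering the client interface MTU to payload + 28 (1400 in that hypothetical example) would be a quick test; the interface name is a placeholder:

# ip link set dev <interface> mtu 1400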

I also found some posts claiming that ARP can get in the way when you have multiple interfaces on the same subnet. If that’s the case you can try restricting ARP replies:

# cat << EOF >> /etc/sysctl.conf
net.ipv4.conf.all.arp_ignore=1
net.ipv4.conf.all.arp_announce=2
EOF 
# sysctl -p /etc/sysctl.conf

I followed your instructions and the results are as shown; I still cannot connect to the iSCSI target.


Is there any other approach we could try?

I would start looking at packets coming into and out of the interfaces on the active (pve1) node while trying to connect the client. Something isn’t making it through or back for some reason.

pve1 # tcpdump -i <interface> port 3260 -w iscsi_traffic.pcap

You can review the iscsi_traffic.pcap in Wireshark and see if that provides any clues as to why your systems cannot connect.
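
It can also help to capture on the client at the same time, so you can tell whether the initiator's SYN packets leave the client at all and whether anything comes back (the interface name is a placeholder, the VIP is from your setup):

client # tcpdump -nn -i <interface> host 10.0.15.15 and port 3260

A quick text dump of the capture file works too, if Wireshark isn't handy:

tcpdump -nn -r iscsi_traffic.pcap | head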

I ran tcpdump on pve01 and ran discovery on pve5, and also tried connecting iSCSI from another client, but nothing happened and no packets showed up.