Resource connection path for ring topology

I’m beginning to experiment with LINSTOR on a 3-node Proxmox cluster. So far this is looking really promising.

Here’s my context and my questions. Each node has three NICs: a management/master NIC, and two other NICs that are directly connected to the other nodes.

I want to configure the data replication to use the direct-connected NICs and fall back to the mgmt NIC if a link is down. Some questions and issues have arisen that I would like advice on.

To elaborate, replication would use these links while they are up and fall back to the mgmt NIC only when a link is down. The normal DRBD replication traffic would use the direct connects like:

A → B
A → C
B → C

1: I cannot figure out how to create resource-connection paths for a resource group. I have done it for the individual resource “linstor_db” that I created from the HA docs. The problem is that my replicated storage resources are created from a resource group each time I create a VM. I’d rather not have to create the paths for every new VM; I’d rather define them once at the resource-group level. Any ideas or suggestions?

2: Given my desired connection topology, what would be the right setting for the PrefNic property? It seems geared toward systems connected through a single NIC rather than a direct-connected ring topology.
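For reference, my understanding is that PrefNic is normally set as a node-level (or storage-pool-level) property naming the interface LINSTOR should prefer, something like the following (node and NIC names here are just placeholders):

# prefer a given interface for a node’s DRBD traffic (placeholder names)
linstor node set-property node-a PrefNic eth1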

As a side note, I’m running the OSPF and BFD routing protocols on these nodes, so they detect link failures and reroute very quickly. I’m not sure how that would interact with LINSTOR/DRBD’s own failure-handling logic.

Thanks for any and all advice!

You can only create a resource-connection path on a specific resource. However, you can use node-connection paths to configure multiple paths between all resources in a cluster.
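For the single-resource case you already solved, the command looks something like this (with your “linstor_db” resource and placeholder node and NIC names):

# per-resource path between two nodes (placeholder names)
linstor resource-connection path create node-a node-b linstor_db path1 eth0 eth0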

For example, a test cluster I currently have up has nodes with the following names and IP addresses:

  • swarm-0
    • eth0: 192.168.121.120
    • eth1: 192.168.222.50
  • swarm-1
    • eth0: 192.168.121.209
    • eth1: 192.168.222.51
  • swarm-2
    • eth0: 192.168.121.51
    • eth1: 192.168.222.52

The following commands are used to create multiple paths between all resources in the cluster:

# create interfaces on all nodes
linstor node interface create swarm-0 eth0 192.168.121.120
linstor node interface create swarm-0 eth1 192.168.222.50
linstor node interface create swarm-1 eth0 192.168.121.209
linstor node interface create swarm-1 eth1 192.168.222.51
linstor node interface create swarm-2 eth0 192.168.121.51
linstor node interface create swarm-2 eth1 192.168.222.52

# create all paths that use eth0
linstor node-connection path create swarm-0 swarm-1 repl0 eth0 eth0
linstor node-connection path create swarm-0 swarm-2 repl0 eth0 eth0
linstor node-connection path create swarm-1 swarm-2 repl0 eth0 eth0

# create all paths that use eth1
linstor node-connection path create swarm-0 swarm-1 repl1 eth1 eth1
linstor node-connection path create swarm-0 swarm-2 repl1 eth1 eth1
linstor node-connection path create swarm-1 swarm-2 repl1 eth1 eth1
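Under the hood, LINSTOR renders these as multiple path sections inside each DRBD resource’s connection blocks, and DRBD 9 fails over between a connection’s paths on its own. You can inspect the result with drbdsetup show <resource>; the generated configuration should look roughly like this excerpt (the port number is a placeholder):

# excerpt of a generated resource file, e.g. /var/lib/linstor.d/<resource>.res
connection
{
    path
    {
        host "swarm-0" address ipv4 192.168.121.120:7000;
        host "swarm-1" address ipv4 192.168.121.209:7000;
    }
    path
    {
        host "swarm-0" address ipv4 192.168.222.50:7000;
        host "swarm-1" address ipv4 192.168.222.51:7000;
    }
}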

Does that answer your questions?

Also, the path-mesh feature should be coming in the next LINSTOR release.

Hi kermat,

Thanks for the detailed answer! I think that will do most of what I’m trying to achieve. However, I can’t see how I could configure it to prefer the direct paths over the “backup” path across the switched network.

I have three NICs. From your example, let’s say eth0 and eth1 are specified exactly as you show, and those are direct cable links between the nodes (no switch, etc., in the middle).

Now I have a third interface, eth2, which is the broader network-attached interface available on the general network. All three nodes are reachable via this interface as well. I could add eth2 paths too:

linstor node-connection path create swarm-0 swarm-1 repl2 eth2 eth2
linstor node-connection path create swarm-0 swarm-2 repl2 eth2 eth2
linstor node-connection path create swarm-1 swarm-2 repl2 eth2 eth2

What I would like is for the data replication to take the eth2 path only if the corresponding direct link is down. I’ll watch that path-mesh feature to see what they say.

I had another idea I may experiment with, given that I’m running OSPF and BFD on these nodes. I might try creating the node IP on the “lo” interface instead of directly on the network adapters, and let OSPF and BFD link-failure detection sort out the routing. That may be too much voodoo, but I may play with it :)
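Roughly what I have in mind, as an untested sketch (the addresses are made up):

# give each node a stable /32 on the loopback and let OSPF advertise it
ip addr add 10.10.10.1/32 dev lo

# register that loopback address as the node’s LINSTOR interface
linstor node interface create swarm-0 lo0 10.10.10.1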

Again, thanks for your detailed reply.

Regards