How to install Reactor for Linstor-controller-HA in a k8s environment?

This link explains how to install Reactor for LINSTOR HA in a general environment, not in k8s.

I am looking for a way to achieve LINSTOR controller HA in a k8s environment.
Is there a way to install Reactor in a k8s container environment?

Generally, you don’t need to. In a Kubernetes environment using either the Piraeus or LINBIT SDS Operator, the LINSTOR Controller is configured to use the Kubernetes API as its database.

So as long as your Kubernetes Control Plane is available, which should generally be the case in any production cluster, the LINSTOR Controller can run. If the node running the controller goes offline, we have tuned the default tolerations for the Kubernetes Deployment so that the controller is quickly restarted.

  1. What does “database” mean in “… use the Kubernetes API as a database”?

  2. Can I understand it as follows?
    In non-k8s environments, controller HA is implemented with Reactor; in k8s, the HA effect is achieved by deploying the controller Pod via a `kind: Deployment`.

  3. In k8s, if the DB used by the controller is stored on a satellite, then when the controller Pod fails over (i.e. is redeployed) due to a node failure, the controller must be assigned to the node holding the replicated secondary volume. Who performs this assignment?
    In other words, in non-k8s the Reactor finds the primary, but in k8s, who does this work? Should I use Stork or node affinity?

  1. What does “database” mean in “… use the Kubernetes API as a database”?

LINSTOR stores its own state in Custom Resources. So assuming your Kubernetes Control Plane is HA, LINSTOR is HA as well.

  2. Can I understand it as follows?
    In non-k8s environments, controller HA is implemented with Reactor; in k8s, the HA effect is achieved by deploying the controller Pod via a `kind: Deployment`.

Yes
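As a sketch of that k8s-side mechanism, the controller runs as an ordinary Deployment, so the ReplicaSet controller recreates the Pod on a healthy node after a failure. This is an illustrative fragment only, assuming typical names; the real manifest is generated by the Piraeus / LINBIT SDS Operator, and the image reference is a placeholder:

```yaml
# Illustrative sketch -- not the actual operator-generated manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: linstor-controller
spec:
  replicas: 1              # one active controller; the Deployment reschedules it on node failure
  selector:
    matchLabels:
      app: linstor-controller
  template:
    metadata:
      labels:
        app: linstor-controller
    spec:
      containers:
        - name: linstor-controller
          image: piraeus-server:latest   # placeholder image reference
```

Because all state lives in the Kubernetes API rather than on a local disk, a single-replica Deployment like this is sufficient: the replacement Pod picks up exactly where the old one left off.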

  3. In k8s, if the DB used by the controller is stored on a satellite, then when the controller Pod fails over (i.e. is redeployed) due to a node failure, the controller must be assigned to the node holding the replicated secondary volume. Who performs this assignment?
    In other words, in non-k8s the Reactor finds the primary, but in k8s, who does this work? Should I use Stork or node affinity?

No. There is no replicated secondary volume. All the LINSTOR state is part of the Kubernetes API server. The API server is usually backed by etcd, which itself replicates data across 3 (or more) nodes.

When a node goes down, Kubernetes marks the node as unreachable after a short timeout (usually 30 s). Because our LINSTOR Controller Deployment tolerates the “unreachable” node taint only briefly (a short `tolerationSeconds`), the failed Pod is evicted and a replacement Pod is quickly rescheduled.
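The eviction timing described above is controlled by taint tolerations in the Pod spec. A minimal sketch of the relevant fragment follows; the exact values set by the operator may differ, and the 10-second figure here is an assumption for illustration:

```yaml
# Sketch of short tolerations for the unreachable/not-ready taints.
# Values the operator actually sets may differ.
tolerations:
  - key: node.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 10   # evict the Pod 10 s after the node is marked unreachable
  - key: node.kubernetes.io/not-ready
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 10
```

Without such tolerations, the default `tolerationSeconds` of 300 would leave the controller Pod stranded on the dead node for up to five minutes before the Deployment could reschedule it.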