As per the k3s docs, a default route is required and is used to determine the primary IP for a cluster. However, in some cases this means the node IPs are not all on the same network, e.g.
where NetA has a default route, and NetB doesn't, with a dummy route set on compute1 via cloud-init using #539.
In this case the k3s server on the control node gets an InternalIP on NetA, while compute1's is on NetB.
The IP the nodes should use for the k3s server is set by templating out K3S_URL at boot via ansible-init. However, it turns out this is not sufficient in the above case: e.g. shelling into a container running on compute from k9s on the control node does not work.
In manual testing, setting --node-ip (available for both the server and agent subcommands) got this working.
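The manual test would have looked roughly like the following. This is an illustrative invocation sketch, not the exact commands from the issue; the IPs are made up, and 6443 is just the default k3s API port:

```shell
# On the control (server) node, pin the advertised InternalIP to the
# NetB address instead of the auto-detected default-route address:
k3s server --node-ip 10.2.0.10

# On compute1 (agent), point at the server and likewise pin the node IP:
k3s agent --server https://10.2.0.10:6443 --node-ip 10.2.0.11
```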
Although those links show it isn't "natively" exposed as an environment variable, INSTALL_K3S_EXEC could be set to something like --node-ip $K3S_NODE_IP and then template out "K3S_NODE_IP=$ip" into an environment file via ansible-init (with the environment file reference possibly added by a dropin, configured to not start until that exists).
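The env-file-plus-dropin idea above could be sketched as follows. The file paths and variable name K3S_NODE_IP are assumptions for illustration; only INSTALL_K3S_EXEC itself comes from the k3s install script. The dollar sign is escaped so expansion happens when the service reads the environment file, not at install time:

```shell
# 1. ansible-init templates the node's IP into an environment file
#    (real path might be somewhere under /etc; using cwd here for illustration):
ip="10.2.0.11"   # example value; would come from inventory/templating
printf 'K3S_NODE_IP=%s\n' "$ip" > ./k3s-node-ip.env

# 2. The install-time exec args reference the variable, to be resolved
#    from the environment file at service start:
INSTALL_K3S_EXEC="--node-ip \$K3S_NODE_IP"
echo "$INSTALL_K3S_EXEC"

# 3. A systemd drop-in would then load the file, e.g. (content only, not written here):
#    [Service]
#    EnvironmentFile=/etc/k3s-node-ip.env
```

The drop-in could carry a ConditionPathExists= on the environment file so the unit does not start before ansible-init has templated it, matching the "configured to not start until that exists" idea above.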
The docs suggest that even if setting --node-ip a default route is still required (emphasis added):
K3s requires a default route in order to auto-detect the node's primary IP, and for kube-proxy ClusterIP routing to function properly
Actually, for the k3s server, maybe we should make it so --node-ip is always set to the same IP as K3S_URL? The assumption there is that the latter is an IP, not a hostname, but given our DNS state that is probably OK.
Maybe we can also make --node-ip (and the K3S_URL IP) use the address of the interface on the network with access_network=true. That probably makes sense once we natively support multi-homed hosts.
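Deriving the node IP from a specific interface rather than the default route might look like this. A sketch only: the interface name and the sample `ip -o -4 addr show` output line are assumptions, not from the issue:

```shell
# One line of "ip -o -4 addr show dev eth1" output, captured here as a
# fixed string so the parsing is reproducible:
line='2: eth1    inet 10.2.0.11/24 brd 10.2.0.255 scope global eth1'

# Field 4 is the CIDR address; strip the prefix length to get the bare IP.
node_ip=$(echo "$line" | awk '{print $4}' | cut -d/ -f1)
echo "$node_ip"   # → 10.2.0.11
```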