For kube-proxy, node addition and node update are semantically
the same event: the handler logic for the two events is identical,
resulting in duplicate code and unit tests.
This merges the `NodeHandler` interface methods OnNodeAdd and
OnNodeUpdate into OnNodeChange, along with the implementations
of the interface.
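A minimal sketch of the merged interface; OnNodeChange replaces the former pair, and the surrounding methods are shown only for context (their exact set is assumed here, not taken from this change):

```go
package proxy

import v1 "k8s.io/api/core/v1"

// NodeHandler sketch: OnNodeChange replaces the former
// OnNodeAdd/OnNodeUpdate pair.
type NodeHandler interface {
	// OnNodeChange is called whenever a node is added or an existing
	// node is updated.
	OnNodeChange(node *v1.Node)
	// OnNodeDelete is called whenever deletion of an existing node is
	// observed.
	OnNodeDelete(node *v1.Node)
	// OnNodeSynced is called once all of the initial event handlers have
	// been called and the local cache is fully populated.
	OnNodeSynced()
}
```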
Signed-off-by: Daman Arora <aroradaman@gmail.com>
ProxyHealthServer now consumes NodeManager to get the latest
node object when determining node eligibility.
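Roughly, the dependency has the shape sketched below; the `Node()` accessor, field names, and eligibility rule are assumptions for illustration, not the actual API:

```go
package healthcheck

import v1 "k8s.io/api/core/v1"

// nodeGetter is the slice of NodeManager the health server needs for this
// sketch: a way to fetch the most recently observed Node object.
type nodeGetter interface {
	Node() *v1.Node
}

// ProxyHealthServer asks the NodeManager for the latest Node instead of
// caching node state of its own.
type ProxyHealthServer struct {
	nodeManager nodeGetter
}

// nodeEligible is an illustrative eligibility check: a missing node or a
// node that is being deleted should not be reported as healthy.
func (hs *ProxyHealthServer) nodeEligible() bool {
	node := hs.nodeManager.Node()
	return node != nil && node.DeletionTimestamp == nil
}
```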
Signed-off-by: Daman Arora <aroradaman@gmail.com>
Co-authored-by: Dan Winship <danwinship@redhat.com>
NodeManager, if configured to watch PodCIDRs, watches for changes
to the node's PodCIDRs and crashes kube-proxy if a change is
detected.
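A pared-down sketch of that guard; the struct fields and exit hook are illustrative, not the real NodeManager:

```go
package proxy

import (
	"reflect"

	v1 "k8s.io/api/core/v1"
	"k8s.io/klog/v2"
)

// NodeManager stand-in for this sketch; real field names may differ.
type NodeManager struct {
	watchPodCIDRs bool
	podCIDRs      []string       // PodCIDRs captured at startup
	exitFunc      func(code int) // os.Exit in production, injectable in tests
}

// checkPodCIDRs crashes kube-proxy if the node's PodCIDRs no longer match
// the ones captured at startup, since everything derived from them is stale.
func (n *NodeManager) checkPodCIDRs(node *v1.Node) {
	if !n.watchPodCIDRs {
		return
	}
	if !reflect.DeepEqual(n.podCIDRs, node.Spec.PodCIDRs) {
		klog.ErrorS(nil, "Node PodCIDRs changed, exiting", "node", node.Name,
			"old", n.podCIDRs, "new", node.Spec.PodCIDRs)
		n.exitFunc(1)
	}
}
```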
Signed-off-by: Daman Arora <aroradaman@gmail.com>
Co-authored-by: Dan Winship <danwinship@redhat.com>
NodeManager initialises the node informer, waits for cache sync and polls
for the node object to retrieve NodeIPs, handles node events, and crashes
kube-proxy when a change in NodeIPs is detected.
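The startup sequence could be sketched as below; the helper name, timeout values, and readiness condition are assumptions, not the actual implementation:

```go
package proxy

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// fetchInitialNode sketches the startup flow: build a node informer scoped
// to this node, wait for its cache to sync, then poll the lister until the
// Node object (with addresses populated) is available.
func fetchInitialNode(ctx context.Context, client kubernetes.Interface, nodeName string) (*v1.Node, error) {
	factory := informers.NewSharedInformerFactoryWithOptions(client, 0,
		informers.WithTweakListOptions(func(opts *metav1.ListOptions) {
			opts.FieldSelector = fields.OneTermEqualSelector("metadata.name", nodeName).String()
		}))
	nodeInformer := factory.Core().V1().Nodes()
	lister := nodeInformer.Lister()

	factory.Start(ctx.Done())
	if !cache.WaitForCacheSync(ctx.Done(), nodeInformer.Informer().HasSynced) {
		return nil, fmt.Errorf("timed out waiting for node informer cache to sync")
	}

	// kube-proxy may start before the kubelet has registered the node or
	// before NodeIPs are published, so keep polling until they show up.
	var node *v1.Node
	err := wait.PollUntilContextTimeout(ctx, time.Second, 30*time.Second, true,
		func(ctx context.Context) (bool, error) {
			n, err := lister.Get(nodeName)
			if err != nil || len(n.Status.Addresses) == 0 {
				return false, nil // not there yet, keep polling
			}
			node = n
			return true, nil
		})
	return node, err
}
```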
Signed-off-by: Daman Arora <aroradaman@gmail.com>
Co-authored-by: Dan Winship <danwinship@redhat.com>
KubeProxy operates with a single health server and two proxies,
one for each IP family. The use of the term 'proxier' in the
types and functions within pkg/proxy/healthcheck can be
misleading, as it may suggest the existence of two health
servers, one for each IP family.
Signed-off-by: Daman Arora <aroradaman@gmail.com>
TL;DR: we want to start failing the LB HC if a node is tainted with ToBeDeletedByClusterAutoscaler.
This signal might need refinement, but it is currently deemed our best way of telling whether
a node is about to be deleted. We want to do this only for eTP:Cluster services.
The goal is to enable connection draining for terminating nodes.
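A hedged sketch of the check; the helper name is made up for illustration, while the taint key is the one the cluster autoscaler applies:

```go
package healthcheck

import v1 "k8s.io/api/core/v1"

// ToBeDeletedTaint is the taint key the cluster autoscaler puts on a node
// it is about to remove.
const ToBeDeletedTaint = "ToBeDeletedByClusterAutoscaler"

// isNodeTerminating reports whether the node is being drained by the
// cluster autoscaler. For eTP:Cluster services the LB health check can
// start failing once this returns true, so the load balancer stops sending
// new connections to the node while existing connections drain.
func isNodeTerminating(node *v1.Node) bool {
	if node == nil {
		return false // no information, assume not terminating
	}
	for _, taint := range node.Spec.Taints {
		if taint.Key == ToBeDeletedTaint {
			return true
		}
	}
	return false
}
```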
Since kube-proxy in LocalModeNodeCIDR needs to obtain the PodCIDR
assigned to the node, it watches the Node object.
However, the kube-proxy startup process requires these watches to be
set up in different places, which opens the possibility of a race
condition if the same node is recreated and a different PodCIDR is
assigned.
Initializing the second watch with the value obtained by the first one
allows us to detect this situation.
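In code, the fix might look roughly like this sketch, where the PodCIDRs read by the first watch are baked into the handler used by the second one (names and shape are illustrative):

```go
package main

import (
	"reflect"

	v1 "k8s.io/api/core/v1"
	"k8s.io/klog/v2"
)

// newPodCIDRChangeHandler captures the PodCIDRs obtained during startup
// (the first watch) and returns the callback used by the second watch, so a
// node recreated with different PodCIDRs is detected instead of silently
// producing rules for the wrong range.
func newPodCIDRChangeHandler(startupPodCIDRs []string, exit func(code int)) func(*v1.Node) {
	return func(node *v1.Node) {
		if len(node.Spec.PodCIDRs) == 0 {
			return // not assigned yet, nothing to compare
		}
		if !reflect.DeepEqual(startupPodCIDRs, node.Spec.PodCIDRs) {
			klog.ErrorS(nil, "Node PodCIDRs changed since startup, exiting",
				"node", node.Name, "startup", startupPodCIDRs, "current", node.Spec.PodCIDRs)
			exit(1)
		}
	}
}
```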
Change-Id: I6adeedb6914ad2afd3e0694dcab619c2a66135f8
Signed-off-by: Antonio Ojea <aojea@google.com>
Kube-proxy, in NodeCIDR local detector mode, uses the node.Spec.PodCIDRs
field to build the Services iptables rules.
The Node object depends on the kubelet, but if kube-proxy runs as a
static pod or as a standalone binary, it is not possible to guarantee
that the values obtained at bootstrap are valid, causing traffic outages.
Kube-proxy has to react to node changes to avoid these problems: it
simply restarts if it detects that the node PodCIDRs have changed.
If the Node has been deleted, kube-proxy will only log an
error and keep working, since exiting may break graceful shutdown of the
node.
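The asymmetry between the two cases could be sketched as follows; the handler shape is illustrative, not the exact kube-proxy code:

```go
package main

import (
	"os"
	"reflect"

	v1 "k8s.io/api/core/v1"
	"k8s.io/klog/v2"
)

// podCIDRNodeHandler illustrates the two behaviours described above: a
// PodCIDR change is fatal, while deletion of the Node object is only logged.
type podCIDRNodeHandler struct {
	podCIDRs []string // the value the iptables rules were built from
}

func (h *podCIDRNodeHandler) OnNodeUpdate(node *v1.Node) {
	// The rules derived from the old PodCIDRs are now wrong; restarting is
	// the simplest way to rebuild everything from a consistent view.
	if !reflect.DeepEqual(h.podCIDRs, node.Spec.PodCIDRs) {
		klog.ErrorS(nil, "Node PodCIDRs changed, restarting", "node", node.Name,
			"old", h.podCIDRs, "new", node.Spec.PodCIDRs)
		os.Exit(1)
	}
}

func (h *podCIDRNodeHandler) OnNodeDelete(node *v1.Node) {
	// Node deletion can be part of a graceful node shutdown; exiting here
	// could interrupt draining, so just log and keep the existing rules.
	klog.ErrorS(nil, "Current node deleted, keeping existing rules", "node", node.Name)
}
```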