Remove a bunch of comments that are either inaccurate ("the proxier
can only be tested by e2e tests") or weirdly overspecific about
obvious details ("the proxier will not exit if an iptables call
fails").
Remove the utilexec.Interface args from the iptables/ipvs constructors
(which have been unused since the conntrack cleanup code was ported to
netlink).
Remove the EventRecorder fields from the iptables/ipvs Proxiers, which
have been unused since we removed the port-opener code in 2022.
Remove the strictARP field from the ipvs Proxier, which has apparently
always been unused (strictARP is only consulted at construction time).
Endpoint chain contents are fairly predictable from their name and
existing affinity sets. Skip endpoint chain updates when we can be
sure that the rules in that chain are still correct.
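A rough sketch of the check, with made-up names rather than the actual
proxier code: an endpoint chain's rules are a pure function of the
chain name and the service's affinity setting, so if neither changed
there is nothing to rewrite.

    // Sketch only: hypothetical types; names do not match the real
    // proxier code.
    package sketch

    type endpointChainState struct {
        chain    string // chain name, derived from service/protocol/endpoint
        affinity bool   // whether session affinity is enabled for the service
    }

    // canSkipChainUpdate reports whether the rules written to the endpoint
    // chain on the previous sync are still correct: the chain name encodes
    // the endpoint identity, and affinity is the only other input to the
    // chain's rules, so if neither changed the chain can be left alone.
    func canSkipChainUpdate(prev, cur endpointChainState) bool {
        return prev.chain == cur.chain && prev.affinity == cur.affinity
    }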
Add unit test to verify first transaction is optimized.
Change baseRules ordering to make it accepted by nft.ParseDump.
Signed-off-by: Nadia Pinaeva <npinaeva@redhat.com>
We used to flush and re-add all map/set elements on nftables
setup, but it is faster to read the existing elements and only
transact the diff.
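A sketch of the diff computation with simplified stand-in types; the
real code feeds the result into an nftables transaction:

    package sketch

    // element is a simplified stand-in for an nftables set/map element.
    type element struct {
        key   string
        value string // empty for plain sets
    }

    // diffElements compares the elements already present in the kernel
    // with the desired state and returns only the changes that need to go
    // into the transaction, instead of flushing and re-adding everything.
    func diffElements(existing, desired map[string]element) (toAdd, toDelete []element) {
        for key, want := range desired {
            have, ok := existing[key]
            switch {
            case !ok:
                toAdd = append(toAdd, want)
            case have.value != want.value:
                // A map element whose value changed must be replaced.
                toDelete = append(toDelete, have)
                toAdd = append(toAdd, want)
            }
        }
        for key, have := range existing {
            if _, ok := desired[key]; !ok {
                toDelete = append(toDelete, have)
            }
        }
        return toAdd, toDelete
    }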
Signed-off-by: Nadia Pinaeva <npinaeva@redhat.com>
KubeProxy operates with a single health server and two proxies,
one for each IP family. The use of the term 'proxier' in the
types and functions within pkg/proxy/healthcheck can be
misleading, as it may suggest the existence of two health
servers, one for each IP family.
Signed-off-by: Daman Arora <aroradaman@gmail.com>
kube-proxy needs to delete stale conntrack entries for UDP services to
avoid blackholing traffic. Instead of using the conntrack binary, it
can use netlink calls directly, reducing the container image size and
the attack surface.
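Conceptually the cleanup becomes a conntrack-filter delete over
netlink; a minimal sketch assuming the vishvananda/netlink filter API
(illustrative only, not the actual kube-proxy helper):

    package sketch

    import (
        "net"

        "github.com/vishvananda/netlink"
        "golang.org/x/sys/unix"
    )

    // clearUDPServiceEntries removes conntrack flows whose original
    // destination is the given service IP and port, so stale blackhole
    // entries stop capturing traffic after the endpoints change.
    func clearUDPServiceEntries(serviceIP net.IP, port uint16) error {
        filter := &netlink.ConntrackFilter{}
        if err := filter.AddProtocol(unix.IPPROTO_UDP); err != nil {
            return err
        }
        if err := filter.AddIP(netlink.ConntrackOrigDstIP, serviceIP); err != nil {
            return err
        }
        if err := filter.AddPort(netlink.ConntrackOrigDstPort, port); err != nil {
            return err
        }
        // Delete all matching entries from the conntrack table (IPv4 here).
        _, err := netlink.ConntrackDeleteFilters(netlink.ConntrackTable, unix.AF_INET, filter)
        return err
    }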
Signed-off-by: Daman Arora <aroradaman@gmail.com>
Co-authored-by: Antonio Ojea <aojea@google.com>
Implement partial sync: re-sync only changed services and their
endpoints instead of rewriting all objects.
Change the order of operations to stop current iteration if no changes
to the service chains are needed.
Bump syncProxy frequency to 1 hour.
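A sketch of the early-continue with hypothetical types (not the real
sync loop):

    package sketch

    type serviceInfo struct {
        name    string
        changed bool // set by the change tracker when the service or its endpoints changed
    }

    // syncServices does the per-service work only for services that
    // actually changed when doing a partial sync, so the cost of a sync
    // scales with the number of changed services rather than with the
    // total number of services.
    func syncServices(services []serviceInfo, partialSync bool, writeChains func(serviceInfo)) {
        for _, svc := range services {
            if partialSync && !svc.changed {
                continue // chains for this service are already correct
            }
            writeChains(svc)
        }
    }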
In a test kind cluster, creating 10K services with 2 endpoints each
takes ~25min before the fix and ~9min after. Maximum memory usage
during creation is ~650MiB and 260MiB, respectively.
Another important metric is the time it takes to create 1 new service
when 10K services already exist: it used to take ~8min before the fix;
with partialSync it takes ~141ms.
Signed-off-by: Nadia Pinaeva <n.m.pinaeva@gmail.com>
* LocalTrafficDetector construction and test improvements
* Reorder getLocalDetector unit test fields so "input" args come before "output" args
* Don't pass DetectLocalMode as a separate arg to getLocalDetector
It's already part of `config`
* Clarify test names in preparation for merging
* Merge single-stack/dual-stack LocalTrafficDetector construction
Also, only warn if the *primary* IP family is not correctly configured
(since we don't actually know if the cluster is really dual-stack or
not), and pass the pair of detectors to the proxiers as a map rather
than an array (see the sketch after this list).
* Remove the rest of Test_getDualStackLocalDetectorTuple
The constructors only return an error if you pass them invalid data,
but we only ever pass them data which has already been validated,
making the error checking just annoying. Just make them return garbage
output if you give them garbage input.
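A sketch of the map shape, using stand-in types for the IP family and
the LocalTrafficDetector interface:

    package sketch

    type ipFamily string

    const (
        ipv4Protocol ipFamily = "IPv4"
        ipv6Protocol ipFamily = "IPv6"
    )

    // localDetector stands in for the LocalTrafficDetector interface.
    type localDetector interface {
        IsImplemented() bool
    }

    // Keying the pair of detectors by IP family keeps call sites explicit,
    // e.g. detectors[ipv4Protocol], instead of relying on an array
    // ordering where index 0 has to be remembered as "the IPv4 detector".
    type localTrafficDetectors map[ipFamily]localDetector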
Windows proxy metric registration was in a separate file, which led
to some metrics (e.g., the new ProxyHealthzTotal and ProxyLivezTotal)
not being registered on Windows even though they were implemented by
platform-generic code.
(A few other metrics were neither registered on nor implemented on
Windows, and that's probably a bug.)
Also, beyond linux-vs-windows, make it clearer which metrics are
specific to individual backends.
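A sketch of a single shared registration path; the metric below is an
illustrative stand-in, not the real pkg/proxy/metrics definition:

    package sketch

    import (
        "sync"

        "k8s.io/component-base/metrics"
        "k8s.io/component-base/metrics/legacyregistry"
    )

    var (
        // proxyHealthzTotal is platform-generic: it is driven by the shared
        // health-check server, so it must be registered on every platform.
        proxyHealthzTotal = metrics.NewCounterVec(
            &metrics.CounterOpts{
                Subsystem: "kubeproxy",
                Name:      "proxy_healthz_total",
                Help:      "Cumulative proxy healthz HTTP status",
            },
            []string{"code"},
        )

        registerOnce sync.Once
    )

    // RegisterMetrics registers the platform-generic metrics in one place
    // so no platform can miss them; backend-specific metrics would be
    // registered alongside, guarded by which backend is in use.
    func RegisterMetrics() {
        registerOnce.Do(func() {
            legacyregistry.MustRegister(proxyHealthzTotal)
        })
    }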
This reverts commit 8bccf4873b, except
for the nftables unit test changes, since we still want the "new"
results (not to mention the bugfixes), just for a different reason
now.