There is not a single definition of "non-special IP" that makes sense
in all contexts. Rename ValidateNonSpecialIP to ValidateEndpointIP and
clarify that it shouldn't be used for other validations.
Also add a few more unit tests.
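Below is a rough, standalone sketch of the shape the renamed helper could take. The exact signature, package, and set of rejected ranges are assumptions; the real helper in pkg/apis/core/validation returns a field.ErrorList rather than a plain error.

    // Hypothetical, simplified sketch of the renamed helper.
    package validation

    import (
    	"fmt"
    	"net/netip"
    )

    // ValidateEndpointIP rejects IPs that may not appear as endpoint
    // addresses. It is not a general-purpose "is this IP special?" check
    // and shouldn't be reused for other validations.
    func ValidateEndpointIP(ipAddress string) error {
    	ip, err := netip.ParseAddr(ipAddress)
    	if err != nil {
    		return fmt.Errorf("must be a valid IP address: %v", err)
    	}
    	switch {
    	case ip.IsUnspecified():
    		return fmt.Errorf("may not be unspecified (%s)", ipAddress)
    	case ip.IsLoopback():
    		return fmt.Errorf("may not be in the loopback range")
    	case ip.IsLinkLocalUnicast():
    		return fmt.Errorf("may not be in the link-local range")
    	case ip.IsLinkLocalMulticast():
    		return fmt.Errorf("may not be in the link-local multicast range")
    	}
    	return nil
    }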
Remove unnecessary duplicate checks for pod.spec.podIPs /
pod.spec.hostIPs / node.spec.podCIDRs. (A list that is known to
contain exactly 2 values, where one is IPv4 and the other is IPv6,
cannot possibly contain duplicates.)
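For illustration only (the function name and error wording below are made up), this is the shape of check that makes a separate duplicate check redundant: once a two-entry list must contain one address from each IP family, the entries cannot be equal.

    // Illustrative sketch of a dual-stack field check.
    func validateDualStackIPs(ips []string) error {
    	if len(ips) > 2 {
    		return fmt.Errorf("may specify no more than one IP for each IP family")
    	}
    	if len(ips) == 2 {
    		a, errA := netip.ParseAddr(ips[0])
    		b, errB := netip.ParseAddr(ips[1])
    		if errA != nil || errB != nil {
    			return fmt.Errorf("must be valid IP addresses")
    		}
    		// Requiring one IPv4 and one IPv6 address already rules out
    		// duplicates; no separate duplicate check is needed.
    		if a.Is4() == b.Is4() {
    			return fmt.Errorf("must be of different IP families")
    		}
    	}
    	return nil
    }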
Fix a bad CIDR in the NetworkPolicy validation tests.
Fix some comment typos.
I fixed up the TestValidateEndpointsCreate path to show the matcher
instead of manual origin checking.
I picked TestValidateTopologySpreadConstraints because it was the last
failing test on my screen when I changed one of the commonly hard-coded
error strings. I fixed exactly those validation errors that were needed
to make this test pass. Some of the Origin values can be debated.
The `field/testing.Matcher` interface allows tests to configure the
criteria by which they want to match expected and actual errors. The
hope is that everyone will use Origin for Invalid errors.
There's some collateral impact for tests that use exact comparisons and
don't expect origins. These are all candidates for using the matcher.
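To make the intent concrete, here is an illustrative helper showing the idea of matching by type, field path, and Origin rather than by full detail strings. It is not the field/testing.Matcher API itself (that matcher is configurable, and this sketch also assumes errors arrive in the same order, which the real matcher need not require).

    import (
    	"testing"

    	"k8s.io/apimachinery/pkg/util/validation/field"
    )

    // matchByOrigin compares expected and actual errors on type, field,
    // and Origin only, ignoring the detail message.
    func matchByOrigin(t *testing.T, want, got field.ErrorList) {
    	t.Helper()
    	if len(got) != len(want) {
    		t.Fatalf("expected %d errors, got %d: %v", len(want), len(got), got)
    	}
    	for i := range want {
    		if got[i].Type != want[i].Type ||
    			got[i].Field != want[i].Field ||
    			got[i].Origin != want[i].Origin {
    			t.Errorf("error %d: expected (%v, %q, origin %q), got (%v, %q, origin %q)",
    				i, want[i].Type, want[i].Field, want[i].Origin,
    				got[i].Type, got[i].Field, got[i].Origin)
    		}
    	}
    }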
Update ValidateEndpointsCreate validation tests to use the new Origin field for more precise error comparisons. Comparing on Origin instead of detailed error messages improves test robustness and readability.
Co-authored-by: Tim Hockin <thockin@google.com>
Previously, ValidateNodeSelector did not check that labels are valid. Now it
does for resource.k8s.io, regardless of whether an object was already created with
invalid labels in an earlier Kubernetes release. Theoretically this is a
breaking change and could cause problems during an upgrade, but that is highly
unlikely in practice.
In contrast to node affinity, DRA does not ignore parse errors
(= uses NewNodeSelector, not NewLazyErrorNodeSelector), so invalid labels would
have been found instead of being silently ignored.
Even if some object has invalid labels, this only affects an alpha -> beta
upgrade which isn't guaranteed to work seamlessly.
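For reference, a sketch of the behavioral difference between the two helpers in k8s.io/component-helpers/scheduling/corev1/nodeaffinity; the signatures below are written from memory and may not match the current API exactly.

    package example

    import (
    	v1 "k8s.io/api/core/v1"
    	"k8s.io/component-helpers/scheduling/corev1/nodeaffinity"
    )

    func parseSelector(ns *v1.NodeSelector, lazy bool) (*nodeaffinity.NodeSelector, error) {
    	if lazy {
    		// Node affinity path: terms with parse errors simply never
    		// match, so invalid labels are silently ignored.
    		return nodeaffinity.NewLazyErrorNodeSelector(ns), nil
    	}
    	// DRA path: any parse error fails the whole selector, so invalid
    	// labels are reported instead of being ignored.
    	return nodeaffinity.NewNodeSelector(ns)
    }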
1. The effective container requests cannot be greater than pod-level requests
2. Individual container limits cannot be greater than pod-level limits
3. Only CPU & Memory are supported at pod-level
4. Inplace container resources updates are not supported if pod-level resources are set
Note: the constraint that effective container requests cannot be greater than pod-level limits follows by transitivity: effective container requests <= pod-level requests, and pod-level requests <= pod-level limits, therefore effective container requests <= pod-level limits (see the sketch below).
Signed-off-by: ndixita <ndixita@google.com>
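Simplified, illustrative sketch of constraints 1-3 above. The real validation lives in pkg/apis/core/validation, returns a field.ErrorList, and computes effective container requests per the pod resource rules (including init containers); here that is simplified to a plain sum, and the function name is made up.

    package validation

    import (
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    )

    func validatePodLevelResources(podReqs, podLims v1.ResourceList, containers []v1.Container) error {
    	// 3. Only CPU and memory are supported at pod level.
    	for name := range podReqs {
    		if name != v1.ResourceCPU && name != v1.ResourceMemory {
    			return fmt.Errorf("unsupported pod-level resource %q", name)
    		}
    	}
    	sum := v1.ResourceList{}
    	for _, c := range containers {
    		// 1. Accumulate container requests for comparison against
    		// pod-level requests.
    		for name, q := range c.Resources.Requests {
    			total := sum[name]
    			total.Add(q)
    			sum[name] = total
    		}
    		// 2. Individual container limits cannot exceed pod-level limits.
    		for name, lim := range c.Resources.Limits {
    			if podLim, ok := podLims[name]; ok && lim.Cmp(podLim) > 0 {
    				return fmt.Errorf("container limit for %q exceeds pod-level limit", name)
    			}
    		}
    	}
    	for name, req := range sum {
    		if podReq, ok := podReqs[name]; ok && req.Cmp(podReq) > 0 {
    			return fmt.Errorf("aggregate container requests for %q exceed pod-level requests", name)
    		}
    	}
    	return nil
    }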