* Add feature gate, API, and conflict validation tests for enablecrashloopbackoffmax
Signed-off-by: Laura Lorenz <lauralorenz@google.com>
* Handle when current base is longer than node max
Signed-off-by: Laura Lorenz <lauralorenz@google.com>
* Update pkg/features/kube_features.go
Co-authored-by: Tsubasa Nagasawa <toversus2357@gmail.com>
* Fix indentation
Signed-off-by: Laura Lorenz <lauralorenz@google.com>
* Follow convention for success test
Signed-off-by: Laura Lorenz <lauralorenz@google.com>
* Normalize casing, and change field to Duration
Signed-off-by: Laura Lorenz <lauralorenz@google.com>
* Fix json name and some other casing errors
Signed-off-by: Laura Lorenz <lauralorenz@google.com>
* Another one I missed before
Signed-off-by: Laura Lorenz <lauralorenz@google.com>
* Don't clobber global max function
Signed-off-by: Laura Lorenz <lauralorenz@google.com>
* Change to flat value in defaults.go
Signed-off-by: Laura Lorenz <lauralorenz@google.com>
* Streamline validation and defaults
Signed-off-by: Laura Lorenz <lauralorenz@google.com>
* Fix typecheck
Signed-off-by: Laura Lorenz <lauralorenz@google.com>
* Lint
Signed-off-by: Laura Lorenz <lauralorenz@google.com>
* Tighten up validation for subsecond values
Signed-off-by: Laura Lorenz <lauralorenz@google.com>
* Rename field from MaxBackOffPeriod to MaxContainerRestartPeriod
Signed-off-by: Laura Lorenz <lauralorenz@google.com>
* A few missed references to renames
Signed-off-by: Laura Lorenz <lauralorenz@google.com>
* Only compare flags in flags test
Signed-off-by: Laura Lorenz <lauralorenz@google.com>
* Don't mess with SetDefault signature
Nobody messes with SetDefault signature
Signed-off-by: Laura Lorenz <lauralorenz@google.com>
* Fix stale signature change, and update test data
Signed-off-by: Laura Lorenz <lauralorenz@google.com>
* Inspect current feature gates at defaulting time
Signed-off-by: Laura Lorenz <lauralorenz@google.com>
* Don't use the global feature gate for temp usage
Signed-off-by: Laura Lorenz <lauralorenz@google.com>
* Expose default error, and some comments
Signed-off-by: Laura Lorenz <lauralorenz@google.com>
* Hint fuzzer for less arbitrary values to FeatureGates
Signed-off-by: Laura Lorenz <lauralorenz@google.com>
---------
Signed-off-by: Laura Lorenz <lauralorenz@google.com>
Co-authored-by: Tsubasa Nagasawa <toversus2357@gmail.com>
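To make the crash loop backoff changes above concrete, here is a minimal sketch of the two behaviours the commits describe: rejecting sub-second values for the configured max container restart period, and capping a per-container backoff base that is already longer than the node-level max. Function names, field names, and the bounds are illustrative assumptions, not the actual kubelet identifiers or limits.

```go
package main

import (
	"fmt"
	"time"
)

// validateMaxContainerRestartPeriod mirrors the "tighten up validation for
// subsecond values" idea: only whole-second durations within an assumed
// range are accepted (the 1s..300s bounds are assumptions for this sketch).
func validateMaxContainerRestartPeriod(d time.Duration) error {
	if d < time.Second || d > 300*time.Second {
		return fmt.Errorf("MaxContainerRestartPeriod %v must be between 1s and 300s", d)
	}
	if d%time.Second != 0 {
		return fmt.Errorf("MaxContainerRestartPeriod %v must not contain sub-second precision", d)
	}
	return nil
}

// capBackOffBase handles the case where the current per-container backoff
// base is already longer than the node-wide max.
func capBackOffBase(current, nodeMax time.Duration) time.Duration {
	if current > nodeMax {
		return nodeMax
	}
	return current
}

func main() {
	fmt.Println(validateMaxContainerRestartPeriod(1500 * time.Millisecond)) // rejected: sub-second
	fmt.Println(capBackOffBase(5*time.Minute, 30*time.Second))              // capped to 30s
}
```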
This had been left out unintentionally earlier. Because there might, at least in
theory, already be stored objects whose parameters are larger than the limit
that gets enforced now, the limit is only checked when parameters are created
or modified.
This is similar to the validation of CEL expressions, and for consistency the
same 10 Ki limit is chosen.
Because the limit is not enforced for stored parameters, it can be increased in
the future, with the caveat that users who need larger parameters then depend
on a newer Kubernetes release with a higher limit. Lowering the limit is harder
because deployments that worked with older Kubernetes would stop working with
newer Kubernetes.
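A minimal sketch of that ratcheting behaviour, with an assumed helper signature rather than the actual validation code: the 10 Ki limit is only enforced when a parameters blob is newly created or actually changed, so objects stored by older releases keep validating.

```go
package main

import (
	"bytes"
	"fmt"
)

const maxParametersSize = 10 * 1024 // 10 Ki, the same limit as for CEL expressions

func validateParameters(newParams, oldParams []byte) error {
	// Unchanged parameters are accepted as-is, even if a stored object
	// exceeds the limit that is enforced today.
	if oldParams != nil && bytes.Equal(newParams, oldParams) {
		return nil
	}
	if len(newParams) > maxParametersSize {
		return fmt.Errorf("parameters are %d bytes, must not exceed %d bytes", len(newParams), maxParametersSize)
	}
	return nil
}

func main() {
	big := bytes.Repeat([]byte("x"), 11*1024)
	fmt.Println(validateParameters(big, nil)) // rejected on create
	fmt.Println(validateParameters(big, big)) // accepted on a no-op update
}
```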
This enables a future extension where capacity of a single device gets consumed
by different claims. The semantic without any additional fields is the same as
before: a capacity cannot be split up and is only an attribute of a device.
Because it is semantically the same as before, two-way conversion to v1alpha3
is possible.
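As a sketch of what that can look like (type and field names are assumptions for illustration): wrapping the quantity in its own struct keeps today's semantics of one indivisible value per capacity, while leaving room for future fields that describe how a capacity may be consumed by individual claims. Conversion to v1alpha3 then just copies the value in and out.

```go
package api

import "k8s.io/apimachinery/pkg/api/resource"

// QualifiedName identifies one capacity of a device.
type QualifiedName string

// DeviceCapacity currently only carries the value; fields describing
// per-claim consumption could be added here later without another change
// to the map type itself.
type DeviceCapacity struct {
	Value resource.Quantity `json:"value"`
}

// BasicDevice shows the capacity map as published by a driver.
type BasicDevice struct {
	Capacity map[QualifiedName]DeviceCapacity `json:"capacity,omitempty"`
}
```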
Using the "normal" logic for a feature gated field simplifies the
implementation of the feature gate.
There is one (entirely theoretical!) problem with updating from 1.31: if a claim
was allocated in 1.31 with admin access, the status field was not set because
it didn't exist yet. If a driver now follows the current definition of "unset =
off", then it will not grant admin access even though it should. This is only
theoretical because drivers are starting to support admin access with 1.32, so
there shouldn't be any claim where this problem could occur.
The new DRAAdminAccess feature gate has the following effects:
- If disabled in the apiserver, the spec.devices.requests[*].adminAccess
  field gets cleared. The same applies in the status. In both cases there is
  one special scenario: if the field was already set and a claim or claim
  template gets updated, the field is not cleared.
Also, allocating a claim with admin access is allowed regardless of the
feature gate and the field is not cleared. In practice, the scheduler
will not do that.
- If disabled in the resource claim controller, creating ResourceClaims
with the field set gets rejected. This prevents running workloads
which depend on admin access.
- If disabled in the scheduler, claims with admin access don't get
allocated. The effect is the same.
The alternative would have been to ignore the fields in the claim controller and
the scheduler. That would be worse because a monitoring workload would then run,
blocking resources that probably were meant for production workloads.
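The clearing behaviour in the apiserver follows the usual drop-disabled-fields pattern. A sketch of it, assuming the v1beta1 resource.k8s.io types and not the actual strategy code: with the gate disabled, adminAccess is dropped on create, but kept on update when the existing object already uses it.

```go
package strategy

import (
	resourceapi "k8s.io/api/resource/v1beta1"
)

// dropDisabledAdminAccess clears the adminAccess field when the feature gate
// is off, unless the old object already used it.
func dropDisabledAdminAccess(newClaim, oldClaim *resourceapi.ResourceClaim, gateEnabled bool) {
	if gateEnabled || adminAccessInUse(oldClaim) {
		return
	}
	for i := range newClaim.Spec.Devices.Requests {
		newClaim.Spec.Devices.Requests[i].AdminAccess = nil
	}
}

func adminAccessInUse(claim *resourceapi.ResourceClaim) bool {
	if claim == nil {
		return false
	}
	for _, request := range claim.Spec.Devices.Requests {
		if request.AdminAccess != nil {
			return true
		}
	}
	return false
}
```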
Drivers need to know whether a result grants admin access because admin access
may also grant additional permissions. The allocator needs to ignore such
results when determining which devices are considered allocated.
In both cases it is conceptually cleaner to not rely on the content of the
ClaimSpec.
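A sketch of the allocator side, again assuming the v1beta1 types rather than the actual allocator code: allocation results flagged as admin access are skipped when collecting the set of devices that count as allocated.

```go
package allocator

import (
	resourceapi "k8s.io/api/resource/v1beta1"
)

// DeviceID is a simplified identifier for this example.
type DeviceID struct {
	Driver, Pool, Device string
}

// allocatedDevices collects devices that are in use; admin access does not
// consume a device, so those results must not mark it as allocated.
func allocatedDevices(claims []*resourceapi.ResourceClaim) map[DeviceID]bool {
	allocated := map[DeviceID]bool{}
	for _, claim := range claims {
		if claim.Status.Allocation == nil {
			continue
		}
		for _, result := range claim.Status.Allocation.Devices.Results {
			if result.AdminAccess != nil && *result.AdminAccess {
				continue
			}
			allocated[DeviceID{result.Driver, result.Pool, result.Device}] = true
		}
	}
	return allocated
}
```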
The main purpose is to protect against denial-of-service attacks. Scheduling
time depends a lot on unpredictable factors and expected scheduling time also
varies, so no attempt is made to limit the overall time spent on evaluating CEL
expressions per claim.
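As an illustration of limiting a single expression rather than a whole claim, a cel-go program can be given a per-expression cost budget so that evaluation aborts once the budget is exceeded. The variable name and the limit value below are made up for the example and are not the values used in the actual code.

```go
package main

import (
	"fmt"

	"github.com/google/cel-go/cel"
)

func main() {
	env, err := cel.NewEnv(
		cel.Variable("attributes", cel.MapType(cel.StringType, cel.DynType)),
	)
	if err != nil {
		panic(err)
	}

	ast, iss := env.Compile(`attributes["vendor"] == "example.com"`)
	if iss != nil && iss.Err() != nil {
		panic(iss.Err())
	}

	// The cost limit applies to this one expression; evaluation returns an
	// error once the accumulated runtime cost exceeds the budget.
	prg, err := env.Program(ast,
		cel.EvalOptions(cel.OptTrackCost),
		cel.CostLimit(10000), // illustrative per-expression budget
	)
	if err != nil {
		panic(err)
	}

	out, _, err := prg.Eval(map[string]any{
		"attributes": map[string]any{"vendor": "example.com"},
	})
	if err != nil {
		fmt.Println("evaluation aborted:", err)
		return
	}
	fmt.Println("result:", out)
}
```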
This removes the DRAControlPlaneController feature gate, the fields controlled
by it (claim.spec.controller, claim.status.deallocationRequested,
claim.status.allocation.controller, class.spec.suitableNodes), the
PodSchedulingContext type, and all code related to the feature.
The feature gets removed because there is no path towards beta and GA, and DRA
with "structured parameters" should be able to replace it.