- Remove `info` field from `Version` struct
- Deprecate the `WithInfo` and `Info` methods
- Update version information retrieval to use base version info
- Simplify version information generation in compatibility tests
- Remove unnecessary version info passing in build and test scenarios
- Add new version fields to the `version.Info` struct:
* EmulationMajor and EmulationMinor to track emulated version
* MinCompatibilityMajor and MinCompatibilityMinor for compatibility tracking
- Update related code to populate and use these new fields
- Improve version information documentation and OpenAPI generation
- Modify version routes and documentation to reflect the new version information structure
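
For reference, a minimal sketch of how the extended `version.Info` struct might look after these changes. The new field names come from the bullets above; the JSON tags and the abbreviation of the existing build metadata fields are assumptions, not the actual source:

```go
// Sketch of the extended version.Info struct. Existing fields are abbreviated;
// the new fields are assumed to be optional strings, like Major and Minor.
type Info struct {
	Major string `json:"major"`
	Minor string `json:"minor"`

	// EmulationMajor/EmulationMinor track the version being emulated.
	EmulationMajor string `json:"emulationMajor,omitempty"`
	EmulationMinor string `json:"emulationMinor,omitempty"`

	// MinCompatibilityMajor/MinCompatibilityMinor track the oldest version
	// the binary stays compatible with.
	MinCompatibilityMajor string `json:"minCompatibilityMajor,omitempty"`
	MinCompatibilityMinor string `json:"minCompatibilityMinor,omitempty"`

	GitVersion string `json:"gitVersion"`
	// ... remaining build metadata fields unchanged ...
}
```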
The original limit of 32 seemed sufficient for a single GPU on a node. But for
shared non-local resources it is too low. For example, a ResourceClaim might be
used to allocate an interconnect channel that connects all pods of a workload
running on several different nodes, in which case the number of pods can be
considerably larger.
256 is high enough for currently planned systems. If we need something even
higher in the future, an alternative approach might be needed to avoid
scalability problems.
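
As an illustration, such a cap is typically enforced in API validation with a constant and a length check. The following Go sketch uses hypothetical names (constant and helper) and is not the actual Kubernetes validation code:

```go
package validation

import (
	resourceapi "k8s.io/api/resource/v1beta1"
	"k8s.io/apimachinery/pkg/util/validation/field"
)

// resourceClaimReservedForMaxSize is an illustrative name for the cap on
// status.reservedFor entries, raised from 32 to 256.
const resourceClaimReservedForMaxSize = 256

// validateReservedFor is a hypothetical helper showing how the cap is applied.
func validateReservedFor(reservedFor []resourceapi.ResourceClaimConsumerReference, fldPath *field.Path) field.ErrorList {
	var allErrs field.ErrorList
	if len(reservedFor) > resourceClaimReservedForMaxSize {
		allErrs = append(allErrs, field.TooMany(fldPath, len(reservedFor), resourceClaimReservedForMaxSize))
	}
	return allErrs
}
```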
Normally, increasing such a limit would have to be done incrementally over two
releases. In this case we decided on
Slack (https://kubernetes.slack.com/archives/CJUQN3E4T/p1734593174791519) to
make an exception and apply this change to current master for 1.33 and backport
it to the next 1.32.x patch release for production usage.
This breaks downgrades to a 1.32 release without this change if there are
ResourceClaims with a number of consumers > 32 in ReservedFor. In practice,
this breakage is very unlikely because there are no workloads yet which need so
many consumers and such downgrades to a previous patch release are also
unlikely. Downgrades to 1.31 already weren't supported when using DRA v1beta1.
This limit had been left out unintentionally earlier. Because there might
already be existing objects whose parameters are larger than the limit that is
now enforced, the limit is only checked when parameters get created or
modified.
This is similar to the validation of CEL expressions, and for consistency the
same 10 Ki limit is used.
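
A sketch of the "only validate when created or changed" pattern described above, with hypothetical names; the actual field being limited and the exact error helper may differ:

```go
package validation

import (
	apiequality "k8s.io/apimachinery/pkg/api/equality"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/util/validation/field"
)

// maxParametersSize is an illustrative constant for the 10 Ki limit.
const maxParametersSize = 10 * 1024

// validateParametersUpdate enforces the size limit only when the parameters
// actually change, so stored objects above the limit keep working.
func validateParametersUpdate(newParams, oldParams *runtime.RawExtension, fldPath *field.Path) field.ErrorList {
	var allErrs field.ErrorList
	if oldParams != nil && newParams != nil && apiequality.Semantic.DeepEqual(newParams, oldParams) {
		// Unchanged parameters are not re-validated.
		return allErrs
	}
	if newParams != nil && len(newParams.Raw) > maxParametersSize {
		allErrs = append(allErrs, field.TooLong(fldPath, "" /* value omitted from error */, maxParametersSize))
	}
	return allErrs
}
```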
Because the limit is not enforced for stored parameters, it can be increased in
the future, with the caveat that users who need larger parameters then depend
on the newer Kubernetes release with a higher limit. Lowering the limit is
harder because deployments that worked with older Kubernetes would no longer
work with newer Kubernetes.