The new DRAAdminAccess feature gate has the following effects:
- If disabled in the apiserver, the spec.devices.requests[*].adminAccess
field gets cleared, and the same applies in the status. The exception
is an update of a claim or claim template which already has the field
set: in that case, the field is not cleared. Allocating a claim with
admin access is also allowed regardless of the feature gate, and the
field is not cleared; in practice, the scheduler will not do that.
A sketch of this clearing pattern follows the list.
- If disabled in the resource claim controller, creating ResourceClaims
with the field set gets rejected. This prevents running workloads
which depend on admin access.
- If disabled in the scheduler, claims with admin access don't get
allocated. The effect is the same: workloads which depend on admin
access cannot run.
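As a minimal sketch of the clearing pattern described in the first
bullet (the helper names are illustrative and the API version may
differ from the actual Kubernetes source):

```go
package strategy

import (
	resourceapi "k8s.io/api/resource/v1alpha3"
	utilfeature "k8s.io/apiserver/pkg/util/feature"
	"k8s.io/kubernetes/pkg/features"
)

// dropDisabledDRAFields is a hypothetical helper: it clears
// spec.devices.requests[*].adminAccess unless the feature gate is
// enabled or the field was already in use before an update.
func dropDisabledDRAFields(newClaim, oldClaim *resourceapi.ResourceClaim) {
	if utilfeature.DefaultFeatureGate.Enabled(features.DRAAdminAccess) ||
		adminAccessInUse(oldClaim) {
		return
	}
	for i := range newClaim.Spec.Devices.Requests {
		newClaim.Spec.Devices.Requests[i].AdminAccess = nil
	}
}

// adminAccessInUse reports whether any request on the old object
// already had the field set, which makes an update keep it.
func adminAccessInUse(claim *resourceapi.ResourceClaim) bool {
	if claim == nil {
		return false
	}
	for _, request := range claim.Spec.Devices.Requests {
		if request.AdminAccess != nil {
			return true
		}
	}
	return false
}
```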
The alternative would have been to ignore the field in the claim
controller and scheduler. That would be worse: a monitoring workload
would then run anyway, blocking resources that were probably meant for
production workloads.
Since 1.31, cloud provider logic should no longer exist in the core
components. This change disables the existing code in the
kube-controller-manager that still expects to work with that logic, to
avoid keeping time bombs in the code base that can break the component.
The code cannot be removed completely, because that would break the
deployments of existing users who may still rely on some of the flags;
instead, this only removes the code that can no longer be used because
it depends on options that are no longer exposed to users.
It also adds validation at the configuration/flag level to ensure that
the --cloud-provider flag can only be set to "external" or left empty.
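A hedged sketch of that validation; the function name and error wording
are illustrative, not the actual flag-validation code:

```go
import "fmt"

// validateCloudProvider is a hypothetical stand-in for the added
// validation: --cloud-provider may only be "external" or empty.
func validateCloudProvider(provider string) error {
	if provider == "" || provider == "external" {
		return nil
	}
	return fmt.Errorf("--cloud-provider must be \"external\" or empty, got %q", provider)
}
```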
This removes the DRAControlPlaneController feature gate, the fields controlled
by it (claim.spec.controller, claim.status.deallocationRequested,
claim.status.allocation.controller, class.spec.suitableNodes), the
PodSchedulingContext type, and all code related to the feature.
The feature is being removed because there is no path towards beta and
GA, and DRA with "structured parameters" should be able to replace it.
Refactor the healthz and metrics address handling in the internal
configuration of kube-proxy, adhering to the v1alpha2 specification as
detailed in https://kep.k8s.io/784.
Signed-off-by: Daman Arora <aroradaman@gmail.com>
In Go unit tests, the entire test output becomes the failure message because
`go test` doesn't track why a test fails. This can make the failure message
pretty large, in particular in integration tests.
We cannot identify the real failure either because Kubernetes has no convention
for how to format test failures. What we can do is recognize log output added
by klog.
prune-junit-xml now moves the full text to the test output and only
keeps those lines in the failure message which are not from klog.
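A minimal sketch of that recognition, assuming the default klog text
header (`I0102 15:04:05.000000    1234 file.go:67] msg`); the names
here are made up, the real parsing lives in the prune-junit-xml
sources:

```go
import (
	"regexp"
	"strings"
)

// klogLine matches the header klog prepends to each line: severity,
// date, time, goroutine/thread id, and source location.
var klogLine = regexp.MustCompile(`^[IWEF]\d{4} \d{2}:\d{2}:\d{2}\.\d+\s+\d+\s+\S+:\d+\]`)

// nonKlogLines keeps only the lines that do not look like klog output,
// mirroring how the shortened failure message would be computed.
func nonKlogLines(output string) []string {
	var kept []string
	for _, line := range strings.Split(output, "\n") {
		if !klogLine.MatchString(line) {
			kept = append(kept, line)
		}
	}
	return kept
}
```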
The klog output parsing might eventually get moved to
k8s.io/logtools/logparse. For now it is developed as a sub-package of
prune-junit-xml.
The current dryrun client implementation is suboptimal
and sparse. It has the following problems:
- When an object CREATE or UPDATE reaches the default dryrun client
the operation is a NO-OP, which means subsequent GET calls must
fully emulate the object that exists in the store.
- There are multiple implementations of the DryRunGetter interface,
such as the one in init_dryrun.go, but there are no implementations
for reset, upgrade, and join.
- There is a specific DryRunGetter that is backed by a real
client in clientbacked_dryrun.go, but this is used for upgrade
and does not work in conjunction with a fake client.
This commit does the following changes:
- Removes all existing *dryrun*.go implementations.
- Adds a new DryRun implementation in dryrun.go that manages
3 clients - a fake clientset, a real clientset, and a real dynamic
client.
- The DryRun object uses the method chaining pattern.
- Allows the user to opt in to real clients only if needed, by passing
a real kubeconfig. By default, only a fake client is constructed.
- The default reactor chain for the fake client always logs the
object action, then, for GET or LIST actions, attempts to use the
real dynamic client to get the object. If a real object does not
exist, it attempts to get the object from the fake object store
(see the sketch after this list).
- The user can prepend or append reactors to the chain.
- All known needed reactors for operations during init, join,
reset, upgrade are added as methods of the DryRun struct.
- Adds detailed unit test for the DryRun struct and its methods
including reactors.
Additional changes:
- Use the new DryRun implementation in all command workflows -
init, join, reset, upgrade.
- Ensure that --dry-run works even if there is no active cluster
by returning faked objects. For join, a faked cluster-info
with a fake bootstrap token and CA is used.
Since we have a Kubernetes-specific fork of go-yaml, use that
consistently throughout the project. This doesn't eliminate the
dependencies on gopkg.in/yaml, which are still used indirectly; but it
ensures that the whole project benefits from fixes or changes to
sigs.k8s.io/yaml.
Signed-off-by: Stephen Kitt <skitt@redhat.com>
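For illustration, a minimal round-trip with the fork; note that
sigs.k8s.io/yaml converts via JSON, so json struct tags apply. The
Config type is made up for this example:

```go
package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

// Config is a made-up type for this example.
type Config struct {
	Name     string `json:"name"`
	Replicas int    `json:"replicas"`
}

func main() {
	var cfg Config
	// Unmarshal YAML into the struct via the JSON tags.
	if err := yaml.Unmarshal([]byte("name: demo\nreplicas: 3\n"), &cfg); err != nil {
		panic(err)
	}
	// Marshal back to YAML the same way.
	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n%s", cfg, out)
}
```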