As discussed in https://github.com/kubernetes/kubernetes/pull/93658,
relying on a watch to deliver all events is not acceptable, not even
for a test, because it can and did (at least in OpenShift testing) lead
to test flakes.
The solution in #93658 was to replace log flooding with a clear test
failure, but that didn't solve the flakiness.
A better solution is to use a RetryWatcher which "under normal
circumstances" (https://github.com/kubernetes/kubernetes/pull/93777#discussion_r467932080)
should always deliver all changes.
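As a rough illustration (not the test's own code), here is a minimal sketch of watching Pods through client-go's RetryWatcher; the namespace, resource version, and client parameters are assumed placeholders:

```go
package example

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	watchtools "k8s.io/client-go/tools/watch"
)

// watchPods watches Pods via a RetryWatcher, which re-establishes the
// underlying watch on transient errors and resumes from the last seen
// resourceVersion, so events are not silently dropped the way they can
// be with a single Watch call.
func watchPods(client kubernetes.Interface, namespace, resourceVersion string) error {
	lw := &cache.ListWatch{
		WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) {
			return client.CoreV1().Pods(namespace).Watch(context.TODO(), options)
		},
	}
	rw, err := watchtools.NewRetryWatcher(resourceVersion, lw)
	if err != nil {
		return err
	}
	defer rw.Stop()

	for event := range rw.ResultChan() {
		fmt.Printf("observed %s event\n", event.Type)
	}
	return nil
}
```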
This test might flake when run on a multi-zone cluster (similar to
persistent volumes, see
https://github.com/kubernetes/kubernetes/issues/75776). We don't do
that at the moment, but it's better to fix this anyway.
Since 1.19, endpoint slices are enabled by default, so all the e2e
tests should take them into account.
The e2e networking tests for services use the jig object for
all the tests, but it was not taking endpoint slices into account.
This change makes the method waitForAvailableEndpoint() consider endpoint slices as well.
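A minimal sketch of a readiness check that looks at EndpointSlices in addition to Endpoints; it is not the jig's actual implementation, and the poll interval and timeout are placeholder values:

```go
package example

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForServiceBackends waits until the Service has both a populated
// Endpoints object and at least one non-empty EndpointSlice.
func waitForServiceBackends(client kubernetes.Interface, namespace, service string) error {
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		// Classic Endpoints object.
		ep, err := client.CoreV1().Endpoints(namespace).Get(context.TODO(), service, metav1.GetOptions{})
		if err != nil || len(ep.Subsets) == 0 {
			return false, nil
		}

		// EndpointSlices are linked to the Service by the
		// kubernetes.io/service-name label (discovery.k8s.io/v1beta1 in 1.19).
		slices, err := client.DiscoveryV1beta1().EndpointSlices(namespace).List(context.TODO(),
			metav1.ListOptions{LabelSelector: "kubernetes.io/service-name=" + service})
		if err != nil || len(slices.Items) == 0 {
			return false, nil
		}
		for _, slice := range slices.Items {
			if len(slice.Endpoints) > 0 {
				return true, nil
			}
		}
		return false, nil
	})
}
```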
This reduces costs and increases speed: formatting a 5GiB volume took
too long in a badly overloaded CI cluster.
All in-tree volume plugins support at least 1GiB volumes.
Although rare, the EndpointSlice controller can create duplicate
EndpointSlices. This is considered a valid state and tests that find
this state should not fail.
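A minimal sketch of how a test could count backends without being skewed by duplicate slices, by collecting addresses into a set; the helper name is illustrative:

```go
package example

import discoveryv1beta1 "k8s.io/api/discovery/v1beta1"

// uniqueAddresses gathers endpoint addresses across all slices into a set,
// so a pod that appears in two (duplicate) EndpointSlices is only counted once.
func uniqueAddresses(slices []discoveryv1beta1.EndpointSlice) map[string]struct{} {
	seen := map[string]struct{}{}
	for _, slice := range slices {
		for _, ep := range slice.Endpoints {
			for _, addr := range ep.Addresses {
				seen[addr] = struct{}{}
			}
		}
	}
	return seen
}
```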
In our current mock CSI driver e2e test, we do not wait for the CSI
driver to register successfully before running tests such as
provisioning a PVC. This can lead to timeouts when the CSI driver takes
longer to register its socket.
This change adds that wait, so the system will wait for up to 10
minutes for the driver to be ready. Registration normally does not take
this long, but in a resource-constrained environment it can take longer
than expected.
https://github.com/kubernetes/kubernetes/issues/93358
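A minimal sketch of such a wait, assuming registration is observed through the node's CSINode object; the 10 minute timeout comes from the commit message, while the helper name and poll interval are illustrative:

```go
package example

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForCSIDriverRegistration polls the node's CSINode object until the
// driver shows up in spec.drivers, i.e. kubelet has seen its registration
// socket, giving up after 10 minutes.
func waitForCSIDriverRegistration(client kubernetes.Interface, nodeName, driverName string) error {
	return wait.PollImmediate(10*time.Second, 10*time.Minute, func() (bool, error) {
		csiNode, err := client.StorageV1().CSINodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling; the object may not exist yet
		}
		for _, driver := range csiNode.Spec.Drivers {
			if driver.Name == driverName {
				return true, nil
			}
		}
		return false, nil
	})
}
```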
As a part of cleaning up inactive members (who haven't been active since
the beginning of 2019) from OWNERS files, this commit moves abrarshivani to
the emeritus_approvers section.
- Test that client-side apply users don't encounter a conflict with
server-side apply for objects that previously didn't track managedFields
- Test that we stop tracking managed fields with `managedFields: []` (see the sketch after this list)
- Test that we stop tracking managed fields when the feature is disabled
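For the `managedFields: []` case, here is a minimal sketch (not the actual test code) of clearing field management via a JSON merge patch; the ConfigMap resource and names are illustrative:

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// clearManagedFields patches metadata.managedFields to an empty list,
// which tells the API server to stop tracking field ownership for the object.
func clearManagedFields(client kubernetes.Interface, namespace, name string) error {
	patch := []byte(`{"metadata":{"managedFields":[]}}`)
	_, err := client.CoreV1().ConfigMaps(namespace).Patch(
		context.TODO(), name, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}
```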
As of now, the kubelet passes the security context to the container runtime even
if the security context has invalid options for a particular OS. As a result,
the pod fails to come up on the node. This error is particularly pronounced on
Windows nodes, where the kubelet allows Linux-specific options like SELinux,
RunAsUser, etc., whereas the [documentation](https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#v1-container)
clearly states they are not supported. This PR ensures that the kubelet strips
the security context options from the pod if they don't make sense on Windows.
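A minimal sketch of the idea, not the kubelet's actual implementation: drop Linux-only fields from a container's SecurityContext when the node OS is Windows (the helper name and the exact set of cleared fields are assumptions):

```go
package example

import (
	"runtime"

	v1 "k8s.io/api/core/v1"
)

// dropLinuxOnlySecurityContext returns a copy of the security context with
// fields that have no meaning on Windows cleared, so they are never passed
// on to the container runtime.
func dropLinuxOnlySecurityContext(sc *v1.SecurityContext) *v1.SecurityContext {
	if sc == nil || runtime.GOOS != "windows" {
		return sc
	}
	stripped := sc.DeepCopy()
	stripped.SELinuxOptions = nil
	stripped.RunAsUser = nil
	stripped.RunAsGroup = nil
	stripped.Privileged = nil
	return stripped
}
```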
This PR fixes a few things so the e2e storage suite can run on a Windows
cluster:
1. Increase timeouts due to longer pod startup time on Windows
2. Only set SELinuxOptions or fsGroup if the OS is not Windows (sketched below)
3. Add a VolumeSnapshot delete policy for Windows
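A minimal sketch of item 2, assuming a hypothetical helper in the test setup; the names and parameters are illustrative, not the e2e framework's:

```go
package example

import v1 "k8s.io/api/core/v1"

// applyLinuxOnlyPodSecurity sets SELinuxOptions and fsGroup on the pod's
// security context only when the target node OS is not Windows, since those
// options would prevent the pod from starting on a Windows node.
func applyLinuxOnlyPodSecurity(pod *v1.Pod, seLinux *v1.SELinuxOptions, fsGroup *int64, nodeOSIsWindows bool) {
	if nodeOSIsWindows {
		return
	}
	if pod.Spec.SecurityContext == nil {
		pod.Spec.SecurityContext = &v1.PodSecurityContext{}
	}
	pod.Spec.SecurityContext.SELinuxOptions = seLinux
	pod.Spec.SecurityContext.FSGroup = fsGroup
}
```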