- Remove the ServerDryRun field and delegate it entirely to the resource.Helper
- Use resource.Helper for deletions (as in `kubectl apply --force`)
instead of the pruner's method, which uses a dynamic client (see the sketch below)
- Reduce the number of resource.Helpers created and the number of times
we check for server-side dry-run in apply
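
A deletion through resource.Helper would roughly look like the sketch below; the *resource.Info is assumed to come from the usual resource.Builder flow, and the function name is illustrative:

```go
package prune

import (
	"k8s.io/cli-runtime/pkg/resource"
)

// Sketch only: delete an object through resource.Helper instead of the
// pruner's dynamic client. The info is assumed to come from a
// resource.Builder result; this is not the actual apply/prune code.
func deleteWithHelper(info *resource.Info) error {
	helper := resource.NewHelper(info.Client, info.Mapping)
	_, err := helper.Delete(info.Namespace, info.Name)
	return err
}
```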
the test "executing a command with run and attach without stdin"
is inherently flaky, there are several discussion but seems that
it requires changing the way the kubectl run and attach works.
The test fails if we are not able to attach before the container prints
"stdin closed", but hasn't exited yet.
Because the race seems difficult to solve, we can wait 5 seconds
before printing to give time to kubectl to attach to the container.
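
A rough sketch of the idea, assuming the workaround lives in the container command used by the test (image and exact command are illustrative, not the real test code):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Sketch only: sleep 5 seconds before printing "stdin closed" so kubectl
	// has time to attach to the container before the message appears.
	c := corev1.Container{
		Name:    "run-and-attach",
		Image:   "busybox",
		Command: []string{"sh", "-c", "sleep 5; echo stdin closed"},
	}
	fmt.Println(c.Command)
}
```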
- use the in-cache Pod instead of the real-time Pod (fetched from the API server)
to mark it as unschedulable in the internal schedulingQ (see the sketch below)
- remove the backoff logic, since we no longer call the API server
- change the whole logic to a synchronous call
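
A minimal sketch of the resulting flow; the queue interface here is a stand-in for the internal schedulingQ, not the scheduler's real API:

```go
package scheduler

import (
	v1 "k8s.io/api/core/v1"
)

// schedulingQueue is a hypothetical stand-in for the internal schedulingQ;
// the real interface and method signature may differ.
type schedulingQueue interface {
	AddUnschedulableIfNotPresent(pod *v1.Pod) error
}

// Sketch only: the pod already held in the cache is handed straight back to
// the queue in a synchronous call, with no API request and no backoff.
func markUnschedulable(q schedulingQueue, cachedPod *v1.Pod) error {
	return q.AddUnschedulableIfNotPresent(cachedPod)
}
```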
We have little coverage around node addition and removal. Since distinct event handlers interact, it is important to cover this in integration tests.
Signed-off-by: Aldo Culquicondor <acondor@google.com>
ensure that when a pod servicing UDP traffic is deleted, the conntrack entries
are cleaned up and another backend can pick up the traffic with minimal
interruption
When using NodePort services with long-running connections, stale conntrack
entries left behind on pod deletion can halt the flow of traffic. Add a test
case to check that conntrack entries are cleaned up.
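
The check itself amounts to something like the sketch below: a plain UDP client keeps polling the NodePort and expects replies to resume shortly after the pod is deleted (address, payload, and timeouts are illustrative, not the actual e2e code):

```go
package conntrack

import (
	"fmt"
	"net"
	"time"
)

// Sketch only: poll the UDP NodePort until a backend answers again; a stale
// conntrack entry would keep black-holing the traffic and the deadline
// would expire.
func waitForUDPReply(nodePortAddr string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		conn, err := net.Dial("udp", nodePortAddr)
		if err != nil {
			return err
		}
		conn.SetDeadline(time.Now().Add(2 * time.Second))
		fmt.Fprint(conn, "hostname")
		buf := make([]byte, 1024)
		n, err := conn.Read(buf)
		conn.Close()
		if err == nil && n > 0 {
			return nil // a backend picked up the traffic again
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("no UDP reply from %s within %v", nodePortAddr, deadline)
}
```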
In case two or more controllers share the informers created through InitTestScheduler,
it's not safe to start the informers until all controllers have set their informer
indexers. Otherwise, some controllers might fail to register their indexers
in time. Thus, it's the responsibility of each consumer to make sure all informers
are started only after all controllers have had time to get initialized.
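
In practice this means each consumer registers its indexers first and starts the factory last, roughly as in this sketch (standard client-go shared informer factory; the indexer itself is illustrative):

```go
package integration

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// Sketch only: all indexers are added before the shared informers start, so
// no controller's registration can arrive too late.
func startSharedInformers(cs kubernetes.Interface, stopCh <-chan struct{}) error {
	factory := informers.NewSharedInformerFactory(cs, 0)
	podInformer := factory.Core().V1().Pods().Informer()

	// Each controller registers its indexers while the informer is still stopped.
	if err := podInformer.AddIndexers(cache.Indexers{
		"byNodeName": func(obj interface{}) ([]string, error) {
			pod := obj.(*corev1.Pod)
			return []string{pod.Spec.NodeName}, nil
		},
	}); err != nil {
		return err
	}

	// Only after every controller has had its chance do the informers start.
	factory.Start(stopCh)
	return nil
}
```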
ginkgo has a weird bug: AfterEach does not get called when the
test suite exits with certain kinds of interrupt (Ctrl-C, for example).
More info: https://github.com/onsi/ginkgo/issues/222
We work around this issue in Kubernetes by adding a special hook into
the AfterSuite call, but AfterSuite cannot be used to perform certain
kinds of cleanup because it can race with the AfterEach hook, and the
framework.AfterEach hook will set framework.ClientSet to nil.
This presents a problem when cleaning up the CSI driver and test pods. This
PR removes the cleanup of the driver manifest via CleanupAction, because that
is unsafe and racy (f.ClientSet may disappear, for example!), and makes
AfterSuite hooks run in an ordered fashion (see the sketch below).
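
A rough sketch of what ordered suite-level cleanup can look like; registerSuiteCleanup is a hypothetical helper used for illustration, not the framework's actual API:

```go
package e2e

import (
	"sync"

	"github.com/onsi/ginkgo"
)

var (
	cleanupMu sync.Mutex
	cleanups  []func()
)

// registerSuiteCleanup is a hypothetical helper: suite-level setup (e.g. a CSI
// driver deployment) records its teardown here instead of relying on
// per-test CleanupActions that may race with AfterEach.
func registerSuiteCleanup(fn func()) {
	cleanupMu.Lock()
	defer cleanupMu.Unlock()
	cleanups = append(cleanups, fn)
}

// Sketch only: AfterSuite runs the recorded cleanups in reverse registration
// order, so teardown happens deterministically after all specs finish.
var _ = ginkgo.AfterSuite(func() {
	cleanupMu.Lock()
	defer cleanupMu.Unlock()
	for i := len(cleanups) - 1; i >= 0; i-- {
		cleanups[i]()
	}
})
```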
We should read and verify the data before actually closing the
connection to avoid connection-based races within the test.
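
Roughly the pattern being introduced, assuming a plain net.Conn and a known expected payload (illustrative only):

```go
package nettest

import (
	"bytes"
	"fmt"
	"io"
	"net"
	"time"
)

// Sketch only: read and verify the payload first, then close, so the test
// does not race the close against data still in flight.
func readVerifyClose(conn net.Conn, want []byte) error {
	defer conn.Close()

	conn.SetReadDeadline(time.Now().Add(5 * time.Second))
	got := make([]byte, len(want))
	if _, err := io.ReadFull(conn, got); err != nil {
		return err
	}
	if !bytes.Equal(got, want) {
		return fmt.Errorf("unexpected payload: got %q, want %q", got, want)
	}
	return nil
}
```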
Signed-off-by: Sascha Grunert <sgrunert@suse.com>