As a quick fix for a flake, bceec5a3ff
introduced polling with wait.Poll in all callers of CheckDaemonStatus.
This commit reverts all callers to what they were before (CheckDaemonStatus +
ExpectNoError) and implements polling according to E2E best practices
(https://github.com/kubernetes/community/blob/master/contributors/devel/sig-testing/writing-good-e2e-tests.md#polling-and-timeouts):
- no logging while polling
- support for progress reporting while polling
- last but not least, produce an informative failure message in case of a
timeout, including a dump of the daemon set as YAML
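A minimal sketch of the resulting pattern, assuming hypothetical names (c, ns, dsName) and using k8s.io/apimachinery/pkg/util/wait plus sigs.k8s.io/yaml; the actual helper may differ:

    var lastDS *appsv1.DaemonSet
    err := wait.PollUntilContextTimeout(ctx, 2*time.Second, 5*time.Minute, true,
        func(ctx context.Context) (bool, error) {
            ds, err := c.AppsV1().DaemonSets(ns).Get(ctx, dsName, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            lastDS = ds
            // Quiet while polling: no log output on each attempt.
            return ds.Status.NumberReady == ds.Status.DesiredNumberScheduled, nil
        })
    if err != nil {
        // Informative failure on timeout, including the DaemonSet as YAML.
        y, _ := yaml.Marshal(lastDS)
        framework.Failf("daemon set %q never became ready: %v\n%s", dsName, err, string(y))
    }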
The util for checking the daemon status checked only once whether the
Status of the DaemonSet reported that all the desired Pods are
scheduled and ready.
However, the pattern used in the e2e tests for this function did not
take into consideration that the controller needs to propagate the Pod
status to the DaemonSet status, and asserted on the condition only
once, after waiting for all the Pods to be ready.
In order to avoid more code churn, change the CheckDaemonStatus
signature to the wait.Condition type and use it in an async poll loop
in the tests.
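Roughly, the new shape (a sketch with illustrative names; the real signature may differ in details):

    // CheckDaemonStatus returns a wait condition instead of asserting once,
    // so callers can poll until the controller has propagated the Pod status.
    func CheckDaemonStatus(f *framework.Framework, dsName string) func(ctx context.Context) (bool, error) {
        return func(ctx context.Context) (bool, error) {
            ds, err := f.ClientSet.AppsV1().DaemonSets(f.Namespace.Name).Get(ctx, dsName, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            return ds.Status.DesiredNumberScheduled == ds.Status.NumberReady, nil
        }
    }

Tests then wrap it in a poll, e.g. wait.PollUntilContextTimeout(ctx, 2*time.Second, 1*time.Minute, true, CheckDaemonStatus(f, dsName)).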
The LoadBalancer test "should handle updates to ExternalTrafficPolicy
field" had a bunch of problems, but the biggest is that (without doing
[Disruptive] things to the cluster or making unwarranted assumptions
about source IPs) it's very hard to *prove* that the cloud load
balancer is doing Cluster traffic policy semantics (distributing
connections to all nodes) rather than Local (distributing only to
nodes with endpoints).
So split the test into 2 new tests with more focused semantics:
- "should implement NodePort and HealthCheckNodePort correctly when
ExternalTrafficPolicy changes" (in service.go) tests that the
service proxy is correctly implementing the proxy side of
Cluster-vs-Local traffic policy for LoadBalancer Services, without
testing the cloud load balancer itself at all.
- "should target all nodes with endpoints" (in loadbalancer.go)
complements the existing "should only target nodes with
endpoints", to ensure that when a service has
`externalTrafficPolicy: Local`, and there are endpoints on
multiple nodes, that the cloud is correctly configured to target
all of those endpoints, not just one.
Validating that one endpoint is reachable from one part of the cluster
is not a sufficient condition to conclude that it will be reachable from
any node, as different service proxies on different nodes will have
different propagation delays for the EndpointSlices and Services
information.
The existing test had two problems:
- It only made connections from within the cluster, so for VIP-type
LBs, the connections would always be short-circuited, and so this
only tested kube-proxy's LoadBalancerSourceRanges (LBSR)
implementation, not the cloud's.
- For non-VIP-type LBs, it would only work if pod-to-LB connections
were not masqueraded, which is not the case for most network
plugins.
Fix this by (a) testing connectivity from the test binary, so that the
filtering of external IPs is exercised and we are sure we're testing
the cloud's behavior; and (b) using both pod and node IPs when testing
the in-cluster case.
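Illustratively (names like lbIngressIP and port are placeholders, not the actual test code):

    // From the e2e.test binary, outside the cluster: the request must
    // traverse the cloud load balancer, so its behavior is what's tested.
    resp, err := http.Get(fmt.Sprintf("http://%s:%d/clientip", lbIngressIP, port))
    // In-cluster checks then use both pod IPs and node IPs as sources,
    // since masquerading of pod-to-LB traffic differs between network plugins.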
Also some general cleanup of the test case.
Refactor the code related to creating an internal load balancer in the e2e tests for network load balancers: the provider check drops "azure" and now only accepts "gke" and "gce", so the test runs only when the cluster uses one of those providers. The counterpart test lives in the out-of-tree cloud provider azure.
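The gating boils down to the framework's provider skip (a sketch of the described check):

    // Run only on GKE/GCE; the Azure counterpart lives out of tree.
    e2eskipper.SkipUnlessProviderIs("gke", "gce")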
This changes the test registration so that, for tags for which the
framework has a dedicated API (features, feature gates, slow, serial,
etc.), those APIs are used.
Arbitrary, custom tags are still left in place for now.
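For example, assuming the framework's dedicated tag helpers of that era (f.It, framework.WithSlow, framework.WithSerial):

    // before: ginkgo.It("should recreate pods [Slow] [Serial]", func(ctx context.Context) { ... })
    // after: the tags become structured decorators instead of magic strings
    f.It("should recreate pods", framework.WithSlow(), framework.WithSerial(), func(ctx context.Context) {
        // ...
    })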
This touches cases where FromInt() is used on numeric constants, or
values which are already int32s, or int variables which are defined
close by and can be changed to int32s with little impact.
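Assuming this refers to intstr.FromInt from k8s.io/apimachinery/pkg/util/intstr, the change looks like:

    // before: FromInt takes an int and silently truncates values beyond int32
    port := intstr.FromInt(8080)
    // after: the argument is a constant (or already an int32), so FromInt32 fits
    port := intstr.FromInt32(8080)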
Signed-off-by: Stephen Kitt <skitt@redhat.com>
The recently introduced failure handling in ExpectNoError depends on error
wrapping: if an error prefix gets added with `fmt.Errorf("foo: %v", err)`, then
ExpectNoError cannot detect that the root cause is an assertion failure and
then will add another useless "unexpected error" prefix and will not dump the
additional failure information (currently the backtrace inside the E2E
framework).
Instead of manually deciding on a case-by-case basis where %w is needed, all
error wrapping was updated automatically with
sed -i "s/fmt.Errorf\(.*\): '*\(%s\|%v\)'*\",\(.* err)\)/fmt.Errorf\1: %w\",\3/" $(git grep -l 'fmt.Errorf' test/e2e*)
This may be unnecessary in some cases, but it's not wrong.
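The effect of the rewrite, illustrated on a typical call:

    // before: the cause is flattened into a string; errors.Is cannot see it
    return fmt.Errorf("failed to create pod %q: %v", podName, err)
    // after: ExpectNoError can unwrap %w and find the assertion failure
    return fmt.Errorf("failed to create pod %q: %w", podName, err)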
The NodePort functionality can be tested within the cluster.
Testing from outside the cluster assumes that there is connectivity
between the e2e.test binary and the cluster under test, which is not
always true, and in some cases is exposed to external factors or
misconfigurations, such as wrong routes or firewall rules, that impact
the test.
Change-Id: Ie2fc8929723e80273c0933dbaeb6a42729c819d0
All code must use the context from Ginkgo when doing API calls or polling for a
change; otherwise the code would not return immediately when the test gets
aborted.
ginkgo.DeferCleanup has multiple advantages:
- The cleanup operation can get registered if and only if needed.
- No need to return a cleanup function that the caller must invoke.
- Automatically determines whether a context is needed, which will
simplify the introduction of context parameters.
- Ginkgo's timeline shows when it executes the cleanup operation.
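A minimal sketch of the pattern (client wiring and names are illustrative):

    svc, err := cs.CoreV1().Services(ns).Create(ctx, desiredService, metav1.CreateOptions{})
    framework.ExpectNoError(err)
    // Registered only because creation succeeded; Ginkgo passes a context
    // when the callback takes one and shows the cleanup in its timeline.
    ginkgo.DeferCleanup(func(ctx context.Context) {
        framework.ExpectNoError(cs.CoreV1().Services(ns).Delete(ctx, svc.Name, metav1.DeleteOptions{}))
    })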
Add a test case with a DaemonSet behind a simple load balancer whose
address is being constantly hit via HTTP requests.
The test passes if there are no errors when doing HTTP requests to the
load balancer address during DaemonSet `RollingUpdate` operations.
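In essence (a sketch; URL and cadence are illustrative):

    // Hit the load balancer continuously while the DaemonSet rolls out;
    // any failed request fails the test.
    for ctx.Err() == nil {
        resp, err := http.Get(lbURL)
        if err != nil {
            framework.Failf("request to %s failed during RollingUpdate: %v", lbURL, err)
        }
        resp.Body.Close()
        time.Sleep(100 * time.Millisecond)
    }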
Signed-off-by: Ionut Balutoiu <ibalutoiu@cloudbasesolutions.com>
Every ginkgo callback should return immediately when a timeout occurs or the
test run gets aborted manually with CTRL-C. To do that, callbacks must take a
ctx parameter and pass it through to all code which might block.
This is a first automated step towards that: the additional parameter got added
with
sed -i 's/\(framework.ConformanceIt\|ginkgo.It\)\(.*\)func() {$/\1\2func(ctx context.Context) {/' \
$(git grep -l -e framework.ConformanceIt -e ginkgo.It )
$GOPATH/bin/goimports -w $(git status | grep modified: | sed -e 's/.* //')
log_test.go was left unchanged.
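The rewrite, on one example:

    // before:
    ginkgo.It("should do something", func() {
    // after:
    ginkgo.It("should do something", func(ctx context.Context) {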
The cloud provider and the e2e test were racing to delete the
cloud resources.
Also, the cloud provider should not leave orphaned resources: those
will be detected by the job and cause it to fail, so we should not
have additional cleanup logic masking these errors.