DRA had integration tests in test/integration/scheduler_perf (for the
scheduler plugin) and some others scattered across different
places (e.g. test/integration/resourceclaim for device status).
The new test/integration/dra is meant to become the common location for all
DRA-related integration tests. This makes it simpler to share common setup
code.
The previous Eventually loop did not properly check whether the pod was
scheduled and running. Thus the node name could not be retrieved from
the pod spec, the plugin could not be retrieved, and UpdateStatus
was called on a nil object.
The TestPod function is now used instead, so the test waits for the pod to
be scheduled. The Eventually loop to get the pod and resourceClaim is
then no longer needed.
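For illustration, a minimal sketch of that kind of wait with client-go; the helper name, poll interval, and timeout are assumptions, not the actual TestPod implementation:

```go
// A sketch of waiting until a pod is scheduled and running before reading
// its node name. Helper name, poll interval, and timeout are assumptions.
package dratest

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForPodRunning(ctx context.Context, cs kubernetes.Interface, namespace, name string) (*v1.Pod, error) {
	var pod *v1.Pod
	err := wait.PollUntilContextTimeout(ctx, time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			var err error
			pod, err = cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			// Only once the pod is scheduled and running can the node name
			// be used to look up the plugin safely.
			return pod.Spec.NodeName != "" && pod.Status.Phase == v1.PodRunning, nil
		})
	if err != nil {
		return nil, fmt.Errorf("pod %s/%s never became running: %w", namespace, name, err)
	}
	return pod, nil
}
```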
Signed-off-by: Lionel Jouin <lionel.jouin@est.tech>
This is just refactoring / renaming.
The SELinux e2e tests grab only node metrics so far, so mention `Node` in the
function names. kube-controller-manager metrics will follow in a subsequent commit.
Add an (admittedly pretty crude) CPU allocatable check.
A more incisive refactoring is needed, but we need
to unbreak CI first, so this seems the minimal decently clean fix.
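For illustration, a crude check along these lines could look like the following sketch; the helper name and threshold parameter are hypothetical:

```go
// A crude allocatable-CPU gate: skip (or fail early) when the node does not
// advertise enough CPU for the test. Names and parameters are hypothetical.
package cpucheck

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func enoughAllocatableCPU(ctx context.Context, cs kubernetes.Interface, nodeName string, requiredMilliCPU int64) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return false, fmt.Errorf("getting node %q: %w", nodeName, err)
	}
	cpu := node.Status.Allocatable[v1.ResourceCPU]
	// Compare against what the whole test needs, not a single pod, since
	// pods can accumulate before cleanup runs.
	return cpu.MilliValue() >= requiredMilliCPU, nil
}
```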
Signed-off-by: Francesco Romani <fromani@redhat.com>
The test `Pods should support retrieving logs from the container
over websockets` flakes because it doesn't always wait until the
container is running and able to produce the expected output.
Waiting for the pod to be in the `Running` state is not enough,
as that doesn't mean the container is running.
Waiting for the container to be in the `Running` state should fix
the test.
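A minimal sketch of such a wait, assuming client-go; the helper name and timeouts are illustrative, not the actual test code:

```go
// A sketch of waiting for the container itself (not just the pod phase) to
// be Running. Helper name and timeouts are illustrative.
package logtest

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForContainerRunning(ctx context.Context, cs kubernetes.Interface, ns, podName, containerName string) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, st := range pod.Status.ContainerStatuses {
				if st.Name == containerName {
					// Stricter than PodRunning: the specific container
					// must report a Running state.
					return st.State.Running != nil, nil
				}
			}
			return false, nil
		})
}
```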
IIUC, before the translator handler was used, ping data could be delivered from
the client to the runtime side, since kube-apiserver does not parse any client
data. With WebSocket, however, the server responds with a pong to the client
without forwarding the data to the runtime side. If a proxy is present, it may
close the connection due to inactivity. SPDY's PingPeriod can help address this
issue.
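For illustration, the keepalive idea behind PingPeriod boils down to sending a periodic ping on an otherwise idle connection. This sketch uses gorilla/websocket, not the actual apiserver translator code:

```go
// A sketch of a keepalive loop in the spirit of SPDY's PingPeriod: send a
// ping at a fixed interval so proxies see traffic on an idle connection.
// Uses gorilla/websocket for illustration, not the apiserver translator code.
package keepalive

import (
	"context"
	"time"

	"github.com/gorilla/websocket"
)

func pingLoop(ctx context.Context, conn *websocket.Conn, period time.Duration) {
	ticker := time.NewTicker(period)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			// A ping/pong round trip counts as activity for proxies that
			// close idle connections.
			deadline := time.Now().Add(period / 2)
			if err := conn.WriteControl(websocket.PingMessage, nil, deadline); err != nil {
				return
			}
		}
	}
}
```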
Signed-off-by: Wei Fu <fuweid89@gmail.com>
Co-authored-by: Antonio Ojea <aojea@google.com>
Our CI machines happen to have 1 fully allocatable CPU for test workloads.
This is really, really the minimal amount, but it should still be sufficient
to run the tests. The CFS quota test, however, creates a series of pods (at
the time of writing, 6) and does the cleanup only at the very end. This means
pods requiring resources accumulate on the CI machine node.
The fix implemented here is to just clean up after each subcase.
This way the CPU footprint of the test equals the highest single subcase
requirement (say, 1000 millicores) instead of the sum of all the subcases'
requirements.
This doesn't change the test behavior, and makes it possible to run the test
on very barebones machines.
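For illustration, a minimal sketch of the per-subcase cleanup pattern, assuming a client-go clientset; the helper names are hypothetical, not the actual test code:

```go
// A sketch of per-subcase cleanup: create the pods a subcase needs and
// delete them before moving on, so requests never accumulate. All names
// are hypothetical.
package cfsquota

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func runSubcase(ctx context.Context, cs kubernetes.Interface, ns string, pods []*v1.Pod, run func() error) error {
	// Clean up immediately after this subcase, not at the end of the whole
	// test, so the peak footprint is the largest single subcase.
	defer func() {
		for _, p := range pods {
			_ = cs.CoreV1().Pods(ns).Delete(ctx, p.Name, metav1.DeleteOptions{})
		}
	}()
	for _, p := range pods {
		if _, err := cs.CoreV1().Pods(ns).Create(ctx, p, metav1.CreateOptions{}); err != nil {
			return err
		}
	}
	return run()
}
```

Deleting the pods in a defer keeps the cleanup tied to each subcase even when a subcase fails midway.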