Automatic merge from submit-queue
e2e: Enable persistent volume test
The test is already there and all required packages (namely mount.nfs) should already be available on all test machines.
It tests:
- binding
- using bound claim in a pod
- recycling NFS volume
(we should see shortly whether all NFS packages are really installed, as Jenkins tests it...)
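As a rough outline of what the test exercises (hypothetical helper interface and names; the real test uses the e2e framework and the Kubernetes client directly):

```go
package e2e

// pvTestClient abstracts the operations the outline needs; this is a
// hypothetical interface, not the real e2e framework API.
type pvTestClient interface {
	CreateNFSPersistentVolume(server, path string) (pvName string, err error)
	CreateClaim(namespace string) (claimName string, err error)
	WaitForClaimBound(namespace, claimName string) error
	RunPodWithClaim(namespace, claimName string) error
	DeleteClaim(namespace, claimName string) error
	WaitForVolumeRecycled(pvName string) error
}

// persistentVolumeOutline mirrors the three bullets above: binding, using
// the bound claim in a pod, and recycling the NFS volume after the claim is
// deleted.
func persistentVolumeOutline(c pvTestClient, ns, nfsServer, nfsPath string) error {
	pv, err := c.CreateNFSPersistentVolume(nfsServer, nfsPath)
	if err != nil {
		return err
	}
	claim, err := c.CreateClaim(ns)
	if err != nil {
		return err
	}
	if err := c.WaitForClaimBound(ns, claim); err != nil { // binding
		return err
	}
	if err := c.RunPodWithClaim(ns, claim); err != nil { // claim used by a pod
		return err
	}
	if err := c.DeleteClaim(ns, claim); err != nil {
		return err
	}
	return c.WaitForVolumeRecycled(pv) // NFS volume recycled
}
```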
Automatic merge from submit-queue
e2e/framework/util.StartPods: don't wait for pods that are not created
When running ``[k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]``, additional pods may have to be created to fully saturate the nodes' CPU capacity in the cluster. These additional pods are created by calling ``framework.StartPods``, which creates pods with a given label and, if ``waitForRunning`` is ``true``, waits for them to be running. This is fine as long as the number of pods to create is non-zero. If zero pods are to be created and ``waitForRunning`` is ``true``, the function waits forever, since no pod with the requested label will ever appear, resulting in ``Error waiting for 0 pods to be running - probably a timeout`` and causing the e2e test to fail when it should not.
This PR adds a condition to return from the function immediately when there are no pods to create.
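A minimal sketch of the added guard, with the client abstracted behind a hypothetical interface rather than the real framework types:

```go
package framework

import "fmt"

// PodStarter abstracts the two client operations the sketch needs; the real
// framework talks to the Kubernetes API directly (hypothetical interface).
type PodStarter interface {
	CreatePod(label string) error
	WaitForPodsWithLabelRunning(label string, count int) error
}

// StartPods creates numPods pods carrying the given label and, when
// waitForRunning is true, waits until all of them are observed running.
func StartPods(c PodStarter, numPods int, label string, waitForRunning bool) error {
	if numPods == 0 {
		// Nothing is created, so no pod will ever match the label selector;
		// waiting here would only end in "Error waiting for 0 pods to be
		// running - probably a timeout". Return immediately instead.
		return nil
	}
	for i := 0; i < numPods; i++ {
		if err := c.CreatePod(label); err != nil {
			return fmt.Errorf("creating pod %d: %v", i, err)
		}
	}
	if waitForRunning {
		return c.WaitForPodsWithLabelRunning(label, numPods)
	}
	return nil
}
```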
Automatic merge from submit-queue
Petset controller
Took longer than I expected. The main parts of this PR are:
1. Identity generation based on petset spec (volumes are mapped per discussion in #18016)
2. Ensuring that we create/delete pets in sequence
3. Ensuring that we create, wait for healthy, create; or delete, wait for terminationGrace, delete (sketched below)
4. Controller that watches apiserver and drives actual -> desired
PVCs are not deleted, yet.
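A rough sketch of the ordering in points 2 and 3 for the scale-up direction (hypothetical names, not the controller's actual code):

```go
package petset

// PetClient abstracts the operations the sketch needs (hypothetical
// interface). IsHealthy reports whether the named pet exists and is healthy.
type PetClient interface {
	Create(name string) error
	IsHealthy(name string) (bool, error)
}

// scaleUpInOrder creates pets strictly in sequence: it only advances past a
// pet once that pet reports healthy, matching "create, wait for healthy,
// create".
func scaleUpInOrder(c PetClient, names []string) error {
	for _, name := range names {
		healthy, err := c.IsHealthy(name)
		if err != nil {
			return err
		}
		if healthy {
			continue
		}
		// Create (or keep waiting on) the first unhealthy pet and stop; the
		// next sync re-enters the loop and only moves on to the following
		// pet once this one is healthy.
		return c.Create(name)
	}
	return nil
}
```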
Automatic merge from submit-queue
kubectl rolling-update support for same image
Fixes #23497.
Enables `kubectl rolling-update --image` to update to the same image, adding a `--image-pull-policy` flag to remove ambiguity. This lets rolling-update behave as an "update and/or restart" (https://github.com/kubernetes/kubernetes/issues/23497#issuecomment-212349730), or act as a forced update when the same tag can mean multiple versions (e.g. `:latest`). cc @janetkuo @nikhiljindal
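For example (replication controller and image names illustrative), forcing a restart/re-pull of the same `:latest` tag might look like:
$ kubectl rolling-update frontend --image=myregistry/frontend:latest --image-pull-policy=Always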
Automatic merge from submit-queue
Use tagged redis image for kubectl test, move json test file out of deprecated examples
Closes #24642
Changes the redis image to use the :e2e tagged version on gcr.io.
Since the examples/ subdir is deprecated in favor of the new kubernetes/kubernetes.github.io repo, I just copied this file to test-manifests/kubectl like some other files.
Automatic merge from submit-queue
Framework support for node e2e.
This should let us port existing e2e tests to the node e2e suite, if the tests are node specific.
Automatic merge from submit-queue
Promote Pod Hostname & Subdomain to fields (were annotations)
This deprecates the podHostName, subdomain, and PodHostnames annotations and creates corresponding new fields for them on the PodSpec and Endpoints types.
Annotation doc: #22564
Annotation code: #20688
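A sketch of the new shape, trimmed to just the promoted pod fields (the real PodSpec has many more fields; the field comments reflect the behavior described above):

```go
package api

// PodSpec is trimmed here to the two promoted fields discussed above; the
// real PodSpec carries many more fields.
type PodSpec struct {
	// Hostname, when set, is used as the pod's hostname (previously carried
	// as an annotation).
	Hostname string
	// Subdomain, when set, makes the pod addressable as
	// <hostname>.<subdomain>.<namespace>.svc.<cluster domain> (previously
	// carried as an annotation).
	Subdomain string
}
```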
Automatic merge from submit-queue
Add support for running clusters on GCI
Google Container-VM Image (GCI) is the next revision of Container-VM. See documentation at https://cloud.google.com/compute/docs/containers/vm-image/. This change adds support for starting a Kubernetes cluster using GCI.
With this change, users can start a Kubernetes cluster using the latest kubelet and kubectl release binaries built into the GCI image by running:
$ KUBE_OS_DISTRIBUTION="gci" cluster/kube-up.sh
Or bring up a test cluster on GCI by running:
$ KUBE_OS_DISTRIBUTION="gci" go run hack/e2e.go -v --up
The commands above will choose the latest GCI image by default.
Automatic merge from submit-queue
Quota ignores pod compute resources on updates
Scenario:
1. define a quota Q that tracks memory and cpu
2. create pod P that uses memory=100Mi, cpu=100m
3. update pod P to use memory=50Mi, cpu=10m
Expected Results:
Step 3 should fail with a validation error.
Quota Q should not have changed.
Actual Results:
Step 3 fails validation, but quota Q is decremented anyway, so its usage goes down by 50Mi of memory and 90m of cpu. This is because the quota was updated even though the pod update was going to fail validation.
Fix:
Quota should only support modifying pod compute resources when pods themselves support modifying their compute resources.
This also fixes https://github.com/kubernetes/kubernetes/issues/24352
/cc @smarterclayton - this is what we discussed.
fyi: @kubernetes/rh-cluster-infra
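For illustration, a minimal sketch of the intended accounting (hypothetical names, not the actual quota evaluator code): pod usage is charged on create and ignored on update, since compute resources cannot change in place:

```go
package quota

// Simplified stand-ins for admission operations (hypothetical constants).
const (
	OpCreate = "CREATE"
	OpUpdate = "UPDATE"
)

// podEvaluatorHandles reports whether the pod quota evaluator should charge
// usage for the given operation. Because pods do not support modifying their
// compute resources in place, an UPDATE can never change usage, so it is
// ignored rather than decrementing the quota for a request that validation
// will reject anyway.
func podEvaluatorHandles(op string) bool {
	switch op {
	case OpCreate:
		return true
	case OpUpdate:
		// Ignored: compute resources are immutable on pod update.
		return false
	default:
		return false
	}
}
```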