Previously this code used http.Get and failed to read and close resp.Body, which
prevented network connection reuse and leaked fds. Now we use http.Head
instead: its response Body is always empty, so there is nothing to drain
before closing.
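For context, a minimal sketch of the two patterns (the probe function names are illustrative):
```
package probe

import "net/http"

// probeWithGet shows the old pattern: the response body is neither read
// nor closed, so the Transport cannot reuse the connection and the fd leaks.
func probeWithGet(url string) error {
	_, err := http.Get(url) // resp.Body is leaked here
	return err
}

// probeWithHead shows the new pattern: a HEAD response carries no payload,
// so a simple Close returns the connection to the pool.
func probeWithHead(url string) error {
	resp, err := http.Head(url)
	if err != nil {
		return err
	}
	return resp.Body.Close()
}
```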
Automatic merge from submit-queue
skip benchmark in jenkins serial test
This PR changes jenkins-serial.properties to skip benchmark tests (tagged [Benchmark]) in jenkins serial tests. It also adds more comments to run_e2e.go.
Automatic merge from submit-queue
[Kubelet] Optionally consume configuration from <node-name> named config maps
This extends the Kubelet to check the API server for new node-specific config, and exit when it finds said new config.
/cc @kubernetes/sig-node @mikedanese @timstclair @vishh
**Release note**:
```release-note
Extends Kubelet with Alpha Dynamic Kubelet Configuration. Please note that this alpha feature does not currently work with cloud provider auto-detection.
```
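A rough sketch of the flow this describes (the config source interface and all names here are illustrative, not the Kubelet's real API):
```
package kubeletconfig

import (
	"bytes"
	"log"
	"os"
	"time"
)

// configSource is a hypothetical stand-in for the API server lookup of the
// <node-name> named config map described above.
type configSource interface {
	// NodeConfig returns the serialized config for the named node, if any.
	NodeConfig(nodeName string) ([]byte, bool)
}

// watchForNewConfig polls for node-specific config and exits when it finds
// config that differs from what the Kubelet started with, so a process
// supervisor can restart the Kubelet with the new config.
func watchForNewConfig(src configSource, nodeName string, current []byte, interval time.Duration) {
	for range time.Tick(interval) {
		latest, ok := src.NodeConfig(nodeName)
		if ok && !bytes.Equal(latest, current) {
			log.Printf("found new config for node %q, exiting", nodeName)
			os.Exit(0)
		}
	}
}
```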
Automatic merge from submit-queue
[GarbageCollector] Allow per-resource default garbage collection behavior
What's the bug:
When deleting an RC with `deleteOptions.OrphanDependents==nil`, the garbage collector is supposed to treat it as `deleteOptions.OrphanDependents==true` and orphan the pods created by it. But the apiserver is not doing that.
What's in the PR:
Allow each resource to specify its default garbage collection behavior in the registry. For example, the RC registry's default GC behavior is Orphan, and the Pod registry's default GC behavior is CascadingDeletion.
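A minimal sketch of the resolution logic (the types and interface here are illustrative, not the real registry API):
```
package registry

// GCPolicy is a hypothetical per-resource default garbage collection policy.
type GCPolicy int

const (
	Orphan GCPolicy = iota
	CascadingDeletion
)

// gcDefaulter lets each resource's registry report its default policy.
type gcDefaulter interface {
	DefaultGarbageCollectionPolicy() GCPolicy
}

// effectivePolicy resolves a delete request: an explicit
// deleteOptions.OrphanDependents wins; otherwise the registry default applies.
func effectivePolicy(orphanDependents *bool, registry gcDefaulter) GCPolicy {
	if orphanDependents != nil {
		if *orphanDependents {
			return Orphan
		}
		return CascadingDeletion
	}
	return registry.DefaultGarbageCollectionPolicy()
}
```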
Automatic merge from submit-queue
federation: Adding support for namespace admission controls in federation-apiserver
Now that we have namespaces in the federation apiserver, we can support namespace admission controls.
There are 3 of these:
namespace/autoprovision, namespace/exists and namespace/lifecycle.
namespace/autoprovision and namespace/exists should be deprecated in kubernetes (https://github.com/kubernetes/kubernetes/issues/31195), so this adds support for namespace/lifecycle to federation-apiserver.
As in kube-apiserver, namespace/lifecycle is enabled by default.
```release-note
Action required: If you have a running federation control plane, you will have to ensure that for all federation resources, the corresponding namespace exists in the federation control plane.
federation-apiserver now supports NamespaceLifecycle admission control, which is enabled by default. Set the --admission-control flag on the server to change that.
```
cc @kubernetes/sig-cluster-federation @quinton-hoole
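For illustration, the essence of the namespace/lifecycle check (a hypothetical sketch, not the real admission plugin):
```
package admission

import "fmt"

// admitCreate sketches what NamespaceLifecycle admission enforces on create:
// the request's namespace must already exist. nsExists is a hypothetical
// lookup against the (federation) apiserver's namespace store.
func admitCreate(nsExists func(name string) bool, namespace string) error {
	if namespace == "" {
		return nil // cluster-scoped resources are not gated on a namespace
	}
	if !nsExists(namespace) {
		return fmt.Errorf("namespace %q does not exist", namespace)
	}
	return nil
}
```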
1. The /validate service does not exist, so remove the test for it and add tests for some services that actually do exist
2. The namespace does not exist, so this will always return NotFound
Note: DoRaw() ignores the StatusCode.
This is in preparation for the next commit.
Automatic merge from submit-queue
[e2e density test] Fix unnecessary Delete RC requests when not running latency test
As the following code block
https://github.com/kubernetes/kubernetes/blob/master/test/e2e/density.go#L666-L670
shows, after running each density test case, it attempts to delete the "additional replication controllers" even when there are **no additional replication controllers**.
When we are not running the latency test, the API server will return a 404 error code. So I propose to move the above code block inside the conditional statement `if itArg.runLatencyTest { }`, like this:
```
if itArg.runLatencyTest {
...
for i := 1; i <= nodeCount; i++ {
name := additionalPodsPrefix + "-" + strconv.Itoa(i)
c.ReplicationControllers(ns).Delete(name, nil)
}
}
```
In this way, removing the RCs will be executed only if we set `itArg.runLatencyTest` to `true`. This avoids posting unnecessary requests to the API server.
Issue is #30977
Automatic merge from submit-queue
[e2e test] Fix e2e test pause image hard code
Use `framework.GetPauseImageName(f.Client)` instead of a hard-coded value (such as `"gcr.io/google_containers/pause-amd64:3.0"`) to represent the pause image name.
Related issue is #30967
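For illustration, a minimal sketch of the substitution (the helper and its surrounding container spec are hypothetical; only the image lookup comes from this PR):
```
package e2e

import (
	"k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/test/e2e/framework"
)

// pauseContainer builds a pause container for an e2e test.
func pauseContainer(f *framework.Framework) api.Container {
	return api.Container{
		Name: "pause",
		// Before: Image was hard-coded to "gcr.io/google_containers/pause-amd64:3.0".
		// After: resolve the pause image that matches the cluster under test.
		Image: framework.GetPauseImageName(f.Client),
	}
}
```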
Automatic merge from submit-queue
Node E2E: Remove fatal error in e2e_node_suite_test.go
Addresses https://github.com/kubernetes/kubernetes/issues/30779#issuecomment-240532190.
Currently we run node e2e tests in parallel, and ginkgo makes sure that we only initialize the test framework on the first test node.
However, we throw fatal errors during that initialization. Once a fatal error occurs, the first test node dies immediately without reporting any error, and the other nodes exit with a meaningless error because the first node is gone.
If kubelet start fails, we'll get something like:
```
------------------------------
Failure [132.485 seconds]
[BeforeSuite] BeforeSuite
/usr/local/google/home/lantaol/workspace/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:138
BeforeSuite on Node 1 failed
/usr/local/google/home/lantaol/workspace/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:138
------------------------------
......
------------------------------
Failure [132.465 seconds]
[BeforeSuite] BeforeSuite
/usr/local/google/home/lantaol/workspace/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:138
BeforeSuite on Node 1 failed
/usr/local/google/home/lantaol/workspace/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:138
```
This PR replaces these fatal errors with gomega assertions. With this PR, we'll get:
```
Failure [132.482 seconds]
[BeforeSuite] BeforeSuite
/usr/local/google/home/lantaol/workspace/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:138
should be able to start node services.
Expected success, but got an error:
<*errors.errorString | 0xc8203351b0>: {
s: "failed to run server start command \"/tmp/ginkgo869068712/e2e_node.test --run-services-mode --server-start-timeout 2m0s --report-dir --node-name lantaol0.mtv.corp.google.com --disable-kubenet=true --cgroups-per-qos=false --manifest-path /tmp/node-e2e-pod221291440 --eviction-hard memory.available<250Mi\": exit status 255",
}
failed to run server start command "/tmp/ginkgo869068712/e2e_node.test --run-services-mode --server-start-timeout 2m0s --report-dir --node-name lantaol0.mtv.corp.google.com --disable-kubenet=true --cgroups-per-qos=false --manifest-path /tmp/node-e2e-pod221291440 --eviction-hard memory.available<250Mi": exit status 255
/usr/local/google/home/lantaol/workspace/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:117
------------------------------
Failure [132.485 seconds]
[BeforeSuite] BeforeSuite
/usr/local/google/home/lantaol/workspace/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:138
BeforeSuite on Node 1 failed
/usr/local/google/home/lantaol/workspace/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:138
------------------------------
......
------------------------------
Failure [132.465 seconds]
[BeforeSuite] BeforeSuite
/usr/local/google/home/lantaol/workspace/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:138
BeforeSuite on Node 1 failed
/usr/local/google/home/lantaol/workspace/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:138
```
This is much more informative.
/cc @kubernetes/sig-node
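For illustration, the shape of the change (startServices and the surrounding file are stand-ins; the assertion message matches the output above):
```
package e2enode

import (
	"errors"

	. "github.com/onsi/gomega"
)

// startServices stands in for the suite's service startup; illustrative only.
func startServices() error { return errors.New("exit status 255") }

// Before: a fatal error kills the Ginkgo node before the failure is reported,
// so the other parallel nodes exit with the meaningless errors shown above.
func beforeSuiteOld() {
	if err := startServices(); err != nil {
		panic(err) // the real code logged fatally; the effect is the same
	}
}

// After: a gomega assertion fails BeforeSuite and surfaces the real error
// in the report, as in the second log above.
func beforeSuiteNew() {
	err := startServices()
	Expect(err).NotTo(HaveOccurred(), "should be able to start node services.")
}
```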