When the kubelet is restarted, it now removes node resources for huge
page sizes that are no longer supported. This is required when:
- the node disables huge pages
- the default huge page size is changed on older versions of Linux
(because they then support only the newly set default)
- a software update changes which sizes are supported (e.g. by changing
boot parameters)
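A minimal sketch of the cleanup, with hypothetical names and simplified
types (the real kubelet operates on the node's v1.ResourceList):
```go
package main

import (
	"fmt"
	"strings"
)

// removeUnsupportedHugePages drops huge page resources whose page size
// is no longer supported by the machine. Names and shapes here are
// illustrative simplifications.
func removeUnsupportedHugePages(capacity map[string]string, supported map[string]bool) {
	for name := range capacity {
		if strings.HasPrefix(name, "hugepages-") && !supported[name] {
			delete(capacity, name)
		}
	}
}

func main() {
	capacity := map[string]string{
		"cpu":           "4",
		"hugepages-2Mi": "1Gi",
		"hugepages-1Gi": "0",
	}
	supported := map[string]bool{"hugepages-2Mi": true}
	removeUnsupportedHugePages(capacity, supported)
	fmt.Println(capacity) // hugepages-1Gi is removed; hugepages-2Mi remains
}
```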
The docker folks added a NumCPU implementation for Windows that
supports hot-plugging of CPUs. The implementation uses
GetProcessAffinityMask to check which CPUs are active:
3707a76921
The golang "runtime" package has also been using GetProcessAffinityMask
since 1.6 beta1:
6410e67a1e
So we don't seem to need sysinfo.NumCPU from docker/docker.
(Note that this PR is an effort to move away from dependencies on
docker/docker.)
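For illustration, counting active CPUs from an affinity mask boils down
to a popcount; the numCPU helper below is hypothetical, standing in for
code that obtains the mask from the Windows GetProcessAffinityMask API:
```go
package main

import (
	"fmt"
	"math/bits"
)

// numCPU counts the set bits in a process affinity mask; each set bit
// is a CPU the process may currently run on.
func numCPU(processAffinityMask uint64) int {
	return bits.OnesCount64(processAffinityMask)
}

func main() {
	fmt.Println(numCPU(0b1011)) // 3 active CPUs
}
```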
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
containerMap is used by the CPU Manager to store information about all
containers on the node. It provides a mapping from
(pod, container) -> containerID for all containers in a pod, and can be
reused by other components in pkg/kubelet/cm that need to track changes
to all containers on the node.
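A rough sketch of the structure, with simplified string types (the real
implementation lives in pkg/kubelet/cm):
```go
package main

import "fmt"

// containerMap maps (pod UID, container name) -> container ID.
type containerMap map[string]map[string]string

// Add records the container ID for a (pod, container) pair.
func (cm containerMap) Add(podUID, containerName, containerID string) {
	if cm[podUID] == nil {
		cm[podUID] = make(map[string]string)
	}
	cm[podUID][containerName] = containerID
}

// GetContainerID looks up the container ID for a (pod, container) pair.
func (cm containerMap) GetContainerID(podUID, containerName string) (string, bool) {
	id, ok := cm[podUID][containerName]
	return id, ok
}

func main() {
	cm := containerMap{}
	cm.Add("pod-uid-1", "nginx", "containerd://abc123")
	id, ok := cm.GetContainerID("pod-uid-1", "nginx")
	fmt.Println(id, ok) // containerd://abc123 true
}
```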
Signed-off-by: Byonggon Chun <bg.chun@samsung.com>
Do a conversion from cgroups v1 limits to cgroups v2. For example,
cpu.shares on cgroups v1 has a range of [2-262144], while the
equivalent on cgroups v2 is cpu.weight, which uses a range of
[1-10000].
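As a sketch, the conversion is a linear interpolation between the two
ranges; the constants mirror the mapping commonly used by OCI runtimes,
but treat this as illustrative:
```go
package main

import "fmt"

// convertCPUShares maps a cgroups v1 cpu.shares value in [2, 262144]
// onto the cgroups v2 cpu.weight range [1, 10000].
func convertCPUShares(shares uint64) uint64 {
	if shares == 0 {
		return 0 // unset stays unset
	}
	return 1 + ((shares-2)*9999)/262142
}

func main() {
	fmt.Println(convertCPUShares(2))      // 1     (v1 minimum -> v2 minimum)
	fmt.Println(convertCPUShares(1024))   // 39    (the v1 default)
	fmt.Println(convertCPUShares(262144)) // 10000 (v1 maximum -> v2 maximum)
}
```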
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
On Windows, the podAdmitHandler returned by the
GetAllocateResourcesPodAdmitHandler() func and registered by the Kubelet
is nil.
We implement a noopWindowsResourceAllocator that admits any pod on
Windows in order to be consistent with the original implementation.
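A minimal sketch of such an allocator, with hypothetical stand-ins for
the kubelet's lifecycle types:
```go
package main

import "fmt"

// Hypothetical stand-ins for the kubelet's lifecycle types.
type PodAdmitAttributes struct{} // pod being admitted, other pods, ...
type PodAdmitResult struct {
	Admit bool
}

// noopWindowsResourceAllocator admits every pod, giving Windows the
// same effective behavior as before.
type noopWindowsResourceAllocator struct{}

func (noopWindowsResourceAllocator) Admit(_ *PodAdmitAttributes) PodAdmitResult {
	return PodAdmitResult{Admit: true}
}

func main() {
	var a noopWindowsResourceAllocator
	fmt.Println(a.Admit(&PodAdmitAttributes{}).Admit) // true
}
```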
When we clobber PodIP we should also overwrite PodIPs rather than rely
on the apiserver to fix it for us - this caused the Kubelet status
manager to report a long stream of the following warnings when it tried
to reconcile a host network pod:
```
I0309 19:41:05.283623 1326 status_manager.go:846] Pod status is inconsistent with cached status for pod "machine-config-daemon-jvwz4_openshift-machine-config-operator(61176279-f752-4e1c-ac8a-b48f0a68d54a)", a reconciliation should be triggered:
  &v1.PodStatus{
    ... // 5 identical fields
    HostIP:    "10.0.32.2",
    PodIP:     "10.0.32.2",
-   PodIPs:    []v1.PodIP{{IP: "10.0.32.2"}},
+   PodIPs:    []v1.PodIP{},
    StartTime: s"2020-03-09 19:41:05 +0000 UTC",
    InitContainerStatuses: nil,
    ... // 3 identical fields
  }
```
With the changes to the apiserver, this only happens once, but it is
still a bug.
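A sketch of the intended fix, with simplified copies of the
v1.PodStatus field shapes from k8s.io/api/core/v1:
```go
package main

import "fmt"

// Simplified copies of the relevant v1.PodStatus fields.
type PodIP struct{ IP string }
type PodStatus struct {
	PodIP  string
	PodIPs []PodIP
}

// setHostNetworkPodIP clobbers the singular PodIP and keeps the plural
// PodIPs consistent instead of relying on apiserver defaulting.
func setHostNetworkPodIP(status *PodStatus, hostIP string) {
	status.PodIP = hostIP
	status.PodIPs = []PodIP{{IP: hostIP}}
}

func main() {
	s := &PodStatus{}
	setHostNetworkPodIP(s, "10.0.32.2")
	fmt.Printf("%+v\n", *s) // {PodIP:10.0.32.2 PodIPs:[{IP:10.0.32.2}]}
}
```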
Most of these could have been refactored automatically, but the result
would have been uglier: the unsophisticated tooling left lots of
unnecessary struct -> pointer -> struct transitions.
HPA needs metrics for exited init containers before it will
take action. By setting memory and CPU usage to zero for any
containers that cAdvisor didn't provide statistics for, we
are assured that HPA will be able to correctly calculate
pod resource usage.
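A sketch of the zero-defaulting, using hypothetical types in place of
the stats summary structs:
```go
package main

import "fmt"

// usage is a hypothetical stand-in for per-container usage stats.
type usage struct{ CPUNanoCores, MemoryBytes uint64 }

// usageFor returns zero usage for containers that cAdvisor no longer
// reports (such as exited init containers), so summed pod usage is
// always computable.
func usageFor(stats map[string]usage, container string) usage {
	if u, ok := stats[container]; ok {
		return u
	}
	return usage{} // zero, rather than omitting the container entirely
}

func main() {
	stats := map[string]usage{"app": {CPUNanoCores: 250_000_000, MemoryBytes: 64 << 20}}
	fmt.Println(usageFor(stats, "app"))     // real stats
	fmt.Println(usageFor(stats, "init-db")) // {0 0}: exited init container
}
```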
The status manager syncBatch() method processes the current state
of the cache, which should include all entries in the channel. Flush
the channel before we run a batch sync, to avoid unnecessary work and
to unblock pod workers when the node is congested.
Discovered while investigating long shutdown intervals on the node
where the status channel stayed full for tens of seconds.
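A sketch of the non-blocking drain, with a hypothetical channel type:
```go
package main

import "fmt"

type podStatusUpdate struct{} // hypothetical queued notification

// drainStatusChannel drains queued notifications without blocking: the
// cache already holds the state that syncBatch reads, so the queued
// entries carry no extra information, and draining unblocks senders.
func drainStatusChannel(ch <-chan podStatusUpdate) {
	for {
		select {
		case <-ch:
			// discard; the state is already in the cache
		default:
			return
		}
	}
}

func main() {
	ch := make(chan podStatusUpdate, 2)
	ch <- podStatusUpdate{}
	ch <- podStatusUpdate{}
	drainStatusChannel(ch)
	fmt.Println(len(ch)) // 0
}
```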
Wrap the select statement in a for loop so that each message does not
trigger another invocation of the wait.Forever closure.
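A runnable sketch of the pattern; forever here is a stand-in for
wait.Forever from k8s.io/apimachinery, and the channels are
illustrative:
```go
package main

import (
	"fmt"
	"os"
	"time"
)

// forever stands in for wait.Forever: call f, sleep for period, repeat.
// Before the change, the select lived directly in f, so every single
// message returned from the closure and re-entered the wrapper.
func forever(f func(), period time.Duration) {
	for {
		f()
		time.Sleep(period)
	}
}

func main() {
	updates := make(chan string, 2)
	updates <- "podA"
	updates <- "podB"
	done := time.After(50 * time.Millisecond)

	forever(func() {
		for { // the added loop: handle many messages per closure call
			select {
			case u := <-updates:
				fmt.Println("handled", u)
			case <-done:
				os.Exit(0) // demo only; the real loop runs forever
			}
		}
	}, time.Second)
}
```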
When constructing the API status of a pod, no containers should be
started if the pod is marked for deletion. Previously, if a container
inside a terminating pod failed to start due to a container runtime
error (which populates the reasonCache), the reasonCache would remain
populated (it is only updated by syncPod for non-terminating pods) and
the delete action on the pod would be delayed until the reasonCache
entry expired due to activity from other pods.
This dramatically reduces the amount of time the Kubelet waits to
delete pods that are terminating and encountered a container runtime
error.
After a pod reaches a terminal state and all containers are complete
we can delete the pod from the API server. The dispatchWork method
needs to wait for all container status to be available before invoking
delete. Even after the worker stops, status updates will continue to
be delivered and the sync handler will continue to sync the pods, so
dispatchWork gets multiple opportunities to see status.
The previous code assumed that a pod in Failed or Succeeded had no
running containers, but an evicted or deleted pod could still have
running containers whose status needs to be reported.
This modifies the earlier test to guarantee that the "fallback" exit
code 137 is never reported, matching the expectation that all pods
exit with valid status for all containers (unless some exceptional
failure like eviction occurs while the test is running).
The kubelet must not allow a container that was reported failed in a
restartPolicy=Never pod to be reported to the apiserver as success.
If a client deletes a restartPolicy=Never pod, the dispatchWork and
status manager race to update the container status. When dispatchWork
(specifically podIsTerminated) returns true, it means all containers
are stopped, which means the container status is accurate. However,
the TerminatePod method then clears this status. This results in a
pod that has been reported with status.phase=Failed getting reset to
status.phase=Succeeded, which is a violation of the guarantees around
terminal phase.
Ensure the Kubelet never reports that a container succeeded when it
never ran, by guarding the TerminatePod loop from ever reporting exit
code 0 in the absence of container status.
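A sketch of the guard with simplified types; 137 is the fallback
mentioned above, and the point is that a missing status never becomes
exit code 0:
```go
package main

import "fmt"

// terminated is a simplified stand-in for a recorded termination state.
type terminated struct{ ExitCode int32 }

// terminalExitCode preserves any recorded exit code and never
// fabricates a 0 ("success") for a container with no status at all.
func terminalExitCode(recorded *terminated) int32 {
	if recorded != nil {
		return recorded.ExitCode // keep the real outcome
	}
	return 137 // fallback for a container that never reported status
}

func main() {
	fmt.Println(terminalExitCode(&terminated{ExitCode: 1})) // 1, stays Failed
	fmt.Println(terminalExitCode(nil))                      // 137, never 0
}
```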