With the graduation of device plugins to GA in 1.26, the feature gate is
enabled by default, so the `devicePluginEnabled` field no longer needs to
be passed at the time of Container Manager creation.
In addition to that, we remove the `ManagerStub` as it is no longer
needed.
Signed-off-by: Swati Sehgal <swsehgal@redhat.com>
Tests scheduler enforcement of the ReadWriteOncePod PVC access mode.
- Creates a pod using a PVC with ReadWriteOncePod
- Creates a second pod using the same PVC
- Observes the second pod fails to schedule because the PVC is in use
- Deletes the first pod
- Observes the second pod successfully schedules
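For reference, a minimal sketch of the kind of PVC such a test creates, using
the core/v1 Go types as of 1.26 (the object name, namespace handling and size
here are illustrative, not the actual e2e fixture):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// readWriteOncePodPVC builds a claim whose only access mode is
// ReadWriteOncePod, so the scheduler rejects any second pod that references
// the same claim while the first pod is still using it.
func readWriteOncePodPVC(namespace string) *corev1.PersistentVolumeClaim {
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "rwop-pvc", Namespace: namespace},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOncePod},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("1Gi"),
				},
			},
		},
	}
}
```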
When a volume is already mounted with an unexpected SELinux label,
kubelet must unmount it first and then mount it back with the expected one.
Report an error to the user, just in case the unmount takes too long.
In theory, this error should not happen too often, because two Pods with
different SELinux labels will not enter the Desired State of World, see
dsw.AddPodToVolume. DSW and ASW SELinux labels can differ only when
a volume has been deleted from DSW (= the Pod was deleted) or a volume was
reconstructed after kubelet restart. In both cases, the volume manager should
unmount the volume quickly.
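A hypothetical sketch of that check (function and parameter names are made up
here; this is not the actual volume manager code):

```go
package sketch

import "fmt"

// verifySELinuxContext compares the context a volume is currently mounted
// with against the context the pod needs. While they differ, the error below
// is what a user would see if the unmount/remount cycle takes too long; the
// reconciler is expected to clear it quickly by remounting the volume.
func verifySELinuxContext(volumeName, actualContext, desiredContext string) error {
	if actualContext == desiredContext {
		return nil
	}
	return fmt.Errorf(
		"volume %s is mounted with SELinux context %q, waiting for it to be unmounted and remounted with %q",
		volumeName, actualContext, desiredContext)
}
```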
In PodExistsInVolume, with volumeObj.seLinuxMountContext != nil we know that
the volume has previously been mounted with a given SELinuxMountContext.
Either it was mounted by this kubelet and we know it's correct, or it was
mounted by a previous instance of kubelet and the context has been
reconstructed from the filesystem. In both cases, the actual context is
correct, regardless of whether the volume plugin or PV access mode supports
SELinux mounts.
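A rough sketch of that reasoning (the type and function names are simplified
stand-ins, not the actual Actual State of World structures):

```go
package sketch

// mountedVolume mimics the relevant part of an ASW volume entry: a non-nil
// seLinuxMountContext means the volume is already mounted with that context,
// either by this kubelet or reconstructed from the filesystem after restart.
type mountedVolume struct {
	seLinuxMountContext *string
}

// needsRemount reports whether the volume has to be unmounted and mounted
// again to get the desired context. A nil recorded context means SELinux
// mount options are not used for this volume, so there is nothing to compare.
func needsRemount(vol mountedVolume, desiredContext string) bool {
	if vol.seLinuxMountContext == nil {
		return false
	}
	// The recorded context is trusted as the one actually on disk, regardless
	// of what the volume plugin or PV access mode claims to support.
	return *vol.seLinuxMountContext != desiredContext
}
```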
.. that are not migrated to CSI in 1.26 *and* are based on a block device.
NFS and CephFS may expose the same volume through several PVs, and then
mounting with -o context won't work.
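Purely as an illustration of why that matters (a hypothetical helper, not
kubelet code): the SELinux label ends up as a context= mount option, which
only makes sense for a mount that backs exactly one PV:

```go
package sketch

import "fmt"

// withSELinuxMountOption appends the label as a "-o context=..." style mount
// option. This works for a block device that backs a single PV; shared
// filesystems like NFS or CephFS can back several PVs with one export, where
// forcing a single context per mount is not possible.
func withSELinuxMountOption(options []string, seLinuxLabel string) []string {
	if seLinuxLabel == "" {
		return options
	}
	return append(options, fmt.Sprintf("context=%q", seLinuxLabel))
}
```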
The Desired State of World can require a different SELinux mount context than
the one in the Actual State of World, and that's perfectly OK, for example
when a user changes the SELinux context of Pods or when the context is
reconstructed after kubelet restart.
Don't spam the log and don't report errors to the user as events - the
reconciler will do the right thing and unmount the old volume (with the wrong
context) and mount it again in the next reconciliation. It's not an error,
it's the expected workflow.
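A hypothetical sketch of that behavior (not the actual reconciler code; the
names are illustrative), logging at a higher verbosity instead of emitting an
event:

```go
package sketch

import "k8s.io/klog/v2"

// noteSELinuxRemount records a DSW/ASW context mismatch without raising an
// event: the regular unmount + mount cycle fixes it in the next
// reconciliation, so this is the expected workflow rather than an error.
func noteSELinuxRemount(volumeName, oldContext, newContext string) {
	klog.V(4).InfoS("Volume will be remounted with a different SELinux context",
		"volumeName", volumeName,
		"oldContext", oldContext,
		"newContext", newContext)
}
```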
In order to improve the observability of the cpumanager,
add and populate metrics to track whether the combination of
the kubelet configuration and the pod spec would trigger
exclusive core allocation and pinning.
We should avoid leaking any node/machine-specific information
(e.g. core IDs, even though this is admittedly an extreme example);
tracking these metrics seems to be a good first step, because
it allows us to get feedback without exposing details.
Signed-off-by: Francesco Romani <fromani@redhat.com>
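As a sketch of the idea (metric names are hypothetical, and this uses the
plain Prometheus client rather than the kubelet metrics framework), two
counters are enough to see how often exclusive pinning kicks in without
exposing anything node-specific:

```go
package sketch

import "github.com/prometheus/client_golang/prometheus"

var (
	// Containers served from the shared CPU pool (no pinning).
	sharedPoolAllocations = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "cpu_manager_shared_pool_allocations_total",
		Help: "Containers that received CPUs from the shared pool.",
	})
	// Containers that got exclusively allocated, pinned cores.
	exclusivePinnedAllocations = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "cpu_manager_exclusive_pinned_allocations_total",
		Help: "Containers that received exclusively pinned CPU cores.",
	})
)

func init() {
	prometheus.MustRegister(sharedPoolAllocations, exclusivePinnedAllocations)
}

// recordAllocation bumps the right counter depending on whether the kubelet
// configuration (static policy) combined with the pod spec (guaranteed QoS,
// integer CPU request) triggered exclusive core allocation and pinning.
func recordAllocation(exclusive bool) {
	if exclusive {
		exclusivePinnedAllocations.Inc()
		return
	}
	sharedPoolAllocations.Inc()
}
```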