Other validation errors, like using hostNetwork, don't put
pod.spec.HostNetwork in the error message.
Let's remove it to align with that.
Signed-off-by: Rodrigo Campos <rodrigoca@microsoft.com>
Now if a pod tries to use user namespaces (hostUsers: false) and a
volume device, it will see this error:
$ kubectl apply -f pod.yaml
...
* spec.ephemeralContainers[0].volumeDevices: Forbidden: when `pod.Spec.HostUsers` is false
* spec.initContainers[0].volumeDevices: Forbidden: when `pod.Spec.HostUsers` is false
* spec.containers[0].volumeDevices: Forbidden: when `pod.Spec.HostUsers` is false
Note that if a pod was already created with volumeDevices and user namespaces,
we still allow modifications to that object.
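A minimal sketch of the shape of this check, assuming the usual field.ErrorList helpers; the real function name and exact wiring differ:

```
package validation

import (
	"k8s.io/apimachinery/pkg/util/validation/field"
	core "k8s.io/kubernetes/pkg/apis/core"
)

// validateVolumeDevicesWithUserNamespace is a hypothetical helper illustrating
// the rule: volumeDevices are forbidden when hostUsers is explicitly false.
func validateVolumeDevicesWithUserNamespace(spec *core.PodSpec, fldPath *field.Path) field.ErrorList {
	allErrs := field.ErrorList{}
	// hostUsers defaults to true; only reject when it is explicitly set to false.
	if spec.SecurityContext == nil || spec.SecurityContext.HostUsers == nil || *spec.SecurityContext.HostUsers {
		return allErrs
	}
	forbid := func(devices []core.VolumeDevice, path *field.Path) {
		if len(devices) > 0 {
			allErrs = append(allErrs, field.Forbidden(path.Child("volumeDevices"), "when `pod.Spec.HostUsers` is false"))
		}
	}
	for i, c := range spec.Containers {
		forbid(c.VolumeDevices, fldPath.Child("containers").Index(i))
	}
	for i, c := range spec.InitContainers {
		forbid(c.VolumeDevices, fldPath.Child("initContainers").Index(i))
	}
	for i, c := range spec.EphemeralContainers {
		forbid(c.VolumeDevices, fldPath.Child("ephemeralContainers").Index(i))
	}
	return allErrs
}
```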
Signed-off-by: Rodrigo Campos <rodrigoca@microsoft.com>
The aggregated container hugepage limits cannot be greater than the pod-level limits.
This was already enforced through the requests defaulted from the specified
limits, but the error did not make it clear that the rule covers both hugepage requests and limits.
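Roughly, the rule looks like this (a sketch with a hypothetical helper name, assuming pod-level resources are surfaced as spec.Resources):

```
package validation

import (
	"strings"

	"k8s.io/apimachinery/pkg/api/resource"
	"k8s.io/apimachinery/pkg/util/validation/field"
	core "k8s.io/kubernetes/pkg/apis/core"
)

// validateHugePageLimits is a hypothetical helper: for every hugepages-<size>
// resource, the sum of container limits must not exceed the pod-level limit.
func validateHugePageLimits(spec *core.PodSpec, fldPath *field.Path) field.ErrorList {
	allErrs := field.ErrorList{}
	if spec.Resources == nil {
		return allErrs
	}
	for name, podLimit := range spec.Resources.Limits {
		if !strings.HasPrefix(string(name), "hugepages-") {
			continue
		}
		sum := resource.NewQuantity(0, resource.BinarySI)
		for _, c := range spec.Containers {
			if l, ok := c.Resources.Limits[name]; ok {
				sum.Add(l)
			}
		}
		if sum.Cmp(podLimit) > 0 {
			allErrs = append(allErrs, field.Invalid(fldPath.Child("resources", "limits"), sum.String(),
				"aggregate container hugepage limits must not be greater than the pod-level limit"))
		}
	}
	return allErrs
}
```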
* Add FileKeyRef field and struct to the Pod API (sketched below)
* Add the implementation code in the kubelet.
* Add validation code
* Add basic functionality e2e tests
* Add code to drop disabled pod fields
* Update go.mod
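A rough sketch of the shape of the API addition from the first item; the names below are assumptions modeled on the existing ConfigMapKeyRef/SecretKeyRef selectors, not the authoritative definition:

```
// Hypothetical illustration: FileKeyRef would hang off EnvVarSource alongside
// ConfigMapKeyRef and SecretKeyRef.
type FileKeySelector struct {
	// VolumeName is the name of the volume the env file lives in.
	VolumeName string
	// Path of the file within that volume.
	Path string
	// Key of the environment variable to read from the file.
	Key string
	// Optional controls whether a missing file or key is an error.
	Optional *bool
}
```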
* fix: improve the pod-level request validation
The pod-level request must not be smaller than the aggregated container
requests. The fix is to skip resources that are not supported at the pod
level, for better efficiency (sketched below).
A minor unit test is also added.
* Align with the limit-check section by using the pod spec to check for
existence.
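A minimal sketch of that skip, with hypothetical helper and set names:

```
package validation

import (
	"k8s.io/apimachinery/pkg/api/resource"
	"k8s.io/apimachinery/pkg/util/validation/field"
	core "k8s.io/kubernetes/pkg/apis/core"
)

// supportedPodLevelResources is a stand-in for the real set of resources that
// may be specified at the pod level.
var supportedPodLevelResources = map[core.ResourceName]bool{
	core.ResourceCPU:    true,
	core.ResourceMemory: true,
}

func validatePodLevelRequests(spec *core.PodSpec, fldPath *field.Path) field.ErrorList {
	allErrs := field.ErrorList{}
	if spec.Resources == nil {
		return allErrs
	}
	for name, podRequest := range spec.Resources.Requests {
		// Skip resources that cannot be set at the pod level instead of
		// aggregating them for nothing.
		if !supportedPodLevelResources[name] {
			continue
		}
		aggregated := resource.NewQuantity(0, resource.DecimalSI)
		for _, c := range spec.Containers {
			if r, ok := c.Resources.Requests[name]; ok {
				aggregated.Add(r)
			}
		}
		if aggregated.Cmp(podRequest) > 0 {
			allErrs = append(allErrs, field.Invalid(fldPath.Child("resources", "requests"),
				podRequest.String(), "must not be smaller than the aggregated container requests"))
		}
	}
	return allErrs
}
```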
Only apply the relaxed name validation when the RelaxedServiceNameValidation feature gate is enabled.
Also remove name validation on Service updates, as the name is
immutable.
Move ValidateObjectMeta out of ValidateService
Put it into ValidateServiceCreate(), making the code paths:
```
pkg/registry/core/service/strategy.go
Validate       -> validation.ValidateServiceCreate -> ValidateObjectMeta
                                                   -> ValidateService
ValidateUpdate -> validation.ValidateServiceUpdate -> ValidateObjectMetaUpdate
                                                   -> ValidateService
```
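A minimal sketch of that split; ValidateService and ValidateServiceName stand for the existing helpers in the package, and the exact signatures in the real code may differ:

```
package validation

import (
	apimachineryvalidation "k8s.io/apimachinery/pkg/api/validation"
	"k8s.io/apimachinery/pkg/util/validation/field"
	core "k8s.io/kubernetes/pkg/apis/core"
)

// ValidateServiceCreate validates the object metadata (including the name)
// and then the service itself.
func ValidateServiceCreate(service *core.Service) field.ErrorList {
	allErrs := apimachineryvalidation.ValidateObjectMeta(&service.ObjectMeta, true, ValidateServiceName, field.NewPath("metadata"))
	return append(allErrs, ValidateService(service)...)
}

// ValidateServiceUpdate skips name validation (the name is immutable) and
// validates only the metadata update plus the service itself.
func ValidateServiceUpdate(service, oldService *core.Service) field.ErrorList {
	allErrs := apimachineryvalidation.ValidateObjectMetaUpdate(&service.ObjectMeta, &oldService.ObjectMeta, field.NewPath("metadata"))
	return append(allErrs, ValidateService(service)...)
}
```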
The other resources I checked pass update objects through both
ValidateObjectMeta and ValidateObjectMetaUpdate, so this breaks that
pattern, but it seems to be how the
ValidateObjectMeta/ValidateObjectMetaUpdate functions are designed to
operate.
If someone gains the ability to create static pods, they might try to use that
ability to run code which gets access to the resources associated with some
existing claim which was previously allocated for some other pod. Such an
attempt already fails because the claim status tracks which pods are allowed to
use the claim, the static pod is not in that list, the node is not authorized
to add it, and the kubelet checks that list before starting the pod in
195803cde5/pkg/kubelet/cm/dra/manager.go (L218-L222).
Even if the pod were started, DRA drivers typically manage node-local resources
which can already be accessed via such an attack without involving DRA. DRA
drivers which manage non-node-local resources have to consider access by a
compromised node as part of their threat model.
Nonetheless, it is better to not accept static pods which reference
ResourceClaims or ResourceClaimTemplates in the first place because there
is no valid use case for it.
This is done at different levels for defense in depth:
- configuration validation in the kubelet (see the sketch after this list)
- admission checking of node restrictions
- API validation
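For the kubelet level, a minimal sketch of the kind of check involved (hypothetical function name; the real check lives in static pod config validation):

```
package config

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// validateNoResourceClaims illustrates the rule: a static pod whose spec
// references resource claims (and therefore also ResourceClaimTemplates, via
// spec.resourceClaims entries) is rejected outright.
func validateNoResourceClaims(pod *v1.Pod) error {
	if len(pod.Spec.ResourceClaims) > 0 {
		return fmt.Errorf("static pods may not reference ResourceClaims or ResourceClaimTemplates")
	}
	return nil
}
```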
Co-authored-by: Jordan Liggitt <liggitt@google.com>
Code changes by Jordan, with one small change (resourceClaims -> resourceclaims).
Unit tests by Patrick.
This is needed to make declarative validation clean. Past me thought
this was clever (pointer in the versioned type, non-pointer in the
internal type), but it is just confusing.
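An illustration of the mismatch, with entirely made-up field names:

```
package example

// Versioned API type: a pointer field can represent "unset".
type WidgetSpecV1 struct {
	Replicas *int32 `json:"replicas,omitempty"`
}

// The internal type used a plain value, so conversion and validation code had
// to special-case the nil/zero distinction; making both sides agree avoids that.
type WidgetSpec struct {
	Replicas int32
}
```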