KYAML is a strict subset of YAML, which is sort of halfway between YAML
and JSON. It has the following properties:
* Does not depend on whitespace (easier to text-patch and template).
* Always quotes value strings (no ambiguity around things like "no").
* Allows quoted keys, but does not require them, and only quotes them if
they are not obviously safe (e.g. "no" would always be quoted).
* Always uses {} for structs and maps (no more obscure errors about
mapping values).
* Always uses [] for lists (no more trying to figure out if a dash
changes the meaning).
* When printing, it includes a header which makes it clear this is YAML
and not ill-formed JSON.
* Allows trailing commas.
* Allows comments.
* Tries to economize on vertical space by "cuddling" some kinds of
brackets together.
* Retains comments.
Examples:
A struct:
```yaml
metadata: {
  creationTimestamp: "2024-12-11T00:10:11Z",
  labels: {
    app: "hostnames",
  },
  name: "hostnames",
  namespace: "default",
  resourceVersion: "15231643",
  uid: "f64dbcba-9c58-40b0-bbe7-70495efb5202",
}
```
A list of primitives:
```yaml
ipFamilies: [
  "IPv4",
  "IPv6",
]
```
A list of structs:
```yaml
ports: [{
  port: 80,
  protocol: "TCP",
  targetPort: 80,
}, {
  port: 443,
  protocol: "TCP",
  targetPort: 443,
}]
```
A multi-document stream:
```yaml
---
{
foo: "bar",
}
---
{
qux: "zrb",
}
```
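To make the rules above concrete, here is a toy emitter written for this note only (it is not a Kubernetes or KYAML library API): it prints a flat string map with a document header, braces, always-quoted values, and trailing commas.
```go
// Toy sketch only: not an actual Kubernetes or KYAML library API. It shows how
// the rules above combine for a flat map: a document header, {} for the map,
// always-quoted value strings, and trailing commas.
package main

import (
	"fmt"
	"sort"
)

func printKYAMLish(obj map[string]string) {
	fmt.Println("---")
	fmt.Println("{")
	keys := make([]string, 0, len(obj))
	for k := range obj {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		// Values are always quoted, so "no", "on", "1.0" etc. cannot be
		// reinterpreted as booleans or numbers. For brevity this sketch prints
		// all keys bare; real KYAML quotes keys that are not obviously safe.
		fmt.Printf("  %s: %q,\n", k, obj[k])
	}
	fmt.Println("}")
}

func main() {
	printKYAMLish(map[string]string{"app": "hostnames", "enabled": "no"})
}
```
The output quotes the value "no", so it cannot be parsed as a boolean.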
Moving Scheduler interfaces to staging: move the PodInfo and NodeInfo interfaces (together with related types) to the staging repo, leaving the internal implementation in kubernetes/kubernetes/pkg/scheduler.
* Add JSON & YAML output support for kubectl api-resources
Create a separate `PrintFlags` struct within the apiresources.go file
that handles printing only for `kubectl api-resources`, because the existing
output formats, i.e. wide and name, are already implemented
independently of HumanReadableFlags and NamePrintFlags (a rough sketch of the
idea follows after this commit list).
Signed-off-by: Dharmit Shah <shahdharmit@gmail.com>
* Use separate printer type for all options
Signed-off-by: Dharmit Shah <shahdharmit@gmail.com>
* Unit tests for JSON & YAML outputs
Signed-off-by: Dharmit Shah <shahdharmit@gmail.com>
* Separate file for print types
Signed-off-by: Dharmit Shah <shahdharmit@gmail.com>
* Move JSON-YAML tests to separate function
Signed-off-by: Dharmit Shah <shahdharmit@gmail.com>
* Fix broken unit test
Signed-off-by: Dharmit Shah <shahdharmit@gmail.com>
* Unifying JSON & YAML unit test functions
Signed-off-by: Dharmit Shah <shahdharmit@gmail.com>
* Fix linter errors
Signed-off-by: Dharmit Shah <shahdharmit@gmail.com>
* PR feedback and linter again
Signed-off-by: Dharmit Shah <shahdharmit@gmail.com>
---------
Signed-off-by: Dharmit Shah <shahdharmit@gmail.com>
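As referenced in the first commit above, here is a rough sketch of the idea with simplified, hypothetical names (the real apiresources.go wires this through the cli-runtime printers instead): a dedicated flags type handles only json and yaml, while wide and name stay on their pre-existing, independent paths.
```go
// Hypothetical, simplified sketch; not the actual kubectl implementation.
package main

import (
	"encoding/json"
	"fmt"

	"sigs.k8s.io/yaml" // assumed dependency for YAML marshalling
)

type apiResourceRow struct {
	Name       string `json:"name"`
	Namespaced bool   `json:"namespaced"`
	Kind       string `json:"kind"`
}

// printFlags stands in for the dedicated PrintFlags struct: it only knows
// about json and yaml; wide and name keep their existing code paths.
type printFlags struct {
	outputFormat string
}

func (f *printFlags) print(rows []apiResourceRow) error {
	switch f.outputFormat {
	case "json":
		out, err := json.MarshalIndent(rows, "", "  ")
		if err != nil {
			return err
		}
		fmt.Println(string(out))
		return nil
	case "yaml":
		out, err := yaml.Marshal(rows)
		if err != nil {
			return err
		}
		fmt.Print(string(out))
		return nil
	default:
		return fmt.Errorf("format %q is handled by the pre-existing printers", f.outputFormat)
	}
}

func main() {
	f := &printFlags{outputFormat: "yaml"}
	_ = f.print([]apiResourceRow{{Name: "pods", Namespaced: true, Kind: "Pod"}})
}
```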
Remove context.TODO and context.Background
Fix linter error in volume_manager_test
Fix QF1008: Could remove embedded field "ObjectMeta" from selector
Remove the extra code change
Remove the extra change
Update the NewTestContext
As part of PR 132028 we added more e2e test coverage to validate
the fix and to check, as much as possible, that there are no regressions.
The issue and the fix become evident largely when inspecting
memory allocation with the Memory Manager static policy enabled.
Quoting the commit message of bc56d0e45a:
```
The podresources API List implementation uses the internal data of the
resource managers as source of truth.
Looking at the implementation here:
https://github.com/kubernetes/kubernetes/blob/v1.34.0-alpha.0/pkg/kubelet/apis/podresources/server_v1.go#L60
we take care of syncing the device allocation data before querying the
device manager to return its pod->devices assignment.
This is needed because otherwise the device manager (and all the other
resource managers) would do the cleanup asynchronously, so the `List` call
will return incorrect data.
But we don't do this syncing for either CPUs or memory,
so when we report these we will get stale data, as issue #132020 demonstrates.
For the CPU manager, however, we have the reconcile loop, which cleans up the stale data periodically.
It turns out this timing interplay was actually the reason the existing issue #119423 seemed fixed
(see: #119423 (comment)).
But it is really just timing: if in the reproducer we set the `cpuManagerReconcilePeriod` to a
very high value (>= 5 minutes), then the issue still reproduces against the current master branch
(https://github.com/kubernetes/kubernetes/blob/v1.34.0-alpha.0/test/e2e_node/podresources_test.go#L983).
```
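A minimal, self-contained sketch of that sync-before-query pattern follows; all names here are hypothetical stand-ins, not the actual kubelet types.
```go
// Hypothetical sketch of the podresources List flow described in the quoted
// commit message: flush stale allocations from every resource manager before
// reading its pod->resource assignments.
package main

import "fmt"

// resourceSyncer models a resource manager (devices, CPUs, memory) whose
// internal allocation map can go stale when pods terminate.
type resourceSyncer interface {
	// SyncAllocations drops allocations that belong to pods which no longer exist.
	SyncAllocations()
	// Assignments returns the current pod -> resource assignment.
	Assignments() map[string][]string
}

type podResourcesServer struct {
	managers []resourceSyncer
}

// List syncs every manager first, then merges the assignments. Skipping the
// sync step for CPUs and memory is what allowed terminated pods to still be
// reported, producing the stale data seen in #132020.
func (s *podResourcesServer) List() map[string][]string {
	merged := map[string][]string{}
	for _, m := range s.managers {
		m.SyncAllocations() // without this, deleted pods may still be reported
		for pod, res := range m.Assignments() {
			merged[pod] = append(merged[pod], res...)
		}
	}
	return merged
}

func main() {
	fmt.Println((&podResourcesServer{}).List())
}
```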
The missing actor here is the memory manager. The memory manager has neither a
reconcile loop (which would implicitly fix the stale data problem) nor explicit
synchronization, so it was the unlucky one that reported stale data,
leading to the eventual understanding of the problem.
For this reason it was (and still is) important to exercise it during
the test.
It turns out, however, that the test is wrong, likely because of a hidden dependency
between the test expectations and the lane configuration (notably the
machine specs), so we disable memory manager activation for the time
being, until we figure out a safe way to enable it.
Note this significantly weakens the signal for this specific test.
Signed-off-by: Francesco Romani <fromani@redhat.com>
This avoids the overhead of the more complex conversion to v1beta1 and might
make it a bit more realistic to get rid of v1beta1 eventually.
The expected GVK must be set explicitly because, when emulating 1.33,
v1beta1 is the default even though the fixed storage version is v1beta2.
It hasn't been on by default before, therefore it does not get locked to its
new default ("on") yet. This has some impact on the scheduler configuration
because the plugin is now enabled by default.
Because the feature is now GA, it no longer needs to be a label on E2E tests;
that wouldn't be possible anyway once the feature gate gets removed entirely.
The pods/finalizer permission can be restricted to just updates because that is
all that matters.
The DeviceTaints rules were under the wrong feature gate check (a copy-and-paste error)
and must remain disabled when DRA itself becomes enabled.
Some tests do version emulation and need the DRA feature. In that combination,
the --runtime-config-emulation-forward-compatible option is needed to allow
enabling the v1 API even though it's only available in 1.34.
As before when adding v1beta2, DRA drivers built using the
k8s.io/dynamic-resource-allocation helper packages remain compatible with all
Kubernetes releases >= 1.32. The helper code picks whichever API version is
enabled from v1beta1/v1beta2/v1.
However, the control plane now depends on v1, so a cluster configuration where
only v1beta1 or v1beta2 is enabled without v1 won't work.
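Sketched in hypothetical code (the preference order and names are assumptions, not the actual k8s.io/dynamic-resource-allocation helper), the fallback looks roughly like this:
```go
// Hypothetical sketch of the version fallback: use whichever resource.k8s.io
// API version the cluster has enabled, preferring the newest (assumed order).
package main

import "fmt"

// apiEnabled stands in for a discovery check against the cluster.
func apiEnabled(enabled map[string]bool, groupVersion string) bool {
	return enabled[groupVersion]
}

func pickResourceAPI(enabled map[string]bool) (string, error) {
	// Assumed preference order: v1 (GA), then the older betas for clusters >= 1.32.
	for _, v := range []string{"v1", "v1beta2", "v1beta1"} {
		if apiEnabled(enabled, "resource.k8s.io/"+v) {
			return v, nil
		}
	}
	return "", fmt.Errorf("no supported resource.k8s.io API version is enabled")
}

func main() {
	fmt.Println(pickResourceAPI(map[string]bool{"resource.k8s.io/v1beta2": true}))
}
```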
Call p.conn.Close() in DRAPluginManager.remove to ensure the gRPC
connection is properly closed when a plugin is unregistered. This
prevents reconnection attempts and resource leaks, and ensures a clean
shutdown of plugin connections during plugin removal.
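A simplified sketch of that remove path: the type and field names approximate the idea rather than the exact DRAPluginManager implementation, and io.Closer stands in for the real *grpc.ClientConn.
```go
// Hypothetical sketch of closing a plugin's connection on removal; not the
// actual DRAPluginManager code.
package main

import (
	"fmt"
	"io"
	"sync"
)

type plugin struct {
	// conn is the plugin's client connection (a *grpc.ClientConn in the real
	// code); io.Closer keeps this sketch dependency-free.
	conn io.Closer
}

type pluginManager struct {
	mu      sync.Mutex
	plugins map[string]*plugin
}

// remove forgets the plugin and closes its connection so the client stops
// trying to reconnect to the unregistered plugin and its resources are freed.
func (pm *pluginManager) remove(name string) {
	pm.mu.Lock()
	defer pm.mu.Unlock()

	p, ok := pm.plugins[name]
	if !ok {
		return
	}
	delete(pm.plugins, name)

	if p.conn != nil {
		if err := p.conn.Close(); err != nil {
			fmt.Printf("closing connection to plugin %q: %v\n", name, err)
		}
	}
}

func main() {
	pm := &pluginManager{plugins: map[string]*plugin{"example.com/driver": {}}}
	pm.remove("example.com/driver")
}
```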