When the WorkloadMonitor is reconciled and child Workload objects are
created, they now receive additional labels under the
`workloads.cozystack.io` prefix, carrying metadata about the workload.
This commit checks whether a pod targeted by a Workload is owned by a
VirtualMachineInstance (i.e. it launches a KubeVirt VMI) and, if so,
reads the VMI's instance type and stores it in the
`kubevirt-vmi-instance-type` label.
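As a rough illustration only (not the controller's actual code), the check
might look like the Go sketch below. The annotation key, the package layout,
and the helper names are assumptions made for the example.

```go
package workload

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// Assumed annotation on the VMI that carries the instance type name.
const instanceTypeAnnotation = "kubevirt.io/cluster-instancetype-name"

var vmiGVK = schema.GroupVersionKind{
	Group:   "kubevirt.io",
	Version: "v1",
	Kind:    "VirtualMachineInstance",
}

// vmiInstanceType returns the instance type of the VirtualMachineInstance
// owning the Pod, or "" if the Pod is not owned by a VMI.
func vmiInstanceType(ctx context.Context, c client.Client, pod *corev1.Pod) (string, error) {
	for _, ref := range pod.OwnerReferences {
		if ref.Kind != vmiGVK.Kind {
			continue
		}
		vmi := &unstructured.Unstructured{}
		vmi.SetGroupVersionKind(vmiGVK)
		key := client.ObjectKey{Namespace: pod.Namespace, Name: ref.Name}
		if err := c.Get(ctx, key, vmi); err != nil {
			return "", err
		}
		return vmi.GetAnnotations()[instanceTypeAnnotation], nil
	}
	return "", nil
}

// setInstanceTypeLabel writes the value under the workloads.cozystack.io/ prefix.
func setInstanceTypeLabel(labels map[string]string, instanceType string) map[string]string {
	if labels == nil {
		labels = map[string]string{}
	}
	if instanceType != "" {
		labels["workloads.cozystack.io/kubevirt-vmi-instance-type"] = instanceType
	}
	return labels
}
```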
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
- **New Features**
- Introduced a new controller to synchronize tenant HelmReleases and
propagate configuration changes.
- Added dynamic host value overrides in multiple Helm templates by
conditionally retrieving values from the "tenant-root" HelmRelease.
- Updated RBAC permissions to allow management of HelmRelease resources.
- **Improvements**
- Added support for Helm v2 API integration.
- Enhanced HelmRelease reconciliation logic and configuration
propagation for tenant environments.
- **Bug Fixes**
- Fixed periodic reconciliation for the "tenant-root" HelmRelease by
setting its interval to zero.
- **Version Updates**
- Incremented version numbers for the "info" and "ingress" packages.
- **Chores**
- Updated version mappings and commit references.
- Improved .gitignore to exclude the .vscode directory.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
* Count Workload resources for pods by requests, not limits
* Do not count init container requests
* Prefix Workloads for pods with `pod-`, just like the other types to
prevent possible name collisions (closes #787)
The previous version of the WorkloadMonitor controller incorrectly
summed resource limits on pods rather than requests. As a result, it
could not track resource allocation for pods that only specify
requests, which is particularly the case for KubeVirt's virtual machine
pods. Additionally, it counted the limits of all containers, including
init containers, which are short-lived and contribute little to total
resource usage.
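A minimal sketch of the corrected accounting, assuming the usual client-go
Pod types; the helper names are illustrative rather than the controller's
real functions.

```go
package workload

import (
	corev1 "k8s.io/api/core/v1"
)

// podRequests totals the resource requests of a Pod's regular containers.
// Init containers are skipped on purpose: they are short-lived and should
// not count toward the Workload's tracked allocation.
func podRequests(pod *corev1.Pod) corev1.ResourceList {
	total := corev1.ResourceList{}
	for _, ctr := range pod.Spec.Containers { // deliberately not Spec.InitContainers
		for name, qty := range ctr.Resources.Requests { // requests, not limits
			sum := total[name]
			sum.Add(qty)
			total[name] = sum
		}
	}
	return total
}

// workloadNameForPod prefixes pod-derived Workload names with "pod-" so they
// cannot collide with Workloads created for other kinds of objects.
func workloadNameForPod(pod *corev1.Pod) string {
	return "pod-" + pod.Name
}
```

Summing requests rather than limits also matches what the scheduler actually
reserves for a pod, so the tracked figures line up with allocated capacity.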
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
The status field of WorkloadMonitor objects is now populated with
specially formatted strings that mimic the keys of
ResourceQuota.spec.hard, e.g.
`<storageclassname>.storageclass.storage.k8s.io/requests.storage` or
`<ipaddresspoolname>.ipaddresspool.metallb.io/requests.ipaddresses`,
so the storage class or IP pool in use can be tracked. Part of #788.
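For illustration, assembling such keys can be as simple as the following
sketch; the helper names are hypothetical.

```go
package workload

import "fmt"

// storageKey mirrors ResourceQuota.spec.hard keys such as
// "<storageclassname>.storageclass.storage.k8s.io/requests.storage".
func storageKey(storageClass string) string {
	return fmt.Sprintf("%s.storageclass.storage.k8s.io/requests.storage", storageClass)
}

// ipPoolKey mirrors keys such as
// "<ipaddresspoolname>.ipaddresspool.metallb.io/requests.ipaddresses".
func ipPoolKey(pool string) string {
	return fmt.Sprintf("%s.ipaddresspool.metallb.io/requests.ipaddresses", pool)
}
```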
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
Workload object counts were previously growing without bound: when a
related Pod was recreated, a new Workload was spawned while the old one
was never deleted (except for StatefulSets, where Pod names are
stable). Workloads without a matching object are now deleted.
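A sketch of the cleanup idea under assumed types: the Workload GVK and the
shape of the `tracked` map are placeholders for illustration, not the real
cozystack API.

```go
package workload

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// Assumed GVK of the Workload custom resource (placeholder group/version).
var workloadGVK = schema.GroupVersionKind{
	Group:   "cozystack.io",
	Version: "v1alpha1",
	Kind:    "Workload",
}

// pruneOrphanedWorkloads deletes Workloads whose tracked Pod no longer
// exists, e.g. after a Pod was recreated under a new random name.
// tracked maps Workload name -> Pod name (an illustrative shape).
func pruneOrphanedWorkloads(ctx context.Context, c client.Client, namespace string, tracked map[string]string) error {
	for wlName, podName := range tracked {
		pod := &corev1.Pod{}
		err := c.Get(ctx, client.ObjectKey{Namespace: namespace, Name: podName}, pod)
		if err == nil {
			continue // the backing Pod still exists, keep the Workload
		}
		if !apierrors.IsNotFound(err) {
			return err
		}
		// The Pod is gone; without cleanup its old Workload would linger forever.
		wl := &unstructured.Unstructured{}
		wl.SetGroupVersionKind(workloadGVK)
		wl.SetNamespace(namespace)
		wl.SetName(wlName)
		if err := c.Delete(ctx, wl); client.IgnoreNotFound(err) != nil {
			return err
		}
	}
	return nil
}
```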
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
- **New Features**
- Added a new Kubernetes controller for managing workload monitoring
- Introduced telemetry collection capabilities with configurable options
- Added new Custom Resource Definitions (CRDs) for Workload and
WorkloadMonitor
- **Improvements**
- Enhanced API infrastructure with new API group and version
- Improved deployment configurations for various system components
- Added development container and workflow configurations
- **Bug Fixes**
- Updated import paths to correct domain naming
- **Chores**
- Updated copyright years
- Refined module dependencies
- Standardized code linting and testing configurations
- **Infrastructure**
- Increased `cozystack-api` deployment replicas from 1 to 2 for improved
availability
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>