Compare commits

...

196 Commits

Author SHA1 Message Date
Andrei Kvapil
8c4605284c Prepare release v0.26.1 (#659)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
  - Upgraded core platform components to version **v0.26.1**.
- Refreshed container images for key services including backups,
caching, autoscaling, dashboard integrations, and cloud providers.
- These updates improve overall stability, consistency, and performance
across the system.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-03-01 21:04:40 +01:00
Andrei Kvapil
f708dc2043 VirtualMachine: Fix WholeIP enum check (#657)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Updated the virtual machine component to version 0.8.2, ensuring more
reliable version references.
- Standardized a configuration option's casing to maintain consistency.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-03-01 11:08:03 +01:00
Andrei Kvapil
160e4e2a32 Update installation manifests
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-02-27 14:06:50 +01:00
xy2
79eadda494 Escape mustaches in prometheus rules for Helm. (#645)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced a dynamic alert configuration system that aggregates
multiple alert settings into a single, streamlined document for easier
management.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-02-27 13:16:54 +01:00
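
For context on the escaping referenced above: Helm itself interprets `{{ ... }}`, so Prometheus alert templates inside a chart have to be emitted as literal text. A minimal sketch of the usual idiom (the annotation fields are illustrative, not the rules changed in this PR):

```yaml
# Hypothetical PrometheusRule fragment rendered by Helm.
# Helm resolves the outer template; the inner mustaches reach Prometheus intact.
annotations:
  summary: 'Instance {{ `{{ $labels.instance }}` }} is down'
  description: '{{ "{{" }} $value {{ "}}" }} errors per second'
```
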
Timofei Larkin
3da1a4ed92 Merge pull request #654 from aenix-io/release-v0.26.0
Prepare release v0.26.0
2025-02-27 15:59:11 +04:00
Timofei Larkin
a5dc2d5382 Prepare release v0.26.0 2025-02-27 11:51:46 +03:00
Timofei Larkin
705eb06078 Merge pull request #651 from aenix-io/linstor-snapshots
linstor: add basic snapshot functionality
2025-02-27 11:16:26 +04:00
Andrei Kvapil
e735f96555 kubevirt: Enable live-migration by default (#652)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Expanded configuration options now include the ability to enable live
migration for virtual machine management, offering smoother transitions
and enhanced flexibility.
- Introduced a new eviction strategy for managing virtual machine
evictions.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-02-26 23:18:01 +01:00
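
As an illustration of what enabling live migration typically maps to in the KubeVirt custom resource (a hedged sketch; the namespace and exact defaults are assumptions, not the chart's rendered output):

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: cozy-kubevirt        # assumed namespace, for illustration only
spec:
  configuration:
    developerConfiguration:
      featureGates:
        - LiveMigration
    # Evict VMs by live-migrating them instead of shutting them down.
    evictionStrategy: LiveMigrate
```
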
Andrei Kvapil
f976ff8ed3 Upd cilium v1.16.7 (#653)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Introduced a configurable option for adjusting the Envoy access log
buffer size, allowing users to better tune log handling.
	- Improved startup feedback with more prompt service restarts.

- **Chores**
	- Upgraded all core components to version 1.16.7.
- Updated documentation and configuration settings to reflect the latest
release.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-02-26 23:17:34 +01:00
Andrei Kvapil
9ae6b2b0da linstor: add basic snapshot functionality
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-02-26 19:44:42 +01:00
Andrei Kvapil
86bb64000e Add new info logo in common style (#649)
New info icon for Cozystack

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
Co-authored-by: Viktoriia Kvapil <159528100+kvapsova@users.noreply.github.com>
2025-02-25 15:12:06 +01:00
Kingdon Barrett
19e0e4c2dc Flux Operator v0.15 (#631)
A new release of the Flux Operator (v0.15.0) - to go with the newly
created Flux v2.5.0 release

(And to go with that, a new version of the flux-instance chart.)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced enhanced operator capabilities by adding new resource
types, including `ResourceSetInputProvider` and `ResourceSet`.
- Expanded configuration options for deployments, including settings for
artifact pull secrets and customizable synchronization intervals.
- Added support for multitenancy and role-based access control
configurations.

- **Documentation**
- Updated version information and badges to reflect the upgrade to
version 0.15.0.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Signed-off-by: Kingdon B <kingdon@urmanac.com>
2025-02-25 14:57:49 +01:00
Kingdon Barrett
86724a6860 Upgrade to Flux 2.5.0 (#640)
Flux v2.5 is out:

* https://github.com/fluxcd/flux2/releases/tag/v2.5.0

* https://fluxcd.io/blog/2025/02/flux-v2.5.0/

🎉 🏆 

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Upgraded the FluxCD system from version 2.4.x to 2.5.x for improved
integration and performance.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Kingdon B <kingdon@urmanac.com>
2025-02-25 14:56:48 +01:00
klinch0
a226fdd242 bugfix/fix-nil-pointer (#643)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Enhanced dashboard and identity management displays with updated
branding and localization settings, ensuring a refreshed user interface
and experience.
  
- **Style**
- Streamlined dashboard appearance by removing legacy custom styling,
resulting in a more consistent and contemporary look.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-02-25 14:54:23 +01:00
klinch0
e2369bae68 feature/add-quota (#644)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced a new configurable parameter for tenant resource quotas,
enabling flexible CPU and memory management.
	- Added a new YAML template for Kubernetes ResourceQuota configuration.
	- Updated application version to 1.8.0.
- **Documentation**
- Added documentation for the new `resourceQuotas` parameter in tenant
configuration.
- **Chores**
	- Updated versioning entries for the tenant application.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-02-25 14:53:52 +01:00
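
The `resourceQuotas` parameter mentioned above ultimately renders a standard Kubernetes ResourceQuota. A minimal sketch of such an object (names and values are illustrative, not the tenant chart's defaults):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-quota           # illustrative name
  namespace: tenant-example    # the tenant's namespace
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
```
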
Timofei Larkin
46f0bb2078 Merge pull request #620 from aenix-io/chore/improve-etcd-tls
Improve TLS handling in etcd helm chart
2025-02-25 17:29:54 +04:00
Timofei Larkin
6ff8b527ea Merge branch 'main' into chore/improve-etcd-tls 2025-02-25 13:38:58 +03:00
Timofei Larkin
0f87c73051 Improve TLS handling in etcd helm chart
1. Add a `commonName` to every certificate.
2. Move 127.0.0.1 from DNS names to IP Addresses in the certificate
   spec.
3. Add **client** auth usage to the etcd-**server** certificate (yes,
   that's necessary), because etcd queries itself using its
   [server cert as a client cert](https://github.com/etcd-io/etcd/issues/9785#issuecomment-432438748).
4. Default all CA certificates' durations to 10 years.
5. Set the subject organization to the release namespace and the OU to the
   release name, so that certificate subjects are unique (see the sketch
   after this entry).
2025-02-25 13:36:46 +03:00
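
A minimal cert-manager sketch of the points above, assuming the chart issues its certificates through cert-manager; names, the issuer, and most values are placeholders rather than the chart's actual templates:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: etcd-server
spec:
  commonName: etcd-server                  # (1) every certificate gets a commonName
  subject:
    organizations: ["my-namespace"]        # (5) org = release namespace
    organizationalUnits: ["my-release"]    # (5) OU = release name
  dnsNames:
    - etcd
    - etcd.my-namespace.svc
  ipAddresses:
    - 127.0.0.1                            # (2) loopback lives in ipAddresses, not dnsNames
  usages:
    - server auth
    - client auth                          # (3) etcd presents its server cert as a client cert
  issuerRef:
    name: etcd-ca                          # (4) the CA certificate itself carries the ~10-year duration
    kind: Issuer
```
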
klinch0
d0d62e8847 feature/add-goldpinger (#648)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Introduced a comprehensive Grafana dashboard for Goldpinger, offering
real-time insights into node health, error occurrences, and response
times with intuitive filtering.
- Expanded deployment configurations to include Goldpinger across
environments, streamlining release management and dependency handling.
- Launched a dedicated deployment package featuring customizable
templates for secure, efficient Kubernetes deployments—including
workloads, services, ingress, and monitoring integrations.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-02-25 10:08:08 +01:00
xy2
439381e474 Allow lookup function in 'make diff'. (#647)
Many applications require the `lookup` function against the live cluster. Allow
its usage in `make diff`, as it is already allowed in `make show` (see the
sketch after this entry).

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Updated the release diff operation to simulate the upgrade process on
the server side, ensuring a safer preview without applying changes.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-02-25 09:54:18 +01:00
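
The `lookup` template function only returns live objects when Helm renders against the cluster (for example with `--dry-run=server` on Helm ≥ 3.13); in a purely client-side render it returns an empty map. A hypothetical template fragment showing why this matters for `make diff`:

```yaml
# Illustrative Helm template; the Service name and fields are placeholders.
{{- $svc := lookup "v1" "Service" .Release.Namespace "my-service" }}
{{- if $svc }}
# This branch is only reachable during a server-side render.
clusterIP: {{ $svc.spec.clusterIP }}
{{- end }}
```
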
Timofei Larkin
a6a95b0091 Merge pull request #633 from aenix-io/119-update-kamaji
Update kamaji version, fix kubernetes chart for compat with new kamaji version
2025-02-25 12:50:28 +04:00
Timofei Larkin
392cd862e9 Update scripts/migrations/9
Co-authored-by: Andrei Kvapil <kvapss@gmail.com>
2025-02-25 10:38:17 +03:00
Timofei Larkin
b32106484f New schema version 10
BREAKING: all Kubernetes clusters will be upgraded to chart version 0.15.1
2025-02-24 16:33:21 +03:00
Timofei Larkin
77df31e105 Merge branch 'main' into 119-update-kamaji 2025-02-24 13:15:28 +03:00
Timofei Larkin
24fa722276 Merge pull request #642 from aenix-io/release-0.25.3
Prepare release v0.25.3
2025-02-22 11:41:53 +04:00
Timofei Larkin
0211c57bed Prepare release v0.25.3 2025-02-22 10:33:32 +03:00
Timofei Larkin
135b0609b4 Merge pull request #638 from klinch0/feature/move-kubeconfig
feature/mv-kubeconfig
2025-02-21 13:57:33 +04:00
Floppy Disk
6c73e3f3ae feature/mv-kubeconfig 2025-02-20 15:23:54 +03:00
Timofei Larkin
bc95159a80 Merge pull request #634 from aenix-io/release/v0.25.2
Prepare release v0.25.2
2025-02-18 21:03:29 +04:00
Timofei Larkin
0f68db6793 Merge pull request #635 from klinch0/feature/update-limits
feature/add-more-resources
2025-02-18 20:01:09 +03:00
Floppy Disk
9a55747885 add more resources 2025-02-18 17:40:54 +03:00
Timofei Larkin
bd90eb267f Prepare release v0.25.2 2025-02-18 17:22:41 +03:00
Timofei Larkin
a31c3a5796 Update kamaji version
* Stripped port number from KamajiControlPlane hostname due to clastix/kamaji#679
* Bumped versions for kamaji and dependent charts
2025-02-18 10:52:15 +03:00
Timofei Larkin
7d5b22e662 Merge pull request #632 from klinch0/feature/add-white-label
feature/add-wl
2025-02-17 14:03:25 +04:00
Floppy Disk
42f1dabc31 add wl 2025-02-14 17:47:37 +03:00
Timofei Larkin
eefef8b09f Merge pull request #626 from klinch0/feature/add-workloadmonitors-roles
feature/add-workloadmonitors-roles
2025-02-13 17:58:33 +04:00
Timofei Larkin
93c4616115 Merge pull request #630 from aenix-io/release-0.25.1
Prepare release v0.25.1
2025-02-13 17:32:52 +04:00
Andrei Kvapil
1f6ea333b6 Prepare release v0.25.1
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-02-13 16:00:02 +03:00
Floppy Disk
4cc48e6f34 add-workloadmonitors-roles 2025-02-13 13:33:35 +03:00
Timofei Larkin
ecfb02a76f Merge pull request #625 from klinch0/feature/add-kafka-monitoring
feature/add-kafka-monitoring
2025-02-13 14:21:52 +04:00
Floppy Disk
cc0222aa11 fix dashboard 2025-02-13 13:09:34 +03:00
Andrei Kvapil
65036e8145 Upd cozy-proxy to fix reconciliation logic (#629) 2025-02-12 18:41:27 +01:00
Andrei Kvapil
e2e32096a3 Fix VM services selector (#627)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Updated deployment configurations with the latest application versions
(0.8.1 and 0.5.1) to ensure improved stability and compatibility.
- **Bug Fixes**
- Enhanced service connectivity by refining the criteria used for
routing requests to the correct application endpoints.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-02-12 14:03:10 +01:00
Floppy Disk
84a23947b0 fix 2025-02-12 11:45:16 +03:00
Floppy Disk
d234d58a16 update kafka operator version 2025-02-10 13:29:59 +03:00
Floppy Disk
b75aaf177b add kafka monitoring 2025-02-10 13:29:17 +03:00
klinch0
87328a6ff3 Merge branch 'aenix-io:main' into main 2025-02-10 12:58:03 +03:00
Andrei Kvapil
3fa4dd3af9 Prepare release v0.25.0 (#622)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Upgraded multiple system components to the latest version, ensuring
improved performance, stability, and enhanced security.
- Updated deployment and testing configurations across the platform for
a more reliable user experience.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-02-09 11:41:28 +01:00
Andrei Kvapil
6245976d3e Fix bootbox chartname (#623) 2025-02-08 22:07:24 +01:00
Andrei Kvapil
dacabe6317 Update cozy-proxy v0.1.1 (#624) 2025-02-08 22:07:12 +01:00
Andrei Kvapil
bf68404c53 Update Talos v1.9.3 (#617)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Upgraded the core installer and related system images from version
v1.9.2 to v1.9.3.
- Refreshed firmware and driver references for improved consistency
across all installation profiles.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-02-07 14:47:45 +01:00
Andrei Kvapil
5f40685161 fix: running=false for the VMs (#621)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Revised Virtual Machine configuration to require explicit confirmation
for the running state. The system no longer auto-activates instances by
default, giving users more direct control over instance activation.
Existing validations continue to ensure that only valid configurations
are applied, resulting in a more reliable deployment process.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-02-07 13:53:56 +01:00
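
In KubeVirt terms, the behavior described above corresponds to the VirtualMachine `spec.running` field; a minimal sketch with illustrative names and sizes:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  running: false        # stays stopped until started explicitly, e.g. `virtctl start example-vm`
  template:
    spec:
      domain:
        devices: {}
        resources:
          requests:
            memory: 1Gi
```
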
Andrei Kvapil
f768dc1632 Introduce externalMethod=WholeIP for VMs (#616)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced a new configuration option for specifying the method to
handle external traffic. Users can now choose between "WholeIP" and
"PortList" (default) across virtual machine and instance deployments.
- Service settings now adjust automatically based on the selected
external traffic method.

- **Documentation**
- Updated configuration guides to include details on the new
`externalMethod` parameter and its usage for managing external traffic.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-02-07 08:40:49 +01:00
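
A hedged sketch of how the documented `externalMethod` parameter appears in a release's values; the surrounding keys are assumptions for illustration and may differ from the chart's exact schema:

```yaml
# Illustrative values for a virtual-machine release.
external: true
externalMethod: WholeIP   # forward the whole service IP to the VM; "PortList" (the default) exposes only listed ports
externalPorts:            # consulted when externalMethod is PortList
  - 22
  - 443
```
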
Andrei Kvapil
1a88883a3b Update cilium v1.16.6 (#618)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Enhanced proxy configuration with dedicated endpoints for metrics,
administration, and health checks.

- **Documentation**
- Updated displayed version number and badge to v1.16.6 for improved
clarity.

- **Chores**
- Upgraded component versions and image digests from v1.16.5 to v1.16.6.
- Streamlined configuration by removing legacy conditional settings and
obsolete CORS directives.
- Refined formatting of tag filters for clearer configuration
management.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-02-06 13:51:46 +01:00
klinch0
a42f98e04c Merge branch 'aenix-io:main' into main 2025-02-06 15:51:19 +03:00
klinch0
842d3e55bc feature/add-flux-dashboards (#619)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced a new dashboard for Flux Control Plane monitoring that
visualizes key performance metrics like CPU, memory, API requests, and
more.
- Added a second dashboard for Flux Cluster Stats to display resource
reconciliation, operation durations, and readiness indicators.
- Seamlessly integrated these dashboards into the monitoring workflow
with dynamic querying and periodic refresh options.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-02-06 13:50:57 +01:00
klinch0
f02397aab5 Merge branch 'aenix-io:main' into main 2025-02-06 15:41:38 +03:00
klinch0
5a47754a92 feature/add-etcd-vm-node-scrape (#614)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Enhanced system monitoring with a new configuration option to collect
etcd metrics. Users can now enable the scraping of etcd metrics via
updated settings, which improves observability.
- Introduced a secure proxy mechanism that conditionally routes metrics
data from etcd, offering administrators greater control over monitoring
capabilities.
- New configuration sections added to various bundles to support etcd
metrics scraping.
  
- **Bug Fixes**
- Removed outdated configuration for VMNodeScrape resource, ensuring
clarity and accuracy in monitoring configurations.

- **Chores**
- Added new service accounts, roles, and bindings to facilitate secure
access for monitoring etcd metrics.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Andrei Kvapil <kvapss@gmail.com>
2025-02-06 13:40:30 +01:00
Andrei Kvapil
d91bc52594 Introduce cozy-proxy (#615)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Added a new proxy component to enhance deployment orchestration and
dependency management.
- Introduced dynamic update capabilities for fetching and deploying the
latest assets.
- Enabled configurable settings for container images, networking, and
access control.
- Incorporated streamlined resource naming and labeling for improved
management.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-02-06 12:11:01 +01:00
klinch0
f67816e2d3 Merge branch 'aenix-io:main' into main 2025-02-05 15:38:21 +03:00
klinch0
861e6c464b feature/add-client-etcd-monitoring (#613)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Enhanced etcd monitoring with new metrics exposure, pod scraping
configuration, and comprehensive alert rules for proactive
observability.
- Introduced a new `VMPodScrape` resource for improved pod metrics
collection.
- Added a new PrometheusRule configuration for monitoring etcd clusters
with various alert conditions.
- **Chores**
  - Upgraded the etcd release from version 2.4.0 to 2.5.0.
- Consolidated and renamed monitoring dashboard references for better
consistency.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-02-05 13:13:28 +01:00
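
The `VMPodScrape` mentioned above is a VictoriaMetrics operator resource; a minimal sketch of one targeting etcd pods (the selector labels and port name are illustrative):

```yaml
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMPodScrape
metadata:
  name: etcd
spec:
  selector:
    matchLabels:
      app: etcd          # illustrative label
  podMetricsEndpoints:
    - port: metrics      # name of the container port exposing /metrics
      scheme: http
```
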
Andrei Kvapil
835ee117f7 Rebuild matchbox (#612)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Upgraded the deployment Docker image to version 0.24.1, ensuring
improved stability and potential performance enhancements.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-02-03 16:42:04 +01:00
klinch0
e5e14722b8 Merge branch 'aenix-io:main' into main 2025-01-30 00:27:55 +03:00
Andrei Kvapil
af48519d65 Prepare release v0.24.1 (#611)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-29 11:39:30 +01:00
Andrei Kvapil
d6e9765604 Revert kamaji edge-24.12.1 (#610)
Due to upstream issue: https://github.com/clastix/kamaji/issues/679

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
	- Updated Kamaji application version to v1.0.0
	- Modified dependency version constraints for kamaji-etcd

- **Documentation**
	- Updated README with new version information
- Clarified configuration descriptions for DataStore and network
profiles

- **Chores**
	- Updated Chart version to 2.0.0
	- Simplified configuration management in deployment templates
	- Updated Dockerfile to use a different source code version

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-29 10:48:12 +01:00
Andrei Kvapil
0ab39f207c Prepare release v0.24.0 (#606)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

## Release Notes for CozyStack v0.24.0

- **Image Updates**
  - Upgraded CozyStack core components to version v0.24.0
- Updated multiple system images, including cluster-autoscaler, kubevirt
cloud provider, and CSI driver
  - Refreshed images for dashboard, API, and controller components
  - Updated Grafana image to version 1.8.0

- **Infrastructure Changes**
- Replaced `darkhttpd` container with new `assets` container in
deployment configuration
  - Updated image digests across various system components

- **Version Bump**
  - Incremented CozyStack version from v0.23.1 to v0.24.0
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-27 19:44:54 +01:00
Andrei Kvapil
ef2e065c77 Update Grafana, build plugins (#607)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

## Release Notes

- **New Features**
  - Updated Grafana to version 11.4.0
- Added new Grafana plugins: VictoriaMetrics logs datasource, Natel
Discrete Panel, and Worldmap Panel

- **Improvements**
  - Enhanced Grafana image build process
  - Dynamically manage Grafana image versioning
  - Updated plugin installation method

- **Version Update**
  - Monitoring package version bumped from 1.7.0 to 1.8.0

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-27 18:45:48 +01:00
Andrei Kvapil
80b4c151bd Replace darkhttpd with cozystack-assets-server (#596)
fixes https://github.com/aenix-io/cozystack/issues/602
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
	- Introduced a new custom assets server for serving static files
	- Replaced `darkhttpd` with a custom Go-based file server

- **Improvements**
	- Updated base images to Alpine Linux 3.21
	- Simplified container dependencies
	- Enhanced server configuration with command-line flags

- **Infrastructure**
	- Rebuilt Kubernetes deployment configuration for assets service
	- Updated server startup parameters and container settings
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-01-27 13:57:33 +01:00
klinch0
719cedde02 Merge branch 'aenix-io:main' into main 2025-01-27 15:15:11 +03:00
Andrei Kvapil
cc5eb4765c Introduce BootBox (#601)
- Introduce tinkerbell essentials
- Introduce bootbox


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

# Release Notes: BootBox Package (v0.1.0)

## New Features
- Added BootBox, a PXE hardware provisioning service.
- Introduced network boot configuration with Matchbox and Smee.
- Enabled hardware management through Kubernetes Custom Resource
Definitions.
- Added support for managing physical machine specifications and
configurations.
- New HelmRelease configuration for streamlined deployment.
- Added new application entry for BootBox in the configuration.

## Configuration
- Supports configuring physical machine instances.
- Provides flexible network boot and DHCP settings.
- Includes role-based access control (RBAC) configurations.
- New parameters for trusted proxies and syslog settings.
- Enhanced configuration options for deployment parameters and resource
allocations.
- Introduced new schema for validating configuration values.

## Deployment
- Deployed in `tenant-root` namespace.
- Optional and privileged installation.
- Depends on Cilium and KubeOVN networking components.
- Configurable deployment strategies and resource allocations.
- Introduced new Service and Ingress resources for improved traffic
management.
- Added support for host networking and public IP configurations.

## Compatibility
- Supports single-node and multi-node cluster configurations.
- Compatible with Kubernetes environments.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-27 10:56:23 +01:00
Andrei Kvapil
d557050eca Fix vm-update hook access to api (#597)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Version Updates**
	- Updated `virtual-machine` package version from `0.7.0` to `0.7.1`
	- Updated `vm-instance` package version from `0.4.0` to `0.4.1`

- **Configuration Changes**
- Added new policy annotation `policy.cozystack.io/allow-to-apiserver:
"true"` to update hook templates for both virtual machine and VM
instance

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-01-27 10:56:04 +01:00
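
The policy annotation quoted above presumably lets the hook pod reach the Kubernetes API server despite restrictive network policies. A sketch of where such an annotation might land (a hypothetical Job, not the chart's actual hook template):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: vm-update-hook                  # illustrative name
spec:
  template:
    metadata:
      annotations:
        policy.cozystack.io/allow-to-apiserver: "true"
    spec:
      restartPolicy: Never
      containers:
        - name: hook
          image: bitnami/kubectl        # placeholder image
          command: ["sh", "-c", "kubectl version"]   # placeholder command
```
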
klinch0
469d1e9801 Merge branch 'aenix-io:main' into main 2025-01-23 17:48:38 +03:00
Sergio
05857b954d Fix kamaji make update (#598)
- Fix the Makefile `update` target for kamaji
- Fix the Dockerfile for kamaji (golang:1.23 as builder)
- Update the kamaji chart

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

## Release Notes

- **New Features**
- Enhanced DataStore configuration with more flexible inheritance and
schema definition
  - Added support for advanced network profile settings

- **Improvements**
  - Updated Kamaji application to version `edge-24.12.1`
  - Upgraded Go runtime to version 1.23
  - Improved documentation for DataStore and configuration settings

- **Dependency Updates**
  - Updated `kamaji-etcd` dependency to version `>=0.8.1`

- **Version Changes**
  - Reset application and chart versions to `0.0.0`

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: SSR <sergey.rabinovich@nexign.com>
2025-01-23 15:28:46 +01:00
klinch0
81819661dc Merge branch 'aenix-io:main' into main 2025-01-21 16:31:57 +03:00
klinch0
06afcf27a3 feature/fix-k8s-config-with-OIDC (#594)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Chores**
	- Updated tenant application version from 1.6.6 to 1.6.7
	- Updated version tracking in package management system
	- Minor configuration adjustments in kubeconfig template
- Enhanced logic for determining API server endpoint based on kubeconfig
presence
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-01-20 14:13:57 +01:00
Timofei Larkin
9587caa4f7 Update cert-manager (#595)
Bumped the embedded cert-manager chart to the latest upstream version.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

Based on the comprehensive changes across multiple files in the
cert-manager Helm chart, here are the release notes:

- **New Features**
	- Added support for dynamic TLS serving certificates for metrics
- Enhanced Prometheus monitoring configuration with ServiceMonitor and
PodMonitor options
	- Introduced more flexible IP family configuration for services

- **Improvements**
	- Updated cert-manager to version v1.16.3
- Expanded configuration options for controller, webhook, and CA
injector
	- Improved RBAC permissions and service account management
	- Enhanced documentation and configuration guidance

- **Bug Fixes**
	- Deprecated `installCRDs` option in favor of more explicit settings
	- Refined namespace and resource selection for webhooks

- **Chores**
	- Updated Helm chart dependencies and compatibility
	- Improved template rendering and configuration management

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-01-20 14:13:18 +01:00
klinch0
2f0d0924a7 Merge branch 'aenix-io:main' into main 2025-01-20 12:05:14 +03:00
Andrei Kvapil
2a976afe99 Prepare release v0.23.1 (#593)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-18 15:41:43 +01:00
Andrei Kvapil
fb723bc650 Fix dashboard error: Unable to get installed package (#592)
This PR includes a fix for the dashboard

dd02680d79

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-18 15:18:28 +01:00
Andrei Kvapil
e23286a336 Prepare release v0.23.0 (#591)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

## Release Notes for Cozystack v0.23.0

- **Image Updates**
  - Upgraded core Cozystack components to version v0.23.0
- Updated multiple system and application images across various packages
- Refreshed image digests for components like Kubernetes, backup, and
infrastructure tools

- **Version Bump**
  - Incremented overall system version from v0.22.0 to v0.23.0
  - Updated configuration and deployment manifests accordingly

- **System Components**
  - Updated Cozystack API, Controller, and Dashboard configurations
- Refreshed image references for Kamaji, KubeOVN, and other system
services

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-17 18:23:53 +01:00
Andrei Kvapil
2f5336388c Add hooks to update instanceType, instanceProfile, and storage (#590)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
	- Added update hook for Virtual Machine configurations
- Enhanced version management for virtual machine and VM instance
packages

- **Version Updates**
	- Virtual Machine package version updated from 0.6.0 to 0.7.0
	- VM Instance package version updated from 0.3.0 to 0.4.0

- **Improvements**
- Introduced dynamic configuration update mechanisms for Kubernetes
deployments
- Added service account and role permissions for VM configuration
management

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-01-17 17:06:39 +01:00
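
Update hooks of the kind described above are usually wired up with Helm hook annotations on a Job; a generic sketch under that assumption, not the chart's actual hook:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: virtual-machine-update-hook     # illustrative name
  annotations:
    helm.sh/hook: pre-upgrade           # run before the release is upgraded
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: update
          image: bitnami/kubectl        # placeholder image
          command: ["sh", "-c", "echo patch instanceType/instanceProfile/storage here"]
```
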
klinch0
af58018a1e Bugfix/fix kk configure reconciliation (#589)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Configuration Update**
- Added a new `configHash` field in the `keycloak-configure` release for
both `paas-full` and `paas-hosted` configurations.
- Introduced a SHA256 checksum mechanism for the `cozyConfig` data to
enhance configuration integrity checks.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-17 17:05:48 +01:00
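
The `configHash` mechanism described above is a common Helm pattern: hashing the configuration into the release values forces reconciliation whenever the configuration changes. An illustrative template line under that assumption (the real template may read `cozyConfig` from a ConfigMap instead):

```yaml
# Illustrative only.
values:
  configHash: {{ .Values.cozyConfig | toYaml | sha256sum }}
```
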
Andrei Kvapil
cfb171b000 Update Talos Linux v1.9.2 (#588)
fixes https://github.com/aenix-io/cozystack/issues/541
2025-01-17 14:50:54 +01:00
klinch0
e037cb0e3e feature/add-tg-severity-setting (#585)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Added option to disable Telegram alerts for specific severity levels
in the Monitoring Hub.

- **Documentation**
- Updated README with new parameter
`alerta.alerts.telegram.disabledSeverity`.

- **Chores**
  - Bumped monitoring package version from 1.6.1 to 1.7.0.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-17 14:49:33 +01:00
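
A hedged sketch of the new parameter in a monitoring release's values; the exact value format is defined by the chart's README, so the entry below is illustrative only:

```yaml
alerta:
  alerts:
    telegram:
      token: "<bot-token>"
      chatID: "<chat-id>"
      disabledSeverity: "ok,informational"   # severities that should not be sent to Telegram
```
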
Kingdon Barrett
749110aaa2 Update fluxcd-operator to 0.13.0 (#586)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Added support for common metadata (annotations and labels) in Flux
instance configuration
  - Introduced a `name` field for sync configuration in Flux instance

- **Version Updates**
  - Upgraded Flux Operator chart from v0.12.0 to v0.13.0
  - Upgraded Flux Instance chart from v0.12.0 to v0.13.0

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Kingdon B <kingdon@urmanac.com>
2025-01-17 14:48:00 +01:00
klinch0
59b4a0fb91 bugfix/monitoring add nil checker (#587)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Version Update**
  - Monitoring application version updated from 1.6.1 to 1.6.2
- **Configuration Improvements**
  - Enhanced resource configuration checks for VM cluster components
- Improved handling of resource definitions to prevent potential errors

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-01-17 14:47:29 +01:00
klinch0
191c8b4061 Merge branch 'aenix-io:main' into main 2025-01-16 15:26:08 +03:00
Andrei Kvapil
4e68e65cd9 Prepare release v0.22.0
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-16 12:30:52 +01:00
Andrei Kvapil
33d2b24ff2 Prepare release v0.22.0 (#570)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-16 12:24:24 +01:00
Andrei Kvapil
9c8652cd5b Fix vm-disk lookup (#582)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Bug Fixes**
- Updated DataVolume lookup mechanism to correctly match disk names by
prepending "vm-disk-" prefix in Virtual Machine configuration.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-01-16 11:40:32 +01:00
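
An illustrative Helm fragment for the fix described above; the actual template path, output key, and variable names in the chart may differ:

```yaml
{{- $diskName := "data" }}   {{- /* placeholder disk name */}}
{{- /* DataVolumes created for VM disks carry a "vm-disk-" prefix, so the lookup must prepend it too. */}}
{{- $dv := lookup "cdi.kubevirt.io/v1beta1" "DataVolume" .Release.Namespace (printf "vm-disk-%s" $diskName) }}
{{- if $dv }}
existingDiskSize: {{ $dv.spec.pvc.resources.requests.storage }}   # hypothetical output key
{{- end }}
```
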
klinch0
dd4fcd5539 Fix kubevirt monitoring alerts (#581) 2025-01-16 11:26:02 +01:00
Floppy Disk
9de782e719 fix 2025-01-16 13:23:56 +03:00
klinch0
9f9a774340 add kubevirt metrics and dashboards (#573)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Added PrometheusRule configuration to monitor virtual machine (VM) and
virtual machine instance (VMI) states.
	- Introduced ServiceMonitor resource for Kubevirt metrics monitoring.
	- Added `monitorNamespace` configuration in KubeVirt custom resource.

- **Monitoring Enhancements**
- Implemented alerts for VMs and VMIs not running for more than 10
minutes.
	- Configured metrics endpoint with HTTPS support.

- **Version Updates**
- Updated version mappings for several packages, reflecting new commit
hashes.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Co-authored-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-16 11:00:59 +01:00
klinch0
8193d788fc add-extra-redirect-uri (#579)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Enhanced Keycloak configuration with support for additional redirect
URIs
	- Added flexibility to specify extra redirect URI through configuration
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-01-16 10:42:38 +01:00
Andrei Kvapil
5f1c2a4f7e talos 1.9 (#578)
- Update Talos v1.9.1
- add: disable-selinux workaround
- Replace workaround with patched Talos
- Add image testing

---------

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-15 14:23:05 +01:00
Andrei Kvapil
8cce943cb9 Update CNPG postgres-operator v1.25.0 (#575)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Enhanced CloudNativePG Operator configuration with new options for
cluster-wide monitoring and namespace control
  - Added support for IP family configuration in service settings
  - Increased flexibility for concurrent reconciliation processes

- **Version Updates**
  - Upgraded CloudNativePG Operator from version 1.24.0 to 1.25.0
  - Updated Helm chart version from 0.22.0 to 0.23.0

- **Configuration Improvements**
- Introduced new options for namespace override and cluster-wide event
observation
  - Added maximum concurrent reconciles setting
  - Expanded service networking configuration capabilities

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-15 13:59:07 +01:00
Andrei Kvapil
1256c81bd0 fix cnpg alerts templating (#574)
fix regression introduced in
https://github.com/aenix-io/cozystack/pull/558

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Refactor**
- Updated label formatting in PostgreSQL operator default alerts
configuration
- Enhanced alert template generation to dynamically include multiple
alert configurations from separate files

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-15 13:58:36 +01:00
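
The "dynamically include multiple alert configurations from separate files" part typically looks like a `.Files.Glob` loop in Helm; a generic sketch under that assumption, not necessarily the chart's exact template:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: postgres-operator-default-alerts   # illustrative name
spec:
  groups:
    - name: cnpg-default
      rules:
{{- range $path, $_ := .Files.Glob "files/alerts/*.yaml" }}
{{ $.Files.Get $path | indent 8 }}
{{- end }}
```
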
Andrei Kvapil
6310096e85 Update Kube-OVN v1.13.2 (#577)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-15 13:58:24 +01:00
Andrei Kvapil
b246f9dfe2 Update Cilium v1.16.5 (#576)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
	- Updated Cilium package from version 1.16.4 to 1.16.5
- Updated image tags and digests for Cilium agent, Hubble relay, and
Cilium operator
	- Modified configuration files to reflect new version

- **New Features**
- Added internal address configuration for Envoy listeners with specific
CIDR ranges

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-15 13:57:59 +01:00
Andrei Kvapil
34d6ab032f Update Talos v1.9.1 (#553)
This PR includes a new image based on Talos Linux v1.9.1

- new DRBD module 9.2.12:
https://github.com/LINBIT/drbd/blob/master/ChangeLog
- ZFS fix: https://github.com/siderolabs/extensions/issues/572

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
	- Updated Talos system components to version 1.9.1
	- Added SELinux workaround DaemonSet for KubeVirt

- **Chores**
	- Updated image references for base installer and system extensions
- Modified installation script configuration to enhance Kubernetes setup
process
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-15 13:01:31 +01:00
klinch0
3bb975965d enablePodMonitor (#572)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Enabled pod monitoring for multiple database clusters (Alerta,
Keycloak, SeaweedFS, Grafana)

- **Chores**
	- Updated monitoring package version from 1.6.0 to 1.6.1
- Updated version mapping with specific commit hash for monitoring
package
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-01-15 10:53:16 +01:00
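
Pod monitoring for a CloudNativePG-managed database is switched on through the Cluster resource; a minimal sketch (the cluster name and sizes are illustrative):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: keycloak-db          # illustrative name
spec:
  instances: 2
  storage:
    size: 10Gi
  monitoring:
    enablePodMonitor: true   # lets the metrics stack scrape each instance pod
```
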
klinch0
4547efab09 add cnpg alerts (#558)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Added comprehensive monitoring and alerting rules for PostgreSQL
instances.
	- Introduced alerts for:
		- Long-running transactions
		- Backend waiting times
		- Transaction ID age
		- Replication lag
		- Archiving failures
		- Deadlock conflicts
		- Replication status
	- New resource: `PrometheusRule` named `cnpg-default-alerts`.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-01-13 18:39:14 +01:00
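
A trimmed sketch of what one rule inside a `PrometheusRule` such as `cnpg-default-alerts` can look like; the metric name, expression, and threshold are assumptions for illustration, not the alerts shipped in this PR (and when such a manifest is embedded in a Helm template, the `{{ $labels... }}` braces need the escaping from PR #645):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: cnpg-default-alerts
spec:
  groups:
    - name: cloudnative-pg
      rules:
        - alert: CNPGReplicationLagHigh           # illustrative alert
          expr: cnpg_pg_replication_lag > 300     # assumed metric name; seconds of replica lag
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "Replication lag on {{ $labels.pod }} exceeds 5 minutes"
```
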
Andrei Kvapil
65593f459f Fix openapi spec for cozystack apps (#571)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-13 17:31:47 +01:00
Andrei Kvapil
b08a5d3e2f Fix version-map for updated application (#569) 2025-01-09 15:28:33 +01:00
Kingdon Barrett
c3d55e2295 Update Flux Operator (#567)
See
[Releases](https://github.com/controlplaneio-fluxcd/flux-operator/releases)
for details

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Enhanced Flux Operator CustomResourceDefinition (CRD) with new
metadata handling capabilities
	- Added support for common metadata annotations and labels
	- Introduced new resource naming and artifact revision tracking

- **Version Updates**
	- Flux Operator upgraded from v0.10.0 to v0.12.0
	- Flux Instance chart updated from v0.9.0 to v0.12.0

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Kingdon B <kingdon@urmanac.com>
2025-01-09 15:13:26 +01:00
Andrei Kvapil
cb7b8158e0 Update version-map for updated application (#568)
Fixes the versions update after this PR:
https://github.com/aenix-io/cozystack/pull/563

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-09 15:09:18 +01:00
Andrei Kvapil
0e7288707e Introduce builder (#559)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
	- Added configuration for Kubernetes builder environment
	- Introduced Talos imager configuration with version v1.8.4
- Implemented garbage collection policies for OCI worker storage
management

- **Chores**
	- Updated Makefile to streamline image building process
	- Added Kubernetes deployment templates for builder sandbox

- **Infrastructure**
	- Created new configuration files for builder package
	- Enhanced build and deployment workflows

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-09 15:03:13 +01:00
Andrei Kvapil
38a993b356 Upd MAINTAINERS (#566)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-09 14:29:42 +01:00
Andrei Kvapil
107f390ae8 workloadmonitor (#563)
- upd redis
- update kubernetes app to use workloadmonitors
- upd kubernetes
- fix version


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Added `WorkloadMonitor` resources for various components including
Kubernetes clusters, Redis, Sentinel, and SeaweedFS.
- Introduced monitoring capabilities for `alerta`, `alertmanager`,
`grafana`, and `vlogs` services.
- Enhanced RBAC configurations to support new monitoring resources
across multiple API groups.

- **Improvements**
	- Updated metadata and labeling for virtual machine templates.
	- Added dynamic resource naming based on release and group names.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-09 13:25:12 +01:00
Andrei Kvapil
0a9b0761dc Update cozystack-dashboard to show workload status (#562)
![Screenshot 2025-01-08 at 16 13 23](https://github.com/user-attachments/assets/e0e0cc6d-dbe8-4e64-b9e9-532d8eb69006)


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

## Release Notes

- **New Features**
  - Updated dashboard to use latest version of components
  - Simplified package repository management interface

- **Changes**
  - Removed specific version references in configuration
  - Updated image tags and digests to latest versions
  - Modified documentation links to point to CozyStack resources

- **Removed Features**
- Eliminated package repository management functionality from dashboard

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-01-09 13:24:55 +01:00
klinch0
d4634797f3 feature/add resources to vmcluster (#556)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

## Release Notes

- **Version Updates**
  - Tenant application version bumped from 1.6.5 to 1.6.6
  - Monitoring application version updated from 1.5.3 to 1.5.4

- **Monitoring Configuration**
- Adjusted metrics storage deduplication interval: shortterm from 5
minutes to 15 seconds, longterm from 15 seconds to 5 minutes
- Updated resource configurations for VM components, including new
resource specifications for vminsert, vmselect, and vmstorage
- Increased memory limits and requests for VMAgent from 500Mi to 1024Mi
and from 200Mi to 768Mi, respectively

- **Performance Improvements**
  - Enhanced resource allocation for monitoring services
  - More flexible configuration options for metrics storage
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-01-09 13:18:46 +01:00
Timur Tukaev
7a53378799 Merge pull request #564 from aenix-io/tym83-patch-1
Update MAINTAINERS.md
2025-01-09 17:13:59 +05:00
Timur Tukaev
5cbe958645 Update MAINTAINERS.md
add Georg functionality
2025-01-09 17:13:40 +05:00
Timur Tukaev
2892c220ac Update MAINTAINERS.md
add Kingdon
2025-01-09 17:12:55 +05:00
Andrei Kvapil
227848a59d Introduce cozystack-controller (#560)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

Based on the comprehensive summary of changes, here are the release
notes:

- **New Features**
	- Added a new Kubernetes controller for managing workload monitoring
- Introduced telemetry collection capabilities with configurable options
- Added new Custom Resource Definitions (CRDs) for Workload and
WorkloadMonitor

- **Improvements**
	- Enhanced API infrastructure with new API group and version
	- Improved deployment configurations for various system components
	- Added development container and workflow configurations

- **Bug Fixes**
	- Updated import paths to correct domain naming

- **Chores**
	- Updated copyright years
	- Refined module dependencies
	- Standardized code linting and testing configurations

- **Infrastructure**
- Increased `cozystack-api` deployment replicas from 1 to 2 for improved
availability
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-01-09 12:24:51 +01:00
Timur Tukaev
b16d954dd3 Update MAINTAINERS.md
Added new maintainers
2025-01-09 16:24:02 +05:00
Andrei Kvapil
7cfb90df10 Fix cluster-autoscaler (#561)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
	- Updated Cluster Autoscaler to version 1.32.0
- Added new configuration options for more granular node scaling and
management
	- Introduced custom patch for scaling behavior

- **Improvements**
	- Upgraded Go build environment to version 1.23.4
	- Switched to latest Cluster Autoscaler image tag
	- Enhanced node scaling flexibility with new command-line arguments

- **Technical Updates**
	- Modified cluster autoscaler deployment configuration
	- Updated image references and build process

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-01-08 16:13:57 +01:00
klinch0
26388c7757 up vmagent limit (#555)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Resource Configuration**
	- Updated VMAgent memory limits from 500Mi to 1024Mi.
	- Increased VMAgent memory requests from 200Mi to 768Mi.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-01-02 12:29:15 +01:00
Andrei Kvapil
fde4bcfa3b Prepare release v0.21.1 (#551)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Version Update**
	- Upgraded Cozystack from v0.21.0 to v0.21.1
	- Updated multiple system component images to the new version
- Updated image references across various configuration files and
packages

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-12-30 15:34:05 +01:00
Andrei Kvapil
b6e27cb3dc disable node.kubernetes.io/exclude-from-external-load-balancers label (#552) 2024-12-30 15:31:48 +01:00
Andrei Kvapil
f1e11451fa Fix tenant permissions for oidc disabled cluster (#549) 2024-12-30 09:46:08 +01:00
Andrei Kvapil
84f3ccc0a9 Prepare release v0.21.0
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-12-27 19:14:31 +01:00
Andrei Kvapil
4f767ee39c Update vm-instance to not include vm-disk prefix (#548)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-12-27 19:12:53 +01:00
Andrei Kvapil
175a65f871 Prepare release v0.21.0 (#546)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Updated images for various components to version `v0.21.0`, enhancing
overall functionality and performance.
- Introduced specific version tags for services, ensuring stability and
predictability in deployments.

- **Bug Fixes**
- Updated image digests for several components, reflecting improvements
or fixes in the underlying images.

- **Documentation**
- Updated URLs in documentation to direct users to the latest CozyStack
resources.

- **Chores**
- Removed outdated patch applications from the build process,
streamlining the Dockerfile configuration.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-12-27 18:53:46 +01:00
Andrei Kvapil
b761bd94e6 fix linstor-ha-controller (#547) 2024-12-27 15:28:44 +01:00
Andrei Kvapil
c48aed0aa8 hardcode vlogs version (#545) 2024-12-27 14:33:32 +01:00
Andrei Kvapil
007ebd8c9c update Talos Linux v1.8.4 (#544)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-12-27 14:33:17 +01:00
Andrei Kvapil
4754e359f5 Remove kubeapps-admin role (#543)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
	- Introduced new secrets for enhanced security management.
	- Added a new realm group for streamlined administrative roles.
	- Implemented a new cluster role binding for improved access control.

- **Bug Fixes**
	- Removed outdated role bindings to reflect updated permissions.

- **Refactor**
- Transitioned from a broad cluster role to a more focused
namespace-specific role, enhancing role granularity.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-12-27 14:33:03 +01:00
Andrei Kvapil
3ae70f381c Fix cozystack-api to show correct List types in openapi (#542)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Updated the Docker image reference for `cozystackAPI` to the latest
version.
- Enhanced OpenAPI schema generation for the Apps API server, improving
flexibility and correctness.

- **Bug Fixes**
- Streamlined OpenAPI definitions by removing outdated Application and
ApplicationList definitions.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-12-27 11:22:39 +01:00
Andrei Kvapil
3c9e50a4df Update dashboard to use Cozystack API (#539)
<img width="1675" alt="Screenshot 2024-12-23 at 13 40 30"
src="https://github.com/user-attachments/assets/cc123697-4efd-4a4f-909c-793cec8d91bd"
/>
<img width="1673" alt="Screenshot 2024-12-23 at 13 40 45"
src="https://github.com/user-attachments/assets/3be63e8d-9ee6-487d-90d0-3583dc968dfc"
/>


Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced a new `pluginConfig` section in the Kubeapps dashboard
configuration for managing a broader range of applications.
- **Bug Fixes**
- Enhanced URL generation logic to ensure proper encoding of package
identifiers.
- **Chores**
- Updated image digests in the configuration for both the dashboard and
kubeappsapis sections.
	- Removed unnecessary patch application steps from the build process.
	- Upgraded the Go version used for building the application.
- Updated the application version for the tenant package from `1.6.3` to
`1.6.4`.
	- Added a new version `1.6.4 HEAD` for the tenant package.
- Adjusted RBAC configuration to streamline permissions and enhance
group-based access management.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
Co-authored-by: klinch0 <68821526+klinch0@users.noreply.github.com>
2024-12-27 11:22:25 +01:00
klinch0
97d006e99f fix logs (#537)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Introduced a HelmRelease configuration for monitoring agents in
Kubernetes.
- Added a new section for `fluent-bit` with configurations for readiness
probes, volumes, and log processing.

- **Bug Fixes**
- Enhanced monitoring capabilities with detailed configurations for log
management and external integrations.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-12-23 23:42:00 +01:00
klinch0
17fbda6e12 fix-vm-logs-url (#538)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
	- Updated monitoring application version to 1.5.3.
- Changed the data source type in Grafana configuration to
`victoriametrics-logs-datasource`.
- **Bug Fixes**
	- Corrected plugin loading configuration in Grafana.
- **Chores**
- Updated version mapping for the monitoring package in the versions
map.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-12-23 23:40:52 +01:00
klinch0
c1ca19dc18 add grafana size configure (#536)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Introduced a new parameter for Grafana's database size with a default
value of 10Gi.
  
- **Bug Fixes**
- Updated default values for `alerta.alerts.telegram.token` and
`alerta.alerts.telegram.chatID` to empty strings.

- **Documentation**
- Revised the README to reflect changes in default parameter values and
added new parameters for Grafana.

- **Chores**
  - Updated the monitoring application's version from 1.5.2 to 1.5.3.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-12-20 11:21:54 +01:00
Andrei Kvapil
41f7a90bfd Update kubeapps v2.12.0 (#533)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

upstream issue https://github.com/vmware-tanzu/kubeapps/pull/7847

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
	- Added support for conditional configuration based on OIDC settings.
	- Introduced label filtering for Helm releases and repositories.
	- Updated reconciliation strategy for Helm releases.

- **Bug Fixes**
	- Enhanced error handling and logging in package resource retrieval.

- **Documentation**
- Updated configuration values in `values.yaml` for image tags and
digests.

- **Chores**
	- Upgraded application and Go versions in Dockerfiles.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-12-19 21:48:56 +01:00
Andrei Kvapil
2057bb96e6 Refactor tenant RBAC rules (#534)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Introduced new roles and role bindings for enhanced role-based access
control, including specific permissions for viewing, using, and
administering resources.
- Added a new dashboard role for access to helm repositories and charts.

- **Bug Fixes**
	- Updated application version from 1.6.2 to 1.6.3.

- **Chores**
- Updated version declarations for the tenant package in the versions
map.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-12-19 21:48:39 +01:00
klinch0
cfe86c0815 delete-cpu-limit (#535)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Enhanced resource management for the VMCluster resource, specifically
for the `vmstorage` component.
- Added resource specifications including memory limits and CPU
requests.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-12-19 21:48:11 +01:00
klinch0
abc8f08271 Add redis auth (#528)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced `authEnabled` parameter for enabling password generation in
Redis.
	- Added authentication logic for Redis failover configuration.
  
- **Bug Fixes**
	- Updated version of the Redis chart from `0.3.1` to `0.4.0`.

- **Documentation**
- Updated README to include the new `authEnabled` parameter description.

- **Chores**
	- Incremented version numbers for multiple packages in the version map.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
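A minimal sketch of the new option in the Redis app's values (only the `authEnabled` key comes from the notes above; the file layout and behaviour comment are assumptions, not taken from the chart itself):

```yaml
# Hypothetical excerpt from the Redis application's values.
# When authEnabled is true, the chart generates a Redis password
# and wires it into the failover configuration.
authEnabled: true
```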
2024-12-18 08:56:28 +01:00
klinch0
b43c95868f add annotations for fixing 502 status code (#527)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Enhanced ingress settings for Kubeapps deployment, allowing for
increased timeout and body size limits.
- Added configuration options for handling larger requests and longer
processing times.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-12-14 11:08:22 +01:00
Andrei Kvapil
e44bece114 Prepare release v0.20.2 2024-12-13 09:54:12 +01:00
Andrei Kvapil
0822928f53 Fix API resource for Redis (#526) 2024-12-12 14:46:19 +01:00
klinch0
2e0ae0bd0a fix disable oidc (#525)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Bug Fixes**
- Improved conditional logic for OIDC functionality, ensuring accurate
deployment of related components.
- **Chores**
- Updated dependencies for the `keycloak` release to ensure proper
operation with the `postgres-operator`.
- **New Features**
- Enhanced configuration handling for OIDC, affecting the inclusion of
related components based on strict equality checks.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-12-11 10:28:20 +01:00
Andrei Kvapil
3ff1709826 Prepare release v0.20.1 2024-12-10 13:19:04 +01:00
Andrei Kvapil
ebe9a1b0a5 Fix Terraform compatibility (#524)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Enhanced dynamic registration capabilities for internal API versions
of `Application` and `ApplicationList`.
- Added configuration management for server options, allowing users to
specify a resource configuration path via command line.
  
- **Bug Fixes**
	- Improved error handling for loading resource configurations.

- **Documentation**
- Updated OpenAPI specification handling by removing certain definitions
post-processing.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-12-10 12:40:29 +01:00
Andrei Kvapil
898374b533 bump monitoring version (#523) 2024-12-09 19:26:06 +01:00
Andrei Kvapil
95e39c951a Prepare release v0.20.0 (#522)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-12-09 18:42:41 +01:00
klinch0
b6bf168817 Add cozystack-cluster-admin (#517)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit


- **New Features**
- Introduced new `Secret` resources for `k8s-client`, `kubeapps-client`,
and `kubeapps-auth-config` to enhance Keycloak configuration.
- Added a new `KeycloakRealmGroup` named `cozystack-cluster-admin` for
improved access management.
- Implemented a new `RoleBinding` for `kubeapps-admin` in the
`cozy-public` namespace, linking it to the `kubeapps-admin` role.
- Created a new `ClusterRoleBinding` named
`cozystack-cluster-admin-group`, providing cluster-level permissions.
- Added new `ClusterRole` named `kubeapps-admin`, granting specific
permissions for resource management.

- **Bug Fixes**
	- None

- **Documentation**
	- None

- **Refactor**
	- None

- **Style**
	- None

- **Tests**
	- None

- **Chores**
	- None

- **Revert**
	- None

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Andrei Kvapil <kvapss@gmail.com>
2024-12-09 15:11:30 +01:00
klinch0
ebecf2d228 Fix super-admin role (#516)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced a new `super-admin` role with comprehensive permissions
across resources, enhancing access control.
  
- **Version Updates**
	- Application version updated from `1.6.1` to `1.6.2`.
- Various packages, including `tenant`, updated to reflect new version
identifiers.

These updates improve user access management and ensure the application
is running on the latest version.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-12-09 15:06:59 +01:00
Andrei Kvapil
49df7e24a3 Fix kube-state-metrics and flux alerts labels (#520)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Streamlined metadata for monitoring agents by removing specific
Helm-related annotations and labels.
- Updated service scrape configuration to enhance target pod
identification with a new relabeling entry.

- **Bug Fixes**
- Adjusted label selection in the `VMServiceScrape` resource to improve
service scrape functionality.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-12-09 14:00:59 +01:00
Andrei Kvapil
66d9b17525 fix monitoring: show alerts only from first instance (#521)
We don't need to show alerts from the long-term instance, because the alerts
have a shorter timeout than the metrics collection interval.


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Updated the `VMAlert` YAML template to generate only the first
`VMAlert` resource based on metrics storage values.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-12-09 14:00:40 +01:00
klinch0
ccedc5fe55 fix kubeconfig (#515)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Enhanced Kubernetes configuration template for tenant-specific
context, improving configurability and security.
  
- **Version Updates**
	- Updated application version from 1.6.1 to 1.6.2.
- Incremented version references for multiple packages, ensuring
alignment with the latest commits.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-12-09 11:11:52 +01:00
Andrei Kvapil
aebf471103 Fix EndpointSlice reconciliation (#518)
Upstream fixes:

- https://github.com/kubevirt/cloud-provider-kubevirt/pull/335
- https://github.com/kubevirt/cloud-provider-kubevirt/pull/336

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

## Release Notes

- **New Features**
  - Incremented Kubernetes chart version to 0.14.1.
- Introduced a new cloud provider controller for managing EndpointSlices
in KubeVirt, enhancing responsiveness to service changes.

- **Improvements**
- Updated Docker image tag for kubevirt-cloud-provider to use the latest
version.
- Enhanced handling of EndpointSlices for LoadBalancer services,
improving service management.

- **Bug Fixes**
- Improved error handling and logging for service retrieval and
EndpointSlice management.

- **Documentation**
- Updated version mappings in the versions map file for clarity and
tracking.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-12-09 11:10:51 +01:00
Andrei Kvapil
d14b66cea5 Update Kube-OVN v0.13.0 (#513)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

## Release Notes

- **New Features**
- Enhanced deployment configurations with new init containers for
various components, improving ownership management and initialization
processes.
- Added new properties to Custom Resource Definitions (CRDs) for better
network resource management and flexibility.
- Introduced new configuration options in `values.yaml` for enhanced
functionality.
- Implemented dynamic version-specific fetching for kube-ovn charts,
improving version control.
- Expanded permissions for ClusterRoles related to authentication and
authorization.

- **Bug Fixes**
- Updated command structures and security contexts across multiple
deployments to enhance security and functionality.

- **Documentation**
- Minor formatting adjustments made to improve clarity in configuration
files.

- **Chores**
- Streamlined Dockerfile and Helm chart configurations for better
maintainability and efficiency.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-12-06 10:49:14 +01:00
klinch0
da1e705a49 NATs: fix hardcode, add merge and resolve config (#514)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

## Release Notes

- **New Features**
- Introduced new configuration parameters for Jetstream, including
`jetstream.size` and `jetstream.enabled`, enhancing storage and
functionality options.
- Added support for merging additional configurations with
`config.merge` and `config.resolver` (see the sketch after these notes).

- **Bug Fixes**
- Improved password generation and configuration merging logic for
better flexibility in deployments.

- **Version Updates**
  - NATS application version updated from `0.3.1` to `0.4.0`.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
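A minimal sketch of these NATS values, assuming this nesting (only the key names `jetstream.enabled`, `jetstream.size`, `config.merge`, and `config.resolver` come from the notes above; the example values are placeholders):

```yaml
jetstream:
  enabled: true          # enable JetStream persistence
  size: 10Gi             # placeholder storage size
config:
  merge:                 # extra NATS config merged into the generated one
    max_payload: 4MB     # placeholder setting
  resolver: {}           # account resolver config, left empty here
```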
2024-12-06 10:36:20 +01:00
klinch0
b7a51ba0bb Remove unnecessary allow-to-keycloak policy (#512)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Enhanced Keycloak client configuration with new secrets for
`k8s-client`, `kubeapps-client`, and `kubeapps-auth-config`.
- Introduced new `ClusterKeycloak` and `ClusterKeycloakRealm` resources
for improved management.
- Updated Keycloak client scopes with additional attributes and protocol
mappers.
- Added multiple CiliumNetworkPolicy and CiliumClusterwideNetworkPolicy
configurations for better traffic control.

- **Improvements**
- Logic added to check for existing Kubernetes secrets and generate new
ones as needed, ensuring seamless configuration management.
- Enhanced network policies to provide comprehensive control over
ingress and egress traffic for various services within the tenant's
namespace.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-12-05 11:29:08 +01:00
Kingdon Barrett
f97f673de0 Add Urmanac to adopters (#511)
I saw your call for adopters - I am sort of in production now, but not
with any services that I can advertise.

This Urmanac is something I'm testing on WASM workloads. I also have
hosted some Ruby services on my cluster. I am still in the
proof-of-concept phase with my production workloads, working towards a
service level of 99.5% or better. I am running SpinKube on Cozystack,
with my own Talos Linux image that I have built to add the Spin and
Tailscale extensions.

(The urmanac is in beta at: https://beta.urmanac.com - urmanac.com is a
dead link for now.)

What's holding me back currently is hardware, not so much the software
stack. I have deployed Cozystack on some severely under-powered
machines. Every time I push it to the limit, my load averages shoot up
into the 100's and I unfortunately bring my control plane and services
down. I will probably get better results when I am able to separate the
KubeVirt clusters from the data plane and control plane. When the load
rises too high, etcd becomes unresponsive, and it goes downhill from
there.

I am very impressed with the architecture of Cozystack and I have made
some contributions to Cozystack on behalf of the FluxCD community! I am
in firm support of your goal to join the CNCF.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Added "Urmanac" to the Cozystack Adopters list, including contact
information and a description of its use of Cozystack.
  
- **Documentation**
  - Reformatted the existing entry for "gohost" for consistency.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Kingdon Barrett <kingdon+github@tuesdaystudios.com>
2024-12-05 08:11:10 +01:00
Andrei Kvapil
c62a83a7ac Prepare release v0.19.0 (#500)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

## Release Notes

- **New Features**
- Updated container images for various components to their latest
versions, enhancing performance and security.

- **Bug Fixes**
- Addressed potential issues by upgrading image tags and digests for
components such as CozyStack, ClickHouse, PostgreSQL, and others.

- **Documentation**
- Updated `values.yaml` configurations for multiple packages to reflect
the latest image versions and digests.

These updates ensure improved functionality and reliability across the
application.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-12-04 21:05:41 +01:00
Andrei Kvapil
607ad72283 Add networkpolicy for Keycloak (#510) 2024-12-04 19:52:49 +01:00
Andrei Kvapil
6272cd7b88 fix keycloak secrets drift (#509) 2024-12-04 19:44:16 +01:00
Andrei Kvapil
d43b8fdab0 fix keycloak secrets drift (#508)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Enhanced management of Keycloak credentials by checking for existing
passwords stored in Kubernetes Secrets.
- Improved password management logic, allowing for the reuse of existing
passwords or the generation of new ones as needed.

- **Bug Fixes**
- Streamlined secret handling to avoid unnecessary random password
generation, improving security and maintainability.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Co-authored-by: Floppy Disk <kklinch0@gmail.com>
2024-12-04 19:40:37 +01:00
klinch0
3aa5f88a5f fix keycloak-configure secrets drift (#506)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Enhanced management of Kubernetes secrets for `k8s-client`,
`kubeapps-client`, and `kubeapps-auth-config`.
- Improved handling of client secrets by reusing existing configurations
when available.
  
- **Bug Fixes**
- Addressed issues with static secret definitions, streamlining the
configuration process.

- **Chores**
- Removed outdated secret and Keycloak client definitions for cleaner
configuration management.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-12-04 16:44:32 +01:00
Andrei Kvapil
7da85d66d5 Add basic Makefiles for keycloak (#504)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Introduced new Makefiles for `keycloak`, `keycloak-configure`, and
`keycloak-operator` packages, establishing environment variables for
deployment.
- Each Makefile includes common scripts to streamline build and
environment settings.

- **Bug Fixes**
	- No specific bug fixes were mentioned.

- **Documentation**
	- No updates to documentation were noted.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-12-04 16:19:05 +01:00
klinch0
142790dc51 fix kk-configure (#505) 2024-12-04 15:59:33 +01:00
Andrei Kvapil
21c291c4de Refactor Keycloak (#502)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

## Release Notes

- **New Features**
  - Integrated OpenID Connect (OIDC) for enhanced authentication.
- Added dynamic Role resource for tenant-specific access to Kubernetes
secrets.
  - Introduced new Keycloak realm groups for improved role management.

- **Improvements**
  - Enhanced error handling for service readiness checks.
- Streamlined configuration files for better clarity and management of
OIDC settings.
- Updated handling of API server address and improved configuration
adaptability based on OIDC settings.

- **Bug Fixes**
- Removed deprecated configurations related to Keycloak, simplifying
deployment.

These updates aim to improve security, usability, and overall system
performance.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-12-04 09:31:08 +01:00
Andrei Kvapil
fd0458681c MetallB enable frr and disable frr-k8s by default (#503) 2024-12-03 19:50:58 +01:00
Andrei Kvapil
9baef88619 MetallB disable frr by default (#501) 2024-12-03 19:38:00 +01:00
klinch0
ba421182cd fix dashboard build (#499)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Enhanced build process for Kubeapps with improved modularity and patch
integration.
	- Introduced version specification for Kubeapps builds.

- **Bug Fixes**
	- Streamlined plugin build commands for better performance and clarity.

- **Refactor**
- Restructured Dockerfile to utilize different base images and optimize
the build stages.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-12-03 11:54:20 +01:00
klinch0
f73a5a0fcb add make-generate to pre-commit (#491)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Introduced a new pre-commit hook (`run-make-generate`) to automate the
generation process in application directories.
  
- **Documentation**
- Enhanced readability of the Managed NATS Service README by adjusting
formatting and removing unnecessary headers.

- **Bug Fixes**
- Corrected JSON structure in the Postgres values schema to ensure
validity.

- **Chores**
- Updated pre-commit configuration for improved consistency and
functionality.
- Reorganized properties in the NATS values schema, removing the `users`
property to reflect changes in user management capabilities.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-12-02 19:24:22 +01:00
Andrei Kvapil
2b10fb25c8 Update Talos Linux v1.8.3 (#497) 2024-12-02 19:23:28 +01:00
Andrei Kvapil
9556716ee7 Update KubeVirt v1.4.0 (#496) 2024-12-02 19:21:11 +01:00
Andrei Kvapil
d02b851fad Update CDI v1.61.0 (#495) 2024-12-02 19:20:58 +01:00
Andrei Kvapil
6d464a87cb Update LINSTOR v1.29.2 (#494)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

## Release Notes

- **New Features**
  - Updated Piraeus Operator chart to version 2.7.1.
- Introduced new Custom Resource Definitions (CRDs) for enhanced
management of LINSTOR resources.
  
- **Improvements**
  - Updated image tags for various components to their latest versions.
- Added `nodeSelector` and `affinity` fields for improved pod scheduling
in deployments.

These enhancements provide users with better resource management and
operational capabilities.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-12-02 19:20:41 +01:00
Andrei Kvapil
6caefcdffa Update Cilium v1.16.4 (#493)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Introduced new configuration options for socket-based load balancing
tracing and initial fetch timeout settings in the Cilium deployment.
- Enhanced validation checks for deprecated options to prevent
misconfigurations.

- **Bug Fixes**
	- Improved error messaging for deprecated or invalid settings.

- **Documentation**
- Updated version numbers in README and configuration files to reflect
the new version (1.16.4).

- **Chores**
- Updated Dockerfile and image tags to reference the latest version
(1.16.4).

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-12-02 19:20:21 +01:00
Andrei Kvapil
943dcd067d Update MetalLB v0.14.8 (#492)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
	- Upgraded MetalLB application version to `v0.14.8`.
	- Introduced a new `frr-k8s` dependency for enhanced BGP management.
- Added new configuration options for TLS settings and extra containers
in the controller.
- Implemented new Custom Resource Definitions (CRDs) for managing FRR
configurations and node states.

- **Bug Fixes**
- Improved validation logic for service account names to ensure
consistency.

- **Documentation**
- Updated README files for the MetalLB and `frr-k8s` charts to reflect
new features and configuration options.

- **Refactor**
- Enhanced RBAC configurations for better resource management and
security.
- Improved webhook configurations for better validation and consistency.

- **Chores**
- Updated various YAML configuration files to include namespace
specifications for clarity.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-12-02 19:20:07 +01:00
klinch0
edbbb9be68 add kubeapps integration (#486)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

## Release Notes

- **New Features**
- Introduced a new variable `$host` for improved configuration
management.
- Added a `valuesFrom` section to the `dashboard` release, allowing
external value sourcing.
- Enhanced Keycloak integration with new client scopes, roles, and
configurations for Kubeapps.
- Added support for custom pod specifications and environment variables
in Redis configurations.
- Introduced a new Kubernetes configuration file for managing access to
resources via Role and Secret.
- Updated image versions across various components to ensure
compatibility and leverage new features.

- **Bug Fixes**
- Implemented error handling to ensure required configurations are
present.
- Improved handling of request headers for the `/logos` endpoint in
Nginx configuration.
- Adjusted security context configurations to enhance deployment
security.

- **Documentation**
- Updated configuration files to reflect new dependencies and structures
for better clarity.
- Enhanced README documentation with upgrade instructions and security
defaults.
- Expanded notes on handling persistent volumes and data migration
during upgrades.

These enhancements improve the overall functionality and reliability of
the platform.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-12-02 18:57:14 +01:00
Andrei Kvapil
9a699d7397 Allow specifying mtu for kubeovn daemonset (#487)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Introduced a new patch application step in the update process for
KubeOVN.
- Enhanced flexibility in the `kube-ovn-cni` configuration by allowing
users to specify the Maximum Transmission Unit (MTU) for improved
network performance.
  
- **Bug Fixes**
- Applied a patch to ensure the new MTU configuration is properly
integrated into the deployment process.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-12-02 18:52:23 +01:00
klinch0
df448b995a Feature/add sso roles (#480)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
	- Updated application version from 1.5.0 to 1.6.0.
- Introduced new role-based access control (RBAC) roles: view, use,
admin, and super-admin, enhancing security and permissions management.
- Added new Keycloak realm groups for view, use, admin, and super-admin
roles, streamlining user management within the application.
- Integrated `keycloak-configure` release into the deployment structure,
establishing dependencies for improved configuration management.

- **Bug Fixes**
	- Resolved versioning discrepancies in the tenant package.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-11-27 11:46:21 +01:00
klinch0
b5edaaaab2 add kk operator and configure (#485)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced the `keycloak-operator` as an optional component in
multiple deployment configurations.
- Added a Helm chart for the `keycloak-operator`, enabling streamlined
deployment and management of Keycloak instances.
- Enhanced documentation with a new README file for the Keycloak
Operator Helm chart, detailing installation and usage instructions.
- Added various Custom Resource Definitions (CRDs) for managing Keycloak
resources effectively within Kubernetes.

- **Bug Fixes**
- Improved handling of user credentials and realm configurations in the
Keycloak operator.

- **Documentation**
- Comprehensive updates to the README and configuration files to assist
users in deploying and managing Keycloak.

- **Chores**
- Added various Custom Resource Definitions (CRDs) for managing Keycloak
resources effectively within Kubernetes.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-11-25 19:51:14 +01:00
Andrei Kvapil
5a4c165020 Fix OpenAPIv2 definitions for dynamic resources (#484)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
	- Enhanced OpenAPI schema handling for the Apps API server.
- Introduced a method for deep copying schema structures to improve
resource definition management.

- **Bug Fixes**
- Improved error handling during server configuration to ensure proper
reporting of setup issues.

- **Refactor**
- Removed dynamic type registration for the `v1alpha1` API version to
simplify server initialization.

- **Chores**
	- Updated image tag for the CozyStack API to the latest version.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-11-25 15:18:43 +01:00
klinch0
b7375f730f add services to dashboard (#482)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced new Kubernetes Roles for managing access control to
dashboard resources in the Redis, Kafka, and NATS applications.
  
- **Version Updates**
	- Updated Redis application version from `0.3.0` to `0.3.1`.
	- Updated ClickHouse application version from `0.6.0` to `0.6.1`.
	- Updated Kafka application version from `0.3.0` to `0.3.1`.
	- Updated NATS application version from `0.3.0` to `0.3.1`.
- Revised versioning for multiple packages, indicating specific commit
references.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-11-21 15:35:10 +01:00
Andrei Kvapil
bdc7a92337 Make keycloak optional for distro bundles (#481) 2024-11-21 01:20:39 +01:00
klinch0
647a5577f1 add keycloak (#475)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

## Release Notes

- **New Features**
- Integrated Keycloak service into deployment configurations across
multiple files, enhancing user authentication capabilities.
- Introduced a new Helm chart for Keycloak, facilitating easier
deployment and management.
- Added Kubernetes Ingress and Service resources for Keycloak to manage
external access and internal service routing.
- Configured a PostgreSQL cluster specifically for Keycloak, ensuring
data persistence.

- **Bug Fixes**
- Updated versioning in the installer script to ensure compatibility
with the latest configurations.

- **Documentation**
- Added detailed configuration options for Keycloak deployment,
including resource limits and ingress settings.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-11-21 01:18:19 +01:00
klinch0
78366f1953 add password for nats (#477)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced username and password parameters for NATS authentication,
enhancing security options.
- Added a new configuration for specifying the Kubernetes cluster domain
for routing.
- Implemented a new Role in Kubernetes RBAC for managing secrets related
to the NATS dashboard.

- **Bug Fixes**
- Updated versioning information for the NATS application to reflect the
latest changes.

- **Documentation**
- Enhanced the README with details on new authentication parameters and
configuration options.
- Updated the JSON schema to include new properties for user
configuration.

- **Chores**
	- Incremented the NATS application version from 0.2.0 to 0.3.0.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-11-21 01:11:48 +01:00
klinch0
47bd46c171 revert-precommit (#476)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Removed unnecessary pre-commit hooks to streamline the development
process.
- Retained the `gen-versions-map` hook for maintaining version
consistency.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-11-15 08:42:23 +01:00
Andrei Kvapil
bfbde07c55 Prepare release v0.18.0 (#462)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

## Release Notes

- **New Features**
	- Expanded build process to include the `cozystack-api` component.
- Updated image versions for `cozystack`, `darkhttpd`, and other
components to improve performance and stability.

- **Bug Fixes**
- Updated image digests for various components, ensuring the latest
updates and security patches are applied.

- **Documentation**
- Incremented version numbers across multiple configuration files for
clarity and consistency.

- **Chores**
- Updated various package versions in the version map for better
dependency management.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-11-06 09:26:26 +01:00
klinch0
b9e80b9a91 add endpoint for http checks (#469) 2024-11-05 17:06:35 +01:00
klinch0
a6e710eeec fix tenant labels (#468)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Updated the HelmRelease configuration for monitoring agents to
simplify tenant label assignment by using the release namespace
directly.
  
- **Bug Fixes**
- Adjusted the logging configuration for `fluent-bit` to ensure accurate
categorization and processing of monitoring data.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-11-05 16:51:10 +01:00
Andrei Kvapil
003edf8cf0 Revert "Update LINSTOR v1.29.2" (#467)
Reverts aenix-io/cozystack#465
2024-11-05 14:26:59 +01:00
Andrei Kvapil
8d30b398d9 Switch operators to be optional in distro bundles (#466)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-11-05 14:24:13 +01:00
Andrei Kvapil
ad96d6a913 Update LINSTOR v1.29.2 (#465) 2024-11-05 14:16:43 +01:00
Andrei Kvapil
48e7cf547a Update Talos Linux v1.8.2 (#464)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-11-05 14:14:17 +01:00
klinch0
3c27a1e9bf add metrics agents (#461)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced new HelmRelease configurations for cert-manager, monitoring
agents, and Victoria Metrics Operator in Kubernetes.
- Added resource specifications for `vmselect` in the VMCluster
configuration.
- Enhanced resource management for `vmselect` with defined limits and
requests for memory and CPU.

- **Bug Fixes**
	- Adjusted resource limits for Redis failover memory allocation.

- **Documentation**
- Updated README and release notes for various components, enhancing
clarity and usability.

- **Chores**
- Updated image versions across multiple components for consistency and
performance improvements.
- Modified migration scripts to facilitate transitions and manage
resources effectively.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Andrei Kvapil <kvapss@gmail.com>
2024-11-04 19:01:33 +01:00
klinch0
f06f653744 add values and checks (#454)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced a new configuration file for Kubernetes deployments,
enhancing clarity on parameters and settings.
- Added common parameters for NATS, including external access and
persistent volume settings.

- **Bug Fixes**
- Improved error handling and feedback in Helm release management
scripts.

- **Chores**
- Reduced verbosity in test output by removing unnecessary echo
statements in the testing Makefile.
- Added success return statements in various check scripts to ensure
proper termination.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-11-04 18:43:36 +01:00
klinch0
e41b5249d2 fix logos names (#459)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Updated the application icon for the `vm-instance` application to
enhance visual representation.
  
- **Bug Fixes**
- Improved the execution of migration scripts by ensuring they have the
correct permissions before running.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-11-04 17:34:40 +01:00
Andrei Kvapil
7b78af6092 Introduce Cozystack API (#460)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Introduced a RESTful API for managing `Application` resources,
enabling CRUD operations with HelmRelease integration.
- Added validation functions for `Application` and `ApplicationSpec`,
laying the groundwork for future validation rules.
- Implemented configuration management for resources, allowing for
structured application and release settings.

- **Bug Fixes**
- Addressed API rule violations related to naming conventions and
missing types in the CozyStack API definitions.

- **Tests**
- Added comprehensive tests for round-trip functionality and version
compatibility within the Apps API server.

- **Documentation**
- Introduced documentation for the `v1alpha1` API version, including
licensing and code generation annotations.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-11-04 17:33:34 +01:00
klinch0
57e90b700f fix alerta webhook url (#457)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Updated monitoring application version to 1.5.1, enhancing overall
stability.
	- Introduced a new API key handling mechanism for improved security.
- Enhanced Ingress resource configurability with dynamic URL
construction.

- **Bug Fixes**
- Adjusted formatting for Ingress annotations to ensure proper
functionality.

- **Chores**
- Updated version mappings for the monitoring package in the versions
map.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-10-28 15:59:28 +01:00
klinch0
0ae7db654c add migration for dv (#455)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Updated versioning logic to support migration from version 5 to
version 6.
- Introduced a migration script for managing Persistent Volume Claims
(PVCs) and updating configurations.

- **Bug Fixes**
- Ensured error handling and waiting mechanisms for Kubernetes resources
are preserved during migrations.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2024-10-24 14:44:19 +02:00
780 changed files with 115582 additions and 14817 deletions

View File

@@ -17,6 +17,18 @@ jobs:
- name: Install pre-commit
run: pip install pre-commit
- name: Install generate
run: |
sudo apt update
sudo apt install curl -y
curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash -
sudo apt install nodejs -y
git clone https://github.com/bitnami/readme-generator-for-helm
cd ./readme-generator-for-helm
npm install
npm install -g pkg
pkg . -o /usr/local/bin/readme-generator
- name: Run pre-commit hooks
run: |
git fetch origin main || git fetch origin master

View File

@@ -1,21 +1,6 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.5.0
- repo: local
hooks:
- id: end-of-file-fixer
- id: trailing-whitespace
- id: mixed-line-ending
args: [--fix=lf]
- id: check-yaml
exclude: '^.*templates/.*\.yaml$'
args: [--unsafe]
- repo: https://github.com/igorshubovych/markdownlint-cli
rev: v0.41.0
hooks:
- id: markdownlint
args: [--fix, --disable, MD013, MD041, --]
- repo: local
hooks:
- id: gen-versions-map
name: Generate versions map and check for changes
entry: sh -c 'make -C packages/apps check-version-map && make -C packages/extra check-version-map'
@@ -23,3 +8,16 @@ repos:
types: [file]
pass_filenames: false
description: Run the script and fail if it generates changes
- id: run-make-generate
name: Run 'make generate' in all app directories
entry: |
/bin/bash -c '
for dir in ./packages/apps/*/; do
if [ -d "$dir" ]; then
echo "Running make generate in $dir"
(cd "$dir" && make generate)
fi
done
'
language: script
files: ^.*$

View File

@@ -28,4 +28,5 @@ This list is sorted in chronological order, based on the submission date.
| [Ænix](https://aenix.io/) | @kvaps | 2024-02-14 | Ænix provides consulting services for cloud providers and uses Cozystack as the main tool for organizing managed services for them. |
| [Mediatech](https://mediatech.dev/) | @ugenk | 2024-05-01 | We're developing and hosting software for our own and our customers' services. We're using Cozystack as a Kubernetes distribution for that. |
| [Bootstack](https://bootstack.app/) | @mrkhachaturov | 2024-08-01| At Bootstack, we utilize a Kubernetes operator specifically designed to simplify and streamline cloud infrastructure creation.|
| [gohost](https://gohost.kz/) | @karabass_off | 2024-02-01 | Our company has been working in the market of Kazakhstan for more than 15 years, providing clients with a standard set of services: VPS/VDC, IaaS, shared hosting, etc. Now we are expanding the lineup by introducing a Bare Metal Kubernetes cluster under Cozystack management. |
| [Urmanac](https://urmanac.com) | @kingdonb | 2024-12-04 | Urmanac is the future home of a hosting platform for the knowledge base of a community of personal server enthusiasts. We use Cozystack to provide support services for web sites hosted using both conventional deployments and on SpinKube, with WASM. |

View File

@@ -1,7 +1,12 @@
# The Cozystack Maintainers
| Maintainer | GitHub Username | Company |
| ---------- | --------------- | ------- |
| Andrei Kvapil | [@kvaps](https://github.com/kvaps) | Ænix |
| George Gaál | [@gecube](https://github.com/gecube) | Ænix |
| Eduard Generalov | [@egeneralov](https://github.com/egeneralov) | Ænix |
| Maintainer | GitHub Username | Company | Responsibility |
| ---------- | --------------- | ------- | --------------------------------- |
| Andrei Kvapil | [@kvaps](https://github.com/kvaps) | Ænix | Core Maintainer |
| George Gaál | [@gecube](https://github.com/gecube) | Ænix | DevOps Practices in Platform, Developers Advocate |
| Kingdon Barrett | [@kingdonb](https://github.com/kingdonb) | Urmanac | FluxCD and flux-operator |
| Timofei Larkin | [@lllamnyp](https://github.com/lllamnyp) | 3commas | Etcd-operator Lead |
| Artem Bortnikov | [@aobort](https://github.com/aobort) | Timescale | Etcd-operator Lead |
| Andrei Gumilev | [@chumkaska](https://github.com/chumkaska) | Ænix | Platform Documentation |
| Timur Tukaev | [@tym83](https://github.com/tym83) | Ænix | Cozystack Website, Marketing, Community Management |
| Kirill Klinchenkov | [@klinch0](https://github.com/klinch0) | Ænix | Core Maintainer |

View File

@@ -6,6 +6,9 @@ build:
make -C packages/apps/mysql image
make -C packages/apps/clickhouse image
make -C packages/apps/kubernetes image
make -C packages/extra/monitoring image
make -C packages/system/cozystack-api image
make -C packages/system/cozystack-controller image
make -C packages/system/cilium image
make -C packages/system/kubeovn image
make -C packages/system/dashboard image
@@ -33,6 +36,10 @@ assets:
make -C packages/core/installer/ assets
test:
test -f _out/assets/nocloud-amd64.raw.xz || make -C packages/core/installer talos-nocloud
make -C packages/core/testing apply
make -C packages/core/testing test
make -C packages/core/testing delete
make -C packages/core/testing test-applications
generate:
hack/update-codegen.sh

View File

@@ -0,0 +1,25 @@
API rule violation: list_type_missing,github.com/aenix-io/cozystack/pkg/apis/apps/v1alpha1,ApplicationStatus,Conditions
API rule violation: names_match,k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1,JSONSchemaProps,Ref
API rule violation: names_match,k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1,JSONSchemaProps,Schema
API rule violation: names_match,k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1,JSONSchemaProps,XEmbeddedResource
API rule violation: names_match,k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1,JSONSchemaProps,XIntOrString
API rule violation: names_match,k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1,JSONSchemaProps,XListMapKeys
API rule violation: names_match,k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1,JSONSchemaProps,XListType
API rule violation: names_match,k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1,JSONSchemaProps,XMapType
API rule violation: names_match,k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1,JSONSchemaProps,XPreserveUnknownFields
API rule violation: names_match,k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1,JSONSchemaProps,XValidations
API rule violation: names_match,k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1,JSONSchemaPropsOrArray,JSONSchemas
API rule violation: names_match,k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1,JSONSchemaPropsOrArray,Schema
API rule violation: names_match,k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1,JSONSchemaPropsOrBool,Allows
API rule violation: names_match,k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1,JSONSchemaPropsOrBool,Schema
API rule violation: names_match,k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1,JSONSchemaPropsOrStringArray,Property
API rule violation: names_match,k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1,JSONSchemaPropsOrStringArray,Schema
API rule violation: names_match,k8s.io/apimachinery/pkg/apis/meta/v1,APIResourceList,APIResources
API rule violation: names_match,k8s.io/apimachinery/pkg/apis/meta/v1,Duration,Duration
API rule violation: names_match,k8s.io/apimachinery/pkg/apis/meta/v1,InternalEvent,Object
API rule violation: names_match,k8s.io/apimachinery/pkg/apis/meta/v1,InternalEvent,Type
API rule violation: names_match,k8s.io/apimachinery/pkg/apis/meta/v1,MicroTime,Time
API rule violation: names_match,k8s.io/apimachinery/pkg/apis/meta/v1,StatusCause,Type
API rule violation: names_match,k8s.io/apimachinery/pkg/apis/meta/v1,Time,Time
API rule violation: names_match,k8s.io/apimachinery/pkg/runtime,Unknown,ContentEncoding
API rule violation: names_match,k8s.io/apimachinery/pkg/runtime,Unknown,ContentType

View File

@@ -0,0 +1,36 @@
/*
Copyright 2025.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package v1alpha1 contains API Schema definitions for the v1alpha1 API group.
// +kubebuilder:object:generate=true
// +groupName=cozystack.io
package v1alpha1

import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/scheme"
)

var (
	// GroupVersion is group version used to register these objects.
	GroupVersion = schema.GroupVersion{Group: "cozystack.io", Version: "v1alpha1"}

	// SchemeBuilder is used to add go types to the GroupVersionKind scheme.
	SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}

	// AddToScheme adds the types in this group-version to the given scheme.
	AddToScheme = SchemeBuilder.AddToScheme
)

View File

@@ -0,0 +1,70 @@
/*
Copyright 2025.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1

import (
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// WorkloadStatus defines the observed state of Workload
type WorkloadStatus struct {
	// Kind represents the type of workload (redis, postgres, etc.)
	// +required
	Kind string `json:"kind"`

	// Type represents the specific role of the workload (redis, sentinel, etc.)
	// If not specified, defaults to Kind
	// +optional
	Type string `json:"type,omitempty"`

	// Resources specifies the compute resources allocated to this workload
	// +required
	Resources map[string]resource.Quantity `json:"resources"`

	// Operational indicates if all pods of the workload are ready
	// +optional
	Operational bool `json:"operational"`
}

// +kubebuilder:object:root=true
// +kubebuilder:printcolumn:name="Kind",type="string",JSONPath=".status.kind"
// +kubebuilder:printcolumn:name="Type",type="string",JSONPath=".status.type"
// +kubebuilder:printcolumn:name="CPU",type="string",JSONPath=".status.resources.cpu"
// +kubebuilder:printcolumn:name="Memory",type="string",JSONPath=".status.resources.memory"
// +kubebuilder:printcolumn:name="Operational",type="boolean",JSONPath=`.status.operational`

// Workload is the Schema for the workloads API
type Workload struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Status WorkloadStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true

// WorkloadList contains a list of Workload
type WorkloadList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []Workload `json:"items"`
}

func init() {
	SchemeBuilder.Register(&Workload{}, &WorkloadList{})
}
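For orientation, a sketch of what a Workload object might look like once the controller fills in its status, rendered as YAML (field names follow the Go types above; the name, namespace, and values are made up):

```yaml
apiVersion: cozystack.io/v1alpha1
kind: Workload
metadata:
  name: redis-example          # hypothetical object name
  namespace: tenant-example    # hypothetical namespace
status:
  kind: redis                  # workload kind (redis, postgres, etc.)
  type: sentinel               # specific role; defaults to kind when omitted
  resources:
    cpu: 500m                  # resource.Quantity values
    memory: 1Gi
  operational: true            # all pods of the workload are ready
```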

View File

@@ -0,0 +1,91 @@
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// WorkloadMonitorSpec defines the desired state of WorkloadMonitor
type WorkloadMonitorSpec struct {
	// Selector is a label selector to find workloads to monitor
	// +required
	Selector map[string]string `json:"selector"`

	// Kind specifies the kind of the workload
	// +optional
	Kind string `json:"kind,omitempty"`

	// Type specifies the type of the workload
	// +optional
	Type string `json:"type,omitempty"`

	// Version specifies the version of the workload
	// +optional
	Version string `json:"version,omitempty"`

	// MinReplicas specifies the minimum number of replicas that should be available
	// +kubebuilder:validation:Minimum=0
	// +optional
	MinReplicas *int32 `json:"minReplicas,omitempty"`

	// Replicas is the desired number of replicas
	// If not specified, will use observedReplicas as the target
	// +kubebuilder:validation:Minimum=0
	// +optional
	Replicas *int32 `json:"replicas,omitempty"`
}

// WorkloadMonitorStatus defines the observed state of WorkloadMonitor
type WorkloadMonitorStatus struct {
	// Operational indicates if the workload meets all operational requirements
	// +optional
	Operational *bool `json:"operational,omitempty"`

	// AvailableReplicas is the number of ready replicas
	// +optional
	AvailableReplicas int32 `json:"availableReplicas"`

	// ObservedReplicas is the total number of pods observed
	// +optional
	ObservedReplicas int32 `json:"observedReplicas"`
}

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:name="Kind",type="string",JSONPath=".spec.kind"
// +kubebuilder:printcolumn:name="Type",type="string",JSONPath=".spec.type"
// +kubebuilder:printcolumn:name="Version",type="string",JSONPath=".spec.version"
// +kubebuilder:printcolumn:name="Replicas",type="integer",JSONPath=".spec.replicas"
// +kubebuilder:printcolumn:name="MinReplicas",type="integer",JSONPath=".spec.minReplicas"
// +kubebuilder:printcolumn:name="Available",type="integer",JSONPath=".status.availableReplicas"
// +kubebuilder:printcolumn:name="Observed",type="integer",JSONPath=".status.observedReplicas"
// +kubebuilder:printcolumn:name="Operational",type="boolean",JSONPath=".status.operational"

// WorkloadMonitor is the Schema for the workloadmonitors API
type WorkloadMonitor struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   WorkloadMonitorSpec   `json:"spec,omitempty"`
	Status WorkloadMonitorStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true

// WorkloadMonitorList contains a list of WorkloadMonitor
type WorkloadMonitorList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []WorkloadMonitor `json:"items"`
}

func init() {
	SchemeBuilder.Register(&WorkloadMonitor{}, &WorkloadMonitorList{})
}

// GetSelector returns the label selector from metadata
func (w *WorkloadMonitor) GetSelector() map[string]string {
	return w.Spec.Selector
}

// Selector specifies the label selector for workloads
type Selector map[string]string
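Likewise, a hedged example of a WorkloadMonitor manifest using only the spec fields defined above (the name, namespace, labels, and counts are illustrative):

```yaml
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
  name: redis-monitor          # hypothetical object name
  namespace: tenant-example    # hypothetical namespace
spec:
  selector:                    # label selector used to find workload pods
    app: redis                 # hypothetical label
  kind: redis
  type: sentinel
  version: "7.2"               # hypothetical version string
  minReplicas: 2               # minimum replicas to be considered operational
  replicas: 3                  # desired replica count
```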

View File

@@ -0,0 +1,238 @@
//go:build !ignore_autogenerated
/*
Copyright 2025 The Cozystack Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by controller-gen. DO NOT EDIT.
package v1alpha1
import (
"k8s.io/apimachinery/pkg/api/resource"
runtime "k8s.io/apimachinery/pkg/runtime"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in Selector) DeepCopyInto(out *Selector) {
{
in := &in
*out = make(Selector, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Selector.
func (in Selector) DeepCopy() Selector {
if in == nil {
return nil
}
out := new(Selector)
in.DeepCopyInto(out)
return *out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Workload) DeepCopyInto(out *Workload) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Workload.
func (in *Workload) DeepCopy() *Workload {
if in == nil {
return nil
}
out := new(Workload)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *Workload) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkloadList) DeepCopyInto(out *WorkloadList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]Workload, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WorkloadList.
func (in *WorkloadList) DeepCopy() *WorkloadList {
if in == nil {
return nil
}
out := new(WorkloadList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *WorkloadList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkloadMonitor) DeepCopyInto(out *WorkloadMonitor) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WorkloadMonitor.
func (in *WorkloadMonitor) DeepCopy() *WorkloadMonitor {
if in == nil {
return nil
}
out := new(WorkloadMonitor)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *WorkloadMonitor) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkloadMonitorList) DeepCopyInto(out *WorkloadMonitorList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]WorkloadMonitor, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WorkloadMonitorList.
func (in *WorkloadMonitorList) DeepCopy() *WorkloadMonitorList {
if in == nil {
return nil
}
out := new(WorkloadMonitorList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *WorkloadMonitorList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkloadMonitorSpec) DeepCopyInto(out *WorkloadMonitorSpec) {
*out = *in
if in.Selector != nil {
in, out := &in.Selector, &out.Selector
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
if in.MinReplicas != nil {
in, out := &in.MinReplicas, &out.MinReplicas
*out = new(int32)
**out = **in
}
if in.Replicas != nil {
in, out := &in.Replicas, &out.Replicas
*out = new(int32)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WorkloadMonitorSpec.
func (in *WorkloadMonitorSpec) DeepCopy() *WorkloadMonitorSpec {
if in == nil {
return nil
}
out := new(WorkloadMonitorSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkloadMonitorStatus) DeepCopyInto(out *WorkloadMonitorStatus) {
*out = *in
if in.Operational != nil {
in, out := &in.Operational, &out.Operational
*out = new(bool)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WorkloadMonitorStatus.
func (in *WorkloadMonitorStatus) DeepCopy() *WorkloadMonitorStatus {
if in == nil {
return nil
}
out := new(WorkloadMonitorStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *WorkloadStatus) DeepCopyInto(out *WorkloadStatus) {
*out = *in
if in.Resources != nil {
in, out := &in.Resources, &out.Resources
*out = make(map[string]resource.Quantity, len(*in))
for key, val := range *in {
(*out)[key] = val.DeepCopy()
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new WorkloadStatus.
func (in *WorkloadStatus) DeepCopy() *WorkloadStatus {
if in == nil {
return nil
}
out := new(WorkloadStatus)
in.DeepCopyInto(out)
return out
}

cmd/cozystack-api/main.go
View File

@@ -0,0 +1,33 @@
/*
Copyright 2024 The Cozystack Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package main

import (
	"os"

	"github.com/aenix-io/cozystack/pkg/cmd/server"
	genericapiserver "k8s.io/apiserver/pkg/server"
	"k8s.io/component-base/cli"
)

func main() {
	ctx := genericapiserver.SetupSignalContext()
	options := server.NewAppsServerOptions(os.Stdout, os.Stderr)
	cmd := server.NewCommandStartAppsServer(ctx, options)
	code := cli.Run(cmd)
	os.Exit(code)
}


@@ -0,0 +1,29 @@
package main

import (
	"flag"
	"log"
	"net/http"
	"path/filepath"
)

func main() {
	addr := flag.String("address", ":8123", "Address to listen on")
	dir := flag.String("dir", "/cozystack/assets", "Directory to serve files from")
	flag.Parse()

	absDir, err := filepath.Abs(*dir)
	if err != nil {
		log.Fatalf("Error getting absolute path for %s: %v", *dir, err)
	}

	fs := http.FileServer(http.Dir(absDir))
	http.Handle("/", fs)

	log.Printf("Server starting on %s, serving directory %s", *addr, absDir)
	err = http.ListenAndServe(*addr, nil)
	if err != nil {
		log.Fatalf("Server failed to start: %v", err)
	}
}
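This is a plain net/http static file server with no routing beyond http.FileServer. A small client sketch for sanity-checking it locally; the asset name is hypothetical, and the address matches the -address flag default above:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Assumes the assets server above is running locally with its default flags.
	resp, err := http.Get("http://localhost:8123/example.iso") // hypothetical asset name
	if err != nil {
		log.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()

	n, err := io.Copy(io.Discard, resp.Body)
	if err != nil {
		log.Fatalf("read failed: %v", err)
	}
	fmt.Printf("status %s, %d bytes\n", resp.Status, n)
}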


@@ -0,0 +1,210 @@
/*
Copyright 2025.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package main

import (
	"crypto/tls"
	"flag"
	"os"
	"time"

	// Import all Kubernetes client auth plugins (e.g. Azure, GCP, OIDC, etc.)
	// to ensure that exec-entrypoint and run can make use of them.
	_ "k8s.io/client-go/plugin/pkg/client/auth"

	"k8s.io/apimachinery/pkg/runtime"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/healthz"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
	"sigs.k8s.io/controller-runtime/pkg/metrics/filters"
	metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
	"sigs.k8s.io/controller-runtime/pkg/webhook"

	cozystackiov1alpha1 "github.com/aenix-io/cozystack/api/v1alpha1"
	"github.com/aenix-io/cozystack/internal/controller"
	"github.com/aenix-io/cozystack/internal/telemetry"
	// +kubebuilder:scaffold:imports
)

var (
	scheme   = runtime.NewScheme()
	setupLog = ctrl.Log.WithName("setup")
)

func init() {
	utilruntime.Must(clientgoscheme.AddToScheme(scheme))
	utilruntime.Must(cozystackiov1alpha1.AddToScheme(scheme))
	// +kubebuilder:scaffold:scheme
}

func main() {
	var metricsAddr string
	var enableLeaderElection bool
	var probeAddr string
	var secureMetrics bool
	var enableHTTP2 bool
	var disableTelemetry bool
	var telemetryEndpoint string
	var telemetryInterval string
	var cozystackVersion string
	var tlsOpts []func(*tls.Config)

	flag.StringVar(&metricsAddr, "metrics-bind-address", "0", "The address the metrics endpoint binds to. "+
		"Use :8443 for HTTPS or :8080 for HTTP, or leave as 0 to disable the metrics service.")
	flag.StringVar(&probeAddr, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
	flag.BoolVar(&enableLeaderElection, "leader-elect", false,
		"Enable leader election for controller manager. "+
			"Enabling this will ensure there is only one active controller manager.")
	flag.BoolVar(&secureMetrics, "metrics-secure", true,
		"If set, the metrics endpoint is served securely via HTTPS. Use --metrics-secure=false to use HTTP instead.")
	flag.BoolVar(&enableHTTP2, "enable-http2", false,
		"If set, HTTP/2 will be enabled for the metrics and webhook servers")
	flag.BoolVar(&disableTelemetry, "disable-telemetry", false,
		"Disable telemetry collection")
	flag.StringVar(&telemetryEndpoint, "telemetry-endpoint", "https://telemetry.cozystack.io",
		"Endpoint for sending telemetry data")
	flag.StringVar(&telemetryInterval, "telemetry-interval", "15m",
		"Interval between telemetry data collection (e.g. 15m, 1h)")
	flag.StringVar(&cozystackVersion, "cozystack-version", "unknown",
		"Version of Cozystack")
	opts := zap.Options{
		Development: false,
	}
	opts.BindFlags(flag.CommandLine)
	flag.Parse()

	// Parse telemetry interval
	interval, err := time.ParseDuration(telemetryInterval)
	if err != nil {
		setupLog.Error(err, "invalid telemetry interval")
		os.Exit(1)
	}

	// Configure telemetry
	telemetryConfig := telemetry.Config{
		Disabled:         disableTelemetry,
		Endpoint:         telemetryEndpoint,
		Interval:         interval,
		CozystackVersion: cozystackVersion,
	}

	ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))

	// If the enable-http2 flag is false (the default), HTTP/2 should be disabled
	// because of its known vulnerabilities. More specifically, disabling HTTP/2
	// prevents the servers from being vulnerable to the HTTP/2 Stream Cancellation
	// and Rapid Reset CVEs. For more information see:
	// - https://github.com/advisories/GHSA-qppj-fm5r-hxr3
	// - https://github.com/advisories/GHSA-4374-p667-p6c8
	disableHTTP2 := func(c *tls.Config) {
		setupLog.Info("disabling http/2")
		c.NextProtos = []string{"http/1.1"}
	}

	if !enableHTTP2 {
		tlsOpts = append(tlsOpts, disableHTTP2)
	}

	webhookServer := webhook.NewServer(webhook.Options{
		TLSOpts: tlsOpts,
	})

	// Metrics endpoint is enabled in 'config/default/kustomization.yaml'. The Metrics options configure the server.
	// More info:
	// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.19.1/pkg/metrics/server
	// - https://book.kubebuilder.io/reference/metrics.html
	metricsServerOptions := metricsserver.Options{
		BindAddress:   metricsAddr,
		SecureServing: secureMetrics,
		TLSOpts:       tlsOpts,
	}

	if secureMetrics {
		// FilterProvider is used to protect the metrics endpoint with authn/authz.
		// These configurations ensure that only authorized users and service accounts
		// can access the metrics endpoint. The RBAC is configured in 'config/rbac/kustomization.yaml'. More info:
		// https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.19.1/pkg/metrics/filters#WithAuthenticationAndAuthorization
		metricsServerOptions.FilterProvider = filters.WithAuthenticationAndAuthorization

		// TODO(user): If CertDir, CertName, and KeyName are not specified, controller-runtime will automatically
		// generate self-signed certificates for the metrics server. While convenient for development and testing,
		// this setup is not recommended for production.
	}

	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Scheme:                 scheme,
		Metrics:                metricsServerOptions,
		WebhookServer:          webhookServer,
		HealthProbeBindAddress: probeAddr,
		LeaderElection:         enableLeaderElection,
		LeaderElectionID:       "19a0338c.cozystack.io",
		// LeaderElectionReleaseOnCancel defines if the leader should step down voluntarily
		// when the Manager ends. This requires the binary to end immediately when the
		// Manager is stopped; otherwise this setting is unsafe. Setting this significantly
		// speeds up voluntary leader transitions, as the new leader doesn't have to wait
		// the LeaseDuration time first.
		//
		// In the default scaffold provided, the program ends immediately after
		// the manager stops, so it would be fine to enable this option. However,
		// if you are doing, or intend to do, any operation such as performing cleanups
		// after the manager stops, then its usage might be unsafe.
		// LeaderElectionReleaseOnCancel: true,
	})
	if err != nil {
		setupLog.Error(err, "unable to start manager")
		os.Exit(1)
	}

	if err = (&controller.WorkloadMonitorReconciler{
		Client: mgr.GetClient(),
		Scheme: mgr.GetScheme(),
	}).SetupWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create controller", "controller", "WorkloadMonitor")
		os.Exit(1)
	}
	// +kubebuilder:scaffold:builder

	if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
		setupLog.Error(err, "unable to set up health check")
		os.Exit(1)
	}
	if err := mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
		setupLog.Error(err, "unable to set up ready check")
		os.Exit(1)
	}

	// Initialize telemetry collector
	collector, err := telemetry.NewCollector(mgr.GetClient(), &telemetryConfig, mgr.GetConfig())
	if err != nil {
		setupLog.V(1).Error(err, "unable to create telemetry collector, telemetry will be disabled")
	}
	if collector != nil {
		if err := mgr.Add(collector); err != nil {
			setupLog.Error(err, "unable to set up telemetry collector")
			setupLog.V(1).Error(err, "unable to set up telemetry collector, continuing without telemetry")
		}
	}

	setupLog.Info("starting manager")
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		setupLog.Error(err, "problem running manager")
		os.Exit(1)
	}
}
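The telemetry collector is handed to mgr.Add, which accepts any controller-runtime Runnable and runs it for the lifetime of the manager. A compilable sketch of what such a Runnable can look like; this is an assumed shape for illustration, not the actual internal/telemetry implementation:

package telemetrysketch

import (
	"context"
	"time"

	"sigs.k8s.io/controller-runtime/pkg/client"
)

// periodicCollector is a hypothetical Runnable: anything with a
// Start(ctx context.Context) error method can be passed to mgr.Add and
// will be started with the manager and stopped when its context is done.
type periodicCollector struct {
	client   client.Client
	interval time.Duration
}

func (c *periodicCollector) Start(ctx context.Context) error {
	ticker := time.NewTicker(c.interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return nil // manager is shutting down
		case <-ticker.C:
			// Collect and report data here; errors are typically logged
			// rather than returned so one failed report does not stop the manager.
			_ = c.client // placeholder for real collection logic
		}
	}
}

Returning nil when the context is cancelled lets the manager shut down cleanly; a real collector would presumably also honor the Disabled and Endpoint settings from telemetry.Config shown above.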

(7 file diffs suppressed because they are too large)

go.mod Normal file

@@ -0,0 +1,117 @@
// This is a generated file. Do not edit directly.
module github.com/aenix-io/cozystack
go 1.23.0
require (
github.com/fluxcd/helm-controller/api v1.1.0
github.com/google/gofuzz v1.2.0
github.com/onsi/ginkgo/v2 v2.19.0
github.com/onsi/gomega v1.33.1
github.com/spf13/cobra v1.8.1
github.com/stretchr/testify v1.9.0
gopkg.in/yaml.v2 v2.4.0
k8s.io/api v0.31.2
k8s.io/apiextensions-apiserver v0.31.2
k8s.io/apimachinery v0.31.2
k8s.io/apiserver v0.31.2
k8s.io/client-go v0.31.2
k8s.io/component-base v0.31.2
k8s.io/klog/v2 v2.130.1
k8s.io/kube-openapi v0.0.0-20240827152857-f7e401e7b4c2
k8s.io/utils v0.0.0-20240711033017-18e509b52bc8
sigs.k8s.io/controller-runtime v0.19.0
sigs.k8s.io/structured-merge-diff/v4 v4.4.1
)
require (
github.com/NYTimes/gziphandler v1.1.1 // indirect
github.com/antlr4-go/antlr/v4 v4.13.0 // indirect
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/coreos/go-semver v0.3.1 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
github.com/evanphx/json-patch/v5 v5.9.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fluxcd/pkg/apis/kustomize v1.6.1 // indirect
github.com/fluxcd/pkg/apis/meta v1.6.1 // indirect
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/fxamacker/cbor/v2 v2.7.0 // indirect
github.com/go-logr/logr v1.4.2 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-logr/zapr v1.3.0 // indirect
github.com/go-openapi/jsonpointer v0.21.0 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/swag v0.23.0 // indirect
github.com/go-task/slim-sprig/v3 v3.0.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/cel-go v0.21.0 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/go-cmp v0.6.0 // indirect
github.com/google/pprof v0.0.0-20240727154555-813a5fbdbec8 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0 // indirect
github.com/imdario/mergo v0.3.6 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus/client_golang v1.19.1 // indirect
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/common v0.55.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/stoewer/go-strcase v1.3.0 // indirect
github.com/x448/float16 v0.8.4 // indirect
go.etcd.io/etcd/api/v3 v3.5.16 // indirect
go.etcd.io/etcd/client/pkg/v3 v3.5.16 // indirect
go.etcd.io/etcd/client/v3 v3.5.16 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.53.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.53.0 // indirect
go.opentelemetry.io/otel v1.28.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.27.0 // indirect
go.opentelemetry.io/otel/metric v1.28.0 // indirect
go.opentelemetry.io/otel/sdk v1.28.0 // indirect
go.opentelemetry.io/otel/trace v1.28.0 // indirect
go.opentelemetry.io/proto/otlp v1.3.1 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.0 // indirect
golang.org/x/crypto v0.28.0 // indirect
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 // indirect
golang.org/x/net v0.30.0 // indirect
golang.org/x/oauth2 v0.23.0 // indirect
golang.org/x/sync v0.8.0 // indirect
golang.org/x/sys v0.26.0 // indirect
golang.org/x/term v0.25.0 // indirect
golang.org/x/text v0.19.0 // indirect
golang.org/x/time v0.7.0 // indirect
golang.org/x/tools v0.26.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20240528184218-531527333157 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240701130421-f6361c86f094 // indirect
google.golang.org/grpc v1.65.0 // indirect
google.golang.org/protobuf v1.34.2 // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/kms v0.31.2 // indirect
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.31.0 // indirect
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 // indirect
sigs.k8s.io/yaml v1.4.0 // indirect
)

go.sum Normal file

@@ -0,0 +1,313 @@
github.com/NYTimes/gziphandler v1.1.1 h1:ZUDjpQae29j0ryrS0u/B8HZfJBtBQHjqw2rQ2cqUQ3I=
github.com/NYTimes/gziphandler v1.1.1/go.mod h1:n/CVRwUEOgIxrgPvAQhUUr9oeUtvrhMomdKFjzJNB0c=
github.com/antlr4-go/antlr/v4 v4.13.0 h1:lxCg3LAv+EUK6t1i0y1V6/SLeUi0eKEKdhQAlS8TVTI=
github.com/antlr4-go/antlr/v4 v4.13.0/go.mod h1:pfChB/xh/Unjila75QW7+VU4TSnWnnk9UTnmpPaOR2g=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a h1:idn718Q4B6AGu/h5Sxe66HYVdqdGu2l9Iebqhi/AEoA=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/blang/semver/v4 v4.0.0 h1:1PFHFE6yCCTv8C1TeyNNarDzntLi7wMI5i/pzqYIsAM=
github.com/blang/semver/v4 v4.0.0/go.mod h1:IbckMUScFkM3pff0VJDNKRiT6TG/YpiHIM2yvyW5YoQ=
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/coreos/go-semver v0.3.1 h1:yi21YpKnrx1gt5R+la8n5WgS0kCrsPp33dmEyHReZr4=
github.com/coreos/go-semver v0.3.1/go.mod h1:irMmmIw/7yzSRPWryHsK7EYSg09caPQL03VsM8rvUec=
github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g=
github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/evanphx/json-patch v0.5.2 h1:xVCHIVMUu1wtM/VkR9jVZ45N3FhZfYMMYGorLCR8P3k=
github.com/evanphx/json-patch v0.5.2/go.mod h1:ZWS5hhDbVDyob71nXKNL0+PWn6ToqBHMikGIFbs31qQ=
github.com/evanphx/json-patch/v5 v5.9.0 h1:kcBlZQbplgElYIlo/n1hJbls2z/1awpXxpRi0/FOJfg=
github.com/evanphx/json-patch/v5 v5.9.0/go.mod h1:VNkHZ/282BpEyt/tObQO8s5CMPmYYq14uClGH4abBuQ=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/fluxcd/helm-controller/api v1.1.0 h1:NS5Wm3U6Kv4w7Cw2sDOV++vf2ecGfFV00x1+2Y3QcOY=
github.com/fluxcd/helm-controller/api v1.1.0/go.mod h1:BgHMgMY6CWynzl4KIbHpd6Wpn3FN9BqgkwmvoKCp6iE=
github.com/fluxcd/pkg/apis/kustomize v1.6.1 h1:22FJc69Mq4i8aCxnKPlddHhSMyI4UPkQkqiAdWFcqe0=
github.com/fluxcd/pkg/apis/kustomize v1.6.1/go.mod h1:5dvQ4IZwz0hMGmuj8tTWGtarsuxW0rWsxJOwC6i+0V8=
github.com/fluxcd/pkg/apis/meta v1.6.1 h1:maLhcRJ3P/70ArLCY/LF/YovkxXbX+6sTWZwZQBeNq0=
github.com/fluxcd/pkg/apis/meta v1.6.1/go.mod h1:YndB/gxgGZmKfqpAfFxyCDNFJFP0ikpeJzs66jwq280=
github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA=
github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM=
github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E=
github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-logr/zapr v1.3.0 h1:XGdV8XW8zdwFiwOA2Dryh1gj2KRQyOOoNmBy4EplIcQ=
github.com/go-logr/zapr v1.3.0/go.mod h1:YKepepNBd1u/oyhd/yQmtjVXmm9uML4IXUgMOwR8/Gg=
github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs=
github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ=
github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY=
github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE=
github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k=
github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE=
github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ=
github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt/v4 v4.5.0 h1:7cYmW1XlMY7h7ii7UhUyChSgS5wUJEnm9uZVTGqOWzg=
github.com/golang-jwt/jwt/v4 v4.5.0/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/btree v1.0.1 h1:gK4Kx5IaGY9CD5sPJ36FHiBJ6ZXl0kilRiiCj+jdYp4=
github.com/google/btree v1.0.1/go.mod h1:xXMiIv4Fb/0kKde4SpL7qlzvu5cMJDRkFDxJfI9uaxA=
github.com/google/cel-go v0.21.0 h1:cl6uW/gxN+Hy50tNYvI691+sXxioCnstFzLp2WO4GCI=
github.com/google/cel-go v0.21.0/go.mod h1:rHUlWCcBKgyEk+eV03RPdZUekPp6YcJwV0FxuUksYxc=
github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I=
github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/pprof v0.0.0-20240727154555-813a5fbdbec8 h1:FKHo8hFI3A+7w0aUQuYXQ+6EN5stWmeY/AZqtM8xk9k=
github.com/google/pprof v0.0.0-20240727154555-813a5fbdbec8/go.mod h1:K1liHPHnj73Fdn/EKuT8nrFqBihUSKXoLYU0BuatOYo=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/websocket v1.5.0 h1:PPwGk2jz7EePpoHN/+ClbZu8SPxiqlu12wZP/3sWmnc=
github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/grpc-ecosystem/go-grpc-middleware v1.3.0 h1:+9834+KizmvFV7pXQGSXQTsaWhq2GjuNUt0aUU0YBYw=
github.com/grpc-ecosystem/go-grpc-middleware v1.3.0/go.mod h1:z0ButlSOZa5vEBq9m2m2hlwIgKw+rp3sdCBRoJY+30Y=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 h1:Ovs26xHkKqVztRpIrF/92BcuyuQ/YW4NSIpoGtfXNho=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.16.0 h1:gmcG1KaJ57LophUzW0Hy8NmPhnMZb4M0+kPpLofRdBo=
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0 h1:bkypFPDjIYGfCYD5mRBvpqxfYX1YCS1PXdKYWi8FsN0=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0/go.mod h1:P+Lt/0by1T8bfcF3z737NnSbmxQAppXMRziHUxPOC8k=
github.com/imdario/mergo v0.3.6 h1:xTNEAn+kxVO7dTZGu0CegyqKZmoWFI0rF8UxjlB2d28=
github.com/imdario/mergo v0.3.6/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/jonboulle/clockwork v0.2.2 h1:UOGuzwb1PwsrDAObMuhUnj0p5ULPj8V/xJ7Kx9qUBdQ=
github.com/jonboulle/clockwork v0.2.2/go.mod h1:Pkfl5aHPm1nk2H9h0bjmnJD/BcgbGXUBGnn1kMkgxc8=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/onsi/ginkgo/v2 v2.19.0 h1:9Cnnf7UHo57Hy3k6/m5k3dRfGTMXGvxhHFvkDTCTpvA=
github.com/onsi/ginkgo/v2 v2.19.0/go.mod h1:rlwLi9PilAFJ8jCg9UE1QP6VBpd6/xj3SRC0d6TU0To=
github.com/onsi/gomega v1.33.1 h1:dsYjIxxSR755MDmKVsaFQTE22ChNBcuuTWgkUDSubOk=
github.com/onsi/gomega v1.33.1/go.mod h1:U4R44UsT+9eLIaYRB2a5qajjtQYn0hauxvRm16AVYg0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.19.1 h1:wZWJDwK+NameRJuPGDhlnFgx8e8HN3XHQeLaYJFJBOE=
github.com/prometheus/client_golang v1.19.1/go.mod h1:mP78NwGzrVks5S2H6ab8+ZZGJLZUq1hoULYBAYBw1Ho=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.55.0 h1:KEi6DK7lXW/m7Ig5i47x0vRzuBsHuvJdi5ee6Y3G1dc=
github.com/prometheus/common v0.55.0/go.mod h1:2SECS4xJG1kd8XF9IcM1gMX6510RAEL65zxzNImwdc8=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8=
github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/soheilhy/cmux v0.1.5 h1:jjzc5WVemNEDTLwv9tlmemhC73tI08BNOIGwBOo10Js=
github.com/soheilhy/cmux v0.1.5/go.mod h1:T7TcVDs9LWfQgPlPsdngu6I6QIoyIFZDDC6sNE1GqG0=
github.com/spf13/cobra v1.8.1 h1:e5/vxKd/rZsfSJMUX1agtjeTDf+qv1/JdBF8gg5k9ZM=
github.com/spf13/cobra v1.8.1/go.mod h1:wHxEcudfqmLYa8iTfL+OuZPbBZkmvliBWKIezN3kD9Y=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stoewer/go-strcase v1.3.0 h1:g0eASXYtp+yvN9fK8sH94oCIk0fau9uV1/ZdJ0AVEzs=
github.com/stoewer/go-strcase v1.3.0/go.mod h1:fAH5hQ5pehh+j3nZfvwdk2RgEgQjAoM8wodgtPmh1xo=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/tmc/grpc-websocket-proxy v0.0.0-20220101234140-673ab2c3ae75 h1:6fotK7otjonDflCTK0BCfls4SPy3NcCVb5dqqmbRknE=
github.com/tmc/grpc-websocket-proxy v0.0.0-20220101234140-673ab2c3ae75/go.mod h1:KO6IkyS8Y3j8OdNO85qEYBsRPuteD+YciPomcXdrMnk=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2 h1:eY9dn8+vbi4tKz5Qo6v2eYzo7kUS51QINcR5jNpbZS8=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.etcd.io/bbolt v1.3.9 h1:8x7aARPEXiXbHmtUwAIv7eV2fQFHrLLavdiJ3uzJXoI=
go.etcd.io/bbolt v1.3.9/go.mod h1:zaO32+Ti0PK1ivdPtgMESzuzL2VPoIG1PCQNvOdo/dE=
go.etcd.io/etcd/api/v3 v3.5.16 h1:WvmyJVbjWqK4R1E+B12RRHz3bRGy9XVfh++MgbN+6n0=
go.etcd.io/etcd/api/v3 v3.5.16/go.mod h1:1P4SlIP/VwkDmGo3OlOD7faPeP8KDIFhqvciH5EfN28=
go.etcd.io/etcd/client/pkg/v3 v3.5.16 h1:ZgY48uH6UvB+/7R9Yf4x574uCO3jIx0TRDyetSfId3Q=
go.etcd.io/etcd/client/pkg/v3 v3.5.16/go.mod h1:V8acl8pcEK0Y2g19YlOV9m9ssUe6MgiDSobSoaBAM0E=
go.etcd.io/etcd/client/v2 v2.305.13 h1:RWfV1SX5jTU0lbCvpVQe3iPQeAHETWdOTb6pxhd77C8=
go.etcd.io/etcd/client/v2 v2.305.13/go.mod h1:iQnL7fepbiomdXMb3om1rHq96htNNGv2sJkEcZGDRRg=
go.etcd.io/etcd/client/v3 v3.5.16 h1:sSmVYOAHeC9doqi0gv7v86oY/BTld0SEFGaxsU9eRhE=
go.etcd.io/etcd/client/v3 v3.5.16/go.mod h1:X+rExSGkyqxvu276cr2OwPLBaeqFu1cIl4vmRjAD/50=
go.etcd.io/etcd/pkg/v3 v3.5.13 h1:st9bDWNsKkBNpP4PR1MvM/9NqUPfvYZx/YXegsYEH8M=
go.etcd.io/etcd/pkg/v3 v3.5.13/go.mod h1:N+4PLrp7agI/Viy+dUYpX7iRtSPvKq+w8Y14d1vX+m0=
go.etcd.io/etcd/raft/v3 v3.5.13 h1:7r/NKAOups1YnKcfro2RvGGo2PTuizF/xh26Z2CTAzA=
go.etcd.io/etcd/raft/v3 v3.5.13/go.mod h1:uUFibGLn2Ksm2URMxN1fICGhk8Wu96EfDQyuLhAcAmw=
go.etcd.io/etcd/server/v3 v3.5.13 h1:V6KG+yMfMSqWt+lGnhFpP5z5dRUj1BDRJ5k1fQ9DFok=
go.etcd.io/etcd/server/v3 v3.5.13/go.mod h1:K/8nbsGupHqmr5MkgaZpLlH1QdX1pcNQLAkODy44XcQ=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.53.0 h1:9G6E0TXzGFVfTnawRzrPl83iHOAV7L8NJiR8RSGYV1g=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.53.0/go.mod h1:azvtTADFQJA8mX80jIH/akaE7h+dbm/sVuaHqN13w74=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.53.0 h1:4K4tsIXefpVJtvA/8srF4V4y0akAoPHkIslgAkjixJA=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.53.0/go.mod h1:jjdQuTGVsXV4vSs+CJ2qYDeDPf9yIJV23qlIzBm73Vg=
go.opentelemetry.io/otel v1.28.0 h1:/SqNcYk+idO0CxKEUOtKQClMK/MimZihKYMruSMViUo=
go.opentelemetry.io/otel v1.28.0/go.mod h1:q68ijF8Fc8CnMHKyzqL6akLO46ePnjkgfIMIjUIX9z4=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0 h1:3Q/xZUyC1BBkualc9ROb4G8qkH90LXEIICcs5zv1OYY=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0/go.mod h1:s75jGIWA9OfCMzF0xr+ZgfrB5FEbbV7UuYo32ahUiFI=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.27.0 h1:qFffATk0X+HD+f1Z8lswGiOQYKHRlzfmdJm0wEaVrFA=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.27.0/go.mod h1:MOiCmryaYtc+V0Ei+Tx9o5S1ZjA7kzLucuVuyzBZloQ=
go.opentelemetry.io/otel/metric v1.28.0 h1:f0HGvSl1KRAU1DLgLGFjrwVyismPlnuU6JD6bOeuA5Q=
go.opentelemetry.io/otel/metric v1.28.0/go.mod h1:Fb1eVBFZmLVTMb6PPohq3TO9IIhUisDsbJoL/+uQW4s=
go.opentelemetry.io/otel/sdk v1.28.0 h1:b9d7hIry8yZsgtbmM0DKyPWMMUMlK9NEKuIG4aBqWyE=
go.opentelemetry.io/otel/sdk v1.28.0/go.mod h1:oYj7ClPUA7Iw3m+r7GeEjz0qckQRJK2B8zjcZEfu7Pg=
go.opentelemetry.io/otel/trace v1.28.0 h1:GhQ9cUuQGmNDd5BTCP2dAvv75RdMxEfTmYejp+lkx9g=
go.opentelemetry.io/otel/trace v1.28.0/go.mod h1:jPyXzNPg6da9+38HEwElrQiHlVMTnVfM3/yv2OlIHaI=
go.opentelemetry.io/proto/otlp v1.3.1 h1:TrMUixzpM0yuc/znrFTP9MMRh8trP93mkCiDVeXrui0=
go.opentelemetry.io/proto/otlp v1.3.1/go.mod h1:0X1WI4de4ZsLrrJNLAQbFeLCm3T7yBkR0XqQ7niQU+8=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.28.0 h1:GBDwsMXVQi34v5CCYUm2jkJvu4cbtru2U4TN2PSyQnw=
golang.org/x/crypto v0.28.0/go.mod h1:rmgy+3RHxRZMyY0jjAJShp2zgEdOqj2AO7U0pYmeQ7U=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 h1:2dVuKD2vS7b0QIHQbpyTISPd0LeHDbnYEryqj5Q1ug8=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56/go.mod h1:M4RDyNAINzryxdtnbRXRL/OHtkFuWGRjvuhBJpk2IlY=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.30.0 h1:AcW1SDZMkb8IpzCdQUaIq2sP4sZ4zw+55h6ynffypl4=
golang.org/x/net v0.30.0/go.mod h1:2wGyMJ5iFasEhkwi13ChkO/t1ECNC4X4eBKkVFyYFlU=
golang.org/x/oauth2 v0.23.0 h1:PbgcYx2W7i4LvjJWEbf0ngHV6qJYr86PkAV3bXdLEbs=
golang.org/x/oauth2 v0.23.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.8.0 h1:3NFvSEYkUoMifnESzZl15y791HH1qU2xm6eCJU5ZPXQ=
golang.org/x/sync v0.8.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.26.0 h1:KHjCJyddX0LoSTb3J+vWpupP9p0oznkqVk/IfjymZbo=
golang.org/x/sys v0.26.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.25.0 h1:WtHI/ltw4NvSUig5KARz9h521QvRC8RmF/cuYqifU24=
golang.org/x/term v0.25.0/go.mod h1:RPyXicDX+6vLxogjjRxjgD2TKtmAO6NZBsBRfrOLu7M=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.19.0 h1:kTxAhCbGbxhK0IwgSKiMO5awPoDQ0RpfiVYBfK860YM=
golang.org/x/text v0.19.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY=
golang.org/x/time v0.7.0 h1:ntUhktv3OPE6TgYxXWv9vKvUSJyIFJlyohwbkEwPrKQ=
golang.org/x/time v0.7.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.26.0 h1:v/60pFQmzmT9ExmjDv2gGIfi3OqfKoEP6I5+umXlbnQ=
golang.org/x/tools v0.26.0/go.mod h1:TPVVj70c7JJ3WCazhD8OdXcZg/og+b9+tH/KxylGwH0=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gomodules.xyz/jsonpatch/v2 v2.4.0 h1:Ci3iUJyx9UeRx7CeFN8ARgGbkESwJK+KB9lLcWxY/Zw=
gomodules.xyz/jsonpatch/v2 v2.4.0/go.mod h1:AH3dM2RI6uoBZxn3LVrfvJ3E0/9dG4cSrbuBJT4moAY=
google.golang.org/genproto v0.0.0-20230822172742-b8732ec3820d h1:VBu5YqKPv6XiJ199exd8Br+Aetz+o08F+PLMnwJQHAY=
google.golang.org/genproto v0.0.0-20230822172742-b8732ec3820d/go.mod h1:yZTlhN0tQnXo3h00fuXNCxJdLdIdnVFVBaRJ5LWBbw4=
google.golang.org/genproto/googleapis/api v0.0.0-20240528184218-531527333157 h1:7whR9kGa5LUwFtpLm2ArCEejtnxlGeLbAyjFY8sGNFw=
google.golang.org/genproto/googleapis/api v0.0.0-20240528184218-531527333157/go.mod h1:99sLkeliLXfdj2J75X3Ho+rrVCaJze0uwN7zDDkjPVU=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240701130421-f6361c86f094 h1:BwIjyKYGsK9dMCBOorzRri8MQwmi7mT9rGHsCEinZkA=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240701130421-f6361c86f094/go.mod h1:Ue6ibwXGpU+dqIcODieyLOcgj7z8+IcskoNIgZxtrFY=
google.golang.org/grpc v1.65.0 h1:bs/cUb4lp1G5iImFFd3u5ixQzweKizoZJAwBNLR42lc=
google.golang.org/grpc v1.65.0/go.mod h1:WgYC2ypjlB0EiQi6wdKixMqukr6lBc0Vo+oOgjrM5ZQ=
google.golang.org/protobuf v1.34.2 h1:6xV6lTsCfpGD21XK49h7MhtcApnLqkfYgPcdHftf6hg=
google.golang.org/protobuf v1.34.2/go.mod h1:qYOHts0dSfpeUzUFpOMr/WGzszTmLH+DiWniOlNbLDw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/evanphx/json-patch.v4 v4.12.0 h1:n6jtcsulIzXPJaxegRbvFNNrZDjbij7ny3gmSPG+6V4=
gopkg.in/evanphx/json-patch.v4 v4.12.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/natefinch/lumberjack.v2 v2.2.1 h1:bBRl1b0OH9s/DuPhuXpNl+VtCaJXFZ5/uEFST95x9zc=
gopkg.in/natefinch/lumberjack.v2 v2.2.1/go.mod h1:YD8tP3GAjkrDg1eZH7EGmyESg/lsYskCTPBJVb9jqSc=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
k8s.io/api v0.31.2 h1:3wLBbL5Uom/8Zy98GRPXpJ254nEFpl+hwndmk9RwmL0=
k8s.io/api v0.31.2/go.mod h1:bWmGvrGPssSK1ljmLzd3pwCQ9MgoTsRCuK35u6SygUk=
k8s.io/apiextensions-apiserver v0.31.2 h1:W8EwUb8+WXBLu56ser5IudT2cOho0gAKeTOnywBLxd0=
k8s.io/apiextensions-apiserver v0.31.2/go.mod h1:i+Geh+nGCJEGiCGR3MlBDkS7koHIIKWVfWeRFiOsUcM=
k8s.io/apimachinery v0.31.2 h1:i4vUt2hPK56W6mlT7Ry+AO8eEsyxMD1U44NR22CLTYw=
k8s.io/apimachinery v0.31.2/go.mod h1:rsPdaZJfTfLsNJSQzNHQvYoTmxhoOEofxtOsF3rtsMo=
k8s.io/apiserver v0.31.2 h1:VUzOEUGRCDi6kX1OyQ801m4A7AUPglpsmGvdsekmcI4=
k8s.io/apiserver v0.31.2/go.mod h1:o3nKZR7lPlJqkU5I3Ove+Zx3JuoFjQobGX1Gctw6XuE=
k8s.io/client-go v0.31.2 h1:Y2F4dxU5d3AQj+ybwSMqQnpZH9F30//1ObxOKlTI9yc=
k8s.io/client-go v0.31.2/go.mod h1:NPa74jSVR/+eez2dFsEIHNa+3o09vtNaWwWwb1qSxSs=
k8s.io/component-base v0.31.2 h1:Z1J1LIaC0AV+nzcPRFqfK09af6bZ4D1nAOpWsy9owlA=
k8s.io/component-base v0.31.2/go.mod h1:9PeyyFN/drHjtJZMCTkSpQJS3U9OXORnHQqMLDz0sUQ=
k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk=
k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
k8s.io/kms v0.31.2 h1:pyx7l2qVOkClzFMIWMVF/FxsSkgd+OIGH7DecpbscJI=
k8s.io/kms v0.31.2/go.mod h1:OZKwl1fan3n3N5FFxnW5C4V3ygrah/3YXeJWS3O6+94=
k8s.io/kube-openapi v0.0.0-20240827152857-f7e401e7b4c2 h1:GKE9U8BH16uynoxQii0auTjmmmuZ3O0LFMN6S0lPPhI=
k8s.io/kube-openapi v0.0.0-20240827152857-f7e401e7b4c2/go.mod h1:coRQXBK9NxO98XUv3ZD6AK3xzHCxV6+b7lrquKwaKzA=
k8s.io/utils v0.0.0-20240711033017-18e509b52bc8 h1:pUdcCO1Lk/tbT5ztQWOBi5HBgbBP1J8+AsQnQCKsi8A=
k8s.io/utils v0.0.0-20240711033017-18e509b52bc8/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.31.0 h1:CPT0ExVicCzcpeN4baWEV2ko2Z/AsiZgEdwgcfwLgMo=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.31.0/go.mod h1:Ve9uj1L+deCXFrPOk1LpFXqTg7LCFzFso6PA48q/XZw=
sigs.k8s.io/controller-runtime v0.19.0 h1:nWVM7aq+Il2ABxwiCizrVDSlmDcshi9llbaFbC0ji/Q=
sigs.k8s.io/controller-runtime v0.19.0/go.mod h1:iRmWllt8IlaLjvTTDLhRBXIEtkCK6hwVBJJsYS9Ajf4=
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 h1:/Rv+M11QRah1itp8VhT6HoVx1Ray9eB4DBr+K+/sCJ8=
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3/go.mod h1:18nIHnGi6636UCz6m8i4DhaJ65T6EruyzmoQqI2BVDo=
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 h1:150L+0vs/8DA78h1u02ooW1/fFq/Lwr+sGiqlzvrtq4=
sigs.k8s.io/structured-merge-diff/v4 v4.4.1/go.mod h1:N8hJocpFajUSSeSJ9bOZ77VzejKZaXsTtZo4/u7Io08=
sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E=
sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=

hack/boilerplate.go.txt Normal file

@@ -0,0 +1,16 @@
/*
Copyright 2025 The Cozystack Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/


@@ -21,7 +21,7 @@ fix_d8() {
}
swap_pvc_overview() {
jq '(.panels[] | select(.title=="PVC Detailed") | .panels[] | select(.title=="Overview")) as $a | del(.panels[] | select(.title=="PVC Detailed").panels[] | select(.title=="Overview")) | ( (.panels[] | select(.title=="PVC Detailed"))) as $b | del( .panels[] | select(.title=="PVC Detailed")) | (.panels[.panels|length]=($a|.gridPos.y=$b.gridPos.y)) | (.panels[.panels|length]=($b|.gridPos.y=$a.gridPos.y))'
jq '(.panels[] | select(.title=="PVC Detailed") | .panels[] | select(.title=="Overview")) as $a | del(.panels[] | select(.title=="PVC Detailed").panels[] | select(.title=="Overview")) | ( (.panels[] | select(.title=="PVC Detailed"))) as $b | del( .panels[] | select(.title=="PVC Detailed")) | (.panels[.panels|length]=($a|.gridPos.y=$b.gridPos.y)) | (.panels[.panels|length]=($b|.gridPos.y=$a.gridPos.y))'
}
deprectaed_remove_faq() {
@@ -68,7 +68,7 @@ modules/402-ingress-nginx/monitoring/grafana-dashboards/ingress-nginx/namespace/
modules/402-ingress-nginx/monitoring/grafana-dashboards/ingress-nginx/vhost/vhost_detail.json
modules/402-ingress-nginx/monitoring/grafana-dashboards/ingress-nginx/vhost/vhosts.json
modules/340-monitoring-kubernetes-control-plane/monitoring/grafana-dashboards/kubernetes-cluster/control-plane-status.json
modules/340-monitoring-kubernetes-control-plane/monitoring/grafana-dashboards/kubernetes-cluster/kube-etcd3.json #TODO
modules/340-monitoring-kubernetes-control-plane/monitoring/grafana-dashboards/kubernetes-cluster/kube-etcd.json #TODO
modules/340-monitoring-kubernetes-control-plane/monitoring/grafana-dashboards/kubernetes-cluster/deprecated-resources.json
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//kubernetes-cluster/nodes/ntp.json #TODO
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//kubernetes-cluster/nodes/nodes.json
@@ -78,6 +78,10 @@ modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//main/pod.json
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//main/namespace/namespaces.json
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//main/namespace/namespace.json
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//main/capacity-planning/capacity-planning.json
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//flux/flux-control-plane.json
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//flux/flux-stats.json
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//kafka/strimzi-kafka.json
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//goldpinger/goldpinger.json
EOT
@@ -109,4 +113,3 @@ done <<\EOT
https://raw.githubusercontent.com/dotdc/grafana-dashboards-kubernetes/master/dashboards/k8s-views-namespaces.json
https://raw.githubusercontent.com/dotdc/grafana-dashboards-kubernetes/master/dashboards/k8s-views-pods.json
EOT


@@ -9,15 +9,29 @@ YELLOW='\033[0;33m'
ROOT_NS="tenant-root"
TEST_TENANT="tenant-e2e"
function clean() {
kubectl delete helmrelease.helm.toolkit.fluxcd.io $TEST_TENANT -n $ROOT_NS
if true; then
echo -e "${GREEN}Cleanup successful!${RESET}"
return 0
else
echo -e "${RED}Cleanup failed!${RESET}"
return 1
values_base_path="/hack/testdata/"
checks_base_path="/hack/testdata/"
function delete_hr() {
local release_name="$1"
local namespace="$2"
if [[ -z "$release_name" ]]; then
echo -e "${RED}Error: Release name is required.${RESET}"
exit 1
fi
if [[ -z "$namespace" ]]; then
echo -e "${RED}Error: Namespace name is required.${RESET}"
exit 1
fi
if [[ "$release_name" == "tenant-e2e" ]]; then
echo -e "${YELLOW}Skipping deletion for release tenant-e2e.${RESET}"
return 0
fi
kubectl delete helmrelease $release_name -n $namespace
}
function install_helmrelease() {
@@ -43,6 +57,11 @@ function install_helmrelease() {
exit 1
fi
if [[ -n "$values_file" && -f "$values_file" ]]; then
local values_section
values_section=$(echo " values:" && sed 's/^/ /' "$values_file")
fi
local helmrelease_file=$(mktemp /tmp/HelmRelease.XXXXXX.yaml)
{
echo "apiVersion: helm.toolkit.fluxcd.io/v2"
@@ -64,11 +83,7 @@ function install_helmrelease() {
echo " version: '*'"
echo " interval: 1m0s"
echo " timeout: 5m0s"
if [[ -n "$values_file" && -f "$values_file" ]]; then
echo " values:"
cat "$values_file" | sed 's/^/ /'
fi
[[ -n "$values_section" ]] && echo "$values_section"
} > "$helmrelease_file"
kubectl apply -f "$helmrelease_file"
@@ -79,26 +94,38 @@ function install_helmrelease() {
function install_tenant (){
local release_name="$1"
local namespace="$2"
local values_file="${3:-tenant.yaml}"
local values_file="${values_base_path}tenant/values.yaml"
local repo_name="cozystack-apps"
local repo_ns="cozy-public"
install_helmrelease "$release_name" "$namespace" "tenant" "$repo_name" "$repo_ns" "$values_file"
}
function make_extra_checks(){
local checks_file="$1"
echo "after exec make $checks_file"
if [[ -n "$checks_file" && -f "$checks_file" ]]; then
echo -e "${YELLOW}Start extra checks with file: ${checks_file}${RESET}"
fi
}
function check_helmrelease_status() {
local release_name="$1"
local namespace="$2"
local checks_file="$3"
local timeout=300 # Timeout in seconds
local interval=5 # Interval between checks in seconds
local elapsed=0
while [[ $elapsed -lt $timeout ]]; do
local status_output
status_output=$(kubectl get helmrelease "$release_name" -n "$namespace" -o json | jq -r '.status.conditions[-1].reason')
if [[ "$status_output" == "InstallSucceeded" ]]; then
if [[ "$status_output" == "InstallSucceeded" || "$status_output" == "UpgradeSucceeded" ]]; then
echo -e "${GREEN}Helm release '$release_name' is ready.${RESET}"
make_extra_checks "$checks_file"
delete_hr $release_name $namespace
return 0
elif [[ "$status_output" == "InstallFailed" ]]; then
echo -e "${RED}Helm release '$release_name': InstallFailed${RESET}"
@@ -122,14 +149,17 @@ if [ -z "$chart_name" ]; then
exit 1
fi
echo "Running tests for chart: $chart_name"
install_tenant $TEST_TENANT $ROOT_NS
check_helmrelease_status $TEST_TENANT $ROOT_NS
checks_file="${checks_base_path}${chart_name}/check.sh"
repo_name="cozystack-apps"
repo_ns="cozy-public"
release_name="$chart_name-e2e"
install_helmrelease "$release_name" "$TEST_TENANT" "$chart_name" "$repo_name" "$repo_ns"
values_file="${values_base_path}${chart_name}/values.yaml"
check_helmrelease_status "$release_name" "$TEST_TENANT"
install_tenant $TEST_TENANT $ROOT_NS
check_helmrelease_status $TEST_TENANT $ROOT_NS "${checks_base_path}tenant/check.sh"
echo -e "${YELLOW}Running tests for chart: $chart_name${RESET}"
install_helmrelease $release_name $TEST_TENANT $chart_name $repo_name $repo_ns $values_file
check_helmrelease_status $release_name $TEST_TENANT $checks_file


@@ -113,8 +113,6 @@ machine:
- usermode_helper=disabled
- name: zfs
- name: spl
install:
image: ghcr.io/aenix-io/cozystack/talos:v1.8.1
files:
- content: |
[plugins]
@@ -124,6 +122,12 @@ machine:
op: create
cluster:
apiServer:
extraArgs:
oidc-issuer-url: "https://keycloak.example.org/realms/cozy"
oidc-client-id: "kubernetes"
oidc-username-claim: "preferred_username"
oidc-groups-claim: "groups"
network:
cni:
name: none
@@ -136,6 +140,9 @@ EOT
cat > patch-controlplane.yaml <<\EOT
machine:
nodeLabels:
node.kubernetes.io/exclude-from-external-load-balancers:
$patch: delete
network:
interfaces:
- interface: eth0
@@ -179,10 +186,11 @@ talosctl apply -f controlplane.yaml -n 192.168.123.13 -e 192.168.123.13 -i
timeout 60 sh -c 'until nc -nzv 192.168.123.11 50000 && nc -nzv 192.168.123.12 50000 && nc -nzv 192.168.123.13 50000; do sleep 1; done'
# Bootstrap
talosctl bootstrap -n 192.168.123.11 -e 192.168.123.11
timeout 10 sh -c 'until talosctl bootstrap -n 192.168.123.11 -e 192.168.123.11; do sleep 1; done'
# Wait for etcd
timeout 180 sh -c 'while talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 2>&1 | grep "rpc error"; do sleep 1; done'
timeout 180 sh -c 'until timeout -s 9 2 talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 2>&1; do sleep 1; done'
timeout 60 sh -c 'while talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 2>&1 | grep "rpc error"; do sleep 1; done'
rm -f kubeconfig
talosctl kubeconfig kubeconfig -e 192.168.123.10 -n 192.168.123.10
@@ -190,7 +198,7 @@ export KUBECONFIG=$PWD/kubeconfig
# Wait for kubernetes nodes appear
timeout 60 sh -c 'until [ $(kubectl get node -o name | wc -l) = 3 ]; do sleep 1; done'
kubectl create ns cozy-system
kubectl create ns cozy-system -o yaml | kubectl apply -f -
kubectl create -f - <<\EOT
apiVersion: v1
kind: ConfigMap
@@ -203,6 +211,8 @@ data:
ipv4-pod-gateway: "10.244.0.1"
ipv4-svc-cidr: "10.96.0.0/16"
ipv4-join-cidr: "100.64.0.0/16"
root-host: example.org
api-server-endpoint: https://192.168.123.10:6443
EOT
#
@@ -219,6 +229,7 @@ sleep 5
kubectl get hr -A | awk 'NR>1 {print "kubectl wait --timeout=15m --for=condition=ready -n " $1 " hr/" $2 " &"} END{print "wait"}' | sh -x
# Wait for Cluster-API providers
timeout 30 sh -c 'until kubectl get deploy -n cozy-cluster-api capi-controller-manager capi-kamaji-controller-manager capi-kubeadm-bootstrap-controller-manager capi-operator-cluster-api-operator capk-controller-manager; do sleep 1; done'
kubectl wait deploy --timeout=30s --for=condition=available -n cozy-cluster-api capi-controller-manager capi-kamaji-controller-manager capi-kubeadm-bootstrap-controller-manager capi-operator-cluster-api-operator capk-controller-manager
# Wait for linstor controller
@@ -287,13 +298,16 @@ spec:
avoidBuggyIPs: false
EOT
kubectl patch -n tenant-root hr/tenant-root --type=merge -p '{"spec":{ "values":{
# Wait for cozystack-api
kubectl wait --for=condition=Available apiservices v1alpha1.apps.cozystack.io --timeout=2m
kubectl patch -n tenant-root tenants.apps.cozystack.io root --type=merge -p '{"spec":{
"host": "example.org",
"ingress": true,
"monitoring": true,
"etcd": true,
"isolated": true
}}}'
}}'
# Wait for HelmRelease be created
timeout 60 sh -c 'until kubectl get hr -n tenant-root etcd ingress monitoring tenant-root; do sleep 1; done'
@@ -301,9 +315,9 @@ timeout 60 sh -c 'until kubectl get hr -n tenant-root etcd ingress monitoring te
# Wait for HelmReleases be installed
kubectl wait --timeout=2m --for=condition=ready -n tenant-root hr etcd ingress monitoring tenant-root
kubectl patch -n tenant-root hr/ingress --type=merge -p '{"spec":{ "values":{
kubectl patch -n tenant-root ingresses.apps.cozystack.io ingress --type=merge -p '{"spec":{
"dashboard": true
}}}'
}}'
# Wait for nginx-ingress-controller
timeout 60 sh -c 'until kubectl get deploy -n tenant-root root-ingress-controller; do sleep 1; done'
@@ -313,7 +327,7 @@ kubectl wait --timeout=5m --for=condition=available -n tenant-root deploy root-i
kubectl wait --timeout=5m --for=jsonpath=.status.readyReplicas=3 -n tenant-root sts etcd
# Wait for Victoria metrics
kubectl wait --timeout=5m --for=jsonpath=.status.updateStatus=operational -n tenant-root vmalert/vmalert-longterm vmalert/vmalert-shortterm vmalertmanager/alertmanager
kubectl wait --timeout=5m --for=jsonpath=.status.updateStatus=operational -n tenant-root vmalert/vmalert-shortterm vmalertmanager/alertmanager
kubectl wait --timeout=5m --for=jsonpath=.status.status=operational -n tenant-root vlogs/generic
kubectl wait --timeout=5m --for=jsonpath=.status.clusterStatus=operational -n tenant-root vmcluster/shortterm vmcluster/longterm
@@ -326,3 +340,12 @@ ip=$(kubectl get svc -n tenant-root root-ingress-controller -o jsonpath='{.statu
# Check Grafana
curl -sS -k "https://$ip" -H 'Host: grafana.example.org' | grep Found
# Test OIDC
kubectl patch -n cozy-system cm/cozystack --type=merge -p '{"data":{
"oidc-enabled": "true"
}}'
timeout 60 sh -c 'until kubectl get hr -n cozy-keycloak keycloak keycloak-configure keycloak-operator; do sleep 1; done'
kubectl wait --timeout=10m --for=condition=ready -n cozy-keycloak hr keycloak keycloak-configure keycloak-operator

hack/testdata/http-cache/check.sh vendored Normal file

@@ -0,0 +1 @@
return 0

hack/testdata/http-cache/values.yaml vendored Normal file

@@ -0,0 +1,2 @@
endpoints:
- 8.8.8.8:443

hack/testdata/kubernetes/check.sh vendored Normal file

@@ -0,0 +1 @@
return 0

hack/testdata/kubernetes/values.yaml vendored Normal file

@@ -0,0 +1,62 @@
## @section Common parameters
## @param host The hostname used to access the Kubernetes cluster externally (defaults to using the cluster name as a subdomain for the tenant host).
## @param controlPlane.replicas Number of replicas for Kubernetes contorl-plane components
## @param storageClass StorageClass used to store user data
##
host: ""
controlPlane:
replicas: 2
storageClass: replicated
## @param nodeGroups [object] nodeGroups configuration
##
nodeGroups:
md0:
minReplicas: 0
maxReplicas: 10
instanceType: "u1.medium"
ephemeralStorage: 20Gi
roles:
- ingress-nginx
resources:
cpu: ""
memory: ""
## @section Cluster Addons
##
addons:
## Cert-manager: automatically creates and manages SSL/TLS certificates
##
certManager:
## @param addons.certManager.enabled Enables the cert-manager
## @param addons.certManager.valuesOverride Custom values to override
enabled: true
valuesOverride: {}
## Ingress-NGINX Controller
##
ingressNginx:
## @param addons.ingressNginx.enabled Enable Ingress-NGINX controller (expects nodes with the 'ingress-nginx' role)
## @param addons.ingressNginx.valuesOverride Custom values to override
##
enabled: true
## @param addons.ingressNginx.hosts List of domain names that should be passed through to the cluster by the upper cluster
## e.g.:
## hosts:
## - example.org
## - foo.example.net
##
hosts: []
valuesOverride: {}
## Flux CD
##
fluxcd:
## @param addons.fluxcd.enabled Enables Flux CD
## @param addons.fluxcd.valuesOverride Custom values to override
##
enabled: true
valuesOverride: {}

hack/testdata/nats/check.sh vendored Normal file

@@ -0,0 +1 @@
return 0

hack/testdata/nats/values.yaml vendored Normal file

@@ -0,0 +1,10 @@
## @section Common parameters
## @param external Enable external access from outside the cluster
## @param replicas Number of NATS replicas
## @param storageClass StorageClass used to store the data
##
external: false
replicas: 2
storageClass: ""

hack/testdata/tenant/check.sh vendored Normal file

@@ -0,0 +1 @@
return 0

hack/update-codegen.sh Executable file

@@ -0,0 +1,52 @@
#!/usr/bin/env bash
# Copyright 2024 The Cozystack Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -o errexit
set -o nounset
set -o pipefail
SCRIPT_ROOT=$(dirname "${BASH_SOURCE[0]}")/..
CODEGEN_PKG=${CODEGEN_PKG:-$(cd "${SCRIPT_ROOT}"; ls -d -1 ./vendor/k8s.io/code-generator 2>/dev/null || echo ../code-generator)}
API_KNOWN_VIOLATIONS_DIR="${API_KNOWN_VIOLATIONS_DIR:-"${SCRIPT_ROOT}/api/api-rules"}"
UPDATE_API_KNOWN_VIOLATIONS="${UPDATE_API_KNOWN_VIOLATIONS:-true}"
CONTROLLER_GEN="go run sigs.k8s.io/controller-tools/cmd/controller-gen@v0.16.4"
source "${CODEGEN_PKG}/kube_codegen.sh"
THIS_PKG="k8s.io/sample-apiserver"
kube::codegen::gen_helpers \
--boilerplate "${SCRIPT_ROOT}/hack/boilerplate.go.txt" \
"${SCRIPT_ROOT}/pkg/apis"
if [[ -n "${API_KNOWN_VIOLATIONS_DIR:-}" ]]; then
report_filename="${API_KNOWN_VIOLATIONS_DIR}/cozystack_api_violation_exceptions.list"
if [[ "${UPDATE_API_KNOWN_VIOLATIONS:-}" == "true" ]]; then
update_report="--update-report"
fi
fi
kube::codegen::gen_openapi \
--extra-pkgs "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1" \
--output-dir "${SCRIPT_ROOT}/pkg/generated/openapi" \
--output-pkg "${THIS_PKG}/pkg/generated/openapi" \
--report-filename "${report_filename:-"/dev/null"}" \
${update_report:+"${update_report}"} \
--boilerplate "${SCRIPT_ROOT}/hack/boilerplate.go.txt" \
"${SCRIPT_ROOT}/pkg/apis"
$CONTROLLER_GEN object:headerFile="hack/boilerplate.go.txt" paths="./api/..."
$CONTROLLER_GEN rbac:roleName=manager-role crd paths="./api/..." output:crd:artifacts:config=packages/system/cozystack-controller/templates/crds


@@ -0,0 +1,96 @@
/*
Copyright 2025.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controller
import (
"context"
"fmt"
"path/filepath"
"runtime"
"testing"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
"k8s.io/client-go/kubernetes/scheme"
"k8s.io/client-go/rest"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/envtest"
logf "sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
cozystackiov1alpha1 "github.com/aenix-io/cozystack/api/v1alpha1"
// +kubebuilder:scaffold:imports
)
// These tests use Ginkgo (BDD-style Go testing framework). Refer to
// http://onsi.github.io/ginkgo/ to learn more about Ginkgo.
var cfg *rest.Config
var k8sClient client.Client
var testEnv *envtest.Environment
var ctx context.Context
var cancel context.CancelFunc
func TestControllers(t *testing.T) {
RegisterFailHandler(Fail)
RunSpecs(t, "Controller Suite")
}
var _ = BeforeSuite(func() {
logf.SetLogger(zap.New(zap.WriteTo(GinkgoWriter), zap.UseDevMode(true)))
ctx, cancel = context.WithCancel(context.TODO())
By("bootstrapping test environment")
testEnv = &envtest.Environment{
CRDDirectoryPaths: []string{filepath.Join("..", "..", "config", "crd", "bases")},
ErrorIfCRDPathMissing: true,
// The BinaryAssetsDirectory is only required if you want to run the tests directly
// without calling the makefile target test. If not set, it falls back to the
// default path defined in controller-runtime, which is /usr/local/kubebuilder/.
// Note that you must have the required binaries set up under the bin directory to run
// the tests directly. When we run make test, they are set up and used automatically.
BinaryAssetsDirectory: filepath.Join("..", "..", "bin", "k8s",
fmt.Sprintf("1.31.0-%s-%s", runtime.GOOS, runtime.GOARCH)),
}
var err error
// cfg is defined in this file globally.
cfg, err = testEnv.Start()
Expect(err).NotTo(HaveOccurred())
Expect(cfg).NotTo(BeNil())
err = cozystackiov1alpha1.AddToScheme(scheme.Scheme)
Expect(err).NotTo(HaveOccurred())
// +kubebuilder:scaffold:scheme
k8sClient, err = client.New(cfg, client.Options{Scheme: scheme.Scheme})
Expect(err).NotTo(HaveOccurred())
Expect(k8sClient).NotTo(BeNil())
})
var _ = AfterSuite(func() {
By("tearing down the test environment")
cancel()
err := testEnv.Stop()
Expect(err).NotTo(HaveOccurred())
})
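
A spec exercising the API could sit alongside the suite above. The sketch below is illustrative only: it assumes the spec struct is named WorkloadMonitorSpec and exposes the Kind, Type, and Selector fields used by the reconciler, and it additionally needs metav1 ("k8s.io/apimachinery/pkg/apis/meta/v1") in the import block.

// Illustrative spec; WorkloadMonitorSpec is the conventional kubebuilder name and is assumed here.
var _ = Describe("WorkloadMonitor API", func() {
	It("accepts a minimal object", func() {
		monitor := &cozystackiov1alpha1.WorkloadMonitor{
			ObjectMeta: metav1.ObjectMeta{
				Name:      "demo",
				Namespace: "default",
			},
			Spec: cozystackiov1alpha1.WorkloadMonitorSpec{
				Kind:     "demo",
				Type:     "demo",
				Selector: map[string]string{"app": "demo"},
			},
		}
		Expect(k8sClient.Create(ctx, monitor)).To(Succeed())
	})
})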


@@ -0,0 +1,273 @@
package controller
import (
"context"
"encoding/json"
"sort"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
"k8s.io/utils/pointer"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
cozyv1alpha1 "github.com/aenix-io/cozystack/api/v1alpha1"
)
// WorkloadMonitorReconciler reconciles a WorkloadMonitor object
type WorkloadMonitorReconciler struct {
client.Client
Scheme *runtime.Scheme
}
// +kubebuilder:rbac:groups=cozystack.io,resources=workloadmonitors,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=cozystack.io,resources=workloadmonitors/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=cozystack.io,resources=workloads,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=cozystack.io,resources=workloads/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list;watch
// isPodReady checks if the Pod is in the Ready condition.
func (r *WorkloadMonitorReconciler) isPodReady(pod *corev1.Pod) bool {
for _, c := range pod.Status.Conditions {
if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
return true
}
}
return false
}
// updateOwnerReferences adds the given monitor as a new owner reference to the object if not already present.
// It then sorts the owner references to enforce a consistent order.
func updateOwnerReferences(obj metav1.Object, monitor client.Object) {
// Retrieve current owner references
owners := obj.GetOwnerReferences()
// Check if current monitor is already in owner references
var alreadyOwned bool
for _, ownerRef := range owners {
if ownerRef.UID == monitor.GetUID() {
alreadyOwned = true
break
}
}
runtimeObj, ok := monitor.(runtime.Object)
if !ok {
return
}
gvk := runtimeObj.GetObjectKind().GroupVersionKind()
// If not already present, add new owner reference without controller flag
if !alreadyOwned {
newOwnerRef := metav1.OwnerReference{
APIVersion: gvk.GroupVersion().String(),
Kind: gvk.Kind,
Name: monitor.GetName(),
UID: monitor.GetUID(),
// Set Controller to false to avoid conflict as multiple controllers are not allowed
Controller: pointer.BoolPtr(false),
BlockOwnerDeletion: pointer.BoolPtr(true),
}
owners = append(owners, newOwnerRef)
}
// Sort owner references to enforce a consistent order by UID
sort.SliceStable(owners, func(i, j int) bool {
return owners[i].UID < owners[j].UID
})
// Update the owner references of the object
obj.SetOwnerReferences(owners)
}
// reconcilePodForMonitor creates or updates a Workload object for the given Pod and WorkloadMonitor.
func (r *WorkloadMonitorReconciler) reconcilePodForMonitor(
ctx context.Context,
monitor *cozyv1alpha1.WorkloadMonitor,
pod corev1.Pod,
) error {
logger := log.FromContext(ctx)
// Combine both init containers and normal containers to sum resources properly
combinedContainers := append(pod.Spec.InitContainers, pod.Spec.Containers...)
// totalResources will store the sum of all container resource limits
totalResources := make(map[string]resource.Quantity)
// Iterate over all containers to aggregate their Limits
for _, container := range combinedContainers {
for name, qty := range container.Resources.Limits {
if existing, exists := totalResources[name.String()]; exists {
existing.Add(qty)
totalResources[name.String()] = existing
} else {
totalResources[name.String()] = qty.DeepCopy()
}
}
}
// If annotation "workload.cozystack.io/resources" is present, parse and merge
if resourcesStr, ok := pod.Annotations["workload.cozystack.io/resources"]; ok {
annRes := map[string]string{}
if err := json.Unmarshal([]byte(resourcesStr), &annRes); err != nil {
logger.Error(err, "Failed to parse resources annotation", "pod", pod.Name)
} else {
for k, v := range annRes {
parsed, err := resource.ParseQuantity(v)
if err != nil {
logger.Error(err, "Failed to parse resource quantity from annotation", "key", k, "value", v)
continue
}
totalResources[k] = parsed
}
}
}
workload := &cozyv1alpha1.Workload{
ObjectMeta: metav1.ObjectMeta{
Name: pod.Name,
Namespace: pod.Namespace,
},
}
_, err := ctrl.CreateOrUpdate(ctx, r.Client, workload, func() error {
// Update owner references with the new monitor
updateOwnerReferences(workload.GetObjectMeta(), monitor)
// Copy labels from the Pod if needed
workload.Labels = pod.Labels
// Fill Workload status fields:
workload.Status.Kind = monitor.Spec.Kind
workload.Status.Type = monitor.Spec.Type
workload.Status.Resources = totalResources
workload.Status.Operational = r.isPodReady(&pod)
return nil
})
if err != nil {
logger.Error(err, "Failed to CreateOrUpdate Workload", "workload", workload.Name)
return err
}
return nil
}
// Reconcile is the main reconcile loop.
// 1. It reconciles WorkloadMonitor objects themselves (create/update/delete).
// 2. It also reconciles Pod events mapped to WorkloadMonitor via label selector.
func (r *WorkloadMonitorReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
logger := log.FromContext(ctx)
// Fetch the WorkloadMonitor object if it exists
monitor := &cozyv1alpha1.WorkloadMonitor{}
err := r.Get(ctx, req.NamespacedName, monitor)
if err != nil {
// If the resource is not found, it may be a Pod event (mapFunc).
if apierrors.IsNotFound(err) {
return ctrl.Result{}, nil
}
logger.Error(err, "Unable to fetch WorkloadMonitor")
return ctrl.Result{}, err
}
// List Pods that match the WorkloadMonitor's selector
podList := &corev1.PodList{}
if err := r.List(
ctx,
podList,
client.InNamespace(monitor.Namespace),
client.MatchingLabels(monitor.Spec.Selector),
); err != nil {
logger.Error(err, "Unable to list Pods for WorkloadMonitor", "monitor", monitor.Name)
return ctrl.Result{}, err
}
var observedReplicas, availableReplicas int32
// For each matching Pod, reconcile the corresponding Workload
for _, pod := range podList.Items {
observedReplicas++
if err := r.reconcilePodForMonitor(ctx, monitor, pod); err != nil {
logger.Error(err, "Failed to reconcile Workload for Pod", "pod", pod.Name)
continue
}
if r.isPodReady(&pod) {
availableReplicas++
}
}
// Update WorkloadMonitor status based on observed pods
monitor.Status.ObservedReplicas = observedReplicas
monitor.Status.AvailableReplicas = availableReplicas
// Default to operational = true, but check MinReplicas if set
monitor.Status.Operational = pointer.Bool(true)
if monitor.Spec.MinReplicas != nil && availableReplicas < *monitor.Spec.MinReplicas {
monitor.Status.Operational = pointer.Bool(false)
}
// Update the WorkloadMonitor status in the cluster
if err := r.Status().Update(ctx, monitor); err != nil {
logger.Error(err, "Unable to update WorkloadMonitor status", "monitor", monitor.Name)
return ctrl.Result{}, err
}
// Return without requeue if we want purely event-driven reconciliations
return ctrl.Result{}, nil
}
// SetupWithManager registers our controller with the Manager and sets up watches.
func (r *WorkloadMonitorReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
// Watch WorkloadMonitor objects
For(&cozyv1alpha1.WorkloadMonitor{}).
// Also watch Pod objects and map them back to WorkloadMonitor if labels match
Watches(
&corev1.Pod{},
handler.EnqueueRequestsFromMapFunc(func(ctx context.Context, obj client.Object) []reconcile.Request {
pod, ok := obj.(*corev1.Pod)
if !ok {
return nil
}
var monitorList cozyv1alpha1.WorkloadMonitorList
// List all WorkloadMonitors in the same namespace
if err := r.List(ctx, &monitorList, client.InNamespace(pod.Namespace)); err != nil {
return nil
}
// Match each monitor's selector with the Pod's labels
var requests []reconcile.Request
for _, m := range monitorList.Items {
matches := true
for k, v := range m.Spec.Selector {
if podVal, exists := pod.Labels[k]; !exists || podVal != v {
matches = false
break
}
}
if matches {
requests = append(requests, reconcile.Request{
NamespacedName: types.NamespacedName{
Namespace: m.Namespace,
Name: m.Name,
},
})
}
}
return requests
}),
).
// Watch for changes to Workload objects we create (owned by WorkloadMonitor)
Owns(&cozyv1alpha1.Workload{}).
Complete(r)
}
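
For context, a reconciler like this is registered with a controller-runtime manager in the operator's entrypoint. The following is a rough wiring sketch and not part of this changeset; the controller import path and the minimal main are assumptions, while AddToScheme and the Client/Scheme fields come from the files shown here.

package main

import (
	"os"

	"k8s.io/apimachinery/pkg/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	ctrl "sigs.k8s.io/controller-runtime"

	cozyv1alpha1 "github.com/aenix-io/cozystack/api/v1alpha1"
	"github.com/aenix-io/cozystack/internal/controller" // assumed package path
)

func main() {
	// Register both the built-in and the cozystack API types.
	scheme := runtime.NewScheme()
	_ = clientgoscheme.AddToScheme(scheme)
	_ = cozyv1alpha1.AddToScheme(scheme)

	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{Scheme: scheme})
	if err != nil {
		os.Exit(1)
	}

	// Wire the WorkloadMonitor reconciler into the manager.
	if err := (&controller.WorkloadMonitorReconciler{
		Client: mgr.GetClient(),
		Scheme: mgr.GetScheme(),
	}).SetupWithManager(mgr); err != nil {
		os.Exit(1)
	}

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}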


@@ -0,0 +1,292 @@
package telemetry
import (
"bytes"
"context"
"fmt"
"net/http"
"strings"
"time"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/discovery"
"k8s.io/client-go/rest"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/log"
cozyv1alpha1 "github.com/aenix-io/cozystack/api/v1alpha1"
)
// Collector handles telemetry data collection and sending
type Collector struct {
client client.Client
discoveryClient discovery.DiscoveryInterface
config *Config
ticker *time.Ticker
stopCh chan struct{}
}
// NewCollector creates a new telemetry collector
func NewCollector(client client.Client, config *Config, kubeConfig *rest.Config) (*Collector, error) {
discoveryClient, err := discovery.NewDiscoveryClientForConfig(kubeConfig)
if err != nil {
return nil, fmt.Errorf("failed to create discovery client: %w", err)
}
return &Collector{
client: client,
discoveryClient: discoveryClient,
config: config,
}, nil
}
// Start implements manager.Runnable
func (c *Collector) Start(ctx context.Context) error {
if c.config.Disabled {
return nil
}
c.ticker = time.NewTicker(c.config.Interval)
c.stopCh = make(chan struct{})
// Initial collection
c.collect(ctx)
for {
select {
case <-ctx.Done():
c.ticker.Stop()
close(c.stopCh)
return nil
case <-c.ticker.C:
c.collect(ctx)
}
}
}
// NeedLeaderElection implements manager.LeaderElectionRunnable
func (c *Collector) NeedLeaderElection() bool {
// Only run telemetry collector on the leader
return true
}
// Stop halts telemetry collection
func (c *Collector) Stop() {
close(c.stopCh)
}
// getSizeGroup returns the exponential size group for PVC
func getSizeGroup(size resource.Quantity) string {
gb := size.Value() / (1024 * 1024 * 1024)
switch {
case gb <= 1:
return "1Gi"
case gb <= 5:
return "5Gi"
case gb <= 10:
return "10Gi"
case gb <= 25:
return "25Gi"
case gb <= 50:
return "50Gi"
case gb <= 100:
return "100Gi"
case gb <= 250:
return "250Gi"
case gb <= 500:
return "500Gi"
case gb <= 1024:
return "1Ti"
case gb <= 2048:
return "2Ti"
case gb <= 5120:
return "5Ti"
default:
return "10Ti"
}
}
// collect gathers and sends telemetry data
func (c *Collector) collect(ctx context.Context) {
logger := log.FromContext(ctx).V(1)
// Get cluster ID from kube-system namespace
var kubeSystemNS corev1.Namespace
if err := c.client.Get(ctx, types.NamespacedName{Name: "kube-system"}, &kubeSystemNS); err != nil {
logger.Info(fmt.Sprintf("Failed to get kube-system namespace: %v", err))
return
}
clusterID := string(kubeSystemNS.UID)
var cozystackCM corev1.ConfigMap
if err := c.client.Get(ctx, types.NamespacedName{Namespace: "cozy-system", Name: "cozystack"}, &cozystackCM); err != nil {
logger.Info(fmt.Sprintf("Failed to get cozystack configmap in cozy-system namespace: %v", err))
return
}
oidcEnabled := cozystackCM.Data["oidc-enabled"]
bundle := cozystackCM.Data["bundle-name"]
bundleEnable := cozystackCM.Data["bundle-enable"]
bundleDisable := cozystackCM.Data["bundle-disable"]
// Get Kubernetes version from nodes
var nodeList corev1.NodeList
if err := c.client.List(ctx, &nodeList); err != nil {
logger.Info(fmt.Sprintf("Failed to list nodes: %v", err))
return
}
// Create metrics buffer
var metrics strings.Builder
// Add Cozystack info metric
if len(nodeList.Items) > 0 {
k8sVersion, _ := c.discoveryClient.ServerVersion()
metrics.WriteString(fmt.Sprintf(
"cozy_cluster_info{cozystack_version=\"%s\",kubernetes_version=\"%s\",oidc_enabled=\"%s\",bundle_name=\"%s\",bunde_enable=\"%s\",bunde_disable=\"%s\"} 1\n",
c.config.CozystackVersion,
k8sVersion,
oidcEnabled,
bundle,
bundleEnable,
bundleDisable,
))
}
// Collect node metrics
nodeOSCount := make(map[string]int)
for _, node := range nodeList.Items {
key := fmt.Sprintf("%s (%s)", node.Status.NodeInfo.OperatingSystem, node.Status.NodeInfo.OSImage)
nodeOSCount[key]++
}
for osKey, count := range nodeOSCount {
metrics.WriteString(fmt.Sprintf(
"cozy_nodes_count{os=\"%s\",kernel=\"%s\"} %d\n",
osKey,
nodeList.Items[0].Status.NodeInfo.KernelVersion,
count,
))
}
// Collect LoadBalancer services metrics
var serviceList corev1.ServiceList
if err := c.client.List(ctx, &serviceList); err != nil {
logger.Info(fmt.Sprintf("Failed to list Services: %v", err))
} else {
lbCount := 0
for _, svc := range serviceList.Items {
if svc.Spec.Type == corev1.ServiceTypeLoadBalancer {
lbCount++
}
}
metrics.WriteString(fmt.Sprintf("cozy_loadbalancers_count %d\n", lbCount))
}
// Count tenant namespaces
var nsList corev1.NamespaceList
if err := c.client.List(ctx, &nsList); err != nil {
logger.Info(fmt.Sprintf("Failed to list Namespaces: %v", err))
} else {
tenantCount := 0
for _, ns := range nsList.Items {
if strings.HasPrefix(ns.Name, "tenant-") {
tenantCount++
}
}
metrics.WriteString(fmt.Sprintf("cozy_tenants_count %d\n", tenantCount))
}
// Collect PV metrics grouped by driver and size
var pvList corev1.PersistentVolumeList
if err := c.client.List(ctx, &pvList); err != nil {
logger.Info(fmt.Sprintf("Failed to list PVs: %v", err))
} else {
// Map to store counts by size and driver
pvMetrics := make(map[string]map[string]int)
for _, pv := range pvList.Items {
if capacity, ok := pv.Spec.Capacity[corev1.ResourceStorage]; ok {
sizeGroup := getSizeGroup(capacity)
// Get the CSI driver name
driver := "unknown"
if pv.Spec.CSI != nil {
driver = pv.Spec.CSI.Driver
} else if pv.Spec.HostPath != nil {
driver = "hostpath"
} else if pv.Spec.NFS != nil {
driver = "nfs"
}
// Initialize nested map if needed
if _, exists := pvMetrics[sizeGroup]; !exists {
pvMetrics[sizeGroup] = make(map[string]int)
}
// Increment count for this size/driver combination
pvMetrics[sizeGroup][driver]++
}
}
// Write metrics
for size, drivers := range pvMetrics {
for driver, count := range drivers {
metrics.WriteString(fmt.Sprintf(
"cozy_pvs_count{driver=\"%s\",size=\"%s\"} %d\n",
driver,
size,
count,
))
}
}
}
// Collect workload metrics
var monitorList cozyv1alpha1.WorkloadMonitorList
if err := c.client.List(ctx, &monitorList); err != nil {
logger.Info(fmt.Sprintf("Failed to list WorkloadMonitors: %v", err))
return
}
for _, monitor := range monitorList.Items {
metrics.WriteString(fmt.Sprintf(
"cozy_workloads_count{uid=\"%s\",kind=\"%s\",type=\"%s\",version=\"%s\"} %d\n",
monitor.UID,
monitor.Spec.Kind,
monitor.Spec.Type,
monitor.Spec.Version,
monitor.Status.ObservedReplicas,
))
}
// Send metrics
if err := c.sendMetrics(clusterID, metrics.String()); err != nil {
logger.Info(fmt.Sprintf("Failed to send metrics: %v", err))
}
}
// sendMetrics sends collected metrics to the configured endpoint
func (c *Collector) sendMetrics(clusterID, metrics string) error {
req, err := http.NewRequest("POST", c.config.Endpoint, bytes.NewBufferString(metrics))
if err != nil {
return fmt.Errorf("failed to create request: %w", err)
}
req.Header.Set("Content-Type", "text/plain")
req.Header.Set("X-Cluster-ID", clusterID)
resp, err := http.DefaultClient.Do(req)
if err != nil {
return fmt.Errorf("failed to send request: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("unexpected status code: %d", resp.StatusCode)
}
return nil
}
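
The size bucketing above rounds each PV capacity up to a coarse reporting bucket. A small table-driven test, not included in this changeset but valid inside the same telemetry package, illustrates the expected mapping:

package telemetry

import (
	"testing"

	"k8s.io/apimachinery/pkg/api/resource"
)

// Illustrative test for getSizeGroup; each capacity rounds up to its reporting bucket.
func TestGetSizeGroup(t *testing.T) {
	cases := map[string]string{
		"512Mi": "1Gi",
		"30Gi":  "50Gi",
		"200Gi": "250Gi",
		"3Ti":   "5Ti",
		"20Ti":  "10Ti", // anything above 5Ti collapses into the last bucket
	}
	for in, want := range cases {
		if got := getSizeGroup(resource.MustParse(in)); got != want {
			t.Errorf("getSizeGroup(%s) = %s, want %s", in, got, want)
		}
	}
}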


@@ -0,0 +1,27 @@
package telemetry
import (
"time"
)
// Config holds telemetry configuration
type Config struct {
// Disable telemetry collection if set to true
Disabled bool
// Endpoint to send telemetry data to
Endpoint string
// Interval between telemetry data collection
Interval time.Duration
// CozystackVersion represents the current version of Cozystack
CozystackVersion string
}
// DefaultConfig returns default telemetry configuration
func DefaultConfig() *Config {
return &Config{
Disabled: false,
Endpoint: "https://telemetry.cozystack.io",
Interval: 15 * time.Minute,
CozystackVersion: "unknown",
}
}
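
A hedged usage sketch of how this configuration and the collector above could be wired into the operator: build a Config from DefaultConfig, construct the Collector, and add it to the controller-runtime manager, which starts it only on the leader because Collector implements NeedLeaderElection. The import path, package placement, and surrounding helper are assumptions; only NewCollector and DefaultConfig come from this changeset.

package operator // hypothetical placement

import (
	"time"

	ctrl "sigs.k8s.io/controller-runtime"

	"github.com/aenix-io/cozystack/internal/telemetry" // assumed package path
)

// registerTelemetry builds a telemetry collector and hands it to the manager.
func registerTelemetry(mgr ctrl.Manager, version string) error {
	cfg := telemetry.DefaultConfig()
	cfg.CozystackVersion = version  // e.g. injected at build time
	cfg.Interval = 30 * time.Minute // override the 15-minute default if desired

	collector, err := telemetry.NewCollector(mgr.GetClient(), cfg, mgr.GetConfig())
	if err != nil {
		return err
	}
	return mgr.Add(collector)
}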


@@ -68,7 +68,7 @@ spec:
serviceAccountName: cozystack
containers:
- name: cozystack
image: "ghcr.io/aenix-io/cozystack/cozystack:v0.17.1"
image: "ghcr.io/aenix-io/cozystack/cozystack:v0.26.1"
env:
- name: KUBERNETES_SERVICE_HOST
value: localhost
@@ -86,13 +86,12 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: darkhttpd
image: "ghcr.io/aenix-io/cozystack/cozystack:v0.17.1"
- name: assets
image: "ghcr.io/aenix-io/cozystack/cozystack:v0.26.1"
command:
- /usr/bin/darkhttpd
- /cozystack/assets
- --port
- "8123"
- /usr/bin/cozystack-assets-server
- "-dir=/cozystack/assets"
- "-address=:8123"
ports:
- name: http
containerPort: 8123


@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.6.0
version: 0.6.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to


@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/clickhouse-backup:0.6.0@sha256:dda84420cb8648721299221268a00d72a05c7af5b7fb452619bac727068b9e61
ghcr.io/aenix-io/cozystack/clickhouse-backup:0.6.1@sha256:7a99cabdfd541f863aa5d1b2f7b49afd39838fb94c8448986634a1dc9050751c


@@ -8,7 +8,7 @@ rules:
resources:
- services
resourceNames:
- chi-clickhouse-test-clickhouse-0-0
- chendpoint-{{ .Release.Name }}
verbs: ["get", "list", "watch"]
- apiGroups:
- ""


@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/postgres-backup:0.7.1@sha256:d2015c6dba92293bda652d055e97d1be80e8414c2dc78037c12812d1a2e2cba1
ghcr.io/aenix-io/cozystack/postgres-backup:0.8.0@sha256:d1f7692b6761f46f24687d885ec335330280346ae4a9ff28b3179681b36106b7


@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/nginx-cache:0.3.1@sha256:b2b586194d17ca4dc71544971c775ad6878048087290030b8f096d481f57d1f7
ghcr.io/aenix-io/cozystack/nginx-cache:0.3.1@sha256:854b3908114de1876038eb9902577595cce93553ce89bf75ac956d22f1e8b8cc


@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.3.0
version: 0.3.2
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to


@@ -0,0 +1,19 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ .Release.Name }}-dashboard-resources
rules:
- apiGroups:
- ""
resources:
- services
resourceNames:
- {{ .Release.Name }}-kafka-bootstrap
verbs: ["get", "list", "watch"]
- apiGroups:
- ""
resources:
- secrets
resourceNames:
- {{ .Release.Name }}-clients-ca
verbs: ["get", "list", "watch"]


@@ -57,6 +57,12 @@ spec:
class: {{ . }}
{{- end }}
deleteClaim: true
metricsConfig:
type: jmxPrometheusExporter
valueFrom:
configMapKeyRef:
name: {{ .Release.Name }}-metrics
key: kafka-metrics-config.yml
zookeeper:
replicas: {{ .Values.zookeeper.replicas }}
storage:
@@ -68,6 +74,12 @@ spec:
class: {{ . }}
{{- end }}
deleteClaim: false
metricsConfig:
type: jmxPrometheusExporter
valueFrom:
configMapKeyRef:
name: {{ .Release.Name }}-metrics
key: kafka-metrics-config.yml
entityOperator:
topicOperator: {}
userOperator: {}


@@ -0,0 +1,198 @@
kind: ConfigMap
apiVersion: v1
metadata:
name: {{ .Release.Name }}-metrics
data:
kafka-metrics-config.yml: |
# See https://github.com/prometheus/jmx_exporter for more info about JMX Prometheus Exporter metrics
lowercaseOutputName: true
rules:
# Special cases and very specific rules
- pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value
name: kafka_server_$1_$2
type: GAUGE
labels:
clientId: "$3"
topic: "$4"
partition: "$5"
- pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), brokerHost=(.+), brokerPort=(.+)><>Value
name: kafka_server_$1_$2
type: GAUGE
labels:
clientId: "$3"
broker: "$4:$5"
- pattern: kafka.server<type=(.+), cipher=(.+), protocol=(.+), listener=(.+), networkProcessor=(.+)><>connections
name: kafka_server_$1_connections_tls_info
type: GAUGE
labels:
cipher: "$2"
protocol: "$3"
listener: "$4"
networkProcessor: "$5"
- pattern: kafka.server<type=(.+), clientSoftwareName=(.+), clientSoftwareVersion=(.+), listener=(.+), networkProcessor=(.+)><>connections
name: kafka_server_$1_connections_software
type: GAUGE
labels:
clientSoftwareName: "$2"
clientSoftwareVersion: "$3"
listener: "$4"
networkProcessor: "$5"
- pattern: "kafka.server<type=(.+), listener=(.+), networkProcessor=(.+)><>(.+-total):"
name: kafka_server_$1_$4
type: COUNTER
labels:
listener: "$2"
networkProcessor: "$3"
- pattern: "kafka.server<type=(.+), listener=(.+), networkProcessor=(.+)><>(.+):"
name: kafka_server_$1_$4
type: GAUGE
labels:
listener: "$2"
networkProcessor: "$3"
- pattern: kafka.server<type=(.+), listener=(.+), networkProcessor=(.+)><>(.+-total)
name: kafka_server_$1_$4
type: COUNTER
labels:
listener: "$2"
networkProcessor: "$3"
- pattern: kafka.server<type=(.+), listener=(.+), networkProcessor=(.+)><>(.+)
name: kafka_server_$1_$4
type: GAUGE
labels:
listener: "$2"
networkProcessor: "$3"
# Some percent metrics use MeanRate attribute
# Ex) kafka.server<type=(KafkaRequestHandlerPool), name=(RequestHandlerAvgIdlePercent)><>MeanRate
- pattern: kafka.(\w+)<type=(.+), name=(.+)Percent\w*><>MeanRate
name: kafka_$1_$2_$3_percent
type: GAUGE
# Generic gauges for percents
- pattern: kafka.(\w+)<type=(.+), name=(.+)Percent\w*><>Value
name: kafka_$1_$2_$3_percent
type: GAUGE
- pattern: kafka.(\w+)<type=(.+), name=(.+)Percent\w*, (.+)=(.+)><>Value
name: kafka_$1_$2_$3_percent
type: GAUGE
labels:
"$4": "$5"
# Generic per-second counters with 0-2 key/value pairs
- pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*, (.+)=(.+), (.+)=(.+)><>Count
name: kafka_$1_$2_$3_total
type: COUNTER
labels:
"$4": "$5"
"$6": "$7"
- pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*, (.+)=(.+)><>Count
name: kafka_$1_$2_$3_total
type: COUNTER
labels:
"$4": "$5"
- pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*><>Count
name: kafka_$1_$2_$3_total
type: COUNTER
# Generic gauges with 0-2 key/value pairs
- pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+), (.+)=(.+)><>Value
name: kafka_$1_$2_$3
type: GAUGE
labels:
"$4": "$5"
"$6": "$7"
- pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+)><>Value
name: kafka_$1_$2_$3
type: GAUGE
labels:
"$4": "$5"
- pattern: kafka.(\w+)<type=(.+), name=(.+)><>Value
name: kafka_$1_$2_$3
type: GAUGE
# Emulate Prometheus 'Summary' metrics for the exported 'Histogram's.
# Note that these are missing the '_sum' metric!
- pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+), (.+)=(.+)><>Count
name: kafka_$1_$2_$3_count
type: COUNTER
labels:
"$4": "$5"
"$6": "$7"
- pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.*), (.+)=(.+)><>(\d+)thPercentile
name: kafka_$1_$2_$3
type: GAUGE
labels:
"$4": "$5"
"$6": "$7"
quantile: "0.$8"
- pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+)><>Count
name: kafka_$1_$2_$3_count
type: COUNTER
labels:
"$4": "$5"
- pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.*)><>(\d+)thPercentile
name: kafka_$1_$2_$3
type: GAUGE
labels:
"$4": "$5"
quantile: "0.$6"
- pattern: kafka.(\w+)<type=(.+), name=(.+)><>Count
name: kafka_$1_$2_$3_count
type: COUNTER
- pattern: kafka.(\w+)<type=(.+), name=(.+)><>(\d+)thPercentile
name: kafka_$1_$2_$3
type: GAUGE
labels:
quantile: "0.$4"
# KRaft overall related metrics
# distinguish between always increasing COUNTER (total and max) and variable GAUGE (all others) metrics
- pattern: "kafka.server<type=raft-metrics><>(.+-total|.+-max):"
name: kafka_server_raftmetrics_$1
type: COUNTER
- pattern: "kafka.server<type=raft-metrics><>(current-state): (.+)"
name: kafka_server_raftmetrics_$1
value: 1
type: UNTYPED
labels:
$1: "$2"
- pattern: "kafka.server<type=raft-metrics><>(.+):"
name: kafka_server_raftmetrics_$1
type: GAUGE
# KRaft "low level" channels related metrics
# distinguish between always increasing COUNTER (total and max) and variable GAUGE (all others) metrics
- pattern: "kafka.server<type=raft-channel-metrics><>(.+-total|.+-max):"
name: kafka_server_raftchannelmetrics_$1
type: COUNTER
- pattern: "kafka.server<type=raft-channel-metrics><>(.+):"
name: kafka_server_raftchannelmetrics_$1
type: GAUGE
# Broker metrics related to fetching metadata topic records in KRaft mode
- pattern: "kafka.server<type=broker-metadata-metrics><>(.+):"
name: kafka_server_brokermetadatametrics_$1
type: GAUGE
zookeeper-metrics-config.yml: |
# See https://github.com/prometheus/jmx_exporter for more info about JMX Prometheus Exporter metrics
lowercaseOutputName: true
rules:
# replicated Zookeeper
- pattern: "org.apache.ZooKeeperService<name0=ReplicatedServer_id(\\d+)><>(\\w+)"
name: "zookeeper_$2"
type: GAUGE
- pattern: "org.apache.ZooKeeperService<name0=ReplicatedServer_id(\\d+), name1=replica.(\\d+)><>(\\w+)"
name: "zookeeper_$3"
type: GAUGE
labels:
replicaId: "$2"
- pattern: "org.apache.ZooKeeperService<name0=ReplicatedServer_id(\\d+), name1=replica.(\\d+), name2=(\\w+)><>(Packets\\w+)"
name: "zookeeper_$4"
type: COUNTER
labels:
replicaId: "$2"
memberType: "$3"
- pattern: "org.apache.ZooKeeperService<name0=ReplicatedServer_id(\\d+), name1=replica.(\\d+), name2=(\\w+)><>(\\w+)"
name: "zookeeper_$4"
type: GAUGE
labels:
replicaId: "$2"
memberType: "$3"
- pattern: "org.apache.ZooKeeperService<name0=ReplicatedServer_id(\\d+), name1=replica.(\\d+), name2=(\\w+), name3=(\\w+)><>(\\w+)"
name: "zookeeper_$4_$5"
type: GAUGE
labels:
replicaId: "$2"
memberType: "$3"


@@ -0,0 +1,40 @@
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMPodScrape
metadata:
name: {{ .Release.Name }}
spec:
podMetricsEndpoints:
- port: tcp-prometheus
scheme: http
relabelConfigs:
- separator: ;
regex: __meta_kubernetes_pod_label_(strimzi_io_.+)
replacement: $1
action: labelmap
- sourceLabels: [__meta_kubernetes_namespace]
separator: ;
regex: (.*)
targetLabel: namespace
replacement: $1
action: replace
- sourceLabels: [__meta_kubernetes_pod_name]
separator: ;
regex: (.*)
targetLabel: pod
replacement: $1
action: replace
- sourceLabels: [__meta_kubernetes_pod_node_name]
separator: ;
regex: (.*)
targetLabel: node
replacement: $1
action: replace
- sourceLabels: [__meta_kubernetes_pod_host_ip]
separator: ;
regex: (.*)
targetLabel: node_ip
replacement: $1
action: replace
selector:
matchLabels:
app.kubernetes.io/instance: {{ .Release.Name }}


@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.13.0
version: 0.15.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to


@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/cluster-autoscaler:0.13.0@sha256:7f617de5a24de790a15d9e97c6287ff2b390922e6e74c7a665cbf498f634514d
ghcr.io/aenix-io/cozystack/cluster-autoscaler:0.15.1@sha256:73701e37727eedaafdf9efe4baefcf0835f064ee8731219f0c0186c0d0781a5c


@@ -1,12 +1,14 @@
# Source: https://raw.githubusercontent.com/kubernetes/autoscaler/refs/heads/master/cluster-autoscaler/Dockerfile.amd64
ARG builder_image=docker.io/library/golang:1.22.5
ARG builder_image=docker.io/library/golang:1.23.4
ARG BASEIMAGE=gcr.io/distroless/static:nonroot-amd64
FROM ${builder_image} AS builder
RUN git clone https://github.com/kubernetes/autoscaler /src/autoscaler \
&& cd /src/autoscaler/cluster-autoscaler \
&& git checkout cluster-autoscaler-1.31.0
&& git checkout cluster-autoscaler-1.32.0
WORKDIR /src/autoscaler/cluster-autoscaler
COPY fix-downscale.diff /fix-downscale.diff
RUN git apply /fix-downscale.diff
RUN make build
FROM $BASEIMAGE


@@ -0,0 +1,13 @@
diff --git a/cluster-autoscaler/cloudprovider/clusterapi/clusterapi_unstructured.go b/cluster-autoscaler/cloudprovider/clusterapi/clusterapi_unstructured.go
index 4eec0e4bf..f28fd9241 100644
--- a/cluster-autoscaler/cloudprovider/clusterapi/clusterapi_unstructured.go
+++ b/cluster-autoscaler/cloudprovider/clusterapi/clusterapi_unstructured.go
@@ -106,8 +106,6 @@ func (r unstructuredScalableResource) Replicas() (int, error) {
func (r unstructuredScalableResource) SetSize(nreplicas int) error {
switch {
- case nreplicas > r.maxSize:
- return fmt.Errorf("size increase too large - desired:%d max:%d", nreplicas, r.maxSize)
case nreplicas < r.minSize:
return fmt.Errorf("size decrease too large - desired:%d min:%d", nreplicas, r.minSize)
}


@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/kubevirt-cloud-provider:0.13.0@sha256:b9dc8e5f0296146b37b332b07b8cd74d1b0308786160b161c670c55005d3dbe9
ghcr.io/aenix-io/cozystack/kubevirt-cloud-provider:0.15.1@sha256:02037bb7a75b35ca1e34924f13e7fa7b25bac2017ddbd7e9ed004c0ff368cce3


@@ -3,13 +3,14 @@ FROM --platform=linux/amd64 golang:1.20.6 AS builder
RUN git clone https://github.com/kubevirt/cloud-provider-kubevirt /go/src/kubevirt.io/cloud-provider-kubevirt \
&& cd /go/src/kubevirt.io/cloud-provider-kubevirt \
&& git checkout adbd6c27468b86b020cf38490e84f124ef24ab62
&& git checkout da9e0cf
WORKDIR /go/src/kubevirt.io/cloud-provider-kubevirt
# see: https://github.com/kubevirt/cloud-provider-kubevirt/pull/291
# see: https://github.com/kubevirt/cloud-provider-kubevirt/pull/335
# see: https://github.com/kubevirt/cloud-provider-kubevirt/pull/336
ADD patches /patches
RUN git apply /patches/external-traffic-policy-local.diff
RUN git apply /patches/*.diff
RUN go get 'k8s.io/endpointslice/util@v0.28' 'k8s.io/apiserver@v0.28'
RUN go mod tidy
RUN go mod vendor


@@ -0,0 +1,20 @@
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller.go b/pkg/controller/kubevirteps/kubevirteps_controller.go
index a3c1aa33..95c31438 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller.go
@@ -412,11 +412,11 @@ func (c *Controller) reconcileByAddressType(service *v1.Service, tenantSlices []
// Create the desired port configuration
var desiredPorts []discovery.EndpointPort
- for _, port := range service.Spec.Ports {
+ for i := range service.Spec.Ports {
desiredPorts = append(desiredPorts, discovery.EndpointPort{
- Port: &port.TargetPort.IntVal,
- Protocol: &port.Protocol,
- Name: &port.Name,
+ Port: &service.Spec.Ports[i].TargetPort.IntVal,
+ Protocol: &service.Spec.Ports[i].Protocol,
+ Name: &service.Spec.Ports[i].Name,
})
}


@@ -0,0 +1,129 @@
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller.go b/pkg/controller/kubevirteps/kubevirteps_controller.go
index a3c1aa33..6f6e3d32 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller.go
@@ -108,32 +108,24 @@ func newRequest(reqType ReqType, obj interface{}, oldObj interface{}) *Request {
}
func (c *Controller) Init() error {
-
- // Act on events from Services on the infra cluster. These are created by the EnsureLoadBalancer function.
- // We need to watch for these events so that we can update the EndpointSlices in the infra cluster accordingly.
+ // Existing Service event handlers...
_, err := c.infraFactory.Core().V1().Services().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
- // cast obj to Service
svc := obj.(*v1.Service)
- // Only act on Services of type LoadBalancer
if svc.Spec.Type == v1.ServiceTypeLoadBalancer {
klog.Infof("Service added: %v/%v", svc.Namespace, svc.Name)
c.queue.Add(newRequest(AddReq, obj, nil))
}
},
UpdateFunc: func(oldObj, newObj interface{}) {
- // cast obj to Service
newSvc := newObj.(*v1.Service)
- // Only act on Services of type LoadBalancer
if newSvc.Spec.Type == v1.ServiceTypeLoadBalancer {
klog.Infof("Service updated: %v/%v", newSvc.Namespace, newSvc.Name)
c.queue.Add(newRequest(UpdateReq, newObj, oldObj))
}
},
DeleteFunc: func(obj interface{}) {
- // cast obj to Service
svc := obj.(*v1.Service)
- // Only act on Services of type LoadBalancer
if svc.Spec.Type == v1.ServiceTypeLoadBalancer {
klog.Infof("Service deleted: %v/%v", svc.Namespace, svc.Name)
c.queue.Add(newRequest(DeleteReq, obj, nil))
@@ -144,7 +136,7 @@ func (c *Controller) Init() error {
return err
}
- // Monitor endpoint slices that we are interested in based on known services in the infra cluster
+ // Existing EndpointSlice event handlers in tenant cluster...
_, err = c.tenantFactory.Discovery().V1().EndpointSlices().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
eps := obj.(*discovery.EndpointSlice)
@@ -194,10 +186,80 @@ func (c *Controller) Init() error {
return err
}
- //TODO: Add informer for EndpointSlices in the infra cluster to watch for (unwanted) changes
+ // Add an informer for EndpointSlices in the infra cluster
+ _, err = c.infraFactory.Discovery().V1().EndpointSlices().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
+ AddFunc: func(obj interface{}) {
+ eps := obj.(*discovery.EndpointSlice)
+ if c.managedByController(eps) {
+ svc, svcErr := c.getInfraServiceForEPS(context.TODO(), eps)
+ if svcErr != nil {
+ klog.Errorf("Failed to get infra Service for EndpointSlice %s/%s: %v", eps.Namespace, eps.Name, svcErr)
+ return
+ }
+ if svc != nil {
+ klog.Infof("Infra EndpointSlice added: %v/%v, requeuing Service: %v/%v", eps.Namespace, eps.Name, svc.Namespace, svc.Name)
+ c.queue.Add(newRequest(AddReq, svc, nil))
+ }
+ }
+ },
+ UpdateFunc: func(oldObj, newObj interface{}) {
+ eps := newObj.(*discovery.EndpointSlice)
+ if c.managedByController(eps) {
+ svc, svcErr := c.getInfraServiceForEPS(context.TODO(), eps)
+ if svcErr != nil {
+ klog.Errorf("Failed to get infra Service for EndpointSlice %s/%s: %v", eps.Namespace, eps.Name, svcErr)
+ return
+ }
+ if svc != nil {
+ klog.Infof("Infra EndpointSlice updated: %v/%v, requeuing Service: %v/%v", eps.Namespace, eps.Name, svc.Namespace, svc.Name)
+ c.queue.Add(newRequest(UpdateReq, svc, nil))
+ }
+ }
+ },
+ DeleteFunc: func(obj interface{}) {
+ eps := obj.(*discovery.EndpointSlice)
+ if c.managedByController(eps) {
+ svc, svcErr := c.getInfraServiceForEPS(context.TODO(), eps)
+ if svcErr != nil {
+ klog.Errorf("Failed to get infra Service for EndpointSlice %s/%s on delete: %v", eps.Namespace, eps.Name, svcErr)
+ return
+ }
+ if svc != nil {
+ klog.Infof("Infra EndpointSlice deleted: %v/%v, requeuing Service: %v/%v", eps.Namespace, eps.Name, svc.Namespace, svc.Name)
+ c.queue.Add(newRequest(DeleteReq, svc, nil))
+ }
+ }
+ },
+ })
+ if err != nil {
+ return err
+ }
+
return nil
}
+// getInfraServiceForEPS returns the Service in the infra cluster associated with the given EndpointSlice.
+// It does this by reading the "kubernetes.io/service-name" label from the EndpointSlice, which should correspond
+// to the Service name. If not found or if the Service doesn't exist, it returns nil.
+func (c *Controller) getInfraServiceForEPS(ctx context.Context, eps *discovery.EndpointSlice) (*v1.Service, error) {
+ svcName := eps.Labels[discovery.LabelServiceName]
+ if svcName == "" {
+ // No service name label found, can't determine infra service.
+ return nil, nil
+ }
+
+ svc, err := c.infraClient.CoreV1().Services(c.infraNamespace).Get(ctx, svcName, metav1.GetOptions{})
+ if err != nil {
+ if k8serrors.IsNotFound(err) {
+ // Service doesn't exist
+ return nil, nil
+ }
+ return nil, err
+ }
+
+ return svc, nil
+}
+
// Run starts an asynchronous loop that monitors and updates GKENetworkParamSet in the cluster.
func (c *Controller) Run(numWorkers int, stopCh <-chan struct{}, controllerManagerMetrics *controllersmetrics.ControllerManagerMetrics) {
defer utilruntime.HandleCrash()


@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/kubevirt-csi-driver:0.13.0@sha256:1c96280e10becb858cb5f781a278f383319514f803c8e5fe401e0ef291f65821
ghcr.io/aenix-io/cozystack/kubevirt-csi-driver:0.15.1@sha256:a86d8a4722b81e89820ead959874524c4cc86654c22ad73c421bbf717d62c3f3


@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/ubuntu-container-disk:v1.30.1@sha256:422e2078ebb24b4c327edefaad6dc9b3c8f6cf0cab3d8286c4f2b27ecf8b8466
ghcr.io/aenix-io/cozystack/ubuntu-container-disk:v1.30.1@sha256:6f19f3f8a68372c5b212e98a79ff132cc20641bc46fc4b8d359158945dc04043


@@ -30,6 +30,12 @@ spec:
- /cluster-autoscaler
args:
- --cloud-provider=clusterapi
- --enforce-node-group-min-size=true
- --ignore-daemonsets-utilization=true
- --ignore-mirror-pods-utilization=true
- --scale-down-unneeded-time=30s
- --scan-interval=25s
- --force-delete-unregistered-nodes=true
- --kubeconfig=/etc/kubernetes/kubeconfig/super-admin.svc
- --clusterapi-cloud-config-authoritative
- --node-group-auto-discovery=clusterapi:namespace={{ .Release.Namespace }},clusterName={{ .Release.Name }}


@@ -29,6 +29,7 @@ spec:
{{- range .group.roles }}
node-role.kubernetes.io/{{ . }}: ""
{{- end }}
cluster.x-k8s.io/deployment-name: {{ $.Release.Name }}-{{ .groupName }}
spec:
domain:
{{- if and .group.resources .group.resources.cpu }}
@@ -117,7 +118,7 @@ spec:
ingress:
extraAnnotations:
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
hostname: {{ .Values.host | default (printf "%s.%s" .Release.Name $host) }}:443
hostname: {{ .Values.host | default (printf "%s.%s" .Release.Name $host) }}
className: "{{ $ingress }}"
deployment:
podAdditionalMetadata:
@@ -126,6 +127,21 @@ spec:
replicas: 2
version: 1.30.1
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
spec:
replicas: 2
minReplicas: 1
kind: kubernetes
type: control-plane
selector:
kamaji.clastix.io/component: deployment
kamaji.clastix.io/name: {{ .Release.Name }}
version: {{ $.Chart.Version }}
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: KubevirtCluster
metadata:
@@ -172,6 +188,7 @@ spec:
---
{{- $context := deepCopy $ }}
{{- $_ := set $context "group" $group }}
{{- $_ := set $context "groupName" $groupName }}
{{- $kubevirtmachinetemplate := include "kubevirtmachinetemplate" $context }}
{{- $kubevirtmachinetemplateHash := $kubevirtmachinetemplate | sha256sum | trunc 6 }}
{{- $kubevirtmachinetemplateName := printf "%s-%s-%s" $.Release.Name $groupName $kubevirtmachinetemplateHash }}
@@ -255,6 +272,21 @@ spec:
- type: Ready
status: "False"
timeout: 300s
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}-{{ $groupName }}
namespace: {{ $.Release.Namespace }}
spec:
minReplicas: {{ $group.minReplicas }}
kind: kubernetes
type: worker
selector:
cluster.x-k8s.io/cluster-name: {{ $.Release.Name }}
cluster.x-k8s.io/deployment-name: {{ $.Release.Name }}-{{ $groupName }}
cluster.x-k8s.io/role: worker
version: {{ $.Chart.Version }}
{{- end }}
---
{{- /*


@@ -24,3 +24,13 @@ rules:
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
- apiGroups:
- cozystack.io
resources:
- workloadmonitors
resourceNames:
- {{ .Release.Name }}
{{- range $groupName, $group := .Values.nodeGroups }}
- {{ $.Release.Name }}-{{ $groupName }}
{{- end }}
verbs: ["get", "list", "watch"]


@@ -0,0 +1,54 @@
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: {{ .Release.Name }}-cert-manager-crds
labels:
cozystack.io/repository: system
cozystack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
releaseName: cert-manager-crds
chart:
spec:
chart: cozy-cert-manager-crds
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
kubeConfig:
secretRef:
name: {{ .Release.Name }}-kubeconfig
targetNamespace: cozy-cert-manager-crds
storageNamespace: cozy-cert-manager-crds
install:
createNamespace: true
remediation:
retries: -1
upgrade:
remediation:
retries: -1
{{- if .Values.addons.certManager.valuesOverride }}
valuesFrom:
- kind: Secret
name: {{ .Release.Name }}-cert-manager-crds-values-override
valuesKey: values
{{- end }}
dependsOn:
{{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" .Release.Namespace .Release.Name }}
- name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
{{- end }}
- name: {{ .Release.Name }}-cilium
namespace: {{ .Release.Namespace }}
{{- if .Values.addons.certManager.valuesOverride }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-cert-manager-crds-values-override
stringData:
values: |
{{- toYaml .Values.addons.certManager.valuesOverride | nindent 4 }}
{{- end }}


@@ -43,6 +43,8 @@ spec:
{{- end }}
- name: {{ .Release.Name }}-cilium
namespace: {{ .Release.Namespace }}
- name: {{ .Release.Name }}-cert-manager-crds
namespace: {{ .Release.Namespace }}
{{- end }}
{{- if .Values.addons.certManager.valuesOverride }}
---


@@ -0,0 +1,104 @@
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $targetTenant := index $myNS.metadata.annotations "namespace.cozystack.io/monitoring" }}
{{- if .Values.addons.monitoringAgents.enabled }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: {{ .Release.Name }}-monitoring-agents
labels:
cozystack.io/repository: system
cozystack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
releaseName: cozy-monitoring-agents
chart:
spec:
chart: cozy-monitoring-agents
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
kubeConfig:
secretRef:
name: {{ .Release.Name }}-kubeconfig
targetNamespace: cozy-monitoring-agents
storageNamespace: cozy-monitoring-agents
install:
createNamespace: true
timeout: "300s"
remediation:
retries: -1
upgrade:
remediation:
retries: -1
dependsOn:
{{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" .Release.Namespace .Release.Name }}
- name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
{{- end }}
- name: {{ .Release.Name }}-cilium
namespace: {{ .Release.Namespace }}
- name: {{ .Release.Name }}-cozy-victoria-metrics-operator
namespace: {{ .Release.Namespace }}
values:
vmagent:
externalLabels:
cluster: {{ .Release.Name }}
tenant: {{ .Release.Namespace }}
remoteWrite:
url: http://vminsert-shortterm.{{ $targetTenant }}.svc:8480/insert/0/prometheus
fluent-bit:
readinessProbe:
httpGet:
path: /
daemonSetVolumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
daemonSetVolumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
config:
outputs: |
[OUTPUT]
Name http
Match kube.*
Host vlogs-generic.{{ $targetTenant }}.svc
port 9428
compress gzip
uri /insert/jsonline?_stream_fields=stream,kubernetes_pod_name,kubernetes_container_name,kubernetes_namespace_name&_msg_field=log&_time_field=date
format json_lines
json_date_format iso8601
header AccountID 0
header ProjectID 0
filters: |
[FILTER]
Name kubernetes
Match kube.*
Merge_Log On
Keep_Log On
K8S-Logging.Parser On
K8S-Logging.Exclude On
[FILTER]
Name nest
Match *
Wildcard pod_name
Operation lift
Nested_under kubernetes
Add_prefix kubernetes_
[FILTER]
Name modify
Match *
Add tenant {{ .Release.Namespace }}
[FILTER]
Name modify
Match *
Add cluster {{ .Release.Name }}
{{- end }}


@@ -0,0 +1,41 @@
{{- if .Values.addons.monitoringAgents.enabled }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: {{ .Release.Name }}-cozy-victoria-metrics-operator
labels:
cozystack.io/repository: system
cozystack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
releaseName: cozy-victoria-metrics-operator
chart:
spec:
chart: cozy-victoria-metrics-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
kubeConfig:
secretRef:
name: {{ .Release.Name }}-kubeconfig
targetNamespace: cozy-victoria-metrics-operator
storageNamespace: cozy-victoria-metrics-operator
install:
createNamespace: true
remediation:
retries: -1
upgrade:
remediation:
retries: -1
dependsOn:
{{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" .Release.Namespace .Release.Name }}
- name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
{{- end }}
- name: {{ .Release.Name }}-cilium
namespace: {{ .Release.Namespace }}
- name: {{ .Release.Name }}-cert-manager-crds
namespace: {{ .Release.Namespace }}
{{- end }}


@@ -75,8 +75,23 @@
"default": {}
}
}
},
"monitoringAgents": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Enables MonitoringAgents (fluentbit, vmagents for sending logs and metrics to storage) if tenant monitoring enabled, send to tenant storage, else to root storage",
"default": false
},
"valuesOverride": {
"type": "object",
"description": "Custom values to override",
"default": {}
}
}
}
}
}
}
}
}


@@ -60,3 +60,12 @@ addons:
##
enabled: false
valuesOverride: {}
## MonitoringAgents
##
monitoringAgents:
## @param addons.monitoringAgents.enabled Enables monitoring agents (fluent-bit, vmagent) to ship logs and metrics; if tenant monitoring is enabled they are sent to the tenant's storage, otherwise to the root storage
## @param addons.monitoringAgents.valuesOverride Custom values to override
##
enabled: false
valuesOverride: {}


@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/mariadb-backup:0.5.2@sha256:793edb25a29cbc00781e40af883815ca36937e736e2b0d202ea9c9619fb6ca11
ghcr.io/aenix-io/cozystack/mariadb-backup:0.5.2@sha256:9f0b2bc5135e10b29edb2824309059f5b4c4e8b744804b2cf55381171f335675


@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.2.0
version: 0.4.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to


@@ -4,9 +4,13 @@
### Common parameters
| Name | Description | Value |
| -------------- | ----------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `replicas` | Persistent Volume size for NATS | `2` |
| `storageClass` | StorageClass used to store the data | `""` |
| Name | Description | Value |
| ------------------- | -------------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `replicas`          | Number of NATS replicas                             | `2`     |
| `storageClass` | StorageClass used to store the data | `""` |
| `users` | Users configuration | `{}` |
| `jetstream.size` | Jetstream persistent storage size | `10Gi` |
| `jetstream.enabled` | Enable or disable Jetstream | `true` |
| `config.merge` | Additional configuration to merge into NATS config | `{}` |
| `config.resolver` | Additional configuration to merge into NATS config | `{}` |


@@ -1,3 +1,25 @@
{{- $passwords := dict }}
{{- range $user, $u := .Values.users }}
{{- if $u.password }}
{{- $_ := set $passwords $user $u.password }}
{{- else if not (index $passwords $user) }}
{{- $_ := set $passwords $user (randAlphaNum 16) }}
{{- end }}
{{- end }}
{{- if .Values.users }}
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-credentials
stringData:
{{- range $user, $u := .Values.users }}
{{ quote $user }}: {{ quote (index $passwords $user) }}
{{- end }}
{{- end }}
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
@@ -18,6 +40,25 @@ spec:
nats:
fullnameOverride: {{ .Release.Name }}
config:
{{- if or (gt (len $passwords) 0) (gt (len .Values.config.merge) 0) }}
merge:
{{- if gt (len $passwords) 0 }}
accounts:
A:
users:
{{- range $username, $password := $passwords }}
- user: "{{ $username }}"
password: "{{ $password }}"
{{- end }}
{{- end }}
{{- if and .Values.config (hasKey .Values.config "merge") }}
{{ toYaml .Values.config.merge | nindent 12 }}
{{- end }}
{{- end }}
{{- if and .Values.config (hasKey .Values.config "resolver") }}
resolver:
{{ toYaml .Values.config.resolver | nindent 12 }}
{{- end }}
cluster:
enabled: true
replicas: {{ .Values.replicas }}
@@ -26,10 +67,10 @@ spec:
jetstream:
enabled: true
fileStore:
enabled: true
enabled: {{ .Values.jetstream.enabled }}
pvc:
enabled: true
size: 10Gi
size: {{ .Values.jetstream.size }}
{{- with .Values.storageClass }}
storageClassName: {{ . }}
{{- end }}


@@ -0,0 +1,19 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ .Release.Name }}-dashboard-resources
rules:
- apiGroups:
- ""
resources:
- services
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
- apiGroups:
- ""
resources:
- secrets
resourceNames:
- {{ .Release.Name }}-credentials
verbs: ["get", "list", "watch"]


@@ -16,6 +16,36 @@
"type": "string",
"description": "StorageClass used to store the data",
"default": ""
},
"jetstream": {
"type": "object",
"properties": {
"size": {
"type": "string",
"description": "Jetstream persistent storage size",
"default": "10Gi"
},
"enabled": {
"type": "boolean",
"description": "Enable or disable Jetstream",
"default": true
}
}
},
"config": {
"type": "object",
"properties": {
"merge": {
"type": "object",
"description": "Additional configuration to merge into NATS config",
"default": {}
},
"resolver": {
"type": "object",
"description": "Additional configuration to merge into NATS config",
"default": {}
}
}
}
}
}


@@ -8,3 +8,56 @@
external: false
replicas: 2
storageClass: ""
## @param users [object] Users configuration
## Example:
## users:
## user1:
## password: strongpassword
## user2: {}
users: {}
jetstream:
## @param jetstream.size Jetstream persistent storage size
## Specifies the size of the persistent storage for Jetstream (message store).
## Default: 10Gi
size: 10Gi
## @param jetstream.enabled Enable or disable Jetstream
## Set to true to enable Jetstream for persistent messaging in NATS.
## Default: true
enabled: true
config:
## @param config.merge Additional configuration to merge into NATS config
## Allows you to customize NATS server settings by merging additional configurations.
## For example, you can add extra parameters, configure authentication, or tune JetStream limits.
## Default: {}
## example:
##
## merge:
## $include: ./my-config.conf
## zzz$include: ./my-config-last.conf
## server_name: nats
## authorization:
## token: << $TOKEN >>
## jetstream:
## max_memory_store: << 1GB >>
##
## will yield the config:
## {
## include ./my-config.conf;
## "authorization": {
## "token": $TOKEN
## },
## "jetstream": {
## "max_memory_store": 1GB
## },
## "server_name": "nats",
## include ./my-config-last.conf;
## }
merge: {}
## @param config.resolver Resolver configuration to merge into NATS config
## Allows you to customize NATS server settings by merging resolver configuration.
## Default: {}
## For an example, see https://github.com/nats-io/k8s/blob/main/helm/charts/nats/values.yaml#L247
resolver: {}

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.7.1
version: 0.8.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/postgres-backup:0.7.1@sha256:d2015c6dba92293bda652d055e97d1be80e8414c2dc78037c12812d1a2e2cba1
ghcr.io/aenix-io/cozystack/postgres-backup:0.8.0@sha256:d1f7692b6761f46f24687d885ec335330280346ae4a9ff28b3179681b36106b7

View File

@@ -19,3 +19,10 @@ rules:
resourceNames:
- {{ .Release.Name }}-credentials
verbs: ["get", "list", "watch"]
- apiGroups:
- cozystack.io
resources:
- workloadmonitors
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]

View File

@@ -29,3 +29,17 @@ spec:
inheritedMetadata:
labels:
policy.cozystack.io/allow-to-apiserver: "true"
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}
spec:
replicas: {{ .Values.replicas }}
minReplicas: 1
kind: postgres
type: postgres
selector:
cnpg.io/cluster: {{ .Release.Name }}
cnpg.io/podRole: instance
version: {{ $.Chart.Version }}

View File

@@ -103,4 +103,4 @@
}
}
}
}
}

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.3.0
version: 0.5.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -19,5 +19,6 @@ Service utilizes the Spotahome Redis Operator for efficient management and orche
| `size` | Persistent Volume size | `1Gi` |
| `replicas` | Number of Redis replicas | `2` |
| `storageClass` | StorageClass used to store the data | `""` |
| `authEnabled` | Enable password generation | `true` |
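
As a sketch, a values override enabling the new option might look like this (sizes are illustrative):

external: false
size: 1Gi
replicas: 2
storageClass: ""
authEnabled: true   # a password is generated and stored in the <release>-auth Secret

When authEnabled is true, the chart reuses an existing <release>-auth Secret if one is found, so the password survives upgrades.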

View File

@@ -0,0 +1,30 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ .Release.Name }}-dashboard-resources
rules:
- apiGroups:
- ""
resources:
- services
resourceNames:
- rfs-{{ .Release.Name }}
- rfrm-{{ .Release.Name }}
- rfrs-{{ .Release.Name }}
- "{{ .Release.Name }}-external-lb"
verbs: ["get", "list", "watch"]
- apiGroups:
- ""
resources:
- secrets
resourceNames:
- "{{ .Release.Name }}-auth"
verbs: ["get", "list", "watch"]
- apiGroups:
- cozystack.io
resources:
- workloadmonitors
resourceNames:
- {{ .Release.Name }}-redis
- {{ .Release.Name }}-sentinel
verbs: ["get", "list", "watch"]

View File

@@ -1,3 +1,20 @@
{{- if .Values.authEnabled }}
{{- $existingPassword := lookup "v1" "Secret" .Release.Namespace (printf "%s-auth" .Release.Name) }}
{{- $password := randAlphaNum 32 | b64enc }}
{{- if $existingPassword }}
{{- $password = index $existingPassword.data "password" }}
{{- end }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-auth
data:
password: {{ $password }}
{{- end }}
---
apiVersion: databases.spotahome.com/v1
kind: RedisFailover
metadata:
@@ -20,7 +37,6 @@ spec:
cpu: 150m
memory: 400Mi
limits:
cpu: 2
memory: 1000Mi
{{- with .Values.size }}
storage:
@@ -37,7 +53,7 @@ spec:
storageClassName: {{ . }}
{{- end }}
{{- end }}
exporter:
exporter:
enabled: true
image: oliver006/redis_exporter:v1.55.0-alpine
args:
@@ -53,3 +69,38 @@ spec:
- appendonly no
- save ""
{{- end }}
{{- if .Values.authEnabled }}
auth:
secretPath: {{ .Release.Name }}-auth
{{- end }}
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}-redis
namespace: {{ $.Release.Namespace }}
spec:
minReplicas: 1
replicas: {{ .Values.replicas }}
kind: redis
type: redis
selector:
app.kubernetes.io/component: redis
app.kubernetes.io/instance: {{ $.Release.Name }}
version: {{ $.Chart.Version }}
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}-sentinel
namespace: {{ $.Release.Namespace }}
spec:
minReplicas: 2
replicas: 3
kind: redis
type: sentinel
selector:
app.kubernetes.io/component: sentinel
app.kubernetes.io/instance: {{ $.Release.Name }}
version: {{ $.Chart.Version }}

View File

@@ -21,6 +21,11 @@
"type": "string",
"description": "StorageClass used to store the data",
"default": ""
},
"authEnabled": {
"type": "boolean",
"description": "Enable password generation",
"default": true
}
}
}

View File

@@ -4,8 +4,10 @@
## @param size Persistent Volume size
## @param replicas Number of Redis replicas
## @param storageClass StorageClass used to store the data
## @param authEnabled Enable password generation
##
external: false
size: 1Gi
replicas: 2
storageClass: ""
authEnabled: true

View File

@@ -4,4 +4,4 @@ description: Separated tenant namespace
icon: /logos/tenant.svg
type: application
version: 1.5.0
version: 1.8.0

View File

@@ -50,11 +50,12 @@ tenant-u1
### Common parameters
| Name | Description | Value |
| ------------ | --------------------------------------------------------------------------------------------------------------------------- | ------- |
| `host`       | The hostname used to access tenant services (defaults to using the tenant name as a subdomain of its parent tenant host).   | `""`    |
| `etcd` | Deploy own Etcd cluster | `false` |
| `monitoring` | Deploy own Monitoring Stack | `false` |
| `ingress` | Deploy own Ingress Controller | `false` |
| `seaweedfs` | Deploy own SeaweedFS | `false` |
| `isolated` | Enforce tenant namespace with network policies | `false` |
| Name | Description | Value |
| ---------------- | --------------------------------------------------------------------------------------------------------------------------- | ------- |
| `host`           | The hostname used to access tenant services (defaults to using the tenant name as a subdomain of its parent tenant host).   | `""`    |
| `etcd` | Deploy own Etcd cluster | `false` |
| `monitoring` | Deploy own Monitoring Stack | `false` |
| `ingress` | Deploy own Ingress Controller | `false` |
| `seaweedfs` | Deploy own SeaweedFS | `false` |
| `isolated` | Enforce tenant namespace with network policies | `false` |
| `resourceQuotas` | Define resource quotas for the tenant | `{}` |
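
For illustration, a tenant with quotas enabled might be configured as follows (the quota values are hypothetical; the keys are standard Kubernetes ResourceQuota `spec.hard` keys and are rendered verbatim by the chart):

host: ""
etcd: false
monitoring: true
ingress: true
seaweedfs: false
isolated: true
resourceQuotas:
  requests.cpu: "4"
  requests.memory: 8Gi
  limits.cpu: "8"
  limits.memory: 16Gi
  requests.storage: 200Gi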

View File

@@ -0,0 +1,27 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $oidcEnabled := index $cozyConfig.data "oidc-enabled" }}
{{- if $oidcEnabled }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: info
namespace: {{ include "tenant.name" . }}
annotations:
helm.sh/resource-policy: keep
labels:
cozystack.io/ui: "true"
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
chart:
spec:
chart: info
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-extra
namespace: cozy-public
version: "*"
interval: 1m0s
timeout: 5m0s
{{- end }}

View File

@@ -0,0 +1,53 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $oidcEnabled := index $cozyConfig.data "oidc-enabled" }}
{{- if $oidcEnabled }}
apiVersion: v1.edp.epam.com/v1
kind: KeycloakRealmGroup
metadata:
name: {{ include "tenant.name" . }}-view
namespace: {{ include "tenant.name" . }}
spec:
name: {{ include "tenant.name" . }}-view
realmRef:
name: keycloakrealm-cozy
kind: ClusterKeycloakRealm
---
apiVersion: v1.edp.epam.com/v1
kind: KeycloakRealmGroup
metadata:
name: {{ include "tenant.name" . }}-use
namespace: {{ include "tenant.name" . }}
spec:
name: {{ include "tenant.name" . }}-use
realmRef:
name: keycloakrealm-cozy
kind: ClusterKeycloakRealm
---
apiVersion: v1.edp.epam.com/v1
kind: KeycloakRealmGroup
metadata:
name: {{ include "tenant.name" . }}-admin
namespace: {{ include "tenant.name" . }}
spec:
name: {{ include "tenant.name" . }}-admin
realmRef:
name: keycloakrealm-cozy
kind: ClusterKeycloakRealm
---
apiVersion: v1.edp.epam.com/v1
kind: KeycloakRealmGroup
metadata:
name: {{ include "tenant.name" . }}-super-admin
namespace: {{ include "tenant.name" . }}
spec:
name: {{ include "tenant.name" . }}-super-admin
realmRef:
name: keycloakrealm-cozy
kind: ClusterKeycloakRealm
{{- end }}

View File

@@ -26,12 +26,24 @@ spec:
metricsStorages:
- name: shortterm
retentionPeriod: "3d"
deduplicationInterval: "5m"
storage: 10Gi
- name: longterm
retentionPeriod: "14d"
deduplicationInterval: "15s"
storage: 10Gi
vminsert:
resources: {}
vmselect:
resources: {}
vmstorage:
resources: {}
- name: longterm
retentionPeriod: "14d"
deduplicationInterval: "5m"
storage: 10Gi
vminsert:
resources: {}
vmselect:
resources: {}
vmstorage:
resources: {}
oncall:
enabled: false
{{- end }}

View File

@@ -159,6 +159,18 @@ spec:
---
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
name: allow-to-keycloak
namespace: {{ include "tenant.name" . }}
spec:
endpointSelector: {}
egress:
- toEndpoints:
- matchLabels:
"k8s:io.kubernetes.pod.namespace": cozy-keycloak
---
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
name: allow-to-cdi-upload-proxy
namespace: {{ include "tenant.name" . }}

View File

@@ -0,0 +1,10 @@
{{- if .Values.resourceQuotas }}
apiVersion: v1
kind: ResourceQuota
metadata:
name: tenant-quota
namespace: {{ include "tenant.name" . }}
spec:
hard:
{{- toYaml .Values.resourceQuotas | nindent 4 }}
{{- end }}
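
As a sketch, with hypothetical values of resourceQuotas: {requests.cpu: "1", requests.memory: 1Gi} for a tenant namespace tenant-example, the template above would render roughly:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-quota
  namespace: tenant-example
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi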

View File

@@ -14,6 +14,8 @@ metadata:
kubernetes.io/service-account.name: {{ include "tenant.name" . }}
type: kubernetes.io/service-account-token
---
# == default role ==
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
@@ -29,9 +31,14 @@ rules:
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["roles"]
verbs: ["get"]
- apiGroups: ["helm.toolkit.fluxcd.io"]
resources: ["helmreleases"]
verbs: ["*"]
- apiGroups: ["apps.cozystack.io"]
resources: ['*']
verbs: ['*']
- apiGroups:
- cozystack.io
resources:
- workloadmonitors
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
@@ -62,6 +69,328 @@ roleRef:
name: {{ include "tenant.name" . }}
apiGroup: rbac.authorization.k8s.io
---
# == view role ==
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "tenant.name" . }}-view
namespace: {{ include "tenant.name" . }}
rules:
- apiGroups:
- rbac.authorization.k8s.io
resources:
- roles
verbs:
- get
- apiGroups:
- apps.cozystack.io
resources:
- "*"
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- "*"
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- cozystack.io
resources:
- workloadmonitors
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "tenant.name" . }}-view
namespace: {{ include "tenant.name" . }}
subjects:
{{- if ne .Release.Namespace "tenant-root" }}
- kind: Group
name: tenant-root-view
apiGroup: rbac.authorization.k8s.io
{{- end }}
- kind: Group
name: {{ include "tenant.name" . }}-view
apiGroup: rbac.authorization.k8s.io
{{- if hasPrefix "tenant-" .Release.Namespace }}
{{- $parts := splitList "-" .Release.Namespace }}
{{- range $i, $v := $parts }}
{{- if ne $i 0 }}
- kind: Group
name: {{ join "-" (slice $parts 0 (add $i 1)) }}-view
apiGroup: rbac.authorization.k8s.io
{{- end }}
{{- end }}
{{- end }}
roleRef:
kind: Role
name: {{ include "tenant.name" . }}-view
apiGroup: rbac.authorization.k8s.io
---
# == use role ==
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "tenant.name" . }}-use
namespace: {{ include "tenant.name" . }}
rules:
- apiGroups: [rbac.authorization.k8s.io]
resources:
- roles
verbs:
- get
- apiGroups: ["apps.cozystack.io"]
resources:
- "*"
verbs:
- get
- list
- watch
- apiGroups: [""]
resources:
- "*"
verbs:
- get
- list
- watch
- apiGroups: ["networking.k8s.io"]
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups: ["subresources.kubevirt.io"]
resources:
- virtualmachineinstances/console
- virtualmachineinstances/vnc
verbs:
- get
- list
- apiGroups:
- cozystack.io
resources:
- workloadmonitors
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "tenant.name" . }}-use
namespace: {{ include "tenant.name" . }}
subjects:
{{- if ne .Release.Namespace "tenant-root" }}
- kind: Group
name: tenant-root-use
apiGroup: rbac.authorization.k8s.io
{{- end }}
- kind: Group
name: {{ include "tenant.name" . }}-use
apiGroup: rbac.authorization.k8s.io
{{- if hasPrefix "tenant-" .Release.Namespace }}
{{- $parts := splitList "-" .Release.Namespace }}
{{- range $i, $v := $parts }}
{{- if ne $i 0 }}
- kind: Group
name: {{ join "-" (slice $parts 0 (add $i 1)) }}-use
apiGroup: rbac.authorization.k8s.io
{{- end }}
{{- end }}
{{- end }}
roleRef:
kind: Role
name: {{ include "tenant.name" . }}-use
apiGroup: rbac.authorization.k8s.io
---
# == admin role ==
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "tenant.name" . }}-admin
namespace: {{ include "tenant.name" . }}
rules:
- apiGroups: [rbac.authorization.k8s.io]
resources:
- roles
verbs:
- get
- apiGroups: [""]
resources:
- "*"
verbs:
- get
- list
- watch
- delete
- apiGroups: ["kubevirt.io"]
resources:
- virtualmachines
verbs:
- get
- list
- apiGroups: ["subresources.kubevirt.io"]
resources:
- virtualmachineinstances/console
- virtualmachineinstances/vnc
verbs:
- get
- list
- apiGroups: ["apps.cozystack.io"]
resources:
- buckets
- clickhouses
- ferretdb
- foos
- httpcaches
- kafkas
- kuberneteses
- mysqls
- natses
- postgreses
- rabbitmqs
- redises
- seaweedfses
- tcpbalancers
- virtualmachines
- vmdisks
- vminstances
- infos
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- cozystack.io
resources:
- workloadmonitors
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "tenant.name" . }}-admin
namespace: {{ include "tenant.name" . }}
subjects:
{{- if ne .Release.Namespace "tenant-root" }}
- kind: Group
name: tenant-root-admin
apiGroup: rbac.authorization.k8s.io
{{- end }}
- kind: Group
name: {{ include "tenant.name" . }}-admin
apiGroup: rbac.authorization.k8s.io
{{- if hasPrefix "tenant-" .Release.Namespace }}
{{- $parts := splitList "-" .Release.Namespace }}
{{- range $i, $v := $parts }}
{{- if ne $i 0 }}
- kind: Group
name: {{ join "-" (slice $parts 0 (add $i 1)) }}-admin
apiGroup: rbac.authorization.k8s.io
{{- end }}
{{- end }}
{{- end }}
roleRef:
kind: Role
name: {{ include "tenant.name" . }}-admin
apiGroup: rbac.authorization.k8s.io
---
# == super admin role ==
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "tenant.name" . }}-super-admin
namespace: {{ include "tenant.name" . }}
rules:
- apiGroups: [rbac.authorization.k8s.io]
resources:
- roles
verbs:
- get
- apiGroups: [""]
resources:
- "*"
verbs:
- get
- list
- watch
- delete
- apiGroups: ["kubevirt.io"]
resources:
- virtualmachines
verbs:
- '*'
- apiGroups: ["subresources.kubevirt.io"]
resources:
- virtualmachineinstances/console
- virtualmachineinstances/vnc
verbs:
- get
- list
- apiGroups: ["apps.cozystack.io"]
resources:
- '*'
verbs:
- '*'
- apiGroups:
- cozystack.io
resources:
- workloadmonitors
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "tenant.name" . }}-super-admin
namespace: {{ include "tenant.name" . }}
subjects:
{{- if ne .Release.Namespace "tenant-root" }}
- kind: Group
name: tenant-root-super-admin
apiGroup: rbac.authorization.k8s.io
{{- end }}
- kind: Group
name: {{ include "tenant.name" . }}-super-admin
apiGroup: rbac.authorization.k8s.io
{{- if hasPrefix "tenant-" .Release.Namespace }}
{{- $parts := splitList "-" .Release.Namespace }}
{{- range $i, $v := $parts }}
{{- if ne $i 0 }}
- kind: Group
name: {{ join "-" (slice $parts 0 (add $i 1)) }}-super-admin
apiGroup: rbac.authorization.k8s.io
{{- end }}
{{- end }}
{{- end }}
roleRef:
kind: Role
name: {{ include "tenant.name" . }}-super-admin
apiGroup: rbac.authorization.k8s.io
---
# == dashboard role ==
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
@@ -73,7 +402,7 @@ rules:
verbs: ["get", "list"]
- apiGroups: ["source.toolkit.fluxcd.io"]
resources: ["helmcharts"]
verbs: ["*"]
verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
@@ -81,6 +410,18 @@ metadata:
name: {{ include "tenant.name" . }}
namespace: cozy-public
subjects:
- kind: Group
name: {{ include "tenant.name" . }}-super-admin
apiGroup: rbac.authorization.k8s.io
- kind: Group
name: {{ include "tenant.name" . }}-admin
apiGroup: rbac.authorization.k8s.io
- kind: Group
name: {{ include "tenant.name" . }}-use
apiGroup: rbac.authorization.k8s.io
- kind: Group
name: {{ include "tenant.name" . }}-view
apiGroup: rbac.authorization.k8s.io
- kind: ServiceAccount
name: {{ include "tenant.name" . }}
namespace: {{ include "tenant.name" . }}

View File

@@ -31,6 +31,11 @@
"type": "boolean",
"description": "Enforce tenant namespace with network policies",
"default": false
},
"resourceQuotas": {
"type": "object",
"description": "Define resource quotas for the tenant",
"default": {}
}
}
}

View File

@@ -6,9 +6,18 @@
## @param ingress Deploy own Ingress Controller
## @param seaweedfs Deploy own SeaweedFS
## @param isolated Enforce tenant namespace with network policies
## @param resourceQuotas Define resource quotas for the tenant
host: ""
etcd: false
monitoring: false
ingress: false
seaweedfs: false
isolated: false
resourceQuotas: {}
# resourceQuotas:
# requests.cpu: "1"
# requests.memory: "1Gi"
# limits.cpu: "2"
# limits.memory: "2Gi"
# requests.nvidia.com/gpu: 4
# requests.storage: 100Gi

View File

@@ -5,7 +5,8 @@ clickhouse 0.2.1 5ca8823
clickhouse 0.3.0 b00621e
clickhouse 0.4.0 320fc32
clickhouse 0.5.0 2a4768a5
clickhouse 0.6.0 HEAD
clickhouse 0.6.0 18bbdb67
clickhouse 0.6.1 HEAD
ferretdb 0.1.0 4ffa8615
ferretdb 0.1.1 5ca8823
ferretdb 0.2.0 adaf603
@@ -21,7 +22,9 @@ kafka 0.2.0 a2cc83d
kafka 0.2.1 3ac17018
kafka 0.2.2 d0758692
kafka 0.2.3 5ca8823
kafka 0.3.0 HEAD
kafka 0.3.0 c07c4bbd
kafka 0.3.1 b7375f73
kafka 0.3.2 HEAD
kubernetes 0.1.0 f642698
kubernetes 0.2.0 7cd7de73
kubernetes 0.3.0 7caccec1
@@ -38,7 +41,11 @@ kubernetes 0.11.0 4eaca42
kubernetes 0.11.1 4f430a90
kubernetes 0.12.0 74649f8
kubernetes 0.12.1 28fca4e
kubernetes 0.13.0 HEAD
kubernetes 0.13.0 ced8e5b9
kubernetes 0.14.0 bfbde07c
kubernetes 0.14.1 fde4bcfa
kubernetes 0.15.0 cb7b8158
kubernetes 0.15.1 HEAD
mysql 0.1.0 f642698
mysql 0.2.0 8b975ff0
mysql 0.3.0 5ca8823
@@ -47,7 +54,10 @@ mysql 0.5.0 4b84798
mysql 0.5.1 fab5940b
mysql 0.5.2 HEAD
nats 0.1.0 5ca8823
nats 0.2.0 HEAD
nats 0.2.0 c07c4bbd
nats 0.3.0 78366f19
nats 0.3.1 b7375f73
nats 0.4.0 HEAD
postgres 0.1.0 f642698
postgres 0.2.0 7cd7de73
postgres 0.2.1 4a97e297
@@ -58,7 +68,8 @@ postgres 0.5.0 c07c4bbd
postgres 0.6.0 2a4768a
postgres 0.6.2 54fd61c
postgres 0.7.0 dc9d8bb
postgres 0.7.1 HEAD
postgres 0.7.1 175a65f
postgres 0.8.0 HEAD
rabbitmq 0.1.0 f642698
rabbitmq 0.2.0 5ca8823
rabbitmq 0.3.0 9e33dc0
@@ -68,7 +79,10 @@ rabbitmq 0.4.2 00b2834e
rabbitmq 0.4.3 HEAD
redis 0.1.1 f642698
redis 0.2.0 5ca8823
redis 0.3.0 HEAD
redis 0.3.0 c07c4bbd
redis 0.3.1 b7375f73
redis 0.4.0 abc8f082
redis 0.5.0 HEAD
tcp-balancer 0.1.0 f642698
tcp-balancer 0.2.0 HEAD
tenant 0.1.3 3d1b86c
@@ -80,15 +94,38 @@ tenant 1.2.0 15478a88
tenant 1.3.0 ceefae03
tenant 1.3.1 c56e5769
tenant 1.4.0 94c688f7
tenant 1.5.0 HEAD
tenant 1.5.0 48128743
tenant 1.6.0 df448b99
tenant 1.6.1 edbbb9be
tenant 1.6.2 ccedc5fe
tenant 1.6.3 2057bb96
tenant 1.6.4 3c9e50a4
tenant 1.6.5 f1e11451
tenant 1.6.6 d4634797
tenant 1.6.7 06afcf27
tenant 1.6.8 4cc48e6f
tenant 1.7.0 6c73e3f3
tenant 1.8.0 HEAD
virtual-machine 0.1.4 f2015d6
virtual-machine 0.1.5 7cd7de7
virtual-machine 0.2.0 5ca8823
virtual-machine 0.3.0 b908400
virtual-machine 0.4.0 4746d51
virtual-machine 0.5.0 HEAD
virtual-machine 0.5.0 cad9cde
virtual-machine 0.6.0 0e728870
virtual-machine 0.7.0 af58018a
virtual-machine 0.7.1 05857b95
virtual-machine 0.8.0 3fa4dd3
virtual-machine 0.8.1 3fa4dd3a
virtual-machine 0.8.2 HEAD
vm-disk 0.1.0 HEAD
vm-instance 0.1.0 HEAD
vm-instance 0.1.0 ced8e5b9
vm-instance 0.2.0 4f767ee3
vm-instance 0.3.0 0e728870
vm-instance 0.4.0 af58018a
vm-instance 0.4.1 05857b95
vm-instance 0.5.0 3fa4dd3
vm-instance 0.5.1 HEAD
vpn 0.1.0 f642698
vpn 0.2.0 7151424
vpn 0.3.0 a2bcf100

View File

@@ -17,10 +17,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.5.0
version: 0.8.2
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.1"
appVersion: "0.8.2"

Some files were not shown because too many files have changed in this diff.