Compare commits

...

110 Commits

Author SHA1 Message Date
kklinch0
b1ed061de9 [bugfix] up pgdump version
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-06-19 10:59:46 +03:00
Andrei Kvapil
4479ed5e95 [bugfix] fix monitoring agents hr for tenant clusters (#1079)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Bug Fixes**
- Updated monitoring agents to use the correct namespaces for deployment
and data storage.

- **Chores**
  - Bumped the Kubernetes chart version to 0.24.1.
- Updated the versions map to reflect the latest chart version and
commit references.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-19 09:03:59 +02:00
kklinch0
b16e73ad42 [bugfix] fix monitoring agents hr for tenant clusters
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-06-18 18:38:54 +03:00
Andrei Kvapil
4631f85114 Split testing job into several (#1075)
This patch separates the Test job of the PR workflow into several
smaller jobs: 1) create a testing sandbox and deploy Talos, 2) install
Cozystack and configure it, 3) install managed applications and run e2e
tests. This lets developers shorten the feedback loop if tests are
merely acting flaky and aren't really broken. It's not the right way,
but it's 80/20.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced a multi-stage workflow for environment preparation,
Cozystack installation, application testing, and cleanup.
- Added automated end-to-end scripts for provisioning Talos clusters and
validating Cozystack installations.
- Added new Makefile targets to streamline cluster preparation and
Cozystack installation processes.
- **Bug Fixes**
- Removed obsolete annotation step in application testing to improve
resource handling.
- Added pre-checks and resource cleanup in application testing to
enhance test reliability.
- **Chores**
- Improved workflow structure for enhanced setup and testing
reliability.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-18 13:42:25 +02:00
Timofei Larkin
746641e523 Split testing job into several
This patch separates the Test job of the PR workflow into several
smaller jobs: 1) create a testing sandbox and deploy Talos, 2) install
Cozystack and configure it, 3) install managed applications and run e2e
tests. This lets developers shorten the feedback loop if tests are
merely acting flaky and aren't really broken. It's not the right way,
but it's 80/20.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-17 18:47:09 +03:00
Andrei Kvapil
3ce6dbe850 Release v0.32.0 (#1074)
This PR prepares the release `v0.32.0`.
2025-06-17 11:30:28 +02:00
Andrei Kvapil
8d5007919f [tests] fix awaiting for vm-disk
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-17 10:32:59 +02:00
github-actions
08e569918b Prepare release v0.32.0
Signed-off-by: github-actions <github-actions@github.com>
2025-06-16 23:54:35 +00:00
Andrei Kvapil
6498000721 [tests] VM Disk, VMI, VM, DBs (#1048)
Add 'Apps' tests for:
- Virtual Machine Disk
- Virtual Machine Instance
- Virtual Machine
- PostgreSQL
- MySQL
- ClickHouse

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Tests**
- Added new end-to-end tests for creating and validating VM disks, VM
instances, virtual machines, and multiple database types (PostgreSQL,
MySQL, ClickHouse), ensuring correct provisioning and readiness of these
resources.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-17 01:50:32 +02:00
Andrei Kvapil
8486e6b3aa [kubernetes] Fixes for resources and migration to v0.32.4 (#1073)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-17 01:38:17 +02:00
Andrei Kvapil
3f6b6798f4 [kubernetes] Fixes for resources and migration to v0.32.4
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-17 01:34:54 +02:00
Andrei Kvapil
c1b928b8ef [cluster-api] Add missing migration for capi-providers (#1072)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Introduced a new migration script to update the system version and
manage related resources during the upgrade from version 14 to 15.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-17 01:34:11 +02:00
Andrei Kvapil
c2e8fba483 [cluster-api] Add missing migration for capi-providers
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-17 01:33:58 +02:00
Andrei Kvapil
62cb694d72 Release v0.32.0-beta.2 (#1049)
This PR prepares the release `v0.32.0-beta.2`.
2025-06-16 21:36:04 +02:00
github-actions
c619343aa2 Prepare release v0.32.0-beta.2
Signed-off-by: github-actions <github-actions@github.com>
2025-06-16 19:06:14 +00:00
Ahmad Murzahmatov
75ad26989d [tests] VM Disk, VMI, VM
Add 'Apps' tests for:
- Virtual Machine Disk
- Virtual Machine Instance
- Virtual Machine
- PostgreSQL
- MySQL
- ClickHouse

Signed-off-by: Ahmad Murzahmatov <gwynbleidd2106@yandex.com>
2025-06-16 21:00:22 +02:00
Andrei Kvapil
c4fc8c18df Use library chart with k8s managed app (#1026)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Refactor**
- Updated resource configuration rendering in cluster templates to use
standardized resource handling from a shared library, improving
consistency in resource definitions.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-16 20:55:57 +02:00
Timofei Larkin
8663dc940f Use library chart with k8s managed app
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-16 20:54:30 +02:00
Andrei Kvapil
cf983a8f9c [dashboard] Remove dependency on listing secrets (#1062)
This change includes the following commit
6856b66f92

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Updated the version of a core dependency used in the dashboard and
related services to a newer commit. No user-facing changes.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-16 20:48:01 +02:00
Andrei Kvapil
ad6aa0ca94 Refactor roles and permissions for tenants (#1067)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced advanced Helm template helpers for managing Kubernetes RBAC
(Role-Based Access Control), including access level mapping,
hierarchy-aware group subject generation, and tenant parsing.
- Added dynamic RoleBinding resources across multiple applications to
bind roles to appropriate subjects based on access levels and tenant
namespaces.
- **Bug Fixes**
- Refined tenant application roles by restricting resource permissions
to specific core Kubernetes resources, enhancing security and access
control granularity.
- **Chores**
- Updated chart versions across numerous applications to reflect new
releases.
- Added reference files linking to the shared library in multiple
application chart directories.
- Pinned package versions to specific commits for improved version
stability and tracking.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-16 20:47:09 +02:00
Andrei Kvapil
9dc5d62f47 [dashboard] Remove dependency on listing secrets
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-16 20:32:51 +02:00
Andrei Kvapil
3b8a9f9d2c Configure all apps to use new function to generate subjects
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-16 20:32:11 +02:00
Andrei Kvapil
ab9926a177 Update cozypkg v1.1.0 (#1063) 2025-06-16 20:12:21 +02:00
Andrei Kvapil
f83741eb09 Add extra helper function to generate subjects
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-16 20:11:41 +02:00
Timofei Larkin
028f2e4e8d Add helper function to generate subjects
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-16 19:19:57 +03:00
Andrei Kvapil
255fa8cbe1 [docs] Review the Clickhouse app docs (#1059)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Documentation**
- Improved and clarified documentation for the Managed ClickHouse
Service, including enhanced introductory content and clearer backup
instructions.
- Updated and corrected parameter descriptions for accuracy, especially
regarding shards, replicas, storage sizes, and backup options.
- Expanded explanations and examples for resource configuration in
production environments.
  - Reformatted tables and notes for better readability and usability.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-16 18:14:02 +02:00
Andrei Kvapil
b42f5cdc01 [bugfix] fix distro full bundle (#1056)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Added a new template to automatically create a self-signed
ClusterIssuer for certificate management if one does not already exist.
- **Chores**
- Updated dependency configuration for the snapshot-controller to
simplify its setup process.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-16 18:13:44 +02:00
Andrei Kvapil
74633ad699 Update cozypkg v1.1.0
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-16 18:11:27 +02:00
Nick Volynkin
980185ca2b [docs] Review the Clickhouse app docs
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-06-16 08:40:46 +03:00
Andrei Kvapil
8eabe30548 [platform] Use cozypkg instead of helm (#1057)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced the use of the CozyPkg tool for package deployment and
management, replacing previous Helm-based workflows across installer,
platform, and system components.

- **Refactor**
- Updated Makefiles and scripts to use CozyPkg commands for showing,
applying, diffing, suspending, resuming, and deleting packages.
- Removed dynamic API version handling and simplified deployment command
structures.

- **Chores**
- Updated Docker images to newer base versions and included CozyPkg
installation steps.
- Changed installer image references to use the latest available build.
- Removed obsolete scripts and dependencies related to Helm and
Kustomize.
- Consolidated package installations and updated tooling in Dockerfiles
for improved efficiency.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-14 20:50:12 +02:00
Andrei Kvapil
0c9c688e6d [platform] decrease resources for system applications (#1054)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Added resource constraints for the flux-operator and multiple kube-ovn
components, specifying CPU and memory requests and limits.

- **Improvements**
- Reduced default minimum CPU and memory requests for monitoring and
seaweedfs components, as well as for the Redis master in the dashboard,
to optimize resource usage.

- **Chores**
	- Updated version numbers for monitoring and seaweedfs packages.
	- Refreshed version mappings to reflect new releases.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-14 08:21:32 +02:00
Andrei Kvapil
908c75927e [platform] Use cozypkg instead of helm
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-13 19:02:15 +02:00
Andrei Kvapil
0a1f078384 [docs] Note that Cozystack is a CNCF Sandbox project in the readme (#1055)
Fix a few other things in the readme

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Documentation**
- Updated the README to highlight Cozystack's CNCF Sandbox status and
original sponsorship.
- Moved the user interface screenshot to appear directly after the
introduction.
- Reorganized community information into a dedicated section with
clearer invitations and calendar links for meetings.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-13 12:58:44 +02:00
kklinch0
6a713e5eb4 [bugfix] fix distro full bundle
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-06-13 10:59:14 +03:00
Nick Volynkin
8f0a28bad5 [docs] Note that Cozystack is a CNCF Sandbox project in the readme
Fix a few other things in the readme

Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-06-13 09:47:11 +03:00
kklinch0
0fa70d9d38 [platform] cut resources
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-06-13 01:06:05 +03:00
klinch0
b14c82d606 [bugfix] add-resource-quotas-for-pg-jobs-and-fix-install-generate (#1051)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Added default resource specifications for PostgreSQL jobs to ensure
consistent CPU and memory allocation.
- **Chores**
  - Updated the chart version for the PostgreSQL application.
  - Refreshed version mapping to reflect the latest release.
- Improved Node.js setup and package installation in the pre-commit
workflow.
- **Tests**
- Increased memory allocation for QEMU virtual machines in end-to-end
tests.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-12 10:10:26 +03:00
kklinch0
8e79f24c5b add rq for pg
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-06-11 22:48:09 +03:00
Andrei Kvapil
3266a5514e Get instance type when reconciling WorkloadMonitor (#1030)
When the WorkloadMonitor is reconciled and child Workload objects are
created, they will now get additional labels in the
`workloads.cozystack.io` namespace, containing metadata about the
workload. This particular commit checks if a pod targeted by a Workload
is owned by a VirtualMachineInstance (i.e. it launches a KubeVirt VMI)
and, if so, gets the VMI instance type and puts it in the
`kubevirt-vmi-instance-type` label.
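
For illustration, a child Workload might then carry the new label roughly as follows (a sketch: the apiVersion and other fields are assumptions; only the `workloads.cozystack.io` label namespace and the `kubevirt-vmi-instance-type` key come from the commit message):

```yaml
apiVersion: cozystack.io/v1alpha1   # assumed API group/version for Workload
kind: Workload
metadata:
  name: example-vm-workload
  labels:
    # label key from the commit message; the value is illustrative
    workloads.cozystack.io/kubevirt-vmi-instance-type: u1.medium
```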

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Workload objects created for Pods now include additional labels
extracted from their owner references, specifically for
VirtualMachineInstance resources.
- If a VirtualMachineInstance has a relevant annotation, its instance
type is now reflected as a label on the associated Workload.
- **Chores**
- Updated and added several dependencies to improve compatibility and
maintainability.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-11 12:55:42 +02:00
Andrei Kvapil
0c37323a15 [kubernetes] Update Kubevirt-CCM (#1052)
Fixes panic, upstream issue:

- https://github.com/kubevirt/cloud-provider-kubevirt/pull/354

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Bug Fixes**
- Improved filtering and error handling for endpoints and virtual
machines with missing or invalid data, ensuring only valid endpoints are
processed.
- **New Features**
- Enhanced support for multi-cluster environments by introducing cluster
name filtering for service and endpoint management.
- **Tests**
- Added new tests to verify correct handling of endpoints and services
across clusters and improved coverage for edge cases.
- **Chores**
- Updated Kubernetes app and image versions for improved tracking and
deployment consistency.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-11 12:55:06 +02:00
Andrei Kvapil
10af98e158 [kubernetes] Update Kubevirt-CCM
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-11 10:30:13 +02:00
Andrei Kvapil
632224a30a Update Kube-OVN v1.13.13 and enable db healthcheck (#1047)
This PR updates Kube-OVN to the latest version and also includes fix
https://github.com/kubeovn/kube-ovn/pull/5294

Ref
https://github.com/kubeovn/kube-ovn/issues/5125#issuecomment-2921920661

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-10 13:56:31 +02:00
Andrei Kvapil
e8d11e64a6 Update Metallb v0.15.2 (#1045)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Added new configuration options to exclude specific address pools from
Prometheus alerts for address pool exhaustion and usage.
- Introduced a new CRD for ServiceBGPStatus to provide detailed BGP peer
status per service and node.
- Added new status fields to track assigned and available IPv4/IPv6
addresses in IPAddressPool.

- **Improvements**
  - Updated Helm chart and dependency versions to the latest releases.
- Enhanced validation for speaker configuration to prevent invalid
settings.
  - Clarified configuration descriptions for easier understanding.
- Increased file descriptor limits for FRR daemons to improve
reliability.
- Simplified Docker image handling by using pre-built MetalLB images
instead of local builds.

- **Bug Fixes**
- Updated RBAC roles to grant necessary permissions for new resources
and status updates.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-10 13:36:40 +02:00
Andrei Kvapil
27c7a2feb5 Update Cilium v1.17.4 (#1046)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Added a new configuration option to require Kubernetes connectivity in
liveness probes.
  - Enabled Kafka API key redaction by default in Hubble settings.

- **Bug Fixes**
- Improved conditional logic for resource creation to prevent
unnecessary resources during preflight mode.
  - Corrected YAML indentation and formatting in configuration files.

- **Chores**
- Upgraded Cilium and related component images from version 1.17.3 to
1.17.4.
- Updated documentation and default configuration values to reflect new
versions and settings.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-10 11:53:33 +02:00
Andrei Kvapil
9555386bd7 Release v0.32.0-beta.1 (#1044)
This PR prepares the release `v0.32.0-beta.1`.
2025-06-10 11:37:31 +02:00
Andrei Kvapil
9733de38a3 Update Kube-OVN v1.13.13 and enable db healthcheck
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-10 11:33:19 +02:00
Andrei Kvapil
775a05cc3a Update Metallb v0.15.2
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-10 11:13:36 +02:00
Andrei Kvapil
4e5cc2ae61 Update Cilium v1.17.4
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-10 11:03:47 +02:00
github-actions
32adf5ab38 Prepare release v0.32.0-beta.1
Signed-off-by: github-actions <github-actions@github.com>
2025-06-10 08:28:28 +00:00
Andrei Kvapil
28302e776e [ci] fix clickhouse version parsing
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-10 10:22:06 +02:00
Timofei Larkin
911ca64de0 Get instance type when reconciling WorkloadMonitor
When the WorkloadMonitor is reconciled and child Workload objects are
created, they will now get additional labels in the
`workloads.cozystack.io` namespace, containing metadata about the
workload. This particular commit checks if a pod targeted by a Workload
is owned by a VirtualMachineInstance (i.e. it launches a KubeVirt VMI)
and, if so, gets the VMI instance type and puts it in the
`kubevirt-vmi-instance-type` label.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-10 11:17:40 +03:00
Andrei Kvapil
045ea76539 [platform] Introduce cluster-domain option and unhardcode cozy.local (#1039)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Dynamic cluster domain configuration is now propagated to multiple
components, allowing them to use the cluster domain value from a central
ConfigMap instead of a hardcoded value.
- The cluster domain is now injected into ClickHouse, Kubernetes, NATS,
Keycloak, and various operator releases for improved flexibility and
consistency.

- **Chores**
- Updated chart versions for ClickHouse, Kubernetes, and NATS
applications.
- Refreshed version references in the versions map to reflect the latest
releases.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-10 10:12:11 +02:00
Andrei Kvapil
cee820e82c [platform] Introduce cluster-domain option and unhardcode cozy.local
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-10 10:11:09 +02:00
Andrei Kvapil
6183b715b7 [dashboard] Cumulative update (#1042)
This PR includes fixes and updates for the Cozystack dashboard:

### [fix client rate
limiter](b1467cecc1)

Fixes the error `client rate limiter Wait returned an error: context
canceled`.
The QPS and Burst options were set after the Kubernetes client was
initialized, so they had no effect.

The limits are also increased fivefold:

```diff
-         - --kube-api-qps=50.0
-         - --kube-api-burst=100
+         - --kube-api-qps=250.0
+         - --kube-api-burst=500
```


### [fix relative
urls](e2153e26dd)

Fixes a regression introduced in
https://github.com/cozystack/cozystack/pull/935, which removed the
previous workaround from https://github.com/cozystack/cozystack/pull/102.

This PR prepares the proper fix.

Related to upstream issue
https://github.com/vmware-tanzu/kubeapps/issues/7740

### [remove version
selector](f412a6aba4)

from both the package installation page and the upgrade page
<img width="505" alt="Screenshot 2025-06-10 at 1 47 10"
src="https://github.com/user-attachments/assets/36068264-2878-4b82-a159-6c911f1c1eef"
/>

Now it will always default to the latest package version.

### [always fetch details from the latest
version](741a7ddb93)

If an old package version is installed, the dashboard will display
information from the latest package in the repository. Together with the
previous fix, this removes the need for the versions_map logic and for
packing multiple charts per release, while still informing users about
newer versions and letting them upgrade on demand at a convenient time:

<img width="423" alt="Screenshot 2025-06-10 at 1 52 53"
src="https://github.com/user-attachments/assets/dd571c9f-c2bc-403f-9aa0-3d8853600241"
/>

### [Remove plugin name from
header](ffc0b0246b)

We always use Flux anyway.

<img width="386" alt="Screenshot 2025-06-10 at 1 55 39"
src="https://github.com/user-attachments/assets/df6f52b5-82ab-4e7a-a973-2a82eb38ebfb"
/>

### [Fix switching context from app
view](d89e721fcb)

Fixes the error shown when switching tenants from the application view:

```
An error occurred while fetching the application: Unable to get installed package.
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Added new configuration options for API request rate limits in the
dashboard settings.

- **Style**
- Updated dashboard appearance to hide version information and specific
label elements.

- **Chores**
- Updated internal references to the latest version of the dashboard
source code.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-10 10:04:38 +02:00
Andrei Kvapil
2669ab6072 [dashboard] Cumulative update
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-10 02:00:22 +02:00
Andrei Kvapil
96506c7cce [kafka] specify minimal working resource presets (#1040)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Updated default resource presets for Kafka (now "small") and ZooKeeper
(now "micro") to provide improved baseline resources.
- **Documentation**
- Updated documentation to reflect new default resource presets for
Kafka and ZooKeeper.
- **Chores**
- Incremented Kafka chart version to 0.6.1 and updated version mapping.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-10 00:34:28 +02:00
Andrei Kvapil
7bb70c839e [cozystack-controller] Introduce system helm reconciler (#1033)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced automated monitoring of key configuration changes to ensure
system applications are promptly updated when relevant settings are
modified.
- **Bug Fixes**
- Improved error logging for controller setup to display accurate
controller names and ensure consistent error handling.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-09 22:56:28 +02:00
Andrei Kvapil
ba97a4593c [kafka] specify minimal working resource presets
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-09 15:52:08 +02:00
Andrei Kvapil
c467ed798a Update flux-operator to 0.22.0, Flux to 2.6.x (#1035)
Flux 2.6.1 is the latest Flux release now

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Enhanced validation for custom resources to ensure consistent naming
and conditional field requirements.
- Added support for referencing input providers using label selectors,
and expanded input provider types.
	- Extended reporting with new cluster information fields.

- **Bug Fixes**
- Improved schema constraints to prevent invalid or inconsistent
resource configurations.

- **Documentation**
- Updated version information in documentation and Helm chart metadata
to reflect the latest release.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-09 12:39:57 +02:00
Andrei Kvapil
ed881f0741 [Fix] CloudInit for VM Instance (#1020)
Made the same changes as in
[PR #1019](https://github.com/cozystack/cozystack/pull/1019)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Added support for defining a system disk with customizable storage and
image sources for virtual machines.
- **Improvements**
- Enhanced cloud-init configuration to require both SSH keys and
cloud-init data for certain volume setups, improving user data handling.
- Simplified disk configuration for virtual machines, making setup more
straightforward.
- Shortened and clarified error messages for missing configuration
fields.
- **Chores**
- Updated chart and package versions for virtual-machine and vm-instance
applications.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-09 12:37:22 +02:00
Andrei Kvapil
0e0dabdd08 [platform] Fix deps for paas-hosted bundle
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-09 12:36:32 +02:00
Andrei Kvapil
bd8f8bde95 [e2e] Increase timeout awaiting for machines (#1038)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Increased the timeout duration for waiting on specific resources
during end-to-end testing, improving reliability for longer operations.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-09 12:36:14 +02:00
Andrei Kvapil
646dab497c [e2e] Increase timeout awaiting for machines
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-09 12:34:54 +02:00
Andrei Kvapil
dc3b61d164 [cozystack-controller] Fix RBAC for annotating namespaces (#1031)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Expanded permissions for managing namespaces, now allowing patch and
update actions in addition to viewing and listing.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-09 10:16:27 +02:00
Andrei Kvapil
4479a038cd [platform] Fix deps for paas-hosted bundle (#1034)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
  - Introduced a new release for object storage management.

- **Improvements**
- Updated dependencies for certain platform components to simplify
deployment.
- Network policy templates are now only applied when supported by the
cluster.

- **Chores**
  - Removed obsolete monitoring namespace configurations.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-09 10:16:12 +02:00
Andrei Kvapil
dfd01ff118 [platform] Fix deps for paas-hosted bundle
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-09 10:12:06 +02:00
Kingdon B
d2bb66db31 bump Flux to 2.6
Signed-off-by: Kingdon B <kingdon@urmanac.com>
2025-06-08 10:27:01 -04:00
Kingdon B
7af97e2d9f Update flux-operator to 0.22.0
Signed-off-by: Kingdon B <kingdon@urmanac.com>
2025-06-06 19:07:21 -04:00
Andrei Kvapil
ac5145be87 [cozystack-controller] Fix RBAC for annotating namespaces
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-06 15:45:35 +02:00
Timofei Larkin
4779db2dda Use global scope for .Release object (#1032)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Bug Fixes**
- Improved reliability of secret references in Kubernetes cluster
templates to ensure correct retrieval and usage of release-specific
secrets.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-06 15:56:23 +03:00
Timofei Larkin
25c2774bc8 Use global scope for .Release object
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-06 15:14:46 +03:00
Ahmad Murzahmatov
bbee8103eb [fix] vm-instance: cloud-init
Made the same change as in
[PR #1019](https://github.com/cozystack/cozystack/pull/1019)

Signed-off-by: Ahmad Murzahmatov <gwynbleidd2106@yandex.com>
2025-06-06 16:52:54 +06:00
klinch0
730ea4d5ef [fix] CloudInit (#1019)
If an SSH key is provided - deploy it.
If cloud-init is provided - deploy it.
If both are provided - deploy both.
If neither is provided - initialize an empty cloud-init config to avoid
network issues.
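
A hypothetical Helm-template skeleton of this selection logic (the value names `sshKeys` and `cloudInit` are assumptions, not taken from the chart):

```yaml
{{- /* Sketch only: the four cases from the commit message */}}
{{- if and .Values.sshKeys .Values.cloudInit }}
  {{- /* deploy both: merge SSH keys into the user-provided cloud-init */}}
{{- else if .Values.sshKeys }}
  {{- /* deploy the SSH keys only */}}
{{- else if .Values.cloudInit }}
  {{- /* deploy the user cloud-init only */}}
{{- else }}
  {{- /* render an empty "#cloud-config" so the VM network still comes up */}}
{{- end }}
```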

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Refactor**
- Improved handling of SSH keys and cloud-init data in the Virtual
Machine setup, clearly distinguishing cases when SSH keys, cloud-init,
or both are provided.
  - Enhanced template readability with added spacing for better clarity.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-05 15:53:08 +03:00
klinch0
13fccdc465 bump tenant version (#1028) 2025-06-05 15:44:53 +03:00
kklinch0
f1b66c80f6 bump tenant version
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-06-05 15:40:45 +03:00
klinch0
f34f140d49 Add RBAC rules to allow portforward in kubevirt for SSH via virtctl (#1027)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Expanded user permissions to allow port forwarding for virtual machine
instances, enabling enhanced remote access capabilities.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-05 11:07:53 +03:00
mattia
520fbfb2e4 Add RBAC rules to allow portforward in kubevirt for SSH via virtctl
Signed-off-by: mattia <mattia@hidora.io>
2025-06-05 09:38:40 +02:00
klinch0
25016580c1 (k8s) configure containerd for client k8s cluster (#979)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced granular Helm charts for Cluster API providers: bootstrap,
core, control plane, and infrastructure, each with dedicated
configuration, metadata, and compressed component packaging.
- Added a new configuration option to the Kubernetes app to enable using
a custom secret for patching containerd.
- Enhanced Kubernetes deployment to conditionally manage containerd
registry certificates and configuration using custom or copied secrets.

- **Documentation**
- Updated Kubernetes app documentation to include the new containerd
patching secret configuration option.

- **Chores**
- Updated version mappings and chart versions for Kubernetes and Cluster
API-related components.
- Decomposed the monolithic Cluster API provider release into multiple,
more manageable releases with explicit namespaces and dependencies.

- **Refactor**
- Removed the previous unified Cluster API provider template in favor of
new, separate provider resource definitions.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-04 11:07:58 +03:00
kklinch0
f10f8455fc (k8s) configure containerd for client k8s cluster 2025-06-04 10:40:10 +03:00
Timofei Larkin
974581d39b [monitoring-agents] Add events and audit inputs (#948)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Enhanced log monitoring by adding support for Kubernetes events and
audit logs.
  - Introduced custom log parsers for improved log format handling.
  - Added log source tagging for easier identification of log origins.

- **Improvements**
- Refined log filtering and output formatting for better log
organization and delivery.
- Updated log outputs to support compressed JSON lines and ISO8601 date
formatting.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-04 10:33:58 +03:00
Timofei Larkin
7e24297913 Use library chart for resource management (#1025)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Introduced a shared library for resource configuration management
across multiple application charts.

- **Refactor**
- Updated resource configuration handling in several application charts
to use new centralized helpers for improved consistency and
sanitization.

- **Chores**
- Added references to the shared library in various application chart
directories.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-04 09:42:31 +03:00
Timofei Larkin
b6142cd4f5 Use library chart for resource management
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-04 09:05:21 +03:00
Timofei Larkin
e87994c769 Capture all resources by WorkloadMonitors (#1024)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced WorkloadMonitor resources for tcp-balancer, vm-disk, and
VPN applications, enabling enhanced workload monitoring capabilities.

- **Bug Fixes**
- Standardized Kubernetes resource labels across multiple applications
for improved consistency and compatibility.

- **Chores**
- Updated chart versions for several applications, including ClickHouse,
FerretDB, http-cache, MySQL, Postgres, Redis, tcp-balancer,
virtual-machine, vm-disk, vm-instance, and VPN.
- Updated Docker image reference for the installer to use the latest
version.
  - Refreshed internal version mappings for multiple packages.
- Added standardized instance labels to Kubernetes resources across
multiple applications for better tracking and management.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-03 16:15:46 +03:00
Timofei Larkin
b140f1b57f Capture all resources by WorkloadMonitors
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-03 15:40:27 +03:00
Timofei Larkin
64936021d2 Release v0.31.0-rc.3 (#991)
This PR prepares the release `v0.31.0-rc.3`.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-03 14:10:11 +06:00
Andrei Kvapil
a887e19e6c Capture all resources by WorkloadMonitors (#1018)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Introduced monitoring resources for HAProxy, NGINX, and generic HTTP
cache workloads, allowing improved workload observability.
- **Enhancements**
- Added standardized labels to MariaDB, Postgres, and Redis resources
for better integration and management within Kubernetes environments.
- Updated label selectors in Postgres resources to use standardized
Kubernetes app labels.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-02 09:28:30 +02:00
Andrei Kvapil
92b97a569e Fixed Gateway API manifest (#1016)
In the current version of Cozystack, Flux's HelmRelease refuses to
install cozy-gateway-api-crds when gatewayAPI is enabled, complaining
that version '*' is not found and breaking the install of the entire
kubernetes app. This patch adds a working version match.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Updated configuration to allow compatibility with all available
versions of the gateway-api-crds chart.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-02 09:25:25 +02:00
Timofei Larkin
0e22358b30 Capture all resources by WorkloadMonitors
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-02 09:44:27 +03:00
Zdenek Deu Janda
7429daf99c Fixed Gateway API manifest
Signed-off-by: Zdenek Deu Janda <zdenek.janda@cloudevelops.com>
2025-06-01 23:49:42 +02:00
Andrei Kvapil
b470b82e2a [tests] Fix concurrency for docker login action (#1014)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Chores**
- Updated workflow steps to use a job-specific temporary directory for
Docker configuration during build and container registry login
processes. This enhances isolation of Docker credentials in CI jobs.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-05-30 16:59:11 +02:00
Andrei Kvapil
a0700e7399 [tests] Fix concurrency for docker login action
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-05-30 16:49:21 +02:00
Andrei Kvapil
228e1983bc Let users specify CPU requests in VCPUs (#972)
With this change, a request for a virtual machine with 3 vCPUs will
reserve exactly the same amount of physical compute as a request for a
ClickHouse instance with `{"resources": {"cpu": "3"}}` in its values,
with the scaling factor being KubeVirt's CPU allocation ratio.
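
A hedged numeric sketch, assuming an allocation ratio of 10 (the actual ratio is set per deployment):

```yaml
# With cpu-allocation-ratio "10", a 3-vCPU virtual machine and a
# ClickHouse instance declaring cpu: "3" both end up with the same
# physical reservation: requests.cpu = 3 / 10 = 300m (the limit stays 3).
resources:
  cpu: "3"
```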

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced configurable CPU allocation ratio for resource management,
allowing CPU requests to be scaled relative to limits.
- Added new templates for input validation and automatic loading of
configuration from Kubernetes ConfigMaps.

- **Bug Fixes**
- Improved resource sanitization and preset logic to handle CPU and
memory requests/limits more accurately.

- **Chores**
- Updated chart dependencies and versioning to reflect changes in
library usage.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-05-30 12:50:21 +02:00
Timofei Larkin
7023abdba7 Let users specify CPU requests in VCPUs
With this change, a request for a virtual machine with 3 vCPUs will
reserve exactly the same amount of physical compute as a request for a
ClickHouse instance with `{"resources": {"cpu": "3"}}` in its values,
with the scaling factor being KubeVirt's CPU allocation ratio.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-05-30 00:55:19 +02:00
Timofei Larkin
1b43a5f160 Remove user-facing config of limits and requests
This patch introduces reusable library charts that provide backward
compatibility for users who specify their resources as explicit requests
and limits. This input is processed so that limits are set equal to
requests, except for CPU, which only gets requests. Users can now adopt
the new form by specifying resources directly at the first level of
nesting (e.g. `resources.cpu=100m` instead of
`.resources.requests.cpu=100m`). The order of precedence is top-level,
then requests, then limits, ensuring that nothing breaks in terms of
scheduling. However, workloads that specified limits much higher than
requests might take a performance hit, since they can no longer use the
excess capacity. This should only affect memory-hungry workloads in
low-contention environments.
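
A minimal values sketch of both forms under the stated precedence (behavior inferred from this commit message; the exact chart logic may differ):

```yaml
# New top-level form:
resources:
  cpu: 100m      # becomes requests.cpu only; no CPU limit is set
  memory: 512Mi  # becomes both requests.memory and limits.memory

# Legacy form, still accepted for backward compatibility:
# resources:
#   requests:
#     cpu: 100m
#     memory: 512Mi
#   limits:
#     memory: 1Gi  # limits are realigned to equal the requests
#
# Precedence when several are given: top-level, then requests, then limits.
```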

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-05-30 00:55:19 +02:00
Andrei Kvapil
20f4066c16 [tests] fix: increase qemu system disk size (#1011)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-05-30 00:54:48 +02:00
Andrei Kvapil
ea0dd68e84 [tests] fix: increase qemu system disk size
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-05-30 00:54:09 +02:00
Andrei Kvapil
e0c3d2324f [ci] Split build artefacts (#1010)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Improved workflow for pull requests by separating artifact uploads and
downloads, resulting in clearer and more organized handling of installer
and image files during build and test processes.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-05-30 00:53:54 +02:00
Andrei Kvapil
cb303d694c [ci] Split build artefacts
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-05-30 00:48:20 +02:00
Andrei Kvapil
6130f43d06 Release v0.31.1 (#1008)
This PR prepares the release `v0.31.1`.
2025-05-30 00:18:28 +02:00
Andrei Kvapil
4db55ac5eb [ci] Add Github token to fetch draft releases
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-05-30 00:16:03 +02:00
github-actions
bfd20a5e0e Prepare release v0.31.1
Signed-off-by: github-actions <github-actions@github.com>
2025-05-29 23:44:58 +02:00
Andrei Kvapil
977141bed3 [ci] Fix download released artifacts (#1009)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-05-29 23:42:32 +02:00
Andrei Kvapil
c4f8d6a251 [ci] Fix download released artifacts
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-05-29 23:42:21 +02:00
Andrei Kvapil
9633ca4d25 Update Talos Linux v1.10.3 and fix assets (#1006)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Installer artifacts now include an additional asset, improving the
completeness of installation resources.

- **Bug Fixes**
- End-to-end tests and cluster setup now verify the presence of all
required installer asset files, reducing setup errors.

- **Chores**
- Updated installer and system extension images to newer versions for
improved stability and compatibility.
- Improved build and test workflows to handle multiple installer assets
and streamline artifact management.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-05-29 23:27:12 +02:00
Andrei Kvapil
f798cbd9f9 Update Talos Linux v1.10.3
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-05-29 23:18:53 +02:00
Andrei Kvapil
cf87779f7b [ci] separate build and testing jobs (#1005)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Chores**
- Improved pull request workflow by separating build and test phases,
enhancing reliability and maintainability of automated checks.
- Updated testing process to use a pre-generated installer artifact,
streamlining test execution and environment setup.
- Enhanced release workflow to generate manifests before running tests,
ensuring up-to-date configurations during verification.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-05-29 18:41:40 +02:00
Andrei Kvapil
c69135e0e5 [ci] separate build and testing jobs
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-05-29 17:44:50 +02:00
Nick Volynkin
a9c3a4c601 [docs] Write a full release post for v0.31.0 (#999)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Documentation**
- Expanded and restructured the changelog for v0.31.0 to provide
detailed information on new features, improvements, bug fixes, testing
updates, CI/CD changes, and community contributions. The changelog now
offers clearer insight into the release contents and lifecycle.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-05-29 15:34:02 +07:00
Nick Volynkin
d1081c86b3 [docs] Write a full release post for v0.31.0
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-05-29 10:05:53 +03:00
kevin880202
fc8b52d73d reset and add audit/event monitoring in fluentbit values
Signed-off-by: kevin880202 <dytoponts11@gmail.com>
2025-05-21 22:07:27 +08:00
261 changed files with 51486 additions and 1007 deletions

View File

@@ -30,12 +30,13 @@ jobs:
run: |
sudo apt update
sudo apt install curl -y
curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash -
sudo apt install nodejs -y
git clone https://github.com/bitnami/readme-generator-for-helm
sudo apt install npm -y
git clone --branch 2.7.0 --depth 1 https://github.com/bitnami/readme-generator-for-helm.git
cd ./readme-generator-for-helm
npm install
npm install -g pkg
npm install -g @yao-pkg/pkg
pkg . -o /usr/local/bin/readme-generator
- name: Run pre-commit hooks

View File

@@ -16,7 +16,6 @@ jobs:
contents: read
packages: write
# Run only when the PR carries the "release" label and not closed.
if: |
contains(github.event.pull_request.labels.*.name, 'release') &&
github.event.action != 'closed'
@@ -35,6 +34,64 @@ jobs:
password: ${{ secrets.GITHUB_TOKEN }}
registry: ghcr.io
- name: Extract tag from PR branch
id: get_tag
uses: actions/github-script@v7
with:
script: |
const branch = context.payload.pull_request.head.ref;
const m = branch.match(/^release-(\d+\.\d+\.\d+(?:[-\w\.]+)?)$/);
if (!m) {
core.setFailed(`❌ Branch '${branch}' does not match 'release-X.Y.Z[-suffix]'`);
return;
}
const tag = `v${m[1]}`;
core.setOutput('tag', tag);
- name: Find draft release and get asset IDs
id: fetch_assets
uses: actions/github-script@v7
with:
github-token: ${{ secrets.GH_PAT }}
script: |
const tag = '${{ steps.get_tag.outputs.tag }}';
const releases = await github.rest.repos.listReleases({
owner: context.repo.owner,
repo: context.repo.repo,
per_page: 100
});
const draft = releases.data.find(r => r.tag_name === tag && r.draft);
if (!draft) {
core.setFailed(`Draft release '${tag}' not found`);
return;
}
const findAssetId = (name) =>
draft.assets.find(a => a.name === name)?.id;
const installerId = findAssetId("cozystack-installer.yaml");
const diskId = findAssetId("nocloud-amd64.raw.xz");
if (!installerId || !diskId) {
core.setFailed("Missing required assets");
return;
}
core.setOutput("installer_id", installerId);
core.setOutput("disk_id", diskId);
- name: Download assets from GitHub API
run: |
mkdir -p _out/assets
curl -sSL \
-H "Authorization: token ${GH_PAT}" \
-H "Accept: application/octet-stream" \
-o _out/assets/cozystack-installer.yaml \
"https://api.github.com/repos/${GITHUB_REPOSITORY}/releases/assets/${{ steps.fetch_assets.outputs.installer_id }}"
curl -sSL \
-H "Authorization: token ${GH_PAT}" \
-H "Accept: application/octet-stream" \
-o _out/assets/nocloud-amd64.raw.xz \
"https://api.github.com/repos/${GITHUB_REPOSITORY}/releases/assets/${{ steps.fetch_assets.outputs.disk_id }}"
env:
GH_PAT: ${{ secrets.GH_PAT }}
- name: Run tests
run: make test

View File

@@ -9,8 +9,8 @@ concurrency:
cancel-in-progress: true
jobs:
e2e:
name: Build and Test
build:
name: Build
runs-on: [self-hosted]
permissions:
contents: read
@@ -33,9 +33,125 @@ jobs:
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
registry: ghcr.io
env:
DOCKER_CONFIG: ${{ runner.temp }}/.docker
- name: Build
run: make build
env:
DOCKER_CONFIG: ${{ runner.temp }}/.docker
- name: Test
run: make test
- name: Build Talos image
run: make -C packages/core/installer talos-nocloud
- name: Upload installer
uses: actions/upload-artifact@v4
with:
name: cozystack-installer
path: _out/assets/cozystack-installer.yaml
- name: Upload Talos image
uses: actions/upload-artifact@v4
with:
name: talos-image
path: _out/assets/nocloud-amd64.raw.xz
prepare_env:
name: Prepare environment
runs-on: [self-hosted]
needs: build
# Never run when the PR carries the "release" label.
if: |
!contains(github.event.pull_request.labels.*.name, 'release')
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
fetch-tags: true
- name: Download installer
uses: actions/download-artifact@v4
with:
name: cozystack-installer
path: _out/assets/
- name: Download Talos image
uses: actions/download-artifact@v4
with:
name: talos-image
path: _out/assets/
- name: Set sandbox ID
run: echo "SANDBOX_NAME=cozy-e2e-sandbox-$(echo "${GITHUB_REPOSITORY}:${GITHUB_WORKFLOW}:${GITHUB_REF}" | sha256sum | cut -c1-10)" >> $GITHUB_ENV
- name: Prepare environment
run: make SANDBOX_NAME=$SANDBOX_NAME prepare-env
install_cozystack:
name: Install Cozystack
runs-on: [self-hosted]
needs: prepare_env
# Never run when the PR carries the "release" label.
if: |
!contains(github.event.pull_request.labels.*.name, 'release')
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
fetch-tags: true
- name: Set sandbox ID
run: echo "SANDBOX_NAME=cozy-e2e-sandbox-$(echo "${GITHUB_REPOSITORY}:${GITHUB_WORKFLOW}:${GITHUB_REF}" | sha256sum | cut -c1-10)" >> $GITHUB_ENV
- name: Install Cozystack
run: make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME install-cozystack
test_apps:
name: Test applications
runs-on: [self-hosted]
needs: install_cozystack
# Never run when the PR carries the "release" label.
if: |
!contains(github.event.pull_request.labels.*.name, 'release')
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
fetch-tags: true
- name: Set sandbox ID
run: echo "SANDBOX_NAME=cozy-e2e-sandbox-$(echo "${GITHUB_REPOSITORY}:${GITHUB_WORKFLOW}:${GITHUB_REF}" | sha256sum | cut -c1-10)" >> $GITHUB_ENV
- name: E2E Apps
run: make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME test-apps
cleanup:
name: Tear down environment
runs-on: [self-hosted]
needs: test_apps
# Never run when the PR carries the "release" label.
if: |
!contains(github.event.pull_request.labels.*.name, 'release')
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
fetch-tags: true
- name: Set sandbox ID
run: echo "SANDBOX_NAME=cozy-e2e-sandbox-$(echo "${GITHUB_REPOSITORY}:${GITHUB_WORKFLOW}:${GITHUB_REF}" | sha256sum | cut -c1-10)" >> $GITHUB_ENV
- name: E2E Apps
run: make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME delete

View File

@@ -99,11 +99,15 @@ jobs:
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
registry: ghcr.io
env:
DOCKER_CONFIG: ${{ runner.temp }}/.docker
# Build project artifacts
- name: Build
if: steps.check_release.outputs.skip == 'false'
run: make build
env:
DOCKER_CONFIG: ${{ runner.temp }}/.docker
# Commit built artifacts
- name: Commit release artifacts

View File

@@ -43,12 +43,16 @@ manifests:
(cd packages/core/installer/; helm template -n cozy-installer installer .) > _out/assets/cozystack-installer.yaml
assets:
make -C packages/core/installer/ assets
make -C packages/core/installer assets
test:
make -C packages/core/testing apply
make -C packages/core/testing test
prepare-env:
make -C packages/core/testing apply
make -C packages/core/testing prepare-cluster
generate:
hack/update-codegen.sh

View File

@@ -12,11 +12,15 @@
**Cozystack** is a free PaaS platform and framework for building clouds.
Cozystack is a [CNCF Sandbox Level Project](https://www.cncf.io/sandbox-projects/) that was originally built and sponsored by [Ænix](https://aenix.io/).
With Cozystack, you can transform a bunch of servers into an intelligent system with a simple REST API for spawning Kubernetes clusters,
Database-as-a-Service, virtual machines, load balancers, HTTP caching services, and other services with ease.
Use Cozystack to build your own cloud or provide a cost-effective development environment.
![Cozystack user interface](https://cozystack.io/img/screenshot.png)
## Use-Cases
* [**Using Cozystack to build a public cloud**](https://cozystack.io/docs/guides/use-cases/public-cloud/)
@@ -28,9 +32,6 @@ You can use Cozystack as a platform to build a private cloud powered by Infrastr
* [**Using Cozystack as a Kubernetes distribution**](https://cozystack.io/docs/guides/use-cases/kubernetes-distribution/)
You can use Cozystack as a Kubernetes distribution for Bare Metal
## Screenshot
![Cozystack screenshot](https://cozystack.io/img/screenshot.png)
## Documentation
@@ -59,7 +60,10 @@ Commits are used to generate the changelog, and their author will be referenced
If you have **Feature Requests** please use the [Discussion's Feature Request section](https://github.com/cozystack/cozystack/discussions/categories/feature-requests).
You are welcome to join our weekly community meetings (just add this events to your [Google Calendar](https://calendar.google.com/calendar?cid=ZTQzZDIxZTVjOWI0NWE5NWYyOGM1ZDY0OWMyY2IxZTFmNDMzZTJlNjUzYjU2ZGJiZGE3NGNhMzA2ZjBkMGY2OEBncm91cC5jYWxlbmRhci5nb29nbGUuY29t) or [iCal](https://calendar.google.com/calendar/ical/e43d21e5c9b45a95f28c5d649c2cb1e1f433e2e653b56dbbda74ca306f0d0f68%40group.calendar.google.com/public/basic.ics)) or [Telegram group](https://t.me/cozystack).
## Community
You are welcome to join our [Telegram group](https://t.me/cozystack) and come to our weekly community meetings.
Add them to your [Google Calendar](https://calendar.google.com/calendar?cid=ZTQzZDIxZTVjOWI0NWE5NWYyOGM1ZDY0OWMyY2IxZTFmNDMzZTJlNjUzYjU2ZGJiZGE3NGNhMzA2ZjBkMGY2OEBncm91cC5jYWxlbmRhci5nb29nbGUuY29t) or [iCal](https://calendar.google.com/calendar/ical/e43d21e5c9b45a95f28c5d649c2cb1e1f433e2e653b56dbbda74ca306f0d0f68%40group.calendar.google.com/public/basic.ics) for convenience.
## License

View File

@@ -194,7 +194,15 @@ func main() {
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "Workload")
setupLog.Error(err, "unable to create controller", "controller", "TenantHelmReconciler")
os.Exit(1)
}
if err = (&controller.CozystackConfigReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "CozystackConfigReconciler")
os.Exit(1)
}

View File

@@ -1,39 +1,129 @@
This is the third release candidate for the upcoming Cozystack v0.31.0 release.
The release notes show changes accumulated since the release of previous version, Cozystack v0.30.0.
Cozystack v0.31.0 is a significant release that brings new features, key fixes, and updates to underlying components.
This version enhances GPU support, improves many components of Cozystack, and introduces a more robust release process to improve stability.
Below, we'll go over the highlights in each area for current users, developers, and our community.
Cozystack 0.31.0 further advances GPU support, monitoring, and all-around convenience features.
## Major Features and Improvements
## New Features and Changes
### GPU support for tenant Kubernetes clusters
Cozystack now integrates NVIDIA GPU Operator support for tenant Kubernetes clusters.
This enables platform users to run GPU-powered AI/ML applications in their own clusters.
To enable GPU Operator, set `addons.gpuOperator.enabled: true` in the cluster configuration.
(@kvaps in https://github.com/cozystack/cozystack/pull/834)
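
For example, in the tenant cluster's configuration (minimal snippet; surrounding values omitted):

```yaml
addons:
  gpuOperator:
    enabled: true
```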
Check out Andrei Kvapil's CNCF webinar [showcasing the GPU support by running Stable Diffusion in Cozystack](https://www.youtube.com/watch?v=S__h_QaoYEk).
<!--
* [kubernetes] Introduce GPU support for tenant Kubernetes clusters. (@kvaps in https://github.com/cozystack/cozystack/pull/834)
-->
### Cilium Improvements
Cozystack's Cilium integration received two significant enhancements.
First, Gateway API support in Cilium is now enabled, allowing advanced L4/L7 routing features via Kubernetes Gateway API.
We thank Zdenek Janda @zdenekjanda for contributing this feature in https://github.com/cozystack/cozystack/pull/924.
Second, Cozystack now permits custom user-provided parameters in the tenant cluster's Cilium configuration.
(@lllamnyp in https://github.com/cozystack/cozystack/pull/917)
<!--
* [cilium] Enable Cilium Gateway API. (@zdenekjanda in https://github.com/cozystack/cozystack/pull/924)
* [cilium] Enable user-added parameters in a tenant cluster Cilium. (@lllamnyp in https://github.com/cozystack/cozystack/pull/917)
-->
### Cross-Architecture Builds (ARM Support Beta)
Cozystack's build system was refactored to support multi-architecture binaries and container images.
This paves the way for running Cozystack on ARM64 servers.
Changes include Makefile improvements (https://github.com/cozystack/cozystack/pull/907)
and multi-arch Docker image builds (https://github.com/cozystack/cozystack/pull/932 and https://github.com/cozystack/cozystack/pull/970).
We thank Nikita Bykov @nbykov0 for his ongoing work on ARM support!
<!--
* Introduce support for cross-architecture builds and Cozystack on ARM:
* [build] Refactor Makefiles introducing build variables. (@nbykov0 in https://github.com/cozystack/cozystack/pull/907)
* [build] Add support for multi-architecture and cross-platform image builds. (@nbykov0 in https://github.com/cozystack/cozystack/pull/932 and https://github.com/cozystack/cozystack/pull/970)
-->
### VerticalPodAutoscaler (VPA) Expansion
The VerticalPodAutoscaler is now enabled for more Cozystack components to automate resource tuning.
Specifically, VPA was added for tenant Kubernetes control planes (@klinch0 in https://github.com/cozystack/cozystack/pull/806),
the Cozystack Dashboard (https://github.com/cozystack/cozystack/pull/828),
and the Cozystack etcd-operator (https://github.com/cozystack/cozystack/pull/850).
All Cozystack components that have VPA enabled can automatically adjust their CPU and memory requests based on usage, improving platform and application stability.
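For reference, a VPA object of the kind now deployed for these components looks roughly like this (a sketch using the upstream `autoscaling.k8s.io/v1` API; the target name is illustrative):
```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: dashboard  # illustrative target
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: dashboard
  updatePolicy:
    updateMode: Auto  # let VPA apply recommendations automatically
```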
<!--
* Add VerticalPodAutoscaler to a few more components:
* [kubernetes] Kubernetes clusters in user tenants. (@klinch0 in https://github.com/cozystack/cozystack/pull/806)
* [platform] Cozystack dashboard. (@klinch0 in https://github.com/cozystack/cozystack/pull/828)
* [platform] Cozystack etcd-operator (@klinch0 in https://github.com/cozystack/cozystack/pull/850)
-->
### Tenant HelmRelease Reconcile Controller
A new controller was introduced to monitor and synchronize HelmRelease resources across tenants.
This controller propagates configuration changes to tenant workloads and ensures that any HelmRelease defined in a tenant
stays in sync with platform updates.
It improves the reliability of deploying managed applications in Cozystack.
(@klinch0 in https://github.com/cozystack/cozystack/pull/870)
<!--
* [platform] Introduce a new controller to synchronize tenant HelmReleases and propagate configuration changes. (@klinch0 in https://github.com/cozystack/cozystack/pull/870)
* [platform] Introduce options `expose-services`, `expose-ingress` and `expose-external-ips` to the ingress service. (@kvaps in https://github.com/cozystack/cozystack/pull/929)
-->
### Virtual Machine Improvements
**Configurable KubeVirt CPU Overcommit**: The CPU allocation ratio in KubeVirt (how heavily virtual CPUs are overcommitted relative to physical cores) is now configurable
via the `cpu-allocation-ratio` value in the Cozystack configmap.
This means Cozystack administrators can now tune CPU overcommitment for VMs to balance performance vs. density.
(@lllamnyp in https://github.com/cozystack/cozystack/pull/905)
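For example, an administrator could set a 10:1 overcommit ratio in the `cozystack` ConfigMap (the key name is from this release; the value is illustrative):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cozystack
  namespace: cozy-system
data:
  cpu-allocation-ratio: "10"  # illustrative value
```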
**KubeVirt VM Export**: Cozystack now allows exporting KubeVirt virtual machines.
This feature, enabled via KubeVirt's `VirtualMachineExport` capability, lets users snapshot or back up VM images.
(@kvaps in https://github.com/cozystack/cozystack/pull/808)
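Exports go through KubeVirt's export API; a minimal sketch of a `VirtualMachineExport` object (the VM name is hypothetical):
```yaml
apiVersion: export.kubevirt.io/v1alpha1
kind: VirtualMachineExport
metadata:
  name: example-vm-export  # hypothetical name
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: example-vm       # hypothetical VM to export
```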
**Support for various storage classes in Virtual Machines**: The `virtual-machine` application (since version 0.9.2) lets you pick any StorageClass for a VM's
system disk instead of relying on a hard-coded PVC.
Refer to values `systemDisk.storage` and `systemDisk.storageClass` in the [application's configs](https://cozystack.io/docs/reference/applications/virtual-machine/#common-parameters).
(@kvaps in https://github.com/cozystack/cozystack/pull/974)
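In a `VirtualMachine` resource this looks as follows (a sketch mirroring the e2e test manifests added in this changeset; names are illustrative):
```yaml
apiVersion: apps.cozystack.io/v1alpha1
kind: VirtualMachine
metadata:
  name: example            # illustrative name
  namespace: tenant-example
spec:
  systemDisk:
    image: ubuntu
    storage: 5Gi
    storageClass: replicated  # any available StorageClass
```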
<!--
* [kubevirt] Enable exporting VMs. (@kvaps in https://github.com/cozystack/cozystack/pull/808)
* [kubevirt] Make KubeVirt's CPU allocation ratio configurable. (@lllamnyp in https://github.com/cozystack/cozystack/pull/905)
* [virtual-machine] Add support for various storages. (@kvaps in https://github.com/cozystack/cozystack/pull/974)
-->
### Other Features and Improvements
* [platform] Introduce options `expose-services`, `expose-ingress`, and `expose-external-ips` to the ingress service; see the sketch after this list. (@kvaps in https://github.com/cozystack/cozystack/pull/929)
* [cozystack-controller] Record the IP address pool and storage class in Workload objects. (@lllamnyp in https://github.com/cozystack/cozystack/pull/831)
* [cilium] Enable Cilium Gateway API. (@zdenekjanda in https://github.com/cozystack/cozystack/pull/924)
* [cilium] Enable user-added parameters in a tenant cluster Cilium. (@lllamnyp in https://github.com/cozystack/cozystack/pull/917)
* [apps] Remove user-facing config of limits and requests. (@lllamnyp in https://github.com/cozystack/cozystack/pull/935)
* Update the Cozystack release policy to include long-lived release branches and start with release candidates. Update CI workflows and docs accordingly.
* Use release branches `release-X.Y` for gathering and releasing fixes after initial `vX.Y.0` release. (@kvaps in https://github.com/cozystack/cozystack/pull/816)
* Automatically create release branches after initial `vX.Y.0` release is published. (@kvaps in https://github.com/cozystack/cozystack/pull/886)
* Introduce Release Candidate versions. Automate patch backporting by applying patches from pull requests labeled `[backport]` to the current release branch. (@kvaps in https://github.com/cozystack/cozystack/pull/841 and https://github.com/cozystack/cozystack/pull/901, @nickvolynkin in https://github.com/cozystack/cozystack/pull/890)
* Support alpha and beta pre-releases. (@kvaps in https://github.com/cozystack/cozystack/pull/978)
* Commit changes in release pipelines under `github-actions <github-actions@github.com>`. (@kvaps in https://github.com/cozystack/cozystack/pull/823)
* Describe the Cozystack release workflow. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/817 and https://github.com/cozystack/cozystack/pull/897)
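As referenced above, the `expose-*` options live in the `cozystack` ConfigMap. A minimal sketch (only `expose-services` and its value format appear in this changeset; the other keys are named in the release notes and their value formats are assumptions):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cozystack
  namespace: cozy-system
data:
  expose-services: "api,dashboard,keycloak"
  # expose-ingress and expose-external-ips are set the same way (value formats assumed)
```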
## New Release Lifecycle
The Cozystack release lifecycle is changing to provide a more stable and predictable upgrade path for customers running Cozystack in mission-critical environments.
* **Gradual Release with Alpha, Beta, and Release Candidates**: Cozystack will now publish pre-release versions (alpha, beta, release candidates) before a stable release.
Starting with v0.31.0, the team published three release candidates before declaring the release stable.
This allows more testing and feedback before marking a release as stable.
* **Prolonged Release Support with Patch Versions**: After the initial `vX.Y.0` release, a long-lived branch `release-X.Y` will be created to backport fixes.
For example, with 0.31.0's release, a `release-0.31` branch will track patch fixes (`0.31.x`).
This strategy lets Cozystack users receive timely patch releases and updates with minimal risk.
To implement these new changes, we have rebuilt our CI/CD workflows and introduced automation, enabling automatic backports.
You can read more about how it's implemented in the Development section below.
For more information, read the [Cozystack Release Workflow](https://github.com/cozystack/cozystack/blob/main/docs/release.md) documentation.
## Fixes
* [virtual-machine] Add GPU names to the virtual machine specifications. (@kvaps in https://github.com/cozystack/cozystack/pull/862)
* [virtual-machine] Count Workload resources for pods by requests, not limits. Other improvements to VM resource tracking. (@lllamnyp in https://github.com/cozystack/cozystack/pull/904)
* [virtual-machine] Set PortList method by default. (@kvaps in https://github.com/cozystack/cozystack/pull/996)
* [virtual-machine] Specify ports even for wholeIP mode. (@kvaps in https://github.com/cozystack/cozystack/pull/1000)
* [platform] Fix installing HelmReleases on initial setup. (@kvaps in https://github.com/cozystack/cozystack/pull/833)
* [platform] Migration scripts update Kubernetes ConfigMap with the current stack version for improved version tracking. (@klinch0 in https://github.com/cozystack/cozystack/pull/840)
* [platform] Reduce requested CPU and RAM for the `kamaji` provider. (@klinch0 in https://github.com/cozystack/cozystack/pull/825)
@@ -45,7 +135,8 @@ Cozystack 0.31.0 further advances GPU support, monitoring, and all-around conven
* [kubernetes] Fix merging `valuesOverride` for tenant clusters. (@kvaps in https://github.com/cozystack/cozystack/pull/879)
* [kubernetes] Fix `ubuntu-container-disk` tag. (@kvaps in https://github.com/cozystack/cozystack/pull/887)
* [kubernetes] Refactor Helm manifests for tenant Kubernetes clusters. (@kvaps in https://github.com/cozystack/cozystack/pull/866)
* [kubernetes] Fix Ingress-NGINX depends on Cert-Manager . (@kvaps in https://github.com/cozystack/cozystack/pull/976)
* [kubernetes] Fix Ingress-NGINX depends on Cert-Manager. (@kvaps in https://github.com/cozystack/cozystack/pull/976)
* [kubernetes, apps] Enable `topologySpreadConstraints` for tenant Kubernetes clusters and fix it for managed PostgreSQL. (@klinch0 in https://github.com/cozystack/cozystack/pull/995)
* [tenant] Fix an issue with accessing external IPs of a cluster from the cluster itself. (@kvaps in https://github.com/cozystack/cozystack/pull/854)
* [cluster-api] Remove the no longer necessary workaround for Kamaji. (@kvaps in https://github.com/cozystack/cozystack/pull/867, patched in https://github.com/cozystack/cozystack/pull/956)
* [monitoring] Remove legacy label "POD" from the exclude filter in metrics. (@xy2 in https://github.com/cozystack/cozystack/pull/826)
@@ -54,24 +145,13 @@ Cozystack 0.31.0 further advances GPU support, monitoring, and all-around conven
* [postgres] Remove duplicated `template` entry from backup manifest. (@etoshutka in https://github.com/cozystack/cozystack/pull/872)
* [kube-ovn] Fix versions mapping in Makefile. (@kvaps in https://github.com/cozystack/cozystack/pull/883)
* [dx] Automatically detect version for migrations in the installer.sh. (@kvaps in https://github.com/cozystack/cozystack/pull/837)
* [e2e] Increase timeout durations for `capi` and `keycloak` to improve reliability during environment setup. (@kvaps in https://github.com/cozystack/cozystack/pull/858)
* [e2e] Fix `device_ownership_from_security_context` CRI. (@dtrdnk in https://github.com/cozystack/cozystack/pull/896)
* [e2e] Return `genisoimage` to the e2e-test Dockerfile. (@gwynbleidd2106 in https://github.com/cozystack/cozystack/pull/962)
* [ci] Improve the check for `versions_map` running on pull requests. (@kvaps and @klinch0 in https://github.com/cozystack/cozystack/pull/836, https://github.com/cozystack/cozystack/pull/842, and https://github.com/cozystack/cozystack/pull/845)
* [ci] If the release step was skipped on a tag, skip tests as well. (@kvaps in https://github.com/cozystack/cozystack/pull/822)
* [ci] Allow CI to cancel the previous job if a new one is scheduled. (@kvaps in https://github.com/cozystack/cozystack/pull/873)
* [ci] Use the correct version name when uploading build assets to the release page. (@kvaps in https://github.com/cozystack/cozystack/pull/876)
* [ci] Stop using `ok-to-test` label to trigger CI in pull requests. (@kvaps in https://github.com/cozystack/cozystack/pull/875)
* [ci] Do not run tests in the release building pipeline. (@kvaps in https://github.com/cozystack/cozystack/pull/882)
* [ci] Fix release branch creation. (@kvaps in https://github.com/cozystack/cozystack/pull/884)
* [ci, dx] Reduce noise in the test logs by suppressing the `wget` progress bar. (@lllamnyp in https://github.com/cozystack/cozystack/pull/865)
* [ci] Revert "automatically trigger tests in releasing PR". (@kvaps in https://github.com/cozystack/cozystack/pull/900)
* [ci] Force-update release branch on tagged main commits . (@kvaps in https://github.com/cozystack/cozystack/pull/977)
* [docs] Explain that tenants cannot have dashes in the names. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/980)
* [dx] remove version_map and building for library charts. (@kvaps in https://github.com/cozystack/cozystack/pull/998)
* [docs] Review the tenant Kubernetes cluster docs. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/969)
* [docs] Explain that tenants cannot have dashes in their names. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/980)
## Dependencies
* MetalLB is now included directly as a patched image based on version 0.14.9. (@lllamnyp in https://github.com/cozystack/cozystack/pull/945)
* MetalLB images are now built in-tree based on version 0.14.9 with additional critical patches. (@lllamnyp in https://github.com/cozystack/cozystack/pull/945)
* Update Kubernetes to v1.32.4. (@kvaps in https://github.com/cozystack/cozystack/pull/949)
* Update Talos Linux to v1.10.1. (@kvaps in https://github.com/cozystack/cozystack/pull/931)
* Update Cilium to v1.17.3. (@kvaps in https://github.com/cozystack/cozystack/pull/848)
@@ -83,15 +163,81 @@ Cozystack 0.31.0 further advances GPU support, monitoring, and all-around conven
* Update KamajiControlPlane to edge-25.4.1. (@kvaps in https://github.com/cozystack/cozystack/pull/953, fixed by @nbykov0 in https://github.com/cozystack/cozystack/pull/983)
* Update cert-manager to v1.17.2. (@kvaps in https://github.com/cozystack/cozystack/pull/975)
## Maintenance
## Documentation
* Add @klinch0 to CODEOWNERS. (@kvaps in https://github.com/cozystack/cozystack/pull/838)
* [Installing Talos in Air-Gapped Environment](https://cozystack.io/docs/operations/talos/configuration/air-gapped/):
new guide for configuring and bootstrapping Talos Linux clusters in air-gapped environments.
(@klinch0 in https://github.com/cozystack/website/pull/203)
## New Contributors
* [Cozystack Bundles](https://cozystack.io/docs/guides/bundles/): new page in the learning section explaining how Cozystack bundles work and how to choose a bundle.
(@NickVolynkin in https://github.com/cozystack/website/pull/188, https://github.com/cozystack/website/pull/189, and others;
updated by @kvaps in https://github.com/cozystack/website/pull/192 and https://github.com/cozystack/website/pull/193)
* [Managed Application Reference](https://cozystack.io/docs/reference/applications/): A set of new pages in the docs, mirroring application docs from the Cozystack dashboard.
(@NickVolynkin in https://github.com/cozystack/website/pull/198, https://github.com/cozystack/website/pull/202, and https://github.com/cozystack/website/pull/204)
* **LINSTOR Networking**: Guides on [configuring dedicated network for LINSTOR](https://cozystack.io/docs/operations/storage/dedicated-network/)
and [configuring network for distributed storage in multi-datacenter setup](https://cozystack.io/docs/operations/stretched/linstor-dedicated-network/).
(@xy2, edited by @NickVolynkin in https://github.com/cozystack/website/pull/171, https://github.com/cozystack/website/pull/182, and https://github.com/cozystack/website/pull/184)
### Fixes
* Correct error in the doc for the command to edit the configmap. (@lb0o in https://github.com/cozystack/website/pull/207)
* Fix group name in OIDC docs. (@kingdonb in https://github.com/cozystack/website/pull/179)
* A bit more explanation of Docker buildx builders. (@nbykov0 in https://github.com/cozystack/website/pull/187)
## Development, Testing, and CI/CD
### Testing
Improvements:
* Introduce `cozytest` — a new [BATS-based](https://github.com/bats-core/bats-core) testing framework. (@kvaps in https://github.com/cozystack/cozystack/pull/982)
Fixes:
* Fix `device_ownership_from_security_context` CRI. (@dtrdnk in https://github.com/cozystack/cozystack/pull/896)
* Increase timeout durations for `capi` and `keycloak` to improve reliability during e2e-tests. (@kvaps in https://github.com/cozystack/cozystack/pull/858)
* Return `genisoimage` to the e2e-test Dockerfile. (@gwynbleidd2106 in https://github.com/cozystack/cozystack/pull/962)
### CI/CD Changes
Improvements:
* Use release branches `release-X.Y` for gathering and releasing fixes after initial `vX.Y.0` release. (@kvaps in https://github.com/cozystack/cozystack/pull/816)
* Automatically create release branches after initial `vX.Y.0` release is published. (@kvaps in https://github.com/cozystack/cozystack/pull/886)
* Introduce Release Candidate versions. Automate patch backporting by applying patches from pull requests labeled `[backport]` to the current release branch. (@kvaps in https://github.com/cozystack/cozystack/pull/841 and https://github.com/cozystack/cozystack/pull/901, @nickvolynkin in https://github.com/cozystack/cozystack/pull/890)
* Support alpha and beta pre-releases. (@kvaps in https://github.com/cozystack/cozystack/pull/978)
* Commit changes in release pipelines under `github-actions <github-actions@github.com>`. (@kvaps in https://github.com/cozystack/cozystack/pull/823)
* Describe the Cozystack release workflow. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/817 and https://github.com/cozystack/cozystack/pull/897)
Fixes:
* Improve the check for `versions_map` running on pull requests. (@kvaps and @klinch0 in https://github.com/cozystack/cozystack/pull/836, https://github.com/cozystack/cozystack/pull/842, and https://github.com/cozystack/cozystack/pull/845)
* If the release step was skipped on a tag, skip tests as well. (@kvaps in https://github.com/cozystack/cozystack/pull/822)
* Allow CI to cancel the previous job if a new one is scheduled. (@kvaps in https://github.com/cozystack/cozystack/pull/873)
* Use the correct version name when uploading build assets to the release page. (@kvaps in https://github.com/cozystack/cozystack/pull/876)
* Stop using `ok-to-test` label to trigger CI in pull requests. (@kvaps in https://github.com/cozystack/cozystack/pull/875)
* Do not run tests in the release building pipeline. (@kvaps in https://github.com/cozystack/cozystack/pull/882)
* Fix release branch creation. (@kvaps in https://github.com/cozystack/cozystack/pull/884)
* Reduce noise in the test logs by suppressing the `wget` progress bar. (@lllamnyp in https://github.com/cozystack/cozystack/pull/865)
* Revert "automatically trigger tests in releasing PR". (@kvaps in https://github.com/cozystack/cozystack/pull/900)
* Force-update release branch on tagged main commits. (@kvaps in https://github.com/cozystack/cozystack/pull/977)
* Show detailed errors in the `pull-request-release` workflow. (@lllamnyp in https://github.com/cozystack/cozystack/pull/992)
## Community and Maintenance
### Repository Maintenance
Added @klinch0 to CODEOWNERS. (@kvaps in https://github.com/cozystack/cozystack/pull/838)
### New Contributors
* @etoshutka made their first contribution in https://github.com/cozystack/cozystack/pull/872
* @dtrdnk made their first contribution in https://github.com/cozystack/cozystack/pull/896
* @zdenekjanda made their first contribution in https://github.com/cozystack/cozystack/pull/924
* @gwynbleidd2106 made their first contribution in https://github.com/cozystack/cozystack/pull/962
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.30.0...v0.31.0-rc.3
## Full Changelog
See https://github.com/cozystack/cozystack/compare/v0.30.0...v0.31.0

13
go.mod
View File

@@ -37,6 +37,7 @@ require (
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
github.com/evanphx/json-patch v4.12.0+incompatible // indirect
github.com/evanphx/json-patch/v5 v5.9.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fluxcd/pkg/apis/kustomize v1.6.1 // indirect
@@ -91,14 +92,14 @@ require (
go.opentelemetry.io/proto/otlp v1.3.1 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.0 // indirect
golang.org/x/crypto v0.28.0 // indirect
golang.org/x/crypto v0.31.0 // indirect
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 // indirect
golang.org/x/net v0.30.0 // indirect
golang.org/x/net v0.33.0 // indirect
golang.org/x/oauth2 v0.23.0 // indirect
golang.org/x/sync v0.8.0 // indirect
golang.org/x/sys v0.26.0 // indirect
golang.org/x/term v0.25.0 // indirect
golang.org/x/text v0.19.0 // indirect
golang.org/x/sync v0.10.0 // indirect
golang.org/x/sys v0.28.0 // indirect
golang.org/x/term v0.27.0 // indirect
golang.org/x/text v0.21.0 // indirect
golang.org/x/time v0.7.0 // indirect
golang.org/x/tools v0.26.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect

28
go.sum
View File

@@ -26,8 +26,8 @@ github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkp
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g=
github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/evanphx/json-patch v0.5.2 h1:xVCHIVMUu1wtM/VkR9jVZ45N3FhZfYMMYGorLCR8P3k=
github.com/evanphx/json-patch v0.5.2/go.mod h1:ZWS5hhDbVDyob71nXKNL0+PWn6ToqBHMikGIFbs31qQ=
github.com/evanphx/json-patch v4.12.0+incompatible h1:4onqiflcdA9EOZ4RxV643DvftH5pOlLGNtQ5lPWQu84=
github.com/evanphx/json-patch v4.12.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch/v5 v5.9.0 h1:kcBlZQbplgElYIlo/n1hJbls2z/1awpXxpRi0/FOJfg=
github.com/evanphx/json-patch/v5 v5.9.0/go.mod h1:VNkHZ/282BpEyt/tObQO8s5CMPmYYq14uClGH4abBuQ=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
@@ -212,8 +212,8 @@ go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.28.0 h1:GBDwsMXVQi34v5CCYUm2jkJvu4cbtru2U4TN2PSyQnw=
golang.org/x/crypto v0.28.0/go.mod h1:rmgy+3RHxRZMyY0jjAJShp2zgEdOqj2AO7U0pYmeQ7U=
golang.org/x/crypto v0.31.0 h1:ihbySMvVjLAeSH1IbfcRTkD/iNscyz8rGzjF/E5hV6U=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 h1:2dVuKD2vS7b0QIHQbpyTISPd0LeHDbnYEryqj5Q1ug8=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56/go.mod h1:M4RDyNAINzryxdtnbRXRL/OHtkFuWGRjvuhBJpk2IlY=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
@@ -222,26 +222,26 @@ golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.30.0 h1:AcW1SDZMkb8IpzCdQUaIq2sP4sZ4zw+55h6ynffypl4=
golang.org/x/net v0.30.0/go.mod h1:2wGyMJ5iFasEhkwi13ChkO/t1ECNC4X4eBKkVFyYFlU=
golang.org/x/net v0.33.0 h1:74SYHlV8BIgHIFC/LrYkOGIwL19eTYXQ5wc6TBuO36I=
golang.org/x/net v0.33.0/go.mod h1:HXLR5J+9DxmrqMwG9qjGCxZ+zKXxBru04zlTvWlWuN4=
golang.org/x/oauth2 v0.23.0 h1:PbgcYx2W7i4LvjJWEbf0ngHV6qJYr86PkAV3bXdLEbs=
golang.org/x/oauth2 v0.23.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.8.0 h1:3NFvSEYkUoMifnESzZl15y791HH1qU2xm6eCJU5ZPXQ=
golang.org/x/sync v0.8.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.26.0 h1:KHjCJyddX0LoSTb3J+vWpupP9p0oznkqVk/IfjymZbo=
golang.org/x/sys v0.26.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.25.0 h1:WtHI/ltw4NvSUig5KARz9h521QvRC8RmF/cuYqifU24=
golang.org/x/term v0.25.0/go.mod h1:RPyXicDX+6vLxogjjRxjgD2TKtmAO6NZBsBRfrOLu7M=
golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.27.0 h1:WP60Sv1nlK1T6SupCHbXzSaN0b9wUmsPoRS9b61A23Q=
golang.org/x/term v0.27.0/go.mod h1:iMsnZpn0cago0GOrHO2+Y7u7JPn5AylBrcoWkElMTSM=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.19.0 h1:kTxAhCbGbxhK0IwgSKiMO5awPoDQ0RpfiVYBfK860YM=
golang.org/x/text v0.19.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY=
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/time v0.7.0 h1:ntUhktv3OPE6TgYxXWv9vKvUSJyIFJlyohwbkEwPrKQ=
golang.org/x/time v0.7.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=

View File

@@ -1,9 +1,11 @@
#!/usr/bin/env bats
# -----------------------------------------------------------------------------
# Cozystack end-to-end provisioning test (Bats)
# -----------------------------------------------------------------------------
@test "Create tenant with isolated mode enabled" {
kubectl -n tenant-root get tenants.apps.cozystack.io test ||
kubectl create -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Tenant
@@ -24,6 +26,7 @@ EOF
}
@test "Create a tenant Kubernetes control plane" {
kubectl -n tenant-test get kuberneteses.apps.cozystack.io test ||
kubectl create -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Kubernetes
@@ -90,5 +93,261 @@ EOF
kubectl wait tcp -n tenant-test kubernetes-test --timeout=2m --for=jsonpath='{.status.kubernetesResources.version.status}'=Ready
kubectl wait deploy --timeout=4m --for=condition=available -n tenant-test kubernetes-test kubernetes-test-cluster-autoscaler kubernetes-test-kccm kubernetes-test-kcsi-controller
kubectl wait machinedeployment kubernetes-test-md0 -n tenant-test --timeout=1m --for=jsonpath='{.status.replicas}'=2
kubectl wait machinedeployment kubernetes-test-md0 -n tenant-test --timeout=5m --for=jsonpath='{.status.v1beta2.readyReplicas}'=2
kubectl wait machinedeployment kubernetes-test-md0 -n tenant-test --timeout=10m --for=jsonpath='{.status.v1beta2.readyReplicas}'=2
kubectl -n tenant-test delete kuberneteses.apps.cozystack.io test
}
@test "Create a VM Disk" {
name='test'
kubectl -n tenant-test get vmdisks.apps.cozystack.io $name ||
kubectl create -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: VMDisk
metadata:
name: $name
namespace: tenant-test
spec:
source:
http:
url: https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
optical: false
storage: 5Gi
storageClass: replicated
EOF
sleep 5
kubectl -n tenant-test wait hr vm-disk-$name --timeout=5s --for=condition=ready
kubectl -n tenant-test wait dv vm-disk-$name --timeout=150s --for=condition=ready
kubectl -n tenant-test wait pvc vm-disk-$name --timeout=100s --for=jsonpath='{.status.phase}'=Bound
}
@test "Create a VM Instance" {
diskName='test'
name='test'
kubectl -n tenant-test get vminstances.apps.cozystack.io $name ||
kubectl create -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: VMInstance
metadata:
name: $name
namespace: tenant-test
spec:
external: false
externalMethod: PortList
externalPorts:
- 22
running: true
instanceType: "u1.medium"
instanceProfile: ubuntu
disks:
- name: $diskName
gpus: []
resources:
cpu: ""
memory: ""
sshKeys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF test@test
cloudInit: |
#cloud-config
users:
- name: test
shell: /bin/bash
sudo: ['ALL=(ALL) NOPASSWD: ALL']
groups: sudo
ssh_authorized_keys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF test@test
cloudInitSeed: ""
EOF
sleep 5
timeout 20 sh -ec "until kubectl -n tenant-test get vmi vm-instance-$name -o jsonpath='{.status.interfaces[0].ipAddress}' | grep -q '[0-9]'; do sleep 5; done"
kubectl -n tenant-test wait hr vm-instance-$name --timeout=5s --for=condition=ready
kubectl -n tenant-test wait vm vm-instance-$name --timeout=20s --for=condition=ready
kubectl -n tenant-test delete vminstances.apps.cozystack.io $name
kubectl -n tenant-test delete vmdisks.apps.cozystack.io $diskName
}
@test "Create a Virtual Machine" {
name='test'
kubectl -n tenant-test get virtualmachines.apps.cozystack.io $name ||
kubectl create -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: VirtualMachine
metadata:
name: $name
namespace: tenant-test
spec:
external: false
externalMethod: PortList
externalPorts:
- 22
instanceType: "u1.medium"
instanceProfile: ubuntu
systemDisk:
image: ubuntu
storage: 5Gi
storageClass: replicated
gpus: []
resources:
cpu: ""
memory: ""
sshKeys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF test@test
cloudInit: |
#cloud-config
users:
- name: test
shell: /bin/bash
sudo: ['ALL=(ALL) NOPASSWD: ALL']
groups: sudo
ssh_authorized_keys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF test@test
cloudInitSeed: ""
EOF
sleep 5
kubectl -n tenant-test wait hr virtual-machine-$name --timeout=10s --for=condition=ready
kubectl -n tenant-test wait dv virtual-machine-$name --timeout=150s --for=condition=ready
kubectl -n tenant-test wait pvc virtual-machine-$name --timeout=100s --for=jsonpath='{.status.phase}'=Bound
kubectl -n tenant-test wait vm virtual-machine-$name --timeout=100s --for=condition=ready
timeout 120 sh -ec "until kubectl -n tenant-test get vmi virtual-machine-$name -o jsonpath='{.status.interfaces[0].ipAddress}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test delete virtualmachines.apps.cozystack.io $name
}
@test "Create DB PostgreSQL" {
name='test'
kubectl -n tenant-test get postgreses.apps.cozystack.io $name ||
kubectl create -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Postgres
metadata:
name: $name
namespace: tenant-test
spec:
external: false
size: 10Gi
replicas: 2
storageClass: ""
postgresql:
parameters:
max_connections: 100
quorum:
minSyncReplicas: 0
maxSyncReplicas: 0
users:
testuser:
password: xai7Wepo
databases:
testdb:
roles:
admin:
- testuser
backup:
enabled: false
s3Region: us-east-1
s3Bucket: s3.example.org/postgres-backups
schedule: "0 2 * * *"
cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
resources: {}
resourcesPreset: "nano"
EOF
sleep 5
kubectl -n tenant-test wait hr postgres-$name --timeout=100s --for=condition=ready
kubectl -n tenant-test wait job.batch postgres-$name-init-job --timeout=50s --for=condition=Complete
timeout 40 sh -ec "until kubectl -n tenant-test get svc postgres-$name-r -o jsonpath='{.spec.ports[0].port}' | grep -q '5432'; do sleep 10; done"
timeout 40 sh -ec "until kubectl -n tenant-test get svc postgres-$name-ro -o jsonpath='{.spec.ports[0].port}' | grep -q '5432'; do sleep 10; done"
timeout 40 sh -ec "until kubectl -n tenant-test get svc postgres-$name-rw -o jsonpath='{.spec.ports[0].port}' | grep -q '5432'; do sleep 10; done"
timeout 120 sh -ec "until kubectl -n tenant-test get endpoints postgres-$name-r -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
timeout 120 sh -ec "until kubectl -n tenant-test get endpoints postgres-$name-ro -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
timeout 120 sh -ec "until kubectl -n tenant-test get endpoints postgres-$name-rw -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test delete postgreses.apps.cozystack.io $name
}
@test "Create DB MySQL" {
name='test'
kubectl -n tenant-test get mysqls.apps.cozystack.io $name ||
kubectl create -f- <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: MySQL
metadata:
name: $name
namespace: tenant-test
spec:
external: false
size: 10Gi
replicas: 2
storageClass: ""
users:
testuser:
maxUserConnections: 1000
password: xai7Wepo
databases:
testdb:
roles:
admin:
- testuser
backup:
enabled: false
s3Region: us-east-1
s3Bucket: s3.example.org/postgres-backups
schedule: "0 2 * * *"
cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
resources: {}
resourcesPreset: "nano"
EOF
sleep 5
kubectl -n tenant-test wait hr mysql-$name --timeout=30s --for=condition=ready
timeout 80 sh -ec "until kubectl -n tenant-test get svc mysql-$name -o jsonpath='{.spec.ports[0].port}' | grep -q '3306'; do sleep 10; done"
timeout 80 sh -ec "until kubectl -n tenant-test get endpoints mysql-$name -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test wait statefulset.apps/mysql-$name --timeout=110s --for=jsonpath='{.status.replicas}'=2
timeout 80 sh -ec "until kubectl -n tenant-test get svc mysql-$name-metrics -o jsonpath='{.spec.ports[0].port}' | grep -q '9104'; do sleep 10; done"
timeout 40 sh -ec "until kubectl -n tenant-test get endpoints mysql-$name-metrics -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test wait deployment.apps/mysql-$name-metrics --timeout=90s --for=jsonpath='{.status.replicas}'=1
kubectl -n tenant-test delete mysqls.apps.cozystack.io $name
}
@test "Create DB ClickHouse" {
name='test'
kubectl -n tenant-test get clickhouses.apps.cozystack.io $name ||
kubectl create -f- <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: ClickHouse
metadata:
name: $name
namespace: tenant-test
spec:
size: 10Gi
logStorageSize: 2Gi
shards: 1
replicas: 2
storageClass: ""
logTTL: 15
users:
testuser:
password: xai7Wepo
backup:
enabled: false
s3Region: us-east-1
s3Bucket: s3.example.org/clickhouse-backups
schedule: "0 2 * * *"
cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
resources: {}
resourcesPreset: "nano"
EOF
sleep 5
kubectl -n tenant-test wait hr clickhouse-$name --timeout=20s --for=condition=ready
timeout 180 sh -ec "until kubectl -n tenant-test get svc chendpoint-clickhouse-$name -o jsonpath='{.spec.ports[*].port}' | grep -q '8123 9000'; do sleep 10; done"
kubectl -n tenant-test wait statefulset.apps/chi-clickhouse-$name-clickhouse-0-0 --timeout=120s --for=jsonpath='{.status.replicas}'=1
timeout 80 sh -ec "until kubectl -n tenant-test get endpoints chi-clickhouse-$name-clickhouse-0-0 -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
timeout 100 sh -ec "until kubectl -n tenant-test get svc chi-clickhouse-$name-clickhouse-0-0 -o jsonpath='{.spec.ports[*].port}' | grep -q '9000 8123 9009'; do sleep 10; done"
timeout 80 sh -ec "until kubectl -n tenant-test get sts chi-clickhouse-$name-clickhouse-0-1 ; do sleep 10; done"
kubectl -n tenant-test wait statefulset.apps/chi-clickhouse-$name-clickhouse-0-1 --timeout=140s --for=jsonpath='{.status.replicas}'=1
}

View File

@@ -3,12 +3,14 @@
# Cozystack end-to-end provisioning test (Bats)
# -----------------------------------------------------------------------------
@test "Environment variable COZYSTACK_INSTALLER_YAML is defined" {
if [ -z "${COZYSTACK_INSTALLER_YAML:-}" ]; then
echo 'COZYSTACK_INSTALLER_YAML environment variable is not set!' >&2
echo >&2
echo 'Please export it with the following command:' >&2
echo ' export COZYSTACK_INSTALLER_YAML=$(helm template -n cozy-system installer packages/core/installer)' >&2
@test "Required installer assets exist" {
if [ ! -f _out/assets/cozystack-installer.yaml ]; then
echo "Missing: _out/assets/cozystack-installer.yaml" >&2
exit 1
fi
if [ ! -f _out/assets/nocloud-amd64.raw.xz ]; then
echo "Missing: _out/assets/nocloud-amd64.raw.xz" >&2
exit 1
fi
}
@@ -70,19 +72,21 @@ EOF
done
}
@test "Download Talos NoCloud image" {
if [ ! -f nocloud-amd64.raw ]; then
wget https://github.com/cozystack/cozystack/releases/latest/download/nocloud-amd64.raw.xz \
-O nocloud-amd64.raw.xz --show-progress --output-file /dev/stdout --progress=dot:giga 2>/dev/null
rm -f nocloud-amd64.raw
xz --decompress nocloud-amd64.raw.xz
@test "Use Talos NoCloud image from assets" {
if [ ! -f _out/assets/nocloud-amd64.raw.xz ]; then
echo "Missing _out/assets/nocloud-amd64.raw.xz" 2>&1
exit 1
fi
rm -f nocloud-amd64.raw
cp _out/assets/nocloud-amd64.raw.xz .
xz --decompress nocloud-amd64.raw.xz
}
@test "Prepare VM disks" {
for i in 1 2 3; do
cp nocloud-amd64.raw srv${i}/system.img
qemu-img resize srv${i}/system.img 20G
qemu-img resize srv${i}/system.img 50G
qemu-img create srv${i}/data.img 100G
done
}
@@ -98,7 +102,7 @@ EOF
@test "Boot QEMU VMs" {
for i in 1 2 3; do
qemu-system-x86_64 -machine type=pc,accel=kvm -cpu host -smp 8 -m 16384 \
qemu-system-x86_64 -machine type=pc,accel=kvm -cpu host -smp 8 -m 24576 \
-device virtio-net,netdev=net0,mac=52:54:00:12:34:5${i} \
-netdev tap,id=net0,ifname=cozy-srv${i},script=no,downscript=no \
-drive file=srv${i}/system.img,if=virtio,format=raw \
@@ -243,8 +247,8 @@ EOF
--from-literal=api-server-endpoint=https://192.168.123.10:6443 \
--dry-run=client -o yaml | kubectl apply -f -
# Apply installer manifests from env variable
echo "$COZYSTACK_INSTALLER_YAML" | kubectl apply -f -
# Apply installer manifests from file
kubectl apply -f _out/assets/cozystack-installer.yaml
# Wait for the installer deployment to become available
kubectl wait deployment/cozystack -n cozy-system --timeout=1m --for=condition=Available

View File

@@ -0,0 +1,157 @@
#!/usr/bin/env bats
@test "Install Cozystack" {
# Create namespace & configmap required by installer
kubectl create namespace cozy-system --dry-run=client -o yaml | kubectl apply -f -
kubectl create configmap cozystack -n cozy-system \
--from-literal=bundle-name=paas-full \
--from-literal=ipv4-pod-cidr=10.244.0.0/16 \
--from-literal=ipv4-pod-gateway=10.244.0.1 \
--from-literal=ipv4-svc-cidr=10.96.0.0/16 \
--from-literal=ipv4-join-cidr=100.64.0.0/16 \
--from-literal=root-host=example.org \
--from-literal=api-server-endpoint=https://192.168.123.10:6443 \
--dry-run=client -o yaml | kubectl apply -f -
# Apply installer manifests from file
kubectl apply -f _out/assets/cozystack-installer.yaml
# Wait for the installer deployment to become available
kubectl wait deployment/cozystack -n cozy-system --timeout=1m --for=condition=Available
# Wait until HelmReleases appear & reconcile them
timeout 60 sh -ec 'until kubectl get hr -A | grep -q cozys; do sleep 1; done'
sleep 5
kubectl get hr -A | awk 'NR>1 {print "kubectl wait --timeout=15m --for=condition=ready -n "$1" hr/"$2" &"} END {print "wait"}' | sh -ex
# Fail the test if any HelmRelease is not Ready
if kubectl get hr -A | grep -v " True " | grep -v NAME; then
kubectl get hr -A
fail "Some HelmReleases failed to reconcile"
fi
}
@test "Wait for ClusterAPI provider deployments" {
# Wait for ClusterAPI provider deployments
timeout 60 sh -ec 'until kubectl get deploy -n cozy-cluster-api capi-controller-manager capi-kamaji-controller-manager capi-kubeadm-bootstrap-controller-manager capi-operator-cluster-api-operator capk-controller-manager >/dev/null 2>&1; do sleep 1; done'
kubectl wait deployment/capi-controller-manager deployment/capi-kamaji-controller-manager deployment/capi-kubeadm-bootstrap-controller-manager deployment/capi-operator-cluster-api-operator deployment/capk-controller-manager -n cozy-cluster-api --timeout=1m --for=condition=available
}
@test "Wait for LINSTOR and configure storage" {
# Linstor controller and nodes
kubectl wait deployment/linstor-controller -n cozy-linstor --timeout=5m --for=condition=available
timeout 60 sh -ec 'until [ $(kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor node list | grep -c Online) -eq 3 ]; do sleep 1; done'
for node in srv1 srv2 srv3; do
kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor ps cdp zfs ${node} /dev/vdc --pool-name data --storage-pool data
done
# Storage classes
kubectl apply -f - <<'EOF'
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: linstor.csi.linbit.com
parameters:
linstor.csi.linbit.com/storagePool: "data"
linstor.csi.linbit.com/layerList: "storage"
linstor.csi.linbit.com/allowRemoteVolumeAccess: "false"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: replicated
provisioner: linstor.csi.linbit.com
parameters:
linstor.csi.linbit.com/storagePool: "data"
linstor.csi.linbit.com/autoPlace: "3"
linstor.csi.linbit.com/layerList: "drbd storage"
linstor.csi.linbit.com/allowRemoteVolumeAccess: "true"
property.linstor.csi.linbit.com/DrbdOptions/auto-quorum: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-no-data-accessible: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-suspended-primary-outdated: force-secondary
property.linstor.csi.linbit.com/DrbdOptions/Net/rr-conflict: retry-connect
volumeBindingMode: Immediate
allowVolumeExpansion: true
EOF
}
@test "Wait for MetalLB and configure address pool" {
# MetalLB address pool
kubectl apply -f - <<'EOF'
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: cozystack
namespace: cozy-metallb
spec:
ipAddressPools: [cozystack]
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: cozystack
namespace: cozy-metallb
spec:
addresses: [192.168.123.200-192.168.123.250]
autoAssign: true
avoidBuggyIPs: false
EOF
}
@test "Check Cozystack API service" {
kubectl wait --for=condition=Available apiservices/v1alpha1.apps.cozystack.io --timeout=2m
}
@test "Configure Tenant and wait for applications" {
# Patch root tenant and wait for its releases
kubectl patch tenants/root -n tenant-root --type merge -p '{"spec":{"host":"example.org","ingress":true,"monitoring":true,"etcd":true,"isolated":true}}'
timeout 60 sh -ec 'until kubectl get hr -n tenant-root etcd ingress monitoring tenant-root >/dev/null 2>&1; do sleep 1; done'
kubectl wait hr/etcd hr/ingress hr/tenant-root -n tenant-root --timeout=2m --for=condition=ready
if ! kubectl wait hr/monitoring -n tenant-root --timeout=2m --for=condition=ready; then
flux reconcile hr monitoring -n tenant-root --force
kubectl wait hr/monitoring -n tenant-root --timeout=2m --for=condition=ready
fi
# Expose Cozystack services through ingress
kubectl patch configmap/cozystack -n cozy-system --type merge -p '{"data":{"expose-services":"api,dashboard,cdi-uploadproxy,vm-exportproxy,keycloak"}}'
# NGINX ingress controller
timeout 60 sh -ec 'until kubectl get deploy root-ingress-controller -n tenant-root >/dev/null 2>&1; do sleep 1; done'
kubectl wait deploy/root-ingress-controller -n tenant-root --timeout=5m --for=condition=available
# etcd statefulset
kubectl wait sts/etcd -n tenant-root --for=jsonpath='{.status.readyReplicas}'=3 --timeout=5m
# VictoriaMetrics components
kubectl wait vmalert/vmalert-shortterm vmalertmanager/alertmanager -n tenant-root --for=jsonpath='{.status.updateStatus}'=operational --timeout=5m
kubectl wait vlogs/generic -n tenant-root --for=jsonpath='{.status.updateStatus}'=operational --timeout=5m
kubectl wait vmcluster/shortterm vmcluster/longterm -n tenant-root --for=jsonpath='{.status.clusterStatus}'=operational --timeout=5m
# Grafana
kubectl wait clusters.postgresql.cnpg.io/grafana-db -n tenant-root --for=condition=ready --timeout=5m
kubectl wait deploy/grafana-deployment -n tenant-root --for=condition=available --timeout=5m
# Verify Grafana via ingress
ingress_ip=$(kubectl get svc root-ingress-controller -n tenant-root -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
if ! curl -sS -k "https://${ingress_ip}" -H 'Host: grafana.example.org' --max-time 30 | grep -q Found; then
echo "Failed to access Grafana via ingress at ${ingress_ip}" >&2
exit 1
fi
}
@test "Keycloak OIDC stack is healthy" {
kubectl patch configmap/cozystack -n cozy-system --type merge -p '{"data":{"oidc-enabled":"true"}}'
timeout 120 sh -ec 'until kubectl get hr -n cozy-keycloak keycloak keycloak-configure keycloak-operator >/dev/null 2>&1; do sleep 1; done'
kubectl wait hr/keycloak hr/keycloak-configure hr/keycloak-operator -n cozy-keycloak --timeout=10m --for=condition=ready
}

View File

@@ -0,0 +1,235 @@
#!/usr/bin/env bats
# -----------------------------------------------------------------------------
# Cozystack end-to-end provisioning test (Bats)
# -----------------------------------------------------------------------------
@test "Required installer assets exist" {
if [ ! -f _out/assets/cozystack-installer.yaml ]; then
echo "Missing: _out/assets/cozystack-installer.yaml" >&2
exit 1
fi
if [ ! -f _out/assets/nocloud-amd64.raw.xz ]; then
echo "Missing: _out/assets/nocloud-amd64.raw.xz" >&2
exit 1
fi
}
@test "IPv4 forwarding is enabled" {
if [ "$(cat /proc/sys/net/ipv4/ip_forward)" != 1 ]; then
echo "IPv4 forwarding is disabled!" >&2
echo >&2
echo "Enable it with:" >&2
echo " echo 1 > /proc/sys/net/ipv4/ip_forward" >&2
exit 1
fi
}
@test "Clean previous VMs" {
kill $(cat srv1/qemu.pid srv2/qemu.pid srv3/qemu.pid 2>/dev/null) 2>/dev/null || true
rm -rf srv1 srv2 srv3
}
@test "Prepare networking and masquerading" {
ip link del cozy-br0 2>/dev/null || true
ip link add cozy-br0 type bridge
ip link set cozy-br0 up
ip address add 192.168.123.1/24 dev cozy-br0
# Make the masquerading rule idempotent (delete first, then add)
iptables -t nat -D POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE 2>/dev/null || true
iptables -t nat -A POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE
}
@test "Prepare cloudinit drive for VMs" {
mkdir -p srv1 srv2 srv3
# Generate cloudinit ISOs
for i in 1 2 3; do
echo "hostname: srv${i}" > "srv${i}/meta-data"
cat > "srv${i}/user-data" <<'EOF'
#cloud-config
EOF
cat > "srv${i}/network-config" <<EOF
version: 2
ethernets:
eth0:
dhcp4: false
addresses:
- "192.168.123.1${i}/26"
gateway4: "192.168.123.1"
nameservers:
search: [cluster.local]
addresses: [8.8.8.8]
EOF
( cd "srv${i}" && genisoimage \
-output seed.img \
-volid cidata -rational-rock -joliet \
user-data meta-data network-config )
done
}
@test "Use Talos NoCloud image from assets" {
if [ ! -f _out/assets/nocloud-amd64.raw.xz ]; then
echo "Missing _out/assets/nocloud-amd64.raw.xz" 2>&1
exit 1
fi
rm -f nocloud-amd64.raw
cp _out/assets/nocloud-amd64.raw.xz .
xz --decompress nocloud-amd64.raw.xz
}
@test "Prepare VM disks" {
for i in 1 2 3; do
cp nocloud-amd64.raw srv${i}/system.img
qemu-img resize srv${i}/system.img 50G
qemu-img create srv${i}/data.img 100G
done
}
@test "Create tap devices" {
for i in 1 2 3; do
ip link del cozy-srv${i} 2>/dev/null || true
ip tuntap add dev cozy-srv${i} mode tap
ip link set cozy-srv${i} up
ip link set cozy-srv${i} master cozy-br0
done
}
@test "Boot QEMU VMs" {
for i in 1 2 3; do
qemu-system-x86_64 -machine type=pc,accel=kvm -cpu host -smp 8 -m 24576 \
-device virtio-net,netdev=net0,mac=52:54:00:12:34:5${i} \
-netdev tap,id=net0,ifname=cozy-srv${i},script=no,downscript=no \
-drive file=srv${i}/system.img,if=virtio,format=raw \
-drive file=srv${i}/seed.img,if=virtio,format=raw \
-drive file=srv${i}/data.img,if=virtio,format=raw \
-display none -daemonize -pidfile srv${i}/qemu.pid
done
# Give qemu a few seconds to start up networking
sleep 5
}
@test "Wait until Talos API port 50000 is reachable on all machines" {
timeout 60 sh -ec 'until nc -nz 192.168.123.11 50000 && nc -nz 192.168.123.12 50000 && nc -nz 192.168.123.13 50000; do sleep 1; done'
}
@test "Generate Talos cluster configuration" {
# Cluster-wide patches
cat > patch.yaml <<'EOF'
machine:
kubelet:
nodeIP:
validSubnets:
- 192.168.123.0/24
extraConfig:
maxPods: 512
kernel:
modules:
- name: openvswitch
- name: drbd
parameters:
- usermode_helper=disabled
- name: zfs
- name: spl
registries:
mirrors:
docker.io:
endpoints:
- https://mirror.gcr.io
files:
- content: |
[plugins]
[plugins."io.containerd.cri.v1.runtime"]
device_ownership_from_security_context = true
path: /etc/cri/conf.d/20-customization.part
op: create
cluster:
apiServer:
extraArgs:
oidc-issuer-url: "https://keycloak.example.org/realms/cozy"
oidc-client-id: "kubernetes"
oidc-username-claim: "preferred_username"
oidc-groups-claim: "groups"
network:
cni:
name: none
dnsDomain: cozy.local
podSubnets:
- 10.244.0.0/16
serviceSubnets:
- 10.96.0.0/16
EOF
# Control-plane-only patches
cat > patch-controlplane.yaml <<'EOF'
machine:
nodeLabels:
node.kubernetes.io/exclude-from-external-load-balancers:
$patch: delete
network:
interfaces:
- interface: eth0
vip:
ip: 192.168.123.10
cluster:
allowSchedulingOnControlPlanes: true
controllerManager:
extraArgs:
bind-address: 0.0.0.0
scheduler:
extraArgs:
bind-address: 0.0.0.0
apiServer:
certSANs:
- 127.0.0.1
proxy:
disabled: true
discovery:
enabled: false
etcd:
advertisedSubnets:
- 192.168.123.0/24
EOF
# Generate secrets once
if [ ! -f secrets.yaml ]; then
talosctl gen secrets
fi
rm -f controlplane.yaml worker.yaml talosconfig kubeconfig
talosctl gen config --with-secrets secrets.yaml cozystack https://192.168.123.10:6443 \
--config-patch=@patch.yaml --config-patch-control-plane @patch-controlplane.yaml
}
@test "Apply Talos configuration to the node" {
# Apply the configuration to all three nodes
for node in 11 12 13; do
talosctl apply -f controlplane.yaml -n 192.168.123.${node} -e 192.168.123.${node} -i
done
# Wait for Talos services to come up again
timeout 60 sh -ec 'until nc -nz 192.168.123.11 50000 && nc -nz 192.168.123.12 50000 && nc -nz 192.168.123.13 50000; do sleep 1; done'
}
@test "Bootstrap Talos cluster" {
# Bootstrap etcd on the first node
timeout 10 sh -ec 'until talosctl bootstrap -n 192.168.123.11 -e 192.168.123.11; do sleep 1; done'
# Wait until etcd is healthy
timeout 180 sh -ec 'until talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 >/dev/null 2>&1; do sleep 1; done'
timeout 60 sh -ec 'while talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 2>&1 | grep -q "rpc error"; do sleep 1; done'
# Retrieve kubeconfig
rm -f kubeconfig
talosctl kubeconfig kubeconfig -e 192.168.123.10 -n 192.168.123.10
# Wait until all three nodes register in Kubernetes
timeout 60 sh -ec 'until [ $(kubectl get node --no-headers | wc -l) -eq 3 ]; do sleep 1; done'
}

View File

@@ -0,0 +1,139 @@
package controller
import (
"context"
"crypto/sha256"
"encoding/hex"
"fmt"
"sort"
"time"
helmv2 "github.com/fluxcd/helm-controller/api/v2"
corev1 "k8s.io/api/core/v1"
kerrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/predicate"
)
type CozystackConfigReconciler struct {
client.Client
Scheme *runtime.Scheme
}
var configMapNames = []string{"cozystack", "cozystack-branding", "cozystack-scheduling"}
const configMapNamespace = "cozy-system"
const digestAnnotation = "cozystack.io/cozy-config-digest"
const forceReconcileKey = "reconcile.fluxcd.io/forceAt"
const requestedAt = "reconcile.fluxcd.io/requestedAt"
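// Flux's helm-controller watches the reconcile.fluxcd.io/requestedAt and
// forceAt annotations: stamping both with a fresh timestamp below makes it
// re-reconcile (and force-upgrade) the HelmRelease even when the chart and
// values are otherwise unchanged.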
func (r *CozystackConfigReconciler) Reconcile(ctx context.Context, _ ctrl.Request) (ctrl.Result, error) {
log := log.FromContext(ctx)
digest, err := r.computeDigest(ctx)
if err != nil {
log.Error(err, "failed to compute config digest")
return ctrl.Result{}, nil
}
var helmList helmv2.HelmReleaseList
if err := r.List(ctx, &helmList); err != nil {
return ctrl.Result{}, fmt.Errorf("failed to list HelmReleases: %w", err)
}
now := time.Now().Format(time.RFC3339Nano)
updated := 0
for _, hr := range helmList.Items {
isSystemApp := hr.Labels["cozystack.io/system-app"] == "true"
isTenantRoot := hr.Namespace == "tenant-root" && hr.Name == "tenant-root"
if !isSystemApp && !isTenantRoot {
continue
}
if hr.Annotations == nil {
hr.Annotations = map[string]string{}
}
if hr.Annotations[digestAnnotation] == digest {
continue
}
patch := client.MergeFrom(hr.DeepCopy())
hr.Annotations[digestAnnotation] = digest
hr.Annotations[forceReconcileKey] = now
hr.Annotations[requestedAt] = now
if err := r.Patch(ctx, &hr, patch); err != nil {
log.Error(err, "failed to patch HelmRelease", "name", hr.Name, "namespace", hr.Namespace)
continue
}
updated++
log.Info("patched HelmRelease with new config digest", "name", hr.Name, "namespace", hr.Namespace)
}
log.Info("finished reconciliation", "updatedHelmReleases", updated)
return ctrl.Result{}, nil
}
func (r *CozystackConfigReconciler) computeDigest(ctx context.Context) (string, error) {
hash := sha256.New()
for _, name := range configMapNames {
var cm corev1.ConfigMap
err := r.Get(ctx, client.ObjectKey{Namespace: configMapNamespace, Name: name}, &cm)
if err != nil {
if kerrors.IsNotFound(err) {
continue // ignore missing
}
return "", err
}
// Sort keys for consistent hashing
var keys []string
for k := range cm.Data {
keys = append(keys, k)
}
sort.Strings(keys)
for _, k := range keys {
v := cm.Data[k]
fmt.Fprintf(hash, "%s:%s=%s\n", name, k, v)
}
}
return hex.EncodeToString(hash.Sum(nil)), nil
}
func (r *CozystackConfigReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
WithEventFilter(predicate.Funcs{
UpdateFunc: func(e event.UpdateEvent) bool {
cm, ok := e.ObjectNew.(*corev1.ConfigMap)
return ok && cm.Namespace == configMapNamespace && contains(configMapNames, cm.Name)
},
CreateFunc: func(e event.CreateEvent) bool {
cm, ok := e.Object.(*corev1.ConfigMap)
return ok && cm.Namespace == configMapNamespace && contains(configMapNames, cm.Name)
},
DeleteFunc: func(e event.DeleteEvent) bool {
cm, ok := e.Object.(*corev1.ConfigMap)
return ok && cm.Namespace == configMapNamespace && contains(configMapNames, cm.Name)
},
}).
For(&corev1.ConfigMap{}).
Complete(r)
}
func contains(slice []string, val string) bool {
for _, s := range slice {
if s == val {
return true
}
}
return false
}

View File

@@ -248,15 +248,24 @@ func (r *WorkloadMonitorReconciler) reconcilePodForMonitor(
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("pod-%s", pod.Name),
Namespace: pod.Namespace,
Labels: map[string]string{},
},
}
metaLabels := r.getWorkloadMetadata(&pod)
_, err := ctrl.CreateOrUpdate(ctx, r.Client, workload, func() error {
// Update owner references with the new monitor
updateOwnerReferences(workload.GetObjectMeta(), monitor)
// Copy labels from the Pod if needed
workload.Labels = pod.Labels
for k, v := range pod.Labels {
workload.Labels[k] = v
}
// Add workload meta to labels
for k, v := range metaLabels {
workload.Labels[k] = v
}
// Fill Workload status fields:
workload.Status.Kind = monitor.Spec.Kind
@@ -433,3 +442,12 @@ func mapObjectToMonitor[T client.Object](_ T, c client.Client) func(ctx context.
return requests
}
}
func (r *WorkloadMonitorReconciler) getWorkloadMetadata(obj client.Object) map[string]string {
labels := make(map[string]string)
annotations := obj.GetAnnotations()
if instanceType, ok := annotations["kubevirt.io/cluster-instancetype-name"]; ok {
labels["workloads.cozystack.io/kubevirt-vmi-instance-type"] = instanceType
}
return labels
}

View File

@@ -16,10 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
version: 0.2.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "0.1.0"
appVersion: "0.2.0"

View File

@@ -0,0 +1 @@
../../../library/cozy-lib

View File

@@ -18,3 +18,14 @@ rules:
resourceNames:
- {{ .Release.Name }}-ui
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.9.0
version: 0.10.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -1,4 +1,4 @@
CLICKHOUSE_BACKUP_TAG = $(shell awk '$$1 == "version:" {print $$2}' Chart.yaml)
CLICKHOUSE_BACKUP_TAG = $(shell awk '$$0 ~ /^version:/ {print $$2}' Chart.yaml)
include ../../../scripts/common-envs.mk
include ../../../scripts/package.mk

View File

@@ -1,32 +1,35 @@
# Managed ClickHouse Service
ClickHouse is an open-source, high-performance, column-oriented SQL database management system (DBMS) used for online analytical processing (OLAP).
The Cozystack platform uses the Altinity operator to provide ClickHouse.
### How to restore a backup:
find snapshot:
```
restic -r s3:s3.example.org/clickhouse-backups/table_name snapshots
```
1. Find a snapshot:
```
restic -r s3:s3.example.org/clickhouse-backups/table_name snapshots
```
restore:
```
restic -r s3:s3.example.org/clickhouse-backups/table_name restore latest --target /tmp/
```
2. Restore it:
```
restic -r s3:s3.example.org/clickhouse-backups/table_name restore latest --target /tmp/
```
more details:
- https://itnext.io/restic-effective-backup-from-stdin-4bc1e8f083c1
For more details, read [Restic: Effective Backup from Stdin](https://blog.aenix.io/restic-effective-backup-from-stdin-4bc1e8f083c1).
## Parameters
### Common parameters
| Name | Description | Value |
| ---------------- | ----------------------------------- | ------ |
| `size` | Persistent Volume size | `10Gi` |
| `logStorageSize` | Persistent Volume for logs size | `2Gi` |
| `shards` | Number of Clickhouse replicas | `1` |
| `replicas` | Number of Clickhouse shards | `2` |
| `storageClass` | StorageClass used to store the data | `""` |
| `logTTL` | for query_log and query_thread_log | `15` |
| Name | Description | Value |
| ---------------- | -------------------------------------------------------- | ------ |
| `size` | Size of Persistent Volume for data | `10Gi` |
| `logStorageSize` | Size of Persistent Volume for logs | `2Gi` |
| `shards` | Number of Clickhouse shards | `1` |
| `replicas` | Number of Clickhouse replicas | `2` |
| `storageClass` | StorageClass used to store the data | `""` |
| `logTTL` | TTL (expiration time) for query_log and query_thread_log | `15` |
### Configuration parameters
@@ -36,15 +39,32 @@ more details:
### Backup parameters
| Name | Description | Value |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled` | Enable pereiodic backups | `false` |
| `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/clickhouse-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
| `resources` | Resources | `{}` |
| `resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
| Name | Description | Value |
| ------------------------ | --------------------------------------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled` | Enable periodic backups | `false` |
| `backup.s3Region` | AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | S3 bucket used for storing backups | `s3.example.org/clickhouse-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | Retention strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | Access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | Secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | Password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
| `resources` | Explicit CPU/memory resource requests and limits for the Clickhouse service | `{}` |
| `resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `nano` |
In production environments, it's recommended to set `resources` explicitly.
Example of `resources`:
```yaml
resources:
limits:
cpu: 4000m
memory: 4Gi
requests:
cpu: 100m
memory: 512Mi
```
Allowed values for `resourcesPreset` are `none`, `nano`, `micro`, `small`, `medium`, `large`, `xlarge`, `2xlarge`.
This value is ignored if `resources` value is set.

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/clickhouse-backup:0.9.0@sha256:3faf7a4cebf390b9053763107482de175aa0fdb88c1e77424fd81100b1c3a205
ghcr.io/cozystack/cozystack/clickhouse-backup:0.10.0@sha256:3faf7a4cebf390b9053763107482de175aa0fdb88c1e77424fd81100b1c3a205

View File

@@ -1,3 +1,5 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $clusterDomain := (index $cozyConfig.data "cluster-domain") | default "cozy.local" }}
{{- $existingSecret := lookup "v1" "Secret" .Release.Namespace (printf "%s-credentials" .Release.Name) }}
{{- $passwords := dict }}
{{- $users := .Values.users }}
@@ -32,7 +34,7 @@ kind: "ClickHouseInstallation"
metadata:
name: "{{ .Release.Name }}"
spec:
namespaceDomainPattern: "%s.svc.cozy.local"
namespaceDomainPattern: "%s.svc.{{ $clusterDomain }}"
defaults:
templates:
dataVolumeClaimTemplate: data-volume-template
@@ -92,6 +94,9 @@ spec:
templates:
volumeClaimTemplates:
- name: data-volume-template
metadata:
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
accessModes:
- ReadWriteOnce
@@ -99,6 +104,9 @@ spec:
requests:
storage: {{ .Values.size }}
- name: log-volume-template
metadata:
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
accessModes:
- ReadWriteOnce
@@ -107,6 +115,9 @@ spec:
storage: {{ .Values.logStorageSize }}
podTemplates:
- name: clickhouse-per-host
metadata:
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
affinity:
podAntiAffinity:
@@ -122,9 +133,9 @@ spec:
- name: clickhouse
image: clickhouse/clickhouse-server:24.9.2.42
{{- if .Values.resources }}
resources: {{- include "cozy-lib.resources.sanitize" .Values.resources | nindent 16 }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.resources $) | nindent 16 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "cozy-lib.resources.preset" .Values.resourcesPreset | nindent 16 }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.resourcesPreset $) | nindent 16 }}
{{- end }}
volumeMounts:
- name: data-volume-template
@@ -133,6 +144,9 @@ spec:
mountPath: /var/log/clickhouse-server
serviceTemplates:
- name: svc-template
metadata:
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
generateName: chendpoint-{chi}
spec:
ports:

View File

@@ -24,3 +24,14 @@ rules:
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -9,5 +9,5 @@ spec:
kind: clickhouse
type: clickhouse
selector:
clickhouse.altinity.com/chi: {{ $.Release.Name }}
app.kubernetes.io/instance: {{ $.Release.Name }}
version: {{ $.Chart.Version }}

View File

@@ -4,22 +4,22 @@
"properties": {
"size": {
"type": "string",
"description": "Persistent Volume size",
"description": "Size of Persistent Volume for data",
"default": "10Gi"
},
"logStorageSize": {
"type": "string",
"description": "Persistent Volume for logs size",
"description": "Size of Persistent Volume for logs",
"default": "2Gi"
},
"shards": {
"type": "number",
"description": "Number of Clickhouse replicas",
"description": "Number of Clickhouse shards",
"default": 1
},
"replicas": {
"type": "number",
"description": "Number of Clickhouse shards",
"description": "Number of Clickhouse replicas",
"default": 2
},
"storageClass": {
@@ -29,7 +29,7 @@
},
"logTTL": {
"type": "number",
"description": "for query_log and query_thread_log",
"description": "TTL (expiration time) for query_log and query_thread_log",
"default": 15
},
"backup": {
@@ -37,17 +37,17 @@
"properties": {
"enabled": {
"type": "boolean",
"description": "Enable pereiodic backups",
"description": "Enable periodic backups",
"default": false
},
"s3Region": {
"type": "string",
"description": "The AWS S3 region where backups are stored",
"description": "AWS S3 region where backups are stored",
"default": "us-east-1"
},
"s3Bucket": {
"type": "string",
"description": "The S3 bucket used for storing backups",
"description": "S3 bucket used for storing backups",
"default": "s3.example.org/clickhouse-backups"
},
"schedule": {
@@ -57,34 +57,34 @@
},
"cleanupStrategy": {
"type": "string",
"description": "The strategy for cleaning up old backups",
"description": "Retention strategy for cleaning up old backups",
"default": "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
},
"s3AccessKey": {
"type": "string",
"description": "The access key for S3, used for authentication",
"description": "Access key for S3, used for authentication",
"default": "oobaiRus9pah8PhohL1ThaeTa4UVa7gu"
},
"s3SecretKey": {
"type": "string",
"description": "The secret key for S3, used for authentication",
"description": "Secret key for S3, used for authentication",
"default": "ju3eum4dekeich9ahM1te8waeGai0oog"
},
"resticPassword": {
"type": "string",
"description": "The password for Restic backup encryption",
"description": "Password for Restic backup encryption",
"default": "ChaXoveekoh6eigh4siesheeda2quai0"
}
}
},
"resources": {
"type": "object",
"description": "Resources",
"description": "Explicit CPU/memory resource requests and limits for the Clickhouse service",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"description": "Use a common resources preset when `resources` is not set explicitly.",
"default": "nano"
}
}

View File

@@ -1,11 +1,11 @@
## @section Common parameters
## @param size Persistent Volume size
## @param logStorageSize Persistent Volume for logs size
## @param shards Number of Clickhouse replicas
## @param replicas Number of Clickhouse shards
## @param size Size of Persistent Volume for data
## @param logStorageSize Size of Persistent Volume for logs
## @param shards Number of Clickhouse shards
## @param replicas Number of Clickhouse replicas
## @param storageClass StorageClass used to store the data
## @param logTTL for query_log and query_thread_log
## @param logTTL TTL (expiration time) for query_log and query_thread_log
##
size: 10Gi
logStorageSize: 2Gi
@@ -29,14 +29,14 @@ users: {}
## @section Backup parameters
## @param backup.enabled Enable pereiodic backups
## @param backup.s3Region The AWS S3 region where backups are stored
## @param backup.s3Bucket The S3 bucket used for storing backups
## @param backup.enabled Enable periodic backups
## @param backup.s3Region AWS S3 region where backups are stored
## @param backup.s3Bucket S3 bucket used for storing backups
## @param backup.schedule Cron schedule for automated backups
## @param backup.cleanupStrategy The strategy for cleaning up old backups
## @param backup.s3AccessKey The access key for S3, used for authentication
## @param backup.s3SecretKey The secret key for S3, used for authentication
## @param backup.resticPassword The password for Restic backup encryption
## @param backup.cleanupStrategy Retention strategy for cleaning up old backups
## @param backup.s3AccessKey Access key for S3, used for authentication
## @param backup.s3SecretKey Secret key for S3, used for authentication
## @param backup.resticPassword Password for Restic backup encryption
backup:
enabled: false
s3Region: us-east-1
@@ -47,7 +47,7 @@ backup:
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
## @param resources Resources
## @param resources Explicit CPU/memory resource requests and limits for the Clickhouse service
resources: {}
# resources:
# limits:
@@ -56,6 +56,6 @@ resources: {}
# requests:
# cpu: 100m
# memory: 512Mi
## @param resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
## @param resourcesPreset Use a common resources preset when `resources` is not set explicitly.
resourcesPreset: "nano"

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.6.0
version: 0.7.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -0,0 +1 @@
../../../library/cozy-lib

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/postgres-backup:0.12.0@sha256:10179ed56457460d95cd5708db2a00130901255fa30c4dd76c65d2ef5622b61f
ghcr.io/cozystack/cozystack/postgres-backup:0.14.0@sha256:10179ed56457460d95cd5708db2a00130901255fa30c4dd76c65d2ef5622b61f

View File

@@ -24,3 +24,14 @@ rules:
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -2,6 +2,8 @@ apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
type: {{ ternary "LoadBalancer" "ClusterIP" .Values.external }}
{{- if .Values.external }}

View File

@@ -12,6 +12,7 @@ spec:
metadata:
labels:
app: {{ .Release.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
containers:
- name: ferretdb

View File

@@ -19,9 +19,9 @@ spec:
minSyncReplicas: {{ .Values.quorum.minSyncReplicas }}
maxSyncReplicas: {{ .Values.quorum.maxSyncReplicas }}
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 4 }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.resources $) | nindent 4 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 4 }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.resourcesPreset $) | nindent 4 }}
{{- end }}
monitoring:
enablePodMonitor: true
@@ -35,6 +35,7 @@ spec:
inheritedMetadata:
labels:
policy.cozystack.io/allow-to-apiserver: "true"
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.users }}
managed:

View File

@@ -9,5 +9,5 @@ spec:
kind: ferretdb
type: ferretdb
selector:
app: {{ $.Release.Name }}
app.kubernetes.io/instance: {{ $.Release.Name }}
version: {{ $.Chart.Version }}

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.5.0
version: 0.5.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -0,0 +1 @@
../../../library/cozy-lib

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/nginx-cache:0.5.0@sha256:158c35dd6a512bd14e86a423be5c8c7ca91ac71999c73cce2714e4db60a2db43
ghcr.io/cozystack/cozystack/nginx-cache:0.5.1@sha256:50ac1581e3100bd6c477a71161cb455a341ffaf9e5e2f6086802e4e25271e8af

View File

@@ -34,9 +34,9 @@ spec:
- image: haproxy:latest
name: haproxy
{{- if .Values.haproxy.resources }}
resources: {{- toYaml .Values.haproxy.resources | nindent 10 }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.haproxy.resources $) | nindent 10 }}
{{- else if ne .Values.haproxy.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.haproxy.resourcesPreset "Release" .Release) | nindent 10 }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.haproxy.resourcesPreset $) | nindent 10 }}
{{- end }}
ports:
- containerPort: 8080

View File

@@ -53,9 +53,9 @@ spec:
containers:
- name: nginx
{{- if $.Values.nginx.resources }}
resources: {{- toYaml $.Values.nginx.resources | nindent 10 }}
resources: {{- include "cozy-lib.resources.sanitize" (list $.Values.nginx.resources $) | nindent 10 }}
{{- else if ne $.Values.nginx.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" $.Values.nginx.resourcesPreset "Release" $.Release) | nindent 10 }}
resources: {{- include "cozy-lib.resources.preset" (list $.Values.nginx.resourcesPreset $) | nindent 10 }}
{{- end }}
image: "{{ $.Files.Get "images/nginx-cache.tag" | trim }}"
readinessProbe:

View File

@@ -0,0 +1,39 @@
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}-haproxy
spec:
replicas: {{ .Values.haproxy.replicas }}
minReplicas: 1
kind: http-cache
type: http-cache
selector:
app: {{ $.Release.Name }}-haproxy
version: {{ $.Chart.Version }}
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}-nginx
spec:
replicas: {{ .Values.nginx.replicas }}
minReplicas: 1
kind: http-cache
type: http-cache
selector:
app: {{ $.Release.Name }}-nginx-cache
version: {{ $.Chart.Version }}
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}
spec:
replicas: {{ .Values.replicas }}
minReplicas: 1
kind: http-cache
type: http-cache
selector:
app.kubernetes.io/instance: {{ $.Release.Name }}
version: {{ $.Chart.Version }}

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.6.0
version: 0.7.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -14,9 +14,9 @@
| `zookeeper.replicas` | Number of ZooKeeper replicas | `3` |
| `zookeeper.storageClass` | StorageClass used to store the ZooKeeper data | `""` |
| `kafka.resources` | Resources | `{}` |
| `kafka.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
| `kafka.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `small` |
| `zookeeper.resources` | Resources | `{}` |
| `zookeeper.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
| `zookeeper.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `micro` |
### Configuration parameters

View File

@@ -0,0 +1 @@
../../../library/cozy-lib

View File

@@ -25,3 +25,14 @@ rules:
- {{ .Release.Name }}
- {{ $.Release.Name }}-zookeeper
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -9,9 +9,9 @@ spec:
kafka:
replicas: {{ .Values.kafka.replicas }}
{{- if .Values.kafka.resources }}
resources: {{- toYaml .Values.kafka.resources | nindent 6 }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.kafka.resources $) | nindent 6 }}
{{- else if ne .Values.kafka.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.kafka.resourcesPreset "Release" .Release) | nindent 6 }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.kafka.resourcesPreset $) | nindent 6 }}
{{- end }}
listeners:
- name: plain
@@ -71,9 +71,9 @@ spec:
zookeeper:
replicas: {{ .Values.zookeeper.replicas }}
{{- if .Values.zookeeper.resources }}
resources: {{- toYaml .Values.zookeeper.resources | nindent 6 }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.zookeeper.resources $) | nindent 6 }}
{{- else if ne .Values.zookeeper.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.zookeeper.resourcesPreset "Release" .Release) | nindent 6 }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.zookeeper.resourcesPreset $) | nindent 6 }}
{{- end }}
storage:
type: persistent-claim

View File

@@ -33,7 +33,7 @@
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
"default": "small"
}
}
},
@@ -63,7 +63,7 @@
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
"default": "micro"
}
}
},

View File

@@ -25,7 +25,7 @@ kafka:
# memory: 512Mi
## @param kafka.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "nano"
resourcesPreset: "small"
zookeeper:
size: 5Gi
@@ -42,7 +42,7 @@ zookeeper:
# memory: 512Mi
## @param zookeeper.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "nano"
resourcesPreset: "micro"
## @section Configuration parameters

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.21.0
version: 0.24.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -81,12 +81,13 @@ See the reference for components utilized in this service:
### Common Parameters
| Name | Description | Value |
| ----------------------- | ----------------------------------------------------------------------------------------------------------------- | ------------ |
| `host` | Hostname used to access the Kubernetes cluster externally. Defaults to `<cluster-name>.<tenant-host>` when empty. | `""` |
| `controlPlane.replicas` | Number of replicas for Kubernetes control-plane components. | `2` |
| `storageClass` | StorageClass used to store user data. | `replicated` |
| `nodeGroups` | nodeGroups configuration | `{}` |
| Name | Description | Value |
| ----------------------------------- | ----------------------------------------------------------------------------------------------------------------- | ------------ |
| `host` | Hostname used to access the Kubernetes cluster externally. Defaults to `<cluster-name>.<tenant-host>` when empty. | `""` |
| `controlPlane.replicas` | Number of replicas for Kubernetes control-plane components. | `2` |
| `storageClass` | StorageClass used to store user data. | `replicated` |
| `useCustomSecretForPatchContainerd` | If true, containerd is patched using the custom secret {{ .Release.Name }}-patch-containerd | `false` |
| `nodeGroups` | nodeGroups configuration | `{}` |
### Cluster Addons
@@ -109,16 +110,16 @@ See the reference for components utilized in this service:
### Kubernetes Control Plane Configuration
| Name | Description | Value |
| -------------------------------------------------- | ---------------------------------------------------------------------------- | ------- |
| `controlPlane.apiServer.resources` | Explicit CPU/memory resource requests and limits for the API server. | `{}` |
| `controlPlane.apiServer.resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `small` |
| `controlPlane.controllerManager.resources` | Explicit CPU/memory resource requests and limits for the controller manager. | `{}` |
| `controlPlane.controllerManager.resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `micro` |
| `controlPlane.scheduler.resources` | Explicit CPU/memory resource requests and limits for the scheduler. | `{}` |
| `controlPlane.scheduler.resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `micro` |
| `controlPlane.konnectivity.server.resources` | Explicit CPU/memory resource requests and limits for the Konnectivity. | `{}` |
| `controlPlane.konnectivity.server.resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `micro` |
| Name | Description | Value |
| -------------------------------------------------- | ---------------------------------------------------------------------------- | -------- |
| `controlPlane.apiServer.resources` | Explicit CPU/memory resource requests and limits for the API server. | `{}` |
| `controlPlane.apiServer.resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `medium` |
| `controlPlane.controllerManager.resources` | Explicit CPU/memory resource requests and limits for the controller manager. | `{}` |
| `controlPlane.controllerManager.resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `micro` |
| `controlPlane.scheduler.resources` | Explicit CPU/memory resource requests and limits for the scheduler. | `{}` |
| `controlPlane.scheduler.resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `micro` |
| `controlPlane.konnectivity.server.resources` | Explicit CPU/memory resource requests and limits for the Konnectivity. | `{}` |
| `controlPlane.konnectivity.server.resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `micro` |
In production environments, it's recommended to set `resources` explicitly.
Example of `controlPlane.*.resources`:

View File

@@ -0,0 +1 @@
../../../library/cozy-lib

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/cluster-autoscaler:0.21.0@sha256:7605161dcb8ebab8070a9f277452f6f96b33c4184c4a57f9f57f9876367e5f1f
ghcr.io/cozystack/cozystack/cluster-autoscaler:0.24.0@sha256:3a8170433e1632e5cc2b6d9db34d0605e8e6c63c158282c38450415e700e932e

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/kubevirt-cloud-provider:0.21.0@sha256:4bee0705f0e1a6c93f4f5b343c988f2e05031844656ca0ff9ef12c88738687d3
ghcr.io/cozystack/cozystack/kubevirt-cloud-provider:0.24.0@sha256:b478952fab735f85c3ba15835012b1de8af5578b33a8a2670eaf532ffc17681e

View File

@@ -8,7 +8,7 @@ ENV GOARCH=$TARGETARCH
RUN git clone https://github.com/kubevirt/cloud-provider-kubevirt /go/src/kubevirt.io/cloud-provider-kubevirt \
&& cd /go/src/kubevirt.io/cloud-provider-kubevirt \
&& git checkout 443a1fe
&& git checkout a0acf33
WORKDIR /go/src/kubevirt.io/cloud-provider-kubevirt

View File

@@ -37,7 +37,7 @@ index 74166b5d9..4e744f8de 100644
klog.Infof("Initializing kubevirtEPSController")
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller.go b/pkg/controller/kubevirteps/kubevirteps_controller.go
index 6f6e3d322..b56882c12 100644
index 53388eb8e..b56882c12 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller.go
@@ -54,10 +54,10 @@ type Controller struct {
@@ -286,22 +286,6 @@ index 6f6e3d322..b56882c12 100644
for _, eps := range slicesToDelete {
err := c.infraClient.DiscoveryV1().EndpointSlices(eps.Namespace).Delete(context.TODO(), eps.Name, metav1.DeleteOptions{})
if err != nil {
@@ -474,11 +538,11 @@ func (c *Controller) reconcileByAddressType(service *v1.Service, tenantSlices []
// Create the desired port configuration
var desiredPorts []discovery.EndpointPort
- for _, port := range service.Spec.Ports {
+ for i := range service.Spec.Ports {
desiredPorts = append(desiredPorts, discovery.EndpointPort{
- Port: &port.TargetPort.IntVal,
- Protocol: &port.Protocol,
- Name: &port.Name,
+ Port: &service.Spec.Ports[i].TargetPort.IntVal,
+ Protocol: &service.Spec.Ports[i].Protocol,
+ Name: &service.Spec.Ports[i].Name,
})
}
@@ -588,55 +652,114 @@ func ownedBy(endpointSlice *discovery.EndpointSlice, svc *v1.Service) bool {
return false
}
@@ -437,18 +421,10 @@ index 6f6e3d322..b56882c12 100644
return nil
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller_test.go b/pkg/controller/kubevirteps/kubevirteps_controller_test.go
index 1fb86e25f..14d92d340 100644
index 1c97035b4..14d92d340 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller_test.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller_test.go
@@ -13,6 +13,7 @@ import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/intstr"
+ "k8s.io/apimachinery/pkg/util/sets"
dfake "k8s.io/client-go/dynamic/fake"
"k8s.io/client-go/kubernetes/fake"
"k8s.io/client-go/testing"
@@ -189,7 +190,7 @@ func setupTestKubevirtEPSController() *testKubevirtEPSController {
@@ -190,7 +190,7 @@ func setupTestKubevirtEPSController() *testKubevirtEPSController {
}: "VirtualMachineInstanceList",
})
@@ -457,83 +433,87 @@ index 1fb86e25f..14d92d340 100644
err := controller.Init()
if err != nil {
@@ -686,5 +687,229 @@ var _ = g.Describe("KubevirtEPSController", g.Ordered, func() {
return false, err
}).Should(BeTrue(), "EndpointSlice in infra cluster should be recreated by the controller after deletion")
})
+
+ g.It("Should correctly handle multiple unique ports in EndpointSlice", func() {
+ // Create a VMI in the infra cluster
+ createAndAssertVMI("worker-0-test", "ip-10-32-5-13", "123.45.67.89")
+
+ // Create an EndpointSlice in the tenant cluster
+ createAndAssertTenantSlice("test-epslice", "tenant-service-name", discoveryv1.AddressTypeIPv4,
+ *createPort("http", 80, v1.ProtocolTCP),
+ []discoveryv1.Endpoint{*createEndpoint("123.45.67.89", "worker-0-test", true, true, false)})
+
@@ -697,51 +697,43 @@ var _ = g.Describe("KubevirtEPSController", g.Ordered, func() {
*createPort("http", 80, v1.ProtocolTCP),
[]discoveryv1.Endpoint{*createEndpoint("123.45.67.89", "worker-0-test", true, true, false)})
- // Define several unique ports for the Service
+ // Define multiple ports for the Service
+ servicePorts := []v1.ServicePort{
+ {
servicePorts := []v1.ServicePort{
{
- Name: "client",
- Protocol: v1.ProtocolTCP,
- Port: 10001,
- TargetPort: intstr.FromInt(30396),
- NodePort: 30396,
- AppProtocol: nil,
+ Name: "client",
+ Protocol: v1.ProtocolTCP,
+ Port: 10001,
+ TargetPort: intstr.FromInt(30396),
+ NodePort: 30396,
+ },
+ {
},
{
- Name: "dashboard",
- Protocol: v1.ProtocolTCP,
- Port: 8265,
- TargetPort: intstr.FromInt(31003),
- NodePort: 31003,
- AppProtocol: nil,
+ Name: "dashboard",
+ Protocol: v1.ProtocolTCP,
+ Port: 8265,
+ TargetPort: intstr.FromInt(31003),
+ NodePort: 31003,
+ },
+ {
},
{
- Name: "metrics",
- Protocol: v1.ProtocolTCP,
- Port: 8080,
- TargetPort: intstr.FromInt(30452),
- NodePort: 30452,
- AppProtocol: nil,
+ Name: "metrics",
+ Protocol: v1.ProtocolTCP,
+ Port: 8080,
+ TargetPort: intstr.FromInt(30452),
+ NodePort: 30452,
+ },
+ }
+
+ createAndAssertInfraServiceLB("infra-multiport-service", "tenant-service-name", "test-cluster",
},
}
- // Create a Service with the first port
createAndAssertInfraServiceLB("infra-multiport-service", "tenant-service-name", "test-cluster",
- servicePorts[0],
- v1.ServiceExternalTrafficPolicyLocal)
+ servicePorts[0], v1.ServiceExternalTrafficPolicyLocal)
+
+ svc, err := testVals.infraClient.CoreV1().Services(infraNamespace).Get(context.TODO(), "infra-multiport-service", metav1.GetOptions{})
+ Expect(err).To(BeNil())
+
+ svc.Spec.Ports = servicePorts
+ _, err = testVals.infraClient.CoreV1().Services(infraNamespace).Update(context.TODO(), svc, metav1.UpdateOptions{})
+ Expect(err).To(BeNil())
+
+ var epsListMultiPort *discoveryv1.EndpointSliceList
+
+ Eventually(func() (bool, error) {
+ epsListMultiPort, err = testVals.infraClient.DiscoveryV1().EndpointSlices(infraNamespace).List(context.TODO(), metav1.ListOptions{})
+ if len(epsListMultiPort.Items) != 1 {
+ return false, err
+ }
+
+ createdSlice := epsListMultiPort.Items[0]
+ expectedPortNames := []string{"client", "dashboard", "metrics"}
+ foundPortNames := []string{}
+
+ for _, port := range createdSlice.Ports {
+ if port.Name != nil {
+ foundPortNames = append(foundPortNames, *port.Name)
+ }
+ }
+
+ if len(foundPortNames) != len(expectedPortNames) {
+ return false, err
+ }
+
+ portSet := sets.NewString(foundPortNames...)
+ expectedPortSet := sets.NewString(expectedPortNames...)
+ return portSet.Equal(expectedPortSet), err
+ }).Should(BeTrue(), "EndpointSlice should contain all unique ports from the Service without duplicates")
+ })
+
- // Update the Service by adding the remaining ports
svc, err := testVals.infraClient.CoreV1().Services(infraNamespace).Get(context.TODO(), "infra-multiport-service", metav1.GetOptions{})
Expect(err).To(BeNil())
svc.Spec.Ports = servicePorts
-
_, err = testVals.infraClient.CoreV1().Services(infraNamespace).Update(context.TODO(), svc, metav1.UpdateOptions{})
Expect(err).To(BeNil())
var epsListMultiPort *discoveryv1.EndpointSliceList
- // Verify that the EndpointSlice is created with correct unique ports
Eventually(func() (bool, error) {
epsListMultiPort, err = testVals.infraClient.DiscoveryV1().EndpointSlices(infraNamespace).List(context.TODO(), metav1.ListOptions{})
if len(epsListMultiPort.Items) != 1 {
@@ -758,7 +750,6 @@ var _ = g.Describe("KubevirtEPSController", g.Ordered, func() {
}
}
- // Verify that all expected ports are present and without duplicates
if len(foundPortNames) != len(expectedPortNames) {
return false, err
}
@@ -769,5 +760,156 @@ var _ = g.Describe("KubevirtEPSController", g.Ordered, func() {
}).Should(BeTrue(), "EndpointSlice should contain all unique ports from the Service without duplicates")
})
+ g.It("Should not panic when Service changes to have a non-nil selector, causing EndpointSlice deletion with no new slices to create", func() {
+ createAndAssertVMI("worker-0-test", "ip-10-32-5-13", "123.45.67.89")
+ createAndAssertTenantSlice("test-epslice", "tenant-service-name", discoveryv1.AddressTypeIPv4,
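One part of this patch worth calling out is the switch from `for _, port := range service.Spec.Ports` to `for i := range service.Spec.Ports` when building `desiredPorts`. Before Go 1.22, the range variable was a single reused variable, so taking the address of its fields made every `EndpointPort` alias the last port seen. A self-contained illustration under that assumption (hypothetical `port` type, not the real `v1.ServicePort`):

```go
package main

import "fmt"

type port struct{ Name string }

func main() {
	ports := []port{{"client"}, {"dashboard"}, {"metrics"}}

	// Buggy before Go 1.22: p is one reused variable, so every
	// pointer stored here aliases the same field.
	var bad []*string
	for _, p := range ports {
		bad = append(bad, &p.Name)
	}

	// The patch's fix: index into the slice so each pointer is distinct.
	var good []*string
	for i := range ports {
		good = append(good, &ports[i].Name)
	}

	fmt.Println(*bad[0], *bad[1], *bad[2])    // metrics metrics metrics (pre-1.22)
	fmt.Println(*good[0], *good[1], *good[2]) // client dashboard metrics
}
```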

View File

@@ -0,0 +1,142 @@
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller.go b/pkg/controller/kubevirteps/kubevirteps_controller.go
index 53388eb8e..28644236f 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller.go
@@ -12,7 +12,6 @@ import (
apiequality "k8s.io/apimachinery/pkg/api/equality"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
- "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
@@ -669,35 +668,50 @@ func (c *Controller) getDesiredEndpoints(service *v1.Service, tenantSlices []*di
for _, slice := range tenantSlices {
for _, endpoint := range slice.Endpoints {
// find all unique nodes that correspond to an endpoint in a tenant slice
+ if endpoint.NodeName == nil {
+ klog.Warningf("Skipping endpoint without NodeName in slice %s/%s", slice.Namespace, slice.Name)
+ continue
+ }
nodeSet.Insert(*endpoint.NodeName)
}
}
- klog.Infof("Desired nodes for service %s in namespace %s: %v", service.Name, service.Namespace, sets.List(nodeSet))
+ klog.Infof("Desired nodes for service %s/%s: %v", service.Namespace, service.Name, sets.List(nodeSet))
for _, node := range sets.List(nodeSet) {
// find vmi for node name
- obj := &unstructured.Unstructured{}
- vmi := &kubevirtv1.VirtualMachineInstance{}
-
- obj, err := c.infraDynamic.Resource(kubevirtv1.VirtualMachineInstanceGroupVersionKind.GroupVersion().WithResource("virtualmachineinstances")).Namespace(c.infraNamespace).Get(context.TODO(), node, metav1.GetOptions{})
+ obj, err := c.infraDynamic.
+ Resource(kubevirtv1.VirtualMachineInstanceGroupVersionKind.GroupVersion().WithResource("virtualmachineinstances")).
+ Namespace(c.infraNamespace).
+ Get(context.TODO(), node, metav1.GetOptions{})
if err != nil {
- klog.Errorf("Failed to get VirtualMachineInstance %s in namespace %s:%v", node, c.infraNamespace, err)
+ klog.Errorf("Failed to get VMI %q in namespace %q: %v", node, c.infraNamespace, err)
continue
}
+ vmi := &kubevirtv1.VirtualMachineInstance{}
err = runtime.DefaultUnstructuredConverter.FromUnstructured(obj.Object, vmi)
if err != nil {
klog.Errorf("Failed to convert Unstructured to VirtualMachineInstance: %v", err)
- klog.Fatal(err)
+ continue
}
+ if vmi.Status.NodeName == "" {
+ klog.Warningf("Skipping VMI %s/%s: NodeName is empty", vmi.Namespace, vmi.Name)
+ continue
+ }
+ nodeNamePtr := &vmi.Status.NodeName
+
ready := vmi.Status.Phase == kubevirtv1.Running
serving := vmi.Status.Phase == kubevirtv1.Running
terminating := vmi.Status.Phase == kubevirtv1.Failed || vmi.Status.Phase == kubevirtv1.Succeeded
for _, i := range vmi.Status.Interfaces {
if i.Name == "default" {
+ if i.IP == "" {
+ klog.Warningf("VMI %s/%s interface %q has no IP, skipping", vmi.Namespace, vmi.Name, i.Name)
+ continue
+ }
desiredEndpoints = append(desiredEndpoints, &discovery.Endpoint{
Addresses: []string{i.IP},
Conditions: discovery.EndpointConditions{
@@ -705,9 +719,9 @@ func (c *Controller) getDesiredEndpoints(service *v1.Service, tenantSlices []*di
Serving: &serving,
Terminating: &terminating,
},
- NodeName: &vmi.Status.NodeName,
+ NodeName: nodeNamePtr,
})
- continue
+ break
}
}
}
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller_test.go b/pkg/controller/kubevirteps/kubevirteps_controller_test.go
index 1c97035b4..d205d0bed 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller_test.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller_test.go
@@ -771,3 +771,55 @@ var _ = g.Describe("KubevirtEPSController", g.Ordered, func() {
})
})
+
+var _ = g.Describe("getDesiredEndpoints", func() {
+ g.It("should skip endpoints without NodeName and VMIs without NodeName or IP", func() {
+ // Setup controller
+ ctrl := setupTestKubevirtEPSController().controller
+
+ // Manually inject dynamic client content (1 VMI with missing NodeName)
+ vmi := createUnstructuredVMINode("vmi-without-node", "", "10.0.0.1") // empty NodeName
+ _, err := ctrl.infraDynamic.
+ Resource(kubevirtv1.VirtualMachineInstanceGroupVersionKind.GroupVersion().WithResource("virtualmachineinstances")).
+ Namespace(infraNamespace).
+ Create(context.TODO(), vmi, metav1.CreateOptions{})
+ Expect(err).To(BeNil())
+
+ // Create service
+ svc := createInfraServiceLB("test-svc", "test-svc", "test-cluster",
+ v1.ServicePort{
+ Name: "http",
+ Port: 80,
+ TargetPort: intstr.FromInt(8080),
+ Protocol: v1.ProtocolTCP,
+ },
+ v1.ServiceExternalTrafficPolicyLocal,
+ )
+
+ // One endpoint has nil NodeName, another is valid
+ nodeName := "vmi-without-node"
+ tenantSlice := &discoveryv1.EndpointSlice{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "slice",
+ Namespace: tenantNamespace,
+ Labels: map[string]string{
+ discoveryv1.LabelServiceName: "test-svc",
+ },
+ },
+ AddressType: discoveryv1.AddressTypeIPv4,
+ Endpoints: []discoveryv1.Endpoint{
+ { // should be skipped due to nil NodeName
+ Addresses: []string{"10.1.1.1"},
+ NodeName: nil,
+ },
+ { // will hit VMI without NodeName and also be skipped
+ Addresses: []string{"10.1.1.2"},
+ NodeName: &nodeName,
+ },
+ },
+ }
+
+ endpoints := ctrl.getDesiredEndpoints(svc, []*discoveryv1.EndpointSlice{tenantSlice})
+ Expect(endpoints).To(HaveLen(0), "Expected no endpoints due to missing NodeName or IP")
+ })
+})
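The guards added above boil down to one pattern: never dereference `endpoint.NodeName` (or trust `vmi.Status.NodeName` and interface IPs) without checking first, since tenant slices can legitimately carry endpoints with those fields unset. A minimal sketch of the guard, using hypothetical local types rather than the real `discovery/v1` ones:

```go
package main

import "fmt"

type endpoint struct {
	NodeName *string
}

// desiredNodes mirrors the patched loop: endpoints without a
// NodeName are skipped instead of dereferenced.
func desiredNodes(eps []endpoint) []string {
	var nodes []string
	for _, e := range eps {
		if e.NodeName == nil {
			continue // the old code did *e.NodeName here and panicked
		}
		nodes = append(nodes, *e.NodeName)
	}
	return nodes
}

func main() {
	n := "worker-0"
	fmt.Println(desiredNodes([]endpoint{{NodeName: nil}, {NodeName: &n}}))
	// Output: [worker-0]
}
```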

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.21.0@sha256:fb6d3ce9d6d948285a6d399c852e15259d6922162ec7c44177d2274243f59d1f
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.24.0@sha256:4d3728b2050d4e0adb00b9f4abbb4a020b29e1a39f24ca1447806fc81f110fa6

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/ubuntu-container-disk:v1.32@sha256:184b81529ae72684279799b12f436cc7a511d8ff5bd1e9a30478799c7707c625
ghcr.io/cozystack/cozystack/ubuntu-container-disk:v1.32@sha256:e53f2394c7aa76ad10818ffb945e40006cd77406999e47e036d41b8b0bf094cc

View File

@@ -121,21 +121,21 @@ metadata:
spec:
apiServer:
{{- if .Values.controlPlane.apiServer.resources }}
resources: {{- toYaml .Values.controlPlane.apiServer.resources | nindent 6 }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.controlPlane.apiServer.resources $) | nindent 6 }}
{{- else if ne .Values.controlPlane.apiServer.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.controlPlane.apiServer.resourcesPreset "Release" .Release) | nindent 6 }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.controlPlane.apiServer.resourcesPreset $) | nindent 6 }}
{{- end }}
controllerManager:
{{- if .Values.controlPlane.controllerManager.resources }}
resources: {{- toYaml .Values.controlPlane.controllerManager.resources | nindent 6 }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.controlPlane.controllerManager.resources $) | nindent 6 }}
{{- else if ne .Values.controlPlane.controllerManager.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.controlPlane.controllerManager.resourcesPreset "Release" .Release) | nindent 6 }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.controlPlane.controllerManager.resourcesPreset $) | nindent 6 }}
{{- end }}
scheduler:
{{- if .Values.controlPlane.scheduler.resources }}
resources: {{- toYaml .Values.controlPlane.scheduler.resources | nindent 6 }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.controlPlane.scheduler.resources $) | nindent 6 }}
{{- else if ne .Values.controlPlane.scheduler.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.controlPlane.scheduler.resourcesPreset "Release" .Release) | nindent 6 }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.controlPlane.scheduler.resourcesPreset $) | nindent 6 }}
{{- end }}
dataStoreName: "{{ $etcd }}"
addons:
@@ -146,9 +146,9 @@ spec:
server:
port: 8132
{{- if .Values.controlPlane.konnectivity.server.resources }}
resources: {{- toYaml .Values.controlPlane.konnectivity.server.resources | nindent 10 }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.controlPlane.konnectivity.server.resources $) | nindent 10 }}
{{- else if ne .Values.controlPlane.konnectivity.server.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.controlPlane.konnectivity.server.resourcesPreset "Release" .Release) | nindent 10 }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.controlPlane.konnectivity.server.resourcesPreset $) | nindent 10 }}
{{- end }}
kubelet:
cgroupfs: systemd
@@ -211,12 +211,25 @@ spec:
- ["LABEL=ephemeral", "/ephemeral"]
- ["/ephemeral/kubelet", "/var/lib/kubelet", "none", "bind,nofail"]
- ["/ephemeral/containerd", "/var/lib/containerd", "none", "bind,nofail"]
{{- $sec := lookup "v1" "Secret" $.Release.Namespace (printf "%s-patch-containerd" $.Release.Name) }}
{{- if $sec }}
files:
{{- range $key, $_ := $sec.data }}
- path: /etc/containerd/certs.d/{{ trimSuffix ".toml" $key }}/hosts.toml
contentFrom:
secret:
name: {{ $.Release.Name }}-patch-containerd
key: {{ $key }}
permissions: "0400"
{{- end }}
{{- end }}
preKubeadmCommands:
- sed -i 's|root:x:|root::|' /etc/passwd
- systemctl stop containerd.service
- mkdir -p /ephemeral/kubelet /ephemeral/containerd
- mount -o bind /ephemeral/kubelet /var/lib/kubelet
- mount -o bind /ephemeral/containerd /var/lib/containerd
- sudo sed -i '/\[plugins."io.containerd.grpc.v1.cri".registry\]/,/^\[/ s|^\(\s*config_path\s*=\s*\).*|\1"/etc/containerd/certs.d"|' /etc/containerd/config.toml
- systemctl start containerd.service
joinConfiguration:
nodeRegistration:

View File

@@ -0,0 +1,15 @@
{{- if not .Values.useCustomSecretForPatchContainerd }}
{{- $sourceSecret := lookup "v1" "Secret" "cozy-system" "patch-containerd" }}
{{- if $sourceSecret }}
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-patch-containerd
namespace: {{ .Release.Namespace }}
type: {{ $sourceSecret.type }}
data:
{{- range $key, $value := $sourceSecret.data }}
{{ printf "%s: %s" $key ($value | quote) | indent 2 }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -34,3 +34,14 @@ rules:
- {{ $.Release.Name }}-{{ $groupName }}
{{- end }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -17,6 +17,7 @@ spec:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
version: '>= 0.0.0-0'
kubeConfig:
secretRef:
name: {{ .Release.Name }}-admin-kubeconfig

View File

@@ -24,8 +24,8 @@ spec:
secretRef:
name: {{ .Release.Name }}-admin-kubeconfig
key: super-admin.svc
targetNamespace: cozy-monitoring-agents
storageNamespace: cozy-monitoring-agents
targetNamespace: cozy-monitoring
storageNamespace: cozy-monitoring
install:
createNamespace: true
timeout: "300s"

View File

@@ -1,4 +1,6 @@
{{- define "cozystack.defaultVPAValues" -}}
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $clusterDomain := (index $cozyConfig.data "cluster-domain") | default "cozy.local" }}
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $targetTenant := index $myNS.metadata.annotations "namespace.cozystack.io/monitoring" }}
vertical-pod-autoscaler:
@@ -13,7 +15,7 @@ vertical-pod-autoscaler:
metric-for-pod-labels: kube_pod_labels{job="kube-state-metrics", tenant="{{ .Release.Namespace }}", cluster="{{ .Release.Name }}"}[8d]
pod-name-label: pod
pod-namespace-label: namespace
prometheus-address: http://vmselect-shortterm.{{ $targetTenant }}.svc.cozy.local:8481/select/0/prometheus/
prometheus-address: http://vmselect-shortterm.{{ $targetTenant }}.svc.{{ $clusterDomain }}:8481/select/0/prometheus/
prometheus-cadvisor-job-name: cadvisor
resources:
limits:

View File

@@ -26,7 +26,7 @@
"resourcesPreset": {
"type": "string",
"description": "Use a common resources preset when `resources` is not set explicitly.",
"default": "small",
"default": "medium",
"enum": [
"none",
"nano",
@@ -127,6 +127,11 @@
"description": "StorageClass used to store user data.",
"default": "replicated"
},
"useCustomSecretForPatchContainerd": {
"type": "boolean",
"description": "if true, for patch containerd will be used secret: {{ .Release.Name }}-patch-containerd",
"default": false
},
"addons": {
"type": "object",
"properties": {

View File

@@ -3,9 +3,11 @@
## @param host Hostname used to access the Kubernetes cluster externally. Defaults to `<cluster-name>.<tenant-host>` when empty.
## @param controlPlane.replicas Number of replicas for Kubernetes control-plane components.
## @param storageClass StorageClass used to store user data.
## @param useCustomSecretForPatchContainerd If true, containerd is patched using the custom secret {{ .Release.Name }}-patch-containerd
##
host: ""
storageClass: replicated
useCustomSecretForPatchContainerd: false
## @param nodeGroups [object] nodeGroups configuration
##
@@ -121,7 +123,7 @@ controlPlane:
## cpu: 100m
## memory: 512Mi
##
resourcesPreset: "small"
resourcesPreset: "medium"
resources: {}
controllerManager:

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.7.0
version: 0.8.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -0,0 +1 @@
../../../library/cozy-lib

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/mariadb-backup:0.7.0@sha256:cfd1c37d8ad24e10681d82d6e6ce8a641b4602c1b0ffa8516ae15b4958bb12d4
ghcr.io/cozystack/cozystack/mariadb-backup:0.8.0@sha256:cfd1c37d8ad24e10681d82d6e6ce8a641b4602c1b0ffa8516ae15b4958bb12d4

View File

@@ -25,3 +25,14 @@ rules:
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -57,6 +57,11 @@ spec:
name: {{ .Release.Name }}-my-cnf
key: config
service:
metadata:
labels:
app.kubernetes.io/instance: {{ $.Release.Name }}
storage:
size: {{ .Values.size }}
resizeInUseVolumes: true
@@ -74,7 +79,7 @@ spec:
# type: LoadBalancer
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 4 }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.resources $) | nindent 4 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 4 }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.resourcesPreset $) | nindent 4 }}
{{- end }}

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.6.0
version: 0.7.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -0,0 +1 @@
../../../library/cozy-lib

View File

@@ -24,3 +24,14 @@ rules:
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -1,3 +1,5 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $clusterDomain := (index $cozyConfig.data "cluster-domain") | default "cozy.local" }}
{{- $passwords := dict }}
{{- range $user, $u := .Values.users }}
{{- if $u.password }}
@@ -45,12 +47,15 @@ spec:
- name: nats
image: nats:2.10.17-alpine
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 22 }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.resources $) | nindent 22 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 22 }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.resourcesPreset $) | nindent 22 }}
{{- end }}
fullnameOverride: {{ .Release.Name }}
config:
cluster:
routeURLs:
k8sClusterDomain: {{ $clusterDomain }}
{{- if or (gt (len $passwords) 0) (gt (len .Values.config.merge) 0) }}
merge:
{{- if gt (len $passwords) 0 }}

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.12.0
version: 0.14.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -0,0 +1 @@
../../../library/cozy-lib

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/postgres-backup:0.12.0@sha256:10179ed56457460d95cd5708db2a00130901255fa30c4dd76c65d2ef5622b61f
ghcr.io/cozystack/cozystack/postgres-backup:0.14.0@sha256:10179ed56457460d95cd5708db2a00130901255fa30c4dd76c65d2ef5622b61f

View File

@@ -1,2 +1,2 @@
FROM alpine:3.20
RUN apk add --no-cache postgresql16-client uuidgen restic
FROM alpine:3.22
RUN apk add --no-cache postgresql17-client uuidgen restic

View File

@@ -0,0 +1,11 @@
{{/*
Generate resource requirements based on the ResourceQuota named after the namespace.
*/}}
{{- define "postgresjobs.defaultResources" }}
cpu: "1"
memory: 512Mi
{{- end }}
{{- define "postgresjobs.resources" }}
resources:
{{ include "cozy-lib.resources.sanitize" (list (include "postgresjobs.defaultResources" $ | fromYaml) $) | nindent 2 }}
{{- end }}

View File

@@ -81,6 +81,7 @@ spec:
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true
{{- include "postgresjobs.resources" . | nindent 12 }}
volumes:
- name: scripts
secret:

View File

@@ -26,3 +26,14 @@ rules:
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -6,9 +6,9 @@ metadata:
spec:
instances: {{ .Values.replicas }}
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 4 }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.resources $) | nindent 4 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 4 }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.resourcesPreset $) | nindent 4 }}
{{- end }}
enableSuperuserAccess: true
@@ -44,6 +44,8 @@ spec:
inheritedMetadata:
labels:
policy.cozystack.io/allow-to-apiserver: "true"
app.kubernetes.io/name: postgres.apps.cozystack.io
app.kubernetes.io/instance: {{ $.Release.Name }}
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
@@ -55,6 +57,6 @@ spec:
kind: postgres
type: postgres
selector:
cnpg.io/cluster: {{ .Release.Name }}
cnpg.io/podRole: instance
app.kubernetes.io/name: postgres.apps.cozystack.io
app.kubernetes.io/instance: {{ $.Release.Name }}
version: {{ $.Chart.Version }}

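Two coordinated changes here: inheritedMetadata.labels is CloudNativePG's mechanism for stamping extra labels onto everything the operator creates for the Cluster, pods included, and the WorkloadMonitor selector switches from operator-internal cnpg.io labels to those chart-owned ones. The database pods therefore carry labels along these lines, which the monitor matches (release name is a placeholder):

    app.kubernetes.io/name: postgres.apps.cozystack.io
    app.kubernetes.io/instance: <release-name>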
View File

@@ -50,6 +50,7 @@ spec:
name: secret
- mountPath: /scripts
name: scripts
{{- include "postgresjobs.resources" . | nindent 8 }}
securityContext:
fsGroup: 26
runAsGroup: 26

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.6.0
version: 0.7.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -0,0 +1 @@
../../../library/cozy-lib

View File

@@ -27,3 +27,14 @@ rules:
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -12,9 +12,9 @@ spec:
type: LoadBalancer
{{- end }}
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 4 }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.resources $) | nindent 4 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 4 }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.resourcesPreset $) | nindent 4 }}
{{- end }}
override:
statefulSet:

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.7.0
version: 0.8.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -0,0 +1 @@
../../../library/cozy-lib

View File

@@ -28,3 +28,14 @@ rules:
- {{ .Release.Name }}-redis
- {{ .Release.Name }}-sentinel
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -26,22 +26,25 @@ spec:
sentinel:
replicas: 3
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 6 }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.resources $) | nindent 6 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 6 }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.resourcesPreset $) | nindent 6 }}
{{- end }}
redis:
replicas: {{ .Values.replicas }}
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 6 }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.resources $) | nindent 6 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 6 }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.resourcesPreset $) | nindent 6 }}
{{- end }}
{{- with .Values.size }}
storage:
persistentVolumeClaim:
metadata:
name: redisfailover-persistent-data
labels:
app.kubernetes.io/component: redis
app.kubernetes.io/instance: {{ $.Release.Name }}
spec:
accessModes:
- ReadWriteOnce

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.4.0
version: 0.4.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -0,0 +1 @@
../../../library/cozy-lib

View File

@@ -15,6 +15,7 @@ spec:
metadata:
labels:
app: {{ .Release.Name }}-haproxy
app.kubernetes.io/instance: {{ .Release.Name }}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
spec:
@@ -34,9 +35,9 @@ spec:
- image: haproxy:latest
name: haproxy
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 10 }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.resources $) | nindent 10 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 10 }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.resourcesPreset $) | nindent 10 }}
{{- end }}
ports:
{{- with .Values.httpAndHttps }}

View File

@@ -0,0 +1,13 @@
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}
spec:
replicas: {{ .Values.replicas }}
minReplicas: 1
kind: tcp-balancer
type: haproxy
selector:
app.kubernetes.io/instance: {{ $.Release.Name }}
version: {{ $.Chart.Version }}

View File

@@ -4,4 +4,4 @@ description: Separated tenant namespace
icon: /logos/tenant.svg
type: application
version: 1.9.2
version: 1.10.0

View File

@@ -0,0 +1 @@
../../../library/cozy-lib

View File

@@ -23,8 +23,8 @@ metadata:
namespace: {{ include "tenant.name" . }}
rules:
- apiGroups: [""]
resources: ["*"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
resources: ["pods", "services", "persistentvolumes", "endpoints", "events", "resourcequotas"]
verbs: ["get", "list", "watch"]
- apiGroups: ["networking.k8s.io"]
resources: ["ingresses"]
verbs: ["get", "list", "watch"]
@@ -94,7 +94,12 @@ rules:
- apiGroups:
- ""
resources:
- "*"
- pods
- services
- persistentvolumes
- endpoints
- events
- resourcequotas
verbs:
- get
- list
@@ -119,24 +124,7 @@ metadata:
name: {{ include "tenant.name" . }}-view
namespace: {{ include "tenant.name" . }}
subjects:
{{- if ne .Release.Namespace "tenant-root" }}
- kind: Group
name: tenant-root-view
apiGroup: rbac.authorization.k8s.io
{{- end }}
- kind: Group
name: {{ include "tenant.name" . }}-view
apiGroup: rbac.authorization.k8s.io
{{- if hasPrefix "tenant-" .Release.Namespace }}
{{- $parts := splitList "-" .Release.Namespace }}
{{- range $i, $v := $parts }}
{{- if ne $i 0 }}
- kind: Group
name: {{ join "-" (slice $parts 0 (add $i 1)) }}-view
apiGroup: rbac.authorization.k8s.io
{{- end }}
{{- end }}
{{- end }}
{{ include "cozy-lib.rbac.subjectsForTenant" (list "view" (include "tenant.name" .)) | nindent 2 }}
roleRef:
kind: Role
name: {{ include "tenant.name" . }}-view
@@ -165,7 +153,12 @@ rules:
- watch
- apiGroups: [""]
resources:
- "*"
- pods
- services
- persistentvolumes
- endpoints
- events
- resourcequotas
verbs:
- get
- list
@@ -184,6 +177,12 @@ rules:
verbs:
- get
- list
- apiGroups: ["subresources.kubevirt.io"]
resources:
- virtualmachineinstances/portforward
verbs:
- get
- update
- apiGroups:
- cozystack.io
resources:
@@ -196,24 +195,7 @@ metadata:
name: {{ include "tenant.name" . }}-use
namespace: {{ include "tenant.name" . }}
subjects:
{{- if ne .Release.Namespace "tenant-root" }}
- kind: Group
name: tenant-root-use
apiGroup: rbac.authorization.k8s.io
{{- end }}
- kind: Group
name: {{ include "tenant.name" . }}-use
apiGroup: rbac.authorization.k8s.io
{{- if hasPrefix "tenant-" .Release.Namespace }}
{{- $parts := splitList "-" .Release.Namespace }}
{{- range $i, $v := $parts }}
{{- if ne $i 0 }}
- kind: Group
name: {{ join "-" (slice $parts 0 (add $i 1)) }}-use
apiGroup: rbac.authorization.k8s.io
{{- end }}
{{- end }}
{{- end }}
{{ include "cozy-lib.rbac.subjectsForTenant" (list "use" (include "tenant.name" .)) | nindent 2 }}
roleRef:
kind: Role
name: {{ include "tenant.name" . }}-use
@@ -234,7 +216,12 @@ rules:
- get
- apiGroups: [""]
resources:
- "*"
- pods
- services
- persistentvolumes
- endpoints
- events
- resourcequotas
verbs:
- get
- list
@@ -253,6 +240,12 @@ rules:
verbs:
- get
- list
- apiGroups: ["subresources.kubevirt.io"]
resources:
- virtualmachineinstances/portforward
verbs:
- get
- update
- apiGroups: ["apps.cozystack.io"]
resources:
- buckets
@@ -293,24 +286,7 @@ metadata:
name: {{ include "tenant.name" . }}-admin
namespace: {{ include "tenant.name" . }}
subjects:
{{- if ne .Release.Namespace "tenant-root" }}
- kind: Group
name: tenant-root-admin
apiGroup: rbac.authorization.k8s.io
{{- end }}
- kind: Group
name: {{ include "tenant.name" . }}-admin
apiGroup: rbac.authorization.k8s.io
{{- if hasPrefix "tenant-" .Release.Namespace }}
{{- $parts := splitList "-" .Release.Namespace }}
{{- range $i, $v := $parts }}
{{- if ne $i 0 }}
- kind: Group
name: {{ join "-" (slice $parts 0 (add $i 1)) }}-admin
apiGroup: rbac.authorization.k8s.io
{{- end }}
{{- end }}
{{- end }}
{{ include "cozy-lib.rbac.subjectsForTenant" (list "admin" (include "tenant.name" .)) | nindent 2 }}
roleRef:
kind: Role
name: {{ include "tenant.name" . }}-admin
@@ -331,7 +307,12 @@ rules:
- get
- apiGroups: [""]
resources:
- "*"
- pods
- services
- persistentvolumes
- endpoints
- events
- resourcequotas
verbs:
- get
- list
@@ -349,6 +330,12 @@ rules:
verbs:
- get
- list
- apiGroups: ["subresources.kubevirt.io"]
resources:
- virtualmachineinstances/portforward
verbs:
- get
- update
- apiGroups: ["apps.cozystack.io"]
resources:
- '*'
@@ -366,24 +353,7 @@ metadata:
name: {{ include "tenant.name" . }}-super-admin
namespace: {{ include "tenant.name" . }}
subjects:
{{- if ne .Release.Namespace "tenant-root" }}
- kind: Group
name: tenant-root-super-admin
apiGroup: rbac.authorization.k8s.io
{{- end }}
- kind: Group
name: {{ include "tenant.name" . }}-super-admin
apiGroup: rbac.authorization.k8s.io
{{- if hasPrefix "tenant-" .Release.Namespace }}
{{- $parts := splitList "-" .Release.Namespace }}
{{- range $i, $v := $parts }}
{{- if ne $i 0 }}
- kind: Group
name: {{ join "-" (slice $parts 0 (add $i 1)) }}-super-admin
apiGroup: rbac.authorization.k8s.io
{{- end }}
{{- end }}
{{- end }}
{{ include "cozy-lib.rbac.subjectsForTenant" (list "super-admin" (include "tenant.name" .) ) | nindent 2 }}
roleRef:
kind: Role
name: {{ include "tenant.name" . }}-super-admin

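Three things happen in this tenant-chart diff. First, the blanket core-API grant (resources: "*") is narrowed to an explicit list (pods, services, persistentvolumes, endpoints, events, resourcequotas) at every access level. Second, the four near-identical subjects blocks, each splitting the namespace on "-" to reconstruct parent-tenant groups, collapse into single calls to cozy-lib.rbac.subjectsForTenant, which centralizes that hierarchy walk (see the rendered-output sketch after the dashboard RoleBinding above). Third, the use, admin, and super-admin roles gain the virtualmachineinstances/portforward subresource, which is what KubeVirt's API server checks when tunnelling traffic to a VM; with it, a tenant user can run something like (VM name and ports are placeholders):

    virtctl port-forward vm/my-vm 2222:22   # forward local port 2222 to the VM's SSH port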
Some files were not shown because too many files have changed in this diff.