Compare commits

...

48 Commits

Author SHA1 Message Date
kklinch0
b1ed061de9 [bugfix] up pgdump version
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-06-19 10:59:46 +03:00
Andrei Kvapil
4479ed5e95 [bugfix] fix monitoring agents hr for tenant clusters (#1079)

## Summary by CodeRabbit

- **Bug Fixes**
  - Updated monitoring agents to use the correct namespaces for deployment and data storage.

- **Chores**
  - Bumped the Kubernetes chart version to 0.24.1.
  - Updated the versions map to reflect the latest chart version and commit references.

2025-06-19 09:03:59 +02:00
kklinch0
b16e73ad42 [bugfix] fix monitoring agents hr for tenant clusters
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-06-18 18:38:54 +03:00
Andrei Kvapil
4631f85114 Split testing job into several (#1075)
This patch separates the Test job of the PR workflow into several
smaller jobs: 1) create a testing sandbox and deploy Talos, 2) install
Cozystack and configure it, 3) install managed applications and run e2e
tests. This lets developers shorten the feedback loop if tests are
merely acting flaky and aren't really broken. It's not the right way,
but it's 80/20.
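
The resulting job chain can be sketched as follows. Job names and `make` targets are taken from the workflow diff included further down in this comparison; the sketch is condensed and omits the checkout, artifact-download, and sandbox-ID steps:

```yaml
# Condensed sketch of the split PR workflow, not the literal file.
jobs:
  prepare_env:            # 1) create the testing sandbox and deploy Talos
    runs-on: [self-hosted]
    needs: build
    steps:
      - run: make SANDBOX_NAME=$SANDBOX_NAME prepare-env
  install_cozystack:      # 2) install Cozystack and configure it
    runs-on: [self-hosted]
    needs: prepare_env
    steps:
      - run: make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME install-cozystack
  test_apps:              # 3) install managed applications and run e2e tests
    runs-on: [self-hosted]
    needs: install_cozystack
    steps:
      - run: make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME test-apps
  cleanup:                # tear down the sandbox afterwards
    runs-on: [self-hosted]
    needs: test_apps
    steps:
      - run: make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME delete
```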

## Summary by CodeRabbit

- **New Features**
  - Introduced a multi-stage workflow for environment preparation, Cozystack installation, application testing, and cleanup.
  - Added automated end-to-end scripts for provisioning Talos clusters and validating Cozystack installations.
  - Added new Makefile targets to streamline cluster preparation and Cozystack installation processes.
- **Bug Fixes**
  - Removed obsolete annotation step in application testing to improve resource handling.
  - Added pre-checks and resource cleanup in application testing to enhance test reliability.
- **Chores**
  - Improved workflow structure for enhanced setup and testing reliability.
2025-06-18 13:42:25 +02:00
Timofei Larkin
746641e523 Split testing job into several
This patch separates the Test job of the PR workflow into several
smaller jobs: 1) create a testing sandbox and deploy Talos, 2) install
Cozystack and configure it, 3) install managed applications and run e2e
tests. This lets developers shorten the feedback loop if tests are
merely acting flaky and aren't really broken. It's not the right way,
but it's 80/20.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-17 18:47:09 +03:00
Andrei Kvapil
3ce6dbe850 Release v0.32.0 (#1074)
This PR prepares the release `v0.32.0`.
2025-06-17 11:30:28 +02:00
Andrei Kvapil
8d5007919f [tests] fix awaiting for vm-disk
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-17 10:32:59 +02:00
github-actions
08e569918b Prepare release v0.32.0
Signed-off-by: github-actions <github-actions@github.com>
2025-06-16 23:54:35 +00:00
Andrei Kvapil
6498000721 [tests] VM Disk, VMI, VM, DBs (#1048)
Add 'Apps' tests for:
- Virtual Machine Disk
- Virtual Machine Instance
- Virtual Machine
- PostgreSQL
- MySQL
- ClickHouse

## Summary by CodeRabbit

- **Tests**
  - Added new end-to-end tests for creating and validating VM disks, VM instances, virtual machines, and multiple database types (PostgreSQL, MySQL, ClickHouse), ensuring correct provisioning and readiness of these resources.
2025-06-17 01:50:32 +02:00
Andrei Kvapil
8486e6b3aa [kubernetes] Fixes for resources and migration to v0.32.4 (#1073)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-17 01:38:17 +02:00
Andrei Kvapil
3f6b6798f4 [kubernetes] Fixes for resources and migration to v0.32.4
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-17 01:34:54 +02:00
Andrei Kvapil
c1b928b8ef [cluster-api] Add missing migration for capi-providers (#1072)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>



## Summary by CodeRabbit

- **Chores**
  - Introduced a new migration script to update the system version and manage related resources during the upgrade from version 14 to 15.

2025-06-17 01:34:11 +02:00
Andrei Kvapil
c2e8fba483 [cluster-api] Add missing migration for capi-providers
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-17 01:33:58 +02:00
Andrei Kvapil
62cb694d72 Release v0.32.0-beta.2 (#1049)
This PR prepares the release `v0.32.0-beta.2`.
2025-06-16 21:36:04 +02:00
github-actions
c619343aa2 Prepare release v0.32.0-beta.2
Signed-off-by: github-actions <github-actions@github.com>
2025-06-16 19:06:14 +00:00
Ahmad Murzahmatov
75ad26989d [tests] VM Disk, VMI, VM
Add 'Apps' tests for:
- Virtual Machine Disk
- Virtual Machine Instance
- Virtual Machine
- PostgreSQL
- MySQL
- ClickHouse

Signed-off-by: Ahmad Murzahmatov <gwynbleidd2106@yandex.com>
2025-06-16 21:00:22 +02:00
Andrei Kvapil
c4fc8c18df Use library chart with k8s managed app (#1026)

## Summary by CodeRabbit

- **Refactor**
  - Updated resource configuration rendering in cluster templates to use standardized resource handling from a shared library, improving consistency in resource definitions.

2025-06-16 20:55:57 +02:00
Timofei Larkin
8663dc940f Use library chart with k8s managed app
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-16 20:54:30 +02:00
Andrei Kvapil
cf983a8f9c [dashboard] Remove dependency on listing secrets (#1062)
This change includes the following commit
6856b66f92

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>



## Summary by CodeRabbit

- **Chores**
  - Updated the version of a core dependency used in the dashboard and related services to a newer commit. No user-facing changes.

2025-06-16 20:48:01 +02:00
Andrei Kvapil
ad6aa0ca94 Refactor roles and permissions for tenants (#1067)
## Summary by CodeRabbit

- **New Features**
  - Introduced advanced Helm template helpers for managing Kubernetes RBAC (Role-Based Access Control), including access level mapping, hierarchy-aware group subject generation, and tenant parsing.
  - Added dynamic RoleBinding resources across multiple applications to bind roles to appropriate subjects based on access levels and tenant namespaces.
- **Bug Fixes**
  - Refined tenant application roles by restricting resource permissions to specific core Kubernetes resources, enhancing security and access control granularity.
- **Chores**
  - Updated chart versions across numerous applications to reflect new releases.
  - Added reference files linking to the shared library in multiple application chart directories.
  - Pinned package versions to specific commits for improved version stability and tracking.
2025-06-16 20:47:09 +02:00
Andrei Kvapil
9dc5d62f47 [dashboard] Remove dependency on listing secrets
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-16 20:32:51 +02:00
Andrei Kvapil
3b8a9f9d2c Configure all apps to use new function to generate subjects
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-16 20:32:11 +02:00
Andrei Kvapil
ab9926a177 Update cozypkg v1.1.0 (#1063)
2025-06-16 20:12:21 +02:00
Andrei Kvapil
f83741eb09 Add extra helper function to generate subjects
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-16 20:11:41 +02:00
Timofei Larkin
028f2e4e8d Add helper function to generate subjects
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-16 19:19:57 +03:00
Andrei Kvapil
255fa8cbe1 [docs] Review the Clickhouse app docs (#1059)

## Summary by CodeRabbit

- **Documentation**
  - Improved and clarified documentation for the Managed ClickHouse Service, including enhanced introductory content and clearer backup instructions.
  - Updated and corrected parameter descriptions for accuracy, especially regarding shards, replicas, storage sizes, and backup options.
  - Expanded explanations and examples for resource configuration in production environments.
  - Reformatted tables and notes for better readability and usability.

2025-06-16 18:14:02 +02:00
Andrei Kvapil
b42f5cdc01 [bugfix] fix distro full bundle (#1056)

## Summary by CodeRabbit

- **New Features**
  - Added a new template to automatically create a self-signed ClusterIssuer for certificate management if one does not already exist (a minimal sketch follows this summary).
- **Chores**
  - Updated dependency configuration for the snapshot-controller to simplify its setup process.

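A self-signed ClusterIssuer in cert-manager takes only a few lines of YAML; a minimal sketch (the resource name here is illustrative, not necessarily what the bundle template uses):

```yaml
# Minimal self-signed ClusterIssuer for cert-manager.
# The name is hypothetical; the actual template may differ.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}  # issues self-signed certificates; no CA secret required
```
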
2025-06-16 18:13:44 +02:00
Andrei Kvapil
74633ad699 Update cozypkg v1.1.0
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-16 18:11:27 +02:00
Nick Volynkin
980185ca2b [docs] Review the Clickhouse app docs
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-06-16 08:40:46 +03:00
Andrei Kvapil
8eabe30548 [platform] Use cozypkg instead of helm (#1057)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


## Summary by CodeRabbit

- **New Features**
  - Introduced the use of the CozyPkg tool for package deployment and management, replacing previous Helm-based workflows across installer, platform, and system components.

- **Refactor**
  - Updated Makefiles and scripts to use CozyPkg commands for showing, applying, diffing, suspending, resuming, and deleting packages.
  - Removed dynamic API version handling and simplified deployment command structures.

- **Chores**
  - Updated Docker images to newer base versions and included CozyPkg installation steps.
  - Changed installer image references to use the latest available build.
  - Removed obsolete scripts and dependencies related to Helm and Kustomize.
  - Consolidated package installations and updated tooling in Dockerfiles for improved efficiency.
2025-06-14 20:50:12 +02:00
Andrei Kvapil
0c9c688e6d [platform] decrease resources for system applications (#1054)

## Summary by CodeRabbit

- **New Features**
  - Added resource constraints for the flux-operator and multiple kube-ovn components, specifying CPU and memory requests and limits.

- **Improvements**
  - Reduced default minimum CPU and memory requests for monitoring and seaweedfs components, as well as for the Redis master in the dashboard, to optimize resource usage.

- **Chores**
  - Updated version numbers for monitoring and seaweedfs packages.
  - Refreshed version mappings to reflect new releases.

2025-06-14 08:21:32 +02:00
Andrei Kvapil
908c75927e [platform] Use cozypkg instead of helm
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-13 19:02:15 +02:00
Andrei Kvapil
0a1f078384 [docs] Note that Cozystack is a CNCF Sandbox project in the readme (#1055)
Fix a few other things in the readme


## Summary by CodeRabbit

- **Documentation**
  - Updated the README to highlight Cozystack's CNCF Sandbox status and original sponsorship.
  - Moved the user interface screenshot to appear directly after the introduction.
  - Reorganized community information into a dedicated section with clearer invitations and calendar links for meetings.

2025-06-13 12:58:44 +02:00
kklinch0
6a713e5eb4 [bugfix] fix distro full bundle
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-06-13 10:59:14 +03:00
Nick Volynkin
8f0a28bad5 [docs] Note that Cozystack is a CNCF Sandbox project in the readme
Fix a few other things in the readme

Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-06-13 09:47:11 +03:00
kklinch0
0fa70d9d38 [platform] cut resources
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-06-13 01:06:05 +03:00
klinch0
b14c82d606 [bugfix] add-resource-quotas-for-pg-jobs-and-fix-install-generate (#1051)
## Summary by CodeRabbit

- **New Features**
  - Added default resource specifications for PostgreSQL jobs to ensure consistent CPU and memory allocation.
- **Chores**
  - Updated the chart version for the PostgreSQL application.
  - Refreshed version mapping to reflect the latest release.
  - Improved Node.js setup and package installation in the pre-commit workflow.
- **Tests**
  - Increased memory allocation for QEMU virtual machines in end-to-end tests.
2025-06-12 10:10:26 +03:00
kklinch0
8e79f24c5b add rq for pg
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-06-11 22:48:09 +03:00
Andrei Kvapil
3266a5514e Get instance type when reconciling WorkloadMonitor (#1030)
When the WorkloadMonitor is reconciled and child Workload objects are
created, they will now get additional labels in the
`workloads.cozystack.io` namespace, containing metadata about the
workload. This particular commit checks if a pod targeted by a Workload
is owned by a VirtualMachineInstance (i.e. it launches a KubeVirt VMI)
and, if so, gets the VMI instance type and puts it in the
`kubevirt-vmi-instance-type` label.
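
Going by the annotation and label keys visible in the controller diff at the end of this comparison, the resulting Workload for a VMI pod would look roughly like this (the apiVersion, object names, and instance type are illustrative assumptions; only the label and annotation keys come from the diff):

```yaml
# Sketch of a Workload labeled by the reconciler.
# apiVersion and names are assumptions; label keys match the diff below.
apiVersion: cozystack.io/v1alpha1
kind: Workload
metadata:
  name: pod-virt-launcher-myvm-abcde
  namespace: tenant-example
  labels:
    # value copied from the pod owner VMI's annotation
    # kubevirt.io/cluster-instancetype-name
    workloads.cozystack.io/kubevirt-vmi-instance-type: u1.medium
```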

## Summary by CodeRabbit

- **New Features**
  - Workload objects created for Pods now include additional labels extracted from their owner references, specifically for VirtualMachineInstance resources.
  - If a VirtualMachineInstance has a relevant annotation, its instance type is now reflected as a label on the associated Workload.
- **Chores**
  - Updated and added several dependencies to improve compatibility and maintainability.
2025-06-11 12:55:42 +02:00
Andrei Kvapil
0c37323a15 [kubernetes] Update Kubevirt-CCM (#1052)
Fixes panic, upstream issue:

- https://github.com/kubevirt/cloud-provider-kubevirt/pull/354


## Summary by CodeRabbit

- **Bug Fixes**
  - Improved filtering and error handling for endpoints and virtual machines with missing or invalid data, ensuring only valid endpoints are processed.
- **New Features**
  - Enhanced support for multi-cluster environments by introducing cluster name filtering for service and endpoint management.
- **Tests**
  - Added new tests to verify correct handling of endpoints and services across clusters and improved coverage for edge cases.
- **Chores**
  - Updated Kubernetes app and image versions for improved tracking and deployment consistency.

2025-06-11 12:55:06 +02:00
Andrei Kvapil
10af98e158 [kubernetes] Update Kubevirt-CCM
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-11 10:30:13 +02:00
Andrei Kvapil
632224a30a Update Kube-OVN v1.13.13 and enable db healthcheck (#1047)
This PR updates Kube-OVN to the latest version and also includes fix
https://github.com/kubeovn/kube-ovn/pull/5294

Ref
https://github.com/kubeovn/kube-ovn/issues/5125#issuecomment-2921920661

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-10 13:56:31 +02:00
Andrei Kvapil
e8d11e64a6 Update Metallb v0.15.2 (#1045)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


## Summary by CodeRabbit

- **New Features**
  - Added new configuration options to exclude specific address pools from Prometheus alerts for address pool exhaustion and usage.
  - Introduced a new CRD for ServiceBGPStatus to provide detailed BGP peer status per service and node.
  - Added new status fields to track assigned and available IPv4/IPv6 addresses in IPAddressPool.

- **Improvements**
  - Updated Helm chart and dependency versions to the latest releases.
  - Enhanced validation for speaker configuration to prevent invalid settings.
  - Clarified configuration descriptions for easier understanding.
  - Increased file descriptor limits for FRR daemons to improve reliability.
  - Simplified Docker image handling by using pre-built MetalLB images instead of local builds.

- **Bug Fixes**
  - Updated RBAC roles to grant necessary permissions for new resources and status updates.
2025-06-10 13:36:40 +02:00
Andrei Kvapil
27c7a2feb5 Update Cilium v1.17.4 (#1046)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>



## Summary by CodeRabbit

- **New Features**
  - Added a new configuration option to require Kubernetes connectivity in liveness probes.
  - Enabled Kafka API key redaction by default in Hubble settings.

- **Bug Fixes**
  - Improved conditional logic for resource creation to prevent unnecessary resources during preflight mode.
  - Corrected YAML indentation and formatting in configuration files.

- **Chores**
  - Upgraded Cilium and related component images from version 1.17.3 to 1.17.4.
  - Updated documentation and default configuration values to reflect new versions and settings.

2025-06-10 11:53:33 +02:00
Andrei Kvapil
9733de38a3 Update Kube-OVN v1.13.13 and enable db healthcheck
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-10 11:33:19 +02:00
Andrei Kvapil
775a05cc3a Update Metallb v0.15.2
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-10 11:13:36 +02:00
Andrei Kvapil
4e5cc2ae61 Update Cilium v1.17.4
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-10 11:03:47 +02:00
Timofei Larkin
911ca64de0 Get instance type when reconciling WorkloadMonitor
When the WorkloadMonitor is reconciled and child Workload objects are
created, they will now get additional labels in the
`workloads.cozystack.io` namespace, containing metadata about the
workload. This particular commit checks if a pod targeted by a Workload
is owned by a VirtualMachineInstance (i.e. it launches a KubeVirt VMI)
and, if so, gets the VMI instance type and puts it in the
`kubevirt-vmi-instance-type` label.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-10 11:17:40 +03:00
152 changed files with 2187 additions and 692 deletions

View File

@@ -30,12 +30,13 @@ jobs:
run: |
sudo apt update
sudo apt install curl -y
curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash -
sudo apt install nodejs -y
git clone https://github.com/bitnami/readme-generator-for-helm
sudo apt install npm -y
git clone --branch 2.7.0 --depth 1 https://github.com/bitnami/readme-generator-for-helm.git
cd ./readme-generator-for-helm
npm install
npm install -g pkg
npm install -g @yao-pkg/pkg
pkg . -o /usr/local/bin/readme-generator
- name: Run pre-commit hooks

View File

@@ -56,8 +56,8 @@ jobs:
name: talos-image
path: _out/assets/nocloud-amd64.raw.xz
test:
name: Test
prepare_env:
name: Prepare environment
runs-on: [self-hosted]
needs: build
@@ -66,6 +66,12 @@ jobs:
!contains(github.event.pull_request.labels.*.name, 'release')
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
fetch-tags: true
- name: Download installer
uses: actions/download-artifact@v4
with:
@@ -78,5 +84,74 @@ jobs:
name: talos-image
path: _out/assets/
- name: Test
run: make test
- name: Set sandbox ID
run: echo "SANDBOX_NAME=cozy-e2e-sandbox-$(echo "${GITHUB_REPOSITORY}:${GITHUB_WORKFLOW}:${GITHUB_REF}" | sha256sum | cut -c1-10)" >> $GITHUB_ENV
- name: Prepare environment
run: make SANDBOX_NAME=$SANDBOX_NAME prepare-env
install_cozystack:
name: Install Cozystack
runs-on: [self-hosted]
needs: prepare_env
# Never run when the PR carries the "release" label.
if: |
!contains(github.event.pull_request.labels.*.name, 'release')
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
fetch-tags: true
- name: Set sandbox ID
run: echo "SANDBOX_NAME=cozy-e2e-sandbox-$(echo "${GITHUB_REPOSITORY}:${GITHUB_WORKFLOW}:${GITHUB_REF}" | sha256sum | cut -c1-10)" >> $GITHUB_ENV
- name: Install Cozystack
run: make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME install-cozystack
test_apps:
name: Test applications
runs-on: [self-hosted]
needs: install_cozystack
# Never run when the PR carries the "release" label.
if: |
!contains(github.event.pull_request.labels.*.name, 'release')
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
fetch-tags: true
- name: Set sandbox ID
run: echo "SANDBOX_NAME=cozy-e2e-sandbox-$(echo "${GITHUB_REPOSITORY}:${GITHUB_WORKFLOW}:${GITHUB_REF}" | sha256sum | cut -c1-10)" >> $GITHUB_ENV
- name: E2E Apps
run: make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME test-apps
cleanup:
name: Tear down environment
runs-on: [self-hosted]
needs: test_apps
# Never run when the PR carries the "release" label.
if: |
!contains(github.event.pull_request.labels.*.name, 'release')
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
fetch-tags: true
- name: Set sandbox ID
run: echo "SANDBOX_NAME=cozy-e2e-sandbox-$(echo "${GITHUB_REPOSITORY}:${GITHUB_WORKFLOW}:${GITHUB_REF}" | sha256sum | cut -c1-10)" >> $GITHUB_ENV
- name: E2E Apps
run: make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME delete

View File

@@ -49,6 +49,10 @@ test:
make -C packages/core/testing apply
make -C packages/core/testing test
prepare-env:
make -C packages/core/testing apply
make -C packages/core/testing prepare-cluster
generate:
hack/update-codegen.sh

View File

@@ -12,11 +12,15 @@
**Cozystack** is a free PaaS platform and framework for building clouds.
Cozystack is a [CNCF Sandbox Level Project](https://www.cncf.io/sandbox-projects/) that was originally built and sponsored by [Ænix](https://aenix.io/).
With Cozystack, you can transform a bunch of servers into an intelligent system with a simple REST API for spawning Kubernetes clusters,
Database-as-a-Service, virtual machines, load balancers, HTTP caching services, and other services with ease.
Use Cozystack to build your own cloud or provide a cost-effective development environment.
![Cozystack user interface](https://cozystack.io/img/screenshot.png)
## Use-Cases
* [**Using Cozystack to build a public cloud**](https://cozystack.io/docs/guides/use-cases/public-cloud/)
@@ -28,9 +32,6 @@ You can use Cozystack as a platform to build a private cloud powered by Infrastr
* [**Using Cozystack as a Kubernetes distribution**](https://cozystack.io/docs/guides/use-cases/kubernetes-distribution/)
You can use Cozystack as a Kubernetes distribution for Bare Metal
## Screenshot
![Cozystack screenshot](https://cozystack.io/img/screenshot.png)
## Documentation
@@ -59,7 +60,10 @@ Commits are used to generate the changelog, and their author will be referenced
If you have **Feature Requests** please use the [Discussion's Feature Request section](https://github.com/cozystack/cozystack/discussions/categories/feature-requests).
You are welcome to join our weekly community meetings (just add this events to your [Google Calendar](https://calendar.google.com/calendar?cid=ZTQzZDIxZTVjOWI0NWE5NWYyOGM1ZDY0OWMyY2IxZTFmNDMzZTJlNjUzYjU2ZGJiZGE3NGNhMzA2ZjBkMGY2OEBncm91cC5jYWxlbmRhci5nb29nbGUuY29t) or [iCal](https://calendar.google.com/calendar/ical/e43d21e5c9b45a95f28c5d649c2cb1e1f433e2e653b56dbbda74ca306f0d0f68%40group.calendar.google.com/public/basic.ics)) or [Telegram group](https://t.me/cozystack).
## Community
You are welcome to join our [Telegram group](https://t.me/cozystack) and come to our weekly community meetings.
Add them to your [Google Calendar](https://calendar.google.com/calendar?cid=ZTQzZDIxZTVjOWI0NWE5NWYyOGM1ZDY0OWMyY2IxZTFmNDMzZTJlNjUzYjU2ZGJiZGE3NGNhMzA2ZjBkMGY2OEBncm91cC5jYWxlbmRhci5nb29nbGUuY29t) or [iCal](https://calendar.google.com/calendar/ical/e43d21e5c9b45a95f28c5d649c2cb1e1f433e2e653b56dbbda74ca306f0d0f68%40group.calendar.google.com/public/basic.ics) for convenience.
## License

go.mod
View File

@@ -37,6 +37,7 @@ require (
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
github.com/evanphx/json-patch v4.12.0+incompatible // indirect
github.com/evanphx/json-patch/v5 v5.9.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fluxcd/pkg/apis/kustomize v1.6.1 // indirect
@@ -91,14 +92,14 @@ require (
go.opentelemetry.io/proto/otlp v1.3.1 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.0 // indirect
golang.org/x/crypto v0.28.0 // indirect
golang.org/x/crypto v0.31.0 // indirect
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 // indirect
golang.org/x/net v0.30.0 // indirect
golang.org/x/net v0.33.0 // indirect
golang.org/x/oauth2 v0.23.0 // indirect
golang.org/x/sync v0.8.0 // indirect
golang.org/x/sys v0.26.0 // indirect
golang.org/x/term v0.25.0 // indirect
golang.org/x/text v0.19.0 // indirect
golang.org/x/sync v0.10.0 // indirect
golang.org/x/sys v0.28.0 // indirect
golang.org/x/term v0.27.0 // indirect
golang.org/x/text v0.21.0 // indirect
golang.org/x/time v0.7.0 // indirect
golang.org/x/tools v0.26.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect

go.sum
View File

@@ -26,8 +26,8 @@ github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkp
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g=
github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/evanphx/json-patch v0.5.2 h1:xVCHIVMUu1wtM/VkR9jVZ45N3FhZfYMMYGorLCR8P3k=
github.com/evanphx/json-patch v0.5.2/go.mod h1:ZWS5hhDbVDyob71nXKNL0+PWn6ToqBHMikGIFbs31qQ=
github.com/evanphx/json-patch v4.12.0+incompatible h1:4onqiflcdA9EOZ4RxV643DvftH5pOlLGNtQ5lPWQu84=
github.com/evanphx/json-patch v4.12.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch/v5 v5.9.0 h1:kcBlZQbplgElYIlo/n1hJbls2z/1awpXxpRi0/FOJfg=
github.com/evanphx/json-patch/v5 v5.9.0/go.mod h1:VNkHZ/282BpEyt/tObQO8s5CMPmYYq14uClGH4abBuQ=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
@@ -212,8 +212,8 @@ go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.28.0 h1:GBDwsMXVQi34v5CCYUm2jkJvu4cbtru2U4TN2PSyQnw=
golang.org/x/crypto v0.28.0/go.mod h1:rmgy+3RHxRZMyY0jjAJShp2zgEdOqj2AO7U0pYmeQ7U=
golang.org/x/crypto v0.31.0 h1:ihbySMvVjLAeSH1IbfcRTkD/iNscyz8rGzjF/E5hV6U=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 h1:2dVuKD2vS7b0QIHQbpyTISPd0LeHDbnYEryqj5Q1ug8=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56/go.mod h1:M4RDyNAINzryxdtnbRXRL/OHtkFuWGRjvuhBJpk2IlY=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
@@ -222,26 +222,26 @@ golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.30.0 h1:AcW1SDZMkb8IpzCdQUaIq2sP4sZ4zw+55h6ynffypl4=
golang.org/x/net v0.30.0/go.mod h1:2wGyMJ5iFasEhkwi13ChkO/t1ECNC4X4eBKkVFyYFlU=
golang.org/x/net v0.33.0 h1:74SYHlV8BIgHIFC/LrYkOGIwL19eTYXQ5wc6TBuO36I=
golang.org/x/net v0.33.0/go.mod h1:HXLR5J+9DxmrqMwG9qjGCxZ+zKXxBru04zlTvWlWuN4=
golang.org/x/oauth2 v0.23.0 h1:PbgcYx2W7i4LvjJWEbf0ngHV6qJYr86PkAV3bXdLEbs=
golang.org/x/oauth2 v0.23.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.8.0 h1:3NFvSEYkUoMifnESzZl15y791HH1qU2xm6eCJU5ZPXQ=
golang.org/x/sync v0.8.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.26.0 h1:KHjCJyddX0LoSTb3J+vWpupP9p0oznkqVk/IfjymZbo=
golang.org/x/sys v0.26.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.25.0 h1:WtHI/ltw4NvSUig5KARz9h521QvRC8RmF/cuYqifU24=
golang.org/x/term v0.25.0/go.mod h1:RPyXicDX+6vLxogjjRxjgD2TKtmAO6NZBsBRfrOLu7M=
golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.27.0 h1:WP60Sv1nlK1T6SupCHbXzSaN0b9wUmsPoRS9b61A23Q=
golang.org/x/term v0.27.0/go.mod h1:iMsnZpn0cago0GOrHO2+Y7u7JPn5AylBrcoWkElMTSM=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.19.0 h1:kTxAhCbGbxhK0IwgSKiMO5awPoDQ0RpfiVYBfK860YM=
golang.org/x/text v0.19.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY=
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/time v0.7.0 h1:ntUhktv3OPE6TgYxXWv9vKvUSJyIFJlyohwbkEwPrKQ=
golang.org/x/time v0.7.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=

View File

@@ -1,9 +1,11 @@
#!/usr/bin/env bats
# -----------------------------------------------------------------------------
# Cozystack end-to-end provisioning test (Bats)
# -----------------------------------------------------------------------------
@test "Create tenant with isolated mode enabled" {
kubectl -n tenant-root get tenants.apps.cozystack.io test ||
kubectl create -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Tenant
@@ -24,6 +26,7 @@ EOF
}
@test "Create a tenant Kubernetes control plane" {
kubectl -n tenant-test get kuberneteses.apps.cozystack.io test ||
kubectl create -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Kubernetes
@@ -91,4 +94,260 @@ EOF
kubectl wait deploy --timeout=4m --for=condition=available -n tenant-test kubernetes-test kubernetes-test-cluster-autoscaler kubernetes-test-kccm kubernetes-test-kcsi-controller
kubectl wait machinedeployment kubernetes-test-md0 -n tenant-test --timeout=1m --for=jsonpath='{.status.replicas}'=2
kubectl wait machinedeployment kubernetes-test-md0 -n tenant-test --timeout=10m --for=jsonpath='{.status.v1beta2.readyReplicas}'=2
kubectl -n tenant-test delete kuberneteses.apps.cozystack.io test
}
@test "Create a VM Disk" {
name='test'
kubectl -n tenant-test get vmdisks.apps.cozystack.io $name ||
kubectl create -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: VMDisk
metadata:
name: $name
namespace: tenant-test
spec:
source:
http:
url: https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
optical: false
storage: 5Gi
storageClass: replicated
EOF
sleep 5
kubectl -n tenant-test wait hr vm-disk-$name --timeout=5s --for=condition=ready
kubectl -n tenant-test wait dv vm-disk-$name --timeout=150s --for=condition=ready
kubectl -n tenant-test wait pvc vm-disk-$name --timeout=100s --for=jsonpath='{.status.phase}'=Bound
}
@test "Create a VM Instance" {
diskName='test'
name='test'
kubectl -n tenant-test get vminstances.apps.cozystack.io $name ||
kubectl create -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: VMInstance
metadata:
name: $name
namespace: tenant-test
spec:
external: false
externalMethod: PortList
externalPorts:
- 22
running: true
instanceType: "u1.medium"
instanceProfile: ubuntu
disks:
- name: $diskName
gpus: []
resources:
cpu: ""
memory: ""
sshKeys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF
test@test
cloudInit: |
#cloud-config
users:
- name: test
shell: /bin/bash
sudo: ['ALL=(ALL) NOPASSWD: ALL']
groups: sudo
ssh_authorized_keys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF test@test
cloudInitSeed: ""
EOF
sleep 5
timeout 20 sh -ec "until kubectl -n tenant-test get vmi vm-instance-$name -o jsonpath='{.status.interfaces[0].ipAddress}' | grep -q '[0-9]'; do sleep 5; done"
kubectl -n tenant-test wait hr vm-instance-$name --timeout=5s --for=condition=ready
kubectl -n tenant-test wait vm vm-instance-$name --timeout=20s --for=condition=ready
kubectl -n tenant-test delete vminstances.apps.cozystack.io $name
kubectl -n tenant-test delete vmdisks.apps.cozystack.io $diskName
}
@test "Create a Virtual Machine" {
name='test'
kubectl -n tenant-test get virtualmachines.apps.cozystack.io $name ||
kubectl create -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: VirtualMachine
metadata:
name: $name
namespace: tenant-test
spec:
external: false
externalMethod: PortList
externalPorts:
- 22
instanceType: "u1.medium"
instanceProfile: ubuntu
systemDisk:
image: ubuntu
storage: 5Gi
storageClass: replicated
gpus: []
resources:
cpu: ""
memory: ""
sshKeys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF
test@test
cloudInit: |
#cloud-config
users:
- name: test
shell: /bin/bash
sudo: ['ALL=(ALL) NOPASSWD: ALL']
groups: sudo
ssh_authorized_keys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF test@test
cloudInitSeed: ""
EOF
sleep 5
kubectl -n tenant-test wait hr virtual-machine-$name --timeout=10s --for=condition=ready
kubectl -n tenant-test wait dv virtual-machine-$name --timeout=150s --for=condition=ready
kubectl -n tenant-test wait pvc virtual-machine-$name --timeout=100s --for=jsonpath='{.status.phase}'=Bound
kubectl -n tenant-test wait vm virtual-machine-$name --timeout=100s --for=condition=ready
timeout 120 sh -ec "until kubectl -n tenant-test get vmi virtual-machine-$name -o jsonpath='{.status.interfaces[0].ipAddress}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test delete virtualmachines.apps.cozystack.io $name
}
@test "Create DB PostgreSQL" {
name='test'
kubectl -n tenant-test get postgreses.apps.cozystack.io $name ||
kubectl create -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Postgres
metadata:
name: $name
namespace: tenant-test
spec:
external: false
size: 10Gi
replicas: 2
storageClass: ""
postgresql:
parameters:
max_connections: 100
quorum:
minSyncReplicas: 0
maxSyncReplicas: 0
users:
testuser:
password: xai7Wepo
databases:
testdb:
roles:
admin:
- testuser
backup:
enabled: false
s3Region: us-east-1
s3Bucket: s3.example.org/postgres-backups
schedule: "0 2 * * *"
cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
resources: {}
resourcesPreset: "nano"
EOF
sleep 5
kubectl -n tenant-test wait hr postgres-$name --timeout=100s --for=condition=ready
kubectl -n tenant-test wait job.batch postgres-$name-init-job --timeout=50s --for=condition=Complete
timeout 40 sh -ec "until kubectl -n tenant-test get svc postgres-$name-r -o jsonpath='{.spec.ports[0].port}' | grep -q '5432'; do sleep 10; done"
timeout 40 sh -ec "until kubectl -n tenant-test get svc postgres-$name-ro -o jsonpath='{.spec.ports[0].port}' | grep -q '5432'; do sleep 10; done"
timeout 40 sh -ec "until kubectl -n tenant-test get svc postgres-$name-rw -o jsonpath='{.spec.ports[0].port}' | grep -q '5432'; do sleep 10; done"
timeout 120 sh -ec "until kubectl -n tenant-test get endpoints postgres-$name-r -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
timeout 120 sh -ec "until kubectl -n tenant-test get endpoints postgres-$name-ro -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
timeout 120 sh -ec "until kubectl -n tenant-test get endpoints postgres-$name-rw -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test delete postgreses.apps.cozystack.io $name
}
@test "Create DB MySQL" {
name='test'
kubectl -n tenant-test get mysqls.apps.cozystack.io $name ||
kubectl create -f- <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: MySQL
metadata:
name: $name
namespace: tenant-test
spec:
external: false
size: 10Gi
replicas: 2
storageClass: ""
users:
testuser:
maxUserConnections: 1000
password: xai7Wepo
databases:
testdb:
roles:
admin:
- testuser
backup:
enabled: false
s3Region: us-east-1
s3Bucket: s3.example.org/postgres-backups
schedule: "0 2 * * *"
cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
resources: {}
resourcesPreset: "nano"
EOF
sleep 5
kubectl -n tenant-test wait hr mysql-$name --timeout=30s --for=condition=ready
timeout 80 sh -ec "until kubectl -n tenant-test get svc mysql-$name -o jsonpath='{.spec.ports[0].port}' | grep -q '3306'; do sleep 10; done"
timeout 80 sh -ec "until kubectl -n tenant-test get endpoints mysql-$name -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test wait statefulset.apps/mysql-$name --timeout=110s --for=jsonpath='{.status.replicas}'=2
timeout 80 sh -ec "until kubectl -n tenant-test get svc mysql-$name-metrics -o jsonpath='{.spec.ports[0].port}' | grep -q '9104'; do sleep 10; done"
timeout 40 sh -ec "until kubectl -n tenant-test get endpoints mysql-$name-metrics -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test wait deployment.apps/mysql-$name-metrics --timeout=90s --for=jsonpath='{.status.replicas}'=1
kubectl -n tenant-test delete mysqls.apps.cozystack.io $name
}
@test "Create DB ClickHouse" {
name='test'
kubectl -n tenant-test get clickhouses.apps.cozystack.io $name ||
kubectl create -f- <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: ClickHouse
metadata:
name: $name
namespace: tenant-test
spec:
size: 10Gi
logStorageSize: 2Gi
shards: 1
replicas: 2
storageClass: ""
logTTL: 15
users:
testuser:
password: xai7Wepo
backup:
enabled: false
s3Region: us-east-1
s3Bucket: s3.example.org/clickhouse-backups
schedule: "0 2 * * *"
cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
resources: {}
resourcesPreset: "nano"
EOF
sleep 5
kubectl -n tenant-test wait hr clickhouse-$name --timeout=20s --for=condition=ready
timeout 180 sh -ec "until kubectl -n tenant-test get svc chendpoint-clickhouse-$name -o jsonpath='{.spec.ports[*].port}' | grep -q '8123 9000'; do sleep 10; done"
kubectl -n tenant-test wait statefulset.apps/chi-clickhouse-$name-clickhouse-0-0 --timeout=120s --for=jsonpath='{.status.replicas}'=1
timeout 80 sh -ec "until kubectl -n tenant-test get endpoints chi-clickhouse-$name-clickhouse-0-0 -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
timeout 100 sh -ec "until kubectl -n tenant-test get svc chi-clickhouse-$name-clickhouse-0-0 -o jsonpath='{.spec.ports[*].port}' | grep -q '9000 8123 9009'; do sleep 10; done"
timeout 80 sh -ec "until kubectl -n tenant-test get sts chi-clickhouse-$name-clickhouse-0-1 ; do sleep 10; done"
kubectl -n tenant-test wait statefulset.apps/chi-clickhouse-$name-clickhouse-0-1 --timeout=140s --for=jsonpath='{.status.replicas}'=1
}

View File

@@ -102,7 +102,7 @@ EOF
@test "Boot QEMU VMs" {
for i in 1 2 3; do
qemu-system-x86_64 -machine type=pc,accel=kvm -cpu host -smp 8 -m 16384 \
qemu-system-x86_64 -machine type=pc,accel=kvm -cpu host -smp 8 -m 24576 \
-device virtio-net,netdev=net0,mac=52:54:00:12:34:5${i} \
-netdev tap,id=net0,ifname=cozy-srv${i},script=no,downscript=no \
-drive file=srv${i}/system.img,if=virtio,format=raw \

View File

@@ -0,0 +1,157 @@
#!/usr/bin/env bats
@test "Install Cozystack" {
# Create namespace & configmap required by installer
kubectl create namespace cozy-system --dry-run=client -o yaml | kubectl apply -f -
kubectl create configmap cozystack -n cozy-system \
--from-literal=bundle-name=paas-full \
--from-literal=ipv4-pod-cidr=10.244.0.0/16 \
--from-literal=ipv4-pod-gateway=10.244.0.1 \
--from-literal=ipv4-svc-cidr=10.96.0.0/16 \
--from-literal=ipv4-join-cidr=100.64.0.0/16 \
--from-literal=root-host=example.org \
--from-literal=api-server-endpoint=https://192.168.123.10:6443 \
--dry-run=client -o yaml | kubectl apply -f -
# Apply installer manifests from file
kubectl apply -f _out/assets/cozystack-installer.yaml
# Wait for the installer deployment to become available
kubectl wait deployment/cozystack -n cozy-system --timeout=1m --for=condition=Available
# Wait until HelmReleases appear & reconcile them
timeout 60 sh -ec 'until kubectl get hr -A | grep -q cozys; do sleep 1; done'
sleep 5
kubectl get hr -A | awk 'NR>1 {print "kubectl wait --timeout=15m --for=condition=ready -n "$1" hr/"$2" &"} END {print "wait"}' | sh -ex
# Fail the test if any HelmRelease is not Ready
if kubectl get hr -A | grep -v " True " | grep -v NAME; then
kubectl get hr -A
fail "Some HelmReleases failed to reconcile"
fi
}
@test "Wait for ClusterAPI provider deployments" {
# Wait for ClusterAPI provider deployments
timeout 60 sh -ec 'until kubectl get deploy -n cozy-cluster-api capi-controller-manager capi-kamaji-controller-manager capi-kubeadm-bootstrap-controller-manager capi-operator-cluster-api-operator capk-controller-manager >/dev/null 2>&1; do sleep 1; done'
kubectl wait deployment/capi-controller-manager deployment/capi-kamaji-controller-manager deployment/capi-kubeadm-bootstrap-controller-manager deployment/capi-operator-cluster-api-operator deployment/capk-controller-manager -n cozy-cluster-api --timeout=1m --for=condition=available
}
@test "Wait for LINSTOR and configure storage" {
# Linstor controller and nodes
kubectl wait deployment/linstor-controller -n cozy-linstor --timeout=5m --for=condition=available
timeout 60 sh -ec 'until [ $(kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor node list | grep -c Online) -eq 3 ]; do sleep 1; done'
for node in srv1 srv2 srv3; do
kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor ps cdp zfs ${node} /dev/vdc --pool-name data --storage-pool data
done
# Storage classes
kubectl apply -f - <<'EOF'
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: linstor.csi.linbit.com
parameters:
linstor.csi.linbit.com/storagePool: "data"
linstor.csi.linbit.com/layerList: "storage"
linstor.csi.linbit.com/allowRemoteVolumeAccess: "false"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: replicated
provisioner: linstor.csi.linbit.com
parameters:
linstor.csi.linbit.com/storagePool: "data"
linstor.csi.linbit.com/autoPlace: "3"
linstor.csi.linbit.com/layerList: "drbd storage"
linstor.csi.linbit.com/allowRemoteVolumeAccess: "true"
property.linstor.csi.linbit.com/DrbdOptions/auto-quorum: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-no-data-accessible: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-suspended-primary-outdated: force-secondary
property.linstor.csi.linbit.com/DrbdOptions/Net/rr-conflict: retry-connect
volumeBindingMode: Immediate
allowVolumeExpansion: true
EOF
}
@test "Wait for MetalLB and configure address pool" {
# MetalLB address pool
kubectl apply -f - <<'EOF'
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: cozystack
namespace: cozy-metallb
spec:
ipAddressPools: [cozystack]
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: cozystack
namespace: cozy-metallb
spec:
addresses: [192.168.123.200-192.168.123.250]
autoAssign: true
avoidBuggyIPs: false
EOF
}
@test "Check Cozystack API service" {
kubectl wait --for=condition=Available apiservices/v1alpha1.apps.cozystack.io --timeout=2m
}
@test "Configure Tenant and wait for applications" {
# Patch root tenant and wait for its releases
kubectl patch tenants/root -n tenant-root --type merge -p '{"spec":{"host":"example.org","ingress":true,"monitoring":true,"etcd":true,"isolated":true}}'
timeout 60 sh -ec 'until kubectl get hr -n tenant-root etcd ingress monitoring tenant-root >/dev/null 2>&1; do sleep 1; done'
kubectl wait hr/etcd hr/ingress hr/tenant-root -n tenant-root --timeout=2m --for=condition=ready
if ! kubectl wait hr/monitoring -n tenant-root --timeout=2m --for=condition=ready; then
flux reconcile hr monitoring -n tenant-root --force
kubectl wait hr/monitoring -n tenant-root --timeout=2m --for=condition=ready
fi
# Expose Cozystack services through ingress
kubectl patch configmap/cozystack -n cozy-system --type merge -p '{"data":{"expose-services":"api,dashboard,cdi-uploadproxy,vm-exportproxy,keycloak"}}'
# NGINX ingress controller
timeout 60 sh -ec 'until kubectl get deploy root-ingress-controller -n tenant-root >/dev/null 2>&1; do sleep 1; done'
kubectl wait deploy/root-ingress-controller -n tenant-root --timeout=5m --for=condition=available
# etcd statefulset
kubectl wait sts/etcd -n tenant-root --for=jsonpath='{.status.readyReplicas}'=3 --timeout=5m
# VictoriaMetrics components
kubectl wait vmalert/vmalert-shortterm vmalertmanager/alertmanager -n tenant-root --for=jsonpath='{.status.updateStatus}'=operational --timeout=5m
kubectl wait vlogs/generic -n tenant-root --for=jsonpath='{.status.updateStatus}'=operational --timeout=5m
kubectl wait vmcluster/shortterm vmcluster/longterm -n tenant-root --for=jsonpath='{.status.clusterStatus}'=operational --timeout=5m
# Grafana
kubectl wait clusters.postgresql.cnpg.io/grafana-db -n tenant-root --for=condition=ready --timeout=5m
kubectl wait deploy/grafana-deployment -n tenant-root --for=condition=available --timeout=5m
# Verify Grafana via ingress
ingress_ip=$(kubectl get svc root-ingress-controller -n tenant-root -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
if ! curl -sS -k "https://${ingress_ip}" -H 'Host: grafana.example.org' --max-time 30 | grep -q Found; then
echo "Failed to access Grafana via ingress at ${ingress_ip}" >&2
exit 1
fi
}
@test "Keycloak OIDC stack is healthy" {
kubectl patch configmap/cozystack -n cozy-system --type merge -p '{"data":{"oidc-enabled":"true"}}'
timeout 120 sh -ec 'until kubectl get hr -n cozy-keycloak keycloak keycloak-configure keycloak-operator >/dev/null 2>&1; do sleep 1; done'
kubectl wait hr/keycloak hr/keycloak-configure hr/keycloak-operator -n cozy-keycloak --timeout=10m --for=condition=ready
}

View File

@@ -0,0 +1,235 @@
#!/usr/bin/env bats
# -----------------------------------------------------------------------------
# Cozystack end-to-end provisioning test (Bats)
# -----------------------------------------------------------------------------
@test "Required installer assets exist" {
if [ ! -f _out/assets/cozystack-installer.yaml ]; then
echo "Missing: _out/assets/cozystack-installer.yaml" >&2
exit 1
fi
if [ ! -f _out/assets/nocloud-amd64.raw.xz ]; then
echo "Missing: _out/assets/nocloud-amd64.raw.xz" >&2
exit 1
fi
}
@test "IPv4 forwarding is enabled" {
if [ "$(cat /proc/sys/net/ipv4/ip_forward)" != 1 ]; then
echo "IPv4 forwarding is disabled!" >&2
echo >&2
echo "Enable it with:" >&2
echo " echo 1 > /proc/sys/net/ipv4/ip_forward" >&2
exit 1
fi
}
@test "Clean previous VMs" {
kill $(cat srv1/qemu.pid srv2/qemu.pid srv3/qemu.pid 2>/dev/null) 2>/dev/null || true
rm -rf srv1 srv2 srv3
}
@test "Prepare networking and masquerading" {
ip link del cozy-br0 2>/dev/null || true
ip link add cozy-br0 type bridge
ip link set cozy-br0 up
ip address add 192.168.123.1/24 dev cozy-br0
# Masquerading rule idempotent (delete first, then add)
iptables -t nat -D POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE 2>/dev/null || true
iptables -t nat -A POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE
}
@test "Prepare cloudinit drive for VMs" {
mkdir -p srv1 srv2 srv3
# Generate cloudinit ISOs
for i in 1 2 3; do
echo "hostname: srv${i}" > "srv${i}/meta-data"
cat > "srv${i}/user-data" <<'EOF'
#cloud-config
EOF
cat > "srv${i}/network-config" <<EOF
version: 2
ethernets:
eth0:
dhcp4: false
addresses:
- "192.168.123.1${i}/26"
gateway4: "192.168.123.1"
nameservers:
search: [cluster.local]
addresses: [8.8.8.8]
EOF
( cd "srv${i}" && genisoimage \
-output seed.img \
-volid cidata -rational-rock -joliet \
user-data meta-data network-config )
done
}
@test "Use Talos NoCloud image from assets" {
if [ ! -f _out/assets/nocloud-amd64.raw.xz ]; then
echo "Missing _out/assets/nocloud-amd64.raw.xz" 2>&1
exit 1
fi
rm -f nocloud-amd64.raw
cp _out/assets/nocloud-amd64.raw.xz .
xz --decompress nocloud-amd64.raw.xz
}
@test "Prepare VM disks" {
for i in 1 2 3; do
cp nocloud-amd64.raw srv${i}/system.img
qemu-img resize srv${i}/system.img 50G
qemu-img create srv${i}/data.img 100G
done
}
@test "Create tap devices" {
for i in 1 2 3; do
ip link del cozy-srv${i} 2>/dev/null || true
ip tuntap add dev cozy-srv${i} mode tap
ip link set cozy-srv${i} up
ip link set cozy-srv${i} master cozy-br0
done
}
@test "Boot QEMU VMs" {
for i in 1 2 3; do
qemu-system-x86_64 -machine type=pc,accel=kvm -cpu host -smp 8 -m 24576 \
-device virtio-net,netdev=net0,mac=52:54:00:12:34:5${i} \
-netdev tap,id=net0,ifname=cozy-srv${i},script=no,downscript=no \
-drive file=srv${i}/system.img,if=virtio,format=raw \
-drive file=srv${i}/seed.img,if=virtio,format=raw \
-drive file=srv${i}/data.img,if=virtio,format=raw \
-display none -daemonize -pidfile srv${i}/qemu.pid
done
# Give qemu a few seconds to start up networking
sleep 5
}
@test "Wait until Talos API port 50000 is reachable on all machines" {
timeout 60 sh -ec 'until nc -nz 192.168.123.11 50000 && nc -nz 192.168.123.12 50000 && nc -nz 192.168.123.13 50000; do sleep 1; done'
}
@test "Generate Talos cluster configuration" {
# Cluster-wide patches
cat > patch.yaml <<'EOF'
machine:
kubelet:
nodeIP:
validSubnets:
- 192.168.123.0/24
extraConfig:
maxPods: 512
kernel:
modules:
- name: openvswitch
- name: drbd
parameters:
- usermode_helper=disabled
- name: zfs
- name: spl
registries:
mirrors:
docker.io:
endpoints:
- https://mirror.gcr.io
files:
- content: |
[plugins]
[plugins."io.containerd.cri.v1.runtime"]
device_ownership_from_security_context = true
path: /etc/cri/conf.d/20-customization.part
op: create
cluster:
apiServer:
extraArgs:
oidc-issuer-url: "https://keycloak.example.org/realms/cozy"
oidc-client-id: "kubernetes"
oidc-username-claim: "preferred_username"
oidc-groups-claim: "groups"
network:
cni:
name: none
dnsDomain: cozy.local
podSubnets:
- 10.244.0.0/16
serviceSubnets:
- 10.96.0.0/16
EOF
# Control-plane-only patches
cat > patch-controlplane.yaml <<'EOF'
machine:
nodeLabels:
node.kubernetes.io/exclude-from-external-load-balancers:
$patch: delete
network:
interfaces:
- interface: eth0
vip:
ip: 192.168.123.10
cluster:
allowSchedulingOnControlPlanes: true
controllerManager:
extraArgs:
bind-address: 0.0.0.0
scheduler:
extraArgs:
bind-address: 0.0.0.0
apiServer:
certSANs:
- 127.0.0.1
proxy:
disabled: true
discovery:
enabled: false
etcd:
advertisedSubnets:
- 192.168.123.0/24
EOF
# Generate secrets once
if [ ! -f secrets.yaml ]; then
talosctl gen secrets
fi
rm -f controlplane.yaml worker.yaml talosconfig kubeconfig
talosctl gen config --with-secrets secrets.yaml cozystack https://192.168.123.10:6443 \
--config-patch=@patch.yaml --config-patch-control-plane @patch-controlplane.yaml
}
@test "Apply Talos configuration to the node" {
# Apply the configuration to all three nodes
for node in 11 12 13; do
talosctl apply -f controlplane.yaml -n 192.168.123.${node} -e 192.168.123.${node} -i
done
# Wait for Talos services to come up again
timeout 60 sh -ec 'until nc -nz 192.168.123.11 50000 && nc -nz 192.168.123.12 50000 && nc -nz 192.168.123.13 50000; do sleep 1; done'
}
@test "Bootstrap Talos cluster" {
# Bootstrap etcd on the first node
timeout 10 sh -ec 'until talosctl bootstrap -n 192.168.123.11 -e 192.168.123.11; do sleep 1; done'
# Wait until etcd is healthy
timeout 180 sh -ec 'until talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 >/dev/null 2>&1; do sleep 1; done'
timeout 60 sh -ec 'while talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 2>&1 | grep -q "rpc error"; do sleep 1; done'
# Retrieve kubeconfig
rm -f kubeconfig
talosctl kubeconfig kubeconfig -e 192.168.123.10 -n 192.168.123.10
# Wait until all three nodes register in Kubernetes
timeout 60 sh -ec 'until [ $(kubectl get node --no-headers | wc -l) -eq 3 ]; do sleep 1; done'
}

View File

@@ -248,15 +248,24 @@ func (r *WorkloadMonitorReconciler) reconcilePodForMonitor(
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("pod-%s", pod.Name),
Namespace: pod.Namespace,
Labels: map[string]string{},
},
}
metaLabels := r.getWorkloadMetadata(&pod)
_, err := ctrl.CreateOrUpdate(ctx, r.Client, workload, func() error {
// Update owner references with the new monitor
updateOwnerReferences(workload.GetObjectMeta(), monitor)
// Copy labels from the Pod if needed
workload.Labels = pod.Labels
for k, v := range pod.Labels {
workload.Labels[k] = v
}
// Add workload meta to labels
for k, v := range metaLabels {
workload.Labels[k] = v
}
// Fill Workload status fields:
workload.Status.Kind = monitor.Spec.Kind
@@ -433,3 +442,12 @@ func mapObjectToMonitor[T client.Object](_ T, c client.Client) func(ctx context.
return requests
}
}
func (r *WorkloadMonitorReconciler) getWorkloadMetadata(obj client.Object) map[string]string {
labels := make(map[string]string)
annotations := obj.GetAnnotations()
if instanceType, ok := annotations["kubevirt.io/cluster-instancetype-name"]; ok {
labels["workloads.cozystack.io/kubevirt-vmi-instance-type"] = instanceType
}
return labels
}

View File

@@ -16,10 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
version: 0.2.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "0.1.0"
appVersion: "0.2.0"

View File

@@ -0,0 +1 @@
../../../library/cozy-lib

View File

@@ -18,3 +18,14 @@ rules:
resourceNames:
- {{ .Release.Name }}-ui
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io
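
Presumably, the `cozy-lib.rbac.subjectsForTenantAndAccessLevel` helper renders one Group subject per tenant in the hierarchy that holds the requested access level. A hypothetical rendering for a release in a nested tenant namespace `tenant-foo-bar` (the group-naming scheme is an assumption, not taken from the library source):

```yaml
# Hypothetical output of the subjects helper for access level "use".
# Group names are assumptions illustrating hierarchy-aware subject generation.
subjects:
  - kind: Group
    name: tenant-foo-bar-use   # the tenant owning the release namespace
    apiGroup: rbac.authorization.k8s.io
  - kind: Group
    name: tenant-foo-use       # parent tenant in the hierarchy
    apiGroup: rbac.authorization.k8s.io
```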

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.9.2
version: 0.10.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -1,32 +1,35 @@
# Managed Clickhouse Service
ClickHouse is an open-source, high-performance, column-oriented SQL database management system (DBMS).
It is used for online analytical processing (OLAP).
The Cozystack platform uses the Altinity operator to provide ClickHouse.
### How to restore a backup
find snapshot:
```
restic -r s3:s3.example.org/clickhouse-backups/table_name snapshots
```
1. Find a snapshot:
```
restic -r s3:s3.example.org/clickhouse-backups/table_name snapshots
```
restore:
```
restic -r s3:s3.example.org/clickhouse-backups/table_name restore latest --target /tmp/
```
2. Restore it:
```
restic -r s3:s3.example.org/clickhouse-backups/table_name restore latest --target /tmp/
```
more details:
- https://itnext.io/restic-effective-backup-from-stdin-4bc1e8f083c1
For more details, read [Restic: Effective Backup from Stdin](https://blog.aenix.io/restic-effective-backup-from-stdin-4bc1e8f083c1).
## Parameters
### Common parameters
| Name | Description | Value |
| ---------------- | ----------------------------------- | ------ |
| `size` | Persistent Volume size | `10Gi` |
| `logStorageSize` | Persistent Volume for logs size | `2Gi` |
| `shards` | Number of Clickhouse replicas | `1` |
| `replicas` | Number of Clickhouse shards | `2` |
| `storageClass` | StorageClass used to store the data | `""` |
| `logTTL` | for query_log and query_thread_log | `15` |
| Name | Description | Value |
| ---------------- | -------------------------------------------------------- | ------ |
| `size` | Size of Persistent Volume for data | `10Gi` |
| `logStorageSize` | Size of Persistent Volume for logs | `2Gi` |
| `shards` | Number of Clickhouse shards | `1` |
| `replicas` | Number of Clickhouse replicas | `2` |
| `storageClass` | StorageClass used to store the data | `""` |
| `logTTL` | TTL (expiration time) for query_log and query_thread_log | `15` |
### Configuration parameters
@@ -36,15 +39,32 @@ more details:
### Backup parameters
| Name | Description | Value |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled` | Enable pereiodic backups | `false` |
| `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/clickhouse-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
| `resources` | Resources | `{}` |
| `resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
| Name | Description | Value |
| ------------------------ | --------------------------------------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled` | Enable periodic backups | `false` |
| `backup.s3Region` | AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | S3 bucket used for storing backups | `s3.example.org/clickhouse-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | Retention strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | Access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | Secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | Password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
| `resources` | Explicit CPU/memory resource requests and limits for the Clickhouse service | `{}` |
| `resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `nano` |
In production environments, it's recommended to set `resources` explicitly.
Example of `resources`:
```yaml
resources:
limits:
cpu: 4000m
memory: 4Gi
requests:
cpu: 100m
memory: 512Mi
```
Allowed values for `resourcesPreset` are `none`, `nano`, `micro`, `small`, `medium`, `large`, `xlarge`, `2xlarge`.
This value is ignored if `resources` is set explicitly.
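As a combined example, a values snippet exercising the parameters documented above (the key layout follows the defaults in the tables; the values themselves are illustrative):
```yaml
size: 20Gi            # Persistent Volume for data
logStorageSize: 2Gi   # Persistent Volume for logs
shards: 1
replicas: 3
logTTL: 15            # days kept in query_log / query_thread_log
backup:
  enabled: true
  s3Region: us-east-1
  s3Bucket: s3.example.org/clickhouse-backups
  schedule: "0 2 * * *"
resourcesPreset: small   # used only while resources is left empty
```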

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/clickhouse-backup:0.9.2@sha256:3faf7a4cebf390b9053763107482de175aa0fdb88c1e77424fd81100b1c3a205
ghcr.io/cozystack/cozystack/clickhouse-backup:0.10.0@sha256:3faf7a4cebf390b9053763107482de175aa0fdb88c1e77424fd81100b1c3a205

View File

@@ -24,3 +24,14 @@ rules:
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -4,22 +4,22 @@
"properties": {
"size": {
"type": "string",
"description": "Persistent Volume size",
"description": "Size of Persistent Volume for data",
"default": "10Gi"
},
"logStorageSize": {
"type": "string",
"description": "Persistent Volume for logs size",
"description": "Size of Persistent Volume for logs",
"default": "2Gi"
},
"shards": {
"type": "number",
"description": "Number of Clickhouse replicas",
"description": "Number of Clickhouse shards",
"default": 1
},
"replicas": {
"type": "number",
"description": "Number of Clickhouse shards",
"description": "Number of Clickhouse replicas",
"default": 2
},
"storageClass": {
@@ -29,7 +29,7 @@
},
"logTTL": {
"type": "number",
"description": "for query_log and query_thread_log",
"description": "TTL (expiration time) for query_log and query_thread_log",
"default": 15
},
"backup": {
@@ -37,17 +37,17 @@
"properties": {
"enabled": {
"type": "boolean",
"description": "Enable pereiodic backups",
"description": "Enable periodic backups",
"default": false
},
"s3Region": {
"type": "string",
"description": "The AWS S3 region where backups are stored",
"description": "AWS S3 region where backups are stored",
"default": "us-east-1"
},
"s3Bucket": {
"type": "string",
"description": "The S3 bucket used for storing backups",
"description": "S3 bucket used for storing backups",
"default": "s3.example.org/clickhouse-backups"
},
"schedule": {
@@ -57,34 +57,34 @@
},
"cleanupStrategy": {
"type": "string",
"description": "The strategy for cleaning up old backups",
"description": "Retention strategy for cleaning up old backups",
"default": "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
},
"s3AccessKey": {
"type": "string",
"description": "The access key for S3, used for authentication",
"description": "Access key for S3, used for authentication",
"default": "oobaiRus9pah8PhohL1ThaeTa4UVa7gu"
},
"s3SecretKey": {
"type": "string",
"description": "The secret key for S3, used for authentication",
"description": "Secret key for S3, used for authentication",
"default": "ju3eum4dekeich9ahM1te8waeGai0oog"
},
"resticPassword": {
"type": "string",
"description": "The password for Restic backup encryption",
"description": "Password for Restic backup encryption",
"default": "ChaXoveekoh6eigh4siesheeda2quai0"
}
}
},
"resources": {
"type": "object",
"description": "Resources",
"description": "Explicit CPU/memory resource requests and limits for the Clickhouse service",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"description": "Use a common resources preset when `resources` is not set explicitly.",
"default": "nano"
}
}

View File

@@ -1,11 +1,11 @@
## @section Common parameters
## @param size Persistent Volume size
## @param logStorageSize Persistent Volume for logs size
## @param shards Number of Clickhouse replicas
## @param replicas Number of Clickhouse shards
## @param size Size of Persistent Volume for data
## @param logStorageSize Size of Persistent Volume for logs
## @param shards Number of Clickhouse shards
## @param replicas Number of Clickhouse replicas
## @param storageClass StorageClass used to store the data
## @param logTTL for query_log and query_thread_log
## @param logTTL TTL (expiration time) for query_log and query_thread_log
##
size: 10Gi
logStorageSize: 2Gi
@@ -29,14 +29,14 @@ users: {}
## @section Backup parameters
## @param backup.enabled Enable pereiodic backups
## @param backup.s3Region The AWS S3 region where backups are stored
## @param backup.s3Bucket The S3 bucket used for storing backups
## @param backup.enabled Enable periodic backups
## @param backup.s3Region AWS S3 region where backups are stored
## @param backup.s3Bucket S3 bucket used for storing backups
## @param backup.schedule Cron schedule for automated backups
## @param backup.cleanupStrategy The strategy for cleaning up old backups
## @param backup.s3AccessKey The access key for S3, used for authentication
## @param backup.s3SecretKey The secret key for S3, used for authentication
## @param backup.resticPassword The password for Restic backup encryption
## @param backup.cleanupStrategy Retention strategy for cleaning up old backups
## @param backup.s3AccessKey Access key for S3, used for authentication
## @param backup.s3SecretKey Secret key for S3, used for authentication
## @param backup.resticPassword Password for Restic backup encryption
backup:
enabled: false
s3Region: us-east-1
@@ -47,7 +47,7 @@ backup:
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
## @param resources Resources
## @param resources Explicit CPU/memory resource requests and limits for the Clickhouse service
resources: {}
# resources:
# limits:
@@ -56,6 +56,6 @@ resources: {}
# requests:
# cpu: 100m
# memory: 512Mi
## @param resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
## @param resourcesPreset Use a common resources preset when `resources` is not set explicitly.
resourcesPreset: "nano"

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.6.1
version: 0.7.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/postgres-backup:0.12.1@sha256:10179ed56457460d95cd5708db2a00130901255fa30c4dd76c65d2ef5622b61f
ghcr.io/cozystack/cozystack/postgres-backup:0.14.0@sha256:10179ed56457460d95cd5708db2a00130901255fa30c4dd76c65d2ef5622b61f

View File

@@ -24,3 +24,14 @@ rules:
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.6.1
version: 0.7.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -25,3 +25,14 @@ rules:
- {{ .Release.Name }}
- {{ $.Release.Name }}-zookeeper
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.23.1
version: 0.24.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -110,16 +110,16 @@ See the reference for components utilized in this service:
### Kubernetes Control Plane Configuration
| Name | Description | Value |
| -------------------------------------------------- | ---------------------------------------------------------------------------- | ------- |
| `controlPlane.apiServer.resources` | Explicit CPU/memory resource requests and limits for the API server. | `{}` |
| `controlPlane.apiServer.resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `small` |
| `controlPlane.controllerManager.resources` | Explicit CPU/memory resource requests and limits for the controller manager. | `{}` |
| `controlPlane.controllerManager.resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `micro` |
| `controlPlane.scheduler.resources` | Explicit CPU/memory resource requests and limits for the scheduler. | `{}` |
| `controlPlane.scheduler.resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `micro` |
| `controlPlane.konnectivity.server.resources` | Explicit CPU/memory resource requests and limits for the Konnectivity. | `{}` |
| `controlPlane.konnectivity.server.resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `micro` |
| Name | Description | Value |
| -------------------------------------------------- | ---------------------------------------------------------------------------- | -------- |
| `controlPlane.apiServer.resources` | Explicit CPU/memory resource requests and limits for the API server. | `{}` |
| `controlPlane.apiServer.resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `medium` |
| `controlPlane.controllerManager.resources` | Explicit CPU/memory resource requests and limits for the controller manager. | `{}` |
| `controlPlane.controllerManager.resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `micro` |
| `controlPlane.scheduler.resources` | Explicit CPU/memory resource requests and limits for the scheduler. | `{}` |
| `controlPlane.scheduler.resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `micro` |
| `controlPlane.konnectivity.server.resources` | Explicit CPU/memory resource requests and limits for the Konnectivity. | `{}` |
| `controlPlane.konnectivity.server.resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `micro` |
In production environments, it's recommended to set `resources` explicitly.
Example of `controlPlane.*.resources`:

View File

@@ -0,0 +1 @@
../../../library/cozy-lib

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/cluster-autoscaler:0.23.1@sha256:7315850634728a5864a3de3150c12f0e1454f3f1ce33cdf21a278f57611dd5e9
ghcr.io/cozystack/cozystack/cluster-autoscaler:0.24.0@sha256:3a8170433e1632e5cc2b6d9db34d0605e8e6c63c158282c38450415e700e932e

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/kubevirt-cloud-provider:0.23.1@sha256:6962bdf51ab2ff40b420b9cff7c850aeea02187da2a65a67f10e0471744649d7
ghcr.io/cozystack/cozystack/kubevirt-cloud-provider:0.24.0@sha256:b478952fab735f85c3ba15835012b1de8af5578b33a8a2670eaf532ffc17681e

View File

@@ -8,7 +8,7 @@ ENV GOARCH=$TARGETARCH
RUN git clone https://github.com/kubevirt/cloud-provider-kubevirt /go/src/kubevirt.io/cloud-provider-kubevirt \
&& cd /go/src/kubevirt.io/cloud-provider-kubevirt \
&& git checkout 443a1fe
&& git checkout a0acf33
WORKDIR /go/src/kubevirt.io/cloud-provider-kubevirt

View File

@@ -37,7 +37,7 @@ index 74166b5d9..4e744f8de 100644
klog.Infof("Initializing kubevirtEPSController")
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller.go b/pkg/controller/kubevirteps/kubevirteps_controller.go
index 6f6e3d322..b56882c12 100644
index 53388eb8e..b56882c12 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller.go
@@ -54,10 +54,10 @@ type Controller struct {
@@ -286,22 +286,6 @@ index 6f6e3d322..b56882c12 100644
for _, eps := range slicesToDelete {
err := c.infraClient.DiscoveryV1().EndpointSlices(eps.Namespace).Delete(context.TODO(), eps.Name, metav1.DeleteOptions{})
if err != nil {
@@ -474,11 +538,11 @@ func (c *Controller) reconcileByAddressType(service *v1.Service, tenantSlices []
// Create the desired port configuration
var desiredPorts []discovery.EndpointPort
- for _, port := range service.Spec.Ports {
+ for i := range service.Spec.Ports {
desiredPorts = append(desiredPorts, discovery.EndpointPort{
- Port: &port.TargetPort.IntVal,
- Protocol: &port.Protocol,
- Name: &port.Name,
+ Port: &service.Spec.Ports[i].TargetPort.IntVal,
+ Protocol: &service.Spec.Ports[i].Protocol,
+ Name: &service.Spec.Ports[i].Name,
})
}
@@ -588,55 +652,114 @@ func ownedBy(endpointSlice *discovery.EndpointSlice, svc *v1.Service) bool {
return false
}
@@ -437,18 +421,10 @@ index 6f6e3d322..b56882c12 100644
return nil
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller_test.go b/pkg/controller/kubevirteps/kubevirteps_controller_test.go
index 1fb86e25f..14d92d340 100644
index 1c97035b4..14d92d340 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller_test.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller_test.go
@@ -13,6 +13,7 @@ import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/intstr"
+ "k8s.io/apimachinery/pkg/util/sets"
dfake "k8s.io/client-go/dynamic/fake"
"k8s.io/client-go/kubernetes/fake"
"k8s.io/client-go/testing"
@@ -189,7 +190,7 @@ func setupTestKubevirtEPSController() *testKubevirtEPSController {
@@ -190,7 +190,7 @@ func setupTestKubevirtEPSController() *testKubevirtEPSController {
}: "VirtualMachineInstanceList",
})
@@ -457,83 +433,87 @@ index 1fb86e25f..14d92d340 100644
err := controller.Init()
if err != nil {
@@ -686,5 +687,229 @@ var _ = g.Describe("KubevirtEPSController", g.Ordered, func() {
return false, err
}).Should(BeTrue(), "EndpointSlice in infra cluster should be recreated by the controller after deletion")
})
+
+ g.It("Should correctly handle multiple unique ports in EndpointSlice", func() {
+ // Create a VMI in the infra cluster
+ createAndAssertVMI("worker-0-test", "ip-10-32-5-13", "123.45.67.89")
+
+ // Create an EndpointSlice in the tenant cluster
+ createAndAssertTenantSlice("test-epslice", "tenant-service-name", discoveryv1.AddressTypeIPv4,
+ *createPort("http", 80, v1.ProtocolTCP),
+ []discoveryv1.Endpoint{*createEndpoint("123.45.67.89", "worker-0-test", true, true, false)})
+
@@ -697,51 +697,43 @@ var _ = g.Describe("KubevirtEPSController", g.Ordered, func() {
*createPort("http", 80, v1.ProtocolTCP),
[]discoveryv1.Endpoint{*createEndpoint("123.45.67.89", "worker-0-test", true, true, false)})
- // Define several unique ports for the Service
+ // Define multiple ports for the Service
+ servicePorts := []v1.ServicePort{
+ {
servicePorts := []v1.ServicePort{
{
- Name: "client",
- Protocol: v1.ProtocolTCP,
- Port: 10001,
- TargetPort: intstr.FromInt(30396),
- NodePort: 30396,
- AppProtocol: nil,
+ Name: "client",
+ Protocol: v1.ProtocolTCP,
+ Port: 10001,
+ TargetPort: intstr.FromInt(30396),
+ NodePort: 30396,
+ },
+ {
},
{
- Name: "dashboard",
- Protocol: v1.ProtocolTCP,
- Port: 8265,
- TargetPort: intstr.FromInt(31003),
- NodePort: 31003,
- AppProtocol: nil,
+ Name: "dashboard",
+ Protocol: v1.ProtocolTCP,
+ Port: 8265,
+ TargetPort: intstr.FromInt(31003),
+ NodePort: 31003,
+ },
+ {
},
{
- Name: "metrics",
- Protocol: v1.ProtocolTCP,
- Port: 8080,
- TargetPort: intstr.FromInt(30452),
- NodePort: 30452,
- AppProtocol: nil,
+ Name: "metrics",
+ Protocol: v1.ProtocolTCP,
+ Port: 8080,
+ TargetPort: intstr.FromInt(30452),
+ NodePort: 30452,
+ },
+ }
+
+ createAndAssertInfraServiceLB("infra-multiport-service", "tenant-service-name", "test-cluster",
},
}
- // Create a Service with the first port
createAndAssertInfraServiceLB("infra-multiport-service", "tenant-service-name", "test-cluster",
- servicePorts[0],
- v1.ServiceExternalTrafficPolicyLocal)
+ servicePorts[0], v1.ServiceExternalTrafficPolicyLocal)
+
+ svc, err := testVals.infraClient.CoreV1().Services(infraNamespace).Get(context.TODO(), "infra-multiport-service", metav1.GetOptions{})
+ Expect(err).To(BeNil())
+
+ svc.Spec.Ports = servicePorts
+ _, err = testVals.infraClient.CoreV1().Services(infraNamespace).Update(context.TODO(), svc, metav1.UpdateOptions{})
+ Expect(err).To(BeNil())
+
+ var epsListMultiPort *discoveryv1.EndpointSliceList
+
+ Eventually(func() (bool, error) {
+ epsListMultiPort, err = testVals.infraClient.DiscoveryV1().EndpointSlices(infraNamespace).List(context.TODO(), metav1.ListOptions{})
+ if len(epsListMultiPort.Items) != 1 {
+ return false, err
+ }
+
+ createdSlice := epsListMultiPort.Items[0]
+ expectedPortNames := []string{"client", "dashboard", "metrics"}
+ foundPortNames := []string{}
+
+ for _, port := range createdSlice.Ports {
+ if port.Name != nil {
+ foundPortNames = append(foundPortNames, *port.Name)
+ }
+ }
+
+ if len(foundPortNames) != len(expectedPortNames) {
+ return false, err
+ }
+
+ portSet := sets.NewString(foundPortNames...)
+ expectedPortSet := sets.NewString(expectedPortNames...)
+ return portSet.Equal(expectedPortSet), err
+ }).Should(BeTrue(), "EndpointSlice should contain all unique ports from the Service without duplicates")
+ })
+
- // Update the Service by adding the remaining ports
svc, err := testVals.infraClient.CoreV1().Services(infraNamespace).Get(context.TODO(), "infra-multiport-service", metav1.GetOptions{})
Expect(err).To(BeNil())
svc.Spec.Ports = servicePorts
-
_, err = testVals.infraClient.CoreV1().Services(infraNamespace).Update(context.TODO(), svc, metav1.UpdateOptions{})
Expect(err).To(BeNil())
var epsListMultiPort *discoveryv1.EndpointSliceList
- // Verify that the EndpointSlice is created with correct unique ports
Eventually(func() (bool, error) {
epsListMultiPort, err = testVals.infraClient.DiscoveryV1().EndpointSlices(infraNamespace).List(context.TODO(), metav1.ListOptions{})
if len(epsListMultiPort.Items) != 1 {
@@ -758,7 +750,6 @@ var _ = g.Describe("KubevirtEPSController", g.Ordered, func() {
}
}
- // Verify that all expected ports are present and without duplicates
if len(foundPortNames) != len(expectedPortNames) {
return false, err
}
@@ -769,5 +760,156 @@ var _ = g.Describe("KubevirtEPSController", g.Ordered, func() {
}).Should(BeTrue(), "EndpointSlice should contain all unique ports from the Service without duplicates")
})
+ g.It("Should not panic when Service changes to have a non-nil selector, causing EndpointSlice deletion with no new slices to create", func() {
+ createAndAssertVMI("worker-0-test", "ip-10-32-5-13", "123.45.67.89")
+ createAndAssertTenantSlice("test-epslice", "tenant-service-name", discoveryv1.AddressTypeIPv4,

View File

@@ -0,0 +1,142 @@
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller.go b/pkg/controller/kubevirteps/kubevirteps_controller.go
index 53388eb8e..28644236f 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller.go
@@ -12,7 +12,6 @@ import (
apiequality "k8s.io/apimachinery/pkg/api/equality"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
- "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
@@ -669,35 +668,50 @@ func (c *Controller) getDesiredEndpoints(service *v1.Service, tenantSlices []*di
for _, slice := range tenantSlices {
for _, endpoint := range slice.Endpoints {
// find all unique nodes that correspond to an endpoint in a tenant slice
+ if endpoint.NodeName == nil {
+ klog.Warningf("Skipping endpoint without NodeName in slice %s/%s", slice.Namespace, slice.Name)
+ continue
+ }
nodeSet.Insert(*endpoint.NodeName)
}
}
- klog.Infof("Desired nodes for service %s in namespace %s: %v", service.Name, service.Namespace, sets.List(nodeSet))
+ klog.Infof("Desired nodes for service %s/%s: %v", service.Namespace, service.Name, sets.List(nodeSet))
for _, node := range sets.List(nodeSet) {
// find vmi for node name
- obj := &unstructured.Unstructured{}
- vmi := &kubevirtv1.VirtualMachineInstance{}
-
- obj, err := c.infraDynamic.Resource(kubevirtv1.VirtualMachineInstanceGroupVersionKind.GroupVersion().WithResource("virtualmachineinstances")).Namespace(c.infraNamespace).Get(context.TODO(), node, metav1.GetOptions{})
+ obj, err := c.infraDynamic.
+ Resource(kubevirtv1.VirtualMachineInstanceGroupVersionKind.GroupVersion().WithResource("virtualmachineinstances")).
+ Namespace(c.infraNamespace).
+ Get(context.TODO(), node, metav1.GetOptions{})
if err != nil {
- klog.Errorf("Failed to get VirtualMachineInstance %s in namespace %s:%v", node, c.infraNamespace, err)
+ klog.Errorf("Failed to get VMI %q in namespace %q: %v", node, c.infraNamespace, err)
continue
}
+ vmi := &kubevirtv1.VirtualMachineInstance{}
err = runtime.DefaultUnstructuredConverter.FromUnstructured(obj.Object, vmi)
if err != nil {
klog.Errorf("Failed to convert Unstructured to VirtualMachineInstance: %v", err)
- klog.Fatal(err)
+ continue
}
+ if vmi.Status.NodeName == "" {
+ klog.Warningf("Skipping VMI %s/%s: NodeName is empty", vmi.Namespace, vmi.Name)
+ continue
+ }
+ nodeNamePtr := &vmi.Status.NodeName
+
ready := vmi.Status.Phase == kubevirtv1.Running
serving := vmi.Status.Phase == kubevirtv1.Running
terminating := vmi.Status.Phase == kubevirtv1.Failed || vmi.Status.Phase == kubevirtv1.Succeeded
for _, i := range vmi.Status.Interfaces {
if i.Name == "default" {
+ if i.IP == "" {
+ klog.Warningf("VMI %s/%s interface %q has no IP, skipping", vmi.Namespace, vmi.Name, i.Name)
+ continue
+ }
desiredEndpoints = append(desiredEndpoints, &discovery.Endpoint{
Addresses: []string{i.IP},
Conditions: discovery.EndpointConditions{
@@ -705,9 +719,9 @@ func (c *Controller) getDesiredEndpoints(service *v1.Service, tenantSlices []*di
Serving: &serving,
Terminating: &terminating,
},
- NodeName: &vmi.Status.NodeName,
+ NodeName: nodeNamePtr,
})
- continue
+ break
}
}
}
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller_test.go b/pkg/controller/kubevirteps/kubevirteps_controller_test.go
index 1c97035b4..d205d0bed 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller_test.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller_test.go
@@ -771,3 +771,55 @@ var _ = g.Describe("KubevirtEPSController", g.Ordered, func() {
})
})
+
+var _ = g.Describe("getDesiredEndpoints", func() {
+ g.It("should skip endpoints without NodeName and VMIs without NodeName or IP", func() {
+ // Setup controller
+ ctrl := setupTestKubevirtEPSController().controller
+
+ // Manually inject dynamic client content (1 VMI with missing NodeName)
+ vmi := createUnstructuredVMINode("vmi-without-node", "", "10.0.0.1") // empty NodeName
+ _, err := ctrl.infraDynamic.
+ Resource(kubevirtv1.VirtualMachineInstanceGroupVersionKind.GroupVersion().WithResource("virtualmachineinstances")).
+ Namespace(infraNamespace).
+ Create(context.TODO(), vmi, metav1.CreateOptions{})
+ Expect(err).To(BeNil())
+
+ // Create service
+ svc := createInfraServiceLB("test-svc", "test-svc", "test-cluster",
+ v1.ServicePort{
+ Name: "http",
+ Port: 80,
+ TargetPort: intstr.FromInt(8080),
+ Protocol: v1.ProtocolTCP,
+ },
+ v1.ServiceExternalTrafficPolicyLocal,
+ )
+
+ // One endpoint has nil NodeName, another is valid
+ nodeName := "vmi-without-node"
+ tenantSlice := &discoveryv1.EndpointSlice{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "slice",
+ Namespace: tenantNamespace,
+ Labels: map[string]string{
+ discoveryv1.LabelServiceName: "test-svc",
+ },
+ },
+ AddressType: discoveryv1.AddressTypeIPv4,
+ Endpoints: []discoveryv1.Endpoint{
+ { // should be skipped due to nil NodeName
+ Addresses: []string{"10.1.1.1"},
+ NodeName: nil,
+ },
+ { // will hit VMI without NodeName and also be skipped
+ Addresses: []string{"10.1.1.2"},
+ NodeName: &nodeName,
+ },
+ },
+ }
+
+ endpoints := ctrl.getDesiredEndpoints(svc, []*discoveryv1.EndpointSlice{tenantSlice})
+ Expect(endpoints).To(HaveLen(0), "Expected no endpoints due to missing NodeName or IP")
+ })
+})
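
Assuming the guards above all pass for a VMI named `worker-0-test` with interface IP `123.45.67.89` on infra node `ip-10-32-5-13` (names reused from the tests; the mapping of `createAndAssertVMI` arguments to node name is an assumption), the controller would append a desired endpoint shaped roughly like this:
```yaml
# Sketch of one endpoint built by getDesiredEndpoints
addresses:
  - "123.45.67.89"   # IP of the VMI's "default" interface
conditions:
  ready: true         # vmi.Status.Phase == Running
  serving: true       # same condition as ready
  terminating: false  # Failed or Succeeded would set this
nodeName: ip-10-32-5-13   # vmi.Status.NodeName
```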

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.23.1@sha256:b1525163cd21938ac934bb1b860f2f3151464fa463b82880ab058167aeaf3e29
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.24.0@sha256:4d3728b2050d4e0adb00b9f4abbb4a020b29e1a39f24ca1447806fc81f110fa6

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/ubuntu-container-disk:v1.32@sha256:290264eba144c5c58f68009b17f59a5990f6fafbf768f9cbefd7050c34733930
ghcr.io/cozystack/cozystack/ubuntu-container-disk:v1.32@sha256:e53f2394c7aa76ad10818ffb945e40006cd77406999e47e036d41b8b0bf094cc

View File

@@ -121,21 +121,21 @@ metadata:
spec:
apiServer:
{{- if .Values.controlPlane.apiServer.resources }}
resources: {{- toYaml .Values.controlPlane.apiServer.resources | nindent 6 }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.controlPlane.apiServer.resources $) | nindent 6 }}
{{- else if ne .Values.controlPlane.apiServer.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.controlPlane.apiServer.resourcesPreset "Release" .Release) | nindent 6 }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.controlPlane.apiServer.resourcesPreset $) | nindent 6 }}
{{- end }}
controllerManager:
{{- if .Values.controlPlane.controllerManager.resources }}
resources: {{- toYaml .Values.controlPlane.controllerManager.resources | nindent 6 }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.controlPlane.controllerManager.resources $) | nindent 6 }}
{{- else if ne .Values.controlPlane.controllerManager.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.controlPlane.controllerManager.resourcesPreset "Release" .Release) | nindent 6 }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.controlPlane.controllerManager.resourcesPreset $) | nindent 6 }}
{{- end }}
scheduler:
{{- if .Values.controlPlane.scheduler.resources }}
resources: {{- toYaml .Values.controlPlane.scheduler.resources | nindent 6 }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.controlPlane.scheduler.resources $) | nindent 6 }}
{{- else if ne .Values.controlPlane.scheduler.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.controlPlane.scheduler.resourcesPreset "Release" .Release) | nindent 6 }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.controlPlane.scheduler.resourcesPreset $) | nindent 6 }}
{{- end }}
dataStoreName: "{{ $etcd }}"
addons:
@@ -146,9 +146,9 @@ spec:
server:
port: 8132
{{- if .Values.controlPlane.konnectivity.server.resources }}
resources: {{- toYaml .Values.controlPlane.konnectivity.server.resources | nindent 10 }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.controlPlane.konnectivity.server.resources $) | nindent 10 }}
{{- else if ne .Values.controlPlane.konnectivity.server.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.controlPlane.konnectivity.server.resourcesPreset "Release" .Release) | nindent 10 }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.controlPlane.konnectivity.server.resourcesPreset $) | nindent 10 }}
{{- end }}
kubelet:
cgroupfs: systemd
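
As a usage sketch, the values below exercise both branches of the template above: explicit `resources` for the API server and the preset fallback for the scheduler. Key paths match the chart's values.yaml; the numbers are illustrative.
```yaml
controlPlane:
  apiServer:
    resourcesPreset: medium   # ignored because resources is set
    resources:
      limits:
        cpu: 4000m
        memory: 4Gi
      requests:
        cpu: 100m
        memory: 512Mi
  scheduler:
    resourcesPreset: micro    # used because resources is empty
    resources: {}
```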

View File

@@ -34,3 +34,14 @@ rules:
- {{ $.Release.Name }}-{{ $groupName }}
{{- end }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -24,8 +24,8 @@ spec:
secretRef:
name: {{ .Release.Name }}-admin-kubeconfig
key: super-admin.svc
targetNamespace: cozy-monitoring-agents
storageNamespace: cozy-monitoring-agents
targetNamespace: cozy-monitoring
storageNamespace: cozy-monitoring
install:
createNamespace: true
timeout: "300s"

View File

@@ -26,7 +26,7 @@
"resourcesPreset": {
"type": "string",
"description": "Use a common resources preset when `resources` is not set explicitly.",
"default": "small",
"default": "medium",
"enum": [
"none",
"nano",

View File

@@ -123,7 +123,7 @@ controlPlane:
## cpu: 100m
## memory: 512Mi
##
resourcesPreset: "small"
resourcesPreset: "medium"
resources: {}
controllerManager:

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.7.1
version: 0.8.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/mariadb-backup:0.7.1@sha256:cfd1c37d8ad24e10681d82d6e6ce8a641b4602c1b0ffa8516ae15b4958bb12d4
ghcr.io/cozystack/cozystack/mariadb-backup:0.8.0@sha256:cfd1c37d8ad24e10681d82d6e6ce8a641b4602c1b0ffa8516ae15b4958bb12d4

View File

@@ -25,3 +25,14 @@ rules:
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.6.1
version: 0.7.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -24,3 +24,14 @@ rules:
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.12.1
version: 0.14.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/postgres-backup:0.12.1@sha256:10179ed56457460d95cd5708db2a00130901255fa30c4dd76c65d2ef5622b61f
ghcr.io/cozystack/cozystack/postgres-backup:0.14.0@sha256:10179ed56457460d95cd5708db2a00130901255fa30c4dd76c65d2ef5622b61f

View File

@@ -1,2 +1,2 @@
FROM alpine:3.20
RUN apk add --no-cache postgresql16-client uuidgen restic
FROM alpine:3.22
RUN apk add --no-cache postgresql17-client uuidgen restic

View File

@@ -0,0 +1,11 @@
{{/*
Generate resource requirements based on the ResourceQuota named after the namespace.
*/}}
{{- define "postgresjobs.defaultResources" }}
cpu: "1"
memory: 512Mi
{{- end }}
{{- define "postgresjobs.resources" }}
resources:
{{ include "cozy-lib.resources.sanitize" (list (include "postgresjobs.defaultResources" $ | fromYaml) $) | nindent 2 }}
{{- end }}
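
A hedged sketch of the rendered output, assuming `cozy-lib.resources.sanitize` normalizes the flat cpu/memory map from `postgresjobs.defaultResources` into standard requests and limits (the exact output shape is defined by cozy-lib and not shown here):
```yaml
# Hypothetical rendering of {{ include "postgresjobs.resources" . }}
resources:
  limits:
    cpu: "1"
    memory: 512Mi
  requests:
    cpu: "1"
    memory: 512Mi
```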

View File

@@ -81,6 +81,7 @@ spec:
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true
{{- include "postgresjobs.resources" . | nindent 12 }}
volumes:
- name: scripts
secret:

View File

@@ -26,3 +26,14 @@ rules:
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -50,6 +50,7 @@ spec:
name: secret
- mountPath: /scripts
name: scripts
{{- include "postgresjobs.resources" . | nindent 8 }}
securityContext:
fsGroup: 26
runAsGroup: 26

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.6.0
version: 0.7.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -27,3 +27,14 @@ rules:
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.7.1
version: 0.8.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -28,3 +28,14 @@ rules:
- {{ .Release.Name }}-redis
- {{ .Release.Name }}-sentinel
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -4,4 +4,4 @@ description: Separated tenant namespace
icon: /logos/tenant.svg
type: application
version: 1.9.3
version: 1.10.0

View File

@@ -0,0 +1 @@
../../../library/cozy-lib

View File

@@ -23,8 +23,8 @@ metadata:
namespace: {{ include "tenant.name" . }}
rules:
- apiGroups: [""]
resources: ["*"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
resources: ["pods", "services", "persistentvolumes", "endpoints", "events", "resourcequotas"]
verbs: ["get", "list", "watch"]
- apiGroups: ["networking.k8s.io"]
resources: ["ingresses"]
verbs: ["get", "list", "watch"]
@@ -94,7 +94,12 @@ rules:
- apiGroups:
- ""
resources:
- "*"
- pods
- services
- persistentvolumes
- endpoints
- events
- resourcequotas
verbs:
- get
- list
@@ -119,24 +124,7 @@ metadata:
name: {{ include "tenant.name" . }}-view
namespace: {{ include "tenant.name" . }}
subjects:
{{- if ne .Release.Namespace "tenant-root" }}
- kind: Group
name: tenant-root-view
apiGroup: rbac.authorization.k8s.io
{{- end }}
- kind: Group
name: {{ include "tenant.name" . }}-view
apiGroup: rbac.authorization.k8s.io
{{- if hasPrefix "tenant-" .Release.Namespace }}
{{- $parts := splitList "-" .Release.Namespace }}
{{- range $i, $v := $parts }}
{{- if ne $i 0 }}
- kind: Group
name: {{ join "-" (slice $parts 0 (add $i 1)) }}-view
apiGroup: rbac.authorization.k8s.io
{{- end }}
{{- end }}
{{- end }}
{{ include "cozy-lib.rbac.subjectsForTenant" (list "view" (include "tenant.name" .)) | nindent 2 }}
roleRef:
kind: Role
name: {{ include "tenant.name" . }}-view
@@ -165,7 +153,12 @@ rules:
- watch
- apiGroups: [""]
resources:
- "*"
- pods
- services
- persistentvolumes
- endpoints
- events
- resourcequotas
verbs:
- get
- list
@@ -202,24 +195,7 @@ metadata:
name: {{ include "tenant.name" . }}-use
namespace: {{ include "tenant.name" . }}
subjects:
{{- if ne .Release.Namespace "tenant-root" }}
- kind: Group
name: tenant-root-use
apiGroup: rbac.authorization.k8s.io
{{- end }}
- kind: Group
name: {{ include "tenant.name" . }}-use
apiGroup: rbac.authorization.k8s.io
{{- if hasPrefix "tenant-" .Release.Namespace }}
{{- $parts := splitList "-" .Release.Namespace }}
{{- range $i, $v := $parts }}
{{- if ne $i 0 }}
- kind: Group
name: {{ join "-" (slice $parts 0 (add $i 1)) }}-use
apiGroup: rbac.authorization.k8s.io
{{- end }}
{{- end }}
{{- end }}
{{ include "cozy-lib.rbac.subjectsForTenant" (list "use" (include "tenant.name" .)) | nindent 2 }}
roleRef:
kind: Role
name: {{ include "tenant.name" . }}-use
@@ -240,7 +216,12 @@ rules:
- get
- apiGroups: [""]
resources:
- "*"
- pods
- services
- persistentvolumes
- endpoints
- events
- resourcequotas
verbs:
- get
- list
@@ -305,24 +286,7 @@ metadata:
name: {{ include "tenant.name" . }}-admin
namespace: {{ include "tenant.name" . }}
subjects:
{{- if ne .Release.Namespace "tenant-root" }}
- kind: Group
name: tenant-root-admin
apiGroup: rbac.authorization.k8s.io
{{- end }}
- kind: Group
name: {{ include "tenant.name" . }}-admin
apiGroup: rbac.authorization.k8s.io
{{- if hasPrefix "tenant-" .Release.Namespace }}
{{- $parts := splitList "-" .Release.Namespace }}
{{- range $i, $v := $parts }}
{{- if ne $i 0 }}
- kind: Group
name: {{ join "-" (slice $parts 0 (add $i 1)) }}-admin
apiGroup: rbac.authorization.k8s.io
{{- end }}
{{- end }}
{{- end }}
{{ include "cozy-lib.rbac.subjectsForTenant" (list "admin" (include "tenant.name" .)) | nindent 2 }}
roleRef:
kind: Role
name: {{ include "tenant.name" . }}-admin
@@ -343,7 +307,12 @@ rules:
- get
- apiGroups: [""]
resources:
- "*"
- pods
- services
- persistentvolumes
- endpoints
- events
- resourcequotas
verbs:
- get
- list
@@ -384,24 +353,7 @@ metadata:
name: {{ include "tenant.name" . }}-super-admin
namespace: {{ include "tenant.name" . }}
subjects:
{{- if ne .Release.Namespace "tenant-root" }}
- kind: Group
name: tenant-root-super-admin
apiGroup: rbac.authorization.k8s.io
{{- end }}
- kind: Group
name: {{ include "tenant.name" . }}-super-admin
apiGroup: rbac.authorization.k8s.io
{{- if hasPrefix "tenant-" .Release.Namespace }}
{{- $parts := splitList "-" .Release.Namespace }}
{{- range $i, $v := $parts }}
{{- if ne $i 0 }}
- kind: Group
name: {{ join "-" (slice $parts 0 (add $i 1)) }}-super-admin
apiGroup: rbac.authorization.k8s.io
{{- end }}
{{- end }}
{{- end }}
{{ include "cozy-lib.rbac.subjectsForTenant" (list "super-admin" (include "tenant.name" .) ) | nindent 2 }}
roleRef:
kind: Role
name: {{ include "tenant.name" . }}-super-admin
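
Based on the group list that the old inline loop produced, `cozy-lib.rbac.subjectsForTenant` can be assumed to render one Group subject per level of the tenant hierarchy. For a tenant `tenant-foo-bar` at the `use` access level, that would be roughly:
```yaml
subjects:
  - kind: Group
    name: tenant-root-use
    apiGroup: rbac.authorization.k8s.io
  - kind: Group
    name: tenant-foo-use
    apiGroup: rbac.authorization.k8s.io
  - kind: Group
    name: tenant-foo-bar-use
    apiGroup: rbac.authorization.k8s.io
```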

View File

@@ -1,4 +1,5 @@
bucket 0.1.0 HEAD
bucket 0.1.0 632224a3
bucket 0.2.0 HEAD
clickhouse 0.1.0 f7eaab0a
clickhouse 0.2.0 53f2365e
clickhouse 0.2.1 dfbc210b
@@ -10,7 +11,8 @@ clickhouse 0.6.1 c62a83a7
clickhouse 0.6.2 8267072d
clickhouse 0.7.0 93bdf411
clickhouse 0.9.0 6130f43d
clickhouse 0.9.2 HEAD
clickhouse 0.9.2 632224a3
clickhouse 0.10.0 HEAD
ferretdb 0.1.0 e9716091
ferretdb 0.1.1 91b0499a
ferretdb 0.2.0 6c5cf5bf
@@ -20,7 +22,8 @@ ferretdb 0.4.1 1ec10165
ferretdb 0.4.2 8267072d
ferretdb 0.5.0 93bdf411
ferretdb 0.6.0 6130f43d
ferretdb 0.6.1 HEAD
ferretdb 0.6.1 632224a3
ferretdb 0.7.0 HEAD
http-cache 0.1.0 263e47be
http-cache 0.2.0 53f2365e
http-cache 0.3.0 6c5cf5bf
@@ -40,38 +43,10 @@ kafka 0.3.3 8267072d
kafka 0.4.0 85ec09b8
kafka 0.5.0 93bdf411
kafka 0.6.0 6130f43d
kafka 0.6.1 HEAD
kubernetes 0.1.0 263e47be
kubernetes 0.2.0 53f2365e
kubernetes 0.3.0 007d414f
kubernetes 0.4.0 d7cfa53c
kubernetes 0.5.0 dfbc210b
kubernetes 0.6.0 5bbc488e
kubernetes 0.7.0 e9716091
kubernetes 0.8.0 ac11056e
kubernetes 0.8.1 366bcafc
kubernetes 0.8.2 f81be075
kubernetes 0.9.0 6c5cf5bf
kubernetes 0.10.0 b8e33d19
kubernetes 0.11.0 4b90bf5a
kubernetes 0.11.1 5fb9cfe3
kubernetes 0.12.0 bb985806
kubernetes 0.12.1 28fca4ef
kubernetes 0.13.0 1ec10165
kubernetes 0.14.0 bfbde07c
kubernetes 0.14.1 898374b5
kubernetes 0.15.0 4e68e65c
kubernetes 0.15.1 160e4e2a
kubernetes 0.15.2 8267072d
kubernetes 0.16.0 077045b0
kubernetes 0.17.0 1fbbfcd0
kubernetes 0.17.1 fd240701
kubernetes 0.18.0 721c12a7
kubernetes 0.19.0 93bdf411
kubernetes 0.20.0 609e7ede
kubernetes 0.20.1 f9f8bb2f
kubernetes 0.21.0 6130f43d
kubernetes 0.23.1 HEAD
kafka 0.6.1 632224a3
kafka 0.7.0 HEAD
kubernetes 0.24.0 62cb694d
kubernetes 0.24.1 HEAD
mysql 0.1.0 263e47be
mysql 0.2.0 c24a103f
mysql 0.3.0 53f2365e
@@ -82,7 +57,8 @@ mysql 0.5.2 1ec10165
mysql 0.5.3 8267072d
mysql 0.6.0 93bdf411
mysql 0.7.0 6130f43d
mysql 0.7.1 HEAD
mysql 0.7.1 632224a3
mysql 0.8.0 HEAD
nats 0.1.0 e9716091
nats 0.2.0 6c5cf5bf
nats 0.3.0 78366f19
@@ -91,7 +67,8 @@ nats 0.4.0 898374b5
nats 0.4.1 8267072d
nats 0.5.0 93bdf411
nats 0.6.0 6130f43d
nats 0.6.1 HEAD
nats 0.6.1 632224a3
nats 0.7.0 HEAD
postgres 0.1.0 263e47be
postgres 0.2.0 53f2365e
postgres 0.2.1 d7cfa53c
@@ -109,7 +86,9 @@ postgres 0.10.0 721c12a7
postgres 0.10.1 93bdf411
postgres 0.11.0 f9f8bb2f
postgres 0.12.0 6130f43d
postgres 0.12.1 HEAD
postgres 0.12.1 632224a3
postgres 0.14.0 62cb694d
postgres 0.14.1 HEAD
rabbitmq 0.1.0 263e47be
rabbitmq 0.2.0 53f2365e
rabbitmq 0.3.0 6c5cf5bf
@@ -119,7 +98,8 @@ rabbitmq 0.4.2 4b90bf5a
rabbitmq 0.4.3 1ec10165
rabbitmq 0.4.4 8267072d
rabbitmq 0.5.0 93bdf411
rabbitmq 0.6.0 HEAD
rabbitmq 0.6.0 632224a3
rabbitmq 0.7.0 HEAD
redis 0.1.1 263e47be
redis 0.2.0 53f2365e
redis 0.3.0 6c5cf5bf
@@ -128,36 +108,14 @@ redis 0.4.0 84f3ccc0
redis 0.5.0 4e68e65c
redis 0.6.0 93bdf411
redis 0.7.0 6130f43d
redis 0.7.1 HEAD
redis 0.7.1 632224a3
redis 0.8.0 HEAD
tcp-balancer 0.1.0 263e47be
tcp-balancer 0.2.0 53f2365e
tcp-balancer 0.3.0 93bdf411
tcp-balancer 0.4.0 6130f43d
tcp-balancer 0.4.1 HEAD
tenant 0.1.4 afc997ef
tenant 0.1.5 e3ab858a
tenant 1.0.0 263e47be
tenant 1.1.0 c0685f43
tenant 1.2.0 dfbc210b
tenant 1.3.0 e9716091
tenant 1.3.1 91b0499a
tenant 1.4.0 71514249
tenant 1.5.0 1ec10165
tenant 1.6.0 df448b99
tenant 1.6.1 c62a83a7
tenant 1.6.2 898374b5
tenant 1.6.3 2057bb96
tenant 1.6.4 84f3ccc0
tenant 1.6.5 fde4bcfa
tenant 1.6.6 4e68e65c
tenant 1.6.7 0ab39f20
tenant 1.6.8 bc95159a
tenant 1.7.0 24fa7222
tenant 1.8.0 160e4e2a
tenant 1.9.0 728743db
tenant 1.9.1 721c12a7
tenant 1.9.2 6130f43d
tenant 1.9.3 HEAD
tenant 1.10.0 HEAD
virtual-machine 0.1.4 f2015d65
virtual-machine 0.1.5 263e47be
virtual-machine 0.2.0 c0685f43
@@ -173,10 +131,12 @@ virtual-machine 0.8.2 de19450f
virtual-machine 0.9.0 721c12a7
virtual-machine 0.9.1 93bdf411
virtual-machine 0.10.0 6130f43d
virtual-machine 0.10.2 HEAD
virtual-machine 0.10.2 632224a3
virtual-machine 0.11.0 HEAD
vm-disk 0.1.0 d971f2ff
vm-disk 0.1.1 6130f43d
vm-disk 0.1.2 HEAD
vm-disk 0.1.2 632224a3
vm-disk 0.2.0 HEAD
vm-instance 0.1.0 1ec10165
vm-instance 0.2.0 84f3ccc0
vm-instance 0.3.0 4e68e65c
@@ -186,11 +146,13 @@ vm-instance 0.5.0 3fa4dd3a
vm-instance 0.5.1 de19450f
vm-instance 0.6.0 721c12a7
vm-instance 0.7.0 6130f43d
vm-instance 0.7.2 HEAD
vm-instance 0.7.2 632224a3
vm-instance 0.8.0 HEAD
vpn 0.1.0 263e47be
vpn 0.2.0 53f2365e
vpn 0.3.0 6c5cf5bf
vpn 0.3.1 1ec10165
vpn 0.4.0 93bdf411
vpn 0.5.0 6130f43d
vpn 0.5.1 HEAD
vpn 0.5.1 632224a3
vpn 0.6.1 HEAD

View File

@@ -17,10 +17,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.10.2
version: 0.11.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: 0.10.0
appVersion: 0.11.0

View File

@@ -0,0 +1 @@
../../../library/cozy-lib

View File

@@ -11,6 +11,17 @@ rules:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:

View File

@@ -16,10 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.2
version: 0.2.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: 0.1.1
appVersion: 0.2.0

View File

@@ -0,0 +1 @@
../../../library/cozy-lib

View File

@@ -10,3 +10,14 @@ rules:
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -17,10 +17,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.7.2
version: 0.8.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: 0.7.0
appVersion: 0.8.0

View File

@@ -0,0 +1 @@
../../../library/cozy-lib

View File

@@ -11,6 +11,17 @@ rules:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.5.1
version: 0.6.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -17,3 +17,14 @@ rules:
resourceNames:
- {{ .Release.Name }}-vpn
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -9,13 +9,13 @@ pre-checks:
../../../hack/pre-checks.sh
show:
helm template -n $(NAMESPACE) $(NAME) .
cozypkg show -n $(NAMESPACE) $(NAME) --plain
apply:
helm template -n $(NAMESPACE) $(NAME) . | kubectl apply -f -
cozypkg show -n $(NAMESPACE) $(NAME) --plain | kubectl apply -f-
diff:
helm template -n $(NAMESPACE) $(NAME) . | kubectl diff -f -
cozypkg show -n $(NAMESPACE) $(NAME) --plain | kubectl diff -f -
update:
hack/gen-profiles.sh
@@ -23,7 +23,6 @@ update:
image: pre-checks image-matchbox image-cozystack image-talos
image-cozystack:
make -C ../../.. repos
docker buildx build -f images/cozystack/Dockerfile ../../.. \
--provenance false \
--tag $(REGISTRY)/installer:$(call settag,$(TAG)) \

View File

@@ -1,4 +1,4 @@
FROM golang:alpine3.21 as k8s-await-election-builder
FROM golang:1.24-alpine as k8s-await-election-builder
ARG K8S_AWAIT_ELECTION_GITREPO=https://github.com/LINBIT/k8s-await-election
ARG K8S_AWAIT_ELECTION_VERSION=0.4.1
@@ -13,7 +13,10 @@ RUN git clone ${K8S_AWAIT_ELECTION_GITREPO} /usr/local/go/k8s-await-election/ \
&& make \
&& mv ./out/k8s-await-election-${TARGETARCH} /k8s-await-election
FROM golang:alpine3.21 as builder
FROM golang:1.24-alpine as builder
ARG TARGETOS
ARG TARGETARCH
RUN apk add --no-cache make git
RUN apk add helm --repository=https://dl-cdn.alpinelinux.org/alpine/edge/community
@@ -21,26 +24,26 @@ RUN apk add helm --repository=https://dl-cdn.alpinelinux.org/alpine/edge/communi
COPY . /src/
WORKDIR /src
RUN go mod download
RUN go build -o /cozystack-assets-server -ldflags '-extldflags "-static" -w -s' ./cmd/cozystack-assets-server
# Check that versions_map is not changed
RUN make repos
FROM alpine:3.21
FROM alpine:3.22
RUN apk add --no-cache make
RUN apk add helm kubectl --repository=https://dl-cdn.alpinelinux.org/alpine/edge/community
RUN apk add yq
RUN apk add coreutils
RUN wget -O- https://github.com/cozystack/cozypkg/raw/refs/heads/main/hack/install.sh | sh -s -- -v 1.1.0
COPY scripts /cozystack/scripts
RUN apk add --no-cache make kubectl coreutils
COPY --from=builder /src/scripts /cozystack/scripts
COPY --from=builder /src/packages/core /cozystack/packages/core
COPY --from=builder /src/packages/system /cozystack/packages/system
COPY --from=builder /src/_out/repos /cozystack/assets/repos
COPY --from=builder /src/_out/logos /cozystack/assets/logos
COPY --from=builder /cozystack-assets-server /usr/bin/cozystack-assets-server
COPY --from=k8s-await-election-builder /k8s-await-election /usr/bin/k8s-await-election
COPY dashboards /cozystack/assets/dashboards
COPY --from=builder /src/dashboards /cozystack/assets/dashboards
WORKDIR /cozystack
ENTRYPOINT ["/usr/bin/k8s-await-election", "/cozystack/scripts/installer.sh" ]

View File

@@ -1,2 +1,2 @@
 cozystack:
-  image: ghcr.io/cozystack/cozystack/installer:v0.32.0-beta.1@sha256:57e449187e4f76c3346de7646b4fe11c7136e63216b05ebd7171f8fd5cb077d2
+  image: ghcr.io/cozystack/cozystack/installer:v0.32.0@sha256:981f1a073fa654f878e448ea89ef324f50d2479f27d3228449e8b66fda7c567f

@@ -1,23 +1,20 @@
 NAME=platform
 NAMESPACE=cozy-system
-API_VERSIONS_FLAGS=$(addprefix -a ,$(shell kubectl api-versions))
 show:
-	helm template -n $(NAMESPACE) $(NAME) . --dry-run=server $(API_VERSIONS_FLAGS)
+	cozypkg show -n $(NAMESPACE) $(NAME) --plain
 apply:
-	helm template -n $(NAMESPACE) $(NAME) . --dry-run=server $(API_VERSIONS_FLAGS) \
-	| kubectl apply -f-
+	cozypkg show -n $(NAMESPACE) $(NAME) --plain | kubectl apply -f-
+	kubectl delete helmreleases.helm.toolkit.fluxcd.io -l cozystack.io/marked-for-deletion=true -A
 reconcile: apply
 namespaces-show:
-	helm template -n $(NAMESPACE) $(NAME) . --dry-run=server $(API_VERSIONS_FLAGS) -s templates/namespaces.yaml
+	cozypkg show -n $(NAMESPACE) $(NAME) --plain -s templates/namespaces.yaml
 namespaces-apply:
-	helm template -n $(NAMESPACE) $(NAME) . --dry-run=server $(API_VERSIONS_FLAGS) -s templates/namespaces.yaml | kubectl apply -n $(NAMESPACE) -f-
+	cozypkg show -n $(NAMESPACE) $(NAME) --plain -s templates/namespaces.yaml | kubectl apply -f-
 diff:
-	helm template -n $(NAMESPACE) $(NAME) . --dry-run=server $(API_VERSIONS_FLAGS) | kubectl diff -f-
+	cozypkg show -n $(NAMESPACE) $(NAME) --plain | kubectl diff -f-
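
Beyond swapping `helm template` for `cozypkg show`, the `apply` target now follows the apply pass with a sweep that deletes any Flux HelmRelease labeled `cozystack.io/marked-for-deletion=true` across all namespaces. A hedged sketch of a release that sweep would pick up (names are illustrative, and Flux has shipped several `helm.toolkit.fluxcd.io` apiVersions):

    apiVersion: helm.toolkit.fluxcd.io/v2
    kind: HelmRelease
    metadata:
      name: retired-release    # illustrative
      namespace: cozy-system   # illustrative
      labels:
        cozystack.io/marked-for-deletion: "true"  # matched by the -l selector in `apply`
    # (spec omitted; only the label matters to the cleanup pass)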

@@ -179,7 +179,7 @@ releases:
   releaseName: snapshot-controller
   chart: cozy-snapshot-controller
   namespace: cozy-snapshot-controller
-  dependsOn: [cilium,cert-manager-issuers]
+  dependsOn: [cilium]
 - name: objectstorage-controller
   releaseName: objectstorage-controller
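
The first hunk drops `cert-manager-issuers` from snapshot-controller's dependencies, so its installation now waits only for cilium. Assuming each `releases` entry is rendered into a Flux HelmRelease, the resulting dependency would look roughly like this (a sketch, not the chart's literal output):

    apiVersion: helm.toolkit.fluxcd.io/v2
    kind: HelmRelease
    metadata:
      name: snapshot-controller
      namespace: cozy-snapshot-controller
    spec:
      dependsOn:
        - name: cilium   # cert-manager-issuers no longer gates this release
    # (chart and source refs omitted)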

@@ -32,6 +32,14 @@ image-e2e-sandbox:
+test: test-cluster test-apps ## Run the end-to-end tests in existing sandbox
+prepare-cluster:
+	docker cp ../../../_out/assets/cozystack-installer.yaml "${SANDBOX_NAME}":/workspace/_out/assets/cozystack-installer.yaml
+	docker cp ../../../_out/assets/nocloud-amd64.raw.xz "${SANDBOX_NAME}":/workspace/_out/assets/nocloud-amd64.raw.xz
+	docker exec "${SANDBOX_NAME}" sh -c 'cd /workspace && hack/cozytest.sh hack/e2e-prepare-cluster.bats'
+install-cozystack:
+	docker exec "${SANDBOX_NAME}" sh -c 'cd /workspace && hack/cozytest.sh hack/e2e-install-cozystack.bats'
 test-cluster: ## Run the end-to-end for creating a cluster
-	docker cp ../../../_out/assets/cozystack-installer.yaml "${SANDBOX_NAME}":/workspace/_out/assets/cozystack-installer.yaml
-	docker cp ../../../_out/assets/nocloud-amd64.raw.xz "${SANDBOX_NAME}":/workspace/_out/assets/nocloud-amd64.raw.xz
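
These targets split the old monolithic test into stages a CI workflow can call as separate jobs, so a flaky application test no longer forces re-provisioning the Talos sandbox. A hypothetical workflow fragment wiring them up (job names and the Makefile path are assumptions, not from this diff):

    # Hypothetical GitHub Actions fragment
    jobs:
      prepare-cluster:
        runs-on: [self-hosted]
        steps:
          - run: make -C packages/core/testing prepare-cluster   # path assumed
      install-cozystack:
        needs: prepare-cluster
        runs-on: [self-hosted]
        steps:
          - run: make -C packages/core/testing install-cozystack
      test-apps:
        needs: install-cozystack
        runs-on: [self-hosted]
        steps:
          - run: make -C packages/core/testing test-apps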

@@ -17,3 +17,5 @@ RUN curl -sSL "https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm
 RUN curl -sSL "https://github.com/mikefarah/yq/releases/download/v4.44.3/yq_${TARGETOS}_${TARGETARCH}" -o /usr/local/bin/yq \
 && chmod +x /usr/local/bin/yq
 RUN curl -sSL "https://fluxcd.io/install.sh" | bash
-RUN curl -sSL "https://github.com/cozystack/cozypkg/raw/refs/heads/main/hack/install.sh" | sh -s
+RUN curl -sSL "https://github.com/cozystack/cozypkg/raw/refs/heads/main/hack/install.sh" | sh -s -- -v 1.1.0

@@ -1,2 +1,2 @@
 e2e:
-  image: ghcr.io/cozystack/cozystack/e2e-sandbox:v0.32.0-beta.1@sha256:bf067b3bb24b918d9840cadae26c112ee5d16a93d5f9b2e47af758a49d432040
+  image: ghcr.io/cozystack/cozystack/e2e-sandbox:v0.32.0@sha256:454d5a01c30685ca451a6cd42bda5f4c1d80195642f9dd8ccf09369932ebb531

@@ -3,4 +3,4 @@ name: bootbox
 description: PXE hardware provisioning
 icon: /logos/bootbox.svg
 type: application
-version: 0.1.1
+version: 0.2.0

@@ -0,0 +1 @@
+../../../library/cozy-lib
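
This one-line file is a relative symlink that vendors the shared cozy-lib library chart into the bootbox chart; the same symlink is added to etcd, info, ingress, monitoring, and other charts below, which is what makes the `cozy-lib.rbac.*` helpers resolvable. Assuming the symlink lands under the chart's charts/ directory, each chart would presumably also declare the library as a local dependency, along these lines (a hypothetical Chart.yaml fragment, not shown in this diff):

    # Hypothetical Chart.yaml fragment
    dependencies:
      - name: cozy-lib
        version: "*"                            # illustrative constraint
        repository: file://./charts/cozy-lib    # resolved through the symlink above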

@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/matchbox:v0.32.0-beta.1@sha256:5a447d0533e2191b02c8f1cb8726641f291cabc8bf4d0ea90694fb6f594e0800
+ghcr.io/cozystack/cozystack/matchbox:v0.32.0@sha256:1c5173f0c368dd14e29dae95c3d576574af63c226b6f554c78d05c5f160084b5

@@ -31,5 +31,14 @@ rules:
 resourceNames:
 - bootbox-matchbox
 verbs: ["get", "list", "watch"]
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: {{ .Release.Name }}-dashboard-resources
+subjects:
+{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "super-admin" .Release.Namespace) }}
+roleRef:
+  kind: Role
+  name: {{ .Release.Name }}-dashboard-resources
+  apiGroup: rbac.authorization.k8s.io

@@ -3,4 +3,4 @@ name: etcd
 description: Storage for Kubernetes clusters
 icon: /logos/etcd.svg
 type: application
-version: 2.7.0
+version: 2.8.0

@@ -0,0 +1 @@
+../../../library/cozy-lib

@@ -17,3 +17,14 @@ rules:
 resourceNames:
 - {{ .Release.Name }}
 verbs: ["get", "list", "watch"]
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: {{ .Release.Name }}-dashboard-resources
+subjects:
+{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "super-admin" .Release.Namespace) }}
+roleRef:
+  kind: Role
+  name: {{ .Release.Name }}-dashboard-resources
+  apiGroup: rbac.authorization.k8s.io

@@ -3,4 +3,4 @@ name: info
 description: Info
 icon: /logos/info.svg
 type: application
-version: 1.0.1
+version: 1.1.0

@@ -0,0 +1 @@
+../../../library/cozy-lib

@@ -10,3 +10,14 @@ rules:
 resourceNames:
 - kubeconfig-{{ .Release.Namespace }}
 verbs: ["get", "list", "watch"]
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: {{ .Release.Name }}-dashboard-resources
+subjects:
+{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "view" .Release.Namespace) }}
+roleRef:
+  kind: Role
+  name: {{ .Release.Name }}-dashboard-resources
+  apiGroup: rbac.authorization.k8s.io

@@ -3,4 +3,4 @@ name: ingress
 description: NGINX Ingress Controller
 icon: /logos/ingress-nginx.svg
 type: application
-version: 1.6.0
+version: 1.7.0

@@ -0,0 +1 @@
+../../../library/cozy-lib

@@ -17,3 +17,14 @@ rules:
 resourceNames:
 - {{ .Release.Name }}
 verbs: ["get", "list", "watch"]
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: {{ .Release.Name }}-dashboard-resources
+subjects:
+{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "admin" .Release.Namespace) }}
+roleRef:
+  kind: Role
+  name: {{ .Release.Name }}-dashboard-resources
+  apiGroup: rbac.authorization.k8s.io

@@ -3,4 +3,4 @@ name: monitoring
 description: Monitoring and observability stack
 icon: /logos/monitoring.svg
 type: application
-version: 1.10.0
+version: 1.11.0

@@ -0,0 +1 @@
+../../../library/cozy-lib

@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/grafana:1.10.0@sha256:c63978e1ed0304e8518b31ddee56c4e8115541b997d8efbe1c0a74da57140399
+ghcr.io/cozystack/cozystack/grafana:1.11.0@sha256:c63978e1ed0304e8518b31ddee56c4e8115541b997d8efbe1c0a74da57140399

@@ -49,5 +49,14 @@ rules:
 {{- break }}
 {{- end }}
 verbs: ["get", "list", "watch"]
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: {{ .Release.Name }}-dashboard-resources
+subjects:
+{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "admin" .Release.Namespace) }}
+roleRef:
+  kind: Role
+  name: {{ .Release.Name }}-dashboard-resources
+  apiGroup: rbac.authorization.k8s.io

@@ -18,8 +18,8 @@ spec:
 {{- if and .vminsert .vminsert.minAllowed }}
 {{- toYaml .vminsert.minAllowed | nindent 10 }}
 {{- else }}
-cpu: 250m
-memory: 256Mi
+cpu: 25m
+memory: 64Mi
 {{- end }}
 maxAllowed:
 {{- if and .vminsert .vminsert.maxAllowed }}
@@ -47,8 +47,8 @@ spec:
 {{- if and .vmselect .vmselect.minAllowed }}
 {{- toYaml .vmselect.minAllowed | nindent 10 }}
 {{- else }}
-cpu: 250m
-memory: 256Mi
+cpu: 25m
+memory: 64Mi
 {{- end }}
 maxAllowed:
 {{- if and .vmselect .vmselect.maxAllowed }}
@@ -76,8 +76,8 @@ spec:
 {{- if and .vmstorage .vmstorage.minAllowed }}
 {{- toYaml .vmstorage.minAllowed | nindent 10 }}
 {{- else }}
-cpu: 100m
-memory: 512Mi
+cpu: 25m
+memory: 64Mi
 {{- end }}
 maxAllowed:
 {{- if and .vmstorage .vmstorage.maxAllowed }}
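
All three hunks lower the default `minAllowed` floors, letting the vertical autoscaler shrink idle vminsert, vmselect, and vmstorage pods to 25m CPU and 64Mi of memory (down from 250m/256Mi for the first two and 100m/512Mi for vmstorage); explicit `minAllowed`/`maxAllowed` values in the release still take precedence. Assuming these templates render VerticalPodAutoscaler objects, the new defaults land roughly like this (object and container names are illustrative):

    apiVersion: autoscaling.k8s.io/v1
    kind: VerticalPodAutoscaler
    metadata:
      name: vmstorage          # illustrative
    spec:
      targetRef:
        apiVersion: apps/v1
        kind: StatefulSet
        name: vmstorage        # illustrative
      resourcePolicy:
        containerPolicies:
          - containerName: vmstorage   # illustrative
            minAllowed:
              cpu: 25m         # new default floor
              memory: 64Mi     # new default floor
            # maxAllowed still comes from .vmstorage.maxAllowed when set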

@@ -16,7 +16,7 @@ type: application
 # This is the chart version. This version number should be incremented each time you make changes
 # to the chart and its templates, including the app version.
 # Versions are expected to follow Semantic Versioning (https://semver.org/)
-version: 0.4.0
+version: 0.5.0
 # This is the version number of the application being deployed. This version number should be
 # incremented each time you make changes to the application. Versions are not expected to

@@ -0,0 +1 @@
+../../../library/cozy-lib

@@ -27,3 +27,14 @@ rules:
 - {{ $.Release.Name }}-volume
 - {{ $.Release.Name }}-db
 verbs: ["get", "list", "watch"]
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: {{ .Release.Name }}-dashboard-resources
+subjects:
+{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "admin" .Release.Namespace) }}
+roleRef:
+  kind: Role
+  name: {{ .Release.Name }}-dashboard-resources
+  apiGroup: rbac.authorization.k8s.io

Some files were not shown because too many files have changed in this diff.