Compare commits

...

162 Commits

Author SHA1 Message Date
Ahmad Murzahmatov
bbffc8a677 ingress webhook
Testing the ingress-nginx built-in webhook instead of the cert-manager one.
2025-05-21 22:23:34 +06:00
Timofei Larkin
775ecb7b11 (air gapped): disable fluxcd artifact by default (#964)
Default `artifact` field for FluxCD operator to empty string, so it uses embedded manifests and is compatible with air-gapped environments.
2025-05-20 15:46:44 +03:00
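A hedged sketch of what that default looks like on a FluxInstance resource; the resource shape is the upstream flux-operator API, the exact values path used by the Cozystack package is assumed, and only the empty-string behaviour comes from the commit above.

```
# Sketch only: placement of `artifact` assumed from the flux-operator FluxInstance API.
apiVersion: fluxcd.controlplane.io/v1
kind: FluxInstance
metadata:
  name: flux
  namespace: flux-system
spec:
  distribution:
    version: "2.x"
    registry: "ghcr.io/fluxcd"
    artifact: ""   # empty string: fall back to embedded manifests, no external pulls
```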
Timofei Larkin
8d74a35e9c [e2e] Return genisoimage to the e2e-test Dockerfile (#962)
Commit 7a7512da30 removed the genisoimage package installation from the
Dockerfile, which led to test failures because genisoimage was missing and
the runner was unable to create the image.
[Issue
reference](https://github.com/cozystack/cozystack/actions/runs/15084476654/job/42406141954).
Restored the genisoimage package installation in the Dockerfile.
2025-05-20 15:21:20 +03:00
kklinch0
b753fd9fa8 (air gapped): disable fluxcd artifact by default
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-05-20 13:10:33 +03:00
Ahmad Murzahmatov
9698f3c9f4 Commit 7a7512da30 removed the genisoimage package installation from the Dockerfile, which led to test failures because genisoimage was missing and the runner was unable to create the image. Issue reference: https://github.com/cozystack/cozystack/actions/runs/15084476654/job/42406141954. Restored the genisoimage package installation in the Dockerfile.
Signed-off-by: Ahmad Murzahmatov <gwynbleidd2106@yandex.com>
2025-05-20 03:54:03 +06:00
Andrei Kvapil
31b110cd39 Revert "[ingress] avoid invalid externalIPs when config value is empty" (#959)
Reverts cozystack/cozystack#957. This was already fixed by
https://github.com/cozystack/cozystack/pull/952
2025-05-17 13:32:12 +02:00
Andrei Kvapil
b4da00f96f Revert "[ingress] avoid invalid externalIPs when config value is empty" 2025-05-17 13:28:31 +02:00
Andrei Kvapil
0369852035 [ingress] avoid invalid externalIPs when config value is empty (#957)
Fix regression introduced by
https://github.com/cozystack/cozystack/pull/929#discussion_r2090992853

because `splitList "," "" == [""]`

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-05-17 12:35:34 +02:00
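To illustrate the gotcha: in Sprig, splitting an empty string still yields a one-element list (`[""]`), so the template has to guard on the raw string before ranging. A minimal sketch, with the value name assumed:

```
{{- /* `splitList "," ""` returns [""], not an empty list, so an empty config
       value would otherwise render externalIPs: [""]. Value name is illustrative. */}}
{{- $raw := .Values.externalIPs | default "" }}
{{- if $raw }}
externalIPs:
  {{- range splitList "," $raw }}
  - {{ . | quote }}
  {{- end }}
{{- end }}
```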
Andrei Kvapil
115497b73f [ingress] avoid invalid externalIPs when config value is empty
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-05-17 12:31:57 +02:00
Andrei Kvapil
4f78b133c2 [build] Cross-arch builds: components (#932)
Components with existing dockerfiles will be updated in this PR.

Part of #519 

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Added support for multi-architecture and cross-platform Docker image
builds across various components, enabling builds for different
operating systems and CPU architectures.

- **Chores**
- Updated Docker build commands in multiple Makefiles to use
configurable builder and platform variables, improving build
flexibility.
- Standardized Dockerfile build arguments and environment variables for
cross-compilation.
- Improved package installation commands for quieter and more minimal
installs in Dockerfiles.
- Changed the default bucket name configuration to "cozystack" in system
bucket settings.
- Updated some maintenance targets and manual update reminders in
Makefiles.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-05-17 12:16:59 +02:00
Andrei Kvapil
d550a67f19 Merge branch 'main' into 519-cross-arch-components 2025-05-17 12:16:49 +02:00
Andrei Kvapil
8e6941dfbd [cluster-api] Update capi-providers (#947)
v1.10.1 fixes "Bootstrap: Make
joinConfiguration.discovery.bootstrapToken.token optional"
(https://github.com/kubernetes-sigs/cluster-api/pull/12136)

ref https://github.com/cozystack/cozystack/issues/939 and
https://github.com/clastix/cluster-api-control-plane-provider-kamaji/issues/212

fixes https://github.com/cozystack/cozystack/issues/939

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Chores**
- Updated the versions of cluster-api CoreProvider and kubeadm
BootstrapProvider from v1.10.0 to v1.10.1.
- Updated the version of kamaji ControlPlaneProvider from v0.14.2 to
v0.15.1.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-05-17 12:15:10 +02:00
Andrei Kvapil
c54567ab45 Revert "Downgrade CAPI operator" (#946)
Reverts cozystack/cozystack#942

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Added support for specifying manifest patches and additional manifests
for all provider types, enabling more flexible customization.
- Introduced an optional property to pass additional arguments to
provider controller managers.
  - Added a JSON schema for validating chart values.

- **Enhancements**
- Provider configuration now uses structured maps instead of strings,
simplifying customization and reducing errors.
- Improved validation and descriptions for condition fields in resource
schemas.

- **Updates**
  - Upgraded Cluster API Operator chart and app versions to 0.19.0.
  - Updated default image tag for the manager container to v0.19.0.

- **Documentation**
  - Added example configurations in the values file for easier setup.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-05-17 12:14:46 +02:00
Andrei Kvapil
dd592ca676 [kubernetes] Update Kubernetes v1.32.4 (#949)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
  - Updated the application version in the Kubernetes chart to 1.32.4.
- Made version fields in Kubernetes cluster templates dynamically
reference the chart's application version, ensuring consistency during
deployments.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-05-17 12:14:08 +02:00
Andrei Kvapil
5273722769 Update Kamaji to edge-25.4.1 (#953)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Added new validation rules to enforce stricter configuration
requirements for datastore drivers and authentication fields.
- Introduced a new field to specify stop signals for containers and a
new status field to track terminating pods.
  - Added a new "Sleeping" status for version reporting.

- **Improvements**
- Updated and clarified field descriptions for environment variable
sources, volume types, and deployment status.
  - Removed outdated beta feature gate notes from documentation.

- **Bug Fixes**
- Improved handling and validation of sensitive configuration fields
based on driver type.

- **Chores**
  - Updated Go base image and Kamaji version in the Dockerfile.
  - Changed Kamaji image tag to use the latest version.

- **Refactor**
- Moved imagePullSecrets configuration from the deployment to the
ServiceAccount manifest for better management.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-05-17 12:13:57 +02:00
Andrei Kvapil
fb26e3e9b7 [kubernetes] fix regression: return port specification (#956)
This PR fixes regression from
https://github.com/cozystack/cozystack/pull/867

We have updated Kamaji, removed workaround, but didn't return the port
specification

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Refactor**
- Updated network configuration to explicitly include port 443 in
hostnames for ingress.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-05-17 12:13:44 +02:00
Andrei Kvapil
5e0b0167fc [kubernetes] fix regression: return port specification
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-05-16 16:25:39 +02:00
Timofei Larkin
73fdc5ded7 Build patched MetalLB (#945)
Since it's taking a while for metallb/metallb#2726 to get released, the
binaries with the fix are recompiled in-tree. Workaround for #909.
2025-05-16 15:15:32 +03:00
Timofei Larkin
5fe7b3bf16 Build patched MetalLB
Since it's taking a while for metallb/metallb#2726 to get released, the
binaries with the fix are recompiled in-tree. Workaround for #909.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-05-16 14:57:58 +03:00
Andrei Kvapil
4ecf492cd4 Update Kamaji to edge-25.4.1
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-05-16 13:47:04 +02:00
Timofei Larkin
c42a50229f Hotfix: error in template (#952)
Resolves regressions introduced in #928 and #929
2025-05-16 14:42:11 +03:00
Timofei Larkin
6f55a66328 Hotfix: error in template
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-05-16 14:21:08 +03:00
Andrei Kvapil
9d551cc69b [kubernetes] Update Kubernetes v1.32.4
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-05-15 16:49:40 +02:00
Andrei Kvapil
93b8dbb9ab [cluster-api] Update capi-providers
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-05-15 16:43:56 +02:00
Andrei Kvapil
8ad010d331 Revert "Downgrade CAPI operator"
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-05-15 14:54:30 +02:00
Andrei Kvapil
404579c361 [platform] refactor dashboard values (#928)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-05-15 14:16:38 +02:00
Andrei Kvapil
f8210cf276 [platform] Introduce expose-services, expose-ingress and expose-external-ips options (#929)
docs update: https://github.com/cozystack/website/pull/197

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Added automated migration script to transition configuration from
HelmRelease to ConfigMap for service exposure and external IPs.
- Introduced new ingress templates for API, CDI upload proxy, and VM
export proxy services, enabling dynamic exposure based on centralized
configuration.

- **Bug Fixes**
  - Updated NGINX Ingress Controller Helm chart version to 1.6.0.

- **Refactor**
- Centralized ingress configuration using a ConfigMap, simplifying and
unifying service exposure and ingress class management.
- Removed legacy parameters and templates for dashboard, CDI upload
proxy, and VM export proxy from values and schema files.
- Simplified ingress templates for dashboard and Keycloak to rely on
centralized ConfigMap data and exposure lists.
- Adjusted ingress controller service to conditionally use external IPs
based on centralized configuration.

- **Documentation**
- Updated documentation to reflect the removal of deprecated parameters
and clarify current configuration options.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-05-15 14:15:56 +02:00
Andrei Kvapil
545e256695 [platform] refactor dashboard values
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-05-15 14:13:57 +02:00
Andrei Kvapil
e9c463c867 [platform] Add migration for expose-* options
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-05-15 13:45:19 +02:00
Andrei Kvapil
798ca12e43 [platform] Introduce expose-external-ips option
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-05-15 13:45:15 +02:00
Andrei Kvapil
3780925a68 [platform] Introduce expose-services and expose-ingress options
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-05-15 12:35:02 +02:00
Andrei Kvapil
a240c0b6ed [talos] Update Talos Linux v1.10.1 (#931)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Updated installer and system component versions to v1.10.1 across all
profiles.
- Refreshed system extension images to newer releases, including updated
versions for drbd and zfs.
- Applied recent date-based updates to firmware and extension images for
improved support and compatibility.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-05-15 12:33:02 +02:00
Andrei Kvapil
de1b38c64b Update Flux Operator to 0.20.0 (#934)
Now includes a Flux MCP server

(docs: https://fluxcd.control-plane.io/mcp/ - NB: it is not running in
the cluster by default, and I haven't tried it yet)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Updated Helm chart and app version numbers for Flux Operator and Flux
Instance to 0.20.0.
- **Documentation**
- Updated version badges in the README files to reflect the new 0.20.0
release.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-05-15 12:31:53 +02:00
nbykov0
15d7b6d99e extra/monitoring: multiarch support
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-14 18:30:15 +03:00
Timofei Larkin
9377f55000 Downgrade CAPI operator (#942)
Resolves #940.
2025-05-14 17:37:54 +03:00
Timofei Larkin
d002879b0b Downgrade CAPI operator
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-05-14 15:16:14 +03:00
Andrei Kvapil
2c6338a2ef Don't overcommit memory (#913)
This patch recreates the resource presets with a non-burstable memory
allocation (request==limit) and without CPU limits. With the new presets
the difference between the larger presets became meaningless, so their
values were adjusted.

Resolves #912 

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Chores**
- Updated resource presets across all application charts to remove CPU
limits, align memory limits with requests, and standardize memory units
for consistency.
- Adjusted CPU and memory request values for larger presets in several
applications.
  - Updated chart versions for all affected applications.
  - Refreshed version mappings to reflect latest commit hashes.
- Added explicit resource configuration for Redis in the dashboard
configuration.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-05-13 17:19:27 +02:00
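For illustration, a sketch of what a non-burstable preset looks like after this change; the actual preset names and numbers live in the individual application charts.

```
# Illustrative preset only; real values differ per chart.
resources:
  requests:
    cpu: "1"        # CPU request kept, CPU limit dropped (no throttling)
    memory: 1Gi
  limits:
    memory: 1Gi     # equal to the request, so memory is never overcommitted
```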
Kingdon B
fd72d7c486 Flux Operator 0.20.0
Signed-off-by: Kingdon B <kingdon@urmanac.com>
2025-05-12 10:15:58 -04:00
Timofei Larkin
db34f31175 Don't overcommit memory or throttle CPU
This patch recreates the resource presets with a non-burstable memory
allocation (request==limit) and without CPU limits. With the new presets
the difference between the larger presets became meaningless, so their
values were adjusted.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-05-12 15:59:28 +03:00
Timofei Larkin
653e2bc774 [519] Cross-arch builds: builders variables (#907)
Added PLATFORM variable to `common-envs.mk`: if not defined, it is
calculated based on docker daemon arch.
May be overridden by e.g. `make -e PLATFORM='linux/arm64' ...`
Added the variable to a single Dockerfile for now.
2025-05-12 13:08:00 +04:00
nbykov0
31ea5eeeb2 system/kubeovn-webhook: multiarch support
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-12 03:06:57 +03:00
nbykov0
4a2c67e045 apps/kubernetes: multiarch support
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-12 02:50:11 +03:00
nbykov0
68fb7570f7 apps/postgres: multiarch support
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-12 02:50:11 +03:00
nbykov0
56fc08fab4 apps/mysql: multiarch support
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-12 02:50:11 +03:00
nbykov0
b00ba53171 apps/clickhouse: multiarch support
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-12 02:50:11 +03:00
nbykov0
4dd52290ea apps/mysql: multiarch support
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-12 02:50:11 +03:00
nbykov0
492aff5265 apps/clickhouse: multiarch support
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-12 02:50:11 +03:00
nbykov0
395cdc3af1 apps/http-cache: multiarch support
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-12 02:50:11 +03:00
nbykov0
e6f3000b3c apps/postgres: multiarch support
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-12 02:50:11 +03:00
nbykov0
e21c38c103 extra/monitoring: multiarch support
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-12 02:50:11 +03:00
nbykov0
7a7512da30 core/testing: multiarch support
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-12 02:50:11 +03:00
nbykov0
58b5f6610d system/cozystack-controller: multiarch support
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-12 02:50:11 +03:00
nbykov0
e81053f7dd system/dashboard: multiarch support
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-12 02:50:11 +03:00
nbykov0
424aab4a83 system/kubeovn: multiarch support
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-12 02:50:11 +03:00
nbykov0
77e6db3381 system/kamaji: multiarch support
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-12 02:50:11 +03:00
nbykov0
f6e3188ab8 system/cozystack-api: multiarch support
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-12 02:50:11 +03:00
nbykov0
1ca0594060 system/cilium: multiarch support
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-12 02:50:11 +03:00
nbykov0
ac59b4540b system/bucket: add meaningful default to values.yaml
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-12 02:50:11 +03:00
nbykov0
d0bd4b1329 system/bucket: multiarch support
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-12 02:50:11 +03:00
nbykov0
ccbcaf6331 system/cozystack-controller: add multiarch options
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-12 02:50:11 +03:00
Andrei Kvapil
1ad1b15a5b [talos] Update Talos Linux v1.10.1
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-05-09 14:56:27 +02:00
Ubuntu
2349ff61c1 scripts/common-envs.mk: add PLATFORM calculation with json parsing
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-08 19:04:48 +03:00
Ubuntu
13139dd71d Revert "Makefile: add buildx version requirement"
This reverts commit 8d367533550236fc587bd5f236046c15f6b7609a.
The check it introduced is not needed.

Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-08 19:04:48 +03:00
nbykov0
57ac614865 Makefile: add buildx version requirement
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-08 19:04:48 +03:00
nbykov0
bbb93c647d scripts/common-envs.mk: commit suggestions after a review
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-08 19:04:48 +03:00
nbykov0
951ba75d93 scripts/common-envs.mk: add --bootstrap flag to inspects
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-08 19:04:48 +03:00
nbykov0
15c9c4a068 system/cozystack-controller: add PLATFORM and BUILDER variables to Makefile
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-08 19:04:48 +03:00
nbykov0
57fefde732 scripts/common-envs.mk: add BUILDER and PLATFORM calculation
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-08 19:04:48 +03:00
nbykov0
b4a04df6f3 system/cozystack-controller: add PLATFORM variable to Makefile: syntax
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-08 19:04:48 +03:00
nbykov0
1e63b5e8ce system/cozystack-controller: add PLATFORM variable to Makefile
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-08 19:04:48 +03:00
nbykov0
6ad30915eb Add PLATFORM make variable; calculate it if undefined
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2025-05-08 19:04:48 +03:00
Timofei Larkin
557ffa536f Update kube-ovn to latest version (#922)
This commit bumps kube-ovn to 1.13.11 and does away with patching the
code now that the fixes necessary for kube-ovn to work properly in Talos
have been released in the upstream.
2025-05-08 16:33:17 +04:00
Andrei Kvapil
ae05d2f545 [kubernetes] Enable Cilium Gateway API #923 (#924)
Implementation of Cilium Gateway API

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Added optional Gateway API addon for Kubernetes clusters, controlled
by a new configuration flag.
- Introduced automated deployment of Gateway API CRDs when the addon is
enabled.
- **Documentation**
- Updated documentation to describe the new Gateway API addon and its
configuration.
- **Chores**
- Added chart metadata and automation files for managing Gateway API
CRDs.
  - Updated chart version to reflect new features.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-05-08 12:21:17 +02:00
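A sketch of how such an addon toggle could look in the kubernetes app values; the key name follows the `gatewayAPI` option mentioned in the commit below, but the exact nesting is an assumption.

```
# Assumed values excerpt; only the existence of a gatewayAPI addon flag is
# taken from the commits, the nesting may differ in the chart.
addons:
  gatewayAPI:
    enabled: true   # deploy Gateway API CRDs and enable Cilium's implementation
```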
Andrei Kvapil
563c643813 [kubernetes] refactor gatewayAPI option
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-05-08 12:19:07 +02:00
Zdenek Deu Janda
68c85ac9ef [kubernetes] Enable Cilium Gateway API
Signed-off-by: Zdenek Deu Janda <zdenek.janda@cloudevelops.com>
2025-05-08 12:18:40 +02:00
Timofei Larkin
3ac00ea4ec Update kube-ovn to latest version
This commit bumps kube-ovn to 1.13.11 and does away with patching the
code now that the fixes necessary for kube-ovn to work properly in Talos
have been released in the upstream.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-05-07 14:35:00 +03:00
klinch0
29b49496f2 [platform] delete extra dependencies for piraeus operator (#856)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Chores**
- Updated dependency configuration so that piraeus-operator no longer
depends on victoria-metrics-operator.
- **Refactor**
- Improved compatibility by ensuring certain resources (VMPodScrape and
alert definitions) are only rendered if the required API versions are
available in the Kubernetes cluster.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-05-07 12:30:31 +03:00
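The "only rendered if the required API versions are available" behaviour is typically a Helm capabilities check; a minimal sketch, with the VictoriaMetrics API group assumed:

```
{{- /* Render monitoring resources only when the CRD's API group is served.
       The group/version and the object below are illustrative. */}}
{{- if .Capabilities.APIVersions.Has "operator.victoriametrics.com/v1beta1" }}
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMPodScrape
metadata:
  name: piraeus
spec:
  selector: {}
  podMetricsEndpoints:
    - port: metrics
{{- end }}
```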
kklinch0
3c27192d3e [platform] delete extra dependencies for piraeus operator
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-05-05 16:56:12 +03:00
klinch0
dca732cde0 [platform] add hr reconciler (#870)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced a new controller to synchronize tenant HelmReleases and
propagate configuration changes.
- Added dynamic host value overrides in multiple Helm templates by
conditionally retrieving values from the "tenant-root" HelmRelease.
- Updated RBAC permissions to allow management of HelmRelease resources.

- **Improvements**
  - Added support for Helm v2 API integration.
- Enhanced HelmRelease reconciliation logic and configuration
propagation for tenant environments.

- **Bug Fixes**
- Fixed periodic reconciliation for the "tenant-root" HelmRelease by
setting its interval to zero.

- **Version Updates**
  - Incremented version numbers for the "info" and "ingress" packages.

- **Chores**
  - Updated version mappings and commit references.
  - Improved .gitignore to exclude the .vscode directory.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-05-05 16:41:34 +03:00
Timofei Larkin
0346dc05bb Enable user-added params in tenant cluster Cilium (#917)
Users requested the possibility of passing custom values to the Cilium
HelmRelease in tenant k8s clusters to enable its latest features, such
as support for the Gateway API. This customization is now available via
the `valuesOverride` field under `addons.cilium` in the kubernetes app's
values.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Added support for custom override values for the Cilium addon,
allowing users to configure Cilium settings via the values file.
- **Chores**
  - Updated the Kubernetes chart version to 0.20.0.
  - Updated version mappings to reflect the new chart version.
- **Documentation**
- Updated Kubernetes managed service docs to include configuration
details for Cilium addon overrides.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-05-05 16:55:17 +04:00
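A short sketch of the new field; the `addons.cilium.valuesOverride` nesting is stated in the commit, while the override content (enabling Cilium's Gateway API support) is just one example of the "latest features" mentioned above.

```
# kubernetes app values excerpt; override keys are ordinary Cilium chart values.
addons:
  cilium:
    valuesOverride:
      gatewayAPI:
        enabled: true
```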
Timofei Larkin
a03cdeff04 Enable user-added params in tenant cluster Cilium
Users requested the possibility of passing custom values to the Cilium
HelmRelease in tenant k8s clusters to enable its latest features, such
as support for the Gateway API. This customization is now available via
the `valuesOverride` field under `addons.cilium` in the kubernetes app's
values.

Additionally add dummy schema for S3 bucket, as it breaks the pre-commit
checks.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-05-05 15:37:34 +03:00
Nick Volynkin
062d72805a [docs] Update release policy: Release Candidate versions (#897)
**Documentation**
- Expanded the release documentation with a new section explaining
Cozystack's staged release process, including details on Release
Candidates, Regular Releases, and Patch Releases.
- Clarified the workflow and purpose of Release Candidates and updated
the explanation of how regular releases are created.

Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-05-05 16:15:57 +07:00
Nick Volynkin
70fed8148d [ci] Run pre-commit checks even on doc changes
Pre-commit is now required to merge PRs, so let it run even on documentation updates.
An alternative is to merge with administrator permissions, bypassing rules,
which is not a good practice.

Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-05-05 11:57:02 +03:00
Nick Volynkin
12c6df83f5 [docs] Update release policy: Release Candidate versions
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-05-05 11:52:06 +03:00
kklinch0
f61a7817e6 [platform] add hr reconciler
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-05-05 09:26:50 +03:00
Timofei Larkin
c482289b14 Make kubevirt's CPU allocation ratio configurable (#905)
Kubevirt's default cpu-to-vcpu ratio is 1:10, which might be a bit
extreme for some users. This patch introduces a new key in the Cozystack
configmap, "cpu-allocation-ratio" where admins of Cozystack can specify
an alternative value, if needed.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Added support for optionally configuring a CPU allocation ratio for
KubeVirt deployments when the relevant setting is provided.
- **Chores**
- Improved configuration flexibility for KubeVirt by allowing dynamic
injection of CPU allocation settings.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-30 10:53:42 +04:00
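A hedged sketch of the new key; the ConfigMap name and namespace are assumed, only `cpu-allocation-ratio` itself comes from the commit.

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: cozystack          # assumed name
  namespace: cozy-system   # assumed namespace
data:
  cpu-allocation-ratio: "4"   # overrides KubeVirt's default 1:10 vCPU overcommit
```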
Timofei Larkin
1e59e5fbb6 Fix virtual machine resource tracking (#904)
* Count Workload resources for pods by requests, not limits
* Do not count init container requests
* Prefix Workloads for pods with `pod-`, just like the other types to
prevent possible name collisions (closes #787)

The previous version of the WorkloadMonitor controller incorrectly
summed resource limits on pods, rather than requests. This prevented it
from tracking the resource allocation for pods, which only had requests
specified, which is particularly the case for kubevirt's virtual machine
pods. Additionally, it counted the limits for all containers, including
init containers, which are short-lived and do not contribute much to the
total resource usage.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Bug Fixes**
- Improved handling of workloads with unrecognized prefixes by ensuring
they are properly deleted and not processed further.
- Corrected resource aggregation for Pods to sum container resource
requests instead of limits, and now only includes normal containers.

- **New Features**
	- Added support for monitoring workloads with names prefixed by "pod-".

- **Tests**
- Introduced unit tests to verify correct handling of workload name
prefixes and monitored object creation.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-30 10:52:17 +04:00
Timofei Larkin
6106a9fe51 Make kubevirt's CPU allocation ratio configurable
Kubevirt's default cpu-to-vcpu ratio is 1:10, which might be a bit
extreme for some users. This patch introduces a new key in the Cozystack
configmap, "cpu-allocation-ratio" where admins of Cozystack can specify
an alternative value, if needed.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-04-29 16:13:18 +03:00
Timofei Larkin
ec9e26c054 Fix virtual machine resource tracking
* Count Workload resources for pods by requests, not limits
* Do not count init container requests
* Prefix Workloads for pods with `pod-`, just like the other types to
  prevent possible name collisions (closes #787)

The previous version of the WorkloadMonitor controller incorrectly
summed resource limits on pods, rather than requests. This prevented it
from tracking the resource allocation for pods, which only had requests
specified, which is particularly the case for kubevirt's virtual machine
pods. Additionally, it counted the limits for all containers, including
init containers, which are short-lived and do not contribute much to the
total resource usage.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-04-29 15:22:46 +03:00
Andrei Kvapil
108fc647ea [ci] Use dots in release candidate versions, as per SemVer (#901)
This change also fixes `finalizing release` workflow
https://github.com/cozystack/cozystack/pull/890#issuecomment-2830525103

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Updated release tag validation to require a dot between "rc" and the
number (e.g., `v0.31.5-rc.1` instead of `v0.31.5-rc1`).
  - Adjusted error messages to reflect the new release tag format.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-25 17:01:13 +02:00
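A sketch of the kind of tag check the new format implies; the step and regex below are illustrative, not the repository's actual workflow.

```
# Hypothetical validation step; accepts vX.Y.Z and vX.Y.Z-rc.N (note the dot).
- name: Validate release tag
  run: |
    if [[ ! "${GITHUB_REF_NAME}" =~ ^v[0-9]+\.[0-9]+\.[0-9]+(-rc\.[0-9]+)?$ ]]; then
      echo "Tag must look like vX.Y.Z or vX.Y.Z-rc.N"
      exit 1
    fi
```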
Andrei Kvapil
a9b235048d [ci] Use dots in release candidate versions, as per SemVer
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-25 16:54:41 +02:00
Andrei Kvapil
e1c14619d2 Revert "[ci] automatically trigger tests in releasing PR" (#900)
Revert https://github.com/cozystack/cozystack/pull/894 because this logic
does not trigger checks in pull requests.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Removed support for manually triggering the pull request release
workflow.
- Simplified release workflow to run automatically only on labeled pull
requests.
- Eliminated the step in the tags workflow that triggered release
verification via manual dispatch.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-25 16:50:06 +02:00
Andrei Kvapil
f644bf20c5 Revert "[ci] automatically trigger tests in releasing PR"
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-25 16:47:45 +02:00
Andrei Kvapil
93bdf41144 Release v0.31.0-rc.1 (#895)
This PR prepares the release `v0.31.0-rc.1`.
2025-04-25 14:56:48 +02:00
Andrei Kvapil
bacf15f037 [e2e] Fix device_ownership_from_security_context CRI (#896)
Currently, you can't create a VMDisk or VMInstance. The importer pod is in
Error state with the following logs:

`kubectl -n tenant-root logs
importer-prime-84b44042-c0ac-4e52-8fbd-a0313f4701a6`

```
I0422 07:37:02.928787       1 importer.go:107] Starting importer
E0422 07:37:02.929473       1 importer.go:137] exit status 1, blockdev: cannot open /dev/cdi-block-volume: Permission denied

kubevirt.io/containerized-data-importer/pkg/util.GetAvailableSpaceBlock
        pkg/util/file.go:135
kubevirt.io/containerized-data-importer/pkg/util.GetAvailableSpaceByVolumeMode
        pkg/util/util.go:99
main.main
        cmd/cdi-importer/importer.go:135
runtime.main
        GOROOT/src/runtime/proc.go:271
runtime.goexit
        src/runtime/asm_amd64.s:1695
```

This change resolves the issue with the importer pod.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Refactor**
  - Improved formatting of script commands for better readability.
  - Updated container runtime configuration for enhanced customization.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-25 14:54:56 +02:00
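Cozystack nodes run Talos Linux, so a CRI setting like this usually lands as a containerd config fragment in the machine config; a hedged sketch, with the file path and patch shape assumed and only the setting itself taken from the commit.

```
# Assumed Talos machine-config patch; path and structure are illustrative.
machine:
  files:
    - op: create
      path: /etc/cri/conf.d/20-customization.part
      content: |
        [plugins."io.containerd.grpc.v1.cri"]
          device_ownership_from_security_context = true
```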
dtrdnk
9239852ec8 Update permissions version for CRI containerd
Signed-off-by: dtrdnk <4demenko@gmail.com>
2025-04-25 15:45:07 +03:00
github-actions
87a286fc74 Prepare release v0.31.0-rc.1
Signed-off-by: github-actions <github-actions@github.com>
2025-04-25 12:37:42 +00:00
Andrei Kvapil
6d253b937b [ci] fix triggering releasing pr tests (#898)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-25 14:33:57 +02:00
Andrei Kvapil
255176c321 [ci] fix triggering releasing pr tests
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-25 14:33:29 +02:00
Andrei Kvapil
fa341deaac [ci] automatically trigger tests in releasing PR (#894)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Added the ability to manually trigger the release verification
workflow with a specific commit SHA.
- The release verification workflow now supports both pull request
events and manual triggers.
- **Chores**
- Automated triggering of release verification tests from the tags
workflow when a new release is detected.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-25 14:00:26 +02:00
Nick Volynkin
f08566d3f1 [ci] Use dots in release candidate versions, as per SemVer (#890)
Before: 0.31.0-rc1
After:  0.31.0-rc.1

Why this matters: we want to do things the right way from the start.
Version patten affects how versions are parsed and sorted.
For example, we have release candidates number 9 and 10:

* In 'rc.9' and 'rc.10', the numeric parts are compared as numbers,
  so 9 comes before 10.
* In 'rc9' and 'rc10', versions are compared lexicographically,
  so 10 comes before 9, which is wrong.

Reference: SemVer items 9–11. https://semver.org/#spec-item-9
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-04-25 14:13:57 +03:00
Andrei Kvapil
a29040faf7 [ci] automatically trigger tests in releasing PR
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-25 12:59:46 +02:00
Nick Volynkin
637551eb33 [ci] Use dots in release candidate versions, as per SemVer
Before: 0.31.0-rc1
After:  0.31.0-rc.1

Why this matters: we want to do things the right way from the start.
Version pattern affects how versions are parsed and sorted.
For example, we have release candidates number 9 and 10:

* In 'rc.9' and 'rc.10', the numeric parts are compared as numbers,
  so 9 comes before 10.
* In 'rc9' and 'rc10', versions are compared lexicographically,
  so 10 comes before 9, which is wrong.

Reference: SemVer items 9–11. https://semver.org/#spec-item-9
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-04-25 13:57:03 +03:00
Andrei Kvapil
58d959b305 [tests] refactor tests and remove e2e.applications (#893)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-25 12:55:09 +02:00
Andrei Kvapil
fcc7056e5c [platform] Fix installing release candidate versions (#891)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Chores**
- Updated version constraints for multiple HelmRelease resources to use
an explicit semantic version range (>= 0.0.0-0) instead of a wildcard or
unspecified value, clarifying eligible chart versions for deployment.
- Renamed and updated version variable in build scripts to improve
version tagging and packaging consistency.
- Enhanced deployment verification by adding readiness checks for
HelmReleases, with failure detection and reporting for non-ready
releases.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-25 12:42:20 +02:00
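A minimal illustration of the constraint change described above; the resource and chart names are placeholders.

```
# HelmRelease chart spec sketch; ">= 0.0.0-0" also matches pre-release (rc)
# chart versions, which a plain wildcard constraint would skip.
spec:
  chart:
    spec:
      chart: example-app        # placeholder
      version: ">= 0.0.0-0"
      sourceRef:
        kind: HelmRepository
        name: cozystack-apps    # placeholder
```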
Andrei Kvapil
5d7e56bffe [tests] refactor tests and remove e2e.applications
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-25 12:28:43 +02:00
Andrei Kvapil
69b3ddf717 [e2e] Better output in case of failed HelmReleases
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-25 12:21:49 +02:00
Andrei Kvapil
79b5c6b5af [platform] Use devel versions notation for HelmCharts
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-25 12:13:40 +02:00
Andrei Kvapil
076128c783 [platform] Fix installing release candidate versions
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-25 12:07:30 +02:00
Andrei Kvapil
894cb14d49 [kubernetes] Fix ubuntu-container-disk tag (#887)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-24 16:40:26 +02:00
Andrei Kvapil
a0935e9ae4 [kubernetes] Fix ubuntu-container-disk tag
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-24 16:38:42 +02:00
Andrei Kvapil
f2c248acbd [ci] Create long‑lived maintenance branch after release published (#886)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Updated release workflows to ensure maintenance branches are created
during release finalization instead of during tag creation.
- Removed maintenance branch creation from the tag workflow and added it
to the release finalization process.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-24 16:23:01 +02:00
Andrei Kvapil
590f14a614 [ci] Create long‑lived maintenance branch after release published
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-24 16:22:22 +02:00
Andrei Kvapil
4c8dba880a [ci] fix release branch creation (#884)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-24 15:57:23 +02:00
Andrei Kvapil
de0c7b94f4 [ci] fix release branch creation
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-24 15:55:46 +02:00
Andrei Kvapil
2682a6e674 [kube-ovn] fix versions mapping in Makefile (#883)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-24 15:37:10 +02:00
Andrei Kvapil
e3e0b21612 [kube-ovn] fix versions mapping in Makefile
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-24 15:36:25 +02:00
Andrei Kvapil
455d66fbe4 [ci] Do not run tests in release building pipeline (#882)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Removed the "Test" step from the release workflow, so tests will no
longer run as part of this process.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-24 15:27:27 +02:00
Andrei Kvapil
7db7277636 [ci] Do not run tests in release building pipeline
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-24 15:00:06 +02:00
Andrei Kvapil
7be5db8cff [fluxcd] update to flux-operator 0.19.0 (#880)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Introduced configurable API priority and fairness settings for the
Flux Operator, allowing prioritization of API requests and inclusion of
extra service accounts.
- Added support for a new `skip` field in the `ResourceSetInputProvider`
CRD to control update skipping based on label conditions.

- **Bug Fixes**
- Updated service account reference in admin ClusterRoleBinding to use
the dedicated service account name for improved accuracy.

- **Documentation**
- Updated Helm chart and app version numbers to 0.19.0 in documentation
and metadata.
- Added documentation for the new `apiPriority` configuration option in
the Flux Operator Helm chart.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-24 14:45:43 +02:00
Andrei Kvapil
249950d94b [kubernetes] Update tenant Kubernetes to v1.32 (#871)
This PR also updates ubuntu-container-disk image to latest 24.04 LTS
(Noble Numbat)

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Chores**
- Updated Kubernetes version references from v1.30.1 to v1.32 in build
and deployment configurations.
	- Changed the base image for Ubuntu container disk to Ubuntu 24.04.
	- Made the Kubernetes version configurable during build processes.
- Updated the kubectl container image in pre-delete jobs to use the
latest tag.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-24 14:43:59 +02:00
Kingdon B
44565dca88 [fluxcd] update to flux-operator 0.19.0
Signed-off-by: Kingdon B <kingdon@urmanac.com>
2025-04-24 08:25:29 -04:00
Andrei Kvapil
cefcd24ebb [ci] Fix uploading assets to release (#876)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Updated release workflow to use the full tag string when uploading
assets.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-24 14:25:29 +02:00
Andrei Kvapil
13d7df47d7 [kubernetes] Fix merging valuesOverride for tenant clusters (#879) 2025-04-24 14:24:53 +02:00
Andrei Kvapil
1ccd3074dc [kubernetes] Fix merging valuesOverride for tenant clusters
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-24 14:24:07 +02:00
Andrei Kvapil
70d3591ed2 [kubernetes] Refactor controlPlane settings (#866)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Documentation**
- Updated documentation to rename and restructure the control plane
resource configuration section, replacing the old naming with a unified
"Kubernetes control plane configuration" and updated parameter prefixes.
- **Refactor**
- Consolidated and renamed control plane configuration from
`kamajiControlPlane` to `controlPlane` across configuration files.
- Flattened configuration structure and updated all related parameter
references and hierarchy for improved clarity and consistency.
- **New Features**
- Enhanced resource preset options with expanded enum values for control
plane components.
- **Bug Fixes**
- Simplified HelmRelease manifests by embedding override values inline,
removing dependency on external Secret resources for addons including
cert-manager, GPU operator, ingress-nginx, and vertical-pod-autoscaler.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-24 14:08:54 +02:00
Andrei Kvapil
700991f4fa [ci] let CI cancel the previous job if a new one is scheduled (#873)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Improved reliability of GitHub Actions workflows by ensuring only one
job per pull request or branch runs at a time. If a new workflow run is
triggered, any previous in-progress runs for the same group will be
automatically canceled, preventing overlapping executions.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-24 13:58:23 +02:00
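The mechanism is the standard Actions `concurrency` block, also visible in the workflow diffs at the bottom of this page; a minimal sketch with an illustrative group key:

```
# One run per PR (or branch); a newer run cancels the in-progress one.
concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true
```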
Andrei Kvapil
d89acbf44d [ci] get rid of ok-to-test label (#875)
Github requires approval for external users anyway:


https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-workflow-runs/approving-workflow-runs-from-public-forks

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Simplified conditions for running GitHub Actions workflows on pull
requests, removing dependencies on the "ok-to-test" label and repository
origin.
  - Updated comments to reflect the new workflow logic.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-24 13:58:12 +02:00
Andrei Kvapil
59ef3296f0 [ci] Fix uploading assets to release
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-24 13:57:24 +02:00
Andrei Kvapil
3ed0cdee1c [kubernetes] Update tenant Kubernetes to v1.32
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-24 13:43:56 +02:00
Andrei Kvapil
9f5230a342 [kubernetes] Refactor controlPlane settings
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-24 13:35:10 +02:00
Andrei Kvapil
b895ccfdeb [cluster-api] Update operator, providers, remove Kamaji workaround (#867)
- Update Cluster API operator to v0.19.0
- Update Cluster API Kamaji control-plane provider to v0.14.2.
- This change includes [upstream
fix](https://github.com/clastix/cluster-api-control-plane-provider-kamaji/pull/175),
so our workaround gets removed
- Update Cluster API KubeVirt infrastructure provider to v0.1.10
- Update Cluster API core provider to v1.10.0
- Update Cluster API kubeadm config provider to v1.10.0



Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-24 13:30:15 +02:00
Andrei Kvapil
d54a407d68 [ci] Disable pre-commit for release branches
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-24 11:46:26 +02:00
Andrei Kvapil
f9ec630509 [ci] get rid of ok-to-test label
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-24 11:39:45 +02:00
Andrei Kvapil
3f47181c10 [postgres] remove duplicated template from backup manifest
Resolves https://github.com/cozystack/cozystack/issues/869



<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Refactor**
- Updated backup cron job configuration for improved clarity and
structure. No changes to backup behavior or scheduling.
- **Chores**
  - Incremented the application chart version to 0.10.1.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-24 11:34:36 +02:00
Ian Simon
19409d801d [postgres] remove duplicated template from backup manifest
Signed-off-by: Ian Simon <cheatmaster114@gmail.com>
2025-04-24 11:29:30 +02:00
Andrei Kvapil
8a4793d571 [ci] let CI cancel the previous job if a new one is scheduled
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-24 11:25:10 +02:00
Andrei Kvapil
0fc3fdcb3d Update Kube-OVN to v1.13.10 (#847)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
	- Updated kube-ovn chart and container image to version v1.13.10.
- **Bug Fixes**
- Adjusted volume mount paths in the ovncni DaemonSet for improved
configuration consistency.
- **Chores**
	- Streamlined Dockerfile to use the official kube-ovn image directly.
- Automated version synchronization between chart files and Dockerfile
for better maintainability.
- **Improvements**
- Removed NetworkManager synchronization to optimize controller runtime
behavior.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-24 00:24:59 +02:00
Andrei Kvapil
04e2b3952b Update Kube-OVN to v1.13.10
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-23 23:25:19 +02:00
Andrei Kvapil
b56624a781 [cluster-api] Update operator, providers, remove Kamaji workaround
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-23 17:19:29 +02:00
Timofei Larkin
07d7fadb1a Suppress wget progress bar (#865)
In our CI wget spams thousands of lines of the progress bar into the
output, making it hard to read. Turns out, it doesn't have an option to
just remove the progress bar, but explicitly directing wget's log to
stdout and invoking --show-progress sends that to stderr which we
redirect to /dev/null. The downloaded size is still reported at regular
intervals, but --progress=dot:giga shortens that to one line per 32M
which is manageable.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Improved file download process to display clearer progress updates
during downloads.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-23 19:02:12 +04:00
Andrei Kvapil
8db92d53d1 [kubernetes] Add gpu-operator and introduce GPU support for tenant Kubernetes clusters (#834)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Added support for GPU resources in Kubernetes clusters, including the
ability to specify GPUs per node group and deploy the NVIDIA GPU
Operator as an optional addon.
- Introduced new configuration options for customizing Kamaji control
plane resources and presets.
- Added support for vertical pod autoscaler customization via override
values.

- **Bug Fixes**
- Corrected typographical errors in label keys across multiple
HelmRelease manifests to ensure consistent labeling.

- **Documentation**
- Updated documentation to describe new GPU and control plane
configuration options, removed the instance type feature matrix, and
added detailed parameter explanations.

- **Chores**
- Incremented Kubernetes app chart version to 0.19.0 and updated version
mappings.
  - Fixed typos in parameter descriptions and comments.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-23 16:44:01 +02:00
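A heavily hedged sketch of the two pieces described above, GPUs per node group and the GPU Operator addon; every field name here is an assumption rather than the chart's actual schema.

```
# Assumed kubernetes app values excerpt; field names are illustrative only.
nodeGroups:
  md0:
    minReplicas: 1
    maxReplicas: 3
    gpus:
      - name: nvidia.com/GA102GL_A10   # device name as exposed by the GPU Operator
addons:
  gpuOperator:
    enabled: true
```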
Andrei Kvapil
7537235f43 [kubernetes] Add gpu-operator and introduce GPU support for tenant Kubernetes clusters
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-23 16:39:10 +02:00
Timofei Larkin
4bb524e53d Suppress wget progress bar
In our CI wget spams thousands of lines of the progress bar into the
output, making it hard to read. Turns out, it doesn't have an option to
just remove the progress bar, but explicitly directing wget's log to
stdout and invoking --show-progress sends that to stderr which we
redirect to /dev/null. The downloaded size is still reported at regular
intervals, but --progress=dot:giga shortens that to one line per 32M
which is manageable.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-04-23 17:37:57 +03:00
Andrei Kvapil
e7ded52f93 [virtual-machine] Fix: Add GPU names to virtual machines spec (#862)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Each GPU device entry now includes a unique identifier alongside its
device name in both VirtualMachine and VM Instance templates.

- **Configuration**
- The default GPU configuration now includes a specific GPU entry by
default, instead of being empty.

- **Version Updates**
- Chart versions for VirtualMachine and VM Instance applications have
been incremented.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-23 16:26:13 +02:00
Andrei Kvapil
8547dc3b21 [virtual-machine] Fix: Add GPU names to virtual machines spec
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-23 15:27:47 +02:00
Andrei Kvapil
c22603bf7e [tenant] Fix networkpolicy for accessing externalIPs from the cluster (#854)
This PR fixes an issue with accessing the cluster's external IPs from the
cluster itself:

```
Policy verdict log: flow 0x6c9bf32e local EP ID 1155, remote ID remote-node, proto 6, ingress, action deny, auth: disabled, match none, 172.27.88.13:46124 -> 10.244.4.174:30274 tcp SYN
xx drop (Policy denied) flow 0x6c9bf32e to endpoint 1155, ifindex 247, file bpf_lxc.c:2181, , identity remote-node->56986: 172.27.88.13:46124 -> 10.244.4.174:30274 tcp SYN
```

related doc:
https://docs.cilium.io/en/stable/security/policy/language/#entities-based


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Expanded network access for the tenant application to allow
connections from both external sources and within the cluster.

- **Chores**
	- Updated the tenant application to version 1.9.2.
	- Adjusted version mappings to reflect the latest release.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-23 14:30:55 +02:00
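The linked Cilium docs cover entities-based rules; a hedged sketch of an ingress rule admitting both external ("world") and in-cluster ("cluster") sources, while the real policy name and selectors in the tenant chart may differ.

```
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: tenant-allow-external-ips   # placeholder name
spec:
  endpointSelector: {}              # placeholder selector
  ingress:
    - fromEntities:
        - world     # traffic from outside the cluster
        - cluster   # traffic originating inside the cluster (incl. remote-node)
```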
Andrei Kvapil
89525dedb5 [e2e] fix timeouts for capi and keycloak (#858)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Increased timeout durations for waiting on certain Kubernetes
resources to improve reliability during environment setup.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-23 14:26:17 +02:00
Andrei Kvapil
1c53a6f9f6 [e2e] fix timeouts for capi and keycloak
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-23 14:01:51 +02:00
Andrei Kvapil
16ee0f2c3a [platform]: add vpa for cozy etcd operator (#850)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Added support for Vertical Pod Autoscaler (VPA) configuration in the
etcd-operator Helm chart, allowing automatic scaling of CPU and memory
resources for both the operator and kube-rbac-proxy components.
- Introduced new configuration options for enabling VPA, setting
resource limits, and specifying update policies.
- **Documentation**
- Updated documentation to describe the new VPA configuration options
and usage.
- **Chores**
  - Incremented chart version to 0.4.2.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-23 13:58:47 +02:00
Andrei Kvapil
72d0394475 Revert "[platform] Hash tenant config and store in configmap" (#855)
Reverts cozystack/cozystack#818, according to the decision made in
https://github.com/cozystack/cozystack/issues/802#issuecomment-2823950243

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Refactor**
- Removed configuration hash ConfigMaps and related logic from the
system.
- Updated resource templates to no longer reference configuration hash
values.
- Cleaned up internal constants and code related to configuration hash
handling.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-23 13:24:34 +02:00
Andrei Kvapil
0a998c8b49 Revert "[platform] Hash tenant config and store in configmap"
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-23 13:24:14 +02:00
Andrei Kvapil
7bfad655c2 Fix: networkpolicy for tenant to access from cluster
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-23 12:18:40 +02:00
Andrei Kvapil
e81cbf780c [ci] Enable release-candidates and backport functionality (#841)
This PR includes the refactored pipeline:
- Automatically create a long-term release branch `release-X.Y` after
any tag `vX.Y.*` has been published
- Allow only tags with names `vX.Y.Z` or `vX.Y.Z-rcN`
- Automatically set the `prerelease` option for the release if it is a
release candidate
- Automatically set the `latest` option for the release according to SemVer
- Add a new workflow to backport PRs with the `backport` label into the
current feature release
- Do not require the `ok-to-test` label for internal PRs
2025-04-23 12:06:50 +02:00
kklinch0
e8cc44450a [platform]: add vpa for cozy etcd operator
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-04-22 22:48:47 +03:00
Andrei Kvapil
d3a8a4a7de Update Cilium to v1.17.3 (#848)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-22 20:02:06 +02:00
Andrei Kvapil
fc2c5a0f6b [kubevirt] Enable VMExport feature (#808)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced a new configuration option to control the virtual export
proxy service (default disabled).
- Deployed a dedicated ingress configuration to support flexible routing
for the virtual export proxy.
- Enabled a feature toggle for VM export capabilities in KubeVirt
deployments.
- **Documentation**
- Updated user documentation to include details about the new virtual
export proxy parameter.
- **Chores**
- Upgraded the associated ingress component from version 1.4.0 to 1.5.0.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-22 20:01:47 +02:00
Andrei Kvapil
0f8b8e1744 Update LINSTOR to v1.31.0 (#846)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Updated Helm chart version and container image tags for Piraeus
Operator and related components to newer releases. This includes updates
for controller, satellite, CSI, DRBD, and sig-storage images. No other
configuration changes were made.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-04-22 19:59:50 +02:00
Andrei Kvapil
703073a164 Update Cilium to v1.17.3
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-22 19:30:30 +02:00
Andrei Kvapil
6a0fc64475 Update LINSTOR to v1.31.0
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-22 19:12:44 +02:00
Andrei Kvapil
63ebab5c2a [ci] Enable release-candidates and backport functionality
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-22 18:49:40 +02:00
Andrei Kvapil
0ddaff9380 [kubevirt] Enable VMExport feature
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-22 18:40:01 +02:00
232 changed files with 17744 additions and 1737 deletions

53
.github/workflows/backport.yaml vendored Normal file
View File

@@ -0,0 +1,53 @@
name: Automatic Backport
on:
pull_request_target:
types: [closed] # fires when PR is closed (merged)
concurrency:
group: backport-${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
permissions:
contents: write
pull-requests: write
jobs:
backport:
if: |
github.event.pull_request.merged == true &&
contains(github.event.pull_request.labels.*.name, 'backport')
runs-on: [self-hosted]
steps:
# 1. Decide which maintenance branch should receive the backport
- name: Determine target maintenance branch
id: target
uses: actions/github-script@v7
with:
script: |
let rel;
try {
rel = await github.rest.repos.getLatestRelease({
owner: context.repo.owner,
repo: context.repo.repo
});
} catch (e) {
core.setFailed('No existing releases found; cannot determine backport target.');
return;
}
const [maj, min] = rel.data.tag_name.replace(/^v/, '').split('.');
const branch = `release-${maj}.${min}`;
core.setOutput('branch', branch);
console.log(`Latest release ${rel.data.tag_name}; backporting to ${branch}`);
# 2. Checkout (required by backport-action)
- name: Checkout repository
uses: actions/checkout@v4
# 3. Create the backport pull request
- name: Create backport PR
uses: korthout/backport-action@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
label_pattern: '' # don't read labels for targets
target_branches: ${{ steps.target.outputs.branch }}

View File

@@ -1,12 +1,13 @@
name: Pre-Commit Checks
on:
push:
branches:
- main
pull_request:
paths-ignore:
- '**.md'
types: [labeled, opened, synchronize, reopened]
concurrency:
group: pre-commit-${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
jobs:
pre-commit:
runs-on: ubuntu-22.04

View File

@@ -4,6 +4,10 @@ on:
pull_request:
types: [labeled, opened, synchronize, reopened, closed]
concurrency:
group: pull-requests-release-${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
jobs:
verify:
name: Test Release
@@ -12,8 +16,8 @@ jobs:
contents: read
packages: write
# Run only when the PR carries the "release" label and not closed.
if: |
contains(github.event.pull_request.labels.*.name, 'ok-to-test') &&
contains(github.event.pull_request.labels.*.name, 'release') &&
github.event.action != 'closed'
@@ -39,38 +43,112 @@ jobs:
runs-on: [self-hosted]
permissions:
contents: write
if: |
github.event.pull_request.merged == true &&
contains(github.event.pull_request.labels.*.name, 'release')
steps:
# Extract tag from branch name (branch = release-X.Y.Z*)
- name: Extract tag from branch name
id: get_tag
uses: actions/github-script@v7
with:
script: |
const branch = context.payload.pull_request.head.ref;
const match = branch.match(/^release-(\d+\.\d+\.\d+(?:[-\w\.]+)?)$/);
if (!match) {
core.setFailed(`Branch '${branch}' does not match expected format 'release-X.Y.Z[-suffix]'`);
} else {
const tag = `v${match[1]}`;
core.setOutput('tag', tag);
console.log(`✅ Extracted tag: ${tag}`);
const m = branch.match(/^release-(\d+\.\d+\.\d+(?:[-\w\.]+)?)$/);
if (!m) {
core.setFailed(`Branch '${branch}' does not match 'release-X.Y.Z[-suffix]'`);
return;
}
const tag = `v${m[1]}`;
core.setOutput('tag', tag);
console.log(`✅ Tag to publish: ${tag}`);
# Checkout repo & create / push annotated tag
- name: Checkout repo
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Create tag on merged commit
- name: Create tag on merge commit
run: |
git tag ${{ steps.get_tag.outputs.tag }} ${{ github.sha }} --force
git push origin ${{ steps.get_tag.outputs.tag }} --force
git tag -f ${{ steps.get_tag.outputs.tag }} ${{ github.sha }}
git push -f origin ${{ steps.get_tag.outputs.tag }}
# Ensure maintenance branch release-X.Y
- name: Ensure maintenance branch release-X.Y
uses: actions/github-script@v7
with:
script: |
const tag = '${{ steps.get_tag.outputs.tag }}'; // e.g. v0.1.3 or v0.1.3-rc3
const match = tag.match(/^v(\d+)\.(\d+)\.\d+(?:[-\w\.]+)?$/);
if (!match) {
core.setFailed(`❌ tag '${tag}' must match 'vX.Y.Z' or 'vX.Y.Z-suffix'`);
return;
}
const line = `${match[1]}.${match[2]}`;
const branch = `release-${line}`;
try {
await github.rest.repos.getBranch({
owner: context.repo.owner,
repo: context.repo.repo,
branch
});
console.log(`Branch '${branch}' already exists`);
} catch (_) {
await github.rest.git.createRef({
owner: context.repo.owner,
repo: context.repo.repo,
ref: `refs/heads/${branch}`,
sha: context.sha
});
console.log(`✅ Branch '${branch}' created at ${context.sha}`);
}
# Get the latest published release
- name: Get the latest published release
id: latest_release
uses: actions/github-script@v7
with:
script: |
try {
const rel = await github.rest.repos.getLatestRelease({
owner: context.repo.owner,
repo: context.repo.repo
});
core.setOutput('tag', rel.data.tag_name);
} catch (_) {
core.setOutput('tag', '');
}
# Compare current tag vs latest using semver-utils
- name: Semver compare
id: semver
uses: madhead/semver-utils@v4.3.0
with:
version: ${{ steps.get_tag.outputs.tag }}
compare-to: ${{ steps.latest_release.outputs.tag }}
# Derive flags: prerelease? make_latest?
- name: Calculate publish flags
id: flags
uses: actions/github-script@v7
with:
script: |
const tag = '${{ steps.get_tag.outputs.tag }}'; // v0.31.5-rc.1
const m = tag.match(/^v(\d+\.\d+\.\d+)(-rc\.\d+)?$/);
if (!m) {
core.setFailed(`❌ tag '${tag}' must match 'vX.Y.Z' or 'vX.Y.Z-rc.N'`);
return;
}
const version = m[1] + (m[2] ?? ''); // 0.31.5-rc.1
const isRc = Boolean(m[2]);
core.setOutput('is_rc', isRc);
const outdated = '${{ steps.semver.outputs.comparison-result }}' === '<';
core.setOutput('make_latest', isRc || outdated ? 'false' : 'legacy');
# Publish draft release with correct flags
- name: Publish draft release
uses: actions/github-script@v7
with:
@@ -78,19 +156,17 @@ jobs:
const tag = '${{ steps.get_tag.outputs.tag }}';
const releases = await github.rest.repos.listReleases({
owner: context.repo.owner,
repo: context.repo.repo
repo: context.repo.repo
});
const release = releases.data.find(r => r.tag_name === tag && r.draft);
if (!release) {
throw new Error(`Draft release with tag ${tag} not found`);
}
const draft = releases.data.find(r => r.tag_name === tag && r.draft);
if (!draft) throw new Error(`Draft release for ${tag} not found`);
await github.rest.repos.updateRelease({
owner: context.repo.owner,
repo: context.repo.repo,
release_id: release.id,
draft: false
owner: context.repo.owner,
repo: context.repo.repo,
release_id: draft.id,
draft: false,
prerelease: ${{ steps.flags.outputs.is_rc }},
make_latest: '${{ steps.flags.outputs.make_latest }}'
});
console.log(` Published release for ${tag}`);
console.log(`🚀 Published release for ${tag}`);

View File

@@ -4,6 +4,10 @@ on:
pull_request:
types: [labeled, opened, synchronize, reopened]
concurrency:
group: pull-requests-${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
jobs:
e2e:
name: Build and Test
@@ -12,8 +16,8 @@ jobs:
contents: read
packages: write
# Never run when the PR carries the "release" label.
if: |
contains(github.event.pull_request.labels.*.name, 'ok-to-test') &&
!contains(github.event.pull_request.labels.*.name, 'release')
steps:
@@ -30,10 +34,8 @@ jobs:
password: ${{ secrets.GITHUB_TOKEN }}
registry: ghcr.io
- name: make build
run: |
make build
- name: Build
run: make build
- name: make test
run: |
make test
- name: Test
run: make test

View File

@@ -1,10 +1,15 @@
name: Versioned Tag
on:
# Trigger on push if it includes a tag like vX.Y.Z
push:
tags:
- 'v*.*.*'
- 'v*.*.*' # vX.Y.Z
- 'v*.*.*-rc.*' # vX.Y.Z-rc.N
concurrency:
group: tags-${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
prepare-release:
@@ -14,9 +19,10 @@ jobs:
contents: write
packages: write
pull-requests: write
actions: write
steps:
# 1) Check if a non-draft release with this tag already exists
# Check if a non-draft release with this tag already exists
- name: Check if release already exists
id: check_release
uses: actions/github-script@v7
@@ -25,57 +31,67 @@ jobs:
const tag = context.ref.replace('refs/tags/', '');
const releases = await github.rest.repos.listReleases({
owner: context.repo.owner,
repo: context.repo.repo
repo: context.repo.repo
});
const existing = releases.data.find(r => r.tag_name === tag && !r.draft);
if (existing) {
core.setOutput('skip', 'true');
} else {
core.setOutput('skip', 'false');
}
const exists = releases.data.some(r => r.tag_name === tag && !r.draft);
core.setOutput('skip', exists);
console.log(exists ? `Release ${tag} already published` : `No published release ${tag}`);
# If a published release already exists, skip the rest of the workflow
- name: Skip if release already exists
if: steps.check_release.outputs.skip == 'true'
run: echo "Release already exists, skipping workflow."
# 2) Determine the base branch from which the tag was pushed
# Parse tag metadata (rc?, maintenance line, etc.)
- name: Parse tag
if: steps.check_release.outputs.skip == 'false'
id: tag
uses: actions/github-script@v7
with:
script: |
const ref = context.ref.replace('refs/tags/', ''); // e.g. v0.31.5-rc.1
const m = ref.match(/^v(\d+\.\d+\.\d+)(-rc\.\d+)?$/); // ['0.31.5', '-rc.1']
if (!m) {
core.setFailed(`❌ tag '${ref}' must match 'vX.Y.Z' or 'vX.Y.Z-rc.N'`);
return;
}
const version = m[1] + (m[2] ?? ''); // 0.31.5-rc.1
const isRc = Boolean(m[2]);
const [maj, min] = m[1].split('.');
core.setOutput('tag', ref); // v0.31.5-rc.1
core.setOutput('version', version); // 0.31.5-rc.1
core.setOutput('is_rc', isRc); // true
core.setOutput('line', `${maj}.${min}`); // 0.31
# Detect base branch (main or release-X.Y) the tag was pushed from
- name: Get base branch
if: steps.check_release.outputs.skip == 'false'
id: get_base
uses: actions/github-script@v7
with:
script: |
/*
For a push event with a tag, GitHub sets context.payload.base_ref
if the tag was pushed from a branch.
If it's empty, we can't determine the correct base branch and must fail.
*/
const baseRef = context.payload.base_ref;
if (!baseRef) {
core.setFailed(`❌ base_ref is empty. Make sure you push the tag from a branch (e.g. 'git push origin HEAD:refs/tags/vX.Y.Z').`);
core.setFailed(`❌ base_ref is empty. Push the tag via 'git push origin HEAD:refs/tags/<tag>'.`);
return;
}
const shortBranch = baseRef.replace("refs/heads/", "");
const releasePattern = /^release-\d+\.\d+$/;
if (shortBranch !== "main" && !releasePattern.test(shortBranch)) {
core.setFailed(`❌ Tagged commit must belong to branch 'main' or 'release-X.Y'. Got '${shortBranch}'`);
const branch = baseRef.replace('refs/heads/', '');
const ok = branch === 'main' || /^release-\d+\.\d+$/.test(branch);
if (!ok) {
core.setFailed(`❌ Tagged commit must belong to 'main' or 'release-X.Y'. Got '${branch}'`);
return;
}
core.setOutput('branch', branch);
core.setOutput('branch', shortBranch);
# 3) Checkout full git history and tags
# Checkout & login once
- name: Checkout code
if: steps.check_release.outputs.skip == 'false'
uses: actions/checkout@v4
with:
fetch-depth: 0
fetch-tags: true
fetch-tags: true
# 4) Login to GitHub Container Registry
- name: Login to GitHub Container Registry
- name: Login to GHCR
if: steps.check_release.outputs.skip == 'false'
uses: docker/login-action@v3
with:
@@ -83,113 +99,129 @@ jobs:
password: ${{ secrets.GITHUB_TOKEN }}
registry: ghcr.io
# 5) Build project artifacts
# Build project artifacts
- name: Build
if: steps.check_release.outputs.skip == 'false'
run: make build
# 6) Optionally commit built artifacts to the repository
# Commit built artifacts
- name: Commit release artifacts
if: steps.check_release.outputs.skip == 'false'
env:
GIT_AUTHOR_NAME: ${{ github.actor }}
GIT_AUTHOR_EMAIL: ${{ github.actor }}@users.noreply.github.com
run: |
git config user.name "github-actions"
git config user.name "github-actions"
git config user.email "github-actions@github.com"
git add .
git commit -m "Prepare release ${GITHUB_REF#refs/tags/}" -s || echo "No changes to commit"
git push origin HEAD || true
# 7) Create a release branch like release-X.Y.Z
# Get `latest_version` from latest published release
- name: Get latest published release
if: steps.check_release.outputs.skip == 'false'
id: latest_release
uses: actions/github-script@v7
with:
script: |
try {
const rel = await github.rest.repos.getLatestRelease({
owner: context.repo.owner,
repo: context.repo.repo
});
core.setOutput('tag', rel.data.tag_name);
} catch (_) {
core.setOutput('tag', '');
}
# Compare tag (A) with latest (B)
- name: Semver compare
if: steps.check_release.outputs.skip == 'false'
id: semver
uses: madhead/semver-utils@v4.3.0
with:
version: ${{ steps.tag.outputs.tag }} # A
compare-to: ${{ steps.latest_release.outputs.tag }} # B
# Create or reuse DRAFT GitHub Release
- name: Create / reuse draft release
if: steps.check_release.outputs.skip == 'false'
id: release
uses: actions/github-script@v7
with:
script: |
const tag = '${{ steps.tag.outputs.tag }}';
const isRc = ${{ steps.tag.outputs.is_rc }};
const outdated = '${{ steps.semver.outputs.comparison-result }}' === '<';
const makeLatest = outdated ? false : 'legacy';
const releases = await github.rest.repos.listReleases({
owner: context.repo.owner,
repo: context.repo.repo
});
let rel = releases.data.find(r => r.tag_name === tag);
if (!rel) {
rel = await github.rest.repos.createRelease({
owner: context.repo.owner,
repo: context.repo.repo,
tag_name: tag,
name: tag,
draft: true,
prerelease: isRc,
make_latest: makeLatest
});
console.log(`Draft release created for ${tag}`);
} else {
console.log(`Reusing existing release ${tag}`);
}
core.setOutput('upload_url', rel.upload_url);
# Build + upload assets (optional)
- name: Build & upload assets
if: steps.check_release.outputs.skip == 'false'
run: |
make assets
make upload_assets VERSION=${{ steps.tag.outputs.tag }}
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Create release-X.Y.Z branch and push (force-update)
- name: Create release branch
if: steps.check_release.outputs.skip == 'false'
run: |
BRANCH_NAME="release-${GITHUB_REF#refs/tags/v}"
git branch -f "$BRANCH_NAME"
git push origin "$BRANCH_NAME" --force
BRANCH="release-${GITHUB_REF#refs/tags/v}"
git branch -f "$BRANCH"
git push -f origin "$BRANCH"
# 8) Create a pull request from release-X.Y.Z to the original base branch
# Create pull request into original base branch (if absent)
- name: Create pull request if not exists
if: steps.check_release.outputs.skip == 'false'
uses: actions/github-script@v7
with:
script: |
const version = context.ref.replace('refs/tags/v', '');
const base = '${{ steps.get_base.outputs.branch }}';
const head = `release-${version}`;
const base = '${{ steps.get_base.outputs.branch }}';
const head = `release-${version}`;
const prs = await github.rest.pulls.list({
owner: context.repo.owner,
repo: context.repo.repo,
head: `${context.repo.owner}:${head}`,
repo: context.repo.repo,
head: `${context.repo.owner}:${head}`,
base
});
if (prs.data.length === 0) {
const newPr = await github.rest.pulls.create({
const pr = await github.rest.pulls.create({
owner: context.repo.owner,
repo: context.repo.repo,
repo: context.repo.repo,
head,
base,
title: `Release v${version}`,
body:
`This PR prepares the release \`v${version}\`.\n` +
`(Please merge it before releasing draft)`,
body: `This PR prepares the release \`v${version}\`.`,
draft: false
});
console.log(`Created pull request #${newPr.data.number} from ${head} to ${base}`);
await github.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: newPr.data.number,
repo: context.repo.repo,
issue_number: pr.data.number,
labels: ['release']
});
console.log(`Created PR #${pr.data.number}`);
} else {
console.log(`Pull request already exists from ${head} to ${base}`);
console.log(`PR already exists from ${head} to ${base}`);
}
# 9) Create or reuse an existing draft GitHub release for this tag
- name: Create or reuse draft release
if: steps.check_release.outputs.skip == 'false'
id: create_release
uses: actions/github-script@v7
with:
script: |
const tag = context.ref.replace('refs/tags/', '');
const releases = await github.rest.repos.listReleases({
owner: context.repo.owner,
repo: context.repo.repo
});
let release = releases.data.find(r => r.tag_name === tag);
if (!release) {
release = await github.rest.repos.createRelease({
owner: context.repo.owner,
repo: context.repo.repo,
tag_name: tag,
name: `${tag}`,
draft: true,
prerelease: false
});
}
core.setOutput('upload_url', release.upload_url);
# 10) Build additional assets for the release (if needed)
- name: Build assets
if: steps.check_release.outputs.skip == 'false'
run: make assets
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# 11) Upload assets to the draft release
- name: Upload assets
if: steps.check_release.outputs.skip == 'false'
run: make upload_assets VERSION=${GITHUB_REF#refs/tags/}
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# 12) Run tests
- name: Run tests
if: steps.check_release.outputs.skip == 'false'
run: make test

3
.gitignore vendored
View File

@@ -1,6 +1,7 @@
_out
.git
.idea
.vscode
# User-specific stuff
.idea/**/workspace.xml
@@ -75,4 +76,4 @@ fabric.properties
.idea/caches/build_file_checksums.ser
.DS_Store
**/.DS_Store
**/.DS_Store

View File

@@ -18,6 +18,7 @@ repos:
(cd "$dir" && make generate)
fi
done
git diff --color=always | cat
'
language: script
files: ^.*$

View File

@@ -20,6 +20,7 @@ build: build-deps
make -C packages/system/kubeovn image
make -C packages/system/kubeovn-webhook image
make -C packages/system/dashboard image
make -C packages/system/metallb image
make -C packages/system/kamaji image
make -C packages/system/bucket image
make -C packages/core/testing image
@@ -47,7 +48,6 @@ assets:
test:
make -C packages/core/testing apply
make -C packages/core/testing test
#make -C packages/core/testing test-applications
generate:
hack/update-codegen.sh

View File

@@ -39,6 +39,8 @@ import (
cozystackiov1alpha1 "github.com/cozystack/cozystack/api/v1alpha1"
"github.com/cozystack/cozystack/internal/controller"
"github.com/cozystack/cozystack/internal/telemetry"
helmv2 "github.com/fluxcd/helm-controller/api/v2"
// +kubebuilder:scaffold:imports
)
@@ -51,6 +53,7 @@ func init() {
utilruntime.Must(clientgoscheme.AddToScheme(scheme))
utilruntime.Must(cozystackiov1alpha1.AddToScheme(scheme))
utilruntime.Must(helmv2.AddToScheme(scheme))
// +kubebuilder:scaffold:scheme
}
@@ -182,6 +185,14 @@ func main() {
if err = (&controller.WorkloadReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "WorkloadReconciler")
os.Exit(1)
}
if err = (&controller.TenantHelmReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "TenantHelmReconciler")
os.Exit(1)

View File

@@ -1,10 +1,37 @@
# Release Workflow
This section explains how Cozystack builds and releases are made.
This document describes Cozystack's release process.
## Introduction
Cozystack uses a staged release process to ensure stability and flexibility during development.
There are three types of releases:
- **Release Candidates (RC):** Preview versions (e.g., `v0.42.0-rc.1`) used for final testing and validation.
- **Regular Releases:** Final versions (e.g., `v0.42.0`) that are feature-complete and thoroughly tested.
- **Patch Releases:** Bugfix-only updates (e.g., `v0.42.1`) made after a stable release, based on a dedicated release branch.
Each type plays a distinct role in delivering reliable and tested updates while allowing ongoing development to continue smoothly.
## Release Candidates
Release candidates are Cozystack versions that introduce new features and are published before a stable release.
Their purpose is to help validate stability before finalizing a new feature release.
They allow for final rounds of testing and bug fixes without freezing development.
Release candidates are given numbers `vX.Y.0-rc.N`, for example, `v0.42.0-rc.1`.
They are created directly in the `main` branch.
An RC is typically tagged when all major features for the upcoming release have been merged into main and the release enters its testing phase.
However, new features and changes can still be added before the regular release `vX.Y.0`.
Each RC contributes to a cumulative set of release notes that will be finalized when `vX.Y.0` is released.
After testing, if no critical issues remain, the regular release (`vX.Y.0`) is tagged from the last RC or a later commit in main.
This begins the regular release process, creates a dedicated `release-X.Y` branch, and opens the way for patch releases.
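As a minimal illustration (assuming standard git commands and the tag naming above), cutting a release candidate from `main` looks roughly like this:

```bash
# Tag the tested commit on main as a release candidate and push the tag
# from the branch, so the versioned-tag workflow can detect its base_ref.
git checkout main && git pull
git tag v0.42.0-rc.1
git push origin HEAD:refs/tags/v0.42.0-rc.1
```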
## Regular Releases
When making regular releases, we take a commit in `main` and decide to make it a release `x.y.0`.
When making a regular release, we tag the latest RC or a subsequent minimal-change commit as `vX.Y.0`.
In this explanation, we'll use version `v0.42.0` as an example:
```mermaid

View File

@@ -1,165 +0,0 @@
#!/bin/bash
RED='\033[0;31m'
GREEN='\033[0;32m'
RESET='\033[0m'
YELLOW='\033[0;33m'
ROOT_NS="tenant-root"
TEST_TENANT="tenant-e2e"
values_base_path="/hack/testdata/"
checks_base_path="/hack/testdata/"
function delete_hr() {
local release_name="$1"
local namespace="$2"
if [[ -z "$release_name" ]]; then
echo -e "${RED}Error: Release name is required.${RESET}"
exit 1
fi
if [[ -z "$namespace" ]]; then
echo -e "${RED}Error: Namespace name is required.${RESET}"
exit 1
fi
if [[ "$release_name" == "tenant-e2e" ]]; then
echo -e "${YELLOW}Skipping deletion for release tenant-e2e.${RESET}"
return 0
fi
kubectl delete helmrelease $release_name -n $namespace
}
function install_helmrelease() {
local release_name="$1"
local namespace="$2"
local chart_path="$3"
local repo_name="$4"
local repo_ns="$5"
local values_file="$6"
if [[ -z "$release_name" ]]; then
echo -e "${RED}Error: Release name is required.${RESET}"
exit 1
fi
if [[ -z "$namespace" ]]; then
echo -e "${RED}Error: Namespace name is required.${RESET}"
exit 1
fi
if [[ -z "$chart_path" ]]; then
echo -e "${RED}Error: Chart path name is required.${RESET}"
exit 1
fi
if [[ -n "$values_file" && -f "$values_file" ]]; then
local values_section
values_section=$(echo " values:" && sed 's/^/ /' "$values_file")
fi
local helmrelease_file=$(mktemp /tmp/HelmRelease.XXXXXX.yaml)
{
echo "apiVersion: helm.toolkit.fluxcd.io/v2"
echo "kind: HelmRelease"
echo "metadata:"
echo " labels:"
echo " cozystack.io/ui: \"true\""
echo " name: \"$release_name\""
echo " namespace: \"$namespace\""
echo "spec:"
echo " chart:"
echo " spec:"
echo " chart: \"$chart_path\""
echo " reconcileStrategy: Revision"
echo " sourceRef:"
echo " kind: HelmRepository"
echo " name: \"$repo_name\""
echo " namespace: \"$repo_ns\""
echo " version: '*'"
echo " interval: 1m0s"
echo " timeout: 5m0s"
[[ -n "$values_section" ]] && echo "$values_section"
} > "$helmrelease_file"
kubectl apply -f "$helmrelease_file"
rm -f "$helmrelease_file"
}
function install_tenant (){
local release_name="$1"
local namespace="$2"
local values_file="${values_base_path}tenant/values.yaml"
local repo_name="cozystack-apps"
local repo_ns="cozy-public"
install_helmrelease "$release_name" "$namespace" "tenant" "$repo_name" "$repo_ns" "$values_file"
}
function make_extra_checks(){
local checks_file="$1"
echo "after exec make $checks_file"
if [[ -n "$checks_file" && -f "$checks_file" ]]; then
echo -e "${YELLOW}Start extra checks with file: ${checks_file}${RESET}"
fi
}
function check_helmrelease_status() {
local release_name="$1"
local namespace="$2"
local checks_file="$3"
local timeout=300 # Timeout in seconds
local interval=5 # Interval between checks in seconds
local elapsed=0
while [[ $elapsed -lt $timeout ]]; do
local status_output
status_output=$(kubectl get helmrelease "$release_name" -n "$namespace" -o json | jq -r '.status.conditions[-1].reason')
if [[ "$status_output" == "InstallSucceeded" || "$status_output" == "UpgradeSucceeded" ]]; then
echo -e "${GREEN}Helm release '$release_name' is ready.${RESET}"
make_extra_checks "$checks_file"
delete_hr $release_name $namespace
return 0
elif [[ "$status_output" == "InstallFailed" ]]; then
echo -e "${RED}Helm release '$release_name': InstallFailed${RESET}"
exit 1
else
echo -e "${YELLOW}Helm release '$release_name' is not ready. Current status: $status_output${RESET}"
fi
sleep "$interval"
elapsed=$((elapsed + interval))
done
echo -e "${RED}Timeout reached. Helm release '$release_name' is still not ready after $timeout seconds.${RESET}"
exit 1
}
chart_name="$1"
if [ -z "$chart_name" ]; then
echo -e "${RED}No chart name provided. Exiting...${RESET}"
exit 1
fi
checks_file="${checks_base_path}${chart_name}/check.sh"
repo_name="cozystack-apps"
repo_ns="cozy-public"
release_name="$chart_name-e2e"
values_file="${values_base_path}${chart_name}/values.yaml"
install_tenant $TEST_TENANT $ROOT_NS
check_helmrelease_status $TEST_TENANT $ROOT_NS "${checks_base_path}tenant/check.sh"
echo -e "${YELLOW}Running tests for chart: $chart_name${RESET}"
install_helmrelease $release_name $TEST_TENANT $chart_name $repo_name $repo_ns $values_file
check_helmrelease_status $release_name $TEST_TENANT $checks_file

View File

@@ -60,7 +60,8 @@ done
# Prepare system drive
if [ ! -f nocloud-amd64.raw ]; then
wget https://github.com/cozystack/cozystack/releases/latest/download/nocloud-amd64.raw.xz -O nocloud-amd64.raw.xz
wget https://github.com/cozystack/cozystack/releases/latest/download/nocloud-amd64.raw.xz \
-O nocloud-amd64.raw.xz --show-progress --output-file /dev/stdout --progress=dot:giga 2>/dev/null
rm -f nocloud-amd64.raw
xz --decompress nocloud-amd64.raw.xz
fi
@@ -85,7 +86,8 @@ done
# Start VMs
for i in 1 2 3; do
qemu-system-x86_64 -machine type=pc,accel=kvm -cpu host -smp 8 -m 16384 \
-device virtio-net,netdev=net0,mac=52:54:00:12:34:5$i -netdev tap,id=net0,ifname=cozy-srv$i,script=no,downscript=no \
-device virtio-net,netdev=net0,mac=52:54:00:12:34:5$i \
-netdev tap,id=net0,ifname=cozy-srv$i,script=no,downscript=no \
-drive file=srv$i/system.img,if=virtio,format=raw \
-drive file=srv$i/seed.img,if=virtio,format=raw \
-drive file=srv$i/data.img,if=virtio,format=raw \
@@ -121,7 +123,7 @@ machine:
files:
- content: |
[plugins]
[plugins."io.containerd.grpc.v1.cri"]
[plugins."io.containerd.cri.v1.runtime"]
device_ownership_from_security_context = true
path: /etc/cri/conf.d/20-customization.part
op: create
@@ -231,11 +233,18 @@ timeout 60 sh -c 'until kubectl get hr -A | grep cozy; do sleep 1; done'
sleep 5
# Wait for all HelmReleases to be installed
kubectl get hr -A | awk 'NR>1 {print "kubectl wait --timeout=15m --for=condition=ready -n " $1 " hr/" $2 " &"} END{print "wait"}' | sh -x
failed_hrs=$(kubectl get hr -A | grep -v True)
if [ -n "$(echo "$failed_hrs" | grep -v NAME)" ]; then
printf 'Failed HelmReleases:\n%s\n' "$failed_hrs" >&2
exit 1
fi
# Wait for Cluster-API providers
timeout 30 sh -c 'until kubectl get deploy -n cozy-cluster-api capi-controller-manager capi-kamaji-controller-manager capi-kubeadm-bootstrap-controller-manager capi-operator-cluster-api-operator capk-controller-manager; do sleep 1; done'
kubectl wait deploy --timeout=30s --for=condition=available -n cozy-cluster-api capi-controller-manager capi-kamaji-controller-manager capi-kubeadm-bootstrap-controller-manager capi-operator-cluster-api-operator capk-controller-manager
timeout 60 sh -c 'until kubectl get deploy -n cozy-cluster-api capi-controller-manager capi-kamaji-controller-manager capi-kubeadm-bootstrap-controller-manager capi-operator-cluster-api-operator capk-controller-manager; do sleep 1; done'
kubectl wait deploy --timeout=1m --for=condition=available -n cozy-cluster-api capi-controller-manager capi-kamaji-controller-manager capi-kubeadm-bootstrap-controller-manager capi-operator-cluster-api-operator capk-controller-manager
# Wait for linstor controller
kubectl wait deploy --timeout=5m --for=condition=available -n cozy-linstor linstor-controller
@@ -325,8 +334,8 @@ if ! kubectl wait --timeout=2m --for=condition=ready -n tenant-root hr monitorin
kubectl wait --timeout=2m --for=condition=ready -n tenant-root hr monitoring
fi
kubectl patch -n tenant-root ingresses.apps.cozystack.io ingress --type=merge -p '{"spec":{
"dashboard": true
kubectl patch -n cozy-system cm cozystack --type=merge -p '{"data":{
"expose-services": "api,dashboard,cdi-uploadproxy,vm-exportproxy,keycloak"
}}'
# Wait for nginx-ingress-controller
@@ -357,5 +366,5 @@ kubectl patch -n cozy-system cm/cozystack --type=merge -p '{"data":{
"oidc-enabled": "true"
}}'
timeout 60 sh -c 'until kubectl get hr -n cozy-keycloak keycloak keycloak-configure keycloak-operator; do sleep 1; done'
timeout 120 sh -c 'until kubectl get hr -n cozy-keycloak keycloak keycloak-configure keycloak-operator; do sleep 1; done'
kubectl wait --timeout=10m --for=condition=ready -n cozy-keycloak hr keycloak keycloak-configure keycloak-operator

View File

@@ -1 +0,0 @@
return 0

View File

@@ -1,2 +0,0 @@
endpoints:
- 8.8.8.8:443

View File

@@ -1 +0,0 @@
return 0

View File

@@ -1,62 +0,0 @@
## @section Common parameters
## @param host The hostname used to access the Kubernetes cluster externally (defaults to using the cluster name as a subdomain for the tenant host).
## @param controlPlane.replicas Number of replicas for Kubernetes control-plane components
## @param storageClass StorageClass used to store user data
##
host: ""
controlPlane:
replicas: 2
storageClass: replicated
## @param nodeGroups [object] nodeGroups configuration
##
nodeGroups:
md0:
minReplicas: 0
maxReplicas: 10
instanceType: "u1.medium"
ephemeralStorage: 20Gi
roles:
- ingress-nginx
resources:
cpu: ""
memory: ""
## @section Cluster Addons
##
addons:
## Cert-manager: automatically creates and manages SSL/TLS certificate
##
certManager:
## @param addons.certManager.enabled Enables the cert-manager
## @param addons.certManager.valuesOverride Custom values to override
enabled: true
valuesOverride: {}
## Ingress-NGINX Controller
##
ingressNginx:
## @param addons.ingressNginx.enabled Enable Ingress-NGINX controller (expect nodes with 'ingress-nginx' role)
## @param addons.ingressNginx.valuesOverride Custom values to override
##
enabled: true
## @param addons.ingressNginx.hosts List of domain names that should be passed through to the cluster by upper cluster
## e.g:
## hosts:
## - example.org
## - foo.example.net
##
hosts: []
valuesOverride: {}
## Flux CD
##
fluxcd:
## @param addons.fluxcd.enabled Enables Flux CD
## @param addons.fluxcd.valuesOverride Custom values to override
##
enabled: true
valuesOverride: {}

View File

@@ -1 +0,0 @@
return 0

View File

@@ -1,10 +0,0 @@
## @section Common parameters
## @param external Enable external access from outside the cluster
## @param replicas Persistent Volume size for NATS
## @param storageClass StorageClass used to store the data
##
external: false
replicas: 2
storageClass: ""

View File

@@ -1 +0,0 @@
return 0

View File

@@ -1,6 +0,0 @@
host: ""
etcd: false
monitoring: false
ingress: false
seaweedfs: false
isolated: true

View File

@@ -0,0 +1,158 @@
package controller
import (
"context"
"fmt"
"strings"
"time"
e "errors"
helmv2 "github.com/fluxcd/helm-controller/api/v2"
"gopkg.in/yaml.v2"
corev1 "k8s.io/api/core/v1"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/log"
)
type TenantHelmReconciler struct {
client.Client
Scheme *runtime.Scheme
}
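// Reconcile watches tenant HelmReleases (names prefixed with "tenant-"). Once a
// tenant release reports a Ready condition and its latest history entry is
// deployed, it takes the release digest from that history entry and stamps it
// onto every other HelmRelease in the tenant's child namespace whose recorded
// digest differs, using the cozystack.io/tenant-config-digest annotation
// together with Flux's forceAt/requestedAt annotations, so those releases
// re-reconcile against the updated tenant configuration. For tenant-root it
// additionally copies the `host` value from the release values onto the
// tenant-root namespace annotation.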
func (r *TenantHelmReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
logger := log.FromContext(ctx)
hr := &helmv2.HelmRelease{}
if err := r.Get(ctx, req.NamespacedName, hr); err != nil {
if errors.IsNotFound(err) {
return ctrl.Result{}, nil
}
logger.Error(err, "unable to fetch HelmRelease")
return ctrl.Result{}, err
}
if !strings.HasPrefix(hr.Name, "tenant-") {
return ctrl.Result{}, nil
}
if len(hr.Status.Conditions) == 0 || hr.Status.Conditions[0].Type != "Ready" {
return ctrl.Result{}, nil
}
if len(hr.Status.History) == 0 {
logger.Info("no history in HelmRelease status", "name", hr.Name)
return ctrl.Result{}, nil
}
if hr.Status.History[0].Status != "deployed" {
return ctrl.Result{}, nil
}
newDigest := hr.Status.History[0].Digest
var hrList helmv2.HelmReleaseList
childNamespace := getChildNamespace(hr.Namespace, hr.Name)
if childNamespace == "tenant-root" && hr.Name == "tenant-root" {
if hr.Spec.Values == nil {
logger.Error(e.New("hr.Spec.Values is nil"), "cant annotate tenant-root ns")
return ctrl.Result{}, nil
}
err := annotateTenantRootNs(*hr.Spec.Values, r.Client)
if err != nil {
logger.Error(err, "cant annotate tenant-root ns")
return ctrl.Result{}, nil
}
logger.Info("namespace 'tenant-root' annotated")
}
if err := r.List(ctx, &hrList, client.InNamespace(childNamespace)); err != nil {
logger.Error(err, "unable to list HelmReleases in namespace", "namespace", hr.Name)
return ctrl.Result{}, err
}
for _, item := range hrList.Items {
if item.Name == hr.Name {
continue
}
oldDigest := item.GetAnnotations()["cozystack.io/tenant-config-digest"]
if oldDigest == newDigest {
continue
}
patchTarget := item.DeepCopy()
if patchTarget.Annotations == nil {
patchTarget.Annotations = map[string]string{}
}
ts := time.Now().Format(time.RFC3339Nano)
patchTarget.Annotations["cozystack.io/tenant-config-digest"] = newDigest
patchTarget.Annotations["reconcile.fluxcd.io/forceAt"] = ts
patchTarget.Annotations["reconcile.fluxcd.io/requestedAt"] = ts
patch := client.MergeFrom(item.DeepCopy())
if err := r.Patch(ctx, patchTarget, patch); err != nil {
logger.Error(err, "failed to patch HelmRelease", "name", patchTarget.Name)
continue
}
logger.Info("patched HelmRelease with new digest", "name", patchTarget.Name, "digest", newDigest, "version", hr.Status.History[0].Version)
}
return ctrl.Result{}, nil
}
func (r *TenantHelmReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&helmv2.HelmRelease{}).
Complete(r)
}
func getChildNamespace(currentNamespace, hrName string) string {
tenantName := strings.TrimPrefix(hrName, "tenant-")
switch {
case currentNamespace == "tenant-root" && hrName == "tenant-root":
// 1) root tenant inside root namespace
return "tenant-root"
case currentNamespace == "tenant-root":
// 2) any other tenant in root namespace
return fmt.Sprintf("tenant-%s", tenantName)
default:
// 3) tenant in a dedicated namespace
return fmt.Sprintf("%s-%s", currentNamespace, tenantName)
}
}
func annotateTenantRootNs(values apiextensionsv1.JSON, c client.Client) error {
var data map[string]interface{}
if err := yaml.Unmarshal(values.Raw, &data); err != nil {
return fmt.Errorf("failed to parse HelmRelease values: %w", err)
}
host, ok := data["host"].(string)
if !ok || host == "" {
return fmt.Errorf("host field not found or not a string")
}
var ns corev1.Namespace
if err := c.Get(context.TODO(), client.ObjectKey{Name: "tenant-root"}, &ns); err != nil {
return fmt.Errorf("failed to get namespace tenant-root: %w", err)
}
if ns.Annotations == nil {
ns.Annotations = map[string]string{}
}
ns.Annotations["namespace.cozystack.io/host"] = host
if err := c.Update(context.TODO(), &ns); err != nil {
return fmt.Errorf("failed to update namespace: %w", err)
}
return nil
}

View File

@@ -39,6 +39,15 @@ func (r *WorkloadReconciler) Reconcile(ctx context.Context, req ctrl.Request) (c
}
t := getMonitoredObject(w)
if t == nil {
err = r.Delete(ctx, w)
if err != nil {
logger.Error(err, "failed to delete workload")
}
return ctrl.Result{}, err
}
err = r.Get(ctx, types.NamespacedName{Name: t.GetName(), Namespace: t.GetNamespace()}, t)
// found object, nothing to do
@@ -68,20 +77,23 @@ func (r *WorkloadReconciler) SetupWithManager(mgr ctrl.Manager) error {
}
func getMonitoredObject(w *cozyv1alpha1.Workload) client.Object {
if strings.HasPrefix(w.Name, "pvc-") {
switch {
case strings.HasPrefix(w.Name, "pvc-"):
obj := &corev1.PersistentVolumeClaim{}
obj.Name = strings.TrimPrefix(w.Name, "pvc-")
obj.Namespace = w.Namespace
return obj
}
if strings.HasPrefix(w.Name, "svc-") {
case strings.HasPrefix(w.Name, "svc-"):
obj := &corev1.Service{}
obj.Name = strings.TrimPrefix(w.Name, "svc-")
obj.Namespace = w.Namespace
return obj
case strings.HasPrefix(w.Name, "pod-"):
obj := &corev1.Pod{}
obj.Name = strings.TrimPrefix(w.Name, "pod-")
obj.Namespace = w.Namespace
return obj
}
obj := &corev1.Pod{}
obj.Name = w.Name
obj.Namespace = w.Namespace
var obj client.Object
return obj
}

View File

@@ -0,0 +1,26 @@
package controller
import (
"testing"
cozyv1alpha1 "github.com/cozystack/cozystack/api/v1alpha1"
corev1 "k8s.io/api/core/v1"
)
func TestUnprefixedMonitoredObjectReturnsNil(t *testing.T) {
w := &cozyv1alpha1.Workload{}
w.Name = "unprefixed-name"
obj := getMonitoredObject(w)
if obj != nil {
t.Errorf(`getMonitoredObject(&Workload{Name: "%s"}) == %v, want nil`, w.Name, obj)
}
}
func TestPodMonitoredObject(t *testing.T) {
w := &cozyv1alpha1.Workload{}
w.Name = "pod-mypod"
obj := getMonitoredObject(w)
if pod, ok := obj.(*corev1.Pod); !ok || pod.Name != "mypod" {
t.Errorf(`getMonitoredObject(&Workload{Name: "%s"}) == %v, want &Pod{Name: "mypod"}`, w.Name, obj)
}
}

View File

@@ -212,15 +212,12 @@ func (r *WorkloadMonitorReconciler) reconcilePodForMonitor(
) error {
logger := log.FromContext(ctx)
// Combine both init containers and normal containers to sum resources properly
combinedContainers := append(pod.Spec.InitContainers, pod.Spec.Containers...)
// totalResources will store the sum of all container resource limits
// totalResources will store the sum of all container resource requests
totalResources := make(map[string]resource.Quantity)
// Iterate over all containers to aggregate their Limits
for _, container := range combinedContainers {
for name, qty := range container.Resources.Limits {
// Iterate over all containers to aggregate their requests
for _, container := range pod.Spec.Containers {
for name, qty := range container.Resources.Requests {
if existing, exists := totalResources[name.String()]; exists {
existing.Add(qty)
totalResources[name.String()] = existing
@@ -249,7 +246,7 @@ func (r *WorkloadMonitorReconciler) reconcilePodForMonitor(
workload := &cozyv1alpha1.Workload{
ObjectMeta: metav1.ObjectMeta{
Name: pod.Name,
Name: fmt.Sprintf("pod-%s", pod.Name),
Namespace: pod.Namespace,
},
}

View File

@@ -0,0 +1,3 @@
# S3 bucket
## Parameters

View File

@@ -11,7 +11,7 @@ spec:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
version: '*'
version: '>= 0.0.0-0'
interval: 1m0s
timeout: 5m0s
values:

View File

@@ -0,0 +1,5 @@
{
"title": "Chart Values",
"type": "object",
"properties": {}
}

View File

@@ -0,0 +1 @@
{}

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.7.0
version: 0.8.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -7,8 +7,10 @@ generate:
readme-generator -v values.yaml -s values.schema.json -r README.md
image:
docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/clickhouse-backup \
docker buildx build images/clickhouse-backup \
--provenance false \
--builder=$(BUILDER) \
--platform=$(PLATFORM) \
--tag $(REGISTRY)/clickhouse-backup:$(call settag,$(CLICKHOUSE_BACKUP_TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/clickhouse-backup:latest \
--cache-to type=inline \

View File

@@ -11,35 +11,34 @@ These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.5.0
version: 0.6.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/postgres-backup:0.10.0@sha256:10179ed56457460d95cd5708db2a00130901255fa30c4dd76c65d2ef5622b61f
ghcr.io/cozystack/cozystack/postgres-backup:0.10.1@sha256:10179ed56457460d95cd5708db2a00130901255fa30c4dd76c65d2ef5622b61f

View File

@@ -11,35 +11,34 @@ These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.4.0
version: 0.5.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -6,8 +6,10 @@ include ../../../scripts/package.mk
image: image-nginx
image-nginx:
docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/nginx-cache \
docker buildx build images/nginx-cache \
--provenance false \
--builder=$(BUILDER) \
--platform=$(PLATFORM) \
--tag $(REGISTRY)/nginx-cache:$(call settag,$(NGINX_CACHE_TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/nginx-cache:latest \
--cache-to type=inline \

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/nginx-cache:0.4.0@sha256:bef7344da098c4dc400a9e20ffad10ac991df67d09a30026207454abbc91f28b
ghcr.io/cozystack/cozystack/nginx-cache:0.4.0@sha256:4e1f5153d2673a399b315252238f4dc3eb5d6c59295aef594691710cc5b72eb4

View File

@@ -1,4 +1,4 @@
FROM ubuntu:22.04 as stage
FROM ubuntu:22.04 AS stage
ARG NGINX_VERSION=1.25.3
ARG IP2LOCATION_C_VERSION=8.6.1
@@ -9,11 +9,15 @@ ARG FIFTYONEDEGREES_NGINX_VERSION=3.2.21.1
ARG NGINX_CACHE_PURGE_VERSION=2.5.3
ARG NGINX_VTS_VERSION=0.2.2
ARG TARGETOS
ARG TARGETARCH
# Install required packages for development
RUN apt-get update -q \
&& apt-get install -yq \
RUN apt update -q \
&& apt install -yq --no-install-recommends \
ca-certificates \
unzip \
autoconf \
automake \
build-essential \
libtool \
libpcre3 \
@@ -68,7 +72,7 @@ RUN checkinstall \
--default \
--pkgname=ip2location-c \
--pkgversion=${IP2LOCATION_C_VERSION} \
--pkgarch=amd64 \
--pkgarch=${TARGETARCH} \
--pkggroup=lib \
--pkgsource="https://github.com/chrislim2888/IP2Location-C-Library" \
--maintainer="Eduard Generalov <eduard@generalov.net>" \
@@ -97,7 +101,7 @@ RUN checkinstall \
--default \
--pkgname=ip2proxy-c \
--pkgversion=${IP2PROXY_C_VERSION} \
--pkgarch=amd64 \
--pkgarch=${TARGETARCH} \
--pkggroup=lib \
--pkgsource="https://github.com/ip2location/ip2proxy-c" \
--maintainer="Eduard Generalov <eduard@generalov.net>" \
@@ -144,7 +148,7 @@ RUN checkinstall \
--default \
--pkgname=nginx \
--pkgversion=$VERS \
--pkgarch=amd64 \
--pkgarch=${TARGETARCH} \
--pkggroup=web \
--provides=nginx \
--requires=ip2location-c,ip2proxy-c,libssl3,libc-bin,libc6,libzstd1,libpcre++0v5,libpcre16-3,libpcre2-8-0,libpcre3,libpcre32-3,libpcrecpp0v5,libmaxminddb0 \
@@ -165,10 +169,9 @@ COPY nginx-reloader.sh /usr/bin/nginx-reloader.sh
RUN set -x \
&& groupadd --system --gid 101 nginx \
&& useradd --system --gid nginx --no-create-home --home /nonexistent --comment "nginx user" --shell /bin/false --uid 101 nginx \
&& apt update \
&& apt-get install --no-install-recommends --no-install-suggests -y gnupg1 ca-certificates inotify-tools \
&& apt -y install /packages/*.deb \
&& apt-get clean \
&& apt update -q \
&& apt install -yq --no-install-recommends --no-install-suggests gnupg1 ca-certificates inotify-tools \
&& apt install -y /packages/*.deb \
&& rm -rf /var/lib/apt/lists/* \
&& mkdir -p /var/lib/nginx /var/log/nginx \
&& ln -sf /dev/stdout /var/log/nginx/access.log \

View File

@@ -11,35 +11,34 @@ These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.5.0
version: 0.6.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -11,35 +11,34 @@ These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}

View File

@@ -16,10 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.18.1
version: 0.20.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.30.1"
appVersion: 1.32.4

View File

@@ -1,4 +1,4 @@
UBUNTU_CONTAINER_DISK_TAG = v1.30.1
KUBERNETES_VERSION = v1.32
KUBERNETES_PKG_TAG = $(shell awk '$$1 == "version:" {print $$2}' Chart.yaml)
include ../../../scripts/common-envs.mk
@@ -6,27 +6,36 @@ include ../../../scripts/package.mk
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md
yq -o json -i '.properties.controlPlane.properties.apiServer.properties.resourcesPreset.enum = ["none","nano","micro","small","medium","large","xlarge","2xlarge"]' values.schema.json
yq -o json -i '.properties.controlPlane.properties.controllerManager.properties.resourcesPreset.enum = ["none","nano","micro","small","medium","large","xlarge","2xlarge"]' values.schema.json
yq -o json -i '.properties.controlPlane.properties.scheduler.properties.resourcesPreset.enum = ["none","nano","micro","small","medium","large","xlarge","2xlarge"]' values.schema.json
yq -o json -i '.properties.controlPlane.properties.konnectivity.properties.server.properties.resourcesPreset.enum = ["none","nano","micro","small","medium","large","xlarge","2xlarge"]' values.schema.json
image: image-ubuntu-container-disk image-kubevirt-cloud-provider image-kubevirt-csi-driver image-cluster-autoscaler
image-ubuntu-container-disk:
docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/ubuntu-container-disk \
docker buildx build images/ubuntu-container-disk \
--provenance false \
--tag $(REGISTRY)/ubuntu-container-disk:$(call settag,$(UBUNTU_CONTAINER_DISK_TAG)) \
--tag $(REGISTRY)/ubuntu-container-disk:$(call settag,$(UBUNTU_CONTAINER_DISK_TAG)-$(TAG)) \
--builder=$(BUILDER) \
--platform=$(PLATFORM) \
--build-arg KUBERNETES_VERSION=${KUBERNETES_VERSION} \
--tag $(REGISTRY)/ubuntu-container-disk:$(call settag,$(KUBERNETES_VERSION)) \
--tag $(REGISTRY)/ubuntu-container-disk:$(call settag,$(KUBERNETES_VERSION)-$(TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/ubuntu-container-disk:latest \
--cache-to type=inline \
--metadata-file images/ubuntu-container-disk.json \
--push=$(PUSH) \
--label "org.opencontainers.image.source=https://github.com/cozystack/cozystack" \
--load=$(LOAD)
echo "$(REGISTRY)/ubuntu-container-disk:$(call settag,$(UBUNTU_CONTAINER_DISK_TAG))@$$(yq e '."containerimage.digest"' images/ubuntu-container-disk.json -o json -r)" \
echo "$(REGISTRY)/ubuntu-container-disk:$(call settag,$(KUBERNETES_VERSION))@$$(yq e '."containerimage.digest"' images/ubuntu-container-disk.json -o json -r)" \
> images/ubuntu-container-disk.tag
rm -f images/ubuntu-container-disk.json
image-kubevirt-cloud-provider:
docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/kubevirt-cloud-provider \
docker buildx build images/kubevirt-cloud-provider \
--provenance false \
--builder=$(BUILDER) \
--platform=$(PLATFORM) \
--tag $(REGISTRY)/kubevirt-cloud-provider:$(call settag,$(KUBERNETES_PKG_TAG)) \
--tag $(REGISTRY)/kubevirt-cloud-provider:$(call settag,$(KUBERNETES_PKG_TAG)-$(TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/kubevirt-cloud-provider:latest \
@@ -40,8 +49,10 @@ image-kubevirt-cloud-provider:
rm -f images/kubevirt-cloud-provider.json
image-kubevirt-csi-driver:
docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/kubevirt-csi-driver \
docker buildx build images/kubevirt-csi-driver \
--provenance false \
--builder=$(BUILDER) \
--platform=$(PLATFORM) \
--tag $(REGISTRY)/kubevirt-csi-driver:$(call settag,$(KUBERNETES_PKG_TAG)) \
--tag $(REGISTRY)/kubevirt-csi-driver:$(call settag,$(KUBERNETES_PKG_TAG)-$(TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/kubevirt-csi-driver:latest \
@@ -56,8 +67,10 @@ image-kubevirt-csi-driver:
image-cluster-autoscaler:
docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/cluster-autoscaler \
docker buildx build images/cluster-autoscaler \
--provenance false \
--builder=$(BUILDER) \
--platform=$(PLATFORM) \
--tag $(REGISTRY)/cluster-autoscaler:$(call settag,$(KUBERNETES_PKG_TAG)) \
--tag $(REGISTRY)/cluster-autoscaler:$(call settag,$(KUBERNETES_PKG_TAG)-$(TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/cluster-autoscaler:latest \

View File

@@ -27,20 +27,48 @@ How to access to deployed cluster:
kubectl get secret -n <namespace> kubernetes-<clusterName>-admin-kubeconfig -o go-template='{{ printf "%s\n" (index .data "super-admin.conf" | base64decode) }}' > test
```
# Series
## Parameters
<!-- source: https://github.com/kubevirt/common-instancetypes/blob/main/README.md -->
### Common parameters
. | U | O | CX | M | RT
----------------------------|-----|-----|------|-----|------
*Has GPUs* | | | | |
*Hugepages* | | | | ✓ | ✓
*Overcommitted Memory* | | | | |
*Dedicated CPU* | | | | | ✓
*Burstable CPU performance* | ✓ | ✓ | | ✓ |
*Isolated emulator threads* | | | ✓ | | ✓
*vNUMA* | | | ✓ | | ✓
*vCPU-To-Memory Ratio* | 1:4 | 1:4 | 1:2 | 1:8 | 1:4
| Name | Description | Value |
| ----------------------- | -------------------------------------------------------------------------------------------------------------------------------------- | ------------ |
| `host` | The hostname used to access the Kubernetes cluster externally (defaults to using the cluster name as a subdomain for the tenant host). | `""` |
| `controlPlane.replicas` | Number of replicas for Kubernetes control-plane components | `2` |
| `storageClass` | StorageClass used to store user data | `replicated` |
| `nodeGroups` | nodeGroups configuration | `{}` |
### Cluster Addons
| Name | Description | Value |
| --------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `addons.certManager.enabled` | Enables the cert-manager | `false` |
| `addons.certManager.valuesOverride` | Custom values to override | `{}` |
| `addons.cilium.valuesOverride` | Custom values to override | `{}` |
| `addons.gatewayAPI.enabled` | Enables the Gateway API | `false` |
| `addons.ingressNginx.enabled` | Enable Ingress-NGINX controller (expect nodes with 'ingress-nginx' role) | `false` |
| `addons.ingressNginx.valuesOverride` | Custom values to override | `{}` |
| `addons.ingressNginx.hosts` | List of domain names that should be passed through to the cluster by upper cluster | `[]` |
| `addons.gpuOperator.enabled` | Enables the gpu-operator | `false` |
| `addons.gpuOperator.valuesOverride` | Custom values to override | `{}` |
| `addons.fluxcd.enabled` | Enables Flux CD | `false` |
| `addons.fluxcd.valuesOverride` | Custom values to override | `{}` |
| `addons.monitoringAgents.enabled` | Enables MonitoringAgents (fluentbit, vmagents for sending logs and metrics to storage) if tenant monitoring enabled, send to tenant storage, else to root storage | `false` |
| `addons.monitoringAgents.valuesOverride` | Custom values to override | `{}` |
| `addons.verticalPodAutoscaler.valuesOverride` | Custom values to override | `{}` |
### Kubernetes control plane configuration
| Name | Description | Value |
| -------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `controlPlane.apiServer.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `small` |
| `controlPlane.apiServer.resources` | Resources | `{}` |
| `controlPlane.controllerManager.resources` | Resources | `{}` |
| `controlPlane.controllerManager.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `micro` |
| `controlPlane.scheduler.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `micro` |
| `controlPlane.scheduler.resources` | Resources | `{}` |
| `controlPlane.konnectivity.server.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `micro` |
| `controlPlane.konnectivity.server.resources` | Resources | `{}` |
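As a quick illustration of the parameters documented above, a minimal values snippet might look as follows; the node-group name `md0` and the instance type are illustrative, while the remaining keys and defaults come from this chart's values.yaml elsewhere in this diff:

```yaml
host: ""                 # defaults to <clusterName>.<tenant host>
controlPlane:
  replicas: 2
storageClass: replicated
nodeGroups:
  md0:                   # hypothetical group name, for illustration only
    instanceType: "u1.xlarge"
    gpus: []
```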
## U Series

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/cluster-autoscaler:0.18.0@sha256:85371c6aabf5a7fea2214556deac930c600e362f92673464fe2443784e2869c3
ghcr.io/cozystack/cozystack/cluster-autoscaler:0.19.0@sha256:85371c6aabf5a7fea2214556deac930c600e362f92673464fe2443784e2869c3

View File

@@ -1,7 +1,14 @@
# Source: https://raw.githubusercontent.com/kubernetes/autoscaler/refs/heads/master/cluster-autoscaler/Dockerfile.amd64
ARG builder_image=docker.io/library/golang:1.23.4
ARG BASEIMAGE=gcr.io/distroless/static:nonroot-amd64
ARG BASEIMAGE=gcr.io/distroless/static:nonroot-${TARGETARCH}
FROM ${builder_image} AS builder
ARG TARGETOS
ARG TARGETARCH
ENV GOOS=$TARGETOS
ENV GOARCH=$TARGETARCH
RUN git clone https://github.com/kubernetes/autoscaler /src/autoscaler \
&& cd /src/autoscaler/cluster-autoscaler \
&& git checkout cluster-autoscaler-1.32.0
@@ -14,6 +21,8 @@ RUN make build
FROM $BASEIMAGE
LABEL maintainer="Marcin Wielgus <mwielgus@google.com>"
COPY --from=builder /src/autoscaler/cluster-autoscaler/cluster-autoscaler-amd64 /cluster-autoscaler
ARG TARGETARCH
COPY --from=builder /src/autoscaler/cluster-autoscaler/cluster-autoscaler-${TARGETARCH} /cluster-autoscaler
WORKDIR /
CMD ["/cluster-autoscaler"]

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/kubevirt-cloud-provider:0.18.0@sha256:795d8e1ef4b2b0df2aa1e09d96cd13476ebb545b4bf4b5779b7547a70ef64cf9
ghcr.io/cozystack/cozystack/kubevirt-cloud-provider:0.19.0@sha256:795d8e1ef4b2b0df2aa1e09d96cd13476ebb545b4bf4b5779b7547a70ef64cf9

View File

@@ -1,5 +1,10 @@
# Source: https://github.com/kubevirt/cloud-provider-kubevirt/blob/main/build/images/kubevirt-cloud-controller-manager/Dockerfile
FROM --platform=linux/amd64 golang:1.20.6 AS builder
FROM golang:1.20.6 AS builder
ARG TARGETOS
ARG TARGETARCH
ENV GOOS=$TARGETOS
ENV GOARCH=$TARGETARCH
RUN git clone https://github.com/kubevirt/cloud-provider-kubevirt /go/src/kubevirt.io/cloud-provider-kubevirt \
&& cd /go/src/kubevirt.io/cloud-provider-kubevirt \
@@ -14,7 +19,7 @@ RUN go get 'k8s.io/endpointslice/util@v0.28' 'k8s.io/apiserver@v0.28'
RUN go mod tidy
RUN go mod vendor
RUN CGO_ENABLED=0 GOOS=linux go build -mod=vendor -ldflags="-s -w" -o bin/kubevirt-cloud-controller-manager ./cmd/kubevirt-cloud-controller-manager
RUN CGO_ENABLED=0 go build -mod=vendor -ldflags="-s -w" -o bin/kubevirt-cloud-controller-manager ./cmd/kubevirt-cloud-controller-manager
FROM registry.access.redhat.com/ubi9/ubi-micro
COPY --from=builder /go/src/kubevirt.io/cloud-provider-kubevirt/bin/kubevirt-cloud-controller-manager /bin/kubevirt-cloud-controller-manager

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.18.0@sha256:6f9091c3e7e4951c5e43fdafd505705fcc9f1ead290ee3ae42e97e9ec2b87b20
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.19.0@sha256:5717919c75e609902c6d67138311a2a8fd07be822e2173f3802b67cf5f3486e9

View File

@@ -5,6 +5,11 @@ RUN git clone https://github.com/kubevirt/csi-driver /src/kubevirt-csi-driver \
&& cd /src/kubevirt-csi-driver \
&& git checkout 35836e0c8b68d9916d29a838ea60cdd3fc6199cf
ARG TARGETOS
ARG TARGETARCH
ENV GOOS=$TARGETOS
ENV GOARCH=$TARGETARCH
WORKDIR /src/kubevirt-csi-driver
RUN make build

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/ubuntu-container-disk:v1.30.1@sha256:07392e7a87a3d4ef1c86c1b146e6c5de5c2b524aed5a53bf48870dc8a296f99a
ghcr.io/cozystack/cozystack/ubuntu-container-disk:v1.32@sha256:4a4f8bee150e04d1efcd5ff1ea83e12f495a98851cc5fd47ef41ac7aebce9b74

View File

@@ -1,19 +1,26 @@
FROM ubuntu:22.04 as guestfish
# TODO: Here we use ubuntu:22.04, as guestfish has some network issues running in ubuntu:24.04
FROM ubuntu:22.04 AS guestfish
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
&& apt-get -y install \
libguestfs-tools \
linux-image-generic \
wget \
make \
bash-completion \
&& apt-get clean
bash-completion
WORKDIR /build
FROM guestfish as builder
FROM guestfish AS builder
RUN wget -O image.img https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
ARG TARGETOS
ARG TARGETARCH
# noble is a code name for the Ubuntu 24.04 LTS release
RUN wget -O image.img https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-${TARGETARCH}.img --show-progress --output-file /dev/stdout --progress=dot:giga 2>/dev/null
ARG KUBERNETES_VERSION
RUN qemu-img resize image.img 5G \
&& eval "$(guestfish --listen --network)" \
@@ -24,19 +31,21 @@ RUN qemu-img resize image.img 5G \
&& guestfish --remote command "resize2fs /dev/sda1" \
# docker repo
&& guestfish --remote sh "curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg" \
&& guestfish --remote sh 'echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list' \
&& guestfish --remote sh 'echo "deb [signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list' \
# kubernetes repo
&& guestfish --remote sh "curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg" \
&& guestfish --remote sh "echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list" \
&& guestfish --remote sh "curl -fsSL https://pkgs.k8s.io/core:/stable:/${KUBERNETES_VERSION}/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg" \
&& guestfish --remote sh "echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/${KUBERNETES_VERSION}/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list" \
&& guestfish --remote command "apt-get check -q" \
# install containerd
&& guestfish --remote command "apt-get update -y" \
&& guestfish --remote command "apt-get install -y containerd.io" \
&& guestfish --remote command "apt-get update -q" \
&& guestfish --remote command "apt-get install -yq containerd.io" \
# configure containerd
&& guestfish --remote command "mkdir -p /etc/containerd" \
&& guestfish --remote sh "containerd config default | tee /etc/containerd/config.toml" \
&& guestfish --remote command "sed -i '/SystemdCgroup/ s/=.*/= true/' /etc/containerd/config.toml" \
&& guestfish --remote command "containerd config dump >/dev/null" \
# install kubernetes
&& guestfish --remote command "apt-get install -y kubelet kubeadm" \
&& guestfish --remote command "apt-get install -yq kubelet kubeadm" \
# clean apt cache
&& guestfish --remote sh 'apt-get clean && rm -rf /var/lib/apt/lists/*' \
# write system configuration

View File

@@ -11,35 +11,34 @@ These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
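To make the revised presets concrete, this is roughly what `include "resources.preset" (dict "type" "small")` renders after this change, with values taken from the dict above: CPU limits are dropped and the memory limit now matches the request.

```yaml
requests:
  cpu: 500m
  memory: 512Mi
  ephemeral-storage: 50Mi
limits:
  memory: 512Mi
  ephemeral-storage: 2Gi
```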

View File

@@ -39,6 +39,13 @@ spec:
sockets: 1
{{- end }}
devices:
{{- if .group.gpus }}
gpus:
{{- range $i, $gpu := .group.gpus }}
- name: gpu{{ add $i 1 }}
deviceName: {{ $gpu.name }}
{{- end }}
{{- end }}
disks:
- name: system
disk:
@@ -103,22 +110,22 @@ metadata:
kamaji.clastix.io/kubeconfig-secret-key: "super-admin.svc"
spec:
apiServer:
{{- if .Values.kamajiControlPlane.apiServer.resources }}
resources: {{- toYaml .Values.kamajiControlPlane.apiServer.resources | nindent 6 }}
{{- else if ne .Values.kamajiControlPlane.apiServer.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.kamajiControlPlane.apiServer.resourcesPreset "Release" .Release) | nindent 6 }}
{{- if .Values.controlPlane.apiServer.resources }}
resources: {{- toYaml .Values.controlPlane.apiServer.resources | nindent 6 }}
{{- else if ne .Values.controlPlane.apiServer.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.controlPlane.apiServer.resourcesPreset "Release" .Release) | nindent 6 }}
{{- end }}
controllerManager:
{{- if .Values.kamajiControlPlane.controllerManager.resources }}
resources: {{- toYaml .Values.kamajiControlPlane.controllerManager.resources | nindent 6 }}
{{- else if ne .Values.kamajiControlPlane.controllerManager.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.kamajiControlPlane.controllerManager.resourcesPreset "Release" .Release) | nindent 6 }}
{{- if .Values.controlPlane.controllerManager.resources }}
resources: {{- toYaml .Values.controlPlane.controllerManager.resources | nindent 6 }}
{{- else if ne .Values.controlPlane.controllerManager.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.controlPlane.controllerManager.resourcesPreset "Release" .Release) | nindent 6 }}
{{- end }}
scheduler:
{{- if .Values.kamajiControlPlane.scheduler.resources }}
resources: {{- toYaml .Values.kamajiControlPlane.scheduler.resources | nindent 6 }}
{{- else if ne .Values.kamajiControlPlane.scheduler.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.kamajiControlPlane.scheduler.resourcesPreset "Release" .Release) | nindent 6 }}
{{- if .Values.controlPlane.scheduler.resources }}
resources: {{- toYaml .Values.controlPlane.scheduler.resources | nindent 6 }}
{{- else if ne .Values.controlPlane.scheduler.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.controlPlane.scheduler.resourcesPreset "Release" .Release) | nindent 6 }}
{{- end }}
dataStoreName: "{{ $etcd }}"
addons:
@@ -128,10 +135,10 @@ spec:
konnectivity:
server:
port: 8132
{{- if .Values.kamajiControlPlane.addons.konnectivity.server.resources }}
resources: {{- toYaml .Values.kamajiControlPlane.addons.konnectivity.server.resources | nindent 10 }}
{{- else if ne .Values.kamajiControlPlane.addons.konnectivity.server.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.kamajiControlPlane.addons.konnectivity.server.resourcesPreset "Release" .Release) | nindent 10 }}
{{- if .Values.controlPlane.konnectivity.server.resources }}
resources: {{- toYaml .Values.controlPlane.konnectivity.server.resources | nindent 10 }}
{{- else if ne .Values.controlPlane.konnectivity.server.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.controlPlane.konnectivity.server.resourcesPreset "Release" .Release) | nindent 10 }}
{{- end }}
kubelet:
cgroupfs: systemd
@@ -143,14 +150,14 @@ spec:
ingress:
extraAnnotations:
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
hostname: {{ .Values.host | default (printf "%s.%s" .Release.Name $host) }}
hostname: {{ .Values.host | default (printf "%s.%s" .Release.Name $host) }}:443
className: "{{ $ingress }}"
deployment:
podAdditionalMetadata:
labels:
policy.cozystack.io/allow-to-etcd: "true"
replicas: 2
version: 1.30.1
version: {{ $.Chart.AppVersion }}
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
@@ -276,7 +283,7 @@ spec:
kind: KubevirtMachineTemplate
name: {{ $.Release.Name }}-{{ $groupName }}-{{ $kubevirtmachinetemplateHash }}
namespace: {{ $.Release.Namespace }}
version: v1.30.1
version: v{{ $.Chart.AppVersion }}
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineHealthCheck

View File

@@ -4,7 +4,7 @@ metadata:
name: {{ .Release.Name }}-cert-manager-crds
labels:
cozystack.io/repository: system
coztstack.io/target-cluster-name: {{ .Release.Name }}
cozystack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
releaseName: cert-manager-crds
@@ -16,6 +16,7 @@ spec:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
version: '>= 0.0.0-0'
kubeConfig:
secretRef:
name: {{ .Release.Name }}-admin-kubeconfig

View File

@@ -5,7 +5,7 @@ metadata:
name: {{ .Release.Name }}-cert-manager
labels:
cozystack.io/repository: system
coztstack.io/target-cluster-name: {{ .Release.Name }}
cozystack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
releaseName: cert-manager
@@ -17,6 +17,7 @@ spec:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
version: '>= 0.0.0-0'
kubeConfig:
secretRef:
name: {{ .Release.Name }}-admin-kubeconfig
@@ -30,11 +31,9 @@ spec:
upgrade:
remediation:
retries: -1
{{- if .Values.addons.certManager.valuesOverride }}
valuesFrom:
- kind: Secret
name: {{ .Release.Name }}-cert-manager-values-override
valuesKey: values
{{- with .Values.addons.certManager.valuesOverride }}
values:
{{- toYaml . | nindent 4 }}
{{- end }}
dependsOn:
@@ -47,13 +46,3 @@ spec:
- name: {{ .Release.Name }}-cert-manager-crds
namespace: {{ .Release.Namespace }}
{{- end }}
{{- if .Values.addons.certManager.valuesOverride }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-cert-manager-values-override
stringData:
values: |
{{- toYaml .Values.addons.certManager.valuesOverride | nindent 4 }}
{{- end }}
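For context, a sketch of the new pass-through behaviour, assuming a user sets `addons.certManager.valuesOverride`: the override is now rendered inline under the HelmRelease `values:` field instead of being written to a generated Secret and referenced via `valuesFrom`. The override keys shown are hypothetical.

```yaml
# release values (hypothetical override keys)
addons:
  certManager:
    enabled: true
    valuesOverride:
      cert-manager:
        replicaCount: 2

# resulting fragment of the rendered HelmRelease
spec:
  values:
    cert-manager:
      replicaCount: 2
```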

View File

@@ -1,10 +1,25 @@
{{- define "cozystack.defaultCiliumValues" -}}
cilium:
k8sServiceHost: {{ .Release.Name }}.{{ .Release.Namespace }}.svc
k8sServicePort: 6443
routingMode: tunnel
enableIPv4Masquerade: true
ipv4NativeRoutingCIDR: ""
{{- if $.Values.addons.gatewayAPI.enabled }}
gatewayAPI:
enabled: true
envoy:
enabled: true
{{- end }}
{{- end }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: {{ .Release.Name }}-cilium
labels:
cozystack.io/repository: system
coztstack.io/target-cluster-name: {{ .Release.Name }}
cozystack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
releaseName: cilium
@@ -16,6 +31,7 @@ spec:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
version: '>= 0.0.0-0'
kubeConfig:
secretRef:
name: {{ .Release.Name }}-admin-kubeconfig
@@ -30,14 +46,13 @@ spec:
remediation:
retries: -1
values:
cilium:
k8sServiceHost: {{ .Release.Name }}.{{ .Release.Namespace }}.svc
k8sServicePort: 6443
routingMode: tunnel
enableIPv4Masquerade: true
ipv4NativeRoutingCIDR: ""
{{- toYaml (deepCopy .Values.addons.cilium.valuesOverride | mergeOverwrite (fromYaml (include "cozystack.defaultCiliumValues" .))) | nindent 4 }}
dependsOn:
{{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" .Release.Namespace .Release.Name }}
- name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
{{- end }}
{{- if $.Values.addons.gatewayAPI.enabled }}
- name: {{ .Release.Name }}-gateway-api-crds
namespace: {{ .Release.Namespace }}
{{- end }}
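The Cilium values are now composed by merging the user override into the defaults defined in `cozystack.defaultCiliumValues`. A minimal sketch of the merge, assuming Sprig's `mergeOverwrite` semantics where the override takes precedence; the override keys are illustrative:

```yaml
# defaults from cozystack.defaultCiliumValues
cilium:
  k8sServiceHost: <release>.<namespace>.svc
  k8sServicePort: 6443
  routingMode: tunnel
  enableIPv4Masquerade: true
  ipv4NativeRoutingCIDR: ""

# user-supplied addons.cilium.valuesOverride (illustrative)
cilium:
  routingMode: native
  ipv4NativeRoutingCIDR: "10.0.0.0/8"

# merged result rendered under the HelmRelease values
cilium:
  k8sServiceHost: <release>.<namespace>.svc
  k8sServicePort: 6443
  routingMode: native
  enableIPv4Masquerade: true
  ipv4NativeRoutingCIDR: "10.0.0.0/8"
```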

View File

@@ -4,7 +4,7 @@ metadata:
name: {{ .Release.Name }}-csi
labels:
cozystack.io/repository: system
coztstack.io/target-cluster-name: {{ .Release.Name }}
cozystack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
releaseName: csi
@@ -16,6 +16,7 @@ spec:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
version: '>= 0.0.0-0'
kubeConfig:
secretRef:
name: {{ .Release.Name }}-admin-kubeconfig

View File

@@ -20,7 +20,7 @@ spec:
effect: "NoSchedule"
containers:
- name: kubectl
image: docker.io/clastix/kubectl:v1.30.1
image: docker.io/clastix/kubectl:v1.32
command:
- /bin/sh
- -c
@@ -30,6 +30,7 @@ spec:
patch
helmrelease
{{ .Release.Name }}-cilium
{{ .Release.Name }}-gateway-api-crds
{{ .Release.Name }}-csi
{{ .Release.Name }}-cert-manager
{{ .Release.Name }}-cert-manager-crds
@@ -38,6 +39,7 @@ spec:
{{ .Release.Name }}-ingress-nginx
{{ .Release.Name }}-fluxcd-operator
{{ .Release.Name }}-fluxcd
{{ .Release.Name }}-gpu-operator
-p '{"spec": {"suspend": true}}'
--type=merge --field-manager=flux-client-side-apply || true
---
@@ -76,6 +78,7 @@ rules:
- {{ .Release.Name }}-ingress-nginx
- {{ .Release.Name }}-fluxcd-operator
- {{ .Release.Name }}-fluxcd
- {{ .Release.Name }}-gpu-operator
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding

View File

@@ -5,7 +5,7 @@ metadata:
name: {{ .Release.Name }}-fluxcd-operator
labels:
cozystack.io/repository: system
coztstack.io/target-cluster-name: {{ .Release.Name }}
cozystack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
releaseName: fluxcd-operator
@@ -17,6 +17,7 @@ spec:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
version: '>= 0.0.0-0'
kubeConfig:
secretRef:
name: {{ .Release.Name }}-admin-kubeconfig
@@ -49,7 +50,7 @@ metadata:
name: {{ .Release.Name }}-fluxcd
labels:
cozystack.io/repository: system
coztstack.io/target-cluster-name: {{ .Release.Name }}
cozystack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
releaseName: fluxcd
@@ -61,6 +62,7 @@ spec:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
version: '>= 0.0.0-0'
kubeConfig:
secretRef:
name: {{ .Release.Name }}-kubeconfig
@@ -73,11 +75,9 @@ spec:
upgrade:
remediation:
retries: -1
{{- if .Values.addons.fluxcd.valuesOverride }}
valuesFrom:
- kind: Secret
name: {{ .Release.Name }}-fluxcd-values-override
valuesKey: values
{{- with .Values.addons.fluxcd.valuesOverride }}
values:
{{- toYaml . | nindent 4 }}
{{- end }}
dependsOn:
{{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" .Release.Namespace .Release.Name }}
@@ -89,14 +89,3 @@ spec:
- name: {{ .Release.Name }}-fluxcd-operator
namespace: {{ .Release.Namespace }}
{{- end }}
{{- if .Values.addons.fluxcd.valuesOverride }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-fluxcd-values-override
stringData:
values: |
{{- toYaml .Values.addons.fluxcd.valuesOverride | nindent 4 }}
{{- end }}

View File

@@ -0,0 +1,38 @@
{{- if $.Values.addons.gatewayAPI.enabled }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: {{ .Release.Name }}-gateway-api-crds
labels:
cozystack.io/repository: system
cozystack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
releaseName: gateway-api-crds
chart:
spec:
chart: cozy-gateway-api-crds
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
kubeConfig:
secretRef:
name: {{ .Release.Name }}-admin-kubeconfig
key: super-admin.svc
targetNamespace: kube-system
storageNamespace: kube-system
install:
createNamespace: false
remediation:
retries: -1
upgrade:
remediation:
retries: -1
dependsOn:
{{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" .Release.Namespace .Release.Name }}
- name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
{{- end }}
{{- end }}
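Enabling this addon from the release values is a one-line switch; per the Cilium defaults earlier in this diff, it also turns on Cilium's Gateway API and Envoy support:

```yaml
addons:
  gatewayAPI:
    enabled: true
```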

View File

@@ -0,0 +1,46 @@
{{- if .Values.addons.gpuOperator.enabled }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: {{ .Release.Name }}-gpu-operator
labels:
cozystack.io/repository: system
cozystack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
releaseName: gpu-operator
chart:
spec:
chart: cozy-gpu-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
version: '>= 0.0.0-0'
kubeConfig:
secretRef:
name: {{ .Release.Name }}-admin-kubeconfig
key: super-admin.svc
targetNamespace: cozy-gpu-operator
storageNamespace: cozy-gpu-operator
install:
createNamespace: true
remediation:
retries: -1
upgrade:
remediation:
retries: -1
{{- with .Values.addons.gpuOperator.valuesOverride }}
values:
{{- toYaml . | nindent 4 }}
{{- end }}
dependsOn:
{{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" .Release.Namespace .Release.Name }}
- name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
{{- end }}
- name: {{ .Release.Name }}-cilium
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@@ -1,3 +1,15 @@
{{- define "cozystack.defaultIngressValues" -}}
ingress-nginx:
fullnameOverride: ingress-nginx
controller:
kind: DaemonSet
hostNetwork: true
service:
enabled: false
nodeSelector:
node-role.kubernetes.io/ingress-nginx: ""
{{- end }}
{{- if .Values.addons.ingressNginx.enabled }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
@@ -5,7 +17,7 @@ metadata:
name: {{ .Release.Name }}-ingress-nginx
labels:
cozystack.io/repository: system
coztstack.io/target-cluster-name: {{ .Release.Name }}
cozystack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
releaseName: ingress-nginx
@@ -17,6 +29,7 @@ spec:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
version: '>= 0.0.0-0'
kubeConfig:
secretRef:
name: {{ .Release.Name }}-admin-kubeconfig
@@ -31,21 +44,7 @@ spec:
remediation:
retries: -1
values:
ingress-nginx:
fullnameOverride: ingress-nginx
controller:
kind: DaemonSet
hostNetwork: true
service:
enabled: false
nodeSelector:
node-role.kubernetes.io/ingress-nginx: ""
{{- if .Values.addons.ingressNginx.valuesOverride }}
valuesFrom:
- kind: Secret
name: {{ .Release.Name }}-ingress-nginx-values-override
valuesKey: values
{{- end }}
{{- toYaml (deepCopy .Values.addons.ingressNginx.valuesOverride | mergeOverwrite (fromYaml (include "cozystack.defaultIngressValues" .))) | nindent 4 }}
dependsOn:
{{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" .Release.Namespace .Release.Name }}
- name: {{ .Release.Name }}
@@ -54,14 +53,3 @@ spec:
- name: {{ .Release.Name }}-cilium
namespace: {{ .Release.Namespace }}
{{- end }}
{{- if .Values.addons.ingressNginx.valuesOverride }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-ingress-nginx-values-override
stringData:
values: |
{{- toYaml .Values.addons.ingressNginx.valuesOverride | nindent 4 }}
{{- end }}

View File

@@ -7,7 +7,7 @@ metadata:
name: {{ .Release.Name }}-monitoring-agents
labels:
cozystack.io/repository: system
coztstack.io/target-cluster-name: {{ .Release.Name }}
cozystack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
releaseName: cozy-monitoring-agents
@@ -19,6 +19,7 @@ spec:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
version: '>= 0.0.0-0'
kubeConfig:
secretRef:
name: {{ .Release.Name }}-admin-kubeconfig

View File

@@ -5,7 +5,7 @@ metadata:
name: {{ .Release.Name }}-vertical-pod-autoscaler-crds
labels:
cozystack.io/repository: system
coztstack.io/target-cluster-name: {{ .Release.Name }}
cozystack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
releaseName: vertical-pod-autoscaler-crds
@@ -17,6 +17,7 @@ spec:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
version: '>= 0.0.0-0'
kubeConfig:
secretRef:
name: {{ .Release.Name }}-admin-kubeconfig

View File

@@ -1,5 +1,28 @@
{{- define "cozystack.defaultVPAValues" -}}
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $targetTenant := index $myNS.metadata.annotations "namespace.cozystack.io/monitoring" }}
vertical-pod-autoscaler:
recommender:
extraArgs:
container-name-label: container
container-namespace-label: namespace
container-pod-name-label: pod
storage: prometheus
memory-saver: true
pod-label-prefix: label_
metric-for-pod-labels: kube_pod_labels{job="kube-state-metrics", tenant="{{ .Release.Namespace }}", cluster="{{ .Release.Name }}"}[8d]
pod-name-label: pod
pod-namespace-label: namespace
prometheus-address: http://vmselect-shortterm.{{ $targetTenant }}.svc.cozy.local:8481/select/0/prometheus/
prometheus-cadvisor-job-name: cadvisor
resources:
limits:
memory: 1600Mi
requests:
cpu: 100m
memory: 1600Mi
{{- end }}
{{- if .Values.addons.monitoringAgents.enabled }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
@@ -7,7 +30,7 @@ metadata:
name: {{ .Release.Name }}-vertical-pod-autoscaler
labels:
cozystack.io/repository: system
coztstack.io/target-cluster-name: {{ .Release.Name }}
cozystack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
releaseName: vertical-pod-autoscaler
@@ -19,6 +42,7 @@ spec:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
version: '>= 0.0.0-0'
kubeConfig:
secretRef:
name: {{ .Release.Name }}-admin-kubeconfig
@@ -33,32 +57,7 @@ spec:
remediation:
retries: -1
values:
vertical-pod-autoscaler:
recommender:
extraArgs:
container-name-label: container
container-namespace-label: namespace
container-pod-name-label: pod
storage: prometheus
memory-saver: true
pod-label-prefix: label_
metric-for-pod-labels: kube_pod_labels{job="kube-state-metrics", tenant="{{ .Release.Namespace }}", cluster="{{ .Release.Name }}"}[8d]
pod-name-label: pod
pod-namespace-label: namespace
prometheus-address: http://vmselect-shortterm.{{ $targetTenant }}.svc.cozy.local:8481/select/0/prometheus/
prometheus-cadvisor-job-name: cadvisor
resources:
limits:
memory: 1600Mi
requests:
cpu: 100m
memory: 1600Mi
{{- if .Values.addons.verticalPodAutoscaler.valuesOverride }}
valuesFrom:
- kind: Secret
name: {{ .Release.Name }}-vertical-pod-autoscaler-values-override
valuesKey: values
{{- end }}
{{- toYaml (deepCopy .Values.addons.verticalPodAutoscaler.valuesOverride | mergeOverwrite (fromYaml (include "cozystack.defaultVPAValues" .))) | nindent 4 }}
dependsOn:
{{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" .Release.Namespace .Release.Name }}
- name: {{ .Release.Name }}

View File

@@ -5,7 +5,7 @@ metadata:
name: {{ .Release.Name }}-cozy-victoria-metrics-operator
labels:
cozystack.io/repository: system
coztstack.io/target-cluster-name: {{ .Release.Name }}
cozystack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
releaseName: cozy-victoria-metrics-operator
@@ -17,6 +17,7 @@ spec:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
version: '>= 0.0.0-0'
kubeConfig:
secretRef:
name: {{ .Release.Name }}-admin-kubeconfig

View File

@@ -1,97 +1,247 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"host": {
"type": "string",
"description": "The hostname used to access the Kubernetes cluster externally (defaults to using the cluster name as a subdomain for the tenant host).",
"default": ""
"title": "Chart Values",
"type": "object",
"properties": {
"host": {
"type": "string",
"description": "The hostname used to access the Kubernetes cluster externally (defaults to using the cluster name as a subdomain for the tenant host).",
"default": ""
},
"controlPlane": {
"type": "object",
"properties": {
"replicas": {
"type": "number",
"description": "Number of replicas for Kubernetes control-plane components",
"default": 2
},
"controlPlane": {
"type": "object",
"properties": {
"replicas": {
"type": "number",
"description": "Number of replicas for Kubernetes contorl-plane components",
"default": 2
}
"apiServer": {
"type": "object",
"properties": {
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "small",
"enum": [
"none",
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
}
}
},
"storageClass": {
"type": "string",
"description": "StorageClass used to store user data",
"default": "replicated"
},
"addons": {
"type": "object",
"properties": {
"certManager": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Enables the cert-manager",
"default": false
},
"valuesOverride": {
"type": "object",
"description": "Custom values to override",
"default": {}
}
}
},
"ingressNginx": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Enable Ingress-NGINX controller (expect nodes with 'ingress-nginx' role)",
"default": false
},
"valuesOverride": {
"type": "object",
"description": "Custom values to override",
"default": {}
},
"hosts": {
"type": "array",
"description": "List of domain names that should be passed through to the cluster by upper cluster",
"default": [],
"items": {}
}
}
},
"fluxcd": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Enables Flux CD",
"default": false
},
"valuesOverride": {
"type": "object",
"description": "Custom values to override",
"default": {}
}
}
},
"monitoringAgents": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Enables MonitoringAgents (fluentbit, vmagents for sending logs and metrics to storage) if tenant monitoring enabled, send to tenant storage, else to root storage",
"default": false
},
"valuesOverride": {
"type": "object",
"description": "Custom values to override",
"default": {}
}
}
}
"controllerManager": {
"type": "object",
"properties": {
"resources": {
"type": "object",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "micro",
"enum": [
"none",
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
}
}
},
"scheduler": {
"type": "object",
"properties": {
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "micro",
"enum": [
"none",
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
}
}
},
"konnectivity": {
"type": "object",
"properties": {
"server": {
"type": "object",
"properties": {
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "micro",
"enum": [
"none",
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
}
}
}
}
}
}
},
"storageClass": {
"type": "string",
"description": "StorageClass used to store user data",
"default": "replicated"
},
"addons": {
"type": "object",
"properties": {
"certManager": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Enables the cert-manager",
"default": false
},
"valuesOverride": {
"type": "object",
"description": "Custom values to override",
"default": {}
}
}
},
"cilium": {
"type": "object",
"properties": {
"valuesOverride": {
"type": "object",
"description": "Custom values to override",
"default": {}
}
}
},
"gatewayAPI": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Enables the Gateway API",
"default": false
}
}
},
"ingressNginx": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Enable Ingress-NGINX controller (expect nodes with 'ingress-nginx' role)",
"default": false
},
"valuesOverride": {
"type": "object",
"description": "Custom values to override",
"default": {}
},
"hosts": {
"type": "array",
"description": "List of domain names that should be passed through to the cluster by upper cluster",
"default": [],
"items": {}
}
}
},
"gpuOperator": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Enables the gpu-operator",
"default": false
},
"valuesOverride": {
"type": "object",
"description": "Custom values to override",
"default": {}
}
}
},
"fluxcd": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Enables Flux CD",
"default": false
},
"valuesOverride": {
"type": "object",
"description": "Custom values to override",
"default": {}
}
}
},
"monitoringAgents": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Enables MonitoringAgents (fluentbit, vmagents for sending logs and metrics to storage) if tenant monitoring enabled, send to tenant storage, else to root storage",
"default": false
},
"valuesOverride": {
"type": "object",
"description": "Custom values to override",
"default": {}
}
}
},
"verticalPodAutoscaler": {
"type": "object",
"properties": {
"valuesOverride": {
"type": "object",
"description": "Custom values to override",
"default": {}
}
}
}
}
}
}
}

View File

@@ -1,12 +1,10 @@
## @section Common parameters
## @param host The hostname used to access the Kubernetes cluster externally (defaults to using the cluster name as a subdomain for the tenant host).
## @param controlPlane.replicas Number of replicas for Kubernetes contorl-plane components
## @param controlPlane.replicas Number of replicas for Kubernetes control-plane components
## @param storageClass StorageClass used to store user data
##
host: ""
controlPlane:
replicas: 2
storageClass: replicated
## @param nodeGroups [object] nodeGroups configuration
@@ -24,6 +22,14 @@ nodeGroups:
cpu: ""
memory: ""
## List of GPUs to attach (WARN: NVIDIA driver requires at least 4 GiB of RAM)
## e.g:
## instanceType: "u1.xlarge"
## gpus:
## - name: nvidia.com/AD102GL_L40S
gpus: []
## @section Cluster Addons
##
addons:
@@ -36,6 +42,18 @@ addons:
enabled: false
valuesOverride: {}
## Cilium CNI plugin
##
cilium:
## @param addons.cilium.valuesOverride Custom values to override
valuesOverride: {}
## Gateway API
##
gatewayAPI:
## @param addons.gatewayAPI.enabled Enables the Gateway API
enabled: false
## Ingress-NGINX Controller
##
ingressNginx:
@@ -52,6 +70,14 @@ addons:
hosts: []
valuesOverride: {}
## GPU-operator: NVIDIA GPU Operator
##
gpuOperator:
## @param addons.gpuOperator.enabled Enables the gpu-operator
## @param addons.gpuOperator.valuesOverride Custom values to override
enabled: false
valuesOverride: {}
## Flux CD
##
fluxcd:
@@ -77,62 +103,42 @@ addons:
##
valuesOverride: {}
## @section Kamaji control plane
## @section Kubernetes control plane configuration
##
kamajiControlPlane:
controlPlane:
replicas: 2
apiServer:
## @param kamajiControlPlane.apiServer.resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param kamajiControlPlane.apiServer.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
## @param controlPlane.apiServer.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
## @param controlPlane.apiServer.resources Resources
## e.g:
## resources:
## limits:
## cpu: 4000m
## memory: 4Gi
## requests:
## cpu: 100m
## memory: 512Mi
##
resourcesPreset: "small"
resources: {}
controllerManager:
## @param kamajiControlPlane.controllerManager.resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param kamajiControlPlane.controllerManager.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
## @param controlPlane.controllerManager.resources Resources
## @param controlPlane.controllerManager.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "micro"
resources: {}
scheduler:
## @param kamajiControlPlane.scheduler.resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param kamajiControlPlane.scheduler.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
## @param controlPlane.scheduler.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
## @param controlPlane.scheduler.resources Resources
resourcesPreset: "micro"
addons:
konnectivity:
server:
## @param kamajiControlPlane.addons.konnectivity.server.resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param kamajiControlPlane.addons.konnectivity.server.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "micro"
resources: {}
konnectivity:
server:
## @param controlPlane.konnectivity.server.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
## @param controlPlane.konnectivity.server.resources Resources
resourcesPreset: "micro"
resources: {}
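Because this hunk renames the `kamajiControlPlane` section, existing releases need their values moved to the new keys; a before/after sketch using the paths from this diff:

```yaml
# before this change
kamajiControlPlane:
  apiServer:
    resourcesPreset: small
  addons:
    konnectivity:
      server:
        resourcesPreset: micro

# after this change
controlPlane:
  apiServer:
    resourcesPreset: small
  konnectivity:
    server:
      resourcesPreset: micro
```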

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.6.0
version: 0.7.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -7,8 +7,10 @@ generate:
readme-generator -v values.yaml -s values.schema.json -r README.md
image:
docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/mariadb-backup \
docker buildx build images/mariadb-backup \
--provenance false \
--builder=$(BUILDER) \
--platform=$(PLATFORM) \
--tag $(REGISTRY)/mariadb-backup:$(call settag,$(MARIADB_BACKUP_TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/mariadb-backup:latest \
--cache-to type=inline \

View File

@@ -11,35 +11,34 @@ These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.5.0
version: 0.6.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -11,35 +11,34 @@ These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}

View File

@@ -33,7 +33,7 @@ spec:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
version: '*'
version: '>= 0.0.0-0'
interval: 1m0s
timeout: 5m0s
values:

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.10.0
version: 0.11.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -7,8 +7,10 @@ generate:
readme-generator -v values.yaml -s values.schema.json -r README.md
image:
docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/postgres-backup \
docker buildx build images/postgres-backup \
--provenance false \
--builder=$(BUILDER) \
--platform=$(PLATFORM) \
--tag $(REGISTRY)/postgres-backup:$(call settag,$(POSTGRES_BACKUP_TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/postgres-backup:latest \
--cache-to type=inline \

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/postgres-backup:0.10.0@sha256:10179ed56457460d95cd5708db2a00130901255fa30c4dd76c65d2ef5622b61f
ghcr.io/cozystack/cozystack/postgres-backup:0.10.1@sha256:10179ed56457460d95cd5708db2a00130901255fa30c4dd76c65d2ef5622b61f

View File

@@ -11,35 +11,34 @@ These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}

View File

@@ -13,9 +13,6 @@ spec:
jobTemplate:
spec:
backoffLimit: 2
template:
spec:
restartPolicy: OnFailure
template:
metadata:
annotations:
@@ -24,7 +21,7 @@ spec:
spec:
imagePullSecrets:
- name: {{ .Release.Name }}-regsecret
restartPolicy: Never
restartPolicy: OnFailure
containers:
- name: pgdump
image: "{{ $.Files.Get "images/postgres-backup.tag" | trim }}"

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.5.0
version: 0.6.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -11,35 +11,34 @@ These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.6.0
version: 0.7.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -11,35 +11,34 @@ These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.3.0
version: 0.4.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -11,35 +11,34 @@ These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}

View File

@@ -4,4 +4,4 @@ description: Separated tenant namespace
icon: /logos/tenant.svg
type: application
version: 1.9.1
version: 1.9.2

View File

@@ -1,7 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: cozy-tenant-configuration-hash
namespace: {{ include "tenant.name" . }}
data:
cozyTenantConfigurationHash: {{ sha256sum (toJson .Values) | quote }}

View File

@@ -24,6 +24,7 @@ spec:
ingress:
- fromEntities:
- world
- cluster
egress:
- toEntities:
- world
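
This hunk adds the cluster entity to the allowed ingress sources alongside world. For orientation, a minimal entity-based policy of the same shape is sketched below; the entity lists come from the hunk, while the kind, name, and selector are assumptions (a Cilium-style policy is assumed, since fromEntities/toEntities are Cilium policy fields).

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy              # assumed kind; the resource wrapping this spec is not shown in the hunk
metadata:
  name: example-allow-world-and-cluster
spec:
  endpointSelector: {}                 # illustrative: match all endpoints in the namespace
  ingress:
    - fromEntities:
        - world                        # traffic entering from outside the cluster
        - cluster                      # traffic from other endpoints inside the cluster
  egress:
    - toEntities:
        - world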

View File

@@ -8,7 +8,8 @@ clickhouse 0.5.0 0f312d5c
clickhouse 0.6.0 1ec10165
clickhouse 0.6.1 c62a83a7
clickhouse 0.6.2 8267072d
clickhouse 0.7.0 HEAD
clickhouse 0.7.0 93bdf411
clickhouse 0.8.0 HEAD
ferretdb 0.1.0 e9716091
ferretdb 0.1.1 91b0499a
ferretdb 0.2.0 6c5cf5bf
@@ -16,12 +17,14 @@ ferretdb 0.3.0 b8e33d19
ferretdb 0.4.0 b40e1b09
ferretdb 0.4.1 1ec10165
ferretdb 0.4.2 8267072d
ferretdb 0.5.0 HEAD
ferretdb 0.5.0 93bdf411
ferretdb 0.6.0 HEAD
http-cache 0.1.0 263e47be
http-cache 0.2.0 53f2365e
http-cache 0.3.0 6c5cf5bf
http-cache 0.3.1 0f312d5c
http-cache 0.4.0 HEAD
http-cache 0.4.0 93bdf411
http-cache 0.5.0 HEAD
kafka 0.1.0 f7eaab0a
kafka 0.2.0 c0685f43
kafka 0.2.1 dfbc210b
@@ -32,7 +35,8 @@ kafka 0.3.1 c62a83a7
kafka 0.3.2 93c46161
kafka 0.3.3 8267072d
kafka 0.4.0 85ec09b8
kafka 0.5.0 HEAD
kafka 0.5.0 93bdf411
kafka 0.6.0 HEAD
kubernetes 0.1.0 263e47be
kubernetes 0.2.0 53f2365e
kubernetes 0.3.0 007d414f
@@ -59,7 +63,8 @@ kubernetes 0.16.0 077045b0
kubernetes 0.17.0 1fbbfcd0
kubernetes 0.17.1 fd240701
kubernetes 0.18.0 721c12a7
kubernetes 0.18.1 HEAD
kubernetes 0.19.0 93bdf411
kubernetes 0.20.0 HEAD
mysql 0.1.0 263e47be
mysql 0.2.0 c24a103f
mysql 0.3.0 53f2365e
@@ -68,14 +73,16 @@ mysql 0.5.0 b40e1b09
mysql 0.5.1 0f312d5c
mysql 0.5.2 1ec10165
mysql 0.5.3 8267072d
mysql 0.6.0 HEAD
mysql 0.6.0 93bdf411
mysql 0.7.0 HEAD
nats 0.1.0 e9716091
nats 0.2.0 6c5cf5bf
nats 0.3.0 78366f19
nats 0.3.1 c62a83a7
nats 0.4.0 898374b5
nats 0.4.1 8267072d
nats 0.5.0 HEAD
nats 0.5.0 93bdf411
nats 0.6.0 HEAD
postgres 0.1.0 263e47be
postgres 0.2.0 53f2365e
postgres 0.2.1 d7cfa53c
@@ -89,7 +96,9 @@ postgres 0.7.0 4b90bf5a
postgres 0.7.1 1ec10165
postgres 0.8.0 4e68e65c
postgres 0.9.0 8267072d
postgres 0.10.0 HEAD
postgres 0.10.0 721c12a7
postgres 0.10.1 93bdf411
postgres 0.11.0 HEAD
rabbitmq 0.1.0 263e47be
rabbitmq 0.2.0 53f2365e
rabbitmq 0.3.0 6c5cf5bf
@@ -98,17 +107,20 @@ rabbitmq 0.4.1 1128d0cb
rabbitmq 0.4.2 4b90bf5a
rabbitmq 0.4.3 1ec10165
rabbitmq 0.4.4 8267072d
rabbitmq 0.5.0 HEAD
rabbitmq 0.5.0 93bdf411
rabbitmq 0.6.0 HEAD
redis 0.1.1 263e47be
redis 0.2.0 53f2365e
redis 0.3.0 6c5cf5bf
redis 0.3.1 c62a83a7
redis 0.4.0 84f3ccc0
redis 0.5.0 4e68e65c
redis 0.6.0 HEAD
redis 0.6.0 93bdf411
redis 0.7.0 HEAD
tcp-balancer 0.1.0 263e47be
tcp-balancer 0.2.0 53f2365e
tcp-balancer 0.3.0 HEAD
tcp-balancer 0.3.0 93bdf411
tcp-balancer 0.4.0 HEAD
tenant 0.1.4 afc997ef
tenant 0.1.5 e3ab858a
tenant 1.0.0 263e47be
@@ -130,7 +142,8 @@ tenant 1.6.8 bc95159a
tenant 1.7.0 24fa7222
tenant 1.8.0 160e4e2a
tenant 1.9.0 728743db
tenant 1.9.1 HEAD
tenant 1.9.1 721c12a7
tenant 1.9.2 HEAD
virtual-machine 0.1.4 f2015d65
virtual-machine 0.1.5 263e47be
virtual-machine 0.2.0 c0685f43
@@ -143,7 +156,8 @@ virtual-machine 0.7.1 0ab39f20
virtual-machine 0.8.0 3fa4dd3a
virtual-machine 0.8.1 93c46161
virtual-machine 0.8.2 de19450f
virtual-machine 0.9.0 HEAD
virtual-machine 0.9.0 721c12a7
virtual-machine 0.9.1 HEAD
vm-disk 0.1.0 d971f2ff
vm-disk 0.1.1 HEAD
vm-instance 0.1.0 1ec10165
@@ -153,9 +167,11 @@ vm-instance 0.4.0 e23286a3
vm-instance 0.4.1 0ab39f20
vm-instance 0.5.0 3fa4dd3a
vm-instance 0.5.1 de19450f
vm-instance 0.6.0 HEAD
vm-instance 0.6.0 721c12a7
vm-instance 0.6.1 HEAD
vpn 0.1.0 263e47be
vpn 0.2.0 53f2365e
vpn 0.3.0 6c5cf5bf
vpn 0.3.1 1ec10165
vpn 0.4.0 HEAD
vpn 0.4.0 93bdf411
vpn 0.5.0 HEAD

View File

@@ -17,7 +17,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.9.0
version: 0.9.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -74,7 +74,8 @@ spec:
{{- if .Values.gpus }}
gpus:
{{- range $i, $gpu := .Values.gpus }}
- deviceName: {{ $gpu.name }}
- name: gpu{{ add $i 1 }}
deviceName: {{ $gpu.name }}
{{- end }}
{{- end }}
disks:
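
With this change each passed-through GPU gets an explicit name (gpu1, gpu2, ...) in addition to the deviceName taken from values. A hypothetical values entry and the output the loop would render are sketched below; the device names are placeholders, only the field names and numbering follow the template.

# values.yaml (hypothetical)
gpus:
  - name: nvidia.com/AD102GL_L40S
  - name: nvidia.com/AD102GL_L40S

# rendered into the virtual machine spec
gpus:
  - name: gpu1
    deviceName: nvidia.com/AD102GL_L40S
  - name: gpu2
    deviceName: nvidia.com/AD102GL_L40S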

View File

@@ -17,7 +17,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.6.0
version: 0.6.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -46,7 +46,8 @@ spec:
{{- if .Values.gpus }}
gpus:
{{- range $i, $gpu := .Values.gpus }}
- deviceName: {{ $gpu.name }}
- name: gpu{{ add $i 1 }}
deviceName: {{ $gpu.name }}
{{- end }}
{{- end }}
disks:

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.4.0
version: 0.5.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -11,35 +11,34 @@ These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
"limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
"requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}

View File

@@ -3,24 +3,24 @@
arch: amd64
platform: metal
secureboot: false
version: v1.9.5
version: v1.10.1
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: ghcr.io/siderolabs/installer:v1.9.5
imageRef: ghcr.io/siderolabs/installer:v1.10.1
systemExtensions:
- imageRef: ghcr.io/siderolabs/amd-ucode:20250311
- imageRef: ghcr.io/siderolabs/amd-ucode:20250410
- imageRef: ghcr.io/siderolabs/amdgpu-firmware:20241110
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250311
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250410
- imageRef: ghcr.io/siderolabs/i915-ucode:20241110
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250311
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250410
- imageRef: ghcr.io/siderolabs/intel-ucode:20250211
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250311
- imageRef: ghcr.io/siderolabs/drbd:9.2.12-v1.9.5
- imageRef: ghcr.io/siderolabs/zfs:2.2.7-v1.9.5
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250410
- imageRef: ghcr.io/siderolabs/drbd:9.2.13-v1.10.1
- imageRef: ghcr.io/siderolabs/zfs:2.3.1-v1.10.1
output:
kind: initramfs
imageOptions: {}

View File

@@ -3,24 +3,24 @@
arch: amd64
platform: metal
secureboot: false
version: v1.9.5
version: v1.10.1
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: ghcr.io/siderolabs/installer:v1.9.5
imageRef: ghcr.io/siderolabs/installer:v1.10.1
systemExtensions:
- imageRef: ghcr.io/siderolabs/amd-ucode:20250311
- imageRef: ghcr.io/siderolabs/amd-ucode:20250410
- imageRef: ghcr.io/siderolabs/amdgpu-firmware:20241110
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250311
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250410
- imageRef: ghcr.io/siderolabs/i915-ucode:20241110
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250311
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250410
- imageRef: ghcr.io/siderolabs/intel-ucode:20250211
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250311
- imageRef: ghcr.io/siderolabs/drbd:9.2.12-v1.9.5
- imageRef: ghcr.io/siderolabs/zfs:2.2.7-v1.9.5
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250410
- imageRef: ghcr.io/siderolabs/drbd:9.2.13-v1.10.1
- imageRef: ghcr.io/siderolabs/zfs:2.3.1-v1.10.1
output:
kind: installer
imageOptions: {}

View File

@@ -3,24 +3,24 @@
arch: amd64
platform: metal
secureboot: false
version: v1.9.5
version: v1.10.1
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: ghcr.io/siderolabs/installer:v1.9.5
imageRef: ghcr.io/siderolabs/installer:v1.10.1
systemExtensions:
- imageRef: ghcr.io/siderolabs/amd-ucode:20250311
- imageRef: ghcr.io/siderolabs/amd-ucode:20250410
- imageRef: ghcr.io/siderolabs/amdgpu-firmware:20241110
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250311
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250410
- imageRef: ghcr.io/siderolabs/i915-ucode:20241110
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250311
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250410
- imageRef: ghcr.io/siderolabs/intel-ucode:20250211
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250311
- imageRef: ghcr.io/siderolabs/drbd:9.2.12-v1.9.5
- imageRef: ghcr.io/siderolabs/zfs:2.2.7-v1.9.5
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250410
- imageRef: ghcr.io/siderolabs/drbd:9.2.13-v1.10.1
- imageRef: ghcr.io/siderolabs/zfs:2.3.1-v1.10.1
output:
kind: iso
imageOptions: {}

View File

@@ -3,24 +3,24 @@
arch: amd64
platform: metal
secureboot: false
version: v1.9.5
version: v1.10.1
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: ghcr.io/siderolabs/installer:v1.9.5
imageRef: ghcr.io/siderolabs/installer:v1.10.1
systemExtensions:
- imageRef: ghcr.io/siderolabs/amd-ucode:20250311
- imageRef: ghcr.io/siderolabs/amd-ucode:20250410
- imageRef: ghcr.io/siderolabs/amdgpu-firmware:20241110
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250311
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250410
- imageRef: ghcr.io/siderolabs/i915-ucode:20241110
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250311
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250410
- imageRef: ghcr.io/siderolabs/intel-ucode:20250211
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250311
- imageRef: ghcr.io/siderolabs/drbd:9.2.12-v1.9.5
- imageRef: ghcr.io/siderolabs/zfs:2.2.7-v1.9.5
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250410
- imageRef: ghcr.io/siderolabs/drbd:9.2.13-v1.10.1
- imageRef: ghcr.io/siderolabs/zfs:2.3.1-v1.10.1
output:
kind: kernel
imageOptions: {}

Some files were not shown because too many files have changed in this diff.