Compare commits


98 Commits

Author SHA1 Message Date
Andrei Kvapil
b1dac3c3c9 Release v0.41.11 (#2185)
This PR prepares the release `v0.41.11`.
2026-03-10 21:21:40 +01:00
cozystack-bot
ab9643c35e Prepare release v0.41.11
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-03-10 11:48:01 +00:00
Andrei Kvapil
c720bde0e9 fix(etcd-operator): replace deprecated kube-rbac-proxy image (#2181)
## Summary
- Replace deprecated `gcr.io/kubebuilder/kube-rbac-proxy:v0.16.0` with
`quay.io/brancz/kube-rbac-proxy:v0.18.1` in the vendored etcd-operator
chart
- The GCR-hosted image became unavailable after March 18, 2025

Fixes #2172 #488


## Summary by CodeRabbit

* **Chores**
* Updated proxy component to v0.18.1 with configuration changes for
improved stability and compatibility.

2026-03-10 12:38:58 +01:00
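The image swap above amounts to a one-line change in the vendored chart. A hedged sketch of the before/after (the `rbacProxy` key name is illustrative, not the chart's actual value path):

```yaml
# Before (GCR-hosted image, unavailable since March 18, 2025):
#   image: gcr.io/kubebuilder/kube-rbac-proxy:v0.16.0
# After (maintained build on Quay):
rbacProxy:   # illustrative key; consult the vendored etcd-operator chart for the real path
  image: quay.io/brancz/kube-rbac-proxy:v0.18.1
```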
Andrei Kvapil
c7b2f60d18 Release v0.41.10 (#2139)
This PR prepares the release `v0.41.10`.
2026-03-04 00:24:11 +01:00
cozystack-bot
2a766df6e0 Prepare release v0.41.10
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-03-03 01:36:20 +00:00
Andrei Kvapil
d2ac669b29 fix(platform): correct cozy-proxy releaseName to avoid conflict with installer (#2127)
## What this PR does

Fixes cozy-proxy `releaseName` from `cozystack` to `cozy-proxy` in the paas-full and distro-full bundles.

The cozy-proxy component was incorrectly configured with `releaseName: cozystack`, which is the same name used by the installer helm release. During upgrade to v1.0, the cozy-proxy HelmRelease reconciles and overwrites the installer release, deleting the cozystack-operator deployment.

### Release note

```release-note
[platform] Fix cozy-proxy releaseName collision with installer that caused operator deletion during v1.0 upgrade
```
2026-03-02 12:57:26 +01:00
Andrei Kvapil
e7bfa9b138 fix(platform): correct cozy-proxy releaseName to avoid conflict with installer
The cozy-proxy component was incorrectly configured with
releaseName: cozystack, which collides with the installer helm release
name. This causes the cozy-proxy HelmRelease to overwrite the installer
release during upgrade to v1.0, deleting the cozystack-operator.

Change releaseName from "cozystack" to "cozy-proxy" in both paas-full
and distro-full bundles.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-03-02 12:55:22 +01:00
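A minimal sketch of what the corrected bundle entry looks like; the field layout follows the commit description, not the actual bundle schema, and the namespace is an assumption:

```yaml
# paas-full / distro-full bundle fragment (illustrative)
- name: cozy-proxy
  releaseName: cozy-proxy   # was "cozystack", colliding with the installer's helm release
  namespace: cozy-system    # assumed namespace
```

Because Flux identifies a helm release by releaseName and namespace, two HelmReleases sharing one name fight over the same release history, which is how the operator deployment got deleted.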
Andrei Kvapil
d5a5d31354 Release v0.41.9 (#2078)
This PR prepares the release `v0.41.9`.
2026-02-21 21:48:10 +01:00
cozystack-bot
dd67bd56c4 Prepare release v0.41.9
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-02-21 01:37:37 +00:00
Andrei Kvapil
513b2e20df Update Kube-OVN to v1.15.3
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-02-20 10:51:09 +01:00
Andrei Kvapil
8d8f7defd7 fix(cozystack-basics): Deny resourcequotas deletion for tenant admin (#2076)


```release-note
Fixed cozy:tenant:admin:base ClusterRole to deny deletion of tenant ResourceQuotas for the tenant admin and superadmin
```


* **Bug Fixes**
* Removed resource quota management permissions from tenant admin role
to reduce unnecessary administrative access.

2026-02-20 10:28:12 +01:00
Andrei Kvapil
7bcc3a3d01 Release v0.41.8 (#2029)
This PR prepares the release `v0.41.8`.
2026-02-11 17:09:01 +01:00
cozystack-bot
ff10d684da Prepare release v0.41.8
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-02-11 11:20:45 +00:00
Andrei Kvapil
dfb280d091 [Backport release-0.41] [dashboard] Add startupProbe to prevent container restarts on slow hardware (#2014)
# Description
Backport of #1996 to `release-0.41`.
2026-02-10 12:30:39 +01:00
Andrei Kvapil
32b1bc843a [Backport release-0.41] [vm] allow switching between instancetype and custom resources (#2013)
# Description
Backport of #2008 to `release-0.41`.
2026-02-10 12:30:11 +01:00
Andrei Kvapil
2c87a83949 [Backport release-0.41] feat(kubernetes): auto-enable Gateway API support in cert-manager (#2012)
# Description
Backport of #1997 to `release-0.41`.
2026-02-10 12:29:57 +01:00
Andrei Kvapil
a53df5eb90 fix(dashboard): add startupProbe to prevent container restarts on slow hardware
Kubelet kills bff and web containers on slow hardware because the
livenessProbe only allows 33 seconds for startup. Add startupProbe
with failureThreshold=30 and periodSeconds=2, giving containers up
to 60 seconds to start before livenessProbe kicks in.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
(cherry picked from commit 330cbe70d4)
2026-02-10 11:22:41 +00:00
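The probe math above works out to 30 attempts × 2 s = 60 s of grace before the livenessProbe takes over. A sketch of the resulting container spec fragment; the `httpGet` path and port are illustrative assumptions, only the threshold and period come from the commit:

```yaml
startupProbe:
  httpGet:
    path: /healthz   # assumed endpoint
    port: 8080       # assumed port
  failureThreshold: 30   # 30 attempts x 2s = up to 60s to start
  periodSeconds: 2
# livenessProbe begins only after the startupProbe succeeds
```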
Kirill Ilin
b212dc02f3 [vm] add validation for resources
Signed-off-by: Kirill Ilin <stitch14@yandex.ru>
(cherry picked from commit 13d848efc3)
2026-02-10 11:20:54 +00:00
Kirill Ilin
ec50052ea4 [vm] allow switching between instancetype and custom resources
Implemented by upgrade hook atomically patching VM resource

Signed-off-by: Kirill Ilin <stitch14@yandex.ru>
(cherry picked from commit cf2c6bc15f)
2026-02-10 11:20:54 +00:00
Andrei Kvapil
9b61d1318c feat(kubernetes): auto-enable Gateway API support in cert-manager
When the Gateway API addon is enabled, automatically configure
cert-manager with enableGatewayAPI: true. Uses the same default
values + mergeOverwrite pattern as Cilium for consistency.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
(cherry picked from commit 90ac6de475)
2026-02-10 11:20:04 +00:00
Andrei Kvapil
1c3a5f721c Release v0.41.7 (#1995)
This PR prepares the release `v0.41.7`.
2026-02-06 08:56:31 +01:00
cozystack-bot
6274f91c74 Prepare release v0.41.7
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-02-06 01:40:42 +00:00
Andrei Kvapil
f347b4fd70 [Backport release-0.41] fix(postgres-operator): correct PromQL syntax in CNPGClusterOffline alert (#1989)
# Description
Backport of #1981 to `release-0.41`.
2026-02-05 20:34:33 +01:00
mattia-eleuteri
40d51f4f92 fix(postgres-operator): correct PromQL syntax in CNPGClusterOffline alert
Remove extra closing parenthesis in the CNPGClusterOffline alert expression
that causes vmalert pods to crash with "bad prometheus expr" error.

Signed-off-by: mattia-eleuteri <mattia@hidora.io>
(cherry picked from commit 2cb299e602)
2026-02-05 19:25:36 +00:00
Andrei Kvapil
38c73ae3bd [Backport release-0.41] [dashboard] Verify JWT token (#1983)
# Description
Backport of #1980 to `release-0.41`.
2026-02-05 09:39:31 +01:00
Timofei Larkin
0496a1b0e8 [dashboard] Verify JWT token
## What this PR does

When OIDC is disabled, the dashboard's token-proxy now properly
validates bearer tokens against the k8s API's JWKS url.

### Release note

```release-note
[dashboard] Verify bearer tokens against the issuer's JWKS url.
```

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
(cherry picked from commit 23e399bd9a)
2026-02-04 15:11:15 +00:00
Andrei Kvapil
b49a6d1152 Release v0.41.6 (#1979)
This PR prepares the release `v0.41.6`.
2026-02-04 04:02:38 +01:00
cozystack-bot
0dac208d43 Prepare release v0.41.6
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-02-04 01:41:27 +00:00
Andrei Kvapil
30adc52ce3 [Backport release-0.41] fix coredns serviceaccount to match kubernetes bootstrap rbac (#1978)
# Description
Backport of #1958 to `release-0.41`.
2026-02-04 02:04:27 +01:00
mattia-eleuteri
044dae0d1e fix coredns serviceaccount to match kubernetes bootstrap rbac
The Kubernetes bootstrap creates a ClusterRoleBinding 'system:kube-dns'
that references ServiceAccount 'kube-dns' in 'kube-system'. However,
the coredns chart was using the 'default' ServiceAccount because
serviceAccount.create was not enabled.

This caused CoreDNS pods to fail with 'Failed to watch' errors after
restarts, as they lacked RBAC permissions to watch the Kubernetes API.

Configure the chart to create the 'kube-dns' ServiceAccount, which
matches the expected binding from Kubernetes bootstrap.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: mattia-eleuteri <mattia@hidora.io>
(cherry picked from commit 7320edd71d)
2026-02-04 01:02:13 +00:00
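In chart-values terms, the fix described above is a sketch like the following (standard coredns chart keys, shown here as an illustration of the commit message rather than a verbatim excerpt):

```yaml
# coredns chart values: create the ServiceAccount the bootstrap RBAC expects
serviceAccount:
  create: true
  name: kube-dns   # matches the subject of the system:kube-dns ClusterRoleBinding
```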
Andrei Kvapil
26e083a71e [Backport release-0.41] [1.0][branding] Separate values for keycloak (#1963)
# Description
Backport of #1947 to `release-0.41`.
2026-02-03 13:05:54 +01:00
Andrei Kvapil
8468711545 [Backport release-0.41] [vm] allow changing field external after creation (#1962)
# Description
Backport of #1956 to `release-0.41`.
2026-02-03 13:05:44 +01:00
Andrei Kvapil
462ab1bdcb Release v0.41.5 (#1936)
This PR prepares the release `v0.41.5`.
2026-02-03 08:35:52 +01:00
cozystack-bot
a3821162af Prepare release v0.41.5
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-02-03 01:39:08 +00:00
Andrei Kvapil
0838bafdb9 [Backport release-0.41] fix manifests for kubernetes deployment (#1945)
# Description
Backport of #1943 to `release-0.41`.
2026-02-02 22:08:03 +01:00
Andrei Kvapil
9723992410 [0.41][branding] Separate values for keycloak (#1946)
## What this PR does
Adds separate values to keycloak branding.

### Release note
```release-note
Added separate values to keycloak branding
```
2026-02-02 22:07:08 +01:00
nbykov0
3b904d83a8 [branding] Separate values for keycloak
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
(cherry picked from commit 8a034c58b1)
2026-02-02 21:06:57 +00:00
Kirill Ilin
96b801b06b [vm] allow changing field external after creation
Service will be recreated

Signed-off-by: Kirill Ilin <stitch14@yandex.ru>
(cherry picked from commit 3a8e8fc290)
2026-02-02 21:05:22 +00:00
Andrei Kvapil
4048234b9d [Backport release-0.41] Add instance profile label to workload monitor (#1957)
# Description
Backport of #1954 to `release-0.41`.
2026-02-02 22:04:28 +01:00
Timofei Larkin
b8d32fb894 Apply suggestion from @gemini-code-assist[bot]
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
(cherry picked from commit 09cd9e05c3)
2026-02-02 14:15:16 +00:00
Matthieu ROBIN
eae630ffb5 Update internal/controller/workloadmonitor_controller.go
Co-authored-by: Timofei Larkin <lllamnyp@gmail.com>
Signed-off-by: Matthieu ROBIN <info@matthieurobin.com>
(cherry picked from commit 3f59ce4876)
2026-02-02 14:15:16 +00:00
Matthieu ROBIN
c514d7525b Add instance profile label to workload monitor
Signed-off-by: Matthieu ROBIN <info@matthieurobin.com>
(cherry picked from commit 1e8da1fca4)
2026-02-02 14:15:16 +00:00
nbykov0
d0bb00f3cd [branding] Separate values for keycloak
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2026-02-02 12:51:05 +03:00
IvanHunters
6db4bb15d2 fix manifests for kubernetes deployment
Signed-off-by: IvanHunters <xorokhotnikov@gmail.com>
(cherry picked from commit 281715b365)
2026-02-02 09:33:24 +00:00
Andrei Kvapil
4f3502456f [Backport release-0.41] [dashboard] Add resource quota usage to tenant details page (#1932)
# Description
Backport of #1929 to `release-0.41`.
2026-01-29 10:31:39 +01:00
Andrei Kvapil
8d803cd619 [Backport release-0.41] [dashboard] Add "Edit" button to all resources (#1931)
# Description
Backport of #1928 to `release-0.41`.
2026-01-29 10:31:28 +01:00
Kirill Ilin
7ebcc0d264 [dashboard] Add resource quota usage to tenant info resource
Signed-off-by: Kirill Ilin <stitch14@yandex.ru>
(cherry picked from commit 9e63bd533c)
2026-01-29 09:30:42 +00:00
Kirill Ilin
21e7183375 [dashboard] Add "Edit" button to all resources
Signed-off-by: Kirill Ilin <stitch14@yandex.ru>
(cherry picked from commit a56fc00c5c)
2026-01-29 09:29:12 +00:00
Andrei Kvapil
760c732ed6 Release v0.41.4 (#1926)
This PR prepares the release `v0.41.4`.
2026-01-29 10:08:40 +01:00
cozystack-bot
c7e54262f1 Prepare release v0.41.4
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-01-29 01:39:21 +00:00
Andrei Kvapil
6d772811dd Update cozyhr v1.6.1
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-01-27 23:10:34 +01:00
Andrei Kvapil
c8dbad7dc9 Release v0.41.3 (#1916)
This PR prepares the release `v0.41.3`.
2026-01-27 18:56:59 +01:00
Andrei Kvapil
f54a1e6911 [Backport release-0.41] [dashboard] Improve dashboard session params (#1920)
# Description
Backport of #1913 to `release-0.41`.
2026-01-27 18:56:38 +01:00
Timofei Larkin
ca3d5e1a5b [dashboard] Improve dashboard session params
## What this PR does

This patch enables the `offline_access` scope for the dashboard keycloak
client, so that users get a refresh token which gatekeeper can use to
automatically refresh an expiring access token. Also session timeouts
were increased.

### Release note

```release-note
[dashboard] Increase session timeouts, add the offline_access scope,
enable refresh tokens to improve the overall user experience when
working with the dashboard.
```

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
(cherry picked from commit 7cebafbafd)
2026-01-27 17:18:43 +00:00
cozystack-bot
e67aed0e6c Prepare release v0.41.3
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-01-27 01:37:01 +00:00
Andrei Kvapil
0129f20ae4 [Backport release-0.41] [kubernetes] show Service and Ingress resources for kubernetes app in… (#1915)
# Description
Backport of #1912 to `release-0.41`.
2026-01-26 18:36:14 +01:00
Andrei Kvapil
58e2b4c632 [Backport release-0.41] [dashboard] Fix filtering on Pods tab for Service (#1914)
# Description
Backport of #1909 to `release-0.41`.
2026-01-26 18:36:05 +01:00
Kirill Ilin
056a0d801e [kubernetes] show Service and Ingress resources for kubernetes app in dashboard
Signed-off-by: Kirill Ilin <stitch14@yandex.ru>
(cherry picked from commit befbdf0964)
2026-01-26 17:35:10 +00:00
Kirill Ilin
f8b2aa8343 [dashboard] Fix filtering on Pods tab for Service
Signed-off-by: Kirill Ilin <stitch14@yandex.ru>
(cherry picked from commit ee759dd11e)
2026-01-26 17:33:29 +00:00
Andrei Kvapil
f1f1ff5681 Release v0.41.2 (#1907)
This PR prepares the release `v0.41.2`.
2026-01-23 09:07:49 +01:00
cozystack-bot
d4ffce1ff6 Prepare release v0.41.2
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-01-23 01:34:38 +00:00
Andrei Kvapil
a24174a0c3 [Backport release-0.41] [monitoring-agents] Set minReplicas to 1 for VPA for VMAgent (#1905)
# Description
Backport of #1894 to `release-0.41`.
2026-01-22 23:18:11 +01:00
Kirill Ilin
d91f1f1882 [monitoring-agents] Set minReplicas to 1 for VPA for VMAgent
Signed-off-by: Kirill Ilin <stitch14@yandex.ru>
(cherry picked from commit 207a5171f0)
2026-01-22 22:17:25 +00:00
Andrei Kvapil
e38b7a0afd [Backport release-0.41] [mongodb] Remove user-configurable images from MongoDB chart (#1904)
# Description
Backport of #1901 to `release-0.41`.
2026-01-22 23:16:34 +01:00
Andrei Kvapil
247e19252f [mongodb] Remove user-configurable images from MongoDB chart
Remove the ability for users to specify custom container images
(images.pmm and images.backup) in the MongoDB application values.

This is a security hardening measure - allowing users to specify
arbitrary container images could lead to running malicious or
compromised images, supply chain attacks, or privilege escalation.

The images are now hardcoded in the template:
- percona/pmm-client:2.44.1
- percona/percona-backup-mongodb:2.11.0

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
(cherry picked from commit beb6e1a0ba)
2026-01-22 22:15:29 +00:00
Andrei Kvapil
211d57b17a Release v0.41.1 (#1900)
This PR prepares the release `v0.41.1`.
2026-01-22 09:14:27 +01:00
cozystack-bot
6d3f5bbc60 Prepare release v0.41.1
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-01-22 01:34:35 +00:00
Andrei Kvapil
4532603bbd [Backport release-0.41] [kubernetes] Add enum validation for IngressNginx exposeMethod (#1897)
# Description
Backport of #1895 to `release-0.41`.
2026-01-21 13:55:36 +01:00
Kirill Ilin
73cd3edbb7 [kubernetes] Add enum validation for IngressNginx exposeMethod
Signed-off-by: Kirill Ilin <stitch14@yandex.ru>
(cherry picked from commit 0b95a72fa3)
2026-01-21 12:55:19 +00:00
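Enum validation like this typically lives in the chart's `values.schema.json`. A hedged sketch — the exact enum values are an assumption based on Cozystack's documented expose methods, not taken from this commit:

```json
{
  "properties": {
    "ingressNginx": {
      "properties": {
        "exposeMethod": {
          "type": "string",
          "enum": ["Proxied", "LoadBalancer"]
        }
      }
    }
  }
}
```

With the schema in place, `helm install`/`helm upgrade` rejects any value outside the enum instead of silently deploying a misconfigured controller.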
Andrei Kvapil
ea466820fc Release v0.41.0 (#1889)
This PR prepares the release `v0.41.0`.
2026-01-20 03:32:28 +01:00
cozystack-bot
31422aba38 Prepare release v0.41.0
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-01-20 01:21:24 +00:00
Andrei Kvapil
8f6a09bca5 [Backport release-0.41][apps] Add MongoDB managed application (#1881)
# Description
Backport of #1822 to `release-0.41`.
2026-01-20 02:15:08 +01:00
Andrei Kvapil
d13a45c8d6 Update Talos Linux v1.11.6 (#1879)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


## What this PR does

This PR updates Talos Linux which includes DRBD fixes:
- https://github.com/LINBIT/drbd/releases/tag/drbd-9.2.15
- https://github.com/LINBIT/drbd/releases/tag/drbd-9.2.16

### Release note


```release-note
Update Talos Linux to v1.11.6.
```
2026-01-20 01:54:40 +01:00
Andrei Kvapil
6b3cd40ba5 [Backport release-0.40] [fix] Fix view of loadbalancer ip in services window (#1887)
# Description
Backport of #1884 to `release-0.40`.
2026-01-20 01:54:17 +01:00
IvanHunters
838d1abff6 [fix] Fix view of loadbalancer ip in services window
Signed-off-by: IvanHunters <xorokhotnikov@gmail.com>
(cherry picked from commit 5c7c3359b3)
2026-01-20 00:48:39 +00:00
Andrei Kvapil
4cdd1113d1 [Backport release-0.40] fix(kubernetes): increase kube-apiserver startup probe threshold (#1883)
# Description
Backport of #1876 to `release-0.40`.
2026-01-19 20:13:03 +01:00
Andrei Kvapil
4d7ee809e5 [Backport release-0.40] [kubernetes] Increase default apiServer resourcesPreset to large (#1882)
# Description
Backport of #1875 to `release-0.40`.
2026-01-19 20:12:53 +01:00
Andrei Kvapil
125a57eb97 fix(kubernetes): increase kube-apiserver startup probe threshold
Increase startupProbe.failureThreshold from 3 to 30 for kube-apiserver
container. The default 30 seconds (3 failures * 10s period) is often
insufficient for apiserver to complete RBAC bootstrap, especially under
load or with slower etcd connections.

This change gives kube-apiserver up to 300 seconds to initialize,
preventing unnecessary CrashLoopBackOff cycles.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
(cherry picked from commit 8261ea4fcf)
2026-01-19 19:12:43 +00:00
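The numbers in the commit message give kube-apiserver 30 × 10 s = 300 s instead of the old 3 × 10 s = 30 s. A sketch of the changed probe fields (only `failureThreshold` and `periodSeconds` are from the commit; the rest of the probe is omitted):

```yaml
# kube-apiserver container, after the change
startupProbe:
  failureThreshold: 30   # was 3; 30 x 10s = up to 300s for RBAC bootstrap
  periodSeconds: 10
```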
Andrei Kvapil
d33a2357ed fix(kubernetes): increase default apiServer resourcesPreset to large
Change the default resourcesPreset for kube-apiserver from "medium" to
"large" to prevent OOM kills during normal operation. The "medium"
preset provides insufficient memory for clusters with moderate workloads.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
(cherry picked from commit 6256893040)
2026-01-19 19:12:33 +00:00
Aleksei Sviridkin
6ce60917c7 feat(apps): add MongoDB managed application
Add MongoDB managed service based on Percona Operator for MongoDB with:

- Replica set mode (default) and sharded cluster mode
- Configurable replicas, storage, and resource presets
- Custom users with role-based access control
- S3-compatible backup with PITR support
- Bootstrap/restore from backup
- External access support
- WorkloadMonitor integration for dashboard
- Comprehensive helm-unittest test coverage (91 tests)

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-01-19 13:45:57 +01:00
Andrei Kvapil
a8a7e510ac [Backport release-0.40] [etcd] Increase probe thresholds for better recovery (#1878)
# Description
Backport of #1874 to `release-0.40`.
2026-01-19 13:13:49 +01:00
Andrei Kvapil
cf0598bcf1 Update Talos Linux v1.11.6
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-01-19 11:01:48 +01:00
Andrei Kvapil
6169220317 fix(etcd): increase probe thresholds for better recovery
Increase startup probe failureThreshold to 300 (25 minutes) to allow
etcd members more time to sync with the cluster after restart or
recovery. This prevents pods from being killed during initial
synchronization when VPA assigns minimal resources.

Also increase liveness probe failureThreshold to 10 to reduce
unnecessary restarts during temporary network issues.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
(cherry picked from commit c58c959df6)
2026-01-19 09:56:16 +00:00
Andrei Kvapil
fbe33e1191 Release v0.40.3 (#1871)
This PR prepares the release `v0.40.3`.
2026-01-19 09:04:37 +01:00
cozystack-bot
9507f24332 Prepare release v0.40.3
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-01-19 01:34:56 +00:00
Andrei Kvapil
2aa839b68f [Backport release-0.40] [cilium] Update cilium to v1.18.6 (#1870)
# Description
Backport of #1868 to `release-0.40`.
2026-01-16 20:49:18 +01:00
Kirill Ilin
40683e28a7 [cilium] Update cilium to v1.18.6
Signed-off-by: Kirill Ilin <stitch14@yandex.ru>
(cherry picked from commit 464b6b3bb6)
2026-01-16 19:15:21 +00:00
Andrei Kvapil
4ddb374680 [apiserver] Fix Watch resourceVersion and bookmark handling (#1860)
Fixes aggregated API server Watch implementation to properly work with
controller-runtime informers. External controllers watching Tenant
resources via the aggregated API were experiencing issues with cache
sync timeouts and missing reconciliations on startup.

**ResourceVersion handling in List:**
- Compute ResourceVersion from items when the cached client doesn't set
it on the list itself

**Bookmark handling:**
- Pass through bookmark events with converted types for proper informer
sync

**ADDED event filtering (main fix):**
- Simplified the filtering logic that was incorrectly skipping initial
events
- Only skip ADDED events when `startingRV > 0` AND `objRV <= startingRV`
(the client already has these objects from its initial List)
- When `startingRV == 0`, always send ADDED events (client wants full
state)
- Removed the complex `initialSyncComplete` tracking that had inverted
logic

When a controller starts watching resources, controller-runtime may call
Watch with `resourceVersion=""`. The server should send all existing
objects as ADDED events. The previous `initialSyncComplete` logic was
inverted and could skip these events, causing objects (like Tenants with
lock annotations) to not be reconciled on controller startup.

```release-note
[apiserver] Fix Watch resourceVersion and bookmark handling for controller-runtime compatibility
```


* **Improvements**
* Enhanced resource watching and synchronization with Kubernetes 1.27+
compatibility, including proper handling of initial events and
bookmarks.
* Optimized event filtering and resource version tracking for
applications, tenant modules, tenant namespaces, and tenant secrets to
reduce unnecessary event noise.
* Improved list metadata consistency by deriving accurate resource
versions when unavailable in responses.


2026-01-16 17:23:38 +01:00
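The ADDED-event rule described above reduces to a single predicate. A minimal illustration of that logic (not the actual apiserver code; names are invented for the sketch):

```go
package main

import "fmt"

// shouldSkipAdded reports whether an ADDED watch event may be dropped.
// Skip only when the client supplied a non-zero starting resourceVersion
// (it already has the object from a prior List) AND the object's
// resourceVersion is not newer than that starting point. When
// startingRV == 0 the client asked for full state, so every ADDED
// event must be delivered.
func shouldSkipAdded(startingRV, objRV uint64) bool {
	return startingRV > 0 && objRV <= startingRV
}

func main() {
	fmt.Println(shouldSkipAdded(0, 5))   // false: RV=0 means "send full state"
	fmt.Println(shouldSkipAdded(10, 5))  // true: client already saw RV 5 in its List
	fmt.Println(shouldSkipAdded(10, 12)) // false: newer than the List snapshot
}
```

The previous `initialSyncComplete` tracking inverted exactly this condition, which is why objects present before the controller started were never delivered as ADDED events.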
Andrei Kvapil
e07fafb393 Release v0.40.2 (#1858)
This PR prepares the release `v0.40.2`.
2026-01-13 17:22:13 +01:00
cozystack-bot
7b932a400d Prepare release v0.40.2
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-01-13 12:32:18 +00:00
Andrei Kvapil
8f317c9065 [Backport release-0.40] [linstor] Refactor node-level RWX validation (#1857)
# Description
Backport of #1856 to `release-0.40`.
2026-01-13 13:28:11 +01:00
Andrei Kvapil
e73bf9905d [linstor] Refactor node-level RWX validation
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
(cherry picked from commit 3c4f0cd952)
2026-01-13 12:27:52 +00:00
Andrei Kvapil
c5222aae97 [linstor] Remove node-level RWX validation (#1851)
## What this PR does

Removes node-level RWX block validation from linstor-csi as
controller-level check is sufficient. The controller already validates
that all pods attached to RWX block volume belong to the same VM by
extracting vmName from pod owner references (VirtualMachineInstance).

This simplifies the validation logic and fixes VM live migration issues.

### Release note

```release-note
[linstor] Remove node-level RWX block validation to fix VM live migration
```


## Summary by CodeRabbit

* **Improvements**
* Enhanced RWX (read-write-many) block validation with VM-aware checks
across node and controller flows, including support for hotplug-disk
pods and stricter prevention of cross-VM block sharing.
* Improved propagation and resolution of VM identity for attachments to
ensure consistent validation.

* **Tests**
* Added comprehensive unit tests covering single/multiple pod scenarios,
VM ownership, hotplug disks, upgrade paths, and legacy volumes.


2026-01-13 11:28:03 +01:00
Andrei Kvapil
e176cdec87 feat(linstor): add linstor-csi image build with RWX validation
Add custom linstor-csi image build to packages/system/linstor:

- Add Dockerfile based on upstream linstor-csi
- Import patch from upstream PR #403 for RWX block volume validation
  (prevents misuse of allow-two-primaries in KubeVirt live migration)
- Update Makefile to build both piraeus-server and linstor-csi images
- Configure LinstorCluster CR to use custom linstor-csi image in
  CSI controller and node pods

The RWX validation patch ensures that RWX block volumes with
allow-two-primaries are only used by pods belonging to the same
KubeVirt VM during live migration.

Upstream PR: https://github.com/piraeusdatastore/linstor-csi/pull/403

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-01-13 11:27:55 +01:00
Andrei Kvapil
477d391440 Release v0.40.1 (#1855)
This PR prepares the release `v0.40.1`.
2026-01-13 08:04:35 +01:00
cozystack-bot
9ba76d4839 Prepare release v0.40.1
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-01-13 01:39:20 +00:00
Andrei Kvapil
f5ccbe94f9 [Backport release-0.40] [linstor] Update piraeus-server patches with critical fixes (#1852)
# Description
Backport of #1850 to `release-0.40`.
2026-01-12 23:23:59 +01:00
Andrei Kvapil
f900db6338 [linstor] Update piraeus-server patches with critical fixes
Update piraeus-server patches to address critical production issues:

- Add fix-duplicate-tcp-ports.diff to prevent duplicate TCP ports
  after toggle-disk operations (upstream PR #476)

- Update skip-adjust-when-device-inaccessible.diff with comprehensive
  fixes for resources stuck in StandAlone after reboot, Unknown state
  race condition, and encrypted LUKS resource deletion (upstream PR #477)

```release-note
[linstor] Fix DRBD resources stuck in StandAlone state after reboot and encrypted resource deletion issues
```

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
(cherry picked from commit dc2773ba26)
2026-01-12 22:23:42 +00:00
154 changed files with 32029 additions and 368 deletions

View File

@@ -0,0 +1,31 @@
#!/usr/bin/env bats

@test "Create DB MongoDB" {
  name='test'
  kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: MongoDB
metadata:
  name: $name
  namespace: tenant-test
spec:
  external: false
  size: 10Gi
  replicas: 1
  storageClass: ""
  resourcesPreset: "nano"
  backup:
    enabled: false
EOF
  sleep 5
  # Wait for HelmRelease
  kubectl -n tenant-test wait hr mongodb-$name --timeout=60s --for=condition=ready
  # Wait for MongoDB service (port 27017)
  timeout 120 sh -ec "until kubectl -n tenant-test get svc mongodb-$name-rs0 -o jsonpath='{.spec.ports[0].port}' | grep -q '27017'; do sleep 10; done"
  # Wait for endpoints
  timeout 180 sh -ec "until kubectl -n tenant-test get endpoints mongodb-$name-rs0 -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
  # Wait for StatefulSet replicas
  kubectl -n tenant-test wait statefulset.apps/mongodb-$name-rs0 --timeout=300s --for=jsonpath='{.status.replicas}'=1
  # Cleanup
  kubectl -n tenant-test delete mongodbs.apps.cozystack.io $name
}

View File

@@ -34,9 +34,6 @@ func (m *Manager) ensureCustomColumnsOverride(ctx context.Context, crd *cozyv1al
 	obj.SetName(name)
 	href := fmt.Sprintf("/openapi-ui/{2}/{reqsJsonPath[0]['.metadata.namespace']['-']}/factory/%s/{reqsJsonPath[0]['.metadata.name']['-']}", detailsSegment)
-	if g == "apps.cozystack.io" && kind == "Tenant" && plural == "tenants" {
-		href = "/openapi-ui/{2}/{reqsJsonPath[0]['.status.namespace']['-']}/api-table/core.cozystack.io/v1alpha1/tenantmodules"
-	}
 	desired := map[string]any{
 		"spec": map[string]any{
View File

@@ -174,6 +174,48 @@ func detailsTab(kind, endpoint, schemaJSON string, keysOrder [][]string) map[str
 			}
 		}),
 	)
+	if kind == "Info" {
+		rightColStack = append(rightColStack,
+			antdFlexVertical("resource-quotas-block", 4, []any{
+				antdText("resource-quotas-label", true, "Resource Quotas", map[string]any{
+					"fontSize":     float64(20),
+					"marginBottom": float64(12),
+				}),
+				map[string]any{
+					"type": "EnrichedTable",
+					"data": map[string]any{
+						"id":                   "resource-quotas-table",
+						"baseprefix":           "/openapi-ui",
+						"clusterNamePartOfUrl": "{2}",
+						"customizationId":      "factory-resource-quotas",
+						"fetchUrl":             "/api/clusters/{2}/k8s/api/v1/namespaces/{3}/resourcequotas",
+						"pathToItems":          []any{`items`},
+					},
+				},
+			}),
+		)
+	}
+	if kind == "Tenant" {
+		rightColStack = append(rightColStack,
+			antdFlexVertical("resource-quotas-block", 4, []any{
+				antdText("resource-quotas-label", true, "Resource Quotas", map[string]any{
+					"fontSize":     float64(20),
+					"marginBottom": float64(12),
+				}),
+				map[string]any{
+					"type": "EnrichedTable",
+					"data": map[string]any{
+						"id":                   "resource-quotas-table",
+						"baseprefix":           "/openapi-ui",
+						"clusterNamePartOfUrl": "{2}",
+						"customizationId":      "factory-resource-quotas",
+						"fetchUrl":             "/api/clusters/{2}/k8s/api/v1/namespaces/{3}/resourcequotas",
+						"pathToItems":          []any{`items`},
+					},
+				},
+			}),
+		)
+	}
 	return map[string]any{
 		"key": "details",

View File

@@ -134,7 +134,7 @@ func CreateAllCustomColumnsOverrides() []*dashboardv1alpha1.CustomColumnsOverrid
createCustomColumnsOverride("factory-details-v1.services", []any{
createCustomColumnWithSpecificColor("Name", "Service", "", "/openapi-ui/{2}/{reqsJsonPath[0]['.metadata.namespace']['-']}/factory/kube-service-details/{reqsJsonPath[0]['.metadata.name']['-']}"),
createStringColumn("ClusterIP", ".spec.clusterIP"),
createStringColumn("LoadbalancerIP", ".spec.loadBalancerIP"),
createStringColumn("LoadbalancerIP", ".status.loadBalancer.ingress[0].ip"),
createTimestampColumn("Created", ".metadata.creationTimestamp"),
}),
@@ -189,6 +189,14 @@ func CreateAllCustomColumnsOverrides() []*dashboardv1alpha1.CustomColumnsOverrid
createStringColumn("Values", "_flatMapData_Value"),
}),
// Factory resource quotas
createCustomColumnsOverride("factory-resource-quotas", []any{
createFlatMapColumn("Data", ".spec.hard"),
createStringColumn("Resource", "_flatMapData_Key"),
createStringColumn("Hard", "_flatMapData_Value"),
createStringColumn("Used", ".status.used['{_flatMapData_Key}']"),
}),
// Factory ingress details rules
createCustomColumnsOverride("factory-kube-ingress-details-rules", []any{
createStringColumn("Host", ".host"),
@@ -1120,7 +1128,7 @@ func CreateAllFactories() []*dashboardv1alpha1.Factory {
"clusterNamePartOfUrl": "{2}",
"customizationId": "factory-node-details-/v1/pods",
"fetchUrl": "/api/clusters/{2}/k8s/api/v1/namespaces/{3}/pods",
"labelsSelectorFull": map[string]any{
"labelSelectorFull": map[string]any{
"pathToLabels": ".spec.selector",
"reqIndex": 0,
},

View File

@@ -102,6 +102,22 @@ func antdFlex(id string, gap float64, children []any) map[string]any {
}
}
func antdFlexSpaceBetween(id string, children []any) map[string]any {
if id == "" {
id = generateContainerID("auto", "flex")
}
return map[string]any{
"type": "antdFlex",
"data": map[string]any{
"id": id,
"align": "center",
"justify": "space-between",
},
"children": children,
}
}
func antdFlexVertical(id string, gap float64, children []any) map[string]any {
// Auto-generate ID if not provided
if id == "" {

View File

@@ -237,9 +237,16 @@ func createUnifiedFactory(config UnifiedResourceConfig, tabs []any, urlsToFetch
"lineHeight": "24px",
})
header := antdFlex(generateContainerID("header", "row"), float64(6), []any{
badge,
nameText,
header := antdFlexSpaceBetween(generateContainerID("header", "row"), []any{
antdFlex(generateContainerID("header", "title-text"), float64(6), []any{
badge,
nameText,
}),
antdLink(generateLinkID("header", "edit"),
"Edit",
fmt.Sprintf("/openapi-ui/{2}/{3}/forms/apis/{reqsJsonPath[0]['.apiVersion']['-']}/%s/{reqsJsonPath[0]['.metadata.name']['-']}",
config.Plural),
),
})
// Add marginBottom style to header

View File

@@ -467,5 +467,8 @@ func (r *WorkloadMonitorReconciler) getWorkloadMetadata(obj client.Object) map[s
if instanceType, ok := annotations["kubevirt.io/cluster-instancetype-name"]; ok {
labels["workloads.cozystack.io/kubevirt-vmi-instance-type"] = instanceType
}
if instanceProfile, ok := annotations["kubevirt.io/cluster-instanceprofile-name"]; ok {
labels["workloads.cozystack.io/kubevirt-vmi-instance-profile"] = instanceProfile
}
return labels
}

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/nginx-cache:0.0.0@sha256:31ebc09cfa11d8b438d2bbb32fa61b133aaf4b48b1a1282c9e59b5c127af61c1
ghcr.io/cozystack/cozystack/nginx-cache:0.0.0@sha256:cb25e40cb665b8bbeee8cb1ec39da4c9a7452ef3f2f371912bbc0d1b1e2d40a8

View File

@@ -145,31 +145,31 @@ See the reference for components utilized in this service:
### Kubernetes Control Plane Configuration
| Name | Description | Type | Value |
| --------------------------------------------------- | ------------------------------------------------ | ---------- | -------- |
| `controlPlane` | Kubernetes control-plane configuration. | `object` | `{}` |
| `controlPlane.replicas` | Number of control-plane replicas. | `int` | `2` |
| `controlPlane.apiServer` | API Server configuration. | `object` | `{}` |
| `controlPlane.apiServer.resources` | CPU and memory resources for API Server. | `object` | `{}` |
| `controlPlane.apiServer.resources.cpu` | CPU available. | `quantity` | `""` |
| `controlPlane.apiServer.resources.memory` | Memory (RAM) available. | `quantity` | `""` |
| `controlPlane.apiServer.resourcesPreset` | Preset if `resources` omitted. | `string` | `medium` |
| `controlPlane.controllerManager` | Controller Manager configuration. | `object` | `{}` |
| `controlPlane.controllerManager.resources` | CPU and memory resources for Controller Manager. | `object` | `{}` |
| `controlPlane.controllerManager.resources.cpu` | CPU available. | `quantity` | `""` |
| `controlPlane.controllerManager.resources.memory` | Memory (RAM) available. | `quantity` | `""` |
| `controlPlane.controllerManager.resourcesPreset` | Preset if `resources` omitted. | `string` | `micro` |
| `controlPlane.scheduler` | Scheduler configuration. | `object` | `{}` |
| `controlPlane.scheduler.resources` | CPU and memory resources for Scheduler. | `object` | `{}` |
| `controlPlane.scheduler.resources.cpu` | CPU available. | `quantity` | `""` |
| `controlPlane.scheduler.resources.memory` | Memory (RAM) available. | `quantity` | `""` |
| `controlPlane.scheduler.resourcesPreset` | Preset if `resources` omitted. | `string` | `micro` |
| `controlPlane.konnectivity` | Konnectivity configuration. | `object` | `{}` |
| `controlPlane.konnectivity.server` | Konnectivity Server configuration. | `object` | `{}` |
| `controlPlane.konnectivity.server.resources` | CPU and memory resources for Konnectivity. | `object` | `{}` |
| `controlPlane.konnectivity.server.resources.cpu` | CPU available. | `quantity` | `""` |
| `controlPlane.konnectivity.server.resources.memory` | Memory (RAM) available. | `quantity` | `""` |
| `controlPlane.konnectivity.server.resourcesPreset` | Preset if `resources` omitted. | `string` | `micro` |
| Name | Description | Type | Value |
| --------------------------------------------------- | ------------------------------------------------ | ---------- | ------- |
| `controlPlane` | Kubernetes control-plane configuration. | `object` | `{}` |
| `controlPlane.replicas` | Number of control-plane replicas. | `int` | `2` |
| `controlPlane.apiServer` | API Server configuration. | `object` | `{}` |
| `controlPlane.apiServer.resources` | CPU and memory resources for API Server. | `object` | `{}` |
| `controlPlane.apiServer.resources.cpu` | CPU available. | `quantity` | `""` |
| `controlPlane.apiServer.resources.memory` | Memory (RAM) available. | `quantity` | `""` |
| `controlPlane.apiServer.resourcesPreset` | Preset if `resources` omitted. | `string` | `large` |
| `controlPlane.controllerManager` | Controller Manager configuration. | `object` | `{}` |
| `controlPlane.controllerManager.resources` | CPU and memory resources for Controller Manager. | `object` | `{}` |
| `controlPlane.controllerManager.resources.cpu` | CPU available. | `quantity` | `""` |
| `controlPlane.controllerManager.resources.memory` | Memory (RAM) available. | `quantity` | `""` |
| `controlPlane.controllerManager.resourcesPreset` | Preset if `resources` omitted. | `string` | `micro` |
| `controlPlane.scheduler` | Scheduler configuration. | `object` | `{}` |
| `controlPlane.scheduler.resources` | CPU and memory resources for Scheduler. | `object` | `{}` |
| `controlPlane.scheduler.resources.cpu` | CPU available. | `quantity` | `""` |
| `controlPlane.scheduler.resources.memory` | Memory (RAM) available. | `quantity` | `""` |
| `controlPlane.scheduler.resourcesPreset` | Preset if `resources` omitted. | `string` | `micro` |
| `controlPlane.konnectivity` | Konnectivity configuration. | `object` | `{}` |
| `controlPlane.konnectivity.server` | Konnectivity Server configuration. | `object` | `{}` |
| `controlPlane.konnectivity.server.resources` | CPU and memory resources for Konnectivity. | `object` | `{}` |
| `controlPlane.konnectivity.server.resources.cpu` | CPU available. | `quantity` | `""` |
| `controlPlane.konnectivity.server.resources.memory` | Memory (RAM) available. | `quantity` | `""` |
| `controlPlane.konnectivity.server.resourcesPreset` | Preset if `resources` omitted. | `string` | `micro` |
## Parameter examples and reference

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/cluster-autoscaler:0.0.0@sha256:372ad087ae96bd0cd642e2b0855ec7ffb1369d6cf4f0b92204725557c11bc0ff
ghcr.io/cozystack/cozystack/cluster-autoscaler:0.0.0@sha256:3753b735b0315bee90de54cb25cfebc63bd2cc90ad11ca4fdc0e70439abd5096

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:b42c6af641ee0eadb7e0a42e368021b4759f443cb7b71b7e745a64f0fc8b752e
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:bb5b17044969e663c3b391f7274883735c0ffe05a9523988469bdf2974de2dea

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/ubuntu-container-disk:v1.33@sha256:d25e567bc8b17b596e050f5ff410e36112c7966e33f4b372c752e7350bacc894
ghcr.io/cozystack/cozystack/ubuntu-container-disk:v1.33@sha256:9d4ad080ef729e0f9f1f5919cb85c0c9b6dc772a22d52046b2de9ccba3772715

View File

@@ -10,3 +10,8 @@ data:
enableEPSController: true
selectorless: true
namespace: {{ .Release.Namespace }}
infraLabels:
apps.cozystack.io/application.group: apps.cozystack.io
apps.cozystack.io/application.kind: Kubernetes
apps.cozystack.io/application.name: {{ .Release.Name | trimPrefix "kubernetes-" }}
internal.cozystack.io/tenantresource: "true"

View File

@@ -292,6 +292,12 @@ metadata:
{{- end }}
spec:
clusterName: {{ $.Release.Name }}
replicas: 2
strategy:
rollingUpdate:
maxSurge: {{ $group.maxReplicas }}
maxUnavailable: 1
type: RollingUpdate
selector:
matchLabels:
cluster.x-k8s.io/cluster-name: {{ $.Release.Name }}
@@ -326,6 +332,7 @@ metadata:
namespace: {{ $.Release.Namespace }}
spec:
clusterName: {{ $.Release.Name }}
maxUnhealthy: 0
nodeStartupTimeout: 10m
selector:
matchLabels:

View File

@@ -1,3 +1,13 @@
{{- define "cozystack.defaultCertManagerValues" -}}
{{- if $.Values.addons.gatewayAPI.enabled }}
cert-manager:
config:
apiVersion: controller.config.cert-manager.io/v1alpha1
kind: ControllerConfiguration
enableGatewayAPI: true
{{- end }}
{{- end }}
{{- if .Values.addons.certManager.enabled }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
@@ -33,11 +43,8 @@ spec:
force: true
remediation:
retries: -1
{{- with .Values.addons.certManager.valuesOverride }}
values:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- toYaml (deepCopy .Values.addons.certManager.valuesOverride | mergeOverwrite (fromYaml (include "cozystack.defaultCertManagerValues" .))) | nindent 4 }}
dependsOn:
{{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" .Release.Namespace .Release.Name }}
- name: {{ .Release.Name }}

View File

@@ -14,6 +14,11 @@ metadata:
}
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
labels:
apps.cozystack.io/application.group: apps.cozystack.io
apps.cozystack.io/application.kind: Kubernetes
apps.cozystack.io/application.name: {{ .Release.Name | trimPrefix "kubernetes-" }}
internal.cozystack.io/tenantresource: "true"
spec:
ingressClassName: "{{ $ingress }}"
rules:
@@ -41,6 +46,11 @@ apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}-ingress-nginx
labels:
apps.cozystack.io/application.group: apps.cozystack.io
apps.cozystack.io/application.kind: Kubernetes
apps.cozystack.io/application.name: {{ .Release.Name | trimPrefix "kubernetes-" }}
internal.cozystack.io/tenantresource: "true"
spec:
ports:
- appProtocol: http

View File

@@ -150,7 +150,11 @@
"exposeMethod": {
"description": "Method to expose the controller. Allowed values: `Proxied`, `LoadBalancer`.",
"type": "string",
"default": "Proxied"
"default": "Proxied",
"enum": [
"Proxied",
"LoadBalancer"
]
},
"hosts": {
"description": "Domains routed to this tenant cluster when `exposeMethod` is `Proxied`.",
@@ -287,7 +291,7 @@
"resourcesPreset": {
"description": "Preset if `resources` omitted.",
"type": "string",
"default": "medium",
"default": "large",
"enum": [
"nano",
"micro",

View File

@@ -76,9 +76,13 @@ host: ""
## @typedef {struct} GatewayAPIAddon - Gateway API addon.
## @field {bool} enabled - Enable Gateway API.
## @enum {string} IngressNginxExposeMethod - Method to expose the controller
## @value Proxied
## @value LoadBalancer
## @typedef {struct} IngressNginxAddon - Ingress-NGINX controller.
## @field {bool} enabled - Enable the controller (requires nodes labeled `ingress-nginx`).
## @field {string} exposeMethod - Method to expose the controller. Allowed values: `Proxied`, `LoadBalancer`.
## @field {IngressNginxExposeMethod} exposeMethod - Method to expose the controller. Allowed values: `Proxied`, `LoadBalancer`.
## @field {[]string} hosts - Domains routed to this tenant cluster when `exposeMethod` is `Proxied`.
## @field {object} valuesOverride - Custom Helm values overrides.
@@ -153,7 +157,7 @@ addons:
## @typedef {struct} APIServer - API Server configuration.
## @field {Resources} resources - CPU and memory resources for API Server.
## @field {ResourcesPreset} resourcesPreset="medium" - Preset if `resources` omitted.
## @field {ResourcesPreset} resourcesPreset="large" - Preset if `resources` omitted.
## @typedef {struct} ControllerManager - Controller Manager configuration.
## @field {Resources} resources - CPU and memory resources for Controller Manager.
@@ -182,7 +186,7 @@ controlPlane:
replicas: 2
apiServer:
resources: {}
resourcesPreset: "medium"
resourcesPreset: "large"
controllerManager:
resources: {}
resourcesPreset: "micro"

View File

@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -0,0 +1,7 @@
apiVersion: v2
name: mongodb
description: Managed MongoDB service
icon: /logos/mongodb.svg
type: application
version: 0.0.0 # Placeholder, the actual version will be automatically set during the build process
appVersion: "8.0"

View File

@@ -0,0 +1,11 @@
include ../../../scripts/package.mk
.PHONY: generate update
generate:
cozyvalues-gen -v values.yaml -s values.schema.json -r README.md
../../../hack/update-crd.sh
update:
hack/update-versions.sh
make generate

View File

@@ -0,0 +1,104 @@
# Managed MongoDB Service
MongoDB is a popular document-oriented NoSQL database known for its flexibility and scalability.
The Managed MongoDB Service provides a self-healing replicated cluster managed by the Percona Operator for MongoDB.
## Deployment Details
The service is deployed and reconciled by the Percona Operator for MongoDB.
- Docs: <https://docs.percona.com/percona-operator-for-mongodb/>
- GitHub: <https://github.com/percona/percona-server-mongodb-operator>
## Deployment Modes
### Replica Set Mode (default)
By default, MongoDB deploys as a replica set with the specified number of replicas.
This mode is suitable for most use cases requiring high availability.
### Sharded Cluster Mode
Enable `sharding: true` for horizontal scaling across multiple shards.
Each shard is a replica set, and mongos routers handle query routing.
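
A minimal values sketch for sharded mode, using hypothetical shard names, replica counts, and PVC sizes (parameter names match the tables below):

```yaml
# Hypothetical sizing; adjust shard names, replica counts, and PVC sizes.
sharding: true
shardingConfig:
  configServers: 3
  configServerSize: 3Gi
  mongos: 2
  shards:
    - name: rs0
      replicas: 3
      size: 10Gi
    - name: rs1
      replicas: 3
      size: 10Gi
```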
## Notes
### External Access
When `external: true` is enabled:
- **Replica Set mode**: Traffic is load-balanced across all replica set members. This works well for read operations, but write operations require connecting to the primary. MongoDB drivers handle primary discovery automatically using the replica set connection string.
- **Sharded mode**: Traffic is routed through mongos routers, which handle both reads and writes correctly.
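
The resulting connection strings differ by mode. The chart's credentials secret exposes them in its `uri` field, in roughly this shape (`<release>`, `<namespace>`, and `<password>` are placeholders; `cozy.local` is the default cluster domain):

```text
# Replica set mode: drivers discover the primary via the replica set name
mongodb://databaseAdmin:<password>@<release>-rs0.<namespace>.svc.cozy.local:27017/admin?replicaSet=rs0

# Sharded mode: all traffic goes through the mongos routers
mongodb://databaseAdmin:<password>@<release>-mongos.<namespace>.svc.cozy.local:27017/admin
```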
### Credentials
On first install, the credentials secret will be empty until the Percona operator initializes the cluster.
Run `helm upgrade` after MongoDB is ready to populate the credentials secret with the actual password.
## Parameters
### Common parameters
| Name | Description | Type | Value |
| ------------------ | --------------------------------------------------------------------------------------------------------------------------------- | ---------- | ------- |
| `replicas` | Number of MongoDB replicas in replica set. | `int` | `3` |
| `resources` | Explicit CPU and memory configuration for each MongoDB replica. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
| `resources.cpu` | CPU available to each replica. | `quantity` | `""` |
| `resources.memory` | Memory (RAM) available to each replica. | `quantity` | `""` |
| `resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `small` |
| `size` | Persistent Volume Claim size available for application data. | `quantity` | `10Gi` |
| `storageClass` | StorageClass used to store the data. | `string` | `""` |
| `external` | Enable external access from outside the cluster. | `bool` | `false` |
| `version` | MongoDB major version to deploy. | `string` | `v8` |
### Sharding configuration
| Name | Description | Type | Value |
| ----------------------------------- | ------------------------------------------------------------------ | ---------- | ------- |
| `sharding` | Enable sharded cluster mode. When disabled, deploys a replica set. | `bool` | `false` |
| `shardingConfig` | Configuration for sharded cluster mode. | `object` | `{}` |
| `shardingConfig.configServers` | Number of config server replicas. | `int` | `3` |
| `shardingConfig.configServerSize` | PVC size for config servers. | `quantity` | `3Gi` |
| `shardingConfig.mongos` | Number of mongos router replicas. | `int` | `2` |
| `shardingConfig.shards` | List of shard configurations. | `[]object` | `[...]` |
| `shardingConfig.shards[i].name` | Shard name. | `string` | `""` |
| `shardingConfig.shards[i].replicas` | Number of replicas in this shard. | `int` | `0` |
| `shardingConfig.shards[i].size` | PVC size for this shard. | `quantity` | `""` |
### Users configuration
| Name | Description | Type | Value |
| --------------------------- | --------------------------------------------------- | ------------------- | ----- |
| `users` | Custom MongoDB users configuration map. | `map[string]object` | `{}` |
| `users[name].password` | Password for the user (auto-generated if omitted). | `string` | `""` |
| `users[name].db` | Database to authenticate against. | `string` | `""` |
| `users[name].roles` | List of MongoDB roles with database scope. | `[]object` | `[]` |
| `users[name].roles[i].name` | Role name (e.g., readWrite, dbAdmin, clusterAdmin). | `string` | `""` |
| `users[name].roles[i].db` | Database the role applies to. | `string` | `""` |
### Backup parameters
| Name | Description | Type | Value |
| ------------------------ | ------------------------------------------------------ | -------- | ----------------------------------- |
| `backup` | Backup configuration. | `object` | `{}` |
| `backup.enabled` | Enable regular backups. | `bool` | `false` |
| `backup.schedule` | Cron schedule for automated backups. | `string` | `0 2 * * *` |
| `backup.retentionPolicy` | Retention policy (e.g. "30d"). | `string` | `30d` |
| `backup.destinationPath` | Destination path for backups (e.g. s3://bucket/path/). | `string` | `s3://bucket/path/to/folder/` |
| `backup.endpointURL` | S3 endpoint URL for uploads. | `string` | `http://minio-gateway-service:9000` |
| `backup.s3AccessKey` | Access key for S3 authentication. | `string` | `""` |
| `backup.s3SecretKey` | Secret key for S3 authentication. | `string` | `""` |
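
For reference, the chart splits `backup.destinationPath` into an S3 bucket (the first path segment) and a key prefix (the rest). A rough POSIX shell equivalent of that parsing (the chart itself uses Helm template functions such as `trimPrefix` and `splitList`):

```shell
# Split an s3:// destination path into bucket and prefix,
# mirroring the chart's template-side parsing.
path="s3://bucket/path/to/folder/"
stripped="${path#s3://}"     # bucket/path/to/folder/
bucket="${stripped%%/*}"     # first path segment
prefix="${stripped#*/}"      # everything after the bucket
echo "bucket=${bucket} prefix=${prefix}"
```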
### Bootstrap (recovery) parameters
| Name | Description | Type | Value |
| ------------------------ | --------------------------------------------------------- | -------- | ------- |
| `bootstrap` | Bootstrap configuration. | `object` | `{}` |
| `bootstrap.enabled` | Whether to restore from a backup. | `bool` | `false` |
| `bootstrap.recoveryTime` | Timestamp for point-in-time recovery; empty means latest. | `string` | `""` |
| `bootstrap.backupName` | Name of backup to restore from. | `string` | `""` |

View File

@@ -0,0 +1 @@
../../../library/cozy-lib

View File

@@ -0,0 +1,5 @@
# MongoDB version mapping (major version -> Percona image tag)
# Auto-generated by hack/update-versions.sh - do not edit manually
"v8": "8.0.17-6"
"v7": "7.0.28-15"
"v6": "6.0.25-20"

View File

@@ -0,0 +1,125 @@
#!/usr/bin/env bash
set -o errexit
set -o nounset
set -o pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
MONGODB_DIR="$(cd "${SCRIPT_DIR}/.." && pwd)"
VALUES_FILE="${MONGODB_DIR}/values.yaml"
VERSIONS_FILE="${MONGODB_DIR}/files/versions.yaml"
# Supported major versions (newest first)
SUPPORTED_MAJOR_VERSIONS="8 7 6"
echo "Supported major versions: $SUPPORTED_MAJOR_VERSIONS"
# Check if skopeo is installed
if ! command -v skopeo &> /dev/null; then
echo "Error: skopeo is not installed. Please install skopeo and try again." >&2
exit 1
fi
# Check if jq is installed
if ! command -v jq &> /dev/null; then
echo "Error: jq is not installed. Please install jq and try again." >&2
exit 1
fi
# Get available image tags from Percona registry
echo "Fetching available image tags from registry..."
AVAILABLE_TAGS=$(skopeo list-tags docker://percona/percona-server-mongodb | jq -r '.Tags[]' | grep -E '^[0-9]+\.[0-9]+\.[0-9]+-[0-9]+$' | sort -V)
if [ -z "$AVAILABLE_TAGS" ]; then
echo "Error: Could not fetch available image tags" >&2
exit 1
fi
# Build versions map: major version -> latest tag
declare -A VERSION_MAP
MAJOR_VERSIONS=()
for major_version in $SUPPORTED_MAJOR_VERSIONS; do
# Find all tags that match this major version
matching_tags=$(echo "$AVAILABLE_TAGS" | grep "^${major_version}\\.")
if [ -n "$matching_tags" ]; then
# Get the latest tag for this major version
latest_tag=$(echo "$matching_tags" | tail -n1)
VERSION_MAP["v${major_version}"]="${latest_tag}"
MAJOR_VERSIONS+=("v${major_version}")
echo "Found version: v${major_version} -> ${latest_tag}"
fi
done
if [ ${#MAJOR_VERSIONS[@]} -eq 0 ]; then
echo "Error: No matching versions found" >&2
exit 1
fi
echo "Major versions to add: ${MAJOR_VERSIONS[*]}"
# Create/update versions.yaml file
echo "Updating $VERSIONS_FILE..."
{
echo "# MongoDB version mapping (major version -> Percona image tag)"
echo "# Auto-generated by hack/update-versions.sh - do not edit manually"
for major_ver in "${MAJOR_VERSIONS[@]}"; do
echo "\"${major_ver}\": \"${VERSION_MAP[$major_ver]}\""
done
} > "$VERSIONS_FILE"
echo "Successfully updated $VERSIONS_FILE"
# Update values.yaml - enum with major versions only
TEMP_FILE=$(mktemp)
trap 'rm -f "$TEMP_FILE" "${TEMP_FILE}.tmp"' EXIT
# Build new version section
NEW_VERSION_SECTION="## @enum {string} Version"
for major_ver in "${MAJOR_VERSIONS[@]}"; do
NEW_VERSION_SECTION="${NEW_VERSION_SECTION}
## @value $major_ver"
done
NEW_VERSION_SECTION="${NEW_VERSION_SECTION}
## @param {Version} version - MongoDB major version to deploy.
version: ${MAJOR_VERSIONS[0]}"
# Check if version section already exists
if grep -q "^## @enum {string} Version" "$VALUES_FILE"; then
# Version section exists, update it using awk
echo "Updating existing version section in $VALUES_FILE..."
# Use awk to replace the section from "## @enum {string} Version" to "version: " (inclusive)
awk -v new_section="$NEW_VERSION_SECTION" '
/^## @enum {string} Version/ {
in_section = 1
print new_section
next
}
in_section && /^version: / {
in_section = 0
next
}
in_section {
next
}
{ print }
' "$VALUES_FILE" > "$TEMP_FILE.tmp"
mv "$TEMP_FILE.tmp" "$VALUES_FILE"
else
# Version section doesn't exist, insert it before Sharding section
echo "Inserting new version section in $VALUES_FILE..."
awk -v new_section="$NEW_VERSION_SECTION" '
/^## @section Sharding configuration/ {
print new_section
print ""
}
{ print }
' "$VALUES_FILE" > "$TEMP_FILE.tmp"
mv "$TEMP_FILE.tmp" "$VALUES_FILE"
fi
echo "Successfully updated $VALUES_FILE with major versions: ${MAJOR_VERSIONS[*]}"

View File

@@ -0,0 +1,13 @@
<svg width="144" height="144" viewBox="0 0 144 144" fill="none" xmlns="http://www.w3.org/2000/svg">
<rect width="144" height="144" rx="24" fill="url(#paint0_linear_mongodb)"/>
<path d="M72 24C72 24 72 24 72 24C72 24 58 40 58 62C58 84 72 120 72 120C72 120 86 84 86 62C86 40 72 24 72 24Z" fill="#00ED64"/>
<path d="M72 120C72 120 86 84 86 62C86 40 72 24 72 24" stroke="#00684A" stroke-width="4" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M72 24C72 24 58 40 58 62C58 84 72 120 72 120" stroke="#001E2B" stroke-width="4" stroke-linecap="round" stroke-linejoin="round"/>
<rect x="69" y="108" width="6" height="16" rx="2" fill="#00684A"/>
<defs>
<linearGradient id="paint0_linear_mongodb" x1="140" y1="130.5" x2="4" y2="9.49999" gradientUnits="userSpaceOnUse">
<stop stop-color="#001E2B"/>
<stop offset="1" stop-color="#023430"/>
</linearGradient>
</defs>
</svg>


View File

@@ -0,0 +1,12 @@
{{/*
MongoDB version mapping
*/}}
{{- define "mongodb.versionMap" -}}
{{- $versions := .Files.Get "files/versions.yaml" | fromYaml -}}
{{- $version := .Values.version -}}
{{- if hasKey $versions $version -}}
{{- index $versions $version -}}
{{- else -}}
{{- fail (printf "Unsupported MongoDB version: %s. Supported versions: %s" $version (keys $versions | sortAlpha | join ", ")) -}}
{{- end -}}
{{- end -}}

View File

@@ -0,0 +1,11 @@
{{- if or .Values.backup.enabled .Values.bootstrap.enabled }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-s3-creds
type: Opaque
stringData:
AWS_ACCESS_KEY_ID: {{ required "backup.s3AccessKey is required when backup or bootstrap is enabled" .Values.backup.s3AccessKey | quote }}
AWS_SECRET_ACCESS_KEY: {{ required "backup.s3SecretKey is required when backup or bootstrap is enabled" .Values.backup.s3SecretKey | quote }}
{{- end }}

View File

@@ -0,0 +1,34 @@
{{- $clusterDomain := (index .Values._cluster "cluster-domain") | default "cozy.local" }}
{{- $operatorSecret := lookup "v1" "Secret" .Release.Namespace (printf "internal-%s-users" .Release.Name) }}
{{- $password := "" }}
{{- if and $operatorSecret (hasKey $operatorSecret.data "MONGODB_DATABASE_ADMIN_PASSWORD") }}
{{- $password = index $operatorSecret.data "MONGODB_DATABASE_ADMIN_PASSWORD" | b64dec }}
{{- end }}
---
# Dashboard credentials - lookup from operator-created secret
# Operator creates secret named "internal-<release>-users" with system user passwords
# Note: On first install, password/uri will be empty until operator creates the secret.
# Run 'helm upgrade' after MongoDB is ready to populate credentials.
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-credentials
type: Opaque
stringData:
username: databaseAdmin
password: {{ $password | quote }}
{{- if .Values.sharding }}
host: {{ .Release.Name }}-mongos.{{ .Release.Namespace }}.svc.{{ $clusterDomain }}
{{- else }}
host: {{ .Release.Name }}-rs0.{{ .Release.Namespace }}.svc.{{ $clusterDomain }}
{{- end }}
port: "27017"
{{- if $password }}
{{- if .Values.sharding }}
uri: mongodb://databaseAdmin:{{ $password | urlquery }}@{{ .Release.Name }}-mongos.{{ .Release.Namespace }}.svc.{{ $clusterDomain }}:27017/admin
{{- else }}
uri: mongodb://databaseAdmin:{{ $password | urlquery }}@{{ .Release.Name }}-rs0.{{ .Release.Namespace }}.svc.{{ $clusterDomain }}:27017/admin?replicaSet=rs0
{{- end }}
{{- else }}
uri: ""
{{- end }}

View File

@@ -0,0 +1,39 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ .Release.Name }}-dashboard-resources
rules:
- apiGroups:
- ""
resources:
- services
resourceNames:
- {{ .Release.Name }}-rs0
- {{ .Release.Name }}-mongos
- {{ .Release.Name }}-external
verbs: ["get", "list", "watch"]
- apiGroups:
- ""
resources:
- secrets
resourceNames:
- {{ .Release.Name }}-credentials
verbs: ["get", "list", "watch"]
- apiGroups:
- cozystack.io
resources:
- workloadmonitors
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -0,0 +1,24 @@
{{- if .Values.external }}
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}-external
spec:
type: LoadBalancer
externalTrafficPolicy: Local
{{- if (include "cozy-lib.network.disableLoadBalancerNodePorts" $ | fromYaml) }}
allocateLoadBalancerNodePorts: false
{{- end }}
ports:
- name: mongodb
port: 27017
selector:
app.kubernetes.io/name: percona-server-mongodb
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.sharding }}
app.kubernetes.io/component: mongos
{{- else }}
app.kubernetes.io/component: mongod
app.kubernetes.io/replset: rs0
{{- end }}
{{- end }}

View File

@@ -0,0 +1,173 @@
{{- $clusterDomain := (index .Values._cluster "cluster-domain") | default "cozy.local" }}
---
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
name: {{ .Release.Name }}
spec:
crVersion: 1.21.1
clusterServiceDNSSuffix: svc.{{ $clusterDomain }}
pause: false
unmanaged: false
image: percona/percona-server-mongodb:{{ include "mongodb.versionMap" $ }}
imagePullPolicy: IfNotPresent
{{- if lt (int .Values.replicas) 3 }}
unsafeFlags:
replsetSize: true
{{- end }}
updateStrategy: SmartUpdate
upgradeOptions:
apply: disabled
pmm:
enabled: false
image: percona/pmm-client:2.44.1
serverHost: ""
sharding:
enabled: {{ .Values.sharding | default false }}
balancer:
enabled: true
{{- if .Values.sharding }}
configsvrReplSet:
size: {{ .Values.shardingConfig.configServers }}
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list .Values.resourcesPreset .Values.resources $) | nindent 8 }}
volumeSpec:
persistentVolumeClaim:
{{- with .Values.storageClass }}
storageClassName: {{ . }}
{{- end }}
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.shardingConfig.configServerSize }}
affinity:
antiAffinityTopologyKey: kubernetes.io/hostname
podDisruptionBudget:
maxUnavailable: 1
mongos:
size: {{ .Values.shardingConfig.mongos }}
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list .Values.resourcesPreset .Values.resources $) | nindent 8 }}
affinity:
antiAffinityTopologyKey: kubernetes.io/hostname
podDisruptionBudget:
maxUnavailable: 1
expose:
exposeType: ClusterIP
{{- end }}
replsets:
{{- if .Values.sharding }}
{{- range .Values.shardingConfig.shards }}
- name: {{ .name }}
size: {{ .replicas }}
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list $.Values.resourcesPreset $.Values.resources $) | nindent 8 }}
volumeSpec:
persistentVolumeClaim:
{{- with $.Values.storageClass }}
storageClassName: {{ . }}
{{- end }}
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .size }}
affinity:
antiAffinityTopologyKey: kubernetes.io/hostname
podDisruptionBudget:
maxUnavailable: 1
{{- end }}
{{- else }}
- name: rs0
size: {{ .Values.replicas }}
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list .Values.resourcesPreset .Values.resources $) | nindent 8 }}
volumeSpec:
persistentVolumeClaim:
{{- with .Values.storageClass }}
storageClassName: {{ . }}
{{- end }}
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.size }}
affinity:
antiAffinityTopologyKey: kubernetes.io/hostname
podDisruptionBudget:
maxUnavailable: 1
expose:
enabled: false
{{- end }}
{{- if .Values.users }}
users:
{{- range $username, $user := .Values.users }}
{{- if not $user.roles }}
{{- fail (printf "users.%s.roles is required and cannot be empty" $username) }}
{{- end }}
- name: {{ $username }}
db: {{ $user.db }}
passwordSecretRef:
name: {{ $.Release.Name }}-user-{{ $username }}
key: password
roles:
{{- range $user.roles }}
- name: {{ .name }}
db: {{ .db }}
{{- end }}
{{- end }}
{{- end }}
backup:
enabled: {{ .Values.backup.enabled | default false }}
image: percona/percona-backup-mongodb:2.11.0
{{- if .Values.backup.enabled }}
storages:
s3-storage:
type: s3
s3:
bucket: {{ .Values.backup.destinationPath | trimPrefix "s3://" | regexFind "^[^/]+" }}
prefix: {{ .Values.backup.destinationPath | trimPrefix "s3://" | splitList "/" | rest | join "/" }}
endpointUrl: {{ .Values.backup.endpointURL }}
credentialsSecret: {{ .Release.Name }}-s3-creds
insecureSkipTLSVerify: false
forcePathStyle: true
tasks:
- name: daily-backup
enabled: true
schedule: {{ .Values.backup.schedule | quote }}
keep: {{ .Values.backup.retentionPolicy | trimSuffix "d" | int }}
storageName: s3-storage
type: logical
compressionType: gzip
pitr:
enabled: true
{{- end }}
---
# WorkloadMonitor tracks data-bearing mongod pods only (not config servers or mongos routers)
# The selector filters by component=mongod, so we only count shard replicas
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ .Release.Name }}
spec:
{{- if .Values.sharding }}
{{- $totalReplicas := 0 }}
{{- range .Values.shardingConfig.shards }}
{{- $totalReplicas = add $totalReplicas .replicas }}
{{- end }}
replicas: {{ $totalReplicas }}
{{- else }}
replicas: {{ .Values.replicas }}
{{- end }}
minReplicas: 1
kind: mongodb
type: mongodb
selector:
app.kubernetes.io/name: percona-server-mongodb
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/component: mongod
version: {{ .Chart.Version }}
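The Sprig pipelines above pack several transformations into one line: the backup bucket is the first path segment after `s3://` (`trimPrefix` + `regexFind`), the prefix is everything after it (`splitList` + `rest` + `join`), retention drops the `d` suffix (`trimSuffix` + `int`), and the WorkloadMonitor replica count is a running sum over shards. A minimal Python sketch of the equivalent logic, for illustration only — the function names are mine, not part of the chart:

```python
import re

def parse_destination(path: str) -> tuple[str, str]:
    # Mirror the template: strip the scheme, take the first segment as the
    # bucket, and re-join the remaining segments as the prefix.
    stripped = path.removeprefix("s3://")
    bucket = re.match(r"^[^/]+", stripped).group(0)
    prefix = "/".join(stripped.split("/")[1:])
    return bucket, prefix

def retention_days(policy: str) -> int:
    # Mirror `trimSuffix "d" | int` for values like "30d".
    return int(policy.removesuffix("d"))

def total_replicas(shards: list[dict]) -> int:
    # Mirror the running `add` over .Values.shardingConfig.shards.
    return sum(s["replicas"] for s in shards)

print(parse_destination("s3://my-backup-bucket/mongodb/prod/"))
# → ('my-backup-bucket', 'mongodb/prod/')
```

Note that a trailing slash in `destinationPath` survives as a trailing slash in the prefix, which is exactly what the unit tests below assert.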


@@ -0,0 +1,37 @@
{{- if .Values.bootstrap.enabled }}
{{- if not .Values.bootstrap.backupName }}
{{- fail "bootstrap.backupName is required when bootstrap.enabled is true" }}
{{- end }}
{{- if not .Values.backup.destinationPath }}
{{- fail "backup.destinationPath is required when bootstrap.enabled is true" }}
{{- end }}
{{- if not .Values.backup.endpointURL }}
{{- fail "backup.endpointURL is required when bootstrap.enabled is true" }}
{{- end }}
{{- if not .Values.backup.s3AccessKey }}
{{- fail "backup.s3AccessKey is required when bootstrap.enabled is true" }}
{{- end }}
{{- if not .Values.backup.s3SecretKey }}
{{- fail "backup.s3SecretKey is required when bootstrap.enabled is true" }}
{{- end }}
---
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBRestore
metadata:
name: {{ .Release.Name }}-restore
spec:
clusterName: {{ .Release.Name }}
{{- if .Values.bootstrap.recoveryTime }}
pitr:
type: date
date: {{ .Values.bootstrap.recoveryTime | quote }}
{{- end }}
backupSource:
type: logical
destination: {{ .Values.backup.destinationPath | trimSuffix "/" }}/{{ .Values.bootstrap.backupName }}
s3:
credentialsSecret: {{ .Release.Name }}-s3-creds
endpointUrl: {{ .Values.backup.endpointURL }}
insecureSkipTLSVerify: false
forcePathStyle: true
{{- end }}
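The restore destination is built by trimming one trailing slash from `backup.destinationPath` and appending the backup name. A one-line Python sketch of that join, assuming the same inputs the template receives (function name is mine):

```python
def restore_destination(destination_path: str, backup_name: str) -> str:
    # Mirror `trimSuffix "/"` followed by "/" + bootstrap.backupName.
    return destination_path.removesuffix("/") + "/" + backup_name

print(restore_destination("s3://mybucket/mongodb/prod/", "daily-backup-2025-01-07"))
# → s3://mybucket/mongodb/prod/daily-backup-2025-01-07
```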


@@ -0,0 +1,17 @@
{{- range $username, $user := .Values.users }}
{{- $existingSecret := lookup "v1" "Secret" $.Release.Namespace (printf "%s-user-%s" $.Release.Name $username) }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ $.Release.Name }}-user-{{ $username }}
type: Opaque
stringData:
{{- if $user.password }}
password: {{ $user.password | quote }}
{{- else if and $existingSecret (hasKey $existingSecret.data "password") }}
password: {{ index $existingSecret.data "password" | b64dec | quote }}
{{- else }}
password: {{ randAlphaNum 16 | quote }}
{{- end }}
{{- end }}


@@ -0,0 +1,112 @@
suite: backup secret tests
templates:
- templates/backup-secret.yaml
tests:
# Not rendered when both backup and bootstrap disabled
- it: does not render when backup and bootstrap disabled
release:
name: test-mongodb
namespace: tenant-test
set:
backup:
enabled: false
bootstrap:
enabled: false
asserts:
- hasDocuments:
count: 0
# Rendered when backup enabled
- it: renders when backup enabled
release:
name: test-mongodb
namespace: tenant-test
set:
backup:
enabled: true
s3AccessKey: "AKIAIOSFODNN7EXAMPLE"
s3SecretKey: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
asserts:
- hasDocuments:
count: 1
- isKind:
of: Secret
# Rendered when bootstrap enabled (for restore)
- it: renders when bootstrap enabled
release:
name: test-mongodb
namespace: tenant-test
set:
backup:
enabled: false
s3AccessKey: "AKIAIOSFODNN7EXAMPLE"
s3SecretKey: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
bootstrap:
enabled: true
asserts:
- hasDocuments:
count: 1
# Secret name
- it: uses correct secret name
release:
name: mydb
namespace: tenant-test
set:
backup:
enabled: true
s3AccessKey: "accesskey"
s3SecretKey: "secretkey"
asserts:
- equal:
path: metadata.name
value: mydb-s3-creds
# Contains AWS credentials
- it: contains AWS credentials
release:
name: test-mongodb
namespace: tenant-test
set:
backup:
enabled: true
s3AccessKey: "MYACCESSKEY"
s3SecretKey: "MYSECRETKEY"
asserts:
- equal:
path: stringData.AWS_ACCESS_KEY_ID
value: "MYACCESSKEY"
- equal:
path: stringData.AWS_SECRET_ACCESS_KEY
value: "MYSECRETKEY"
# Fails without s3AccessKey
- it: fails when s3AccessKey missing
release:
name: test-mongodb
namespace: tenant-test
set:
backup:
enabled: true
s3AccessKey: ""
s3SecretKey: "secretkey"
asserts:
- failedTemplate:
errorMessage: "backup.s3AccessKey is required when backup or bootstrap is enabled"
# Fails without s3SecretKey
- it: fails when s3SecretKey missing
release:
name: test-mongodb
namespace: tenant-test
set:
backup:
enabled: true
s3AccessKey: "accesskey"
s3SecretKey: ""
asserts:
- failedTemplate:
errorMessage: "backup.s3SecretKey is required when backup or bootstrap is enabled"


@@ -0,0 +1,132 @@
suite: credentials tests
templates:
- templates/credentials.yaml
tests:
# Basic rendering
- it: always renders a Secret
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
asserts:
- hasDocuments:
count: 1
- isKind:
of: Secret
- equal:
path: metadata.name
value: test-mongodb-credentials
- equal:
path: type
value: Opaque
# Username is always databaseAdmin
- it: sets username to databaseAdmin
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
asserts:
- equal:
path: stringData.username
value: databaseAdmin
# Port is always 27017
- it: sets port to 27017
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
asserts:
- equal:
path: stringData.port
value: "27017"
# Host for replica set mode
- it: uses rs0 service for replica set mode
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
sharding: false
asserts:
- equal:
path: stringData.host
value: test-mongodb-rs0.tenant-test.svc.cozy.local
# Host for sharded mode
- it: uses mongos service for sharded mode
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
sharding: true
asserts:
- equal:
path: stringData.host
value: test-mongodb-mongos.tenant-test.svc.cozy.local
# Custom cluster domain
- it: uses custom cluster domain
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: custom.domain
sharding: false
asserts:
- equal:
path: stringData.host
value: test-mongodb-rs0.tenant-test.svc.custom.domain
# Default cluster domain when not set
- it: defaults to cozy.local when cluster domain not set
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster: {}
sharding: false
asserts:
- equal:
path: stringData.host
value: test-mongodb-rs0.tenant-test.svc.cozy.local
# Password empty without operator secret (lookup returns nil in tests)
- it: has empty password on first install
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
asserts:
- equal:
path: stringData.password
value: ""
# URI empty without password
- it: has empty uri when password not available
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
asserts:
- equal:
path: stringData.uri
value: ""


@@ -0,0 +1,106 @@
suite: dashboard resourcemap tests
templates:
- templates/dashboard-resourcemap.yaml
tests:
# Always renders Role and RoleBinding
- it: renders Role and RoleBinding
release:
name: test-mongodb
namespace: tenant-test
asserts:
- hasDocuments:
count: 2
- isKind:
of: Role
documentIndex: 0
- isKind:
of: RoleBinding
documentIndex: 1
# Role naming
- it: uses correct Role name
release:
name: mydb
namespace: tenant-test
asserts:
- equal:
path: metadata.name
value: mydb-dashboard-resources
documentIndex: 0
# RoleBinding naming
- it: uses correct RoleBinding name
release:
name: mydb
namespace: tenant-test
asserts:
- equal:
path: metadata.name
value: mydb-dashboard-resources
documentIndex: 1
# Role grants access to services
- it: grants access to MongoDB services
release:
name: test-mongodb
namespace: tenant-test
asserts:
- contains:
path: rules[0].resourceNames
content: test-mongodb-rs0
documentIndex: 0
- contains:
path: rules[0].resourceNames
content: test-mongodb-mongos
documentIndex: 0
- contains:
path: rules[0].resourceNames
content: test-mongodb-external
documentIndex: 0
# Role grants access to credentials secret
- it: grants access to credentials secret
release:
name: test-mongodb
namespace: tenant-test
asserts:
- contains:
path: rules[1].resourceNames
content: test-mongodb-credentials
documentIndex: 0
# Role grants access to workloadmonitor
- it: grants access to WorkloadMonitor
release:
name: test-mongodb
namespace: tenant-test
asserts:
- contains:
path: rules[2].resourceNames
content: test-mongodb
documentIndex: 0
- equal:
path: rules[2].apiGroups[0]
value: cozystack.io
documentIndex: 0
# RoleBinding references correct Role
- it: RoleBinding references correct Role
release:
name: test-mongodb
namespace: tenant-test
asserts:
- equal:
path: roleRef.kind
value: Role
documentIndex: 1
- equal:
path: roleRef.name
value: test-mongodb-dashboard-resources
documentIndex: 1
- equal:
path: roleRef.apiGroup
value: rbac.authorization.k8s.io
documentIndex: 1


@@ -0,0 +1,154 @@
suite: external service tests
templates:
- templates/external-svc.yaml
tests:
###################
# Rendering #
###################
- it: does not render when external is false
release:
name: test-mongodb
namespace: tenant-test
set:
external: false
asserts:
- hasDocuments:
count: 0
- it: renders LoadBalancer service when external is true
release:
name: test-mongodb
namespace: tenant-test
set:
external: true
asserts:
- hasDocuments:
count: 1
- isKind:
of: Service
###################
# Service config #
###################
- it: uses correct service name
release:
name: mydb
namespace: tenant-test
set:
external: true
asserts:
- equal:
path: metadata.name
value: mydb-external
- it: sets LoadBalancer type
release:
name: test-mongodb
namespace: tenant-test
set:
external: true
asserts:
- equal:
path: spec.type
value: LoadBalancer
- it: sets externalTrafficPolicy to Local
release:
name: test-mongodb
namespace: tenant-test
set:
external: true
asserts:
- equal:
path: spec.externalTrafficPolicy
value: Local
- it: exposes MongoDB port 27017
release:
name: test-mongodb
namespace: tenant-test
set:
external: true
asserts:
- equal:
path: spec.ports[0].name
value: mongodb
- equal:
path: spec.ports[0].port
value: 27017
###########################
# Common selector labels #
###########################
- it: sets app.kubernetes.io/name selector
release:
name: test-mongodb
namespace: tenant-test
set:
external: true
asserts:
- equal:
path: spec.selector["app.kubernetes.io/name"]
value: percona-server-mongodb
- it: sets app.kubernetes.io/instance selector
release:
name: mydb
namespace: tenant-test
set:
external: true
asserts:
- equal:
path: spec.selector["app.kubernetes.io/instance"]
value: mydb
###########################
# Replica set mode #
###########################
- it: selects mongod for replica set mode
release:
name: test-mongodb
namespace: tenant-test
set:
external: true
sharding: false
asserts:
- equal:
path: spec.selector["app.kubernetes.io/component"]
value: mongod
- equal:
path: spec.selector["app.kubernetes.io/replset"]
value: rs0
###########################
# Sharded mode #
###########################
- it: selects mongos for sharded mode
release:
name: test-mongodb
namespace: tenant-test
set:
external: true
sharding: true
asserts:
- equal:
path: spec.selector["app.kubernetes.io/component"]
value: mongos
- it: does not set replset selector for sharded mode
release:
name: test-mongodb
namespace: tenant-test
set:
external: true
sharding: true
asserts:
- notExists:
path: spec.selector["app.kubernetes.io/replset"]


@@ -0,0 +1,703 @@
suite: mongodb CR tests
templates:
- templates/mongodb.yaml
tests:
###################
# Basic rendering #
###################
- it: renders PerconaServerMongoDB and WorkloadMonitor
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
asserts:
- hasDocuments:
count: 2
- isKind:
of: PerconaServerMongoDB
documentIndex: 0
- isKind:
of: WorkloadMonitor
documentIndex: 1
- it: sets correct CR name
release:
name: my-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
asserts:
- equal:
path: metadata.name
value: my-mongodb
documentIndex: 0
##################
# CR Version #
##################
- it: sets crVersion to 1.21.1
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
asserts:
- equal:
path: spec.crVersion
value: "1.21.1"
documentIndex: 0
#####################
# Cluster DNS #
#####################
- it: sets clusterServiceDNSSuffix from cluster config
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: custom.local
asserts:
- equal:
path: spec.clusterServiceDNSSuffix
value: svc.custom.local
documentIndex: 0
- it: defaults clusterServiceDNSSuffix to cozy.local
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster: {}
asserts:
- equal:
path: spec.clusterServiceDNSSuffix
value: svc.cozy.local
documentIndex: 0
##################
# Unsafe flags #
##################
- it: enables unsafeFlags when replicas is 1
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
replicas: 1
asserts:
- equal:
path: spec.unsafeFlags.replsetSize
value: true
documentIndex: 0
- it: enables unsafeFlags when replicas is 2
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
replicas: 2
asserts:
- equal:
path: spec.unsafeFlags.replsetSize
value: true
documentIndex: 0
- it: does not set unsafeFlags when replicas is 3
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
replicas: 3
asserts:
- notExists:
path: spec.unsafeFlags
documentIndex: 0
- it: does not set unsafeFlags when replicas is 5
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
replicas: 5
asserts:
- notExists:
path: spec.unsafeFlags
documentIndex: 0
###########################
# Replica Set Mode #
###########################
- it: configures replica set rs0 in non-sharded mode
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
sharding: false
replicas: 3
asserts:
- equal:
path: spec.sharding.enabled
value: false
documentIndex: 0
- equal:
path: spec.replsets[0].name
value: rs0
documentIndex: 0
- equal:
path: spec.replsets[0].size
value: 3
documentIndex: 0
- it: sets storage size for replica set
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
sharding: false
size: 20Gi
asserts:
- equal:
path: spec.replsets[0].volumeSpec.persistentVolumeClaim.resources.requests.storage
value: 20Gi
documentIndex: 0
- it: sets storageClass when provided
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
sharding: false
storageClass: fast-ssd
asserts:
- equal:
path: spec.replsets[0].volumeSpec.persistentVolumeClaim.storageClassName
value: fast-ssd
documentIndex: 0
- it: does not set storageClass when empty
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
sharding: false
storageClass: ""
asserts:
- notExists:
path: spec.replsets[0].volumeSpec.persistentVolumeClaim.storageClassName
documentIndex: 0
###########################
# Sharded Cluster Mode #
###########################
- it: enables sharding when configured
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
sharding: true
shardingConfig:
configServers: 3
configServerSize: 3Gi
mongos: 2
shards:
- name: rs0
replicas: 3
size: 10Gi
asserts:
- equal:
path: spec.sharding.enabled
value: true
documentIndex: 0
- equal:
path: spec.sharding.balancer.enabled
value: true
documentIndex: 0
- it: configures config servers
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
sharding: true
shardingConfig:
configServers: 5
configServerSize: 5Gi
mongos: 2
shards:
- name: rs0
replicas: 3
size: 10Gi
asserts:
- equal:
path: spec.sharding.configsvrReplSet.size
value: 5
documentIndex: 0
- equal:
path: spec.sharding.configsvrReplSet.volumeSpec.persistentVolumeClaim.resources.requests.storage
value: 5Gi
documentIndex: 0
- it: configures mongos routers
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
sharding: true
shardingConfig:
configServers: 3
configServerSize: 3Gi
mongos: 4
shards:
- name: rs0
replicas: 3
size: 10Gi
asserts:
- equal:
path: spec.sharding.mongos.size
value: 4
documentIndex: 0
- equal:
path: spec.sharding.mongos.expose.exposeType
value: ClusterIP
documentIndex: 0
- it: configures multiple shards
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
sharding: true
shardingConfig:
configServers: 3
configServerSize: 3Gi
mongos: 2
shards:
- name: shard1
replicas: 3
size: 50Gi
- name: shard2
replicas: 5
size: 100Gi
asserts:
- equal:
path: spec.replsets[0].name
value: shard1
documentIndex: 0
- equal:
path: spec.replsets[0].size
value: 3
documentIndex: 0
- equal:
path: spec.replsets[0].volumeSpec.persistentVolumeClaim.resources.requests.storage
value: 50Gi
documentIndex: 0
- equal:
path: spec.replsets[1].name
value: shard2
documentIndex: 0
- equal:
path: spec.replsets[1].size
value: 5
documentIndex: 0
- equal:
path: spec.replsets[1].volumeSpec.persistentVolumeClaim.resources.requests.storage
value: 100Gi
documentIndex: 0
###########################
# Users configuration #
###########################
- it: does not include users section when no users defined
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
users: {}
asserts:
- notExists:
path: spec.users
documentIndex: 0
- it: configures users when defined
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
users:
appuser:
db: appdb
roles:
- name: readWrite
db: appdb
asserts:
- exists:
path: spec.users
documentIndex: 0
- equal:
path: spec.users[0].name
value: appuser
documentIndex: 0
- equal:
path: spec.users[0].db
value: appdb
documentIndex: 0
- equal:
path: spec.users[0].passwordSecretRef.name
value: test-mongodb-user-appuser
documentIndex: 0
- equal:
path: spec.users[0].passwordSecretRef.key
value: password
documentIndex: 0
- it: configures user roles
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
users:
admin:
db: admin
roles:
- name: clusterAdmin
db: admin
- name: userAdminAnyDatabase
db: admin
asserts:
- equal:
path: spec.users[0].roles[0].name
value: clusterAdmin
documentIndex: 0
- equal:
path: spec.users[0].roles[0].db
value: admin
documentIndex: 0
- equal:
path: spec.users[0].roles[1].name
value: userAdminAnyDatabase
documentIndex: 0
- it: fails when user has empty roles
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
users:
myuser:
db: mydb
roles: []
asserts:
- failedTemplate:
errorMessage: "users.myuser.roles is required and cannot be empty"
###########################
# Backup configuration #
###########################
- it: disables backup when not enabled
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
backup:
enabled: false
asserts:
- equal:
path: spec.backup.enabled
value: false
documentIndex: 0
- notExists:
path: spec.backup.storages
documentIndex: 0
- it: configures backup when enabled
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
backup:
enabled: true
schedule: "0 3 * * *"
retentionPolicy: 14d
destinationPath: "s3://mybucket/backups/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backup.enabled
value: true
documentIndex: 0
- equal:
path: spec.backup.storages.s3-storage.type
value: s3
documentIndex: 0
- it: parses bucket from destinationPath
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
backup:
enabled: true
destinationPath: "s3://my-backup-bucket/mongodb/prod/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backup.storages.s3-storage.s3.bucket
value: my-backup-bucket
documentIndex: 0
- it: parses prefix from destinationPath
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
backup:
enabled: true
destinationPath: "s3://bucket/path/to/backups/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backup.storages.s3-storage.s3.prefix
value: path/to/backups/
documentIndex: 0
- it: sets backup retention from retentionPolicy
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
backup:
enabled: true
retentionPolicy: 30d
destinationPath: "s3://bucket/path/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backup.tasks[0].keep
value: 30
documentIndex: 0
- it: sets backup schedule
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
backup:
enabled: true
schedule: "0 4 * * *"
retentionPolicy: 7d
destinationPath: "s3://bucket/path/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backup.tasks[0].schedule
value: "0 4 * * *"
documentIndex: 0
- it: enables PITR when backup enabled
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
backup:
enabled: true
destinationPath: "s3://bucket/path/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backup.pitr.enabled
value: true
documentIndex: 0
- it: references s3-creds secret for backup
release:
name: mydb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
backup:
enabled: true
destinationPath: "s3://bucket/path/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backup.storages.s3-storage.s3.credentialsSecret
value: mydb-s3-creds
documentIndex: 0
###########################
# WorkloadMonitor #
###########################
- it: creates WorkloadMonitor with correct metadata
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
asserts:
- equal:
path: metadata.name
value: test-mongodb
documentIndex: 1
- equal:
path: spec.kind
value: mongodb
documentIndex: 1
- equal:
path: spec.type
value: mongodb
documentIndex: 1
- it: sets replicas from values in non-sharded mode
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
sharding: false
replicas: 5
asserts:
- equal:
path: spec.replicas
value: 5
documentIndex: 1
- it: calculates total replicas in sharded mode
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
sharding: true
shardingConfig:
configServers: 3
configServerSize: 3Gi
mongos: 2
shards:
- name: rs0
replicas: 3
size: 10Gi
- name: rs1
replicas: 5
size: 10Gi
- name: rs2
replicas: 2
size: 10Gi
asserts:
- equal:
path: spec.replicas
value: 10
documentIndex: 1
- it: sets minReplicas to 1
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
asserts:
- equal:
path: spec.minReplicas
value: 1
documentIndex: 1
- it: sets correct selector labels
release:
name: mydb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
asserts:
- equal:
path: spec.selector["app.kubernetes.io/name"]
value: percona-server-mongodb
documentIndex: 1
- equal:
path: spec.selector["app.kubernetes.io/instance"]
value: mydb
documentIndex: 1
- equal:
path: spec.selector["app.kubernetes.io/component"]
value: mongod
documentIndex: 1


@@ -0,0 +1,349 @@
suite: restore tests
templates:
- templates/restore.yaml
tests:
#####################
# Rendering #
#####################
- it: does not render when bootstrap is disabled
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: false
asserts:
- hasDocuments:
count: 0
- it: renders PerconaServerMongoDBRestore CR when enabled
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "my-backup-2025-01-07"
backup:
destinationPath: "s3://bucket/backups/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- hasDocuments:
count: 1
- isKind:
of: PerconaServerMongoDBRestore
#####################
# Validation #
#####################
- it: fails when backupName is missing
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: ""
backup:
destinationPath: "s3://bucket/path/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- failedTemplate:
errorMessage: "bootstrap.backupName is required when bootstrap.enabled is true"
- it: fails when destinationPath is missing
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "my-backup"
backup:
destinationPath: ""
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- failedTemplate:
errorMessage: "backup.destinationPath is required when bootstrap.enabled is true"
- it: fails when endpointURL is missing
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "my-backup"
backup:
destinationPath: "s3://bucket/path/"
endpointURL: ""
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- failedTemplate:
errorMessage: "backup.endpointURL is required when bootstrap.enabled is true"
#####################
# CR metadata #
#####################
- it: uses correct restore CR name
release:
name: mydb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "backup-2025"
backup:
destinationPath: "s3://bucket/backups/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: metadata.name
value: mydb-restore
- it: references correct cluster name
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "backup-2025"
backup:
destinationPath: "s3://bucket/backups/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.clusterName
value: test-mongodb
#####################
# Backup source #
#####################
- it: sets backupSource type to logical
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "backup-2025"
backup:
destinationPath: "s3://bucket/backups/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backupSource.type
value: logical
- it: constructs destination from destinationPath and backupName
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "daily-backup-2025-01-07"
backup:
destinationPath: "s3://mybucket/mongodb/prod/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backupSource.destination
value: s3://mybucket/mongodb/prod/daily-backup-2025-01-07
- it: trims trailing slash from destinationPath
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "backup"
backup:
destinationPath: "s3://bucket/path/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backupSource.destination
value: s3://bucket/path/backup
#####################
# S3 configuration #
#####################
- it: references s3-creds secret
release:
name: mydb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "backup"
backup:
destinationPath: "s3://bucket/path/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backupSource.s3.credentialsSecret
value: mydb-s3-creds
- it: sets S3 endpoint URL
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "backup"
backup:
destinationPath: "s3://bucket/path/"
endpointURL: "https://s3.amazonaws.com"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backupSource.s3.endpointUrl
value: "https://s3.amazonaws.com"
- it: disables insecureSkipTLSVerify
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "backup"
backup:
destinationPath: "s3://bucket/path/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backupSource.s3.insecureSkipTLSVerify
value: false
- it: enables forcePathStyle
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "backup"
backup:
destinationPath: "s3://bucket/path/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backupSource.s3.forcePathStyle
value: true
#####################
# PITR #
#####################
- it: does not set pitr when recoveryTime not specified
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "backup"
backup:
destinationPath: "s3://bucket/path/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- notExists:
path: spec.pitr
- it: configures PITR when recoveryTime is set
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "my-backup"
recoveryTime: "2025-01-07 14:30:00"
backup:
destinationPath: "s3://bucket/backups/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.pitr.type
value: date
- equal:
path: spec.pitr.date
value: "2025-01-07 14:30:00"
#####################
# S3 credentials #
#####################
- it: fails when s3AccessKey is missing
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "backup"
backup:
destinationPath: "s3://bucket/path/"
endpointURL: "http://minio:9000"
s3AccessKey: ""
s3SecretKey: "secret"
asserts:
- failedTemplate:
errorMessage: "backup.s3AccessKey is required when bootstrap.enabled is true"
- it: fails when s3SecretKey is missing
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "backup"
backup:
destinationPath: "s3://bucket/path/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: ""
asserts:
- failedTemplate:
errorMessage: "backup.s3SecretKey is required when bootstrap.enabled is true"


@@ -0,0 +1,98 @@
suite: user secrets tests
templates:
- templates/user-secrets.yaml
tests:
# No users configured
- it: does not render when no users defined
release:
name: test-mongodb
namespace: tenant-test
set:
users: {}
asserts:
- hasDocuments:
count: 0
# Single user
- it: creates secret for single user
release:
name: test-mongodb
namespace: tenant-test
set:
users:
myuser:
db: mydb
roles:
- name: readWrite
db: mydb
asserts:
- hasDocuments:
count: 1
- isKind:
of: Secret
- equal:
path: metadata.name
value: test-mongodb-user-myuser
- equal:
path: type
value: Opaque
- exists:
path: stringData.password
# Multiple users
- it: creates separate secrets for multiple users
release:
name: test-mongodb
namespace: tenant-test
set:
users:
user1:
db: db1
roles:
- name: readWrite
db: db1
user2:
db: db2
roles:
- name: dbAdmin
db: db2
asserts:
- hasDocuments:
count: 2
# User with explicit password
- it: uses explicit password when provided
release:
name: test-mongodb
namespace: tenant-test
set:
users:
myuser:
password: "mysecretpassword"
db: mydb
roles:
- name: readWrite
db: mydb
asserts:
- equal:
path: stringData.password
value: "mysecretpassword"
# Secret naming convention
- it: follows naming convention release-user-username
release:
name: prod-db
namespace: tenant-prod
set:
users:
admin:
db: admin
roles:
- name: clusterAdmin
db: admin
asserts:
- equal:
path: metadata.name
value: prod-db-user-admin
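The naming convention exercised by these tests is simply `<release>-user-<username>`; a hypothetical helper illustrating it:

```python
def user_secret_name(release: str, username: str) -> str:
    # Convention pinned by the tests: <release>-user-<username>.
    return f"{release}-user-{username}"

assert user_secret_name("test-mongodb", "myuser") == "test-mongodb-user-myuser"
assert user_secret_name("prod-db", "admin") == "prod-db-user-admin"
```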


@@ -0,0 +1,288 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"backup": {
"description": "Backup configuration.",
"type": "object",
"default": {},
"required": [
"enabled"
],
"properties": {
"destinationPath": {
"description": "Destination path for backups (e.g. s3://bucket/path/).",
"type": "string",
"default": "s3://bucket/path/to/folder/"
},
"enabled": {
"description": "Enable regular backups.",
"type": "boolean",
"default": false
},
"endpointURL": {
"description": "S3 endpoint URL for uploads.",
"type": "string",
"default": "http://minio-gateway-service:9000"
},
"retentionPolicy": {
"description": "Retention policy (e.g. \"30d\").",
"type": "string",
"default": "30d"
},
"s3AccessKey": {
"description": "Access key for S3 authentication.",
"type": "string",
"default": ""
},
"s3SecretKey": {
"description": "Secret key for S3 authentication.",
"type": "string",
"default": ""
},
"schedule": {
"description": "Cron schedule for automated backups.",
"type": "string",
"default": "0 2 * * *"
}
}
},
"bootstrap": {
"description": "Bootstrap configuration.",
"type": "object",
"default": {},
"required": [
"backupName",
"enabled"
],
"properties": {
"backupName": {
"description": "Name of backup to restore from.",
"type": "string",
"default": ""
},
"enabled": {
"description": "Whether to restore from a backup.",
"type": "boolean",
"default": false
},
"recoveryTime": {
"description": "Timestamp for point-in-time recovery; empty means latest.",
"type": "string",
"default": ""
}
}
},
"external": {
"description": "Enable external access from outside the cluster.",
"type": "boolean",
"default": false
},
"replicas": {
"description": "Number of MongoDB replicas in replica set.",
"type": "integer",
"default": 3
},
"resources": {
"description": "Explicit CPU and memory configuration for each MongoDB replica. When omitted, the preset defined in `resourcesPreset` is applied.",
"type": "object",
"default": {},
"properties": {
"cpu": {
"description": "CPU available to each replica.",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
},
"memory": {
"description": "Memory (RAM) available to each replica.",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
}
}
},
"resourcesPreset": {
"description": "Default sizing preset used when `resources` is omitted.",
"type": "string",
"default": "small",
"enum": [
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
},
"sharding": {
"description": "Enable sharded cluster mode. When disabled, deploys a replica set.",
"type": "boolean",
"default": false
},
"shardingConfig": {
"description": "Configuration for sharded cluster mode.",
"type": "object",
"default": {},
"required": [
"configServerSize",
"configServers",
"mongos"
],
"properties": {
"configServerSize": {
"description": "PVC size for config servers.",
"default": "3Gi",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
},
"configServers": {
"description": "Number of config server replicas.",
"type": "integer",
"default": 3
},
"mongos": {
"description": "Number of mongos router replicas.",
"type": "integer",
"default": 2
},
"shards": {
"description": "List of shard configurations.",
"type": "array",
"default": [
{
"name": "rs0",
"replicas": 3,
"size": "10Gi"
}
],
"items": {
"type": "object",
"required": [
"name",
"replicas",
"size"
],
"properties": {
"name": {
"description": "Shard name.",
"type": "string"
},
"replicas": {
"description": "Number of replicas in this shard.",
"type": "integer"
},
"size": {
"description": "PVC size for this shard.",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
}
}
}
}
}
},
"size": {
"description": "Persistent Volume Claim size available for application data.",
"default": "10Gi",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
},
"storageClass": {
"description": "StorageClass used to store the data.",
"type": "string",
"default": ""
},
"users": {
"description": "Custom MongoDB users configuration map.",
"type": "object",
"default": {},
"additionalProperties": {
"type": "object",
"required": [
"db"
],
"properties": {
"db": {
"description": "Database to authenticate against.",
"type": "string"
},
"password": {
"description": "Password for the user (auto-generated if omitted).",
"type": "string"
},
"roles": {
"description": "List of MongoDB roles with database scope.",
"type": "array",
"items": {
"type": "object",
"required": [
"db",
"name"
],
"properties": {
"db": {
"description": "Database the role applies to.",
"type": "string"
},
"name": {
"description": "Role name (e.g., readWrite, dbAdmin, clusterAdmin).",
"type": "string"
}
}
}
}
}
}
},
"version": {
"description": "MongoDB major version to deploy.",
"type": "string",
"default": "v8",
"enum": [
"v8",
"v7",
"v6"
]
}
}
}
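The quantity-typed fields in this schema (`size`, `cpu`, `memory`, the shard and config-server PVC sizes) all share the same pattern, which follows the Kubernetes resource-quantity grammar. A quick check of that regex, copied verbatim from the schema, against representative values:

```python
import re

# Kubernetes-style quantity pattern, copied from the schema above.
QUANTITY = re.compile(
    r"^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))"
    r"(([KMGTPE]i)|[numkMGTPE]|"
    r"([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$"
)

# Accepted: binary suffixes, SI suffixes, bare numbers, exponents.
for ok in ("10Gi", "3Gi", "500m", "3", "1.5", "2e3"):
    assert QUANTITY.match(ok), ok

# Rejected: embedded spaces, missing digits, unknown suffixes.
for bad in ("10 Gi", "Gi", "10GiB"):
    assert not QUANTITY.match(bad), bad
```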


@@ -0,0 +1,134 @@
##
## @section Common parameters
##
## @typedef {struct} Resources - Explicit CPU and memory configuration for each MongoDB replica.
## @field {quantity} [cpu] - CPU available to each replica.
## @field {quantity} [memory] - Memory (RAM) available to each replica.
## @enum {string} ResourcesPreset - Default sizing preset.
## @value nano
## @value micro
## @value small
## @value medium
## @value large
## @value xlarge
## @value 2xlarge
## @param {int} replicas - Number of MongoDB replicas in replica set.
replicas: 3
## @param {Resources} [resources] - Explicit CPU and memory configuration for each MongoDB replica. When omitted, the preset defined in `resourcesPreset` is applied.
resources: {}
## @param {ResourcesPreset} resourcesPreset="small" - Default sizing preset used when `resources` is omitted.
resourcesPreset: "small"
## @param {quantity} size - Persistent Volume Claim size available for application data.
size: 10Gi
## @param {string} storageClass - StorageClass used to store the data.
storageClass: ""
## @param {bool} external - Enable external access from outside the cluster.
external: false
##
## @enum {string} Version
## @value v8
## @value v7
## @value v6
## @param {Version} version - MongoDB major version to deploy.
version: v8
##
## @section Sharding configuration
##
## @param {bool} sharding - Enable sharded cluster mode. When disabled, deploys a replica set.
sharding: false
## @typedef {struct} ShardingConfig - Sharded cluster configuration.
## @field {int} configServers - Number of config server replicas.
## @field {quantity} configServerSize - PVC size for config servers.
## @field {int} mongos - Number of mongos router replicas.
## @field {[]Shard} shards - List of shard configurations.
## @typedef {struct} Shard - Individual shard configuration.
## @field {string} name - Shard name.
## @field {int} replicas - Number of replicas in this shard.
## @field {quantity} size - PVC size for this shard.
## @param {ShardingConfig} shardingConfig - Configuration for sharded cluster mode.
shardingConfig:
configServers: 3
configServerSize: 3Gi
mongos: 2
shards:
- name: rs0
replicas: 3
size: 10Gi
##
## @section Users configuration
##
## @typedef {struct} Role - MongoDB role configuration.
## @field {string} name - Role name (e.g., readWrite, dbAdmin, clusterAdmin).
## @field {string} db - Database the role applies to.
## @typedef {struct} User - User configuration.
## @field {string} [password] - Password for the user (auto-generated if omitted).
## @field {string} db - Database to authenticate against.
## @field {[]Role} roles - List of MongoDB roles with database scope.
## @param {map[string]User} users - Custom MongoDB users configuration map.
users: {}
## Example:
## users:
## myuser:
## db: mydb
## roles:
## - name: readWrite
## db: mydb
## - name: dbAdmin
## db: mydb
##
## @section Backup parameters
##
## @typedef {struct} Backup - Backup configuration.
## @field {bool} enabled - Enable regular backups.
## @field {string} [schedule] - Cron schedule for automated backups.
## @field {string} [retentionPolicy] - Retention policy (e.g. "30d").
## @field {string} [destinationPath] - Destination path for backups (e.g. s3://bucket/path/).
## @field {string} [endpointURL] - S3 endpoint URL for uploads.
## @field {string} [s3AccessKey] - Access key for S3 authentication.
## @field {string} [s3SecretKey] - Secret key for S3 authentication.
## @param {Backup} backup - Backup configuration.
backup:
enabled: false
schedule: "0 2 * * *"
retentionPolicy: 30d
destinationPath: "s3://bucket/path/to/folder/"
endpointURL: "http://minio-gateway-service:9000"
s3AccessKey: ""
s3SecretKey: ""
##
## @section Bootstrap (recovery) parameters
##
## @typedef {struct} Bootstrap - Bootstrap configuration for restoring a database cluster from a backup.
## @field {bool} enabled - Whether to restore from a backup.
## @field {string} [recoveryTime] - Timestamp for point-in-time recovery; empty means latest.
## @field {string} backupName - Name of backup to restore from.
## @param {Bootstrap} bootstrap - Bootstrap configuration.
bootstrap:
enabled: false
recoveryTime: ""
backupName: ""


@@ -1 +1 @@
ghcr.io/cozystack/cozystack/mariadb-backup:0.0.0@sha256:aca403030ff5d831415d72367866fdf291fab73ee2cfddbe4c93c2915a316ab1
ghcr.io/cozystack/cozystack/mariadb-backup:0.0.0@sha256:0ddbbec0568dcb9fbc317cd9cc654e826dbe88ba3f184fa9b6b58aacb93b4570


@@ -231,7 +231,6 @@ rules:
- get
- list
- watch
- delete
- apiGroups: ["kubevirt.io"]
resources:
- virtualmachines
@@ -330,7 +329,6 @@ rules:
- get
- list
- watch
- delete
- apiGroups: ["kubevirt.io"]
resources:
- virtualmachines


@@ -70,6 +70,29 @@ Generate a stable UUID for cloud-init re-initialization upon upgrade.
{{- $uuid }}
{{- end }}
{{/*
Domain resources (cpu, memory) as a JSON object.
Used in vm.yaml for rendering and in the update hook for merge patches.
*/}}
{{- define "virtual-machine.domainResources" -}}
{{- $result := dict -}}
{{- if or .Values.cpuModel (and .Values.resources .Values.resources.cpu .Values.resources.sockets) -}}
{{- $cpu := dict -}}
{{- if and .Values.resources .Values.resources.cpu .Values.resources.sockets -}}
{{- $_ := set $cpu "cores" (.Values.resources.cpu | int64) -}}
{{- $_ := set $cpu "sockets" (.Values.resources.sockets | int64) -}}
{{- end -}}
{{- if .Values.cpuModel -}}
{{- $_ := set $cpu "model" .Values.cpuModel -}}
{{- end -}}
{{- $_ := set $result "cpu" $cpu -}}
{{- end -}}
{{- if and .Values.resources .Values.resources.memory -}}
{{- $_ := set $result "resources" (dict "requests" (dict "memory" .Values.resources.memory)) -}}
{{- end -}}
{{- $result | toJson -}}
{{- end -}}
{{/*
Node Affinity for Windows VMs
*/}}
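The `virtual-machine.domainResources` helper above can be mirrored in Python to show the merge-patch payload it emits (a sketch of the template logic, not generated code):

```python
import json

def domain_resources(values: dict) -> str:
    """Mirror virtual-machine.domainResources: build the domain patch
    from resources.cpu/sockets/memory and an optional cpuModel."""
    res = values.get("resources") or {}
    result: dict = {}
    cpu: dict = {}
    if res.get("cpu") and res.get("sockets"):
        cpu["cores"] = int(res["cpu"])
        cpu["sockets"] = int(res["sockets"])
    if values.get("cpuModel"):
        cpu["model"] = values["cpuModel"]
    if cpu:  # set only when cpuModel or cores+sockets are present
        result["cpu"] = cpu
    if res.get("memory"):
        result["resources"] = {"requests": {"memory": res["memory"]}}
    return json.dumps(result)
```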


@@ -3,22 +3,32 @@
{{- $existingVM := lookup "kubevirt.io/v1" "VirtualMachine" $namespace $vmName -}}
{{- $existingPVC := lookup "v1" "PersistentVolumeClaim" $namespace $vmName -}}
{{- $existingService := lookup "v1" "Service" $namespace $vmName -}}
{{- $instanceType := .Values.instanceType | default "" -}}
{{- $instanceProfile := .Values.instanceProfile | default "" -}}
{{- $desiredStorage := .Values.systemDisk.storage | default "" -}}
{{- $desiredServiceType := ternary "LoadBalancer" "ClusterIP" .Values.external -}}
{{- $needUpdateType := false -}}
{{- $needUpdateProfile := false -}}
{{- $needResizePVC := false -}}
{{- $needRecreateService := false -}}
{{- $needRemoveInstanceType := false -}}
{{- $needRemoveCustomResources := false -}}
{{- if and $existingVM $instanceType -}}
{{- $existingHasInstanceType := and $existingVM $existingVM.spec.instancetype -}}
{{- if and $existingHasInstanceType (not $instanceType) -}}
{{- $needRemoveInstanceType = true -}}
{{- else if and $existingHasInstanceType $instanceType -}}
{{- if not (eq $existingVM.spec.instancetype.name $instanceType) -}}
{{- $needUpdateType = true -}}
{{- end -}}
{{- else if and $existingVM (not $existingHasInstanceType) $instanceType -}}
{{- $needRemoveCustomResources = true -}}
{{- end -}}
{{- if and $existingVM $instanceProfile -}}
{{- if and $existingVM $existingVM.spec.preference $instanceProfile -}}
{{- if not (eq $existingVM.spec.preference.name $instanceProfile) -}}
{{- $needUpdateProfile = true -}}
{{- end -}}
@@ -35,7 +45,14 @@
{{- end -}}
{{- end -}}
{{- if or $needUpdateType $needUpdateProfile $needResizePVC }}
{{- if $existingService -}}
{{- $currentServiceType := $existingService.spec.type -}}
{{- if ne $currentServiceType $desiredServiceType -}}
{{- $needRecreateService = true -}}
{{- end -}}
{{- end -}}
{{- if or $needUpdateType $needUpdateProfile $needResizePVC $needRecreateService $needRemoveInstanceType $needRemoveCustomResources }}
apiVersion: batch/v1
kind: Job
metadata:
@@ -80,12 +97,31 @@ spec:
-p '{"spec":{"preference":{"name": "{{ $instanceProfile }}", "revisionName": null}}}'
{{- end }}
{{- if $needRemoveInstanceType }}
echo "Removing instancetype from VM (switching to custom resources)..."
kubectl patch virtualmachines.kubevirt.io {{ $vmName }} -n {{ $namespace }} \
--type merge \
-p '{"spec":{"instancetype":null{{- if not $instanceProfile }},"preference":null{{- end }},"template":{"spec":{"domain":{{ include "virtual-machine.domainResources" . }}}}}}'
{{- end }}
{{- if $needRemoveCustomResources }}
echo "Removing custom CPU/memory from domain (switching to instancetype)..."
kubectl patch virtualmachines.kubevirt.io {{ $vmName }} -n {{ $namespace }} \
--type merge \
-p '{"spec":{"instancetype":{"name":"{{ $instanceType }}","revisionName":null},"template":{"spec":{"domain":{"cpu":null,"resources":null}}}}}'
{{- end }}
{{- if $needResizePVC }}
echo "Patching PVC for storage resize..."
kubectl patch pvc {{ $vmName }} -n {{ $namespace }} \
--type merge \
-p '{"spec":{"resources":{"requests":{"storage":"{{ $desiredStorage }}"}}}}'
{{- end }}
{{- if $needRecreateService }}
echo "Removing Service..."
kubectl delete service --cascade=orphan -n {{ $namespace }} {{ $vmName }}
{{- end }}
---
apiVersion: v1
kind: ServiceAccount
@@ -111,6 +147,10 @@ rules:
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["patch", "get", "list", "watch"]
- apiGroups: [""]
resources: ["services"]
resourceNames: ["{{ $vmName }}"]
verbs: ["delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
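The kubectl invocations in this hook all send merge patches of the same shape. A hedged Python sketch of building one of them — the instancetype-removal case, where `preference` is cleared only when no profile is kept (function name is hypothetical):

```python
import json

def remove_instancetype_patch(keep_preference: bool,
                              domain_resources: dict) -> str:
    """Build the merge patch sent when switching a VM from an
    instancetype to explicit CPU/memory domain resources."""
    patch = {"spec": {"instancetype": None,
                      "template": {"spec": {"domain": domain_resources}}}}
    if not keep_preference:
        patch["spec"]["preference"] = None
    return json.dumps(patch)
```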


@@ -4,6 +4,9 @@
{{- if and .Values.instanceProfile (not (lookup "instancetype.kubevirt.io/v1beta1" "VirtualMachineClusterPreference" "" .Values.instanceProfile)) }}
{{- fail (printf "Specified profile does not exist in cluster: %s" .Values.instanceProfile) }}
{{- end }}
{{- if and (not .Values.instanceType) (not (and .Values.resources .Values.resources.cpu .Values.resources.sockets .Values.resources.memory)) }}
{{- fail "Either instanceType or resources (cpu, sockets, memory) must be specified" }}
{{- end }}
apiVersion: kubevirt.io/v1
kind: VirtualMachine
@@ -67,15 +70,12 @@ spec:
{{- include "virtual-machine.labels" . | nindent 8 }}
spec:
domain:
{{- if and .Values.resources .Values.resources.cpu .Values.resources.sockets }}
cpu:
cores: {{ .Values.resources.cpu }}
sockets: {{ .Values.resources.sockets }}
{{- $domainRes := include "virtual-machine.domainResources" . | fromJson -}}
{{- with $domainRes.cpu }}
cpu: {{- . | toYaml | nindent 10 }}
{{- end }}
{{- if and .Values.resources .Values.resources.memory }}
resources:
requests:
memory: {{ .Values.resources.memory | quote }}
{{- with $domainRes.resources }}
resources: {{- . | toYaml | nindent 10 }}
{{- end }}
firmware:
uuid: {{ include "virtual-machine.stableUuid" . }}
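The new guard at the top of this template enforces an either/or between `instanceType` and explicit resources; the same predicate reads, in Python:

```python
def validate(values: dict) -> None:
    """Fail unless instanceType is set, or resources supplies
    cpu, sockets, and memory -- mirroring the template's guard."""
    res = values.get("resources") or {}
    if not values.get("instanceType") and not (
        res.get("cpu") and res.get("sockets") and res.get("memory")
    ):
        raise ValueError(
            "Either instanceType or resources (cpu, sockets, memory) "
            "must be specified"
        )
```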


@@ -28,7 +28,7 @@ RUN go mod download
FROM alpine:3.22
RUN wget -O- https://github.com/cozystack/cozyhr/raw/refs/heads/main/hack/install.sh | sh -s -- -v 1.5.0
RUN wget -O- https://github.com/cozystack/cozyhr/raw/refs/heads/main/hack/install.sh | sh -s -- -v 1.6.1
RUN apk add --no-cache make kubectl helm coreutils git jq openssl


@@ -1,2 +1,2 @@
cozystack:
image: ghcr.io/cozystack/cozystack/installer:v0.40.0@sha256:0f62cc82d7d5782485ea8345dfba5db7de55e2b1a38095d0ab5a334af7755ea9
image: ghcr.io/cozystack/cozystack/installer:v0.41.11@sha256:ba9271deb2f6ac29dd067a1277a4b3c33504a045c375957a2175deaee6fdfec3


@@ -27,7 +27,7 @@ releases:
dependsOn: [cilium]
- name: cozy-proxy
releaseName: cozystack
releaseName: cozy-proxy
chart: cozy-cozy-proxy
namespace: cozy-system
optional: true


@@ -66,7 +66,7 @@ releases:
dependsOn: [cilium,kubeovn]
- name: cozy-proxy
releaseName: cozystack
releaseName: cozy-proxy
chart: cozy-cozy-proxy
namespace: cozy-system
dependsOn: [cilium,kubeovn,multus]
@@ -208,6 +208,12 @@ releases:
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: mongodb-rd
releaseName: mongodb-rd
chart: mongodb-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: seaweedfs-rd
releaseName: seaweedfs-rd
chart: seaweedfs-rd
@@ -411,6 +417,12 @@ releases:
namespace: cozy-redis-operator
dependsOn: [cilium,kubeovn,multus]
- name: mongodb-operator
releaseName: mongodb-operator
chart: cozy-mongodb-operator
namespace: cozy-mongodb-operator
dependsOn: [cilium,kubeovn,cert-manager,victoria-metrics-operator]
- name: piraeus-operator
releaseName: piraeus-operator
chart: cozy-piraeus-operator


@@ -0,0 +1,27 @@
---
apiVersion: cozystack.io/v1alpha1
kind: PackageSource
metadata:
name: cozystack.mongodb-application
spec:
sourceRef:
kind: OCIRepository
name: cozystack-packages
namespace: cozy-system
path: /
variants:
- name: default
dependsOn:
- cozystack.networking
libraries:
- name: cozy-lib
path: library/cozy-lib
components:
- name: mongodb
path: apps/mongodb
libraries: ["cozy-lib"]
- name: mongodb-rd
path: system/mongodb-rd
install:
namespace: cozy-system
releaseName: mongodb-rd


@@ -0,0 +1,23 @@
---
apiVersion: cozystack.io/v1alpha1
kind: PackageSource
metadata:
name: cozystack.mongodb-operator
spec:
sourceRef:
kind: OCIRepository
name: cozystack-packages
namespace: cozy-system
path: /
variants:
- name: default
dependsOn:
- cozystack.networking
- cozystack.prometheus-operator-crds
- cozystack.cert-manager
components:
- name: mongodb-operator
path: system/mongodb-operator
install:
namespace: cozy-mongodb-operator
releaseName: mongodb-operator


@@ -1,2 +1,2 @@
assets:
image: ghcr.io/cozystack/cozystack/cozystack-assets:v0.40.0@sha256:b643f04707bcea32a152b2df3270907ac743d168e586630cd70f90020f0b0a12
image: ghcr.io/cozystack/cozystack/cozystack-assets:v0.41.11@sha256:04ca6ac7ac72f4a4d975a33436dc401abf457eb27a7e59f32a333f0b689a11e3


@@ -3,24 +3,24 @@
arch: amd64
platform: metal
secureboot: false
version: v1.11.3
version: v1.11.6
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: "ghcr.io/siderolabs/installer:v1.11.3"
imageRef: "ghcr.io/siderolabs/installer:v1.11.6"
systemExtensions:
- imageRef: ghcr.io/siderolabs/amd-ucode:20250917@sha256:ff11ee9f1565d9f9b095a3dc41fb7962b211169b2ef05d658a488398cb98e2d2
- imageRef: ghcr.io/siderolabs/amdgpu:20250917-v1.11.3@sha256:527b694ddbc4b40e9529d736bfe9874cc786773aa5a1070bbefe77feb9a8a304
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250917@sha256:ac6aaaa0d3312e72279a5cde7de0d71fb61774aa2f97a4e56dd914a9f1dde4d1
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250917@sha256:c25225c371e81485c64f339864ede410b560f07eb0fc2702a73315e977a6323d
- imageRef: ghcr.io/siderolabs/i915:20250917-v1.11.3@sha256:e8db985ff2ef702d5f3989b0138e1b9dd5ac5e885a3adefa5b42ee6fa32b7027
- imageRef: ghcr.io/siderolabs/intel-ucode:20250812@sha256:31142ac037235e6779eea9f638e6399080a1f09e7c323ffa30b37488004057a5
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250917@sha256:7094e5db6931a1b68240416b65ddc0f3b546bd9b8520e3cfb1ddebcbfc83e890
- imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.11.3@sha256:4393756875751e2664a04e96c1ccff84c99958ca819dd93b46b82ad8f3b4be67
- imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.11.3@sha256:3c0b34a760914980ac234e66f130d829e428018e46420b7bca33219b1cc2dd87
- imageRef: ghcr.io/siderolabs/amd-ucode:20251125@sha256:aa2c684933d28cf10ef785f0d94f91d6d098e164374114648867cf81c2b585fe
- imageRef: ghcr.io/siderolabs/amdgpu:20251125-v1.11.6@sha256:4161f6de768d921a236aee8b06ee3eb18c4b93cfc178e3ae0f756e3a40929a93
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20251125@sha256:a7665b8c96cbdcf15dff6e39495ec389f576adf8a68ecfe20d6760884656d14c
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20251125@sha256:01458f60448e166eeb641ee989b941725cbe6759e10afe2251c6d1b1ca5ba1b7
- imageRef: ghcr.io/siderolabs/i915:20251125-v1.11.6@sha256:6896f63864d8ceed4a65f00304c1b26d90224127b37312d2e4bf33df14778e84
- imageRef: ghcr.io/siderolabs/intel-ucode:20250812@sha256:ba7a22ab69dfc8070d52fb70f58955257f0b586419f63573fb1e4618f57790eb
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20251125@sha256:485ef0a0b58328ded511e9a76014c353da24bb147c8b2929068827e5f9a22326
- imageRef: ghcr.io/siderolabs/drbd:9.2.16-v1.11.6@sha256:f87f92f25e203a3d49ced3d5bfe9d19278c309b157a5c049ff8c1fd53cf77707
- imageRef: ghcr.io/siderolabs/zfs:2.3.5-v1.11.6@sha256:29122979dce73dfd5c00d0a376de50905a9c4af99e0f18f91d77df2f6bdd0265
output:
kind: initramfs
imageOptions: {}


@@ -3,24 +3,24 @@
arch: amd64
platform: metal
secureboot: false
version: v1.11.3
version: v1.11.6
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: "ghcr.io/siderolabs/installer:v1.11.3"
imageRef: "ghcr.io/siderolabs/installer:v1.11.6"
systemExtensions:
- imageRef: ghcr.io/siderolabs/amd-ucode:20250917@sha256:ff11ee9f1565d9f9b095a3dc41fb7962b211169b2ef05d658a488398cb98e2d2
- imageRef: ghcr.io/siderolabs/amdgpu:20250917-v1.11.3@sha256:527b694ddbc4b40e9529d736bfe9874cc786773aa5a1070bbefe77feb9a8a304
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250917@sha256:ac6aaaa0d3312e72279a5cde7de0d71fb61774aa2f97a4e56dd914a9f1dde4d1
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250917@sha256:c25225c371e81485c64f339864ede410b560f07eb0fc2702a73315e977a6323d
- imageRef: ghcr.io/siderolabs/i915:20250917-v1.11.3@sha256:e8db985ff2ef702d5f3989b0138e1b9dd5ac5e885a3adefa5b42ee6fa32b7027
- imageRef: ghcr.io/siderolabs/intel-ucode:20250812@sha256:31142ac037235e6779eea9f638e6399080a1f09e7c323ffa30b37488004057a5
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250917@sha256:7094e5db6931a1b68240416b65ddc0f3b546bd9b8520e3cfb1ddebcbfc83e890
- imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.11.3@sha256:4393756875751e2664a04e96c1ccff84c99958ca819dd93b46b82ad8f3b4be67
- imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.11.3@sha256:3c0b34a760914980ac234e66f130d829e428018e46420b7bca33219b1cc2dd87
- imageRef: ghcr.io/siderolabs/amd-ucode:20251125@sha256:aa2c684933d28cf10ef785f0d94f91d6d098e164374114648867cf81c2b585fe
- imageRef: ghcr.io/siderolabs/amdgpu:20251125-v1.11.6@sha256:4161f6de768d921a236aee8b06ee3eb18c4b93cfc178e3ae0f756e3a40929a93
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20251125@sha256:a7665b8c96cbdcf15dff6e39495ec389f576adf8a68ecfe20d6760884656d14c
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20251125@sha256:01458f60448e166eeb641ee989b941725cbe6759e10afe2251c6d1b1ca5ba1b7
- imageRef: ghcr.io/siderolabs/i915:20251125-v1.11.6@sha256:6896f63864d8ceed4a65f00304c1b26d90224127b37312d2e4bf33df14778e84
- imageRef: ghcr.io/siderolabs/intel-ucode:20250812@sha256:ba7a22ab69dfc8070d52fb70f58955257f0b586419f63573fb1e4618f57790eb
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20251125@sha256:485ef0a0b58328ded511e9a76014c353da24bb147c8b2929068827e5f9a22326
- imageRef: ghcr.io/siderolabs/drbd:9.2.16-v1.11.6@sha256:f87f92f25e203a3d49ced3d5bfe9d19278c309b157a5c049ff8c1fd53cf77707
- imageRef: ghcr.io/siderolabs/zfs:2.3.5-v1.11.6@sha256:29122979dce73dfd5c00d0a376de50905a9c4af99e0f18f91d77df2f6bdd0265
output:
kind: installer
imageOptions: {}


@@ -3,24 +3,24 @@
arch: amd64
platform: metal
secureboot: false
version: v1.11.3
version: v1.11.6
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: "ghcr.io/siderolabs/installer:v1.11.3"
imageRef: "ghcr.io/siderolabs/installer:v1.11.6"
systemExtensions:
- imageRef: ghcr.io/siderolabs/amd-ucode:20250917@sha256:ff11ee9f1565d9f9b095a3dc41fb7962b211169b2ef05d658a488398cb98e2d2
- imageRef: ghcr.io/siderolabs/amdgpu:20250917-v1.11.3@sha256:527b694ddbc4b40e9529d736bfe9874cc786773aa5a1070bbefe77feb9a8a304
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250917@sha256:ac6aaaa0d3312e72279a5cde7de0d71fb61774aa2f97a4e56dd914a9f1dde4d1
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250917@sha256:c25225c371e81485c64f339864ede410b560f07eb0fc2702a73315e977a6323d
- imageRef: ghcr.io/siderolabs/i915:20250917-v1.11.3@sha256:e8db985ff2ef702d5f3989b0138e1b9dd5ac5e885a3adefa5b42ee6fa32b7027
- imageRef: ghcr.io/siderolabs/intel-ucode:20250812@sha256:31142ac037235e6779eea9f638e6399080a1f09e7c323ffa30b37488004057a5
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250917@sha256:7094e5db6931a1b68240416b65ddc0f3b546bd9b8520e3cfb1ddebcbfc83e890
- imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.11.3@sha256:4393756875751e2664a04e96c1ccff84c99958ca819dd93b46b82ad8f3b4be67
- imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.11.3@sha256:3c0b34a760914980ac234e66f130d829e428018e46420b7bca33219b1cc2dd87
- imageRef: ghcr.io/siderolabs/amd-ucode:20251125@sha256:aa2c684933d28cf10ef785f0d94f91d6d098e164374114648867cf81c2b585fe
- imageRef: ghcr.io/siderolabs/amdgpu:20251125-v1.11.6@sha256:4161f6de768d921a236aee8b06ee3eb18c4b93cfc178e3ae0f756e3a40929a93
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20251125@sha256:a7665b8c96cbdcf15dff6e39495ec389f576adf8a68ecfe20d6760884656d14c
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20251125@sha256:01458f60448e166eeb641ee989b941725cbe6759e10afe2251c6d1b1ca5ba1b7
- imageRef: ghcr.io/siderolabs/i915:20251125-v1.11.6@sha256:6896f63864d8ceed4a65f00304c1b26d90224127b37312d2e4bf33df14778e84
- imageRef: ghcr.io/siderolabs/intel-ucode:20250812@sha256:ba7a22ab69dfc8070d52fb70f58955257f0b586419f63573fb1e4618f57790eb
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20251125@sha256:485ef0a0b58328ded511e9a76014c353da24bb147c8b2929068827e5f9a22326
- imageRef: ghcr.io/siderolabs/drbd:9.2.16-v1.11.6@sha256:f87f92f25e203a3d49ced3d5bfe9d19278c309b157a5c049ff8c1fd53cf77707
- imageRef: ghcr.io/siderolabs/zfs:2.3.5-v1.11.6@sha256:29122979dce73dfd5c00d0a376de50905a9c4af99e0f18f91d77df2f6bdd0265
output:
kind: iso
imageOptions: {}


@@ -3,24 +3,24 @@
arch: amd64
platform: metal
secureboot: false
version: v1.11.3
version: v1.11.6
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: "ghcr.io/siderolabs/installer:v1.11.3"
imageRef: "ghcr.io/siderolabs/installer:v1.11.6"
systemExtensions:
- imageRef: ghcr.io/siderolabs/amd-ucode:20250917@sha256:ff11ee9f1565d9f9b095a3dc41fb7962b211169b2ef05d658a488398cb98e2d2
- imageRef: ghcr.io/siderolabs/amdgpu:20250917-v1.11.3@sha256:527b694ddbc4b40e9529d736bfe9874cc786773aa5a1070bbefe77feb9a8a304
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250917@sha256:ac6aaaa0d3312e72279a5cde7de0d71fb61774aa2f97a4e56dd914a9f1dde4d1
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250917@sha256:c25225c371e81485c64f339864ede410b560f07eb0fc2702a73315e977a6323d
- imageRef: ghcr.io/siderolabs/i915:20250917-v1.11.3@sha256:e8db985ff2ef702d5f3989b0138e1b9dd5ac5e885a3adefa5b42ee6fa32b7027
- imageRef: ghcr.io/siderolabs/intel-ucode:20250812@sha256:31142ac037235e6779eea9f638e6399080a1f09e7c323ffa30b37488004057a5
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250917@sha256:7094e5db6931a1b68240416b65ddc0f3b546bd9b8520e3cfb1ddebcbfc83e890
- imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.11.3@sha256:4393756875751e2664a04e96c1ccff84c99958ca819dd93b46b82ad8f3b4be67
- imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.11.3@sha256:3c0b34a760914980ac234e66f130d829e428018e46420b7bca33219b1cc2dd87
- imageRef: ghcr.io/siderolabs/amd-ucode:20251125@sha256:aa2c684933d28cf10ef785f0d94f91d6d098e164374114648867cf81c2b585fe
- imageRef: ghcr.io/siderolabs/amdgpu:20251125-v1.11.6@sha256:4161f6de768d921a236aee8b06ee3eb18c4b93cfc178e3ae0f756e3a40929a93
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20251125@sha256:a7665b8c96cbdcf15dff6e39495ec389f576adf8a68ecfe20d6760884656d14c
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20251125@sha256:01458f60448e166eeb641ee989b941725cbe6759e10afe2251c6d1b1ca5ba1b7
- imageRef: ghcr.io/siderolabs/i915:20251125-v1.11.6@sha256:6896f63864d8ceed4a65f00304c1b26d90224127b37312d2e4bf33df14778e84
- imageRef: ghcr.io/siderolabs/intel-ucode:20250812@sha256:ba7a22ab69dfc8070d52fb70f58955257f0b586419f63573fb1e4618f57790eb
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20251125@sha256:485ef0a0b58328ded511e9a76014c353da24bb147c8b2929068827e5f9a22326
- imageRef: ghcr.io/siderolabs/drbd:9.2.16-v1.11.6@sha256:f87f92f25e203a3d49ced3d5bfe9d19278c309b157a5c049ff8c1fd53cf77707
- imageRef: ghcr.io/siderolabs/zfs:2.3.5-v1.11.6@sha256:29122979dce73dfd5c00d0a376de50905a9c4af99e0f18f91d77df2f6bdd0265
output:
kind: kernel
imageOptions: {}


@@ -3,24 +3,24 @@
arch: amd64
platform: metal
secureboot: false
version: v1.11.3
version: v1.11.6
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: "ghcr.io/siderolabs/installer:v1.11.3"
imageRef: "ghcr.io/siderolabs/installer:v1.11.6"
systemExtensions:
- imageRef: ghcr.io/siderolabs/amd-ucode:20250917@sha256:ff11ee9f1565d9f9b095a3dc41fb7962b211169b2ef05d658a488398cb98e2d2
- imageRef: ghcr.io/siderolabs/amdgpu:20250917-v1.11.3@sha256:527b694ddbc4b40e9529d736bfe9874cc786773aa5a1070bbefe77feb9a8a304
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250917@sha256:ac6aaaa0d3312e72279a5cde7de0d71fb61774aa2f97a4e56dd914a9f1dde4d1
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250917@sha256:c25225c371e81485c64f339864ede410b560f07eb0fc2702a73315e977a6323d
- imageRef: ghcr.io/siderolabs/i915:20250917-v1.11.3@sha256:e8db985ff2ef702d5f3989b0138e1b9dd5ac5e885a3adefa5b42ee6fa32b7027
- imageRef: ghcr.io/siderolabs/intel-ucode:20250812@sha256:31142ac037235e6779eea9f638e6399080a1f09e7c323ffa30b37488004057a5
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250917@sha256:7094e5db6931a1b68240416b65ddc0f3b546bd9b8520e3cfb1ddebcbfc83e890
- imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.11.3@sha256:4393756875751e2664a04e96c1ccff84c99958ca819dd93b46b82ad8f3b4be67
- imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.11.3@sha256:3c0b34a760914980ac234e66f130d829e428018e46420b7bca33219b1cc2dd87
- imageRef: ghcr.io/siderolabs/amd-ucode:20251125@sha256:aa2c684933d28cf10ef785f0d94f91d6d098e164374114648867cf81c2b585fe
- imageRef: ghcr.io/siderolabs/amdgpu:20251125-v1.11.6@sha256:4161f6de768d921a236aee8b06ee3eb18c4b93cfc178e3ae0f756e3a40929a93
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20251125@sha256:a7665b8c96cbdcf15dff6e39495ec389f576adf8a68ecfe20d6760884656d14c
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20251125@sha256:01458f60448e166eeb641ee989b941725cbe6759e10afe2251c6d1b1ca5ba1b7
- imageRef: ghcr.io/siderolabs/i915:20251125-v1.11.6@sha256:6896f63864d8ceed4a65f00304c1b26d90224127b37312d2e4bf33df14778e84
- imageRef: ghcr.io/siderolabs/intel-ucode:20250812@sha256:ba7a22ab69dfc8070d52fb70f58955257f0b586419f63573fb1e4618f57790eb
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20251125@sha256:485ef0a0b58328ded511e9a76014c353da24bb147c8b2929068827e5f9a22326
- imageRef: ghcr.io/siderolabs/drbd:9.2.16-v1.11.6@sha256:f87f92f25e203a3d49ced3d5bfe9d19278c309b157a5c049ff8c1fd53cf77707
- imageRef: ghcr.io/siderolabs/zfs:2.3.5-v1.11.6@sha256:29122979dce73dfd5c00d0a376de50905a9c4af99e0f18f91d77df2f6bdd0265
output:
kind: image
imageOptions: { diskSize: 1306525696, diskFormat: raw }


@@ -3,24 +3,24 @@
arch: amd64
platform: nocloud
secureboot: false
version: v1.11.3
version: v1.11.6
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: "ghcr.io/siderolabs/installer:v1.11.3"
imageRef: "ghcr.io/siderolabs/installer:v1.11.6"
systemExtensions:
- imageRef: ghcr.io/siderolabs/amd-ucode:20250917@sha256:ff11ee9f1565d9f9b095a3dc41fb7962b211169b2ef05d658a488398cb98e2d2
- imageRef: ghcr.io/siderolabs/amdgpu:20250917-v1.11.3@sha256:527b694ddbc4b40e9529d736bfe9874cc786773aa5a1070bbefe77feb9a8a304
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250917@sha256:ac6aaaa0d3312e72279a5cde7de0d71fb61774aa2f97a4e56dd914a9f1dde4d1
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250917@sha256:c25225c371e81485c64f339864ede410b560f07eb0fc2702a73315e977a6323d
- imageRef: ghcr.io/siderolabs/i915:20250917-v1.11.3@sha256:e8db985ff2ef702d5f3989b0138e1b9dd5ac5e885a3adefa5b42ee6fa32b7027
- imageRef: ghcr.io/siderolabs/intel-ucode:20250812@sha256:31142ac037235e6779eea9f638e6399080a1f09e7c323ffa30b37488004057a5
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250917@sha256:7094e5db6931a1b68240416b65ddc0f3b546bd9b8520e3cfb1ddebcbfc83e890
- imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.11.3@sha256:4393756875751e2664a04e96c1ccff84c99958ca819dd93b46b82ad8f3b4be67
- imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.11.3@sha256:3c0b34a760914980ac234e66f130d829e428018e46420b7bca33219b1cc2dd87
- imageRef: ghcr.io/siderolabs/amd-ucode:20251125@sha256:aa2c684933d28cf10ef785f0d94f91d6d098e164374114648867cf81c2b585fe
- imageRef: ghcr.io/siderolabs/amdgpu:20251125-v1.11.6@sha256:4161f6de768d921a236aee8b06ee3eb18c4b93cfc178e3ae0f756e3a40929a93
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20251125@sha256:a7665b8c96cbdcf15dff6e39495ec389f576adf8a68ecfe20d6760884656d14c
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20251125@sha256:01458f60448e166eeb641ee989b941725cbe6759e10afe2251c6d1b1ca5ba1b7
- imageRef: ghcr.io/siderolabs/i915:20251125-v1.11.6@sha256:6896f63864d8ceed4a65f00304c1b26d90224127b37312d2e4bf33df14778e84
- imageRef: ghcr.io/siderolabs/intel-ucode:20250812@sha256:ba7a22ab69dfc8070d52fb70f58955257f0b586419f63573fb1e4618f57790eb
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20251125@sha256:485ef0a0b58328ded511e9a76014c353da24bb147c8b2929068827e5f9a22326
- imageRef: ghcr.io/siderolabs/drbd:9.2.16-v1.11.6@sha256:f87f92f25e203a3d49ced3d5bfe9d19278c309b157a5c049ff8c1fd53cf77707
- imageRef: ghcr.io/siderolabs/zfs:2.3.5-v1.11.6@sha256:29122979dce73dfd5c00d0a376de50905a9c4af99e0f18f91d77df2f6bdd0265
output:
kind: image
imageOptions: { diskSize: 1306525696, diskFormat: raw }


@@ -3,7 +3,7 @@ FROM ubuntu:22.04
ARG KUBECTL_VERSION=1.33.2
ARG TALOSCTL_VERSION=1.10.4
ARG HELM_VERSION=3.18.3
ARG COZYHR_VERSION=1.5.0
ARG COZYHR_VERSION=1.6.1
ARG TARGETOS
ARG TARGETARCH


@@ -1,2 +1,2 @@
e2e:
image: ghcr.io/cozystack/cozystack/e2e-sandbox:v0.40.0@sha256:78a556a059411cac89249c2e22596223128a1c74fbfcac4787fd74e4a66f66a6
image: ghcr.io/cozystack/cozystack/e2e-sandbox:v0.41.11@sha256:0eae9f519669667d60b160ebb93c127843c470ad9ca3447fceaa54604503a7ba


@@ -1 +1 @@
ghcr.io/cozystack/cozystack/matchbox:v0.40.0@sha256:653cec80b50dd9e6d7110cd20efbdeaac324ba2c638f14e122abf8e6bd436d83
ghcr.io/cozystack/cozystack/matchbox:v0.41.11@sha256:d11c034f1475d40e83f94a7f51a21082203c72346fe6a35fc931de976c0546c2


@@ -46,6 +46,15 @@ spec:
- name: metrics
containerPort: 2381
protocol: TCP
startupProbe:
failureThreshold: 300
periodSeconds: 5
livenessProbe:
failureThreshold: 10
periodSeconds: 10
readinessProbe:
failureThreshold: 3
periodSeconds: 5
{{- with .Values.resources }}
resources: {{- include "cozy-lib.resources.sanitize" (list . $) | nindent 10 }}
{{- end }}


@@ -1 +1 @@
ghcr.io/cozystack/cozystack/objectstorage-sidecar:v0.40.0@sha256:2d1833c78c35b697a3634d4b3be9a3218edae95a77583e9e121c10a92e7433ec
ghcr.io/cozystack/cozystack/objectstorage-sidecar:v0.41.11@sha256:2a3595cd88b30af55b2000d3ca204899beecef0012b0e0402754c3914aad1f7f


@@ -1 +1 @@
ghcr.io/cozystack/cozystack/s3manager:v0.5.0@sha256:ecb140d026ed72660306953a7eec140d7ac81e79544d5bbf1aba5f62aa5f8b69
ghcr.io/cozystack/cozystack/s3manager:v0.5.0@sha256:1f03fde12124b94b646532e3ebdebf62b8d87e42e0aa5576cd07c4559ce66403


@@ -79,7 +79,7 @@ annotations:
Cilium Gateway Class Config\n description: |\n CiliumGatewayClassConfig defines
a configuration for Gateway API GatewayClass.\n"
apiVersion: v2
appVersion: 1.18.5
appVersion: 1.18.6
description: eBPF-based Networking, Security, and Observability
home: https://cilium.io/
icon: https://cdn.jsdelivr.net/gh/cilium/cilium@main/Documentation/images/logo-solo.svg
@@ -95,4 +95,4 @@ kubeVersion: '>= 1.21.0-0'
name: cilium
sources:
- https://github.com/cilium/cilium
version: 1.18.5
version: 1.18.6


@@ -1,6 +1,6 @@
# cilium
![Version: 1.18.5](https://img.shields.io/badge/Version-1.18.5-informational?style=flat-square) ![AppVersion: 1.18.5](https://img.shields.io/badge/AppVersion-1.18.5-informational?style=flat-square)
![Version: 1.18.6](https://img.shields.io/badge/Version-1.18.6-informational?style=flat-square) ![AppVersion: 1.18.6](https://img.shields.io/badge/AppVersion-1.18.6-informational?style=flat-square)
Cilium is open source software for providing and transparently securing
network connectivity and loadbalancing between application workloads such as
@@ -85,7 +85,7 @@ contributors across the globe, there is almost always someone available to help.
| authentication.mutual.spire.install.agent.tolerations | list | `[{"effect":"NoSchedule","key":"node.kubernetes.io/not-ready"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"},{"effect":"NoSchedule","key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true"},{"key":"CriticalAddonsOnly","operator":"Exists"}]` | SPIRE agent tolerations configuration By default it follows the same tolerations as the agent itself to allow the Cilium agent on this node to connect to SPIRE. ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
| authentication.mutual.spire.install.enabled | bool | `true` | Enable SPIRE installation. This will only take effect only if authentication.mutual.spire.enabled is true |
| authentication.mutual.spire.install.existingNamespace | bool | `false` | SPIRE namespace already exists. Set to true if Helm should not create, manage, and import the SPIRE namespace. |
| authentication.mutual.spire.install.initImage | object | `{"digest":"sha256:d80cd694d3e9467884fcb94b8ca1e20437d8a501096cdf367a5a1918a34fc2fd","override":null,"pullPolicy":"IfNotPresent","repository":"docker.io/library/busybox","tag":"1.37.0","useDigest":true}` | init container image of SPIRE agent and server |
| authentication.mutual.spire.install.initImage | object | `{"digest":"sha256:2383baad1860bbe9d8a7a843775048fd07d8afe292b94bd876df64a69aae7cb1","override":null,"pullPolicy":"IfNotPresent","repository":"docker.io/library/busybox","tag":"1.37.0","useDigest":true}` | init container image of SPIRE agent and server |
| authentication.mutual.spire.install.namespace | string | `"cilium-spire"` | SPIRE namespace to install into |
| authentication.mutual.spire.install.server.affinity | object | `{}` | SPIRE server affinity configuration |
| authentication.mutual.spire.install.server.annotations | object | `{}` | SPIRE server annotations |
@@ -205,7 +205,7 @@ contributors across the globe, there is almost always someone available to help.
| clustermesh.apiserver.extraVolumeMounts | list | `[]` | Additional clustermesh-apiserver volumeMounts. |
| clustermesh.apiserver.extraVolumes | list | `[]` | Additional clustermesh-apiserver volumes. |
| clustermesh.apiserver.healthPort | int | `9880` | TCP port for the clustermesh-apiserver health API. |
| clustermesh.apiserver.image | object | `{"digest":"sha256:952f07c30390847e4d9dfaa19a76c4eca946251ffbc4f6459946570f93ee72f1","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/clustermesh-apiserver","tag":"v1.18.5","useDigest":true}` | Clustermesh API server image. |
| clustermesh.apiserver.image | object | `{"digest":"sha256:8ee142912a0e261850c0802d9256ddbe3729e1cd35c6bea2d93077f334c3cf3b","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/clustermesh-apiserver","tag":"v1.18.6","useDigest":true}` | Clustermesh API server image. |
| clustermesh.apiserver.kvstoremesh.enabled | bool | `true` | Enable KVStoreMesh. KVStoreMesh caches the information retrieved from the remote clusters in the local etcd instance (deprecated - KVStoreMesh will always be enabled once the option is removed). |
| clustermesh.apiserver.kvstoremesh.extraArgs | list | `[]` | Additional KVStoreMesh arguments. |
| clustermesh.apiserver.kvstoremesh.extraEnv | list | `[]` | Additional KVStoreMesh environment variables. |
@@ -394,7 +394,7 @@ contributors across the globe, there is almost always someone available to help.
| envoy.httpRetryCount | int | `3` | Maximum number of retries for each HTTP request |
| envoy.httpUpstreamLingerTimeout | string | `nil` | Time in seconds to block Envoy worker thread while an upstream HTTP connection is closing. If set to 0, the connection is closed immediately (with TCP RST). If set to -1, the connection is closed asynchronously in the background. |
| envoy.idleTimeoutDurationSeconds | int | `60` | Set Envoy upstream HTTP idle connection timeout seconds. Does not apply to connections with pending requests. Default 60s |
| envoy.image | object | `{"digest":"sha256:3108521821c6922695ff1f6ef24b09026c94b195283f8bfbfc0fa49356a156e1","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium-envoy","tag":"v1.34.12-1765374555-6a93b0bbba8d6dc75b651cbafeedb062b2997716","useDigest":true}` | Envoy container image. |
| envoy.image | object | `{"digest":"sha256:81398e449f2d3d0a6a70527e4f641aaa685d3156bea0bb30712fae3fd8822b86","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium-envoy","tag":"v1.35.9-1767794330-db497dd19e346b39d81d7b5c0dedf6c812bcc5c9","useDigest":true}` | Envoy container image. |
| envoy.initialFetchTimeoutSeconds | int | `30` | Time in seconds after which the initial fetch on an xDS stream is considered timed out |
| envoy.livenessProbe.enabled | bool | `true` | Enable liveness probe for cilium-envoy |
| envoy.livenessProbe.failureThreshold | int | `10` | failure threshold of liveness probe |
@@ -535,7 +535,7 @@ contributors across the globe, there is almost always someone available to help.
| hubble.relay.extraVolumes | list | `[]` | Additional hubble-relay volumes. |
| hubble.relay.gops.enabled | bool | `true` | Enable gops for hubble-relay |
| hubble.relay.gops.port | int | `9893` | Configure gops listen port for hubble-relay |
| hubble.relay.image | object | `{"digest":"sha256:17212962c92ff52384f94e407ffe3698714fcbd35c7575f67f24032d6224e446","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-relay","tag":"v1.18.5","useDigest":true}` | Hubble-relay container image. |
| hubble.relay.image | object | `{"digest":"sha256:fb6135e34c31e5f175cb5e75f86cea52ef2ff12b49bcefb7088ed93f5009eb8e","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-relay","tag":"v1.18.6","useDigest":true}` | Hubble-relay container image. |
| hubble.relay.listenHost | string | `""` | Host to listen to. Specify an empty string to bind to all the interfaces. |
| hubble.relay.listenPort | string | `"4245"` | Port to listen to. |
| hubble.relay.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
@@ -647,7 +647,7 @@ contributors across the globe, there is almost always someone available to help.
| identityAllocationMode | string | `"crd"` | Method to use for identity allocation (`crd`, `kvstore` or `doublewrite-readkvstore` / `doublewrite-readcrd` for migrating between identity backends). |
| identityChangeGracePeriod | string | `"5s"` | Time to wait before using new identity on endpoint identity change. |
| identityManagementMode | string | `"agent"` | Control whether CiliumIdentities are created by the agent ("agent"), the operator ("operator") or both ("both"). "Both" should be used only to migrate between "agent" and "operator". Operator-managed identities is a beta feature. |
| image | object | `{"digest":"sha256:2c92fb05962a346eaf0ce11b912ba434dc10bd54b9989e970416681f4a069628","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.18.5","useDigest":true}` | Agent container image. |
| image | object | `{"digest":"sha256:42ec562a5ff6c8a860c0639f5a7611685e253fd9eb2d2fcdade693724c9166a4","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.18.6","useDigest":true}` | Agent container image. |
| imagePullSecrets | list | `[]` | Configure image pull secrets for pulling container images |
| ingressController.default | bool | `false` | Set cilium ingress controller to be the default ingress controller This will let cilium ingress controller route entries without ingress class set |
| ingressController.defaultSecretName | string | `nil` | Default secret name for ingresses without .spec.tls[].secretName set. |
@@ -793,7 +793,7 @@ contributors across the globe, there is almost always someone available to help.
| operator.hostNetwork | bool | `true` | HostNetwork setting |
| operator.identityGCInterval | string | `"15m0s"` | Interval for identity garbage collection. |
| operator.identityHeartbeatTimeout | string | `"30m0s"` | Timeout for identity heartbeats. |
| operator.image | object | `{"alibabacloudDigest":"sha256:2e60f635495eb2837296ced5475875c281a05765d5ddd644a05e126bbb080b3c","awsDigest":"sha256:7608025d8b727a10f21d924d8e4f40beb176cefd690320433452816ad8776f52","azureDigest":"sha256:126667e000267f893cb81042bf8a710ad2f219619eb9ce06e8949333bd325ac6","genericDigest":"sha256:36c3f6f14c8ced7f45b40b0a927639894b44269dd653f9528e7a0dc363a4eb99","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/operator","suffix":"","tag":"v1.18.5","useDigest":true}` | cilium-operator image. |
| operator.image | object | `{"alibabacloudDigest":"sha256:212c4cbe27da3772bcb952b8f8cbaa0b0eef72488b52edf90ad2b32072a3ca4c","awsDigest":"sha256:47dbc1a5bd483fec170dab7fb0bf2cca3585a4893675b0324d41d97bac8be5eb","azureDigest":"sha256:a57aff47aeb32eccfedaa2a49d1af984d996d6d6de79609c232e0c4cf9ce97a1","genericDigest":"sha256:34a827ce9ed021c8adf8f0feca131f53b3c54a3ef529053d871d0347ec4d69af","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/operator","suffix":"","tag":"v1.18.6","useDigest":true}` | cilium-operator image. |
| operator.nodeGCInterval | string | `"5m0s"` | Interval for cilium node garbage collection. |
| operator.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for cilium-operator pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
| operator.podAnnotations | object | `{}` | Annotations to be added to cilium-operator pods |
@@ -842,11 +842,11 @@ contributors across the globe, there is almost always someone available to help.
| preflight.affinity | object | `{"podAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"k8s-app":"cilium"}},"topologyKey":"kubernetes.io/hostname"}]}}` | Affinity for cilium-preflight |
| preflight.annotations | object | `{}` | Annotations to be added to all top-level preflight objects (resources under templates/cilium-preflight) |
| preflight.enabled | bool | `false` | Enable Cilium pre-flight resources (required for upgrade) |
| preflight.envoy.image | object | `{"digest":"sha256:3108521821c6922695ff1f6ef24b09026c94b195283f8bfbfc0fa49356a156e1","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium-envoy","tag":"v1.34.12-1765374555-6a93b0bbba8d6dc75b651cbafeedb062b2997716","useDigest":true}` | Envoy pre-flight image. |
| preflight.envoy.image | object | `{"digest":"sha256:81398e449f2d3d0a6a70527e4f641aaa685d3156bea0bb30712fae3fd8822b86","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium-envoy","tag":"v1.35.9-1767794330-db497dd19e346b39d81d7b5c0dedf6c812bcc5c9","useDigest":true}` | Envoy pre-flight image. |
| preflight.extraEnv | list | `[]` | Additional preflight environment variables. |
| preflight.extraVolumeMounts | list | `[]` | Additional preflight volumeMounts. |
| preflight.extraVolumes | list | `[]` | Additional preflight volumes. |
| preflight.image | object | `{"digest":"sha256:2c92fb05962a346eaf0ce11b912ba434dc10bd54b9989e970416681f4a069628","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.18.5","useDigest":true}` | Cilium pre-flight image. |
| preflight.image | object | `{"digest":"sha256:42ec562a5ff6c8a860c0639f5a7611685e253fd9eb2d2fcdade693724c9166a4","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.18.6","useDigest":true}` | Cilium pre-flight image. |
| preflight.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for preflight pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
| preflight.podAnnotations | object | `{}` | Annotations to be added to preflight pods |
| preflight.podDisruptionBudget.enabled | bool | `false` | enable PodDisruptionBudget ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/ |


@@ -1,5 +1,5 @@
{{- $envoyDS := eq (include "envoyDaemonSetEnabled" .) "true" -}}
{{- if $envoyDS }}
{{- if (and $envoyDS (not .Values.preflight.enabled)) }}
---
apiVersion: v1


@@ -213,9 +213,6 @@ spec:
- name: envoy-artifacts
mountPath: /var/run/cilium/envoy/artifacts
readOnly: true
- name: envoy-config
mountPath: /var/run/cilium/envoy/
readOnly: true
{{- with .Values.preflight.resources }}
resources:
{{- toYaml . | trim | nindent 12 }}
@@ -280,14 +277,6 @@ spec:
hostPath:
path: "{{ .Values.daemon.runPath }}/envoy/artifacts"
type: DirectoryOrCreate
- name: envoy-config
configMap:
name: {{ .Values.envoy.bootstrapConfigMap | default "cilium-envoy-config" | quote }}
# note: the leading zero means this number is in octal representation: do not remove it
defaultMode: 0400
items:
- key: bootstrap-config.json
path: bootstrap-config.json
{{- end }}
{{- with .Values.preflight.extraVolumes }}
{{- toYaml . | nindent 6 }}


@@ -219,10 +219,10 @@ image:
# @schema
override: ~
repository: "quay.io/cilium/cilium"
tag: "v1.18.5"
tag: "v1.18.6"
pullPolicy: "IfNotPresent"
# cilium-digest
digest: sha256:2c92fb05962a346eaf0ce11b912ba434dc10bd54b9989e970416681f4a069628
digest: sha256:42ec562a5ff6c8a860c0639f5a7611685e253fd9eb2d2fcdade693724c9166a4
useDigest: true
# -- Scheduling configurations for cilium pods
scheduling:
@@ -1503,9 +1503,9 @@ hubble:
# @schema
override: ~
repository: "quay.io/cilium/hubble-relay"
tag: "v1.18.5"
tag: "v1.18.6"
# hubble-relay-digest
digest: sha256:17212962c92ff52384f94e407ffe3698714fcbd35c7575f67f24032d6224e446
digest: sha256:fb6135e34c31e5f175cb5e75f86cea52ef2ff12b49bcefb7088ed93f5009eb8e
useDigest: true
pullPolicy: "IfNotPresent"
# -- Specifies the resources for the hubble-relay pods
@@ -2465,9 +2465,9 @@ envoy:
# @schema
override: ~
repository: "quay.io/cilium/cilium-envoy"
tag: "v1.34.12-1765374555-6a93b0bbba8d6dc75b651cbafeedb062b2997716"
tag: "v1.35.9-1767794330-db497dd19e346b39d81d7b5c0dedf6c812bcc5c9"
pullPolicy: "IfNotPresent"
digest: "sha256:3108521821c6922695ff1f6ef24b09026c94b195283f8bfbfc0fa49356a156e1"
digest: "sha256:81398e449f2d3d0a6a70527e4f641aaa685d3156bea0bb30712fae3fd8822b86"
useDigest: true
# -- Additional containers added to the cilium Envoy DaemonSet.
extraContainers: []
@@ -2841,15 +2841,15 @@ operator:
# @schema
override: ~
repository: "quay.io/cilium/operator"
tag: "v1.18.5"
tag: "v1.18.6"
# operator-generic-digest
genericDigest: sha256:36c3f6f14c8ced7f45b40b0a927639894b44269dd653f9528e7a0dc363a4eb99
genericDigest: sha256:34a827ce9ed021c8adf8f0feca131f53b3c54a3ef529053d871d0347ec4d69af
# operator-azure-digest
azureDigest: sha256:126667e000267f893cb81042bf8a710ad2f219619eb9ce06e8949333bd325ac6
azureDigest: sha256:a57aff47aeb32eccfedaa2a49d1af984d996d6d6de79609c232e0c4cf9ce97a1
# operator-aws-digest
awsDigest: sha256:7608025d8b727a10f21d924d8e4f40beb176cefd690320433452816ad8776f52
awsDigest: sha256:47dbc1a5bd483fec170dab7fb0bf2cca3585a4893675b0324d41d97bac8be5eb
# operator-alibabacloud-digest
alibabacloudDigest: sha256:2e60f635495eb2837296ced5475875c281a05765d5ddd644a05e126bbb080b3c
alibabacloudDigest: sha256:212c4cbe27da3772bcb952b8f8cbaa0b0eef72488b52edf90ad2b32072a3ca4c
useDigest: true
pullPolicy: "IfNotPresent"
suffix: ""
@@ -3148,9 +3148,9 @@ preflight:
# @schema
override: ~
repository: "quay.io/cilium/cilium"
tag: "v1.18.5"
tag: "v1.18.6"
# cilium-digest
digest: sha256:2c92fb05962a346eaf0ce11b912ba434dc10bd54b9989e970416681f4a069628
digest: sha256:42ec562a5ff6c8a860c0639f5a7611685e253fd9eb2d2fcdade693724c9166a4
useDigest: true
pullPolicy: "IfNotPresent"
envoy:
@@ -3161,9 +3161,9 @@ preflight:
# @schema
override: ~
repository: "quay.io/cilium/cilium-envoy"
tag: "v1.34.12-1765374555-6a93b0bbba8d6dc75b651cbafeedb062b2997716"
tag: "v1.35.9-1767794330-db497dd19e346b39d81d7b5c0dedf6c812bcc5c9"
pullPolicy: "IfNotPresent"
digest: "sha256:3108521821c6922695ff1f6ef24b09026c94b195283f8bfbfc0fa49356a156e1"
digest: "sha256:81398e449f2d3d0a6a70527e4f641aaa685d3156bea0bb30712fae3fd8822b86"
useDigest: true
# -- The priority class to use for the preflight pod.
priorityClassName: ""
@@ -3317,9 +3317,9 @@ clustermesh:
# @schema
override: ~
repository: "quay.io/cilium/clustermesh-apiserver"
tag: "v1.18.5"
tag: "v1.18.6"
# clustermesh-apiserver-digest
digest: sha256:952f07c30390847e4d9dfaa19a76c4eca946251ffbc4f6459946570f93ee72f1
digest: sha256:8ee142912a0e261850c0802d9256ddbe3729e1cd35c6bea2d93077f334c3cf3b
useDigest: true
pullPolicy: "IfNotPresent"
# -- TCP port for the clustermesh-apiserver health API.
@@ -3849,7 +3849,7 @@ authentication:
override: ~
repository: "docker.io/library/busybox"
tag: "1.37.0"
digest: "sha256:d80cd694d3e9467884fcb94b8ca1e20437d8a501096cdf367a5a1918a34fc2fd"
digest: "sha256:2383baad1860bbe9d8a7a843775048fd07d8afe292b94bd876df64a69aae7cb1"
useDigest: true
pullPolicy: "IfNotPresent"
# SPIRE agent configuration


@@ -1,2 +1,2 @@
ARG VERSION=v1.18.5
ARG VERSION=v1.18.6
FROM quay.io/cilium/cilium:${VERSION}


@@ -15,8 +15,8 @@ cilium:
mode: "kubernetes"
image:
repository: ghcr.io/cozystack/cozystack/cilium
tag: 1.18.5
digest: "sha256:c14a0bcb1a1531c72725b3a4c40aefa2bcd5c129810bf58ea8e37d3fcff2a326"
tag: 1.18.6
digest: "sha256:4f4585f8adc3b8becd15d3999f3900a4d3d650f2ab7f85ca8c661f3807113d01"
envoy:
enabled: false
rollOutCiliumPods: true


@@ -6,3 +6,6 @@ coredns:
k8sAppLabelOverride: kube-dns
service:
name: kube-dns
serviceAccount:
create: true
name: kube-dns


@@ -1,5 +1,5 @@
cozystackAPI:
image: ghcr.io/cozystack/cozystack/cozystack-api:v0.40.0@sha256:408885b509d80ca30a85416f87d07112bc7f070374e71e80b64818fbb24ad1ee
image: ghcr.io/cozystack/cozystack/cozystack-api:v0.41.11@sha256:3a8cb618f140c60eb2a5afd3f07a5ec7e638ab4cd949ea0913abc372703a2d82
localK8sAPIEndpoint:
enabled: true
replicas: 2


@@ -1,6 +1,6 @@
cozystackController:
image: ghcr.io/cozystack/cozystack/cozystack-controller:v0.40.0@sha256:90cbb2f3fa2d30f9b9c9f623463ef11203a2b8d2105a52166aac566d2b984e59
image: ghcr.io/cozystack/cozystack/cozystack-controller:v0.41.11@sha256:8f1c725989e32706293afaea195d110d7690b06ad2e52742fce2bbe9f71cbe48
debug: false
disableTelemetry: false
cozystackVersion: "v0.40.0"
cozystackVersion: "v0.41.11"
cozystackAPIKind: "DaemonSet"


@@ -3,6 +3,21 @@ module token-proxy
go 1.24.0
require (
github.com/golang-jwt/jwt/v5 v5.3.0
github.com/gorilla/securecookie v1.1.2
github.com/lestrrat-go/httprc/v3 v3.0.2
github.com/lestrrat-go/jwx/v3 v3.0.13
)
require (
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 // indirect
github.com/goccy/go-json v0.10.3 // indirect
github.com/lestrrat-go/blackmagic v1.0.4 // indirect
github.com/lestrrat-go/dsig v1.0.0 // indirect
github.com/lestrrat-go/dsig-secp256k1 v1.0.0 // indirect
github.com/lestrrat-go/httpcc v1.0.1 // indirect
github.com/lestrrat-go/option/v2 v2.0.0 // indirect
github.com/segmentio/asm v1.2.1 // indirect
github.com/valyala/fastjson v1.6.7 // indirect
golang.org/x/crypto v0.46.0 // indirect
golang.org/x/sys v0.39.0 // indirect
)


@@ -1,6 +1,43 @@
github.com/golang-jwt/jwt/v5 v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo=
github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 h1:NMZiJj8QnKe1LgsbDayM4UoHwbvwDRwnI3hwNaAHRnc=
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0/go.mod h1:ZXNYxsqcloTdSy/rNShjYzMhyjf0LaoftYK0p+A3h40=
github.com/goccy/go-json v0.10.3 h1:KZ5WoDbxAIgm2HNbYckL0se1fHD6rz5j4ywS6ebzDqA=
github.com/goccy/go-json v0.10.3/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/gorilla/securecookie v1.1.2 h1:YCIWL56dvtr73r6715mJs5ZvhtnY73hBvEF8kXD8ePA=
github.com/gorilla/securecookie v1.1.2/go.mod h1:NfCASbcHqRSY+3a8tlWJwsQap2VX5pwzwo4h3eOamfo=
github.com/lestrrat-go/blackmagic v1.0.4 h1:IwQibdnf8l2KoO+qC3uT4OaTWsW7tuRQXy9TRN9QanA=
github.com/lestrrat-go/blackmagic v1.0.4/go.mod h1:6AWFyKNNj0zEXQYfTMPfZrAXUWUfTIZ5ECEUEJaijtw=
github.com/lestrrat-go/dsig v1.0.0 h1:OE09s2r9Z81kxzJYRn07TFM9XA4akrUdoMwr0L8xj38=
github.com/lestrrat-go/dsig v1.0.0/go.mod h1:dEgoOYYEJvW6XGbLasr8TFcAxoWrKlbQvmJgCR0qkDo=
github.com/lestrrat-go/dsig-secp256k1 v1.0.0 h1:JpDe4Aybfl0soBvoVwjqDbp+9S1Y2OM7gcrVVMFPOzY=
github.com/lestrrat-go/dsig-secp256k1 v1.0.0/go.mod h1:CxUgAhssb8FToqbL8NjSPoGQlnO4w3LG1P0qPWQm/NU=
github.com/lestrrat-go/httpcc v1.0.1 h1:ydWCStUeJLkpYyjLDHihupbn2tYmZ7m22BGkcvZZrIE=
github.com/lestrrat-go/httpcc v1.0.1/go.mod h1:qiltp3Mt56+55GPVCbTdM9MlqhvzyuL6W/NMDA8vA5E=
github.com/lestrrat-go/httprc/v3 v3.0.2 h1:7u4HUaD0NQbf2/n5+fyp+T10hNCsAnwKfqn4A4Baif0=
github.com/lestrrat-go/httprc/v3 v3.0.2/go.mod h1:mSMtkZW92Z98M5YoNNztbRGxbXHql7tSitCvaxvo9l0=
github.com/lestrrat-go/jwx/v3 v3.0.13 h1:AdHKiPIYeCSnOJtvdpipPg/0SuFh9rdkN+HF3O0VdSk=
github.com/lestrrat-go/jwx/v3 v3.0.13/go.mod h1:2m0PV1A9tM4b/jVLMx8rh6rBl7F6WGb3EG2hufN9OQU=
github.com/lestrrat-go/option/v2 v2.0.0 h1:XxrcaJESE1fokHy3FpaQ/cXW8ZsIdWcdFzzLOcID3Ss=
github.com/lestrrat-go/option/v2 v2.0.0/go.mod h1:oSySsmzMoR0iRzCDCaUfsCzxQHUEuhOViQObyy7S6Vg=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/segmentio/asm v1.2.1 h1:DTNbBqs57ioxAD4PrArqftgypG4/qNpXoJx8TVXxPR0=
github.com/segmentio/asm v1.2.1/go.mod h1:BqMnlJP91P8d+4ibuonYZw9mfnzI9HfxselHZr5aAcs=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/valyala/fastjson v1.6.7 h1:ZE4tRy0CIkh+qDc5McjatheGX2czdn8slQjomexVpBM=
github.com/valyala/fastjson v1.6.7/go.mod h1:CLCAqky6SMuOcxStkYQvblddUtoRxhYMGLrsQns1aXY=
golang.org/x/crypto v0.46.0 h1:cKRW/pmt1pKAfetfu+RCEvjvZkA9RimPbh7bhFjGVBU=
golang.org/x/crypto v0.46.0/go.mod h1:Evb/oLKmMraqjZ2iQTwDwvCtJkczlDuTmdJXoZVzqU0=
golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk=
golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=


@@ -1,6 +1,9 @@
package main
import (
"context"
"crypto/tls"
"crypto/x509"
"encoding/base64"
"encoding/json"
"flag"
@@ -13,10 +16,13 @@ import (
"os"
"path"
"strings"
"sync"
"time"
"github.com/golang-jwt/jwt/v5"
"github.com/gorilla/securecookie"
"github.com/lestrrat-go/httprc/v3"
"github.com/lestrrat-go/jwx/v3/jwk"
"github.com/lestrrat-go/jwx/v3/jwt"
)
/* ----------------------------- flags ------------------------------------ */
@@ -26,7 +32,9 @@ var (
cookieName, cookieSecretB64 string
cookieSecure bool
cookieRefresh time.Duration
tokenCheckURL string
jwksURL string
saTokenPath string
saCACertPath string
)
func init() {
@@ -38,7 +46,70 @@ func init() {
flag.StringVar(&cookieSecretB64, "cookie-secret", "", "Base64-encoded cookie secret")
flag.BoolVar(&cookieSecure, "cookie-secure", false, "Set Secure flag on cookie")
flag.DurationVar(&cookieRefresh, "cookie-refresh", 0, "Cookie refresh interval (e.g. 1h)")
flag.StringVar(&tokenCheckURL, "token-check-url", "", "URL for external token validation")
flag.StringVar(&jwksURL, "jwks-url", "https://kubernetes.default.svc/openid/v1/jwks", "JWKS URL for token verification")
flag.StringVar(&saTokenPath, "sa-token-path", "/var/run/secrets/kubernetes.io/serviceaccount/token", "Path to service account token")
flag.StringVar(&saCACertPath, "sa-ca-cert-path", "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt", "Path to service account CA certificate")
flag.Parse()
// Initialize jwkCache
ctx := context.Background()
// Load CA certificate
caCert, err := os.ReadFile(saCACertPath)
if err != nil {
jwkCacheErr := fmt.Errorf("failed to read CA cert: %w", err)
panic(jwkCacheErr)
}
caCertPool := x509.NewCertPool()
if !caCertPool.AppendCertsFromPEM(caCert) {
jwkCacheErr := fmt.Errorf("failed to parse CA cert")
panic(jwkCacheErr)
}
// Create transport with SA token injection
transport := &saTokenTransport{
base: &http.Transport{
TLSClientConfig: &tls.Config{
RootCAs: caCertPool,
},
},
tokenPath: saTokenPath,
}
transport.startRefresh(ctx, 5*time.Minute)
httpClient := &http.Client{
Transport: transport,
Timeout: 10 * time.Second,
}
// Create httprc client with custom HTTP client
httprcClient := httprc.NewClient(
httprc.WithHTTPClient(httpClient),
)
// Create JWK cache
jwkCache, err = jwk.NewCache(ctx, httprcClient)
if err != nil {
jwkCacheErr := fmt.Errorf("failed to create JWK cache: %w", err)
panic(jwkCacheErr)
}
// Register the JWKS URL with refresh settings
if err := jwkCache.Register(ctx, jwksURL,
jwk.WithMinInterval(5*time.Minute),
jwk.WithMaxInterval(15*time.Minute),
); err != nil {
jwkCacheErr := fmt.Errorf("failed to register JWKS URL: %w", err)
panic(jwkCacheErr)
}
// Perform initial fetch to ensure the JWKS is available
if _, err := jwkCache.Refresh(ctx, jwksURL); err != nil {
jwkCacheErr := fmt.Errorf("failed to fetch initial JWKS: %w", err)
panic(jwkCacheErr)
}
log.Printf("JWK cache initialized with JWKS URL: %s", jwksURL)
}
/* ----------------------------- templates -------------------------------- */
@@ -117,42 +188,94 @@ var loginTmpl = template.Must(template.New("login").Parse(`
</body>
</html>`))
-/* ----------------------------- helpers ---------------------------------- */
+/* ----------------------------- JWK cache -------------------------------- */
-func decodeJWT(raw string) jwt.MapClaims {
-if raw == "" {
-return jwt.MapClaims{}
-}
-tkn, _, err := new(jwt.Parser).ParseUnverified(raw, jwt.MapClaims{})
-if err != nil || tkn == nil {
-return jwt.MapClaims{}
-}
-if c, ok := tkn.Claims.(jwt.MapClaims); ok {
-return c
-}
-return jwt.MapClaims{}
-}
+var (
+jwkCache *jwk.Cache
+)
// saTokenTransport adds the service account token to requests and refreshes it periodically.
type saTokenTransport struct {
base http.RoundTripper
tokenPath string
mu sync.RWMutex
token string
}
-func externalTokenCheck(raw string) error {
-if tokenCheckURL == "" {
+func (t *saTokenTransport) RoundTrip(req *http.Request) (*http.Response, error) {
+t.mu.RLock()
+token := t.token
+t.mu.RUnlock()
+if token != "" {
+req = req.Clone(req.Context())
+req.Header.Set("Authorization", "Bearer "+token)
+}
+return t.base.RoundTrip(req)
}
func (t *saTokenTransport) refreshToken() {
data, err := os.ReadFile(t.tokenPath)
if err != nil {
log.Printf("warning: failed to read SA token: %v", err)
return
}
t.mu.Lock()
t.token = string(data)
t.mu.Unlock()
}
func (t *saTokenTransport) startRefresh(ctx context.Context, interval time.Duration) {
t.refreshToken()
go func() {
ticker := time.NewTicker(interval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
t.refreshToken()
}
}
}()
}
/* ----------------------------- helpers ---------------------------------- */
// verifyAndParseJWT verifies the token signature and returns the parsed token.
func verifyAndParseJWT(ctx context.Context, raw string) (jwt.Token, error) {
if raw == "" {
return nil, fmt.Errorf("empty token")
}
keySet, err := jwkCache.Lookup(ctx, jwksURL)
if err != nil {
return nil, fmt.Errorf("failed to get JWKS: %w", err)
}
token, err := jwt.Parse([]byte(raw), jwt.WithKeySet(keySet))
if err != nil {
return nil, fmt.Errorf("failed to verify token: %w", err)
}
return token, nil
}
+// getClaim extracts a claim value from a verified token.
+func getClaim(token jwt.Token, key string) any {
+if token == nil {
+return nil
+}
-req, _ := http.NewRequest(http.MethodGet, tokenCheckURL, nil)
-req.Header.Set("Authorization", "Bearer "+raw)
-cli := &http.Client{Timeout: 5 * time.Second}
-resp, err := cli.Do(req)
-if err != nil {
-return err
-}
+var val any
+if err := token.Get(key, &val); err != nil {
+return nil
+}
-resp.Body.Close()
-if resp.StatusCode != http.StatusOK {
-return fmt.Errorf("status %d", resp.StatusCode)
-}
-return nil
+return val
}
func encodeSession(sc *securecookie.SecureCookie, token string, exp, issued int64) (string, error) {
-v := map[string]interface{}{
+v := map[string]any{
"access_token": token,
"expires": exp,
"issued": issued,
@@ -166,7 +289,6 @@ func encodeSession(sc *securecookie.SecureCookie, token string, exp, issued int6
/* ----------------------------- main ------------------------------------- */
func main() {
flag.Parse()
if upstream == "" {
log.Fatal("--upstream is required")
}
@@ -214,7 +336,11 @@ func main() {
}{Action: signIn, Err: "Token required"})
return
}
-if err := externalTokenCheck(token); err != nil {
+// Verify token signature using JWKS
+verifiedToken, err := verifyAndParseJWT(r.Context(), token)
+if err != nil {
+log.Printf("token verification failed: %v", err)
_ = loginTmpl.Execute(w, struct {
Action string
Err string
@@ -223,9 +349,8 @@ func main() {
}
exp := time.Now().Add(24 * time.Hour).Unix()
-claims := decodeJWT(token)
-if v, ok := claims["exp"].(float64); ok {
-exp = int64(v)
+if expTime, ok := verifiedToken.Expiration(); ok && !expTime.IsZero() {
+exp = expTime.Unix()
}
session, _ := encodeSession(sc, token, exp, time.Now().Unix())
http.SetCookie(w, &http.Cookie{
@@ -264,7 +389,7 @@ func main() {
return
}
var token string
-var sess map[string]interface{}
+var sess map[string]any
if sc != nil {
if err := sc.Decode(cookieName, c.Value, &sess); err != nil {
http.Error(w, "unauthorized", http.StatusUnauthorized)
@@ -273,19 +398,25 @@ func main() {
token, _ = sess["access_token"].(string)
} else {
token = c.Value
-sess = map[string]interface{}{
+sess = map[string]any{
"expires": time.Now().Add(24 * time.Hour).Unix(),
"issued": time.Now().Unix(),
}
}
-claims := decodeJWT(token)
-out := map[string]interface{}{
+// Re-verify the token to ensure it's still valid
+verifiedToken, err := verifyAndParseJWT(r.Context(), token)
+if err != nil {
+http.Error(w, "unauthorized", http.StatusUnauthorized)
+return
+}
+out := map[string]any{
"token": token,
-"sub": claims["sub"],
-"email": claims["email"],
-"preferred_username": claims["preferred_username"],
-"groups": claims["groups"],
+"sub": getClaim(verifiedToken, "sub"),
+"email": getClaim(verifiedToken, "email"),
+"preferred_username": getClaim(verifiedToken, "preferred_username"),
+"groups": getClaim(verifiedToken, "groups"),
"expires": sess["expires"],
"issued": sess["issued"],
"cookie_refresh_enable": cookieRefresh > 0,
@@ -303,7 +434,7 @@ func main() {
return
}
var token string
-var sess map[string]interface{}
+var sess map[string]any
if sc != nil {
if err := sc.Decode(cookieName, c.Value, &sess); err != nil {
http.Redirect(w, r, signIn, http.StatusFound)
@@ -312,7 +443,7 @@ func main() {
token, _ = sess["access_token"].(string)
} else {
token = c.Value
-sess = map[string]interface{}{
+sess = map[string]any{
"expires": time.Now().Add(24 * time.Hour).Unix(),
"issued": time.Now().Unix(),
}

View File

@@ -1,6 +1,6 @@
{{- $brandingConfig := .Values._cluster.branding | default dict }}
{{- $tenantText := "v0.40.0" }}
{{- $tenantText := "v0.41.11" }}
{{- $footerText := "Cozystack" }}
{{- $titleText := "Cozystack Dashboard" }}
{{- $logoText := "" }}

View File

@@ -2,3 +2,30 @@ apiVersion: v1
kind: ServiceAccount
metadata:
name: incloud-web-gatekeeper
{{- $oidcEnabled := index .Values._cluster "oidc-enabled" }}
{{- if ne $oidcEnabled "true" }}
---
# ClusterRole to allow token-proxy to fetch JWKS for JWT verification
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: incloud-web-gatekeeper-jwks
rules:
- nonResourceURLs:
- /openid/v1/jwks
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: incloud-web-gatekeeper-jwks
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: incloud-web-gatekeeper-jwks
subjects:
- kind: ServiceAccount
name: incloud-web-gatekeeper
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@@ -64,6 +64,7 @@ spec:
- --cookie-secure=true
- --cookie-secret=$(OAUTH2_PROXY_COOKIE_SECRET)
- --skip-provider-button
- --scope=openid email profile offline_access
env:
- name: OAUTH2_PROXY_CLIENT_ID
value: dashboard
@@ -88,7 +89,6 @@ spec:
- --cookie-name=kc-access
- --cookie-secure=true
- --cookie-secret=$(TOKEN_PROXY_COOKIE_SECRET)
-- --token-check-url=http://incloud-web-nginx.{{ .Release.Namespace }}.svc:8080/api/clusters/default/k8s/apis/core.cozystack.io/v1alpha1/tenantnamespaces
env:
- name: TOKEN_PROXY_COOKIE_SECRET
valueFrom:

View File

@@ -66,6 +66,12 @@ spec:
defaultClientScopes:
- groups
- kubernetes-client
optionalClientScopes:
- offline_access
attributes:
post.logout.redirect.uris: "+"
client.session.idle.timeout: "86400"
client.session.max.lifespan: "604800"
redirectUris:
- "https://dashboard.{{ $host }}/oauth2/callback/*"
{{- range $i, $v := $extraRedirectUris }}

View File

@@ -63,6 +63,13 @@ spec:
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
startupProbe:
httpGet:
path: /healthcheck
port: 64231
scheme: HTTP
failureThreshold: 30
periodSeconds: 2
name: bff
ports:
- containerPort: 64231
@@ -183,6 +190,13 @@ spec:
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
startupProbe:
httpGet:
path: /healthcheck
port: 8080
scheme: HTTP
failureThreshold: 30
periodSeconds: 2
name: web
ports:
- containerPort: 8080

View File

@@ -1,6 +1,6 @@
openapiUI:
image: ghcr.io/cozystack/cozystack/openapi-ui:v0.40.0@sha256:665d5553d445c71d6007f440841943167a33e403cdca447b510b6f919e16a657
image: ghcr.io/cozystack/cozystack/openapi-ui:v0.41.11@sha256:87dfcda3aaaade114e099a3bd8fbb4479a20a761d60849dd2fe47ba245db7cb8
openapiUIK8sBff:
image: ghcr.io/cozystack/cozystack/openapi-ui-k8s-bff:v0.40.0@sha256:fda379dce49c2cd8cb8d7d2a1d8ec6f7bedb3419c058c4355ecdece1c1e937f4
image: ghcr.io/cozystack/cozystack/openapi-ui-k8s-bff:v0.41.11@sha256:0ee55b703839497b7d8264000c3f39c3688b550de1047eb754577523c810fa79
tokenProxy:
image: ghcr.io/cozystack/cozystack/token-proxy:v0.40.0@sha256:4fc8a11f8a1a81aa0774ae2b1ed2e05d36d0b3ef1e37979cc4994e65114d93ae
image: ghcr.io/cozystack/cozystack/token-proxy:v0.41.11@sha256:2e280991e07853ea48f97b0a42946afffa10d03d6a83d41099ed83e6ffc94fdc

View File

@@ -38,8 +38,8 @@
| kubeRbacProxy.args[2] | string | `"--logtostderr=true"` | |
| kubeRbacProxy.args[3] | string | `"--v=0"` | |
| kubeRbacProxy.image.pullPolicy | string | `"IfNotPresent"` | Image pull policy |
-| kubeRbacProxy.image.repository | string | `"gcr.io/kubebuilder/kube-rbac-proxy"` | Image repository |
-| kubeRbacProxy.image.tag | string | `"v0.16.0"` | Version of image |
+| kubeRbacProxy.image.repository | string | `"quay.io/brancz/kube-rbac-proxy"` | Image repository |
+| kubeRbacProxy.image.tag | string | `"v0.18.1"` | Version of image |
| kubeRbacProxy.livenessProbe | object | `{}` | https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/ |
| kubeRbacProxy.readinessProbe | object | `{}` | https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/ |
| kubeRbacProxy.resources | object | `{"limits":{"cpu":"250m","memory":"128Mi"},"requests":{"cpu":"100m","memory":"64Mi"}}` | ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |

View File

@@ -98,13 +98,13 @@ kubeRbacProxy:
image:
# -- Image repository
-repository: gcr.io/kubebuilder/kube-rbac-proxy
+repository: quay.io/brancz/kube-rbac-proxy
# -- Image pull policy
pullPolicy: IfNotPresent
# -- Version of image
-tag: v0.16.0
+tag: v0.18.1
args:
- --secure-listen-address=0.0.0.0:8443

View File

@@ -0,0 +1,31 @@
diff --git a/internal/builders/controlplane/deployment.go b/internal/builders/controlplane/deployment.go
index e7f0b88..d67a851 100644
--- a/internal/builders/controlplane/deployment.go
+++ b/internal/builders/controlplane/deployment.go
@@ -376,7 +376,7 @@ func (d Deployment) buildScheduler(podSpec *corev1.PodSpec, tenantControlPlane k
TimeoutSeconds: 1,
PeriodSeconds: 10,
SuccessThreshold: 1,
- FailureThreshold: 3,
+ FailureThreshold: 30,
}
switch {
@@ -469,7 +469,7 @@ func (d Deployment) buildControllerManager(podSpec *corev1.PodSpec, tenantContro
TimeoutSeconds: 1,
PeriodSeconds: 10,
SuccessThreshold: 1,
- FailureThreshold: 3,
+ FailureThreshold: 30,
}
switch {
case tenantControlPlane.Spec.ControlPlane.Deployment.Resources == nil:
@@ -600,7 +600,7 @@ func (d Deployment) buildKubeAPIServer(podSpec *corev1.PodSpec, tenantControlPla
TimeoutSeconds: 1,
PeriodSeconds: 10,
SuccessThreshold: 1,
- FailureThreshold: 3,
+ FailureThreshold: 30,
}
podSpec.Containers[index].ImagePullPolicy = corev1.PullAlways
// Volume mounts

View File

@@ -3,7 +3,7 @@ kamaji:
deploy: false
image:
pullPolicy: IfNotPresent
-tag: v0.40.0@sha256:4588de4380fb70c29c4a762fb19a9bbe210e68bc5ff67035c752c44daf319bfc
+tag: v0.41.11@sha256:9ac09f817c67de652bacedcdc0390cd343401879b6c1a1c28131a0f109af3804
repository: ghcr.io/cozystack/cozystack/kamaji
resources:
limits:
@@ -13,4 +13,4 @@ kamaji:
cpu: 100m
memory: 100Mi
extraArgs:
-- --migrate-image=ghcr.io/cozystack/cozystack/kamaji:v0.40.0@sha256:4588de4380fb70c29c4a762fb19a9bbe210e68bc5ff67035c752c44daf319bfc
+- --migrate-image=ghcr.io/cozystack/cozystack/kamaji:v0.41.11@sha256:9ac09f817c67de652bacedcdc0390cd343401879b6c1a1c28131a0f109af3804

View File

@@ -5,12 +5,6 @@
{{- $existingKubeappsSecret := lookup "v1" "Secret" .Release.Namespace "kubeapps-client" }}
{{- $existingAuthConfig := lookup "v1" "Secret" "cozy-dashboard" "kubeapps-auth-config" }}
{{- $brandingConfig := .Values._cluster.branding | default dict }}
{{ $branding := "" }}
{{- if $brandingConfig }}
{{- $branding = $brandingConfig.branding }}
{{- end }}
---
apiVersion: v1.edp.epam.com/v1alpha1
@@ -32,9 +26,15 @@ metadata:
spec:
realmName: cozy
clusterKeycloakRef: keycloak-cozy
-{{- if $branding }}
-displayHtmlName: {{ $branding }}
-displayName: {{ $branding }}
+{{- if $brandingConfig }}
+{{- if hasKey $brandingConfig "brandName" }}
+displayName: {{ $brandingConfig.brandName }}
+{{- end }}
+{{- if hasKey $brandingConfig "brandHtmlName" }}
+displayHtmlName: {{ $brandingConfig.brandHtmlName }}
+{{- else if hasKey $brandingConfig "branding" }}
+displayHtmlName: {{ $brandingConfig.branding }}
+{{- end }}
{{- end }}
---

View File

@@ -1,4 +1,4 @@
portSecurity: true
routes: ""
-image: ghcr.io/cozystack/cozystack/kubeovn-plunger:v0.40.0@sha256:e1e47c30b2eef93497163e15d94fbddca40e416769705557881d23dd537c5591
+image: ghcr.io/cozystack/cozystack/kubeovn-plunger:v0.41.11@sha256:50dcf0aa177d8b88949d15cdbbb225f4ac06677048111b5d8ff4910d6ec97d11
ovnCentralName: ovn-central

View File

@@ -1,3 +1,3 @@
portSecurity: true
routes: ""
-image: ghcr.io/cozystack/cozystack/kubeovn-webhook:v0.40.0@sha256:e18f9fd679e38f65362a8d0042f25468272f6d081136ad47027168d8e7e07a4a
+image: ghcr.io/cozystack/cozystack/kubeovn-webhook:v0.41.11@sha256:e18f9fd679e38f65362a8d0042f25468272f6d081136ad47027168d8e7e07a4a

View File

@@ -1,5 +1,3 @@
-KUBEOVN_TAG=v0.40.0
export NAME=kubeovn
export NAMESPACE=cozy-$(NAME)
@@ -8,6 +6,6 @@ include ../../../scripts/package.mk
update:
rm -rf charts values.yaml Chart.yaml
-tag=$(KUBEOVN_TAG) && \
-curl -sSL https://github.com/cozystack/kubeovn/archive/refs/tags/$${tag}.tar.gz | \
-tar xzvf - --strip 2 kubeovn-$${tag#*v}/chart
+tag=$$(git ls-remote --tags --sort="v:refname" https://github.com/cozystack/kubeovn-chart | awk -F'[/^]' 'END{print $$3}') && \
+curl -sSL https://github.com/cozystack/kubeovn-chart/archive/refs/tags/$${tag}.tar.gz | \
+tar xzvf - --strip 2 kubeovn-chart-$${tag#*v}/chart

View File

@@ -15,12 +15,12 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
-version: v1.14.25
+version: v1.15.3
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.14.25"
appVersion: "1.15.3"
kubeVersion: ">= 1.29.0-0"

View File

@@ -69,7 +69,9 @@ Number of master nodes
{{- $imageVersion := (index $ds.spec.template.spec.containers 0).image | splitList ":" | last | trimPrefix "v" -}}
{{- $versionRegex := `^(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)` -}}
{{- if and (ne $newChartVersion $chartVersion) (regexMatch $versionRegex $imageVersion) -}}
-{{- if regexFind $versionRegex $imageVersion | semverCompare ">= 1.13.0" -}}
+{{- if regexFind $versionRegex $imageVersion | semverCompare ">= 1.15.0" -}}
+25.03
+{{- else if regexFind $versionRegex $imageVersion | semverCompare ">= 1.13.0" -}}
24.03
{{- else if regexFind $versionRegex $imageVersion | semverCompare ">= 1.12.0" -}}
22.12

View File

@@ -122,6 +122,7 @@ spec:
limits:
cpu: {{ index .Values "ovn-central" "limits" "cpu" }}
memory: {{ index .Values "ovn-central" "limits" "memory" }}
ephemeral-storage: {{ index .Values "ovn-central" "limits" "ephemeral-storage" }}
volumeMounts:
- mountPath: /var/run/ovn
name: host-run-ovn

View File

@@ -101,6 +101,7 @@ spec:
- --pod-nic-type={{- .Values.networking.POD_NIC_TYPE }}
- --enable-lb={{- .Values.func.ENABLE_LB }}
- --enable-np={{- .Values.func.ENABLE_NP }}
- --np-enforcement={{- .Values.func.NP_ENFORCEMENT }}
- --enable-eip-snat={{- .Values.networking.ENABLE_EIP_SNAT }}
- --enable-external-vpc={{- .Values.func.ENABLE_EXTERNAL_VPC }}
- --enable-ecmp={{- .Values.networking.ENABLE_ECMP }}
@@ -117,11 +118,14 @@ spec:
- --secure-serving={{- .Values.func.SECURE_SERVING }}
- --enable-ovn-ipsec={{- .Values.func.ENABLE_OVN_IPSEC }}
- --enable-anp={{- .Values.func.ENABLE_ANP }}
- --enable-dns-name-resolver={{- .Values.func.ENABLE_DNS_NAME_RESOLVER }}
- --ovsdb-con-timeout={{- .Values.func.OVSDB_CON_TIMEOUT }}
- --ovsdb-inactivity-timeout={{- .Values.func.OVSDB_INACTIVITY_TIMEOUT }}
- --enable-live-migration-optimize={{- .Values.func.ENABLE_LIVE_MIGRATION_OPTIMIZE }}
- --enable-ovn-lb-prefer-local={{- .Values.func.ENABLE_OVN_LB_PREFER_LOCAL }}
- --image={{ .Values.global.registry.address }}/{{ .Values.global.images.kubeovn.repository }}:{{ .Values.global.images.kubeovn.tag }}
- --skip-conntrack-dst-cidrs={{- .Values.networking.SKIP_CONNTRACK_DST_CIDRS }}
- --non-primary-cni-mode={{- .Values.cni_conf.NON_PRIMARY_CNI }}
securityContext:
runAsUser: {{ include "kubeovn.runAsUser" . }}
privileged: false
@@ -140,11 +144,7 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
-- name: KUBE_NAMESPACE
-valueFrom:
-fieldRef:
-fieldPath: metadata.namespace
-- name: KUBE_NODE_NAME
+- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
@@ -194,6 +194,7 @@ spec:
limits:
cpu: {{ index .Values "kube-ovn-controller" "limits" "cpu" }}
memory: {{ index .Values "kube-ovn-controller" "limits" "memory" }}
ephemeral-storage: {{ index .Values "kube-ovn-controller" "limits" "ephemeral-storage" }}
nodeSelector:
kubernetes.io/os: "linux"
volumes:

View File

@@ -100,6 +100,7 @@ spec:
limits:
cpu: 3
memory: 1Gi
ephemeral-storage: 1Gi
volumeMounts:
- mountPath: /var/run/ovn
name: host-run-ovn

Some files were not shown because too many files have changed in this diff.