Compare commits


73 Commits

Author SHA1 Message Date
Andrei Kvapil
b1dac3c3c9 Release v0.41.11 (#2185)
This PR prepares the release `v0.41.11`.
2026-03-10 21:21:40 +01:00
cozystack-bot
ab9643c35e Prepare release v0.41.11
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-03-10 11:48:01 +00:00
Andrei Kvapil
c720bde0e9 fix(etcd-operator): replace deprecated kube-rbac-proxy image (#2181)
## Summary
- Replace deprecated `gcr.io/kubebuilder/kube-rbac-proxy:v0.16.0` with
`quay.io/brancz/kube-rbac-proxy:v0.18.1` in the vendored etcd-operator
chart
- The GCR-hosted image became unavailable after March 18, 2025

Fixes #2172 #488

2026-03-10 12:38:58 +01:00
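The image swap described above is a one-line change in the vendored chart's values; as a sketch (the exact key path inside the etcd-operator chart is an assumption):

```yaml
# etcd-operator vendored chart values (illustrative key path)
kubeRbacProxy:
  image:
    # old, no longer pullable after 2025-03-18:
    # gcr.io/kubebuilder/kube-rbac-proxy:v0.16.0
    repository: quay.io/brancz/kube-rbac-proxy
    tag: v0.18.1
```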
Andrei Kvapil
c7b2f60d18 Release v0.41.10 (#2139)
This PR prepares the release `v0.41.10`.
2026-03-04 00:24:11 +01:00
cozystack-bot
2a766df6e0 Prepare release v0.41.10
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-03-03 01:36:20 +00:00
Andrei Kvapil
d2ac669b29 fix(platform): correct cozy-proxy releaseName to avoid conflict with installer (#2127)
## What this PR does

Fixes cozy-proxy `releaseName` from `cozystack` to `cozy-proxy` in
paas-full and
distro-full bundles.

The cozy-proxy component was incorrectly configured with `releaseName:
cozystack`,
which is the same name used by the installer helm release. During
upgrade to v1.0,
the cozy-proxy HelmRelease reconciles and overwrites the installer
release, deleting
the cozystack-operator deployment.

### Release note

```release-note
[platform] Fix cozy-proxy releaseName collision with installer that caused operator deletion during v1.0 upgrade
```
2026-03-02 12:57:26 +01:00
Andrei Kvapil
e7bfa9b138 fix(platform): correct cozy-proxy releaseName to avoid conflict with installer
The cozy-proxy component was incorrectly configured with
releaseName: cozystack, which collides with the installer helm release
name. This causes the cozy-proxy HelmRelease to overwrite the installer
release during upgrade to v1.0, deleting the cozystack-operator.

Change releaseName from "cozystack" to "cozy-proxy" in both paas-full
and distro-full bundles.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-03-02 12:55:22 +01:00
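As a sketch, the offending bundle entry and its fix would look roughly like this (field names other than `releaseName` are illustrative of a Cozystack bundle definition):

```yaml
# paas-full / distro-full bundle entry (sketch)
releases:
  - name: cozy-proxy
    # releaseName: cozystack   # old value: collided with the installer's Helm release
    releaseName: cozy-proxy    # fixed: unique name, no longer overwrites the installer
    chart: cozy-proxy
    namespace: cozy-system
```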
Andrei Kvapil
d5a5d31354 Release v0.41.9 (#2078)
This PR prepares the release `v0.41.9`.
2026-02-21 21:48:10 +01:00
cozystack-bot
dd67bd56c4 Prepare release v0.41.9
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-02-21 01:37:37 +00:00
Andrei Kvapil
513b2e20df Update Kube-OVN to v1.15.3
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-02-20 10:51:09 +01:00
Andrei Kvapil
8d8f7defd7 fix(cozystack-basics) Deny resourcequotas deletion for tenant admin (#2076)

```release-note
Fixed cozy:tenant:admin:base ClusterRole to deny deletion of tenant ResourceQuotas for the tenant admin and superadmin
```

2026-02-20 10:28:12 +01:00
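A minimal sketch of the resulting RBAC shape: read-only verbs on `resourcequotas` with `delete` omitted (the actual `cozy:tenant:admin:base` ClusterRole contains more rules than shown here):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cozy:tenant:admin:base
rules:
  - apiGroups: [""]
    resources: ["resourcequotas"]
    # no "delete" here, so tenant admins can inspect quotas
    # but cannot remove the limits imposed on their tenant
    verbs: ["get", "list", "watch"]
```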
Andrei Kvapil
7bcc3a3d01 Release v0.41.8 (#2029)
This PR prepares the release `v0.41.8`.
2026-02-11 17:09:01 +01:00
cozystack-bot
ff10d684da Prepare release v0.41.8
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-02-11 11:20:45 +00:00
Andrei Kvapil
dfb280d091 [Backport release-0.41] [dashboard] Add startupProbe to prevent container restarts on slow hardware (#2014)
# Description
Backport of #1996 to `release-0.41`.
2026-02-10 12:30:39 +01:00
Andrei Kvapil
32b1bc843a [Backport release-0.41] [vm] allow switching between instancetype and custom resources (#2013)
# Description
Backport of #2008 to `release-0.41`.
2026-02-10 12:30:11 +01:00
Andrei Kvapil
2c87a83949 [Backport release-0.41] feat(kubernetes): auto-enable Gateway API support in cert-manager (#2012)
# Description
Backport of #1997 to `release-0.41`.
2026-02-10 12:29:57 +01:00
Andrei Kvapil
a53df5eb90 fix(dashboard): add startupProbe to prevent container restarts on slow hardware
Kubelet kills bff and web containers on slow hardware because the
livenessProbe only allows 33 seconds for startup. Add startupProbe
with failureThreshold=30 and periodSeconds=2, giving containers up
to 60 seconds to start before livenessProbe kicks in.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
(cherry picked from commit 330cbe70d4)
2026-02-10 11:22:41 +00:00
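The arithmetic above (failureThreshold × periodSeconds = 30 × 2 = 60 s of startup grace) maps to a container spec along these lines (probe endpoint and port are illustrative):

```yaml
containers:
  - name: bff
    startupProbe:
      httpGet:
        path: /healthz   # illustrative endpoint
        port: 8080
      failureThreshold: 30   # up to 30 failed checks...
      periodSeconds: 2       # ...2 s apart = 60 s before the kubelet gives up
    livenessProbe:           # only starts counting once the startupProbe succeeds
      httpGet:
        path: /healthz
        port: 8080
```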
Kirill Ilin
b212dc02f3 [vm] add validation for resources
Signed-off-by: Kirill Ilin <stitch14@yandex.ru>
(cherry picked from commit 13d848efc3)
2026-02-10 11:20:54 +00:00
Kirill Ilin
ec50052ea4 [vm] allow switching between instancetype and custom resources
Implemented by upgrade hook atomically patching VM resource

Signed-off-by: Kirill Ilin <stitch14@yandex.ru>
(cherry picked from commit cf2c6bc15f)
2026-02-10 11:20:54 +00:00
Andrei Kvapil
9b61d1318c feat(kubernetes): auto-enable Gateway API support in cert-manager
When the Gateway API addon is enabled, automatically configure
cert-manager with enableGatewayAPI: true. Uses the same default
values + mergeOverwrite pattern as Cilium for consistency.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
(cherry picked from commit 90ac6de475)
2026-02-10 11:20:04 +00:00
Andrei Kvapil
1c3a5f721c Release v0.41.7 (#1995)
This PR prepares the release `v0.41.7`.
2026-02-06 08:56:31 +01:00
cozystack-bot
6274f91c74 Prepare release v0.41.7
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-02-06 01:40:42 +00:00
Andrei Kvapil
f347b4fd70 [Backport release-0.41] fix(postgres-operator): correct PromQL syntax in CNPGClusterOffline alert (#1989)
# Description
Backport of #1981 to `release-0.41`.
2026-02-05 20:34:33 +01:00
mattia-eleuteri
40d51f4f92 fix(postgres-operator): correct PromQL syntax in CNPGClusterOffline alert
Remove extra closing parenthesis in the CNPGClusterOffline alert expression
that causes vmalert pods to crash with "bad prometheus expr" error.

Signed-off-by: mattia-eleuteri <mattia@hidora.io>
(cherry picked from commit 2cb299e602)
2026-02-05 19:25:36 +00:00
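An illustrative example of this class of error (not the actual CNPGClusterOffline expression): a single unbalanced parenthesis makes the whole rule group unloadable, which is what crashed vmalert:

```yaml
groups:
  - name: cnpg
    rules:
      - alert: CNPGClusterOffline
        # expr: (count(up{job="cnpg"} == 0)) > 0)   # extra ')' -> "bad prometheus expr"
        expr: (count(up{job="cnpg"} == 0)) > 0      # balanced: parses cleanly
```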
Andrei Kvapil
38c73ae3bd [Backport release-0.41] [dashboard] Verify JWT token (#1983)
# Description
Backport of #1980 to `release-0.41`.
2026-02-05 09:39:31 +01:00
Timofei Larkin
0496a1b0e8 [dashboard] Verify JWT token
## What this PR does

When OIDC is disabled, the dashboard's token-proxy now properly
validates bearer tokens against the k8s API's JWKS url.

### Release note

```release-note
[dashboard] Verify bearer tokens against the issuer's JWKS url.
```

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
(cherry picked from commit 23e399bd9a)
2026-02-04 15:11:15 +00:00
Andrei Kvapil
b49a6d1152 Release v0.41.6 (#1979)
This PR prepares the release `v0.41.6`.
2026-02-04 04:02:38 +01:00
cozystack-bot
0dac208d43 Prepare release v0.41.6
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-02-04 01:41:27 +00:00
Andrei Kvapil
30adc52ce3 [Backport release-0.41] fix coredns serviceaccount to match kubernetes bootstrap rbac (#1978)
# Description
Backport of #1958 to `release-0.41`.
2026-02-04 02:04:27 +01:00
mattia-eleuteri
044dae0d1e fix coredns serviceaccount to match kubernetes bootstrap rbac
The Kubernetes bootstrap creates a ClusterRoleBinding 'system:kube-dns'
that references ServiceAccount 'kube-dns' in 'kube-system'. However,
the coredns chart was using the 'default' ServiceAccount because
serviceAccount.create was not enabled.

This caused CoreDNS pods to fail with 'Failed to watch' errors after
restarts, as they lacked RBAC permissions to watch the Kubernetes API.

Configure the chart to create the 'kube-dns' ServiceAccount, which
matches the expected binding from Kubernetes bootstrap.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Signed-off-by: mattia-eleuteri <mattia@hidora.io>
(cherry picked from commit 7320edd71d)
2026-02-04 01:02:13 +00:00
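In chart-values terms, the fix described above is enabling ServiceAccount creation under the name the bootstrap RBAC expects (standard keys of the coredns chart):

```yaml
# coredns chart values
serviceAccount:
  create: true      # previously unset, so pods ran as "default"
  name: kube-dns    # matches the system:kube-dns ClusterRoleBinding subject
```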
Andrei Kvapil
26e083a71e [Backport release-0.41] [1.0][branding] Separate values for keycloak (#1963)
# Description
Backport of #1947 to `release-0.41`.
2026-02-03 13:05:54 +01:00
Andrei Kvapil
8468711545 [Backport release-0.41] [vm] allow changing field external after creation (#1962)
# Description
Backport of #1956 to `release-0.41`.
2026-02-03 13:05:44 +01:00
Andrei Kvapil
462ab1bdcb Release v0.41.5 (#1936)
This PR prepares the release `v0.41.5`.
2026-02-03 08:35:52 +01:00
cozystack-bot
a3821162af Prepare release v0.41.5
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-02-03 01:39:08 +00:00
Andrei Kvapil
0838bafdb9 [Backport release-0.41] fix manifests for kubernetes deployment (#1945)
# Description
Backport of #1943 to `release-0.41`.
2026-02-02 22:08:03 +01:00
Andrei Kvapil
9723992410 [0.41][branding] Separate values for keycloak (#1946)
## What this PR does
Adds separate values to keycloak branding.

### Release note
```release-note
Added separate values to keycloak branding
```
2026-02-02 22:07:08 +01:00
nbykov0
3b904d83a8 [branding] Separate values for keycloak
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
(cherry picked from commit 8a034c58b1)
2026-02-02 21:06:57 +00:00
Kirill Ilin
96b801b06b [vm] allow changing field external after creation
Service will be recreated

Signed-off-by: Kirill Ilin <stitch14@yandex.ru>
(cherry picked from commit 3a8e8fc290)
2026-02-02 21:05:22 +00:00
Andrei Kvapil
4048234b9d [Backport release-0.41] Add instance profile label to workload monitor (#1957)
# Description
Backport of #1954 to `release-0.41`.
2026-02-02 22:04:28 +01:00
Timofei Larkin
b8d32fb894 Apply suggestion from @gemini-code-assist[bot]
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
(cherry picked from commit 09cd9e05c3)
2026-02-02 14:15:16 +00:00
Matthieu ROBIN
eae630ffb5 Update internal/controller/workloadmonitor_controller.go
Co-authored-by: Timofei Larkin <lllamnyp@gmail.com>
Signed-off-by: Matthieu ROBIN <info@matthieurobin.com>
(cherry picked from commit 3f59ce4876)
2026-02-02 14:15:16 +00:00
Matthieu ROBIN
c514d7525b Add instance profile label to workload monitor
Signed-off-by: Matthieu ROBIN <info@matthieurobin.com>
(cherry picked from commit 1e8da1fca4)
2026-02-02 14:15:16 +00:00
nbykov0
d0bb00f3cd [branding] Separate values for keycloak
Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2026-02-02 12:51:05 +03:00
IvanHunters
6db4bb15d2 fix manifests for kubernetes deployment
Signed-off-by: IvanHunters <xorokhotnikov@gmail.com>
(cherry picked from commit 281715b365)
2026-02-02 09:33:24 +00:00
Andrei Kvapil
4f3502456f [Backport release-0.41] [dashboard] Add resource quota usage to tenant details page (#1932)
# Description
Backport of #1929 to `release-0.41`.
2026-01-29 10:31:39 +01:00
Andrei Kvapil
8d803cd619 [Backport release-0.41] [dashboard] Add "Edit" button to all resources (#1931)
# Description
Backport of #1928 to `release-0.41`.
2026-01-29 10:31:28 +01:00
Kirill Ilin
7ebcc0d264 [dashboard] Add resource quota usage to tenant info resource
Signed-off-by: Kirill Ilin <stitch14@yandex.ru>
(cherry picked from commit 9e63bd533c)
2026-01-29 09:30:42 +00:00
Kirill Ilin
21e7183375 [dashboard] Add "Edit" button to all resources
Signed-off-by: Kirill Ilin <stitch14@yandex.ru>
(cherry picked from commit a56fc00c5c)
2026-01-29 09:29:12 +00:00
Andrei Kvapil
760c732ed6 Release v0.41.4 (#1926)
This PR prepares the release `v0.41.4`.
2026-01-29 10:08:40 +01:00
cozystack-bot
c7e54262f1 Prepare release v0.41.4
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-01-29 01:39:21 +00:00
Andrei Kvapil
6d772811dd Update cozyhr v1.6.1
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-01-27 23:10:34 +01:00
Andrei Kvapil
c8dbad7dc9 Release v0.41.3 (#1916)
This PR prepares the release `v0.41.3`.
2026-01-27 18:56:59 +01:00
Andrei Kvapil
f54a1e6911 [Backport release-0.41] [dashboard] Improve dashboard session params (#1920)
# Description
Backport of #1913 to `release-0.41`.
2026-01-27 18:56:38 +01:00
Timofei Larkin
ca3d5e1a5b [dashboard] Improve dashboard session params
## What this PR does

This patch enables the `offline_access` scope for the dashboard keycloak
client, so that users get a refresh token which gatekeeper can use to
automatically refresh an expiring access token. Session timeouts were
also increased.

### Release note

```release-note
[dashboard] Increase session timeouts, add the offline_access scope,
enable refresh tokens to improve the overall user experience when
working with the dashboard.
```

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
(cherry picked from commit 7cebafbafd)
2026-01-27 17:18:43 +00:00
cozystack-bot
e67aed0e6c Prepare release v0.41.3
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-01-27 01:37:01 +00:00
Andrei Kvapil
0129f20ae4 [Backport release-0.41] [kubernetes] show Service and Ingress resources for kubernetes app in… (#1915)
# Description
Backport of #1912 to `release-0.41`.
2026-01-26 18:36:14 +01:00
Andrei Kvapil
58e2b4c632 [Backport release-0.41] [dashboard] Fix filtering on Pods tab for Service (#1914)
# Description
Backport of #1909 to `release-0.41`.
2026-01-26 18:36:05 +01:00
Kirill Ilin
056a0d801e [kubernetes] show Service and Ingress resources for kubernetes app in dashboard
Signed-off-by: Kirill Ilin <stitch14@yandex.ru>
(cherry picked from commit befbdf0964)
2026-01-26 17:35:10 +00:00
Kirill Ilin
f8b2aa8343 [dashboard] Fix filtering on Pods tab for Service
Signed-off-by: Kirill Ilin <stitch14@yandex.ru>
(cherry picked from commit ee759dd11e)
2026-01-26 17:33:29 +00:00
Andrei Kvapil
f1f1ff5681 Release v0.41.2 (#1907)
This PR prepares the release `v0.41.2`.
2026-01-23 09:07:49 +01:00
cozystack-bot
d4ffce1ff6 Prepare release v0.41.2
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-01-23 01:34:38 +00:00
Andrei Kvapil
a24174a0c3 [Backport release-0.41] [monitoring-agents] Set minReplicas to 1 for VPA for VMAgent (#1905)
# Description
Backport of #1894 to `release-0.41`.
2026-01-22 23:18:11 +01:00
Kirill Ilin
d91f1f1882 [monitoring-agents] Set minReplicas to 1 for VPA for VMAgent
Signed-off-by: Kirill Ilin <stitch14@yandex.ru>
(cherry picked from commit 207a5171f0)
2026-01-22 22:17:25 +00:00
Andrei Kvapil
e38b7a0afd [Backport release-0.41] [mongodb] Remove user-configurable images from MongoDB chart (#1904)
# Description
Backport of #1901 to `release-0.41`.
2026-01-22 23:16:34 +01:00
Andrei Kvapil
247e19252f [mongodb] Remove user-configurable images from MongoDB chart
Remove the ability for users to specify custom container images
(images.pmm and images.backup) in the MongoDB application values.

This is a security hardening measure - allowing users to specify
arbitrary container images could lead to running malicious or
compromised images, supply chain attacks, or privilege escalation.

The images are now hardcoded in the template:
- percona/pmm-client:2.44.1
- percona/percona-backup-mongodb:2.11.0

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
(cherry picked from commit beb6e1a0ba)
2026-01-22 22:15:29 +00:00
Andrei Kvapil
211d57b17a Release v0.41.1 (#1900)
This PR prepares the release `v0.41.1`.
2026-01-22 09:14:27 +01:00
cozystack-bot
6d3f5bbc60 Prepare release v0.41.1
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-01-22 01:34:35 +00:00
Andrei Kvapil
4532603bbd [Backport release-0.41] [kubernetes] Add enum validation for IngressNginx exposeMethod (#1897)
# Description
Backport of #1895 to `release-0.41`.
2026-01-21 13:55:36 +01:00
Kirill Ilin
73cd3edbb7 [kubernetes] Add enum validation for IngressNginx exposeMethod
Signed-off-by: Kirill Ilin <stitch14@yandex.ru>
(cherry picked from commit 0b95a72fa3)
2026-01-21 12:55:19 +00:00
Andrei Kvapil
ea466820fc Release v0.41.0 (#1889)
This PR prepares the release `v0.41.0`.
2026-01-20 03:32:28 +01:00
cozystack-bot
31422aba38 Prepare release v0.41.0
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-01-20 01:21:24 +00:00
Andrei Kvapil
8f6a09bca5 [Backport release-0.41][apps] Add MongoDB managed application (#1881)
# Description
Backport of #1822 to `release-0.41`.
2026-01-20 02:15:08 +01:00
Aleksei Sviridkin
6ce60917c7 feat(apps): add MongoDB managed application
Add MongoDB managed service based on Percona Operator for MongoDB with:

- Replica set mode (default) and sharded cluster mode
- Configurable replicas, storage, and resource presets
- Custom users with role-based access control
- S3-compatible backup with PITR support
- Bootstrap/restore from backup
- External access support
- WorkloadMonitor integration for dashboard
- Comprehensive helm-unittest test coverage (91 tests)

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-01-19 13:45:57 +01:00
116 changed files with 30185 additions and 126 deletions


@@ -0,0 +1,31 @@
#!/usr/bin/env bats

@test "Create DB MongoDB" {
  name='test'
  kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: MongoDB
metadata:
  name: $name
  namespace: tenant-test
spec:
  external: false
  size: 10Gi
  replicas: 1
  storageClass: ""
  resourcesPreset: "nano"
  backup:
    enabled: false
EOF
  sleep 5
  # Wait for HelmRelease
  kubectl -n tenant-test wait hr mongodb-$name --timeout=60s --for=condition=ready
  # Wait for MongoDB service (port 27017)
  timeout 120 sh -ec "until kubectl -n tenant-test get svc mongodb-$name-rs0 -o jsonpath='{.spec.ports[0].port}' | grep -q '27017'; do sleep 10; done"
  # Wait for endpoints
  timeout 180 sh -ec "until kubectl -n tenant-test get endpoints mongodb-$name-rs0 -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
  # Wait for StatefulSet replicas
  kubectl -n tenant-test wait statefulset.apps/mongodb-$name-rs0 --timeout=300s --for=jsonpath='{.status.replicas}'=1
  # Cleanup
  kubectl -n tenant-test delete mongodbs.apps.cozystack.io $name
}


@@ -34,9 +34,6 @@ func (m *Manager) ensureCustomColumnsOverride(ctx context.Context, crd *cozyv1al
    obj.SetName(name)
    href := fmt.Sprintf("/openapi-ui/{2}/{reqsJsonPath[0]['.metadata.namespace']['-']}/factory/%s/{reqsJsonPath[0]['.metadata.name']['-']}", detailsSegment)
    if g == "apps.cozystack.io" && kind == "Tenant" && plural == "tenants" {
        href = "/openapi-ui/{2}/{reqsJsonPath[0]['.status.namespace']['-']}/api-table/core.cozystack.io/v1alpha1/tenantmodules"
    }
    desired := map[string]any{
        "spec": map[string]any{


@@ -174,6 +174,48 @@ func detailsTab(kind, endpoint, schemaJSON string, keysOrder [][]string) map[str
            }),
        )
    }
    if kind == "Info" {
        rightColStack = append(rightColStack,
            antdFlexVertical("resource-quotas-block", 4, []any{
                antdText("resource-quotas-label", true, "Resource Quotas", map[string]any{
                    "fontSize":     float64(20),
                    "marginBottom": float64(12),
                }),
                map[string]any{
                    "type": "EnrichedTable",
                    "data": map[string]any{
                        "id":                   "resource-quotas-table",
                        "baseprefix":           "/openapi-ui",
                        "clusterNamePartOfUrl": "{2}",
                        "customizationId":      "factory-resource-quotas",
                        "fetchUrl":             "/api/clusters/{2}/k8s/api/v1/namespaces/{3}/resourcequotas",
                        "pathToItems":          []any{`items`},
                    },
                },
            }),
        )
    }
    if kind == "Tenant" {
        rightColStack = append(rightColStack,
            antdFlexVertical("resource-quotas-block", 4, []any{
                antdText("resource-quotas-label", true, "Resource Quotas", map[string]any{
                    "fontSize":     float64(20),
                    "marginBottom": float64(12),
                }),
                map[string]any{
                    "type": "EnrichedTable",
                    "data": map[string]any{
                        "id":                   "resource-quotas-table",
                        "baseprefix":           "/openapi-ui",
                        "clusterNamePartOfUrl": "{2}",
                        "customizationId":      "factory-resource-quotas",
                        "fetchUrl":             "/api/clusters/{2}/k8s/api/v1/namespaces/{3}/resourcequotas",
                        "pathToItems":          []any{`items`},
                    },
                },
            }),
        )
    }
    return map[string]any{
        "key": "details",


@@ -189,6 +189,14 @@ func CreateAllCustomColumnsOverrides() []*dashboardv1alpha1.CustomColumnsOverrid
        createStringColumn("Values", "_flatMapData_Value"),
    }),
    // Factory resource quotas
    createCustomColumnsOverride("factory-resource-quotas", []any{
        createFlatMapColumn("Data", ".spec.hard"),
        createStringColumn("Resource", "_flatMapData_Key"),
        createStringColumn("Hard", "_flatMapData_Value"),
        createStringColumn("Used", ".status.used['{_flatMapData_Key}']"),
    }),
    // Factory ingress details rules
    createCustomColumnsOverride("factory-kube-ingress-details-rules", []any{
        createStringColumn("Host", ".host"),
@@ -1120,7 +1128,7 @@ func CreateAllFactories() []*dashboardv1alpha1.Factory {
            "clusterNamePartOfUrl": "{2}",
            "customizationId":      "factory-node-details-/v1/pods",
            "fetchUrl":             "/api/clusters/{2}/k8s/api/v1/namespaces/{3}/pods",
            "labelsSelectorFull": map[string]any{
            "labelSelectorFull": map[string]any{
                "pathToLabels": ".spec.selector",
                "reqIndex":     0,
            },


@@ -102,6 +102,22 @@ func antdFlex(id string, gap float64, children []any) map[string]any {
    }
}

func antdFlexSpaceBetween(id string, children []any) map[string]any {
    if id == "" {
        id = generateContainerID("auto", "flex")
    }
    return map[string]any{
        "type": "antdFlex",
        "data": map[string]any{
            "id":      id,
            "align":   "center",
            "justify": "space-between",
        },
        "children": children,
    }
}

func antdFlexVertical(id string, gap float64, children []any) map[string]any {
    // Auto-generate ID if not provided
    if id == "" {


@@ -237,9 +237,16 @@ func createUnifiedFactory(config UnifiedResourceConfig, tabs []any, urlsToFetch
        "lineHeight": "24px",
    })
    header := antdFlex(generateContainerID("header", "row"), float64(6), []any{
        badge,
        nameText,
    header := antdFlexSpaceBetween(generateContainerID("header", "row"), []any{
        antdFlex(generateContainerID("header", "title-text"), float64(6), []any{
            badge,
            nameText,
        }),
        antdLink(generateLinkID("header", "edit"),
            "Edit",
            fmt.Sprintf("/openapi-ui/{2}/{3}/forms/apis/{reqsJsonPath[0]['.apiVersion']['-']}/%s/{reqsJsonPath[0]['.metadata.name']['-']}",
                config.Plural),
        ),
    })
    // Add marginBottom style to header


@@ -467,5 +467,8 @@ func (r *WorkloadMonitorReconciler) getWorkloadMetadata(obj client.Object) map[s
    if instanceType, ok := annotations["kubevirt.io/cluster-instancetype-name"]; ok {
        labels["workloads.cozystack.io/kubevirt-vmi-instance-type"] = instanceType
    }
    if instanceProfile, ok := annotations["kubevirt.io/cluster-instanceprofile-name"]; ok {
        labels["workloads.cozystack.io/kubevirt-vmi-instance-profile"] = instanceProfile
    }
    return labels
}


@@ -1 +1 @@
ghcr.io/cozystack/cozystack/nginx-cache:0.0.0@sha256:9e34fd50393b418d9516aadb488067a3a63675b045811beb1c0afc9c61e149e8
ghcr.io/cozystack/cozystack/nginx-cache:0.0.0@sha256:cb25e40cb665b8bbeee8cb1ec39da4c9a7452ef3f2f371912bbc0d1b1e2d40a8


@@ -1 +1 @@
ghcr.io/cozystack/cozystack/cluster-autoscaler:0.0.0@sha256:598331326f0c2aac420187f0cc3a49fedcb22ed5de4afe50c6ccf8e05d9fa537
ghcr.io/cozystack/cozystack/cluster-autoscaler:0.0.0@sha256:3753b735b0315bee90de54cb25cfebc63bd2cc90ad11ca4fdc0e70439abd5096


@@ -1 +1 @@
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:0b208ed506dd8c453426761d93ec3d42c9d1b791ba6c91b01c6386dcb1b02442
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:bb5b17044969e663c3b391f7274883735c0ffe05a9523988469bdf2974de2dea


@@ -1 +1 @@
ghcr.io/cozystack/cozystack/ubuntu-container-disk:v1.33@sha256:71a74ca30f75967bae309be2758f19aa3d37c60b19426b9b622ff1c33a80362f
ghcr.io/cozystack/cozystack/ubuntu-container-disk:v1.33@sha256:9d4ad080ef729e0f9f1f5919cb85c0c9b6dc772a22d52046b2de9ccba3772715


@@ -10,3 +10,8 @@ data:
  enableEPSController: true
  selectorless: true
  namespace: {{ .Release.Namespace }}
  infraLabels:
    apps.cozystack.io/application.group: apps.cozystack.io
    apps.cozystack.io/application.kind: Kubernetes
    apps.cozystack.io/application.name: {{ .Release.Name | trimPrefix "kubernetes-" }}
    internal.cozystack.io/tenantresource: "true"


@@ -1,3 +1,13 @@
{{- define "cozystack.defaultCertManagerValues" -}}
{{- if $.Values.addons.gatewayAPI.enabled }}
cert-manager:
  config:
    apiVersion: controller.config.cert-manager.io/v1alpha1
    kind: ControllerConfiguration
    enableGatewayAPI: true
{{- end }}
{{- end }}
{{- if .Values.addons.certManager.enabled }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
@@ -33,11 +43,8 @@ spec:
      force: true
      remediation:
        retries: -1
  {{- with .Values.addons.certManager.valuesOverride }}
  values:
    {{- toYaml . | nindent 4 }}
  {{- end }}
    {{- toYaml (deepCopy .Values.addons.certManager.valuesOverride | mergeOverwrite (fromYaml (include "cozystack.defaultCertManagerValues" .))) | nindent 4 }}
  dependsOn:
  {{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" .Release.Namespace .Release.Name }}
  - name: {{ .Release.Name }}
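The `mergeOverwrite` pattern above layers user overrides on top of the generated defaults; a hedged sketch of the resulting precedence, assuming Gateway API is enabled and the user overrides only a hypothetical `replicaCount`:

```yaml
# defaults from cozystack.defaultCertManagerValues:
#   cert-manager: { config: { enableGatewayAPI: true, ... } }
# user valuesOverride:
#   cert-manager: { replicaCount: 2 }
# merged values passed to the HelmRelease (override keys win on conflict):
cert-manager:
  config:
    apiVersion: controller.config.cert-manager.io/v1alpha1
    kind: ControllerConfiguration
    enableGatewayAPI: true
  replicaCount: 2
```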


@@ -14,6 +14,11 @@ metadata:
      }
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
  labels:
    apps.cozystack.io/application.group: apps.cozystack.io
    apps.cozystack.io/application.kind: Kubernetes
    apps.cozystack.io/application.name: {{ .Release.Name | trimPrefix "kubernetes-" }}
    internal.cozystack.io/tenantresource: "true"
spec:
  ingressClassName: "{{ $ingress }}"
  rules:
@@ -41,6 +46,11 @@ apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-ingress-nginx
  labels:
    apps.cozystack.io/application.group: apps.cozystack.io
    apps.cozystack.io/application.kind: Kubernetes
    apps.cozystack.io/application.name: {{ .Release.Name | trimPrefix "kubernetes-" }}
    internal.cozystack.io/tenantresource: "true"
spec:
  ports:
  - appProtocol: http


@@ -150,7 +150,11 @@
        "exposeMethod": {
          "description": "Method to expose the controller. Allowed values: `Proxied`, `LoadBalancer`.",
          "type": "string",
          "default": "Proxied"
          "default": "Proxied",
          "enum": [
            "Proxied",
            "LoadBalancer"
          ]
        },
        "hosts": {
          "description": "Domains routed to this tenant cluster when `exposeMethod` is `Proxied`.",


@@ -76,9 +76,13 @@ host: ""
## @typedef {struct} GatewayAPIAddon - Gateway API addon.
## @field {bool} enabled - Enable Gateway API.
## @enum {string} IngressNginxExposeMethod - Method to expose the controller
## @value Proxied
## @value LoadBalancer
## @typedef {struct} IngressNginxAddon - Ingress-NGINX controller.
## @field {bool} enabled - Enable the controller (requires nodes labeled `ingress-nginx`).
## @field {string} exposeMethod - Method to expose the controller. Allowed values: `Proxied`, `LoadBalancer`.
## @field {IngressNginxExposeMethod} exposeMethod - Method to expose the controller. Allowed values: `Proxied`, `LoadBalancer`.
## @field {[]string} hosts - Domains routed to this tenant cluster when `exposeMethod` is `Proxied`.
## @field {object} valuesOverride - Custom Helm values overrides.


@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -0,0 +1,7 @@
apiVersion: v2
name: mongodb
description: Managed MongoDB service
icon: /logos/mongodb.svg
type: application
version: 0.0.0 # Placeholder, the actual version will be automatically set during the build process
appVersion: "8.0"


@@ -0,0 +1,11 @@
include ../../../scripts/package.mk

.PHONY: generate update

generate:
	cozyvalues-gen -v values.yaml -s values.schema.json -r README.md
	../../../hack/update-crd.sh

update:
	hack/update-versions.sh
	make generate


@@ -0,0 +1,104 @@
# Managed MongoDB Service
MongoDB is a popular document-oriented NoSQL database known for its flexibility and scalability.
The Managed MongoDB Service provides a self-healing replicated cluster managed by the Percona Operator for MongoDB.
## Deployment Details
This managed service is controlled by the Percona Operator for MongoDB, ensuring efficient management and seamless operation.
- Docs: <https://docs.percona.com/percona-operator-for-mongodb/>
- GitHub: <https://github.com/percona/percona-server-mongodb-operator>
## Deployment Modes
### Replica Set Mode (default)
By default, MongoDB deploys as a replica set with the specified number of replicas.
This mode is suitable for most use cases requiring high availability.
### Sharded Cluster Mode
Enable `sharding: true` for horizontal scaling across multiple shards.
Each shard is a replica set, and mongos routers handle query routing.
## Notes
### External Access
When `external: true` is enabled:
- **Replica Set mode**: Traffic is load-balanced across all replica set members. This works well for read operations, but write operations require connecting to the primary. MongoDB drivers handle primary discovery automatically using the replica set connection string.
- **Sharded mode**: Traffic is routed through mongos routers, which handle both reads and writes correctly.
### Credentials
On first install, the credentials secret will be empty until the Percona operator initializes the cluster.
Run `helm upgrade` after MongoDB is ready to populate the credentials secret with the actual password.
## Parameters
### Common parameters
| Name | Description | Type | Value |
| ------------------ | --------------------------------------------------------------------------------------------------------------------------------- | ---------- | ------- |
| `replicas` | Number of MongoDB replicas in replica set. | `int` | `3` |
| `resources` | Explicit CPU and memory configuration for each MongoDB replica. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
| `resources.cpu` | CPU available to each replica. | `quantity` | `""` |
| `resources.memory` | Memory (RAM) available to each replica. | `quantity` | `""` |
| `resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `small` |
| `size` | Persistent Volume Claim size available for application data. | `quantity` | `10Gi` |
| `storageClass` | StorageClass used to store the data. | `string` | `""` |
| `external` | Enable external access from outside the cluster. | `bool` | `false` |
| `version` | MongoDB major version to deploy. | `string` | `v8` |
### Sharding configuration
| Name | Description | Type | Value |
| ----------------------------------- | ------------------------------------------------------------------ | ---------- | ------- |
| `sharding` | Enable sharded cluster mode. When disabled, deploys a replica set. | `bool` | `false` |
| `shardingConfig` | Configuration for sharded cluster mode. | `object` | `{}` |
| `shardingConfig.configServers` | Number of config server replicas. | `int` | `3` |
| `shardingConfig.configServerSize` | PVC size for config servers. | `quantity` | `3Gi` |
| `shardingConfig.mongos` | Number of mongos router replicas. | `int` | `2` |
| `shardingConfig.shards` | List of shard configurations. | `[]object` | `[...]` |
| `shardingConfig.shards[i].name` | Shard name. | `string` | `""` |
| `shardingConfig.shards[i].replicas` | Number of replicas in this shard. | `int` | `0` |
| `shardingConfig.shards[i].size` | PVC size for this shard. | `quantity` | `""` |
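A minimal values sketch for a sharded cluster (shard names and sizes are illustrative):

```yaml
sharding: true
shardingConfig:
  configServers: 3      # config server replica set size
  configServerSize: 3Gi
  mongos: 2             # number of mongos routers
  shards:
    - name: shard1
      replicas: 3
      size: 50Gi
    - name: shard2
      replicas: 3
      size: 50Gi
```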
### Users configuration
| Name | Description | Type | Value |
| --------------------------- | --------------------------------------------------- | ------------------- | ----- |
| `users` | Custom MongoDB users configuration map. | `map[string]object` | `{}` |
| `users[name].password` | Password for the user (auto-generated if omitted). | `string` | `""` |
| `users[name].db` | Database to authenticate against. | `string` | `""` |
| `users[name].roles` | List of MongoDB roles with database scope. | `[]object` | `[]` |
| `users[name].roles[i].name` | Role name (e.g., readWrite, dbAdmin, clusterAdmin). | `string` | `""` |
| `users[name].roles[i].db` | Database the role applies to. | `string` | `""` |
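For example, a values sketch declaring two application users (names, databases, and roles are illustrative; passwords are auto-generated here because they are omitted):

```yaml
users:
  appuser:
    db: appdb
    roles:
      - name: readWrite
        db: appdb
  reporting:
    db: appdb
    roles:
      - name: read
        db: appdb
```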
### Backup parameters
| Name | Description | Type | Value |
| ------------------------ | ------------------------------------------------------ | -------- | ----------------------------------- |
| `backup` | Backup configuration. | `object` | `{}` |
| `backup.enabled` | Enable regular backups. | `bool` | `false` |
| `backup.schedule` | Cron schedule for automated backups. | `string` | `0 2 * * *` |
| `backup.retentionPolicy` | Retention policy (e.g. "30d"). | `string` | `30d` |
| `backup.destinationPath` | Destination path for backups (e.g. s3://bucket/path/). | `string` | `s3://bucket/path/to/folder/` |
| `backup.endpointURL` | S3 endpoint URL for uploads. | `string` | `http://minio-gateway-service:9000` |
| `backup.s3AccessKey` | Access key for S3 authentication. | `string` | `""` |
| `backup.s3SecretKey` | Secret key for S3 authentication. | `string` | `""` |
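A values sketch enabling scheduled backups (the bucket path, endpoint, and keys are placeholders):

```yaml
backup:
  enabled: true
  schedule: "0 2 * * *"             # daily at 02:00
  retentionPolicy: 30d
  destinationPath: s3://bucket/path/to/folder/
  endpointURL: http://minio-gateway-service:9000
  s3AccessKey: "<access-key>"
  s3SecretKey: "<secret-key>"
```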
### Bootstrap (recovery) parameters
| Name | Description | Type | Value |
| ------------------------ | --------------------------------------------------------- | -------- | ------- |
| `bootstrap` | Bootstrap configuration. | `object` | `{}` |
| `bootstrap.enabled` | Whether to restore from a backup. | `bool` | `false` |
| `bootstrap.recoveryTime` | Timestamp for point-in-time recovery; empty means latest. | `string` | `""` |
| `bootstrap.backupName` | Name of backup to restore from. | `string` | `""` |
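A values sketch for restoring from an existing backup (the backup name is hypothetical; the `backup.*` S3 settings must also be set, since the restore reads from the same destination):

```yaml
bootstrap:
  enabled: true
  backupName: backup-2026-03-01     # hypothetical backup name
  recoveryTime: ""                  # empty = restore to the latest state
```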


@@ -0,0 +1 @@
../../../library/cozy-lib


@@ -0,0 +1,5 @@
# MongoDB version mapping (major version -> Percona image tag)
# Auto-generated by hack/update-versions.sh - do not edit manually
"v8": "8.0.17-6"
"v7": "7.0.28-15"
"v6": "6.0.25-20"


@@ -0,0 +1,125 @@
#!/usr/bin/env bash
set -o errexit
set -o nounset
set -o pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
MONGODB_DIR="$(cd "${SCRIPT_DIR}/.." && pwd)"
VALUES_FILE="${MONGODB_DIR}/values.yaml"
VERSIONS_FILE="${MONGODB_DIR}/files/versions.yaml"
# Supported major versions (newest first)
SUPPORTED_MAJOR_VERSIONS="8 7 6"
echo "Supported major versions: $SUPPORTED_MAJOR_VERSIONS"
# Check if skopeo is installed
if ! command -v skopeo &> /dev/null; then
echo "Error: skopeo is not installed. Please install skopeo and try again." >&2
exit 1
fi
# Check if jq is installed
if ! command -v jq &> /dev/null; then
echo "Error: jq is not installed. Please install jq and try again." >&2
exit 1
fi
# Get available image tags from Percona registry
echo "Fetching available image tags from registry..."
AVAILABLE_TAGS=$(skopeo list-tags docker://percona/percona-server-mongodb | jq -r '.Tags[]' | grep -E '^[0-9]+\.[0-9]+\.[0-9]+-[0-9]+$' | sort -V || true)  # tolerate no matches under pipefail; checked below
if [ -z "$AVAILABLE_TAGS" ]; then
echo "Error: Could not fetch available image tags" >&2
exit 1
fi
# Build versions map: major version -> latest tag
declare -A VERSION_MAP
MAJOR_VERSIONS=()
for major_version in $SUPPORTED_MAJOR_VERSIONS; do
# Find all tags that match this major version
matching_tags=$(echo "$AVAILABLE_TAGS" | grep "^${major_version}\\." || true)  # grep exits 1 on no match; handled by the -n check
if [ -n "$matching_tags" ]; then
# Get the latest tag for this major version
latest_tag=$(echo "$matching_tags" | tail -n1)
VERSION_MAP["v${major_version}"]="${latest_tag}"
MAJOR_VERSIONS+=("v${major_version}")
echo "Found version: v${major_version} -> ${latest_tag}"
fi
done
if [ ${#MAJOR_VERSIONS[@]} -eq 0 ]; then
echo "Error: No matching versions found" >&2
exit 1
fi
echo "Major versions to add: ${MAJOR_VERSIONS[*]}"
# Create/update versions.yaml file
echo "Updating $VERSIONS_FILE..."
{
echo "# MongoDB version mapping (major version -> Percona image tag)"
echo "# Auto-generated by hack/update-versions.sh - do not edit manually"
for major_ver in "${MAJOR_VERSIONS[@]}"; do
echo "\"${major_ver}\": \"${VERSION_MAP[$major_ver]}\""
done
} > "$VERSIONS_FILE"
echo "Successfully updated $VERSIONS_FILE"
# Update values.yaml - enum with major versions only
TEMP_FILE=$(mktemp)
trap 'rm -f "$TEMP_FILE" "${TEMP_FILE}.tmp"' EXIT
# Build new version section
NEW_VERSION_SECTION="## @enum {string} Version"
for major_ver in "${MAJOR_VERSIONS[@]}"; do
NEW_VERSION_SECTION="${NEW_VERSION_SECTION}
## @value $major_ver"
done
NEW_VERSION_SECTION="${NEW_VERSION_SECTION}
## @param {Version} version - MongoDB major version to deploy.
version: ${MAJOR_VERSIONS[0]}"
# Check if version section already exists
if grep -q "^## @enum {string} Version" "$VALUES_FILE"; then
# Version section exists, update it using awk
echo "Updating existing version section in $VALUES_FILE..."
# Use awk to replace the section from "## @enum {string} Version" to "version: " (inclusive)
awk -v new_section="$NEW_VERSION_SECTION" '
/^## @enum {string} Version/ {
in_section = 1
print new_section
next
}
in_section && /^version: / {
in_section = 0
next
}
in_section {
next
}
{ print }
' "$VALUES_FILE" > "$TEMP_FILE.tmp"
mv "$TEMP_FILE.tmp" "$VALUES_FILE"
else
# Version section doesn't exist, insert it before Sharding section
echo "Inserting new version section in $VALUES_FILE..."
awk -v new_section="$NEW_VERSION_SECTION" '
/^## @section Sharding configuration/ {
print new_section
print ""
}
{ print }
' "$VALUES_FILE" > "$TEMP_FILE.tmp"
mv "$TEMP_FILE.tmp" "$VALUES_FILE"
fi
echo "Successfully updated $VALUES_FILE with major versions: ${MAJOR_VERSIONS[*]}"


@@ -0,0 +1,13 @@
<svg width="144" height="144" viewBox="0 0 144 144" fill="none" xmlns="http://www.w3.org/2000/svg">
<rect width="144" height="144" rx="24" fill="url(#paint0_linear_mongodb)"/>
<path d="M72 24C72 24 72 24 72 24C72 24 58 40 58 62C58 84 72 120 72 120C72 120 86 84 86 62C86 40 72 24 72 24Z" fill="#00ED64"/>
<path d="M72 120C72 120 86 84 86 62C86 40 72 24 72 24" stroke="#00684A" stroke-width="4" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M72 24C72 24 58 40 58 62C58 84 72 120 72 120" stroke="#001E2B" stroke-width="4" stroke-linecap="round" stroke-linejoin="round"/>
<rect x="69" y="108" width="6" height="16" rx="2" fill="#00684A"/>
<defs>
<linearGradient id="paint0_linear_mongodb" x1="140" y1="130.5" x2="4" y2="9.49999" gradientUnits="userSpaceOnUse">
<stop stop-color="#001E2B"/>
<stop offset="1" stop-color="#023430"/>
</linearGradient>
</defs>
</svg>



@@ -0,0 +1,12 @@
{{/*
MongoDB version mapping
*/}}
{{- define "mongodb.versionMap" -}}
{{- $versions := .Files.Get "files/versions.yaml" | fromYaml -}}
{{- $version := .Values.version -}}
{{- if hasKey $versions $version -}}
{{- index $versions $version -}}
{{- else -}}
{{- fail (printf "Unsupported MongoDB version: %s. Supported versions: %s" $version (keys $versions | sortAlpha | join ", ")) -}}
{{- end -}}
{{- end -}}


@@ -0,0 +1,11 @@
{{- if or .Values.backup.enabled .Values.bootstrap.enabled }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-s3-creds
type: Opaque
stringData:
AWS_ACCESS_KEY_ID: {{ required "backup.s3AccessKey is required when backup or bootstrap is enabled" .Values.backup.s3AccessKey | quote }}
AWS_SECRET_ACCESS_KEY: {{ required "backup.s3SecretKey is required when backup or bootstrap is enabled" .Values.backup.s3SecretKey | quote }}
{{- end }}


@@ -0,0 +1,34 @@
{{- $clusterDomain := (index .Values._cluster "cluster-domain") | default "cozy.local" }}
{{- $operatorSecret := lookup "v1" "Secret" .Release.Namespace (printf "internal-%s-users" .Release.Name) }}
{{- $password := "" }}
{{- if and $operatorSecret (hasKey $operatorSecret.data "MONGODB_DATABASE_ADMIN_PASSWORD") }}
{{- $password = index $operatorSecret.data "MONGODB_DATABASE_ADMIN_PASSWORD" | b64dec }}
{{- end }}
---
# Dashboard credentials - lookup from operator-created secret
# Operator creates secret named "internal-<release>-users" with system user passwords
# Note: On first install, password/uri will be empty until operator creates the secret.
# Run 'helm upgrade' after MongoDB is ready to populate credentials.
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-credentials
type: Opaque
stringData:
username: databaseAdmin
password: {{ $password | quote }}
{{- if .Values.sharding }}
host: {{ .Release.Name }}-mongos.{{ .Release.Namespace }}.svc.{{ $clusterDomain }}
{{- else }}
host: {{ .Release.Name }}-rs0.{{ .Release.Namespace }}.svc.{{ $clusterDomain }}
{{- end }}
port: "27017"
{{- if $password }}
{{- if .Values.sharding }}
uri: mongodb://databaseAdmin:{{ $password | urlquery }}@{{ .Release.Name }}-mongos.{{ .Release.Namespace }}.svc.{{ $clusterDomain }}:27017/admin
{{- else }}
uri: mongodb://databaseAdmin:{{ $password | urlquery }}@{{ .Release.Name }}-rs0.{{ .Release.Namespace }}.svc.{{ $clusterDomain }}:27017/admin?replicaSet=rs0
{{- end }}
{{- else }}
uri: ""
{{- end }}


@@ -0,0 +1,39 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ .Release.Name }}-dashboard-resources
rules:
- apiGroups:
- ""
resources:
- services
resourceNames:
- {{ .Release.Name }}-rs0
- {{ .Release.Name }}-mongos
- {{ .Release.Name }}-external
verbs: ["get", "list", "watch"]
- apiGroups:
- ""
resources:
- secrets
resourceNames:
- {{ .Release.Name }}-credentials
verbs: ["get", "list", "watch"]
- apiGroups:
- cozystack.io
resources:
- workloadmonitors
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io


@@ -0,0 +1,24 @@
{{- if .Values.external }}
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}-external
spec:
type: LoadBalancer
externalTrafficPolicy: Local
{{- if (include "cozy-lib.network.disableLoadBalancerNodePorts" $ | fromYaml) }}
allocateLoadBalancerNodePorts: false
{{- end }}
ports:
- name: mongodb
port: 27017
selector:
app.kubernetes.io/name: percona-server-mongodb
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.sharding }}
app.kubernetes.io/component: mongos
{{- else }}
app.kubernetes.io/component: mongod
app.kubernetes.io/replset: rs0
{{- end }}
{{- end }}


@@ -0,0 +1,173 @@
{{- $clusterDomain := (index .Values._cluster "cluster-domain") | default "cozy.local" }}
---
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
name: {{ .Release.Name }}
spec:
crVersion: 1.21.1
clusterServiceDNSSuffix: svc.{{ $clusterDomain }}
pause: false
unmanaged: false
image: percona/percona-server-mongodb:{{ include "mongodb.versionMap" $ }}
imagePullPolicy: IfNotPresent
{{- if lt (int .Values.replicas) 3 }}
unsafeFlags:
replsetSize: true
{{- end }}
updateStrategy: SmartUpdate
upgradeOptions:
apply: disabled
pmm:
enabled: false
image: percona/pmm-client:2.44.1
serverHost: ""
sharding:
enabled: {{ .Values.sharding | default false }}
balancer:
enabled: true
{{- if .Values.sharding }}
configsvrReplSet:
size: {{ .Values.shardingConfig.configServers }}
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list .Values.resourcesPreset .Values.resources $) | nindent 8 }}
volumeSpec:
persistentVolumeClaim:
{{- with .Values.storageClass }}
storageClassName: {{ . }}
{{- end }}
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.shardingConfig.configServerSize }}
affinity:
antiAffinityTopologyKey: kubernetes.io/hostname
podDisruptionBudget:
maxUnavailable: 1
mongos:
size: {{ .Values.shardingConfig.mongos }}
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list .Values.resourcesPreset .Values.resources $) | nindent 8 }}
affinity:
antiAffinityTopologyKey: kubernetes.io/hostname
podDisruptionBudget:
maxUnavailable: 1
expose:
exposeType: ClusterIP
{{- end }}
replsets:
{{- if .Values.sharding }}
{{- range .Values.shardingConfig.shards }}
- name: {{ .name }}
size: {{ .replicas }}
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list $.Values.resourcesPreset $.Values.resources $) | nindent 8 }}
volumeSpec:
persistentVolumeClaim:
{{- with $.Values.storageClass }}
storageClassName: {{ . }}
{{- end }}
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .size }}
affinity:
antiAffinityTopologyKey: kubernetes.io/hostname
podDisruptionBudget:
maxUnavailable: 1
{{- end }}
{{- else }}
- name: rs0
size: {{ .Values.replicas }}
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list .Values.resourcesPreset .Values.resources $) | nindent 8 }}
volumeSpec:
persistentVolumeClaim:
{{- with .Values.storageClass }}
storageClassName: {{ . }}
{{- end }}
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.size }}
affinity:
antiAffinityTopologyKey: kubernetes.io/hostname
podDisruptionBudget:
maxUnavailable: 1
expose:
enabled: false
{{- end }}
{{- if .Values.users }}
users:
{{- range $username, $user := .Values.users }}
{{- if not $user.roles }}
{{- fail (printf "users.%s.roles is required and cannot be empty" $username) }}
{{- end }}
- name: {{ $username }}
db: {{ $user.db }}
passwordSecretRef:
name: {{ $.Release.Name }}-user-{{ $username }}
key: password
roles:
{{- range $user.roles }}
- name: {{ .name }}
db: {{ .db }}
{{- end }}
{{- end }}
{{- end }}
backup:
enabled: {{ .Values.backup.enabled | default false }}
image: percona/percona-backup-mongodb:2.11.0
{{- if .Values.backup.enabled }}
storages:
s3-storage:
type: s3
s3:
bucket: {{ .Values.backup.destinationPath | trimPrefix "s3://" | regexFind "^[^/]+" }}
prefix: {{ .Values.backup.destinationPath | trimPrefix "s3://" | splitList "/" | rest | join "/" }}
endpointUrl: {{ .Values.backup.endpointURL }}
credentialsSecret: {{ .Release.Name }}-s3-creds
insecureSkipTLSVerify: false
forcePathStyle: true
tasks:
- name: daily-backup
enabled: true
schedule: {{ .Values.backup.schedule | quote }}
keep: {{ .Values.backup.retentionPolicy | trimSuffix "d" | int }}
storageName: s3-storage
type: logical
compressionType: gzip
pitr:
enabled: true
{{- end }}
---
# WorkloadMonitor tracks data-bearing mongod pods only (not config servers or mongos routers)
# The selector filters by component=mongod, so we only count shard replicas
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ .Release.Name }}
spec:
{{- if .Values.sharding }}
{{- $totalReplicas := 0 }}
{{- range .Values.shardingConfig.shards }}
{{- $totalReplicas = add $totalReplicas .replicas }}
{{- end }}
replicas: {{ $totalReplicas }}
{{- else }}
replicas: {{ .Values.replicas }}
{{- end }}
minReplicas: 1
kind: mongodb
type: mongodb
selector:
app.kubernetes.io/name: percona-server-mongodb
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/component: mongod
version: {{ .Chart.Version }}


@@ -0,0 +1,37 @@
{{- if .Values.bootstrap.enabled }}
{{- if not .Values.bootstrap.backupName }}
{{- fail "bootstrap.backupName is required when bootstrap.enabled is true" }}
{{- end }}
{{- if not .Values.backup.destinationPath }}
{{- fail "backup.destinationPath is required when bootstrap.enabled is true" }}
{{- end }}
{{- if not .Values.backup.endpointURL }}
{{- fail "backup.endpointURL is required when bootstrap.enabled is true" }}
{{- end }}
{{- if not .Values.backup.s3AccessKey }}
{{- fail "backup.s3AccessKey is required when bootstrap.enabled is true" }}
{{- end }}
{{- if not .Values.backup.s3SecretKey }}
{{- fail "backup.s3SecretKey is required when bootstrap.enabled is true" }}
{{- end }}
---
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBRestore
metadata:
name: {{ .Release.Name }}-restore
spec:
clusterName: {{ .Release.Name }}
{{- if .Values.bootstrap.recoveryTime }}
pitr:
type: date
date: {{ .Values.bootstrap.recoveryTime | quote }}
{{- end }}
backupSource:
type: logical
destination: {{ .Values.backup.destinationPath | trimSuffix "/" }}/{{ .Values.bootstrap.backupName }}
s3:
credentialsSecret: {{ .Release.Name }}-s3-creds
endpointUrl: {{ .Values.backup.endpointURL }}
insecureSkipTLSVerify: false
forcePathStyle: true
{{- end }}


@@ -0,0 +1,17 @@
{{- range $username, $user := .Values.users }}
{{- $existingSecret := lookup "v1" "Secret" $.Release.Namespace (printf "%s-user-%s" $.Release.Name $username) }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ $.Release.Name }}-user-{{ $username }}
type: Opaque
stringData:
{{- if $user.password }}
password: {{ $user.password | quote }}
{{- else if and $existingSecret (hasKey $existingSecret.data "password") }}
password: {{ index $existingSecret.data "password" | b64dec | quote }}
{{- else }}
password: {{ randAlphaNum 16 | quote }}
{{- end }}
{{- end }}


@@ -0,0 +1,112 @@
suite: backup secret tests
templates:
- templates/backup-secret.yaml
tests:
# Not rendered when both backup and bootstrap disabled
- it: does not render when backup and bootstrap disabled
release:
name: test-mongodb
namespace: tenant-test
set:
backup:
enabled: false
bootstrap:
enabled: false
asserts:
- hasDocuments:
count: 0
# Rendered when backup enabled
- it: renders when backup enabled
release:
name: test-mongodb
namespace: tenant-test
set:
backup:
enabled: true
s3AccessKey: "AKIAIOSFODNN7EXAMPLE"
s3SecretKey: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
asserts:
- hasDocuments:
count: 1
- isKind:
of: Secret
# Rendered when bootstrap enabled (for restore)
- it: renders when bootstrap enabled
release:
name: test-mongodb
namespace: tenant-test
set:
backup:
enabled: false
s3AccessKey: "AKIAIOSFODNN7EXAMPLE"
s3SecretKey: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
bootstrap:
enabled: true
asserts:
- hasDocuments:
count: 1
# Secret name
- it: uses correct secret name
release:
name: mydb
namespace: tenant-test
set:
backup:
enabled: true
s3AccessKey: "accesskey"
s3SecretKey: "secretkey"
asserts:
- equal:
path: metadata.name
value: mydb-s3-creds
# Contains AWS credentials
- it: contains AWS credentials
release:
name: test-mongodb
namespace: tenant-test
set:
backup:
enabled: true
s3AccessKey: "MYACCESSKEY"
s3SecretKey: "MYSECRETKEY"
asserts:
- equal:
path: stringData.AWS_ACCESS_KEY_ID
value: "MYACCESSKEY"
- equal:
path: stringData.AWS_SECRET_ACCESS_KEY
value: "MYSECRETKEY"
# Fails without s3AccessKey
- it: fails when s3AccessKey missing
release:
name: test-mongodb
namespace: tenant-test
set:
backup:
enabled: true
s3AccessKey: ""
s3SecretKey: "secretkey"
asserts:
- failedTemplate:
errorMessage: "backup.s3AccessKey is required when backup or bootstrap is enabled"
# Fails without s3SecretKey
- it: fails when s3SecretKey missing
release:
name: test-mongodb
namespace: tenant-test
set:
backup:
enabled: true
s3AccessKey: "accesskey"
s3SecretKey: ""
asserts:
- failedTemplate:
errorMessage: "backup.s3SecretKey is required when backup or bootstrap is enabled"


@@ -0,0 +1,132 @@
suite: credentials tests
templates:
- templates/credentials.yaml
tests:
# Basic rendering
- it: always renders a Secret
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
asserts:
- hasDocuments:
count: 1
- isKind:
of: Secret
- equal:
path: metadata.name
value: test-mongodb-credentials
- equal:
path: type
value: Opaque
# Username is always databaseAdmin
- it: sets username to databaseAdmin
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
asserts:
- equal:
path: stringData.username
value: databaseAdmin
# Port is always 27017
- it: sets port to 27017
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
asserts:
- equal:
path: stringData.port
value: "27017"
# Host for replica set mode
- it: uses rs0 service for replica set mode
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
sharding: false
asserts:
- equal:
path: stringData.host
value: test-mongodb-rs0.tenant-test.svc.cozy.local
# Host for sharded mode
- it: uses mongos service for sharded mode
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
sharding: true
asserts:
- equal:
path: stringData.host
value: test-mongodb-mongos.tenant-test.svc.cozy.local
# Custom cluster domain
- it: uses custom cluster domain
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: custom.domain
sharding: false
asserts:
- equal:
path: stringData.host
value: test-mongodb-rs0.tenant-test.svc.custom.domain
# Default cluster domain when not set
- it: defaults to cozy.local when cluster domain not set
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster: {}
sharding: false
asserts:
- equal:
path: stringData.host
value: test-mongodb-rs0.tenant-test.svc.cozy.local
# Password empty without operator secret (lookup returns nil in tests)
- it: has empty password on first install
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
asserts:
- equal:
path: stringData.password
value: ""
# URI empty without password
- it: has empty uri when password not available
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
asserts:
- equal:
path: stringData.uri
value: ""


@@ -0,0 +1,106 @@
suite: dashboard resourcemap tests
templates:
- templates/dashboard-resourcemap.yaml
tests:
# Always renders Role and RoleBinding
- it: renders Role and RoleBinding
release:
name: test-mongodb
namespace: tenant-test
asserts:
- hasDocuments:
count: 2
- isKind:
of: Role
documentIndex: 0
- isKind:
of: RoleBinding
documentIndex: 1
# Role naming
- it: uses correct Role name
release:
name: mydb
namespace: tenant-test
asserts:
- equal:
path: metadata.name
value: mydb-dashboard-resources
documentIndex: 0
# RoleBinding naming
- it: uses correct RoleBinding name
release:
name: mydb
namespace: tenant-test
asserts:
- equal:
path: metadata.name
value: mydb-dashboard-resources
documentIndex: 1
# Role grants access to services
- it: grants access to MongoDB services
release:
name: test-mongodb
namespace: tenant-test
asserts:
- contains:
path: rules[0].resourceNames
content: test-mongodb-rs0
documentIndex: 0
- contains:
path: rules[0].resourceNames
content: test-mongodb-mongos
documentIndex: 0
- contains:
path: rules[0].resourceNames
content: test-mongodb-external
documentIndex: 0
# Role grants access to credentials secret
- it: grants access to credentials secret
release:
name: test-mongodb
namespace: tenant-test
asserts:
- contains:
path: rules[1].resourceNames
content: test-mongodb-credentials
documentIndex: 0
# Role grants access to workloadmonitor
- it: grants access to WorkloadMonitor
release:
name: test-mongodb
namespace: tenant-test
asserts:
- contains:
path: rules[2].resourceNames
content: test-mongodb
documentIndex: 0
- equal:
path: rules[2].apiGroups[0]
value: cozystack.io
documentIndex: 0
# RoleBinding references correct Role
- it: RoleBinding references correct Role
release:
name: test-mongodb
namespace: tenant-test
asserts:
- equal:
path: roleRef.kind
value: Role
documentIndex: 1
- equal:
path: roleRef.name
value: test-mongodb-dashboard-resources
documentIndex: 1
- equal:
path: roleRef.apiGroup
value: rbac.authorization.k8s.io
documentIndex: 1


@@ -0,0 +1,154 @@
suite: external service tests
templates:
- templates/external-svc.yaml
tests:
###################
# Rendering #
###################
- it: does not render when external is false
release:
name: test-mongodb
namespace: tenant-test
set:
external: false
asserts:
- hasDocuments:
count: 0
- it: renders LoadBalancer service when external is true
release:
name: test-mongodb
namespace: tenant-test
set:
external: true
asserts:
- hasDocuments:
count: 1
- isKind:
of: Service
###################
# Service config #
###################
- it: uses correct service name
release:
name: mydb
namespace: tenant-test
set:
external: true
asserts:
- equal:
path: metadata.name
value: mydb-external
- it: sets LoadBalancer type
release:
name: test-mongodb
namespace: tenant-test
set:
external: true
asserts:
- equal:
path: spec.type
value: LoadBalancer
- it: sets externalTrafficPolicy to Local
release:
name: test-mongodb
namespace: tenant-test
set:
external: true
asserts:
- equal:
path: spec.externalTrafficPolicy
value: Local
- it: exposes MongoDB port 27017
release:
name: test-mongodb
namespace: tenant-test
set:
external: true
asserts:
- equal:
path: spec.ports[0].name
value: mongodb
- equal:
path: spec.ports[0].port
value: 27017
###########################
# Common selector labels #
###########################
- it: sets app.kubernetes.io/name selector
release:
name: test-mongodb
namespace: tenant-test
set:
external: true
asserts:
- equal:
path: spec.selector["app.kubernetes.io/name"]
value: percona-server-mongodb
- it: sets app.kubernetes.io/instance selector
release:
name: mydb
namespace: tenant-test
set:
external: true
asserts:
- equal:
path: spec.selector["app.kubernetes.io/instance"]
value: mydb
###########################
# Replica set mode #
###########################
- it: selects mongod for replica set mode
release:
name: test-mongodb
namespace: tenant-test
set:
external: true
sharding: false
asserts:
- equal:
path: spec.selector["app.kubernetes.io/component"]
value: mongod
- equal:
path: spec.selector["app.kubernetes.io/replset"]
value: rs0
###########################
# Sharded mode #
###########################
- it: selects mongos for sharded mode
release:
name: test-mongodb
namespace: tenant-test
set:
external: true
sharding: true
asserts:
- equal:
path: spec.selector["app.kubernetes.io/component"]
value: mongos
- it: does not set replset selector for sharded mode
release:
name: test-mongodb
namespace: tenant-test
set:
external: true
sharding: true
asserts:
- notExists:
path: spec.selector["app.kubernetes.io/replset"]


@@ -0,0 +1,703 @@
suite: mongodb CR tests
templates:
- templates/mongodb.yaml
tests:
###################
# Basic rendering #
###################
- it: renders PerconaServerMongoDB and WorkloadMonitor
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
asserts:
- hasDocuments:
count: 2
- isKind:
of: PerconaServerMongoDB
documentIndex: 0
- isKind:
of: WorkloadMonitor
documentIndex: 1
- it: sets correct CR name
release:
name: my-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
asserts:
- equal:
path: metadata.name
value: my-mongodb
documentIndex: 0
##################
# CR Version #
##################
- it: sets crVersion to 1.21.1
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
asserts:
- equal:
path: spec.crVersion
value: "1.21.1"
documentIndex: 0
#####################
# Cluster DNS #
#####################
- it: sets clusterServiceDNSSuffix from cluster config
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: custom.local
asserts:
- equal:
path: spec.clusterServiceDNSSuffix
value: svc.custom.local
documentIndex: 0
- it: defaults clusterServiceDNSSuffix to cozy.local
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster: {}
asserts:
- equal:
path: spec.clusterServiceDNSSuffix
value: svc.cozy.local
documentIndex: 0
##################
# Unsafe flags #
##################
- it: enables unsafeFlags when replicas is 1
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
replicas: 1
asserts:
- equal:
path: spec.unsafeFlags.replsetSize
value: true
documentIndex: 0
- it: enables unsafeFlags when replicas is 2
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
replicas: 2
asserts:
- equal:
path: spec.unsafeFlags.replsetSize
value: true
documentIndex: 0
- it: does not set unsafeFlags when replicas is 3
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
replicas: 3
asserts:
- notExists:
path: spec.unsafeFlags
documentIndex: 0
- it: does not set unsafeFlags when replicas is 5
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
replicas: 5
asserts:
- notExists:
path: spec.unsafeFlags
documentIndex: 0
###########################
# Replica Set Mode #
###########################
- it: configures replica set rs0 in non-sharded mode
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
sharding: false
replicas: 3
asserts:
- equal:
path: spec.sharding.enabled
value: false
documentIndex: 0
- equal:
path: spec.replsets[0].name
value: rs0
documentIndex: 0
- equal:
path: spec.replsets[0].size
value: 3
documentIndex: 0
- it: sets storage size for replica set
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
sharding: false
size: 20Gi
asserts:
- equal:
path: spec.replsets[0].volumeSpec.persistentVolumeClaim.resources.requests.storage
value: 20Gi
documentIndex: 0
- it: sets storageClass when provided
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
sharding: false
storageClass: fast-ssd
asserts:
- equal:
path: spec.replsets[0].volumeSpec.persistentVolumeClaim.storageClassName
value: fast-ssd
documentIndex: 0
- it: does not set storageClass when empty
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
sharding: false
storageClass: ""
asserts:
- notExists:
path: spec.replsets[0].volumeSpec.persistentVolumeClaim.storageClassName
documentIndex: 0
###########################
# Sharded Cluster Mode #
###########################
- it: enables sharding when configured
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
sharding: true
shardingConfig:
configServers: 3
configServerSize: 3Gi
mongos: 2
shards:
- name: rs0
replicas: 3
size: 10Gi
asserts:
- equal:
path: spec.sharding.enabled
value: true
documentIndex: 0
- equal:
path: spec.sharding.balancer.enabled
value: true
documentIndex: 0
- it: configures config servers
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
sharding: true
shardingConfig:
configServers: 5
configServerSize: 5Gi
mongos: 2
shards:
- name: rs0
replicas: 3
size: 10Gi
asserts:
- equal:
path: spec.sharding.configsvrReplSet.size
value: 5
documentIndex: 0
- equal:
path: spec.sharding.configsvrReplSet.volumeSpec.persistentVolumeClaim.resources.requests.storage
value: 5Gi
documentIndex: 0
- it: configures mongos routers
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
sharding: true
shardingConfig:
configServers: 3
configServerSize: 3Gi
mongos: 4
shards:
- name: rs0
replicas: 3
size: 10Gi
asserts:
- equal:
path: spec.sharding.mongos.size
value: 4
documentIndex: 0
- equal:
path: spec.sharding.mongos.expose.exposeType
value: ClusterIP
documentIndex: 0
- it: configures multiple shards
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
sharding: true
shardingConfig:
configServers: 3
configServerSize: 3Gi
mongos: 2
shards:
- name: shard1
replicas: 3
size: 50Gi
- name: shard2
replicas: 5
size: 100Gi
asserts:
- equal:
path: spec.replsets[0].name
value: shard1
documentIndex: 0
- equal:
path: spec.replsets[0].size
value: 3
documentIndex: 0
- equal:
path: spec.replsets[0].volumeSpec.persistentVolumeClaim.resources.requests.storage
value: 50Gi
documentIndex: 0
- equal:
path: spec.replsets[1].name
value: shard2
documentIndex: 0
- equal:
path: spec.replsets[1].size
value: 5
documentIndex: 0
- equal:
path: spec.replsets[1].volumeSpec.persistentVolumeClaim.resources.requests.storage
value: 100Gi
documentIndex: 0
###########################
# Users configuration #
###########################
- it: does not include users section when no users defined
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
users: {}
asserts:
- notExists:
path: spec.users
documentIndex: 0
- it: configures users when defined
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
users:
appuser:
db: appdb
roles:
- name: readWrite
db: appdb
asserts:
- exists:
path: spec.users
documentIndex: 0
- equal:
path: spec.users[0].name
value: appuser
documentIndex: 0
- equal:
path: spec.users[0].db
value: appdb
documentIndex: 0
- equal:
path: spec.users[0].passwordSecretRef.name
value: test-mongodb-user-appuser
documentIndex: 0
- equal:
path: spec.users[0].passwordSecretRef.key
value: password
documentIndex: 0
- it: configures user roles
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
users:
admin:
db: admin
roles:
- name: clusterAdmin
db: admin
- name: userAdminAnyDatabase
db: admin
asserts:
- equal:
path: spec.users[0].roles[0].name
value: clusterAdmin
documentIndex: 0
- equal:
path: spec.users[0].roles[0].db
value: admin
documentIndex: 0
- equal:
path: spec.users[0].roles[1].name
value: userAdminAnyDatabase
documentIndex: 0
- it: fails when user has empty roles
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
users:
myuser:
db: mydb
roles: []
asserts:
- failedTemplate:
errorMessage: "users.myuser.roles is required and cannot be empty"
###########################
# Backup configuration #
###########################
- it: disables backup when not enabled
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
backup:
enabled: false
asserts:
- equal:
path: spec.backup.enabled
value: false
documentIndex: 0
- notExists:
path: spec.backup.storages
documentIndex: 0
- it: configures backup when enabled
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
backup:
enabled: true
schedule: "0 3 * * *"
retentionPolicy: 14d
destinationPath: "s3://mybucket/backups/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backup.enabled
value: true
documentIndex: 0
- equal:
path: spec.backup.storages.s3-storage.type
value: s3
documentIndex: 0
- it: parses bucket from destinationPath
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
backup:
enabled: true
destinationPath: "s3://my-backup-bucket/mongodb/prod/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backup.storages.s3-storage.s3.bucket
value: my-backup-bucket
documentIndex: 0
- it: parses prefix from destinationPath
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
backup:
enabled: true
destinationPath: "s3://bucket/path/to/backups/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backup.storages.s3-storage.s3.prefix
value: path/to/backups/
documentIndex: 0
- it: sets backup retention from retentionPolicy
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
backup:
enabled: true
retentionPolicy: 30d
destinationPath: "s3://bucket/path/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backup.tasks[0].keep
value: 30
documentIndex: 0
- it: sets backup schedule
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
backup:
enabled: true
schedule: "0 4 * * *"
retentionPolicy: 7d
destinationPath: "s3://bucket/path/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backup.tasks[0].schedule
value: "0 4 * * *"
documentIndex: 0
- it: enables PITR when backup enabled
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
backup:
enabled: true
destinationPath: "s3://bucket/path/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backup.pitr.enabled
value: true
documentIndex: 0
- it: references s3-creds secret for backup
release:
name: mydb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
backup:
enabled: true
destinationPath: "s3://bucket/path/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backup.storages.s3-storage.s3.credentialsSecret
value: mydb-s3-creds
documentIndex: 0
###########################
# WorkloadMonitor #
###########################
- it: creates WorkloadMonitor with correct metadata
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
asserts:
- equal:
path: metadata.name
value: test-mongodb
documentIndex: 1
- equal:
path: spec.kind
value: mongodb
documentIndex: 1
- equal:
path: spec.type
value: mongodb
documentIndex: 1
- it: sets replicas from values in non-sharded mode
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
sharding: false
replicas: 5
asserts:
- equal:
path: spec.replicas
value: 5
documentIndex: 1
- it: calculates total replicas in sharded mode
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
sharding: true
shardingConfig:
configServers: 3
configServerSize: 3Gi
mongos: 2
shards:
- name: rs0
replicas: 3
size: 10Gi
- name: rs1
replicas: 5
size: 10Gi
- name: rs2
replicas: 2
size: 10Gi
asserts:
- equal:
path: spec.replicas
value: 10
documentIndex: 1
- it: sets minReplicas to 1
release:
name: test-mongodb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
asserts:
- equal:
path: spec.minReplicas
value: 1
documentIndex: 1
- it: sets correct selector labels
release:
name: mydb
namespace: tenant-test
set:
_cluster:
cluster-domain: cozy.local
asserts:
- equal:
path: spec.selector["app.kubernetes.io/name"]
value: percona-server-mongodb
documentIndex: 1
- equal:
path: spec.selector["app.kubernetes.io/instance"]
value: mydb
documentIndex: 1
- equal:
path: spec.selector["app.kubernetes.io/component"]
value: mongod
documentIndex: 1
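The backup tests above assert that the chart derives the S3 bucket and prefix from `destinationPath` (e.g. `s3://my-backup-bucket/mongodb/prod/`) and the numeric `keep` value from `retentionPolicy` (e.g. `30d` → `30`). A minimal Python sketch of that parsing, as an illustration only (the actual logic lives in the chart's Go templates):

```python
def parse_destination(path):
    # "s3://my-backup-bucket/mongodb/prod/" -> ("my-backup-bucket", "mongodb/prod/")
    rest = path.removeprefix("s3://")
    bucket, _, prefix = rest.partition("/")
    return bucket, prefix

def parse_retention_days(policy):
    # "30d" -> 30; rendered as spec.backup.tasks[0].keep
    return int(policy.rstrip("d"))
```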


@@ -0,0 +1,349 @@
suite: restore tests
templates:
- templates/restore.yaml
tests:
#####################
# Rendering #
#####################
- it: does not render when bootstrap is disabled
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: false
asserts:
- hasDocuments:
count: 0
- it: renders PerconaServerMongoDBRestore CR when enabled
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "my-backup-2025-01-07"
backup:
destinationPath: "s3://bucket/backups/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- hasDocuments:
count: 1
- isKind:
of: PerconaServerMongoDBRestore
#####################
# Validation #
#####################
- it: fails when backupName is missing
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: ""
backup:
destinationPath: "s3://bucket/path/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- failedTemplate:
errorMessage: "bootstrap.backupName is required when bootstrap.enabled is true"
- it: fails when destinationPath is missing
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "my-backup"
backup:
destinationPath: ""
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- failedTemplate:
errorMessage: "backup.destinationPath is required when bootstrap.enabled is true"
- it: fails when endpointURL is missing
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "my-backup"
backup:
destinationPath: "s3://bucket/path/"
endpointURL: ""
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- failedTemplate:
errorMessage: "backup.endpointURL is required when bootstrap.enabled is true"
#####################
# CR metadata #
#####################
- it: uses correct restore CR name
release:
name: mydb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "backup-2025"
backup:
destinationPath: "s3://bucket/backups/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: metadata.name
value: mydb-restore
- it: references correct cluster name
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "backup-2025"
backup:
destinationPath: "s3://bucket/backups/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.clusterName
value: test-mongodb
#####################
# Backup source #
#####################
- it: sets backupSource type to logical
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "backup-2025"
backup:
destinationPath: "s3://bucket/backups/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backupSource.type
value: logical
- it: constructs destination from destinationPath and backupName
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "daily-backup-2025-01-07"
backup:
destinationPath: "s3://mybucket/mongodb/prod/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backupSource.destination
value: s3://mybucket/mongodb/prod/daily-backup-2025-01-07
- it: trims trailing slash from destinationPath
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "backup"
backup:
destinationPath: "s3://bucket/path/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backupSource.destination
value: s3://bucket/path/backup
#####################
# S3 configuration #
#####################
- it: references s3-creds secret
release:
name: mydb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "backup"
backup:
destinationPath: "s3://bucket/path/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backupSource.s3.credentialsSecret
value: mydb-s3-creds
- it: sets S3 endpoint URL
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "backup"
backup:
destinationPath: "s3://bucket/path/"
endpointURL: "https://s3.amazonaws.com"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backupSource.s3.endpointUrl
value: "https://s3.amazonaws.com"
- it: disables insecureSkipTLSVerify
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "backup"
backup:
destinationPath: "s3://bucket/path/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backupSource.s3.insecureSkipTLSVerify
value: false
- it: enables forcePathStyle
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "backup"
backup:
destinationPath: "s3://bucket/path/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.backupSource.s3.forcePathStyle
value: true
#####################
# PITR #
#####################
- it: does not set pitr when recoveryTime not specified
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "backup"
backup:
destinationPath: "s3://bucket/path/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- notExists:
path: spec.pitr
- it: configures PITR when recoveryTime is set
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "my-backup"
recoveryTime: "2025-01-07 14:30:00"
backup:
destinationPath: "s3://bucket/backups/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: "secret"
asserts:
- equal:
path: spec.pitr.type
value: date
- equal:
path: spec.pitr.date
value: "2025-01-07 14:30:00"
#####################
# S3 credentials #
#####################
- it: fails when s3AccessKey is missing
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "backup"
backup:
destinationPath: "s3://bucket/path/"
endpointURL: "http://minio:9000"
s3AccessKey: ""
s3SecretKey: "secret"
asserts:
- failedTemplate:
errorMessage: "backup.s3AccessKey is required when bootstrap.enabled is true"
- it: fails when s3SecretKey is missing
release:
name: test-mongodb
namespace: tenant-test
set:
bootstrap:
enabled: true
backupName: "backup"
backup:
destinationPath: "s3://bucket/path/"
endpointURL: "http://minio:9000"
s3AccessKey: "access"
s3SecretKey: ""
asserts:
- failedTemplate:
errorMessage: "backup.s3SecretKey is required when bootstrap.enabled is true"
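The "constructs destination" and "trims trailing slash" tests above pin down how the restore CR's `spec.backupSource.destination` is built from `backup.destinationPath` and `bootstrap.backupName`. A hedged Python sketch of the same joining rule (illustrative, not the chart's template code):

```python
def restore_destination(destination_path, backup_name):
    # Trim any trailing slash from destinationPath, then append the backup name.
    return destination_path.rstrip("/") + "/" + backup_name
```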


@@ -0,0 +1,98 @@
suite: user secrets tests
templates:
- templates/user-secrets.yaml
tests:
# No users configured
- it: does not render when no users defined
release:
name: test-mongodb
namespace: tenant-test
set:
users: {}
asserts:
- hasDocuments:
count: 0
# Single user
- it: creates secret for single user
release:
name: test-mongodb
namespace: tenant-test
set:
users:
myuser:
db: mydb
roles:
- name: readWrite
db: mydb
asserts:
- hasDocuments:
count: 1
- isKind:
of: Secret
- equal:
path: metadata.name
value: test-mongodb-user-myuser
- equal:
path: type
value: Opaque
- exists:
path: stringData.password
# Multiple users
- it: creates separate secrets for multiple users
release:
name: test-mongodb
namespace: tenant-test
set:
users:
user1:
db: db1
roles:
- name: readWrite
db: db1
user2:
db: db2
roles:
- name: dbAdmin
db: db2
asserts:
- hasDocuments:
count: 2
# User with explicit password
- it: uses explicit password when provided
release:
name: test-mongodb
namespace: tenant-test
set:
users:
myuser:
password: "mysecretpassword"
db: mydb
roles:
- name: readWrite
db: mydb
asserts:
- equal:
path: stringData.password
value: "mysecretpassword"
# Secret naming convention
- it: follows naming convention release-user-username
release:
name: prod-db
namespace: tenant-prod
set:
users:
admin:
db: admin
roles:
- name: clusterAdmin
db: admin
asserts:
- equal:
path: metadata.name
value: prod-db-user-admin
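The tests above establish the user-secret naming convention: `<release>-user-<username>` (matching the `passwordSecretRef.name` asserted in the cluster tests). A one-line Python sketch of the convention, for illustration:

```python
def user_secret_name(release, username):
    # Secret name format asserted by the user-secrets tests.
    return f"{release}-user-{username}"
```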


@@ -0,0 +1,288 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"backup": {
"description": "Backup configuration.",
"type": "object",
"default": {},
"required": [
"enabled"
],
"properties": {
"destinationPath": {
"description": "Destination path for backups (e.g. s3://bucket/path/).",
"type": "string",
"default": "s3://bucket/path/to/folder/"
},
"enabled": {
"description": "Enable regular backups.",
"type": "boolean",
"default": false
},
"endpointURL": {
"description": "S3 endpoint URL for uploads.",
"type": "string",
"default": "http://minio-gateway-service:9000"
},
"retentionPolicy": {
"description": "Retention policy (e.g. \"30d\").",
"type": "string",
"default": "30d"
},
"s3AccessKey": {
"description": "Access key for S3 authentication.",
"type": "string",
"default": ""
},
"s3SecretKey": {
"description": "Secret key for S3 authentication.",
"type": "string",
"default": ""
},
"schedule": {
"description": "Cron schedule for automated backups.",
"type": "string",
"default": "0 2 * * *"
}
}
},
"bootstrap": {
"description": "Bootstrap configuration.",
"type": "object",
"default": {},
"required": [
"backupName",
"enabled"
],
"properties": {
"backupName": {
"description": "Name of backup to restore from.",
"type": "string",
"default": ""
},
"enabled": {
"description": "Whether to restore from a backup.",
"type": "boolean",
"default": false
},
"recoveryTime": {
"description": "Timestamp for point-in-time recovery; empty means latest.",
"type": "string",
"default": ""
}
}
},
"external": {
"description": "Enable external access from outside the cluster.",
"type": "boolean",
"default": false
},
"replicas": {
"description": "Number of MongoDB replicas in replica set.",
"type": "integer",
"default": 3
},
"resources": {
"description": "Explicit CPU and memory configuration for each MongoDB replica. When omitted, the preset defined in `resourcesPreset` is applied.",
"type": "object",
"default": {},
"properties": {
"cpu": {
"description": "CPU available to each replica.",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
},
"memory": {
"description": "Memory (RAM) available to each replica.",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
}
}
},
"resourcesPreset": {
"description": "Default sizing preset used when `resources` is omitted.",
"type": "string",
"default": "small",
"enum": [
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
},
"sharding": {
"description": "Enable sharded cluster mode. When disabled, deploys a replica set.",
"type": "boolean",
"default": false
},
"shardingConfig": {
"description": "Configuration for sharded cluster mode.",
"type": "object",
"default": {},
"required": [
"configServerSize",
"configServers",
"mongos"
],
"properties": {
"configServerSize": {
"description": "PVC size for config servers.",
"default": "3Gi",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
},
"configServers": {
"description": "Number of config server replicas.",
"type": "integer",
"default": 3
},
"mongos": {
"description": "Number of mongos router replicas.",
"type": "integer",
"default": 2
},
"shards": {
"description": "List of shard configurations.",
"type": "array",
"default": [
{
"name": "rs0",
"replicas": 3,
"size": "10Gi"
}
],
"items": {
"type": "object",
"required": [
"name",
"replicas",
"size"
],
"properties": {
"name": {
"description": "Shard name.",
"type": "string"
},
"replicas": {
"description": "Number of replicas in this shard.",
"type": "integer"
},
"size": {
"description": "PVC size for this shard.",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
}
}
}
}
}
},
"size": {
"description": "Persistent Volume Claim size available for application data.",
"default": "10Gi",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
},
"storageClass": {
"description": "StorageClass used to store the data.",
"type": "string",
"default": ""
},
"users": {
"description": "Custom MongoDB users configuration map.",
"type": "object",
"default": {},
"additionalProperties": {
"type": "object",
"required": [
"db"
],
"properties": {
"db": {
"description": "Database to authenticate against.",
"type": "string"
},
"password": {
"description": "Password for the user (auto-generated if omitted).",
"type": "string"
},
"roles": {
"description": "List of MongoDB roles with database scope.",
"type": "array",
"items": {
"type": "object",
"required": [
"db",
"name"
],
"properties": {
"db": {
"description": "Database the role applies to.",
"type": "string"
},
"name": {
"description": "Role name (e.g., readWrite, dbAdmin, clusterAdmin).",
"type": "string"
}
}
}
}
}
}
},
"version": {
"description": "MongoDB major version to deploy.",
"type": "string",
"default": "v8",
"enum": [
"v8",
"v7",
"v6"
]
}
}
}
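All quantity-valued fields in the schema above (`size`, `configServerSize`, shard `size`, `resources.cpu`, `resources.memory`) share the same Kubernetes-style quantity regex. A quick Python check against that exact pattern, useful for sanity-testing values before templating:

```python
import re

# The quantity pattern used throughout values.schema.json above.
QUANTITY = re.compile(
    r"^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))"
    r"(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$"
)

def is_quantity(value):
    # Accepts ints or strings, mirroring x-kubernetes-int-or-string.
    return bool(QUANTITY.match(str(value)))
```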


@@ -0,0 +1,134 @@
##
## @section Common parameters
##
## @typedef {struct} Resources - Explicit CPU and memory configuration for each MongoDB replica.
## @field {quantity} [cpu] - CPU available to each replica.
## @field {quantity} [memory] - Memory (RAM) available to each replica.
## @enum {string} ResourcesPreset - Default sizing preset.
## @value nano
## @value micro
## @value small
## @value medium
## @value large
## @value xlarge
## @value 2xlarge
## @param {int} replicas - Number of MongoDB replicas in replica set.
replicas: 3
## @param {Resources} [resources] - Explicit CPU and memory configuration for each MongoDB replica. When omitted, the preset defined in `resourcesPreset` is applied.
resources: {}
## @param {ResourcesPreset} resourcesPreset="small" - Default sizing preset used when `resources` is omitted.
resourcesPreset: "small"
## @param {quantity} size - Persistent Volume Claim size available for application data.
size: 10Gi
## @param {string} storageClass - StorageClass used to store the data.
storageClass: ""
## @param {bool} external - Enable external access from outside the cluster.
external: false
##
## @enum {string} Version
## @value v8
## @value v7
## @value v6
## @param {Version} version - MongoDB major version to deploy.
version: v8
##
## @section Sharding configuration
##
## @param {bool} sharding - Enable sharded cluster mode. When disabled, deploys a replica set.
sharding: false
## @typedef {struct} ShardingConfig - Sharded cluster configuration.
## @field {int} configServers - Number of config server replicas.
## @field {quantity} configServerSize - PVC size for config servers.
## @field {int} mongos - Number of mongos router replicas.
## @field {[]Shard} shards - List of shard configurations.
## @typedef {struct} Shard - Individual shard configuration.
## @field {string} name - Shard name.
## @field {int} replicas - Number of replicas in this shard.
## @field {quantity} size - PVC size for this shard.
## @param {ShardingConfig} shardingConfig - Configuration for sharded cluster mode.
shardingConfig:
configServers: 3
configServerSize: 3Gi
mongos: 2
shards:
- name: rs0
replicas: 3
size: 10Gi
##
## @section Users configuration
##
## @typedef {struct} Role - MongoDB role configuration.
## @field {string} name - Role name (e.g., readWrite, dbAdmin, clusterAdmin).
## @field {string} db - Database the role applies to.
## @typedef {struct} User - User configuration.
## @field {string} [password] - Password for the user (auto-generated if omitted).
## @field {string} db - Database to authenticate against.
## @field {[]Role} roles - List of MongoDB roles with database scope.
## @param {map[string]User} users - Custom MongoDB users configuration map.
users: {}
## Example:
## users:
## myuser:
## db: mydb
## roles:
## - name: readWrite
## db: mydb
## - name: dbAdmin
## db: mydb
##
## @section Backup parameters
##
## @typedef {struct} Backup - Backup configuration.
## @field {bool} enabled - Enable regular backups.
## @field {string} [schedule] - Cron schedule for automated backups.
## @field {string} [retentionPolicy] - Retention policy (e.g. "30d").
## @field {string} [destinationPath] - Destination path for backups (e.g. s3://bucket/path/).
## @field {string} [endpointURL] - S3 endpoint URL for uploads.
## @field {string} [s3AccessKey] - Access key for S3 authentication.
## @field {string} [s3SecretKey] - Secret key for S3 authentication.
## @param {Backup} backup - Backup configuration.
backup:
enabled: false
schedule: "0 2 * * *"
retentionPolicy: 30d
destinationPath: "s3://bucket/path/to/folder/"
endpointURL: "http://minio-gateway-service:9000"
s3AccessKey: ""
s3SecretKey: ""
##
## @section Bootstrap (recovery) parameters
##
## @typedef {struct} Bootstrap - Bootstrap configuration for restoring a database cluster from a backup.
## @field {bool} enabled - Whether to restore from a backup.
## @field {string} [recoveryTime] - Timestamp for point-in-time recovery; empty means latest.
## @field {string} backupName - Name of backup to restore from.
## @param {Bootstrap} bootstrap - Bootstrap configuration.
bootstrap:
enabled: false
recoveryTime: ""
backupName: ""
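In sharded mode, the WorkloadMonitor's `spec.replicas` is the sum of the per-shard `replicas` in `shardingConfig.shards` (per the "calculates total replicas in sharded mode" test, where shards of 3, 5, and 2 replicas yield 10). A minimal Python sketch of that aggregation:

```python
def total_replicas(sharding_config):
    # WorkloadMonitor spec.replicas = sum of replicas across all shards.
    return sum(shard["replicas"] for shard in sharding_config["shards"])
```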


@@ -231,7 +231,6 @@ rules:
- get
- list
- watch
- delete
- apiGroups: ["kubevirt.io"]
resources:
- virtualmachines
@@ -330,7 +329,6 @@ rules:
- get
- list
- watch
- delete
- apiGroups: ["kubevirt.io"]
resources:
- virtualmachines


@@ -70,6 +70,29 @@ Generate a stable UUID for cloud-init re-initialization upon upgrade.
{{- $uuid }}
{{- end }}
{{/*
Domain resources (cpu, memory) as a JSON object.
Used in vm.yaml for rendering and in the update hook for merge patches.
*/}}
{{- define "virtual-machine.domainResources" -}}
{{- $result := dict -}}
{{- if or .Values.cpuModel (and .Values.resources .Values.resources.cpu .Values.resources.sockets) -}}
{{- $cpu := dict -}}
{{- if and .Values.resources .Values.resources.cpu .Values.resources.sockets -}}
{{- $_ := set $cpu "cores" (.Values.resources.cpu | int64) -}}
{{- $_ := set $cpu "sockets" (.Values.resources.sockets | int64) -}}
{{- end -}}
{{- if .Values.cpuModel -}}
{{- $_ := set $cpu "model" .Values.cpuModel -}}
{{- end -}}
{{- $_ := set $result "cpu" $cpu -}}
{{- end -}}
{{- if and .Values.resources .Values.resources.memory -}}
{{- $_ := set $result "resources" (dict "requests" (dict "memory" .Values.resources.memory)) -}}
{{- end -}}
{{- $result | toJson -}}
{{- end -}}
{{/*
Node Affinity for Windows VMs
*/}}
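The `virtual-machine.domainResources` helper above builds a JSON object that both vm.yaml and the update hook's merge patch consume. A hedged Python sketch of the same shape-building logic (illustrative only; the real implementation is the Go-template define above):

```python
def domain_resources(values):
    # Mirror the helper: build {"cpu": {...}, "resources": {...}} from chart values.
    result = {}
    resources = values.get("resources") or {}
    has_custom_cpu = bool(resources.get("cpu")) and bool(resources.get("sockets"))
    if values.get("cpuModel") or has_custom_cpu:
        cpu = {}
        if has_custom_cpu:
            cpu["cores"] = int(resources["cpu"])
            cpu["sockets"] = int(resources["sockets"])
        if values.get("cpuModel"):
            cpu["model"] = values["cpuModel"]
        result["cpu"] = cpu
    if resources.get("memory"):
        result["resources"] = {"requests": {"memory": resources["memory"]}}
    return result
```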


@@ -3,22 +3,32 @@
{{- $existingVM := lookup "kubevirt.io/v1" "VirtualMachine" $namespace $vmName -}}
{{- $existingPVC := lookup "v1" "PersistentVolumeClaim" $namespace $vmName -}}
{{- $existingService := lookup "v1" "Service" $namespace $vmName -}}
{{- $instanceType := .Values.instanceType | default "" -}}
{{- $instanceProfile := .Values.instanceProfile | default "" -}}
{{- $desiredStorage := .Values.systemDisk.storage | default "" -}}
{{- $desiredServiceType := ternary "LoadBalancer" "ClusterIP" .Values.external -}}
{{- $needUpdateType := false -}}
{{- $needUpdateProfile := false -}}
{{- $needResizePVC := false -}}
{{- $needRecreateService := false -}}
{{- $needRemoveInstanceType := false -}}
{{- $needRemoveCustomResources := false -}}
{{- if and $existingVM $instanceType -}}
{{- $existingHasInstanceType := and $existingVM $existingVM.spec.instancetype -}}
{{- if and $existingHasInstanceType (not $instanceType) -}}
{{- $needRemoveInstanceType = true -}}
{{- else if and $existingHasInstanceType $instanceType -}}
{{- if not (eq $existingVM.spec.instancetype.name $instanceType) -}}
{{- $needUpdateType = true -}}
{{- end -}}
{{- else if and $existingVM (not $existingHasInstanceType) $instanceType -}}
{{- $needRemoveCustomResources = true -}}
{{- end -}}
{{- if and $existingVM $instanceProfile -}}
{{- if and $existingVM $existingVM.spec.preference $instanceProfile -}}
{{- if not (eq $existingVM.spec.preference.name $instanceProfile) -}}
{{- $needUpdateProfile = true -}}
{{- end -}}
@@ -35,7 +45,14 @@
{{- end -}}
{{- end -}}
{{- if or $needUpdateType $needUpdateProfile $needResizePVC }}
{{- if $existingService -}}
{{- $currentServiceType := $existingService.spec.type -}}
{{- if ne $currentServiceType $desiredServiceType -}}
{{- $needRecreateService = true -}}
{{- end -}}
{{- end -}}
{{- if or $needUpdateType $needUpdateProfile $needResizePVC $needRecreateService $needRemoveInstanceType $needRemoveCustomResources }}
apiVersion: batch/v1
kind: Job
metadata:
@@ -80,12 +97,31 @@ spec:
-p '{"spec":{"preference":{"name": "{{ $instanceProfile }}", "revisionName": null}}}'
{{- end }}
{{- if $needRemoveInstanceType }}
echo "Removing instancetype from VM (switching to custom resources)..."
kubectl patch virtualmachines.kubevirt.io {{ $vmName }} -n {{ $namespace }} \
--type merge \
-p '{"spec":{"instancetype":null{{- if not $instanceProfile }},"preference":null{{- end }},"template":{"spec":{"domain":{{ include "virtual-machine.domainResources" . }}}}}}'
{{- end }}
{{- if $needRemoveCustomResources }}
echo "Removing custom CPU/memory from domain (switching to instancetype)..."
kubectl patch virtualmachines.kubevirt.io {{ $vmName }} -n {{ $namespace }} \
--type merge \
-p '{"spec":{"instancetype":{"name":"{{ $instanceType }}","revisionName":null},"template":{"spec":{"domain":{"cpu":null,"resources":null}}}}}'
{{- end }}
{{- if $needResizePVC }}
echo "Patching PVC for storage resize..."
kubectl patch pvc {{ $vmName }} -n {{ $namespace }} \
--type merge \
-p '{"spec":{"resources":{"requests":{"storage":"{{ $desiredStorage }}"}}}}'
{{- end }}
{{- if $needRecreateService }}
echo "Removing Service..."
kubectl delete service --cascade=orphan -n {{ $namespace }} {{ $vmName }}
{{- end }}
---
apiVersion: v1
kind: ServiceAccount
@@ -111,6 +147,10 @@ rules:
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["patch", "get", "list", "watch"]
- apiGroups: [""]
resources: ["services"]
resourceNames: ["{{ $vmName }}"]
verbs: ["delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding


@@ -4,6 +4,9 @@
{{- if and .Values.instanceProfile (not (lookup "instancetype.kubevirt.io/v1beta1" "VirtualMachineClusterPreference" "" .Values.instanceProfile)) }}
{{- fail (printf "Specified profile does not exist in cluster: %s" .Values.instanceProfile) }}
{{- end }}
{{- if and (not .Values.instanceType) (not (and .Values.resources .Values.resources.cpu .Values.resources.sockets .Values.resources.memory)) }}
{{- fail "Either instanceType or resources (cpu, sockets, memory) must be specified" }}
{{- end }}
apiVersion: kubevirt.io/v1
kind: VirtualMachine
@@ -67,15 +70,12 @@ spec:
{{- include "virtual-machine.labels" . | nindent 8 }}
spec:
domain:
{{- if and .Values.resources .Values.resources.cpu .Values.resources.sockets }}
cpu:
cores: {{ .Values.resources.cpu }}
sockets: {{ .Values.resources.sockets }}
{{- $domainRes := include "virtual-machine.domainResources" . | fromJson -}}
{{- with $domainRes.cpu }}
cpu: {{- . | toYaml | nindent 10 }}
{{- end }}
{{- if and .Values.resources .Values.resources.memory }}
resources:
requests:
memory: {{ .Values.resources.memory | quote }}
{{- with $domainRes.resources }}
resources: {{- . | toYaml | nindent 10 }}
{{- end }}
firmware:
uuid: {{ include "virtual-machine.stableUuid" . }}


@@ -1,2 +1,2 @@
cozystack:
image: ghcr.io/cozystack/cozystack/installer:v0.40.7@sha256:6cf3f2439acab9599c00c788f8aabeaf4b4f515a4534bc21e20c95facb542e27
image: ghcr.io/cozystack/cozystack/installer:v0.41.11@sha256:ba9271deb2f6ac29dd067a1277a4b3c33504a045c375957a2175deaee6fdfec3


@@ -27,7 +27,7 @@ releases:
dependsOn: [cilium]
- name: cozy-proxy
releaseName: cozystack
releaseName: cozy-proxy
chart: cozy-cozy-proxy
namespace: cozy-system
optional: true


@@ -66,7 +66,7 @@ releases:
dependsOn: [cilium,kubeovn]
- name: cozy-proxy
releaseName: cozystack
releaseName: cozy-proxy
chart: cozy-cozy-proxy
namespace: cozy-system
dependsOn: [cilium,kubeovn,multus]
@@ -208,6 +208,12 @@ releases:
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: mongodb-rd
releaseName: mongodb-rd
chart: mongodb-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: seaweedfs-rd
releaseName: seaweedfs-rd
chart: seaweedfs-rd
@@ -411,6 +417,12 @@ releases:
namespace: cozy-redis-operator
dependsOn: [cilium,kubeovn,multus]
- name: mongodb-operator
releaseName: mongodb-operator
chart: cozy-mongodb-operator
namespace: cozy-mongodb-operator
dependsOn: [cilium,kubeovn,cert-manager,victoria-metrics-operator]
- name: piraeus-operator
releaseName: piraeus-operator
chart: cozy-piraeus-operator

View File

@@ -0,0 +1,27 @@
---
apiVersion: cozystack.io/v1alpha1
kind: PackageSource
metadata:
name: cozystack.mongodb-application
spec:
sourceRef:
kind: OCIRepository
name: cozystack-packages
namespace: cozy-system
path: /
variants:
- name: default
dependsOn:
- cozystack.networking
libraries:
- name: cozy-lib
path: library/cozy-lib
components:
- name: mongodb
path: apps/mongodb
libraries: ["cozy-lib"]
- name: mongodb-rd
path: system/mongodb-rd
install:
namespace: cozy-system
releaseName: mongodb-rd

View File

@@ -0,0 +1,23 @@
---
apiVersion: cozystack.io/v1alpha1
kind: PackageSource
metadata:
name: cozystack.mongodb-operator
spec:
sourceRef:
kind: OCIRepository
name: cozystack-packages
namespace: cozy-system
path: /
variants:
- name: default
dependsOn:
- cozystack.networking
- cozystack.prometheus-operator-crds
- cozystack.cert-manager
components:
- name: mongodb-operator
path: system/mongodb-operator
install:
namespace: cozy-mongodb-operator
releaseName: mongodb-operator

View File

@@ -1,2 +1,2 @@
assets:
image: ghcr.io/cozystack/cozystack/cozystack-assets:v0.40.7@sha256:2409c3ce21fa43b7b864564be0a8b7249603b1caa506020db0d13dd7dfa38b8c
image: ghcr.io/cozystack/cozystack/cozystack-assets:v0.41.11@sha256:04ca6ac7ac72f4a4d975a33436dc401abf457eb27a7e59f32a333f0b689a11e3

View File

@@ -1,2 +1,2 @@
e2e:
image: ghcr.io/cozystack/cozystack/e2e-sandbox:v0.40.7@sha256:eac71ef0de3450fce96255629e77903630c63ade62b81e7055f1a689f92ee153
image: ghcr.io/cozystack/cozystack/e2e-sandbox:v0.41.11@sha256:0eae9f519669667d60b160ebb93c127843c470ad9ca3447fceaa54604503a7ba

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/matchbox:v0.40.7@sha256:55f1c6a324dc60d3a813777e26135110891a71b1643be6470fb7cd6b5c91d9e5
ghcr.io/cozystack/cozystack/matchbox:v0.41.11@sha256:d11c034f1475d40e83f94a7f51a21082203c72346fe6a35fc931de976c0546c2

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/objectstorage-sidecar:v0.40.7@sha256:3e61975b8bb04d0e1d993a08b49ce8d5265e029a21ad6d57b44d4f93243aff4c
ghcr.io/cozystack/cozystack/objectstorage-sidecar:v0.41.11@sha256:2a3595cd88b30af55b2000d3ca204899beecef0012b0e0402754c3914aad1f7f

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/s3manager:v0.5.0@sha256:5bc30fcdc14b289c7eeca3c53388270e3a56d10a3e611fe6c9099afa02661cf4
ghcr.io/cozystack/cozystack/s3manager:v0.5.0@sha256:1f03fde12124b94b646532e3ebdebf62b8d87e42e0aa5576cd07c4559ce66403

View File

@@ -6,3 +6,6 @@ coredns:
k8sAppLabelOverride: kube-dns
service:
name: kube-dns
serviceAccount:
create: true
name: kube-dns

View File

@@ -1,5 +1,5 @@
cozystackAPI:
image: ghcr.io/cozystack/cozystack/cozystack-api:v0.40.7@sha256:ac3598860e3a41b466d240c54b5eafceccfa2be5aa084d872e06f5ce7e4054e3
image: ghcr.io/cozystack/cozystack/cozystack-api:v0.41.11@sha256:3a8cb618f140c60eb2a5afd3f07a5ec7e638ab4cd949ea0913abc372703a2d82
localK8sAPIEndpoint:
enabled: true
replicas: 2

View File

@@ -1,6 +1,6 @@
cozystackController:
image: ghcr.io/cozystack/cozystack/cozystack-controller:v0.40.7@sha256:0f746b1fa9be8743c249629cff9f457d63ca4a48875161d2ab598f7e69aa0457
image: ghcr.io/cozystack/cozystack/cozystack-controller:v0.41.11@sha256:8f1c725989e32706293afaea195d110d7690b06ad2e52742fce2bbe9f71cbe48
debug: false
disableTelemetry: false
cozystackVersion: "v0.40.7"
cozystackVersion: "v0.41.11"
cozystackAPIKind: "DaemonSet"

View File

@@ -1,6 +1,6 @@
{{- $brandingConfig := .Values._cluster.branding | default dict }}
{{- $tenantText := "v0.40.7" }}
{{- $tenantText := "v0.41.11" }}
{{- $footerText := "Cozystack" }}
{{- $titleText := "Cozystack Dashboard" }}
{{- $logoText := "" }}

View File

@@ -63,6 +63,13 @@ spec:
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
startupProbe:
httpGet:
path: /healthcheck
port: 64231
scheme: HTTP
failureThreshold: 30
periodSeconds: 2
name: bff
ports:
- containerPort: 64231
@@ -183,6 +190,13 @@ spec:
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
startupProbe:
httpGet:
path: /healthcheck
port: 8080
scheme: HTTP
failureThreshold: 30
periodSeconds: 2
name: web
ports:
- containerPort: 8080

View File

@@ -1,6 +1,6 @@
openapiUI:
image: ghcr.io/cozystack/cozystack/openapi-ui:v0.40.7@sha256:7ec24b3ecf1e72a6203f337101fb8b326bd319c540bea2f34c5458b836d46867
image: ghcr.io/cozystack/cozystack/openapi-ui:v0.41.11@sha256:87dfcda3aaaade114e099a3bd8fbb4479a20a761d60849dd2fe47ba245db7cb8
openapiUIK8sBff:
image: ghcr.io/cozystack/cozystack/openapi-ui-k8s-bff:v0.40.7@sha256:d33583995dc81a47c1dcbe45dbd866fa9097f88f4b6eb78b408dca432f15bd38
image: ghcr.io/cozystack/cozystack/openapi-ui-k8s-bff:v0.41.11@sha256:0ee55b703839497b7d8264000c3f39c3688b550de1047eb754577523c810fa79
tokenProxy:
image: ghcr.io/cozystack/cozystack/token-proxy:v0.40.7@sha256:2e280991e07853ea48f97b0a42946afffa10d03d6a83d41099ed83e6ffc94fdc
image: ghcr.io/cozystack/cozystack/token-proxy:v0.41.11@sha256:2e280991e07853ea48f97b0a42946afffa10d03d6a83d41099ed83e6ffc94fdc

View File

@@ -38,8 +38,8 @@
| kubeRbacProxy.args[2] | string | `"--logtostderr=true"` | |
| kubeRbacProxy.args[3] | string | `"--v=0"` | |
| kubeRbacProxy.image.pullPolicy | string | `"IfNotPresent"` | Image pull policy |
| kubeRbacProxy.image.repository | string | `"gcr.io/kubebuilder/kube-rbac-proxy"` | Image repository |
| kubeRbacProxy.image.tag | string | `"v0.16.0"` | Version of image |
| kubeRbacProxy.image.repository | string | `"quay.io/brancz/kube-rbac-proxy"` | Image repository |
| kubeRbacProxy.image.tag | string | `"v0.18.1"` | Version of image |
| kubeRbacProxy.livenessProbe | object | `{}` | https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/ |
| kubeRbacProxy.readinessProbe | object | `{}` | https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/ |
| kubeRbacProxy.resources | object | `{"limits":{"cpu":"250m","memory":"128Mi"},"requests":{"cpu":"100m","memory":"64Mi"}}` | ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |

View File

@@ -98,13 +98,13 @@ kubeRbacProxy:
image:
# -- Image repository
repository: gcr.io/kubebuilder/kube-rbac-proxy
repository: quay.io/brancz/kube-rbac-proxy
# -- Image pull policy
pullPolicy: IfNotPresent
# -- Version of image
tag: v0.16.0
tag: v0.18.1
args:
- --secure-listen-address=0.0.0.0:8443

View File

@@ -3,7 +3,7 @@ kamaji:
deploy: false
image:
pullPolicy: IfNotPresent
tag: v0.40.7@sha256:fe9b6bb548edfc26be8aaac65801d598a4e2f9884ddf748083b9e509fa00259e
tag: v0.41.11@sha256:9ac09f817c67de652bacedcdc0390cd343401879b6c1a1c28131a0f109af3804
repository: ghcr.io/cozystack/cozystack/kamaji
resources:
limits:
@@ -13,4 +13,4 @@ kamaji:
cpu: 100m
memory: 100Mi
extraArgs:
- --migrate-image=ghcr.io/cozystack/cozystack/kamaji:v0.40.7@sha256:fe9b6bb548edfc26be8aaac65801d598a4e2f9884ddf748083b9e509fa00259e
- --migrate-image=ghcr.io/cozystack/cozystack/kamaji:v0.41.11@sha256:9ac09f817c67de652bacedcdc0390cd343401879b6c1a1c28131a0f109af3804

View File

@@ -5,12 +5,6 @@
{{- $existingKubeappsSecret := lookup "v1" "Secret" .Release.Namespace "kubeapps-client" }}
{{- $existingAuthConfig := lookup "v1" "Secret" "cozy-dashboard" "kubeapps-auth-config" }}
{{- $brandingConfig := .Values._cluster.branding | default dict }}
{{ $branding := "" }}
{{- if $brandingConfig }}
{{- $branding = $brandingConfig.branding }}
{{- end }}
---
apiVersion: v1.edp.epam.com/v1alpha1
@@ -32,9 +26,15 @@ metadata:
spec:
realmName: cozy
clusterKeycloakRef: keycloak-cozy
{{- if $branding }}
displayHtmlName: {{ $branding }}
displayName: {{ $branding }}
{{- if $brandingConfig }}
{{- if hasKey $brandingConfig "brandName" }}
displayName: {{ $brandingConfig.brandName }}
{{- end }}
{{- if hasKey $brandingConfig "brandHtmlName" }}
displayHtmlName: {{ $brandingConfig.brandHtmlName }}
{{- else if hasKey $brandingConfig "branding" }}
displayHtmlName: {{ $brandingConfig.branding }}
{{- end }}
{{- end }}
---

View File

@@ -1,4 +1,4 @@
portSecurity: true
routes: ""
image: ghcr.io/cozystack/cozystack/kubeovn-plunger:v0.40.7@sha256:34e603d6d527ad07d0fd6f969ff4b4f2a98abfdbfacc020ccd9eb61b8394b1c1
image: ghcr.io/cozystack/cozystack/kubeovn-plunger:v0.41.11@sha256:50dcf0aa177d8b88949d15cdbbb225f4ac06677048111b5d8ff4910d6ec97d11
ovnCentralName: ovn-central

View File

@@ -1,3 +1,3 @@
portSecurity: true
routes: ""
image: ghcr.io/cozystack/cozystack/kubeovn-webhook:v0.40.7@sha256:e6334c29d3aaf0dea766c88e3e05b53ad623d1bb497b3c836e6f76adade45b29
image: ghcr.io/cozystack/cozystack/kubeovn-webhook:v0.41.11@sha256:e18f9fd679e38f65362a8d0042f25468272f6d081136ad47027168d8e7e07a4a

View File

@@ -1,5 +1,3 @@
KUBEOVN_TAG=v0.40.0
export NAME=kubeovn
export NAMESPACE=cozy-$(NAME)
@@ -8,6 +6,6 @@ include ../../../scripts/package.mk
update:
rm -rf charts values.yaml Chart.yaml
tag=$(KUBEOVN_TAG) && \
curl -sSL https://github.com/cozystack/kubeovn/archive/refs/tags/$${tag}.tar.gz | \
tar xzvf - --strip 2 kubeovn-$${tag#*v}/chart
tag=$$(git ls-remote --tags --sort="v:refname" https://github.com/cozystack/kubeovn-chart | awk -F'[/^]' 'END{print $$3}') && \
curl -sSL https://github.com/cozystack/kubeovn-chart/archive/refs/tags/$${tag}.tar.gz | \
tar xzvf - --strip 2 kubeovn-chart-$${tag#*v}/chart

View File

@@ -15,12 +15,12 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: v1.14.25
version: v1.15.3
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.14.25"
appVersion: "1.15.3"
kubeVersion: ">= 1.29.0-0"

View File

@@ -69,7 +69,9 @@ Number of master nodes
{{- $imageVersion := (index $ds.spec.template.spec.containers 0).image | splitList ":" | last | trimPrefix "v" -}}
{{- $versionRegex := `^(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)` -}}
{{- if and (ne $newChartVersion $chartVersion) (regexMatch $versionRegex $imageVersion) -}}
{{- if regexFind $versionRegex $imageVersion | semverCompare ">= 1.13.0" -}}
{{- if regexFind $versionRegex $imageVersion | semverCompare ">= 1.15.0" -}}
25.03
{{- else if regexFind $versionRegex $imageVersion | semverCompare ">= 1.13.0" -}}
24.03
{{- else if regexFind $versionRegex $imageVersion | semverCompare ">= 1.12.0" -}}
22.12

View File

@@ -122,6 +122,7 @@ spec:
limits:
cpu: {{ index .Values "ovn-central" "limits" "cpu" }}
memory: {{ index .Values "ovn-central" "limits" "memory" }}
ephemeral-storage: {{ index .Values "ovn-central" "limits" "ephemeral-storage" }}
volumeMounts:
- mountPath: /var/run/ovn
name: host-run-ovn

View File

@@ -101,6 +101,7 @@ spec:
- --pod-nic-type={{- .Values.networking.POD_NIC_TYPE }}
- --enable-lb={{- .Values.func.ENABLE_LB }}
- --enable-np={{- .Values.func.ENABLE_NP }}
- --np-enforcement={{- .Values.func.NP_ENFORCEMENT }}
- --enable-eip-snat={{- .Values.networking.ENABLE_EIP_SNAT }}
- --enable-external-vpc={{- .Values.func.ENABLE_EXTERNAL_VPC }}
- --enable-ecmp={{- .Values.networking.ENABLE_ECMP }}
@@ -117,11 +118,14 @@ spec:
- --secure-serving={{- .Values.func.SECURE_SERVING }}
- --enable-ovn-ipsec={{- .Values.func.ENABLE_OVN_IPSEC }}
- --enable-anp={{- .Values.func.ENABLE_ANP }}
- --enable-dns-name-resolver={{- .Values.func.ENABLE_DNS_NAME_RESOLVER }}
- --ovsdb-con-timeout={{- .Values.func.OVSDB_CON_TIMEOUT }}
- --ovsdb-inactivity-timeout={{- .Values.func.OVSDB_INACTIVITY_TIMEOUT }}
- --enable-live-migration-optimize={{- .Values.func.ENABLE_LIVE_MIGRATION_OPTIMIZE }}
- --enable-ovn-lb-prefer-local={{- .Values.func.ENABLE_OVN_LB_PREFER_LOCAL }}
- --image={{ .Values.global.registry.address }}/{{ .Values.global.images.kubeovn.repository }}:{{ .Values.global.images.kubeovn.tag }}
- --skip-conntrack-dst-cidrs={{- .Values.networking.SKIP_CONNTRACK_DST_CIDRS }}
- --non-primary-cni-mode={{- .Values.cni_conf.NON_PRIMARY_CNI }}
securityContext:
runAsUser: {{ include "kubeovn.runAsUser" . }}
privileged: false
@@ -140,11 +144,7 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: KUBE_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: KUBE_NODE_NAME
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
@@ -194,6 +194,7 @@ spec:
limits:
cpu: {{ index .Values "kube-ovn-controller" "limits" "cpu" }}
memory: {{ index .Values "kube-ovn-controller" "limits" "memory" }}
ephemeral-storage: {{ index .Values "kube-ovn-controller" "limits" "ephemeral-storage" }}
nodeSelector:
kubernetes.io/os: "linux"
volumes:

View File

@@ -100,6 +100,7 @@ spec:
limits:
cpu: 3
memory: 1Gi
ephemeral-storage: 1Gi
volumeMounts:
- mountPath: /var/run/ovn
name: host-run-ovn

View File

@@ -81,7 +81,7 @@ spec:
env:
- name: ENABLE_SSL
value: "{{ .Values.networking.ENABLE_SSL }}"
- name: KUBE_NODE_NAME
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
@@ -110,6 +110,7 @@ spec:
limits:
cpu: {{ index .Values "kube-ovn-monitor" "limits" "cpu" }}
memory: {{ index .Values "kube-ovn-monitor" "limits" "memory" }}
ephemeral-storage: {{ index .Values "kube-ovn-monitor" "limits" "ephemeral-storage" }}
volumeMounts:
- mountPath: /var/run/ovn
name: host-run-ovn

View File

@@ -48,10 +48,18 @@ rules:
- switch-lb-rules/status
- vpc-dnses
- vpc-dnses/status
- dnsnameresolvers
- dnsnameresolvers/status
- qos-policies
- qos-policies/status
verbs:
- "*"
- create
- get
- list
- update
- patch
- watch
- delete
- apiGroups:
- ""
resources:
@@ -84,6 +92,8 @@ rules:
- network-attachment-definitions
verbs:
- get
- list
- watch
- apiGroups:
- ""
- networking.k8s.io
@@ -166,7 +176,11 @@ rules:
resources:
- leases
verbs:
- "*"
- create
- update
- patch
- get
- watch
- apiGroups:
- "kubevirt.io"
resources:
@@ -181,6 +195,7 @@ rules:
resources:
- adminnetworkpolicies
- baselineadminnetworkpolicies
- clusternetworkpolicies
verbs:
- get
- list
@@ -276,7 +291,6 @@ rules:
verbs:
- get
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
@@ -355,12 +369,23 @@ rules:
- "list"
- "watch"
- "delete"
- apiGroups:
- ""
resources:
- "secrets"
verbs:
- "get"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: secret-reader-ovn-ipsec
namespace: {{ .Values.namespace }}
rules:
- apiGroups:
- ""
resources:
- "secrets"
resourceNames:
- "ovn-ipsec-ca"
verbs:
- "get"
- "list"
- "watch"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole

View File

@@ -67,6 +67,20 @@ subjects:
namespace: {{ .Values.namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kube-ovn-cni-secret-reader
namespace: {{ .Values.namespace }}
subjects:
- kind: ServiceAccount
name: kube-ovn-cni
namespace: {{ .Values.namespace }}
roleRef:
kind: Role
name: secret-reader-ovn-ipsec
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kube-ovn-app

View File

@@ -54,7 +54,7 @@ spec:
value: "{{- .Values.networking.TUNNEL_TYPE }}"
- name: DPDK_TUNNEL_IFACE
value: "{{- .Values.networking.DPDK_TUNNEL_IFACE }}"
- name: KUBE_NODE_NAME
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName

View File

@@ -122,9 +122,7 @@ spec:
- --secure-serving={{- .Values.func.SECURE_SERVING }}
- --enable-ovn-ipsec={{- .Values.func.ENABLE_OVN_IPSEC }}
- --set-vxlan-tx-off={{- .Values.func.SET_VXLAN_TX_OFF }}
{{- with .Values.mtu }}
- --mtu={{ . }}
{{- end }}
- --non-primary-cni-mode={{- .Values.cni_conf.NON_PRIMARY_CNI }}
securityContext:
runAsUser: 0
privileged: false
@@ -143,7 +141,7 @@ spec:
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: KUBE_NODE_NAME
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
@@ -227,6 +225,7 @@ spec:
limits:
cpu: {{ index .Values "kube-ovn-cni" "limits" "cpu" }}
memory: {{ index .Values "kube-ovn-cni" "limits" "memory" }}
ephemeral-storage: {{ index .Values "kube-ovn-cni" "limits" "ephemeral-storage" }}
nodeSelector:
kubernetes.io/os: "linux"
volumes:

View File

@@ -115,7 +115,7 @@ spec:
value: "{{- .Values.func.HW_OFFLOAD }}"
- name: TUNNEL_TYPE
value: "{{- .Values.networking.TUNNEL_TYPE }}"
- name: KUBE_NODE_NAME
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
@@ -173,6 +173,7 @@ spec:
limits:
cpu: {{ index .Values "ovs-ovn" "limits" "cpu" }}
memory: {{ index .Values "ovs-ovn" "limits" "memory" }}
ephemeral-storage: {{ index .Values "ovs-ovn" "limits" "ephemeral-storage" }}
nodeSelector:
kubernetes.io/os: "linux"
volumes:

View File

@@ -73,7 +73,6 @@ spec:
{{- else if eq .Values.networking.NET_STACK "ipv6" -}}
{{ .Values.ipv6.PINGER_EXTERNAL_DOMAIN }}
{{- end }}
- --ds-namespace={{ .Values.namespace }}
- --logtostderr=false
- --alsologtostderr=true
- --log_file=/var/log/kube-ovn/kube-ovn-pinger.log
@@ -102,6 +101,10 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
@@ -133,6 +136,7 @@ spec:
limits:
cpu: {{ index .Values "kube-ovn-pinger" "limits" "cpu" }}
memory: {{ index .Values "kube-ovn-pinger" "limits" "memory" }}
ephemeral-storage: {{ index .Values "kube-ovn-pinger" "limits" "ephemeral-storage" }}
livenessProbe:
httpGet:
path: /metrics

View File

@@ -120,6 +120,14 @@ spec:
- sh
- -c
- /kube-ovn/remove-finalizer.sh 2>&1 | tee -a /var/log/kube-ovn/remove-finalizer.log
resources:
requests:
cpu: 100m
memory: 200Mi
limits:
cpu: 1
memory: 500Mi
ephemeral-storage: 1Gi
volumeMounts:
- mountPath: /var/log/kube-ovn
name: kube-ovn-log

View File

@@ -31,6 +31,8 @@ rules:
- daemonsets
verbs:
- list
- get
- watch
- apiGroups:
- apps
resources:

View File

@@ -7,7 +7,7 @@ metadata:
kubernetes.io/description: |
kube-ovn vpc-nat common config
data:
image: {{ .Values.global.registry.address }}/{{ .Values.global.images.kubeovn.vpcRepository }}:{{ .Values.global.images.kubeovn.tag }}
image: {{ .Values.global.registry.address }}/{{ .Values.global.images.natgateway.repository }}:{{ or .Values.global.images.natgateway.tag .Values.global.images.kubeovn.tag }}
---
kind: ConfigMap

View File

@@ -8,10 +8,11 @@ global:
images:
kubeovn:
repository: kube-ovn
vpcRepository: vpc-nat-gateway
tag: v1.14.25
support_arm: true
thirdparty: true
tag: v1.15.3
natgateway:
repository: vpc-nat-gateway
# Falls back to the same tag as kubeovn if empty
tag: v1.15.3
image:
pullPolicy: IfNotPresent
@@ -46,6 +47,8 @@ networking:
ENABLE_METRICS: true
# comma-separated string of nodelocal DNS ip addresses
NODE_LOCAL_DNS_IP: ""
# comma-separated list of destination IP CIDRs that should skip conntrack processing
SKIP_CONNTRACK_DST_CIDRS: ""
PROBE_INTERVAL: 180000
OVN_NORTHD_PROBE_INTERVAL: 5000
OVN_LEADER_PROBE_INTERVAL: 5
@@ -57,6 +60,7 @@ networking:
func:
ENABLE_LB: true
ENABLE_NP: true
NP_ENFORCEMENT: standard
ENABLE_EXTERNAL_VPC: false
HW_OFFLOAD: false
ENABLE_LB_SVC: false
@@ -73,6 +77,7 @@ func:
ENABLE_NAT_GW: true
ENABLE_OVN_IPSEC: false
ENABLE_ANP: false
ENABLE_DNS_NAME_RESOLVER: false
SET_VXLAN_TX_OFF: false
OVSDB_CON_TIMEOUT: 3
OVSDB_INACTIVITY_TIMEOUT: 10
@@ -80,6 +85,10 @@ func:
ENABLE_OVN_LB_PREFER_LOCAL: false
ipv4:
POD_CIDR: "10.16.0.0/16"
POD_GATEWAY: "10.16.0.1"
SVC_CIDR: "10.96.0.0/12"
JOIN_CIDR: "100.64.0.0/16"
PINGER_EXTERNAL_ADDRESS: "1.1.1.1"
PINGER_EXTERNAL_DOMAIN: "kube-ovn.io."
@@ -116,6 +125,7 @@ cni_conf:
CNI_CONF_FILE: "/kube-ovn/01-kube-ovn.conflist"
LOCAL_BIN_DIR: "/usr/local/bin"
MOUNT_LOCAL_BIN_DIR: false
NON_PRIMARY_CNI: false
kubelet_conf:
KUBELET_DIR: "/var/lib/kubelet"
@@ -135,7 +145,7 @@ fullnameOverride: ""
HYBRID_DPDK: false
HUGEPAGE_SIZE_TYPE: hugepages-2Mi # Default
HUGEPAGES: 1Gi
DPDK_IMAGE_TAG: "v1.14.0-dpdk"
DPDK_IMAGE_TAG: "v1.15.0-dpdk"
DPDK_CPU: "1000m" # Default CPU configuration
DPDK_MEMORY: "2Gi" # Default Memory configuration
@@ -146,6 +156,7 @@ ovn-central:
limits:
cpu: "3"
memory: "4Gi"
ephemeral-storage: 1Gi
ovs-ovn:
requests:
cpu: "200m"
@@ -153,6 +164,7 @@ ovs-ovn:
limits:
cpu: "2"
memory: "1000Mi"
ephemeral-storage: 1Gi
kube-ovn-controller:
requests:
cpu: "200m"
@@ -160,6 +172,7 @@ kube-ovn-controller:
limits:
cpu: "1000m"
memory: "1Gi"
ephemeral-storage: 1Gi
kube-ovn-cni:
requests:
cpu: "100m"
@@ -167,6 +180,7 @@ kube-ovn-cni:
limits:
cpu: "1000m"
memory: "1Gi"
ephemeral-storage: 1Gi
kube-ovn-pinger:
requests:
cpu: "100m"
@@ -174,6 +188,7 @@ kube-ovn-pinger:
limits:
cpu: "200m"
memory: "400Mi"
ephemeral-storage: 1Gi
kube-ovn-monitor:
requests:
cpu: "200m"
@@ -181,3 +196,4 @@ kube-ovn-monitor:
limits:
cpu: "200m"
memory: "200Mi"
ephemeral-storage: 1Gi

View File

@@ -65,4 +65,4 @@ global:
images:
kubeovn:
repository: kubeovn
tag: v1.14.25@sha256:d0b29daaf36e81cac0f9fb15d0ea6b1b49f1abba81a14c73b88a2e60ffcc5978
tag: v1.15.3@sha256:fa53d5f254f640cb626329ad35d9e7aad647dd8e1e645e68f3f13c3659472a30

File diff suppressed because one or more lines are too long

View File

@@ -1,3 +1,3 @@
storageClass: replicated
csiDriver:
image: ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:0b208ed506dd8c453426761d93ec3d42c9d1b791ba6c91b01c6386dcb1b02442
image: ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:bb5b17044969e663c3b391f7274883735c0ffe05a9523988469bdf2974de2dea

View File

@@ -1,5 +1,5 @@
lineageControllerWebhook:
image: ghcr.io/cozystack/cozystack/lineage-controller-webhook:v0.40.7@sha256:1680ba3634c81691c2cf346759c7746cf700ce59fd752fce862e5e89f3a6fdb5
image: ghcr.io/cozystack/cozystack/lineage-controller-webhook:v0.41.11@sha256:91ad700fe681c6f96e756c51ee22ff50e606536c316c608e11207bdca817e0ce
debug: false
localK8sAPIEndpoint:
enabled: true

View File

@@ -1,7 +1,7 @@
piraeusServer:
image:
repository: ghcr.io/cozystack/cozystack/piraeus-server
tag: 1.32.3@sha256:3d1b4348c665fb88f8bead09a1fa68547e6872172ed0168449cb232c4467ad84
tag: 1.32.3@sha256:18fac1ac740ce64c1dfb31b5ab36b6d008af8d9a70aedd451b32a726c79ca794
linstor:
autoDiskful:
enabled: true
@@ -10,4 +10,4 @@ linstor:
linstorCSI:
image:
repository: ghcr.io/cozystack/cozystack/linstor-csi
tag: v1.10.5@sha256:6e6cf48cb994f3918df946e02ec454ac64916678b3e60d78c136b431f1a26155
tag: v1.10.5@sha256:50ab1ab0210d4e7ebfca311f445bb764516db5ddb63fc6d28536b28622eee753

View File

@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -0,0 +1,3 @@
apiVersion: v2
name: cozy-mongodb-operator
version: 0.0.0 # Placeholder, the actual version will be automatically set during the build process

View File

@@ -0,0 +1,11 @@
export NAME=mongodb-operator
export NAMESPACE=cozy-$(NAME)
include ../../../scripts/package.mk
update:
rm -rf charts
helm repo add percona https://percona.github.io/percona-helm-charts
helm repo update percona
helm pull percona/psmdb-operator --untar --untardir charts
rm -rf charts/psmdb-operator/charts

View File

@@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -0,0 +1,13 @@
apiVersion: v2
appVersion: 1.21.1
description: A Helm chart for deploying the Percona Operator for MongoDB
home: https://docs.percona.com/percona-operator-for-mongodb/
maintainers:
- email: natalia.marukovich@percona.com
name: nmarukovich
- email: julio.pasinatto@percona.com
name: jvpasinatto
- email: eleonora.zinchenko@percona.com
name: eleo007
name: psmdb-operator
version: 1.21.2

View File

@@ -0,0 +1,13 @@
Copyright 2019 Paul Czarkowski <username.taken@gmail.com>
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -0,0 +1,77 @@
# Percona Operator for MongoDB
Percona Operator for MongoDB allows users to deploy and manage Percona Server for MongoDB Clusters on Kubernetes.
Useful links:
- [Operator Github repository](https://github.com/percona/percona-server-mongodb-operator)
- [Operator Documentation](https://www.percona.com/doc/kubernetes-operator-for-psmongodb/index.html)
## Pre-requisites
* Kubernetes 1.30+
* Helm v3
# Installation
This chart deploys the Operator Pod, which in turn manages the creation of Percona Server for MongoDB clusters in Kubernetes.
## Installing the chart
To install the chart with the `my-operator` release name in a dedicated namespace (recommended):
```sh
helm repo add percona https://percona.github.io/percona-helm-charts/
helm install my-operator percona/psmdb-operator --version 1.21.2 --namespace my-namespace
```
The chart can be customized using the following configurable parameters:
| Parameter | Description | Default |
| ---------------------------- | ------------------------------------------------------------------------------------------------------------ | ----------------------------------------- |
| `image.repository` | PSMDB Operator Container image name | `percona/percona-server-mongodb-operator` |
| `image.tag` | PSMDB Operator Container image tag | `1.21.1` |
| `image.pullPolicy` | PSMDB Operator Container pull policy | `Always` |
| `image.pullSecrets` | PSMDB Operator Pod pull secret | `[]` |
| `replicaCount` | PSMDB Operator Pod quantity | `1` |
| `tolerations` | List of node taints to tolerate | `[]` |
| `annotations` | PSMDB Operator Deployment annotations | `{}` |
| `podAnnotations` | PSMDB Operator Pod annotations | `{}` |
| `labels` | PSMDB Operator Deployment labels | `{}` |
| `podLabels` | PSMDB Operator Pod labels | `{}` |
| `resources` | Resource requests and limits | `{}` |
| `nodeSelector` | Labels for Pod assignment | `{}` |
| `podSecurityContext` | Pod Security Context | `{}` |
| `watchNamespace`             | Namespace(s) for the operator to watch when different from the release namespace (comma-separated if multiple) | `""` |
| `createNamespace` | Set if you want to create watched namespaces with helm | `false` |
| `rbac.create` | If false RBAC will not be created. RBAC resources will need to be created manually | `true` |
| `securityContext` | Container Security Context | `{}` |
| `serviceAccount.create` | If false the ServiceAccounts will not be created. The ServiceAccounts must be created manually | `true` |
| `serviceAccount.annotations` | PSMDB Operator ServiceAccount annotations | `{}` |
| `logStructured` | Force PSMDB operator to print JSON-wrapped log messages | `false` |
| `logLevel` | PSMDB Operator logging level | `INFO` |
| `disableTelemetry` | Disable sending PSMDB Operator telemetry data to Percona | `false` |
| `maxConcurrentReconciles` | Number of concurrent workers that can reconcile resources in Percona Server for MongoDB clusters in parallel | `1` |
Specify parameters using the `--set key=value[,key=value]` argument to `helm install`.
Alternatively, provide a YAML file that specifies the values for the parameters:
```sh
helm install psmdb-operator -f values.yaml percona/psmdb-operator
```
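As a sketch, such a values file covering a few of the parameters from the table above might look like this (the override values are hypothetical):

```yaml
# Hypothetical overrides for the psmdb-operator chart
replicaCount: 2
image:
  pullPolicy: IfNotPresent
watchNamespace: "db-a,db-b"
logLevel: INFO
disableTelemetry: true
```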
## Deploy the database
To deploy Percona Server for MongoDB run the following command:
```sh
helm install my-db percona/psmdb-db
```
See more about Percona Server for MongoDB deployment in its chart [here](https://github.com/percona/percona-helm-charts/tree/main/charts/psmdb-db) or in the [Helm chart installation guide](https://www.percona.com/doc/kubernetes-operator-for-psmongodb/helm.html).
# Need help?
| **Commercial Support** | **Community Support** |
| :-: | :-: |
| <br/>Enterprise-grade assistance for your mission-critical database deployments in containers and Kubernetes. Get expert guidance for complex tasks like multi-cloud replication, database migration and building platforms.<br/><br/> | <br/>Connect with our engineers and fellow users for general questions, troubleshooting, and sharing feedback and ideas.<br/><br/> |
| **[Get Percona Support](https://hubs.ly/Q02ZTH8Q0)** | **[Visit our Forum](https://forums.percona.com/)** |

File diff suppressed because it is too large

View File

@@ -0,0 +1,40 @@
1. Percona Operator for MongoDB is deployed.
See if the operator Pod is running:
kubectl get pods -l app.kubernetes.io/name=psmdb-operator --namespace {{ .Release.Namespace }}
Check the operator logs if the Pod is not starting:
export POD=$(kubectl get pods -l app.kubernetes.io/name=psmdb-operator --namespace {{ .Release.Namespace }} --output name)
kubectl logs $POD --namespace={{ .Release.Namespace }}
2. Deploy the database cluster from psmdb-db chart:
helm install my-db percona/psmdb-db --namespace={{ .Release.Namespace }}
{{- if .Release.IsUpgrade }}
{{- $ctx := dict "upgradeCrd" false }}
{{- $crdNames := list "perconaservermongodbbackups.psmdb.percona.com" "perconaservermongodbrestores.psmdb.percona.com" "perconaservermongodbs.psmdb.percona.com" }}
{{- range $name := $crdNames }}
{{- $crd := lookup "apiextensions.k8s.io/v1" "CustomResourceDefinition" "" $name }}
{{- if $crd }}
{{- $crdLabels := (($crd).metadata).labels | default dict }}
{{- $crdVersion := index $crdLabels "app.kubernetes.io/version" }}
{{- if or (not $crdVersion) (semverCompare (printf "< %s" $.Chart.AppVersion) (trimPrefix "v" $crdVersion)) }}
{{- $_ := set $ctx "upgradeCrd" true }}
{{- end }}
{{- end }}
{{- end }}
{{- if $ctx.upgradeCrd }}
** WARNING ** During Helm upgrade CRDs are not automatically upgraded.
Consider upgrading to the latest version of the CRDs using the command below:
kubectl apply --server-side --force-conflicts -f https://raw.githubusercontent.com/percona/percona-server-mongodb-operator/v{{ .Chart.AppVersion }}/deploy/crd.yaml
Ensure all deprecated fields are reviewed as part of the upgrade process, especially when running multiple PSMDB Operator versions in the same cluster. Deprecated fields may be removed or unsupported in newer CRD versions.
{{- end }}
{{- end }}
Read more in our documentation: https://docs.percona.com/percona-operator-for-mongodb/

Some files were not shown because too many files have changed in this diff