Compare commits

...

89 Commits

Author SHA1 Message Date
Kirill Ilin
ba823b0c06 [tenant] Create affinity class design draft
Signed-off-by: Kirill Ilin <stitch14@yandex.ru>
2026-02-18 20:27:51 +05:00
Andrei Kvapil
7ff5b2ba23 [harbor] Add managed Harbor container registry (#2058)
## What this PR does

Adds Harbor v2.14.2 as a managed tenant-level container registry service
in the PaaS bundle.

**Architecture:**

- Wrapper chart (`apps/harbor`) — HelmRelease, Ingress,
WorkloadMonitors, BucketClaim, dashboard RBAC
- Vendored upstream chart (`system/harbor`) from helm.goharbor.io
v1.18.2
- System chart (`system/harbor`) provisions PostgreSQL via CloudNativePG
and Redis via redis-operator
- ApplicationDefinition (`system/harbor-rd`) for dynamic `Harbor` CRD
registration
- PackageSource and paas.yaml bundle entry for platform integration

**Key design decisions:**

- Database and Redis provisioned via CNPG and redis-operator (not
internal Helm-based instances) for reliable day-2 operations
- Registry image storage uses S3 via COSI BucketClaim/BucketAccess from
namespace SeaweedFS
- Trivy vulnerability scanner cache uses PVC (S3 not supported by
vendored chart)
- Token CA key/cert persisted across upgrades via Secret lookup
- Per-component resource configuration (core, registry, jobservice,
trivy)
- Ingress with TLS via cert-manager, cloudflare issuer type handling,
proxy timeouts for large image pushes
- Auto-generated admin credentials persisted across upgrades

**E2E test:** Creates Harbor instance, verifies HelmRelease readiness,
deployment availability, credentials secret, service port, then cleans
up.
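
A note on the token CA persistence above: this is commonly done with Helm's `lookup` function. A minimal sketch, assuming a hypothetical `<release>-token-ca` Secret name and Sprig's `genCA` (the actual chart's names and parameters may differ):

```yaml
{{- /* Reuse an existing token CA Secret if present; otherwise generate
       a new CA. Secret name and CA validity are illustrative only. */}}
{{- $existing := lookup "v1" "Secret" .Release.Namespace (printf "%s-token-ca" .Release.Name) }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-token-ca
type: Opaque
data:
  {{- if $existing }}
  tls.crt: {{ index $existing.data "tls.crt" }}
  tls.key: {{ index $existing.data "tls.key" }}
  {{- else }}
  {{- $ca := genCA "harbor-token-ca" 3650 }}
  tls.crt: {{ $ca.Cert | b64enc }}
  tls.key: {{ $ca.Key | b64enc }}
  {{- end }}
```

Because `lookup` runs at render time, an upgrade re-emits the already-stored key/cert instead of rotating the CA, which keeps previously issued registry tokens valid.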

### Release note

```release-note
[harbor] Add managed Harbor container registry as a tenant-level service
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **New Features**
  * Added Harbor container registry deployment with integrated Kubernetes support, including database and cache layers.
  * Enabled metrics monitoring via Prometheus integration.
  * Configured dashboard management interface for Harbor administration.

* **Tests**
  * Added end-to-end testing for Harbor deployment and verification.

* **Chores**
  * Integrated Harbor into the platform's application package bundle.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-02-18 13:54:26 +01:00
Aleksei Sviridkin
199ffe319a fix(harbor): set UTF-8 encoding and locale for CNPG database
Add encoding, localeCollate, and localeCType to initdb bootstrap
configuration to ensure fulltext search (ilike) works correctly.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 11:26:58 +03:00
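
Combined with the explicit bootstrap added elsewhere in this series, the resulting CNPG configuration looks roughly like this (cluster, database, and owner names are assumptions, not the chart's actual values):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: harbor-postgres        # illustrative name
spec:
  instances: 2
  bootstrap:
    initdb:
      database: registry       # assumption: actual name is set by the chart
      owner: harbor            # assumption
      encoding: UTF8
      localeCollate: en_US.UTF-8
      localeCType: en_US.UTF-8
```

Without an explicit UTF-8 encoding and locale, `ilike` and full-text queries can misbehave on non-ASCII input, which is what this commit guards against.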
Andrei Kvapil
681b2cef54 docs: add changelog for v1.0.0-beta.6 (#2067)
This PR adds the changelog for release `v1.0.0-beta.6`.

The changelog has been automatically generated in
`docs/changelogs/v1.0.0-beta.6.md`.

## Summary by CodeRabbit

* **New Features**
  * Cilium-Kilo networking variant for enhanced multi-location setup
  * NATS monitoring dashboards with Prometheus integration
  * DNS validation for Application names
  * Operator auto-installation of CRDs at startup

* **Bug Fixes**
  * Fixed HelmRelease adoption during platform migration
  * Improved test reliability and timeout handling

* **Documentation**
  * Enhanced Azure autoscaling troubleshooting guidance
  * Updated multi-location configuration docs

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-02-18 00:29:45 +01:00
Andrei Kvapil
5aa53d14c8 Release v1.0.0-beta.6 (#2066)
This PR prepares the release `v1.0.0-beta.6`.

## Summary by CodeRabbit

* **Chores**
  * Updated container image references from v1.0.0-beta.5 to v1.0.0-beta.6 across the platform, including updates to operators, controllers, dashboards, storage components, and other services with corresponding digest updates.

2026-02-18 00:28:50 +01:00
Aleksei Sviridkin
daf1b71e7c fix(harbor): add explicit CNPG bootstrap configuration
Specify initdb bootstrap with database and owner names explicitly
instead of relying on CNPG defaults which may change between versions.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 02:23:00 +03:00
Aleksei Sviridkin
0198c9896a fix(harbor): use standard hostname pattern without double prefix
Follow the same hostname pattern as kubernetes and vpn apps: use
Release.Name directly (which already includes the harbor- prefix)
instead of adding an extra harbor. subdomain.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 02:17:59 +03:00
Aleksei Sviridkin
78f8ee2deb fix(harbor): enable database TLS and fix token key/cert check
Change sslmode from disable to require for CNPG PostgreSQL connection,
as CNPG supports TLS out of the box. Fix token key/cert preservation to
verify both values are present before passing to Harbor core.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 02:12:11 +03:00
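
In Harbor chart values, the sslmode change corresponds to a fragment along these lines (the host name is illustrative; CNPG exposes a `-rw` read-write Service):

```yaml
database:
  type: external
  external:
    host: harbor-postgres-rw   # CNPG read-write Service (illustrative name)
    port: "5432"
    sslmode: require           # was: disable
```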
Aleksei Sviridkin
87d0390256 fix(harbor): include tenant domain in default hostname and add E2E cleanup
Use tenant base domain in default hostname construction (harbor.RELEASE.DOMAIN)
to match the pattern used by other apps (kubernetes, vpn). Remove unused $ingress
variable from harbor.yaml. Add cleanup of stale resources from previous failed
E2E runs.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 02:07:38 +03:00
cozystack-bot
96467cdefd docs: add changelog for v1.0.0-beta.6
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-02-17 22:44:01 +00:00
cozystack-bot
bff5468b52 Prepare release v1.0.0-beta.6
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-02-17 22:36:40 +00:00
Andrei Kvapil
3b267d6882 refactor(e2e): use helm install instead of kubectl apply for cozystack installation (#2060)
## Summary

- Replace pre-rendered static YAML application (`kubectl apply`) with
direct `helm upgrade --install` of the `packages/core/installer` chart
in E2E tests
- Remove CRD/operator artifact upload/download from CI workflow — the
chart with correct values is already present in the sandbox via
workspace copy and `pr.patch`
- Remove `copy-installer-manifest` Makefile target and its dependencies

## Test plan

- [ ] CI build job completes without uploading CRD/operator artifacts
- [ ] E2E `install-cozystack` step succeeds with `helm upgrade --install`
- [ ] All existing E2E app tests pass

## Summary by CodeRabbit

* **Chores**
  * PR workflows now only keep the primary disk asset; publishing/fetching of auxiliary operator and CRD artifacts removed.
  * CRD manifests are produced by concatenation and a verify-crds check was added to unit tests; file-write permissions for embedded manifests tightened.

* **New Features**
  * Operator can install CRDs at startup to ensure resources exist before reconcile.
  * E2E install now uses the chart-based installer flow.

* **Tests**
  * Added comprehensive tests for CRD-install handling and manifest writing.
2026-02-17 23:31:08 +01:00
Aleksei Sviridkin
e7ffc21743 feat(harbor): switch registry storage to S3 via COSI BucketClaim
Replace PVC-based registry storage with S3 via COSI BucketClaim/BucketAccess.
The system chart parses BucketInfo secret and creates a registry-s3 Secret
with REGISTRY_STORAGE_S3_* env vars that override Harbor's ConfigMap values.

- Add bucket-secret.yaml to system chart (BucketInfo parser)
- Remove storageType/size from registry config (S3 is now the only option)
- Use Harbor's existingSecret support for S3 credentials injection
- Add objectstorage-controller to PackageSource dependencies
- Update E2E test with COSI bucket provisioning waits and diagnostics

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 01:23:02 +03:00
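
The COSI provisioning described above can be sketched as a claim/access pair; class and secret names here are assumptions, and the resulting credentials Secret is what the system chart parses into `REGISTRY_STORAGE_S3_*` variables:

```yaml
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClaim
metadata:
  name: harbor-registry
spec:
  bucketClassName: seaweedfs          # assumption: namespace SeaweedFS bucket class
  protocols: ["S3"]
---
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketAccess
metadata:
  name: harbor-registry
spec:
  bucketClaimName: harbor-registry
  bucketAccessClassName: seaweedfs    # assumption
  credentialsSecretName: harbor-registry-bucket   # BucketInfo Secret parsed by the system chart
  protocol: S3
```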
Aleksei Sviridkin
5bafdfd453 ci: trigger workflow re-run
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:59:10 +03:00
Andrei Kvapil
3e1981bc24 Add monitoring for NATs (#1381)

## What this PR does


### Release note

```release-note
[nats] add monitoring
```

## Summary by CodeRabbit

## Release Notes

* **New Features**
  * Added Grafana dashboards for NATS JetStream and Server monitoring
  * Added support for specifying container image digests and full image names

* **Documentation**
  * Enhanced NATS Helm chart documentation with container resource configuration guidance

* **Chores**
  * Updated NATS application version and component image versions
  * Improved Kubernetes graceful shutdown and Prometheus exporter configuration

2026-02-17 22:54:55 +01:00
kklinch0
7ac989923d Add monitoring for NATs
Co-authored-by: Andrei Kvapil <kvapss@gmail.com>
Signed-off-by: kklinch0 <kklinch0@gmail.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-02-17 22:54:12 +01:00
Andrei Kvapil
affd91dd41 fix(platform): adopt tenant-root into cozystack-basics during migration (#2065)
## What this PR does

Adds migration 31 to adopt existing `tenant-root` Namespace and
HelmRelease into the `cozystack-basics` Helm release during upgrade
from v0.41.x to v1.0.

In v0.41.x these resources were applied via `kubectl apply` (from the
platform chart `apps.yaml`) with no Helm release tracking. In v1.0 they
are created by the `cozystack-basics` chart. Without Helm ownership
annotations, `helm install` of `cozystack-basics` fails because the
resources already exist.

The migration adds:
- `meta.helm.sh/release-name` and `meta.helm.sh/release-namespace` annotations for Helm adoption
- `app.kubernetes.io/managed-by: Helm` label
- `helm.sh/resource-policy: keep` annotation on the HelmRelease to prevent accidental deletion
- `sharding.fluxcd.io/key: tenants` label required by flux sharding in v1.0

This follows the same pattern as migrations 22 and 27 (CRD adoption).

Related: #2063

### Release note

```release-note
[platform] adopt tenant-root resources into cozystack-basics Helm release during migration
```
2026-02-17 22:52:46 +01:00
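
Sketched, the post-migration HelmRelease metadata that lets `helm install` adopt the resource looks like this (the namespaces shown are assumptions):

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: tenant-root
  namespace: tenant-root                          # assumption
  labels:
    app.kubernetes.io/managed-by: Helm
    sharding.fluxcd.io/key: tenants
  annotations:
    meta.helm.sh/release-name: cozystack-basics
    meta.helm.sh/release-namespace: cozy-system   # assumption
    helm.sh/resource-policy: keep
```

Helm's adoption check only requires the `meta.helm.sh/*` annotations and the `managed-by` label to match the incoming release; the `keep` policy additionally protects the HelmRelease from deletion on uninstall.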
Andrei Kvapil
26178d97be fix(platform): adopt tenant-root into cozystack-basics during migration
In v0.41.x the tenant-root Namespace and HelmRelease were applied via
kubectl apply with no Helm release tracking. In v1.0 these resources
are managed by the cozystack-basics Helm release. Without proper Helm
ownership annotations the install of cozystack-basics fails because
the resources already exist.

Add migration 31 that annotates and labels both the Namespace and
HelmRelease so Helm can adopt them, matching the pattern established
in migrations 22 and 27.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-02-17 22:51:00 +01:00
Aleksei Sviridkin
efb9bc70b3 fix(harbor): use Release.Name for default host to avoid conflicts
Multiple Harbor instances in the same namespace would get the same
default hostname when derived from namespace host. Use Release.Name
instead for unique hostnames per instance.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:50:13 +03:00
Aleksei Sviridkin
0e6ae28bb8 fix(harbor): resolve Redis nil pointer on first install
The vendored Harbor chart does an unsafe `lookup` of the Redis auth
Secret at template rendering time to extract the password. On first
install, the Secret doesn't exist yet (created by the same chart),
causing a nil pointer error. Failed installs are rolled back, deleting
the Secret, so retries also fail — creating an infinite failure loop.

Fix by generating the Redis password in the wrapper chart (same
pattern as admin password), storing it in the credentials Secret,
and injecting it via HelmRelease valuesFrom with targetPath. This
bypasses the vendored chart's lookup entirely — it uses the password
value directly instead of looking up the Secret.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:50:12 +03:00
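
The injection pattern described above maps to a Flux HelmRelease fragment roughly like the following; the Secret name and target value path are assumptions, not the chart's confirmed keys:

```yaml
spec:
  valuesFrom:
    - kind: Secret
      name: harbor-credentials            # wrapper chart's credentials Secret (illustrative)
      valuesKey: redis-password           # key inside the Secret (illustrative)
      targetPath: redis.internal.password # assumption: value path in the vendored chart
```

With `targetPath`, Flux writes the Secret value directly into that single chart value, so the vendored chart's render-time `lookup` is never exercised.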
Aleksei Sviridkin
0f2ba5aba2 fix(harbor): add diagnostic output on E2E system HelmRelease timeout
Dump HelmRelease status, pods, events, and ExternalArtifact info
when harbor-test-system fails to become ready, to diagnose the
root cause of the persistent timeout.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:50:12 +03:00
Aleksei Sviridkin
490faaf292 fix(harbor): add operator dependencies, fix persistence rendering, increase E2E timeout
Add postgres-operator and redis-operator to PackageSource dependsOn
to ensure CRDs are available before Harbor system chart deploys.

Make persistentVolumeClaim conditional to avoid empty YAML mapping
when using S3 storage without Trivy.

Increase E2E system HelmRelease timeout from 300s to 600s to account
for CNPG + Redis + Harbor bootstrap time on QEMU.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:50:12 +03:00
Aleksei Sviridkin
cea57f62c8 [harbor] Make registry storage configurable: S3 or PVC
Add registry.storageType parameter (pvc/s3) to let users choose
between PVC storage and S3 via COSI BucketClaim. Default is pvc,
which works without SeaweedFS in the tenant namespace.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:50:12 +03:00
Aleksei Sviridkin
c815725bcf [harbor] Fix E2E test: use correct HelmRelease name with prefix
ApplicationDefinition has prefix "harbor-", so CR name "harbor" produces
HelmRelease "harbor-harbor". Use name="test" and release="harbor-test"
to correctly reference all resources.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:50:12 +03:00
Aleksei Sviridkin
6c447b2fcb [harbor] Improve ingress template: quote hosts, handle cloudflare issuer
Add | quote to host values in ingress for proper YAML escaping.
Add cloudflare issuer type handling following bucket/dashboard pattern.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:50:12 +03:00
Aleksei Sviridkin
0c85639fed [harbor] Move to apps/, use S3 via BucketClaim for registry storage
Move Harbor from packages/extra/ to packages/apps/ as it is a
self-sufficient end-user application, not a singleton tenant module.
Update bundle entry from system to paas accordingly.

Replace registry PVC storage with S3 via COSI BucketClaim/BucketAccess,
provisioned from the namespace's SeaweedFS instance. S3 credentials are
injected into the HelmRelease via valuesFrom with targetPath.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:50:12 +03:00
Aleksei Sviridkin
2dd3c03279 [harbor] Use CPNG and redis-operator instead of internal databases
Replace Harbor's internal PostgreSQL with CloudNativePG operator and
internal Redis with redis-operator (RedisFailover), following established
Cozystack patterns from seaweedfs and redis apps.

Additional fixes from code review:
- Fix registry resources nesting level (registry.registry/controller)
- Persist token CA across upgrades to prevent JWT invalidation
- Update values schema and ApplicationDefinition

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:50:11 +03:00
Aleksei Sviridkin
305495d023 [harbor] Fix YAML quoting for Go template values in ApplicationDefinition
Quote resourceNames values starting with {{ to prevent YAML parser
from interpreting them as flow mappings.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:50:11 +03:00
Aleksei Sviridkin
543ce6e5fd [harbor] Add managed Harbor container registry application
Add Harbor v2.14.2 as a tenant-level managed service with per-component
resource configuration, ingress with TLS termination, and internal
PostgreSQL/Redis.

Includes:
- extra/harbor wrapper chart with HelmRelease, WorkloadMonitors, Ingress
- system/harbor with vendored upstream chart (helm.goharbor.io v1.18.2)
- harbor-rd ApplicationDefinition for dynamic CRD registration
- PackageSource and system.yaml bundle entry
- E2E test with Secret and Service verification

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:50:11 +03:00
Aleksei Sviridkin
09805ff382 fix(manifestutil): check apiVersion in CollectCRDNames for consistent GVK matching
CollectCRDNames now requires both apiVersion "apiextensions.k8s.io/v1"
and kind "CustomResourceDefinition", consistent with the validation in
crdinstall.Install.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:49:57 +03:00
Aleksei Sviridkin
92d261fc1e fix: address review findings in operator and tests
- Remove duplicate "Starting controller manager" log before install
  phases, keep only the one before mgr.Start()
- Rename misleading test "document without kind returns error" to
  "decoder rejects document without kind" to match actual behavior
- Document Helm uninstall CRD behavior in deployment template comment
- Use --health-probe-bind-address=0 consistently with metrics-bind
- Exclude all dotfiles in verify-crds diff, not just .gitattributes

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:49:57 +03:00
Aleksei Sviridkin
9eb13fdafe fix(controller): update workload test to use current label name
The workload reconciler was refactored to use the label
workloads.cozystack.io/monitor but the test still used the old
workloadmonitor.cozystack.io/name label, causing the reconciler to
delete the workload instead of keeping it.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:49:57 +03:00
Aleksei Sviridkin
cecc5861af fix(operator): validate CRD apiVersion, respect SIGTERM during install
- Check both apiVersion and kind when validating embedded CRD manifests
  to prevent applying objects with wrong API group
- Move ctrl.SetupSignalHandler() before install phases so CRD and Flux
  installs respect SIGTERM instead of blocking for up to 2 minutes
- Replace custom contains/searchString helpers with strings.Contains

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:49:56 +03:00
Aleksei Sviridkin
abd644122f fix(crdinstall): reject non-CRD objects in embedded manifests
Validate that all parsed objects are CustomResourceDefinition before
applying with force server-side apply. This prevents accidental
application of arbitrary resources if a non-CRD file is placed in
the manifests directory.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:49:56 +03:00
Aleksei Sviridkin
3fbce0dba5 refactor(operator): extract shared manifest utils from crdinstall and fluxinstall
Move duplicated YAML parsing (ReadYAMLObjects, ParseManifestFile) and
CRD readiness check (WaitForCRDsEstablished, CollectCRDNames) into a
shared internal/manifestutil package. Both crdinstall and fluxinstall
now import from manifestutil instead of maintaining identical copies.

Replace fluxinstall's time.Sleep(2s) after CRD apply with proper
WaitForCRDsEstablished polling, matching the crdinstall behavior.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:49:56 +03:00
Aleksei Sviridkin
b3fe6a8c4a fix(crdinstall): hardcode CRD GVK, add timeout test, document dual install
- Use explicit apiextensions.k8s.io/v1 CRD GVK in waitForCRDsEstablished
  instead of fragile objects[0].GroupVersionKind()
- Add TestInstall_crdNotEstablished for context timeout path
- Add --recursive to diff in verify-crds Makefile target
- Document why both crds/ and --install-crds exist in deployment template

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:49:56 +03:00
Aleksei Sviridkin
962f8e96f4 ci(makefile): add CRD sync verification between Helm crds/ and operator embed
Add verify-crds target that diffs packages/core/installer/crds/ and
internal/crdinstall/manifests/ to catch accidental divergence. Include
it in unit-tests target so it runs in CI.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:49:56 +03:00
Aleksei Sviridkin
20d122445d fix(crdinstall): add CRD readiness check, Install tests, fix fluxinstall
- Wait for CRDs to have Established condition after server-side apply,
  instead of returning immediately
- Add TestInstall with fake client and interceptor to simulate CRD
  establishment
- Add TestInstall_noManifests and TestInstall_writeManifestsFails for
  error paths
- Fix fluxinstall/manifests.embed.go: use filepath.Join for OS paths
  and restrict permissions from 0666 to 0600 (same fix as crdinstall)

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:49:56 +03:00
Aleksei Sviridkin
4187b5ed94 fix(crdinstall): use filepath for OS paths, restrict permissions, add tests
- Use filepath.Join instead of path.Join for OS file paths
- Restrict extracted manifest permissions from 0666 to 0600
- Add unit tests for readYAMLObjects, parseManifests, and
  WriteEmbeddedManifests including permission verification

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:49:56 +03:00
Aleksei Sviridkin
1558fb428a build(codegen): sync CRDs to operator embed directory
After generating CRDs to packages/core/installer/crds/, copy them to
internal/crdinstall/manifests/ so the operator binary embeds the latest
CRD definitions.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:49:56 +03:00
Aleksei Sviridkin
879b10b777 feat(installer): enable --install-crds in operator deployment
Add --install-crds=true to cozystack-operator container args so the
operator applies embedded CRD manifests on startup via server-side apply.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:49:55 +03:00
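
In the Deployment template this amounts to a container args fragment along these lines (the flags other than `--install-crds` are assumptions drawn from related commits in this series):

```yaml
containers:
  - name: cozystack-operator
    args:
      - --install-crds=true              # apply embedded CRDs via server-side apply on startup
      - --install-flux=true              # assumption: same embedded-manifest pattern
      - --health-probe-bind-address=0    # assumption: disabled, matching metrics-bind
```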
Aleksei Sviridkin
1ddbe68bc2 feat(operator): add --install-crds flag with embedded CRD manifests
Embed Package and PackageSource CRDs in the operator binary using Go
embed, following the same pattern as --install-flux. The operator applies
CRDs at startup using server-side apply, ensuring they are updated on
every operator restart/upgrade.

This addresses the CRD lifecycle concern: Helm crds/ directory handles
initial install, while the operator manages updates on subsequent
deployments.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:49:55 +03:00
Aleksei Sviridkin
d8dd5adbe0 fix(testing): remove broken test-cluster target
The test-cluster target references non-existent hack/e2e-cluster.bats
file. Remove it and its dependency from the test target.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:49:55 +03:00
Aleksei Sviridkin
55cd8fc0e1 refactor(installer): move CRDs to crds/ directory for proper Helm install ordering
Helm installs crds/ contents before processing templates, resolving the
chicken-and-egg problem where PackageSource CR validation fails because
its CRD hasn't been registered yet.

- Move definitions/ to crds/ in the installer chart
- Remove templates/crds.yaml (Helm auto-installs from crds/)
- Update codegen script to write CRDs to crds/
- Replace helm template with cat for static CRD manifest generation
- Remove pre-apply CRD workaround from e2e test

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:49:55 +03:00
Aleksei Sviridkin
58dfc97201 fix(e2e): apply CRDs before helm install to resolve dependency ordering
Helm cannot validate PackageSource CR during install because the CRD
is part of the same chart. Pre-apply CRDs via helm template + kubectl
apply --server-side before running helm upgrade --install.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:49:55 +03:00
Aleksei Sviridkin
153d2c48ae refactor(e2e): use helm install instead of kubectl apply for cozystack installation
Replace pre-rendered static YAML application with direct helm chart
installation in e2e tests. The chart directory with correct values is
already present in the sandbox after pr.patch application.

- Remove CRD/operator artifact upload/download from CI workflow
- Remove copy-installer-manifest target from testing Makefile
- Use helm upgrade --install from local chart in e2e-install-cozystack.bats

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-18 00:49:55 +03:00
Andrei Kvapil
39b95107a5 feat(platform): add cilium-kilo networking variant (#2064)
## Summary

Add a new `cilium-kilo` networking variant that combines Cilium as the
CNI with Kilo as the WireGuard mesh overlay. This replaces the
standalone kilo PackageSource with a unified variant under the
networking source.

## Changes

- Add `cilium-kilo` variant to `networking.yaml` PackageSource with
proper component ordering and dependencies
- Add `values-kilo.yaml` for Cilium to disable host firewall when used
with Kilo
- Remove standalone `kilo.yaml` PackageSource (now integrated into
networking source)
- Switch Kilo image to official
`ghcr.io/cozystack/cozystack/kilo:v0.8.2`
- Remove unused `podCIDR`/`serviceCIDR` options and `--service-cidr`
flag from Kilo chart
2026-02-17 22:18:06 +01:00
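
A plausible shape for the `values-kilo.yaml` overlay described above, using the upstream Cilium chart's key path (treat as a sketch, not the file's actual contents):

```yaml
# Disable Cilium's host firewall so Kilo's WireGuard traffic
# on the host is not dropped by host policies.
hostFirewall:
  enabled: false
```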
Andrei Kvapil
b3b7307105 fix(kilo): use official kilo image and clean up cilium-kilo config
Switch kilo image to official ghcr.io/cozystack/cozystack/kilo:v0.8.2,
remove unnecessary enable-ipip-termination from cilium-kilo values,
and update platform source digest.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-02-17 22:17:38 +01:00
Andrei Kvapil
bf1e49d34b fix(kilo): remove podCIDR passed as --service-cidr
The podCIDR was incorrectly passed as --service-cidr to prevent
masquerading on pod traffic. This is unnecessary for multi-location
mesh and was a leftover from single-cluster assumptions.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-02-17 20:16:01 +01:00
Andrei Kvapil
96ba3b9ca5 fix(kilo): remove service-cidr option from chart
The --service-cidr flag prevents masquerading for service IPs, but
service CIDRs are cluster-local and not useful for multi-location
mesh routing. Remove the serviceCIDR value and its template usage.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-02-17 20:14:34 +01:00
Andrei Kvapil
536766cffc feat(platform): add cilium-kilo networking variant
Add a new networking variant that integrates Kilo with Cilium
pre-configured. Cilium is deployed with host firewall disabled and
enable-ipip-termination enabled, which are required for correct IPIP
encapsulation through Cilium's overlay.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-02-17 19:32:19 +01:00
Andrei Kvapil
8cc8e52d15 chore(platform): remove standalone kilo PackageSource
Kilo is now integrated into the cilium-kilo networking variant instead
of being a separate package that users install manually.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-02-17 19:31:57 +01:00
Andrei Kvapil
6c431d0857 fix(codegen): add gen_client to update-codegen.sh and regenerate applyconfiguration (#2061)
## What this PR does

Fix build error in `pkg/generated/applyconfiguration/utils.go` caused by
a reference to `testing.TypeConverter` which was removed in client-go
v0.34.1.

The root cause was that `hack/update-codegen.sh` called `gen_helpers`
and
`gen_openapi` but never called `gen_client`, so the applyconfiguration
code
was never regenerated after the client-go upgrade.

Changes:
- Fix `THIS_PKG` from `k8s.io/sample-apiserver` template leftover to
correct module path
- Add `kube::codegen::gen_client` call with `--with-applyconfig` flag
- Regenerate applyconfiguration (now uses `managedfields.TypeConverter`)
- Add tests for `ForKind` and `NewTypeConverter` functions

### Release note

```release-note
[maintenance] Regenerate applyconfiguration code for client-go v0.34.1 compatibility
```


## Summary by CodeRabbit

* **Documentation**
  * Updated backup class definitions example to reference MariaDB instead of MySQL.

* **Chores**
  * Updated code generation tooling and module dependencies to support enhanced functionality.

2026-02-17 18:21:39 +01:00
Andrei Kvapil
c6090c554c fix(e2e): make kubernetes test retries effective by cleaning up stale resources (#2062)
## Summary

- Add pre-creation cleanup of backend deployment/service and NFS pod/PVC
in `run-kubernetes.sh`, so E2E test retries start fresh instead of
reusing stuck resources from a failed attempt
- Increase the tenant deployment wait timeout from 90s to 300s to handle
CI resource pressure, aligning with other timeouts in the same function
(control plane 4m, TCP 5m, NFS pod 5m)

## Context

When `kubernetes-previous` (or `kubernetes-latest`) E2E test fails at
the `kubectl wait deployment --timeout=90s` step, `set -eu` causes
immediate exit before cleanup. On retry, `kubectl apply` sees the stuck
deployment as "unchanged", making retries 2 and 3 guaranteed to fail
against the same stuck pod.

This was observed in [run
22076291555](https://github.com/cozystack/cozystack/actions/runs/22076291555)
where `kubernetes-previous` failed 3/3 attempts while passing in 5/6
other recent runs.

## Test plan

- [ ] E2E tests pass (kubernetes-latest and kubernetes-previous)
- [ ] First clean run: `--ignore-not-found` cleanup commands are no-ops,
no side effects
- [ ] Retry after failure: stale deployment is deleted before
re-creation, fresh pod is scheduled

## Summary by CodeRabbit

* **Chores**
  * Improved test deployment reliability with more robust pre/post-test resource cleanup (including storage-related resources) to prevent leftovers from prior runs.
  * Made environment configuration handling more consistent to avoid misreads of cluster credentials.
  * Increased initialization and load balancer wait timeouts to reduce flaky failures during provisioning and validation.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-02-17 18:03:10 +01:00
Aleksei Sviridkin
8b7813fdeb [platform] Add DNS-1035 validation for Application names (#1771)
## What this PR does

Add DNS-1035 validation for Application names to prevent creation of
resources with invalid names that would fail Kubernetes resource
creation.

### Changes

- Add `ValidateApplicationName()` function using standard
`IsDNS1035Label` from `k8s.io/apimachinery`
- Call validation in REST API `Create()` method (skipped on `Update` —
names are immutable)
- Add name length validation: `prefix + name` must fit within Helm
release name limit (53 chars)
- Remove rootHost-based length validation — the underlying label limit
issue is tracked in #2002
- Remove `parseRootHostFromSecret` and `cozystack-values` Secret reading
at API server startup
- Add unit tests for both DNS-1035 format and length validation

### DNS-1035 Requirements

- Start with a lowercase letter `[a-z]`
- Contain only lowercase alphanumeric or hyphens `[-a-z0-9]`
- End with an alphanumeric character `[a-z0-9]`
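A minimal shell sketch of the two checks described above (the real implementation is Go code using `IsDNS1035Label` from `k8s.io/apimachinery`; the function name and the `tenant-` prefix below are illustrative):

```shell
# Returns 0 if the name passes both the DNS-1035 format check and the
# Helm release name length check (prefix + name <= 53 chars).
is_valid_app_name() {
  name="$1" prefix="$2"
  # DNS-1035: lowercase-letter start, [-a-z0-9] body, alphanumeric end
  printf '%s' "$name" | grep -Eq '^[a-z]([-a-z0-9]*[a-z0-9])?$' || return 1
  # prefix + name must fit within the Helm release name limit
  [ $(( ${#prefix} + ${#name} )) -le 53 ]
}
```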

Fixes #1538
Closes #2001

### Release note

```release-note
[platform] Add DNS-1035 validation for Application names to prevent invalid tenant names
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **New Features**
* Enforced DNS-1035 name format and Helm-compatible length limits for
applications during creation; name constraints surfaced early and
reflected in API validation.

* **Documentation**
* Updated tenant naming rules to DNS-1035, clarified hyphen guidance,
and noted maximum length considerations.

* **Tests**
* Added comprehensive tests covering format and length validation,
including many invalid and boundary cases.

* **API**
* OpenAPI/Swagger schemas updated to include name pattern and max-length
validation.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-02-17 14:33:02 +03:00
Aleksei Sviridkin
a52da8dd8d style(e2e): consistently quote kubeconfig variable references
Quote all tenantkubeconfig-${test_name} references in run-kubernetes.sh
for consistent shell scripting style. The only exception is line 195
inside a sh -ec "..." double-quoted string where inner quotes would
break the outer quoting.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-17 02:00:44 +03:00
Aleksei Sviridkin
315e5dc0bd fix(e2e): make kubernetes test retries effective by cleaning up stale resources
When the kubernetes E2E test fails at the deployment wait step, set -eu
causes immediate exit before cleanup. On retry, kubectl apply outputs
"unchanged" for the stuck deployment, making retries 2 and 3 guaranteed
to fail against the same stuck pod.

Add pre-creation cleanup of backend deployment/service and NFS test
resources using --ignore-not-found, so retries start fresh. Also
increase the deployment wait timeout from 90s to 300s to handle CI
resource pressure, aligning with other timeouts in the same function.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-17 01:58:13 +03:00
Aleksei Sviridkin
75e25fa977 fix(codegen): add gen_client to update-codegen.sh and regenerate applyconfiguration
The applyconfiguration code referenced testing.TypeConverter from
k8s.io/client-go/testing, which was removed in client-go v0.34.1.

Root cause: hack/update-codegen.sh called gen_helpers and gen_openapi
but not gen_client, so applyconfiguration was never regenerated after
the client-go upgrade.

Changes:
- Fix THIS_PKG from sample-apiserver template leftover to correct
  module path
- Add kube::codegen::gen_client call with --with-applyconfig flag
- Regenerate applyconfiguration (now uses managedfields.TypeConverter)
- Add tests for ForKind and NewTypeConverter functions

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-16 23:01:38 +03:00
Aleksei Sviridkin
73b8946a7e chore(codegen): regenerate stale deepcopy and CRD definitions
Run make generate to bring generated files up to date with current
API types. This was pre-existing staleness unrelated to any code change.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-16 23:01:22 +03:00
Andrei Kvapil
f131eb109a Release v1.0.0-beta.5 (#2056)
This PR prepares the release `v1.0.0-beta.5`.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Chores**
* Updated all platform and system components to v1.0.0-beta.5, including
core services, controllers, dashboard, and networking utilities. Changes
include refreshed container image references across installer, platform
migrations, API services, backup systems, and object storage components
for improved stability and consistency.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-02-16 17:54:52 +01:00
Andrei Kvapil
961da56e96 docs: add changelog for v1.0.0-beta.5 (#2057)
This PR adds the changelog for release `v1.0.0-beta.5`.

The changelog has been automatically generated in
`docs/changelogs/v1.0.0-beta.5.md`.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **New Features**
  * Added generic Kubernetes deployment support
  * Added Cilium-compatible kilo variant

* **Improvements**
  * Enhanced cluster autoscaler with enforced node group minimum sizes
  * Upgraded dashboard to version 1.4.0

* **Breaking Changes**
* Updated VPC subnet configuration structure; automated migration
provided

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-02-16 17:54:36 +01:00
cozystack-bot
bfba9fb5e7 docs: add changelog for v1.0.0-beta.5
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-02-16 16:08:18 +00:00
cozystack-bot
bae70596fc Prepare release v1.0.0-beta.5
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2026-02-16 16:01:30 +00:00
Andrei Kvapil
84b2fa90dd feat(installer): add variant-aware templates for generic Kubernetes support (#2010)
## Summary

- Extend the installer chart to support generic and hosted Kubernetes
deployments via the existing `cozystackOperator.variant` parameter
(introduced in #2034)
- For `variant=generic`: render ConfigMaps (`cozystack`,
`cozystack-operator-config`) and an optional Platform Package CR —
resources previously required to be created manually before deploying on
non-Talos clusters
- Add variant validation in `packagesource.yaml` to fail fast on typos
- Publish the installer chart as an OCI Helm artifact

## Motivation

Deploying Cozystack on generic Kubernetes (k3s, kubeadm, RKE2) currently
requires manually creating ConfigMaps before applying the rendered
operator manifest. This change makes the installer chart variant-aware
so that:

1. `helm template -s` workflow continues to produce correct rendered
manifests
2. `helm install --set cozystackOperator.variant=generic` becomes a
viable single-command deployment path for generic clusters
3. Required ConfigMaps and optional Platform Package CR are generated
from values, eliminating manual steps

## OCI Helm chart publishing

The installer chart is now packaged and pushed to the OCI registry as
part of the `image` build target via `make chart`. A `.helmignore` file
ensures only chart-relevant files are included in the published
artifact.

## Test plan

- [ ] `helm template` with `variant=talos` (default) renders: operator +
PackageSource
- [ ] `helm template` with `variant=generic` renders: operator + 2
ConfigMaps + PackageSource
- [ ] `helm template` with `variant=generic` + `platform.enabled=true`
renders: + Package CR
- [ ] `helm template` with `variant=hosted` renders: operator +
PackageSource
- [ ] Invalid variant value produces a clear error message
- [ ] `make manifests` generates all asset files
- [ ] `helm package` produces a clean chart without build artifacts
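The first three test-plan items correspond roughly to these invocations (chart path and value names as given in this PR; inspecting the rendered output is left out):

```shell
# Default Talos variant: operator + PackageSource
helm template installer packages/core/installer -n cozy-system

# Generic variant: additionally renders the two ConfigMaps
helm template installer packages/core/installer -n cozy-system \
  --set cozystackOperator.variant=generic

# Generic variant with the optional Platform Package CR
helm template installer packages/core/installer -n cozy-system \
  --set cozystackOperator.variant=generic \
  --set platform.enabled=true
```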
2026-02-16 16:49:49 +01:00
Aleksei Sviridkin
7ca6e5ce9e feat(installer): add variant-aware templates for generic Kubernetes support
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-16 18:47:04 +03:00
Andrei Kvapil
e092047630 feat(kilo): add Cilium compatibility variant (#2055)
## Summary
- Add a new `cilium` variant to the kilo PackageSource
- When selected, kilo is deployed with `--compatibility=cilium` flag
- This enables Cilium-aware IPIP encapsulation where outer packets are
routed through Cilium's VxLAN overlay instead of the host network

## Changes
- `packages/core/platform/sources/kilo.yaml` — new `cilium` variant with
`values-cilium.yaml`
- `packages/system/kilo/values-cilium.yaml` — sets `kilo.compatibility:
cilium`
- `packages/system/kilo/templates/kilo.yaml` — conditional
`--compatibility` flag

## Test plan
- [x] Tested on a cluster with Cilium networking and kilo
`--compatibility=cilium`
- [x] Verified IPIP tunneling works through Cilium's VxLAN overlay
- [x] Confirmed `kubectl exec` connectivity to worker nodes across
WireGuard mesh
2026-02-16 16:39:07 +01:00
Andrei Kvapil
956d9cc2a0 Merge branch 'main' into feat/kilo-cilium-variant
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-02-16 16:38:58 +01:00
Andrei Kvapil
ef040c2ed2 feat(kilo): add Cilium compatibility variant
Add a new "cilium" variant to the kilo PackageSource that deploys kilo
with --compatibility=cilium flag. This enables Cilium-aware IPIP
encapsulation, routing outer packets through Cilium's VxLAN overlay
instead of the host network.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-02-16 15:32:02 +01:00
Andrei Kvapil
d658850578 [dashboard] Upgrade dashboard to version 1.4.0 (#2051)
<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium],
[kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres],
[virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesitate to ask for opinion and review in the community chats,
even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported
to a previous version.
-->

## What this PR does

Upgrade dashboard to version 1.4.0.

- Upgrade CRDs in CozyStack dashboard controller
- Add new CRD for CFOMapping
- Increase Ingress proxy timeouts so that WebSocket connections are not
terminated after 10 seconds of idle
- Add a patch in the Dockerfile to fix a transient 404 "Factory not found"
error when loading factories on page open

<img width="2560" height="1333" alt="dashboard 1 4 0"
src="https://github.com/user-attachments/assets/4bc19d0e-58a6-4506-b571-07cc6bf4880a"
/>

Notable changes:

- cluster selector is now in the sidebar
- namespace selector moved from navigation to table block

### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
[dashboard] Upgrade dashboard to version 1.4.0
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Added a new customization-mapping resource and integrated it into
dashboard navigation and static bootstrapping.

* **Improvements**
* Better routing between factory and detail pages; navigation sync
added.
  * Unified dashboard link/URL placeholders for consistency.
  * YAML editor now includes explicit API group/version context.
  * Increased ingress/proxy timeouts for stability.
* Expanded frontend configuration for navigation, factories, and
customization behavior.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-02-16 10:20:57 +01:00
Andrei Kvapil
b6dec6042d [vpc] Migrate subnets definition from map to array format (#2052)
## What this PR does

Migrates VPC subnets definition from map format (`map[string]Subnet`) to
array format (`[]Subnet`) with an explicit `name` field. This aligns VPC
subnet definitions with the vm-instance format.

Before:
```yaml
subnets:
  mysubnet0:
    cidr: "172.16.0.0/24"
```

After:
```yaml
subnets:
  - name: mysubnet0
    cidr: "172.16.0.0/24"
```

Subnet ID generation remains unchanged — the sha256 input
(`namespace/vpcId/subnetName`) is the same, so existing resources are
not affected.
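As a sanity check, the ID derivation can be sketched like this (the exact truncation and encoding used by the chart are not shown in this PR, so treat the helper as illustrative):

```shell
# Subnet ID = sha256 over "namespace/vpcId/subnetName"; since the hash
# input is unchanged by the map->array migration, IDs stay stable.
subnet_id() {
  printf '%s/%s/%s' "$1" "$2" "$3" | sha256sum | awk '{print $1}'
}
```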

Includes a migration script (migration 30) that automatically converts
existing VPC HelmRelease values Secrets from map to array format. The
migration is idempotent and skips subnets that are already arrays or
null.
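The core of the conversion can be expressed as a single yq (v4) expression; the actual migration script additionally unwraps the HelmRelease values Secret and guards for already-converted or null subnets:

```shell
# Map form  {mysubnet0: {cidr: ...}}  ->  array form  [{name: mysubnet0, cidr: ...}]
yq eval '.subnets |= (to_entries | map({"name": .key} + .value))' values.yaml
```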

### Release note

```release-note
[vpc] Migrate VPC subnets from map to array format with automatic data migration
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Breaking Changes**
* VPC subnet configuration format changed from object-based to
array-based structure
* **Migration**
* Automatic migration included to convert existing VPC configurations to
new format

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-02-16 10:04:28 +01:00
Andrei Kvapil
12fb9ce7dd Update kilo v0.8.0 (#2053)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


## What this PR does


### Release note


```release-note
[]
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Chores**
* Updated Kilo container version from v0.7.1 to v0.8.0 with
corresponding digest update.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-02-16 10:03:13 +01:00
Andrei Kvapil
cf505c580d Update kilo v0.8.0
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-02-15 22:50:24 +01:00
Andrei Kvapil
9031de0538 feat(platform): add migration 30 for VPC subnets map-to-array conversion
Add migration script that converts VPC HelmRelease values from map
format to array format. The script discovers all VirtualPrivateCloud
HelmReleases, reads their values Secrets, and converts subnets using
yq. Idempotent: skips if subnets are already an array or null.

Bumps migration targetVersion from 30 to 31.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-02-15 21:34:26 +01:00
Andrei Kvapil
8e8bea039f feat(vpc): migrate subnets definition from map to array format
Change VPC subnets from map[string]Subnet to []Subnet with explicit
name field, aligning with the vm-instance subnet format.

Map format:  subnets: {mysubnet: {cidr: "x"}}
Array format: subnets: [{name: mysubnet, cidr: "x"}]

Subnet ID generation (sha256 of namespace/vpcId/subnetName) remains
unchanged — subnetName now comes from .name field instead of map key.
ConfigMap output format stays the same.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-02-15 21:34:19 +01:00
Kirill Ilin
fa55b5f41f [dashboard] Upgrade dashboard to version 1.4.0
- Upgrade CRDs in CozyStack dashboard controller
- Add Ingress proxy timeouts for WebSocket to work without terminations
- Add CFOMapping custom resource

Signed-off-by: Kirill Ilin <stitch14@yandex.ru>
2026-02-15 19:49:19 +05:00
Aleksei Sviridkin
fb8157ef9b refactor(api): remove rootHost-based name length validation
Root-host validation for Tenant names is no longer needed here.
The underlying issue (namespace.cozystack.io/host label exceeding
63-char limit) will be addressed in #2002 by moving the label
to an annotation.

Name length validation now only checks the Helm release name
limit (53 - prefix length), which applies uniformly to all
application types.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-12 13:52:37 +03:00
Aleksei Sviridkin
5bf481ae4d chore: update copyright year in start_test.go
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-11 13:38:26 +03:00
Aleksei Sviridkin
d5e713a4e7 fix(api): fix import order and context-aware error messages
- Fix goimports order: duration before validation/field
- Show rootHost in error messages only for Tenant kind where it
  actually affects the length calculation

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-11 13:32:59 +03:00
Aleksei Sviridkin
e267cfcf9d fix(api): address review feedback for validation consistency
- Return field.ErrorList from validateNameLength for consistent
  apierrors.NewInvalid error shape (was NewBadRequest)
- Add klog warning when YAML parsing fails in parseRootHostFromSecret
- Fix maxHelmReleaseName comment to accurately describe Helm convention
- Add note that root-host changes require API server restart
- Replace interface{} with any throughout openapi.go and rest.go
- Remove trailing blank line in const block

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-11 13:29:42 +03:00
Aleksei Sviridkin
c932740dc5 refactor(api): remove global ObjectMeta name patching from OpenAPI
Remove patchObjectMetaNameValidation and patchObjectMetaNameValidationV2
functions that were modifying the global ObjectMeta schema. This patching
affected ALL resources served by the API server, not just Application
resources. Backend validation in Create() is sufficient for enforcing
name constraints.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-11 13:24:16 +03:00
Aleksei Sviridkin
e978e00c7e refactor(api): use standard IsDNS1035Label and remove static length limit
Replace custom DNS-1035 regex with k8s.io/apimachinery IsDNS1035Label.
Remove hardcoded maxApplicationNameLength=40 from both validation and
OpenAPI — length validation is now handled entirely by validateNameLength
which computes dynamic limits based on Helm release prefix and root-host.
Fix README to reflect that max length depends on cluster configuration.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-11 13:18:45 +03:00
Aleksei Sviridkin
9e47669f68 fix(api): remove name validation from Update path and use klog
Skip DNS-1035 and length validation on Update since Kubernetes names
are immutable — validating would block updates to pre-existing resources
with non-conforming names. Replace fmt.Printf with klog for structured
logging consistency.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-11 13:07:50 +03:00
Aleksei Sviridkin
d4556e4c53 fix(api): address review feedback for name validation
- Add DNS-1035 format validation to Update path (was only in Create)
- Simplify Secret reading by reusing existing scheme instead of
  creating a separate client
- Add nil secret test case for parseRootHostFromSecret

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-11 12:58:44 +03:00
Aleksei Sviridkin
dd34fb581e fix(api): handle edge case when prefix or root host exhaust name capacity
Add protection against negative or zero maxLen when release prefix or
root host are too long, returning a clear configuration error instead of
a confusing "name too long" message. Add corresponding test cases.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-11 12:50:30 +03:00
Aleksei Sviridkin
3685d49c4e feat(api): add dynamic name length validation based on root-host
Read root-host from cozystack-values secret at API server startup
and use it to compute maximum allowed name length for applications.

For all apps: validates prefix + name fits within the Helm release
name limit (53 chars). For Tenants: additionally checks that the
host label (name + "." + rootHost) fits within the Kubernetes label
value limit (63 chars).

This replaces the static 40-char limit with a dynamic calculation
that accounts for the actual cluster root host length.

Ref: https://github.com/cozystack/cozystack/issues/2001

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-11 12:50:30 +03:00
Aleksei Sviridkin
7c0e99e1af [platform] Add OpenAPI schema validation for Application names
Add pattern and maxLength constraints to ObjectMeta.name in OpenAPI schema.
This enables UI form validation when openapi-k8s-toolkit supports it.

- Pattern: ^[a-z]([-a-z0-9]*[a-z0-9])?$ (DNS-1035)
- MaxLength: 40

Depends on: cozystack/openapi-k8s-toolkit#1

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-11 12:49:30 +03:00
Aleksei Sviridkin
9f20771cf8 docs(tenant): update naming requirements in README
Clarify DNS-1035 naming rules:
- Must start with lowercase letter
- Allowed characters: a-z, 0-9, hyphen
- Must end with letter or number
- Maximum 40 characters

Change wording from "not allowed" to "discouraged" for dashes
since the validation technically permits them.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-11 12:49:30 +03:00
Aleksei Sviridkin
1cbf183164 fix(validation): limit name to 40 chars and add comprehensive tests
- Reduce maxApplicationNameLength from 63 to 40 characters
  to allow room for prefixes like "tenant-" and nested namespaces
- Add 27 test cases covering:
  - Valid names (simple, single letter, with numbers, double hyphen)
  - Invalid start characters (digit, hyphen)
  - Invalid end characters (hyphen)
  - Invalid characters (uppercase, underscore, dot, space, unicode)
  - Empty/whitespace inputs
  - Length boundary tests (40 valid, 41+ invalid)

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-11 12:49:30 +03:00
Aleksei Sviridkin
87e394c0c9 [platform] Add DNS-1035 validation for Application names
Add validation to ensure Application names (including Tenants) conform
to DNS-1035 format. This prevents creation of resources with names
starting with digits, which would cause Kubernetes resource creation
failures (e.g., Services, Namespaces).

DNS-1035 requires names to:
- Start with a lowercase letter [a-z]
- Contain only lowercase alphanumeric or hyphens [-a-z0-9]
- End with an alphanumeric character [a-z0-9]

Also fixes broken validation.go that referenced non-existent internal
types (apps.Application, apps.ApplicationSpec).

Fixes: https://github.com/cozystack/cozystack/issues/1538

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-02-11 12:49:30 +03:00
208 changed files with 14652 additions and 626 deletions

View File

@@ -71,18 +71,6 @@ jobs:
name: pr-patch
path: _out/assets/pr.patch
- name: Upload CRDs
uses: actions/upload-artifact@v4
with:
name: cozystack-crds
path: _out/assets/cozystack-crds.yaml
- name: Upload operator
uses: actions/upload-artifact@v4
with:
name: cozystack-operator
path: _out/assets/cozystack-operator-talos.yaml
- name: Upload Talos image
uses: actions/upload-artifact@v4
with:
@@ -94,8 +82,6 @@ jobs:
runs-on: ubuntu-latest
if: contains(github.event.pull_request.labels.*.name, 'release')
outputs:
crds_id: ${{ steps.fetch_assets.outputs.crds_id }}
operator_id: ${{ steps.fetch_assets.outputs.operator_id }}
disk_id: ${{ steps.fetch_assets.outputs.disk_id }}
steps:
@@ -139,15 +125,11 @@ jobs:
return;
}
const find = (n) => draft.assets.find(a => a.name === n)?.id;
const crdsId = find('cozystack-crds.yaml');
const operatorId = find('cozystack-operator-talos.yaml');
const diskId = find('nocloud-amd64.raw.xz');
if (!crdsId || !operatorId || !diskId) {
if (!diskId) {
core.setFailed('Required assets missing in draft release');
return;
}
core.setOutput('crds_id', crdsId);
core.setOutput('operator_id', operatorId);
core.setOutput('disk_id', diskId);
@@ -174,20 +156,6 @@ jobs:
name: talos-image
path: _out/assets
- name: "Download CRDs (regular PR)"
if: "!contains(github.event.pull_request.labels.*.name, 'release')"
uses: actions/download-artifact@v4
with:
name: cozystack-crds
path: _out/assets
- name: "Download operator (regular PR)"
if: "!contains(github.event.pull_request.labels.*.name, 'release')"
uses: actions/download-artifact@v4
with:
name: cozystack-operator
path: _out/assets
- name: Download PR patch
if: "!contains(github.event.pull_request.labels.*.name, 'release')"
uses: actions/download-artifact@v4
@@ -208,12 +176,6 @@ jobs:
curl -sSL -H "Authorization: token ${GH_PAT}" -H "Accept: application/octet-stream" \
-o _out/assets/nocloud-amd64.raw.xz \
"https://api.github.com/repos/${GITHUB_REPOSITORY}/releases/assets/${{ needs.resolve_assets.outputs.disk_id }}"
curl -sSL -H "Authorization: token ${GH_PAT}" -H "Accept: application/octet-stream" \
-o _out/assets/cozystack-crds.yaml \
"https://api.github.com/repos/${GITHUB_REPOSITORY}/releases/assets/${{ needs.resolve_assets.outputs.crds_id }}"
curl -sSL -H "Authorization: token ${GH_PAT}" -H "Accept: application/octet-stream" \
-o _out/assets/cozystack-operator-talos.yaml \
"https://api.github.com/repos/${GITHUB_REPOSITORY}/releases/assets/${{ needs.resolve_assets.outputs.operator_id }}"
env:
GH_PAT: ${{ secrets.GH_PAT }}

View File

@@ -1,4 +1,4 @@
.PHONY: manifests assets unit-tests helm-unit-tests
.PHONY: manifests assets unit-tests helm-unit-tests verify-crds
include hack/common-envs.mk
@@ -38,9 +38,7 @@ build: build-deps
manifests:
mkdir -p _out/assets
helm template installer packages/core/installer -n cozy-system \
-s templates/crds.yaml \
> _out/assets/cozystack-crds.yaml
cat packages/core/installer/crds/*.yaml > _out/assets/cozystack-crds.yaml
# Talos variant (default)
helm template installer packages/core/installer -n cozy-system \
-s templates/cozystack-operator.yaml \
@@ -48,15 +46,16 @@ manifests:
> _out/assets/cozystack-operator-talos.yaml
# Generic Kubernetes variant (k3s, kubeadm, RKE2)
helm template installer packages/core/installer -n cozy-system \
--set cozystackOperator.variant=generic \
--set cozystack.apiServerHost=REPLACE_ME \
-s templates/cozystack-operator.yaml \
-s templates/packagesource.yaml \
--set cozystackOperator.variant=generic \
> _out/assets/cozystack-operator-generic.yaml
# Hosted variant (managed Kubernetes)
helm template installer packages/core/installer -n cozy-system \
--set cozystackOperator.variant=hosted \
-s templates/cozystack-operator.yaml \
-s templates/packagesource.yaml \
--set cozystackOperator.variant=hosted \
> _out/assets/cozystack-operator-hosted.yaml
cozypkg:
@@ -81,7 +80,11 @@ test:
make -C packages/core/testing apply
make -C packages/core/testing test
unit-tests: helm-unit-tests
verify-crds:
@diff --recursive packages/core/installer/crds/ internal/crdinstall/manifests/ --exclude='.*' \
|| (echo "ERROR: CRD manifests out of sync. Run 'make generate' to fix." && exit 1)
unit-tests: helm-unit-tests verify-crds
helm-unit-tests:
hack/helm-unit-tests.sh

View File

@@ -253,3 +253,25 @@ type FactoryList struct {
metav1.ListMeta `json:"metadata,omitempty"`
Items []Factory `json:"items"`
}
// -----------------------------------------------------------------------------
// CustomFormsOverrideMapping
// -----------------------------------------------------------------------------
// +kubebuilder:object:root=true
// +kubebuilder:resource:path=cfomappings,scope=Cluster
// +kubebuilder:subresource:status
type CFOMapping struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ArbitrarySpec `json:"spec"`
Status CommonStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
type CFOMappingList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []CFOMapping `json:"items"`
}

View File

@@ -69,6 +69,9 @@ func addKnownTypes(scheme *runtime.Scheme) error {
&Factory{},
&FactoryList{},
&CFOMapping{},
&CFOMappingList{},
)
metav1.AddToGroupVersion(scheme, GroupVersion)
return nil

View File

@@ -159,6 +159,65 @@ func (in *BreadcrumbList) DeepCopyObject() runtime.Object {
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CFOMapping) DeepCopyInto(out *CFOMapping) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CFOMapping.
func (in *CFOMapping) DeepCopy() *CFOMapping {
if in == nil {
return nil
}
out := new(CFOMapping)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *CFOMapping) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CFOMappingList) DeepCopyInto(out *CFOMappingList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]CFOMapping, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CFOMappingList.
func (in *CFOMappingList) DeepCopy() *CFOMappingList {
if in == nil {
return nil
}
out := new(CFOMappingList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *CFOMappingList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CommonStatus) DeepCopyInto(out *CommonStatus) {
*out = *in

View File

@@ -50,6 +50,7 @@ import (
	"sigs.k8s.io/controller-runtime/pkg/webhook"
	"github.com/cozystack/cozystack/internal/cozyvaluesreplicator"
	"github.com/cozystack/cozystack/internal/crdinstall"
	"github.com/cozystack/cozystack/internal/fluxinstall"
	"github.com/cozystack/cozystack/internal/operator"
	"github.com/cozystack/cozystack/internal/telemetry"
@@ -77,6 +78,7 @@ func main() {
	var probeAddr string
	var secureMetrics bool
	var enableHTTP2 bool
	var installCRDs bool
	var installFlux bool
	var disableTelemetry bool
	var telemetryEndpoint string
@@ -97,6 +99,7 @@ func main() {
		"If set the metrics endpoint is served securely")
	flag.BoolVar(&enableHTTP2, "enable-http2", false,
		"If set, HTTP/2 will be enabled for the metrics and webhook servers")
	flag.BoolVar(&installCRDs, "install-crds", false, "Install Cozystack CRDs before starting reconcile loop")
	flag.BoolVar(&installFlux, "install-flux", false, "Install Flux components before starting reconcile loop")
	flag.BoolVar(&disableTelemetry, "disable-telemetry", false,
		"Disable telemetry collection")
@@ -134,8 +137,7 @@ func main() {
		os.Exit(1)
	}
	// Start the controller manager
	setupLog.Info("Starting controller manager")
	// Initialize the controller manager
	mgr, err := ctrl.NewManager(config, ctrl.Options{
		Scheme: scheme,
		Cache: cache.Options{
@@ -177,10 +179,26 @@ func main() {
		os.Exit(1)
	}
	// Set up signal handler early so install phases respect SIGTERM
	mgrCtx := ctrl.SetupSignalHandler()
	// Install Cozystack CRDs before starting reconcile loop
	if installCRDs {
		setupLog.Info("Installing Cozystack CRDs before starting reconcile loop")
		installCtx, installCancel := context.WithTimeout(mgrCtx, 2*time.Minute)
		defer installCancel()
		if err := crdinstall.Install(installCtx, directClient, crdinstall.WriteEmbeddedManifests); err != nil {
			setupLog.Error(err, "failed to install CRDs")
			os.Exit(1)
		}
		setupLog.Info("CRD installation completed successfully")
	}
	// Install Flux before starting reconcile loop
	if installFlux {
		setupLog.Info("Installing Flux components before starting reconcile loop")
		installCtx, installCancel := context.WithTimeout(context.Background(), 5*time.Minute)
		installCtx, installCancel := context.WithTimeout(mgrCtx, 5*time.Minute)
		defer installCancel()
		// Use direct client for pre-start operations (cache is not ready yet)
@@ -194,7 +212,7 @@ func main() {
	// Generate and install platform source resource if specified
	if platformSourceURL != "" {
		setupLog.Info("Generating platform source resource", "url", platformSourceURL, "name", platformSourceName, "ref", platformSourceRef)
		installCtx, installCancel := context.WithTimeout(context.Background(), 2*time.Minute)
		installCtx, installCancel := context.WithTimeout(mgrCtx, 2*time.Minute)
		defer installCancel()
		// Use direct client for pre-start operations (cache is not ready yet)
@@ -276,7 +294,6 @@ func main() {
	}
	setupLog.Info("Starting controller manager")
	mgrCtx := ctrl.SetupSignalHandler()
	if err := mgr.Start(mgrCtx); err != nil {
		setupLog.Error(err, "problem running manager")
		os.Exit(1)

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,36 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v1.0.0-beta.5
-->
> **⚠️ Beta Release Warning**: This is a pre-release version intended for testing and early adoption. Breaking changes may occur before the stable v1.0.0 release.
## Features and Improvements
* **[installer] Add variant-aware templates for generic Kubernetes support**: Extended the installer chart to support generic and hosted Kubernetes deployments via the existing `cozystackOperator.variant` parameter. When using `variant=generic`, the installer now renders separate templates for the Cozystack operator, skipping Talos-specific components. This enables users to deploy Cozystack on standard Kubernetes distributions and hosted Kubernetes services, expanding platform compatibility beyond Talos Linux ([**@lexfrei**](https://github.com/lexfrei) in #2010).
* **[kilo] Add Cilium compatibility variant**: Added a new `cilium` variant to the kilo PackageSource that deploys kilo with the `--compatibility=cilium` flag. This enables Cilium-aware IPIP encapsulation where the outer packet IP matches the inner packet source, allowing Cilium's network policies to function correctly with kilo's WireGuard mesh networking. Users can now run kilo alongside Cilium CNI while maintaining full network policy enforcement capabilities ([**@kvaps**](https://github.com/kvaps) in #2055).
* **[cluster-autoscaler] Enable enforce-node-group-min-size by default**: Enabled the `enforce-node-group-min-size` option for the system cluster-autoscaler chart. This ensures node groups are always scaled up to their configured minimum size, even when current workload demands are lower, preventing unexpected scale-down below minimum thresholds and improving cluster stability for production workloads ([**@kvaps**](https://github.com/kvaps) in #2050).
* **[dashboard] Upgrade dashboard to version 1.4.0**: Updated the Cozystack dashboard to version 1.4.0 with new features and improvements for better user experience and cluster management capabilities ([**@sircthulhu**](https://github.com/sircthulhu) in #2051).
## Breaking Changes & Upgrade Notes
* **[vpc] Migrate subnets definition from map to array format**: Migrated VPC subnets definition from map format (`map[string]Subnet`) to array format (`[]Subnet`) with an explicit `name` field. This aligns VPC subnet definitions with the vm-instance `networks` field pattern and provides more intuitive configuration. Existing VPC deployments are automatically migrated via migration 30, which converts the subnet map to an array while preserving all existing subnet configurations and network connectivity ([**@kvaps**](https://github.com/kvaps) in #2052).
## Dependencies
* **[kilo] Update to v0.8.0**: Updated Kilo WireGuard mesh networking to v0.8.0 with performance improvements, bug fixes, and new compatibility features ([**@kvaps**](https://github.com/kvaps) in #2053).
* **[talm] Skip config loading for __complete command**: Fixed CLI completion behavior by skipping config loading for the `__complete` command, preventing errors during shell completion when configuration files are not available or misconfigured ([**@kitsunoff**](https://github.com/kitsunoff) in cozystack/talm#109).
## Contributors
We'd like to thank all contributors who made this release possible:
* [**@kitsunoff**](https://github.com/kitsunoff)
* [**@kvaps**](https://github.com/kvaps)
* [**@lexfrei**](https://github.com/lexfrei)
* [**@sircthulhu**](https://github.com/sircthulhu)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v1.0.0-beta.4...v1.0.0-beta.5


@@ -0,0 +1,46 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v1.0.0-beta.6
-->
> **⚠️ Beta Release Warning**: This is a pre-release version intended for testing and early adoption. Breaking changes may occur before the stable v1.0.0 release.
## Features and Improvements
* **[platform] Add cilium-kilo networking variant**: Added a new `cilium-kilo` networking variant that combines Cilium CNI with Kilo WireGuard mesh overlay. This variant enables `enable-ipip-termination` in Cilium for proper IPIP packet handling and deploys Kilo with `--compatibility=cilium` flag. Users can now select `cilium-kilo` as their networking variant during platform setup, simplifying the multi-location WireGuard setup compared to manually combining Cilium and standalone Kilo ([**@kvaps**](https://github.com/kvaps) in #2064).
* **[nats] Add monitoring**: Added Grafana dashboards for NATS JetStream and server metrics monitoring, along with Prometheus monitoring support with TLS-aware endpoint configuration. Includes updated image customization options (digest and full image name) and component version upgrades for the NATS exporter and utilities. Users now have full observability into NATS message broker performance and health ([**@klinch0**](https://github.com/klinch0) in #1381).
* **[platform] Add DNS-1035 validation for Application names**: Added dynamic DNS-1035 label validation for Application names in the Cozystack API, using `IsDNS1035Label` from `k8s.io/apimachinery`. Validation is performed at creation time and accounts for the root host length to prevent names that would exceed Kubernetes resource naming limits. This prevents creation of resources with invalid names that would fail downstream Kubernetes resource creation ([**@lexfrei**](https://github.com/lexfrei) in #1771).
* **[operator] Add automatic CRD installation at startup**: Added `--install-crds` flag to the Cozystack operator that installs embedded CRD manifests at startup, ensuring CRDs exist before the operator begins reconciliation. CRD manifests are now embedded in the operator binary and verified for consistency with the Helm `crds/` directory via a new CI Makefile check. This eliminates ordering issues during initial cluster setup where CRDs might not yet be present ([**@lexfrei**](https://github.com/lexfrei) in #2060).
## Fixes
* **[platform] Adopt tenant-root into cozystack-basics during migration**: Added migration 31 to adopt existing `tenant-root` Namespace and HelmRelease into the `cozystack-basics` Helm release when upgrading from v0.41.x to v1.0. Previously these resources were applied via `kubectl apply` with no Helm release tracking, causing Helm to treat them as foreign resources and potentially delete them during reconciliation. This migration ensures a safe upgrade path by annotating and labeling these resources for Helm adoption ([**@kvaps**](https://github.com/kvaps) in #2065).
* **[platform] Preserve tenant-root HelmRelease during migration**: Fixed a data-loss risk during migration from v0.41.x to v1.0.0-beta where the `tenant-root` HelmRelease (and the namespace it manages) could be deleted, causing tenant service outages. Added safety annotation to the HelmRelease and lookup logic to preserve current parameters during migration, preventing unwanted deletion of tenant-root resources ([**@sircthulhu**](https://github.com/sircthulhu) in #2063).
* **[codegen] Add gen_client to update-codegen.sh and regenerate applyconfiguration**: Fixed a build error in `pkg/generated/applyconfiguration/utils.go` caused by a reference to `testing.TypeConverter` which was removed in client-go v0.34.1. The root cause was that `hack/update-codegen.sh` never called `gen_client`, leaving the generated applyconfiguration code stale. Running the full code generation now produces a consistent and compilable codebase ([**@lexfrei**](https://github.com/lexfrei) in #2061).
* **[e2e] Make kubernetes test retries effective by cleaning up stale resources**: Fixed E2E test retries for the Kubernetes tenant test by adding pre-creation cleanup of backend deployment/service and NFS pod/PVC in `run-kubernetes.sh`. Previously, retries would fail immediately because stale resources from a failed attempt blocked re-creation. Also increased the tenant deployment wait timeout from 90s to 300s to handle CI resource pressure ([**@lexfrei**](https://github.com/lexfrei) in #2062).
## Development, Testing, and CI/CD
* **[e2e] Use helm install instead of kubectl apply for cozystack installation**: Replaced the pre-rendered static YAML application flow (`kubectl apply`) with direct `helm upgrade --install` of the `packages/core/installer` chart in E2E tests. Removed the CRD/operator artifact upload/download steps from the CI workflow, simplifying the pipeline. The chart with correct values is already present in the sandbox via workspace copy and `pr.patch` ([**@lexfrei**](https://github.com/lexfrei) in #2060).
## Documentation
* **[website] Improve Azure autoscaling troubleshooting guide**: Enhanced the Azure autoscaling troubleshooting documentation with serial console instructions for debugging VMSS worker nodes, a troubleshooting section for nodes stuck in maintenance mode due to invalid or missing machine config, `az vmss update --custom-data` instructions for updating machine config, and a warning that Azure does not support reading back `customData` ([**@kvaps**](https://github.com/kvaps) in cozystack/website#424).
* **[website] Update multi-location documentation for cilium-kilo variant**: Updated multi-location networking documentation to reflect the new integrated `cilium-kilo` variant selection during platform setup, replacing the previous manual Kilo installation and Cilium configuration steps. Added explanation of `enable-ipip-termination` and updated the troubleshooting section ([**@kvaps**](https://github.com/kvaps) in cozystack/website@02d63f0).
## Contributors
We'd like to thank all contributors who made this release possible:
* [**@klinch0**](https://github.com/klinch0)
* [**@kvaps**](https://github.com/kvaps)
* [**@lexfrei**](https://github.com/lexfrei)
* [**@sircthulhu**](https://github.com/sircthulhu)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v1.0.0-beta.5...v1.0.0-beta.6


@@ -0,0 +1,356 @@
# AffinityClass: Named Placement Classes for CozyStack Applications (Draft)
## Concept
Similar to StorageClass in Kubernetes, a new resource **AffinityClass** is introduced — a named abstraction over scheduling constraints. When creating an Application, the user selects an AffinityClass by name without knowing the details of the cluster topology.
```
StorageClass → "which disk" → PV provisioning
AffinityClass → "where to place" → Pod scheduling
```
## Design
### 1. AffinityClass CRD
A cluster-scoped resource created by the platform administrator:
```yaml
apiVersion: cozystack.io/v1alpha1
kind: AffinityClass
metadata:
  name: dc1
spec:
  # nodeSelector that MUST be present on every pod of the application.
  # Used for validation by the lineage webhook.
  nodeSelector:
    topology.kubernetes.io/zone: dc1
```
```yaml
apiVersion: cozystack.io/v1alpha1
kind: AffinityClass
metadata:
  name: dc2
spec:
  nodeSelector:
    topology.kubernetes.io/zone: dc2
```
```yaml
apiVersion: cozystack.io/v1alpha1
kind: AffinityClass
metadata:
  name: gpu
spec:
  nodeSelector:
    node.kubernetes.io/gpu: "true"
```
An AffinityClass contains a `nodeSelector`: a set of key=value pairs that must be present in `pod.spec.nodeSelector` on every pod of the application. This is a contract: the chart is responsible for setting these selectors, and the webhook for verifying them.
### 2. Tenant: Restricting Available Classes
The Tenant resource gains `allowedAffinityClasses` and `defaultAffinityClass` fields:
```yaml
apiVersion: apps.cozystack.io/v1alpha1
kind: Tenant
metadata:
  name: acme
  namespace: tenant-root
spec:
  defaultAffinityClass: dc1   # default class for applications
  allowedAffinityClasses:     # which classes are allowed
    - dc1
    - dc2
  etcd: false
  ingress: true
  monitoring: false
```
These values are propagated to the `cozystack-values` Secret in the child namespace:
```yaml
# Secret cozystack-values in namespace tenant-acme
stringData:
  values.yaml: |
    _cluster:
      # ... existing cluster config
    _namespace:
      # ... existing namespace config
    defaultAffinityClass: dc1
    allowedAffinityClasses:
      - dc1
      - dc2
```
### 3. Application: Selecting a Class
Each application can specify an `affinityClass`. If not specified, the `defaultAffinityClass` from the tenant is used:
```yaml
apiVersion: apps.cozystack.io/v1alpha1
kind: Postgres
metadata:
  name: main-db
  namespace: tenant-acme
spec:
  affinityClass: dc1   # explicit selection
  replicas: 3
```
```yaml
apiVersion: apps.cozystack.io/v1alpha1
kind: Redis
metadata:
  name: cache
  namespace: tenant-acme
spec:
  # affinityClass not specified → uses tenant's defaultAffinityClass (dc1)
  replicas: 2
```
### 4. How affinityClass Reaches the HelmRelease
When creating an Application, the API server (`pkg/registry/apps/application/rest.go`):
1. Extracts `affinityClass` from `spec` (or uses the default from `cozystack-values`)
2. Records `affinityClass` as a **label on the HelmRelease**:
   ```
   apps.cozystack.io/affinity-class: dc1
   ```
3. Resolves AffinityClass to `nodeSelector` and passes it into HelmRelease values as `_scheduling`:
   ```yaml
   _scheduling:
     affinityClass: dc1
     nodeSelector:
       topology.kubernetes.io/zone: dc1
   ```
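Steps 1–3 above can be sketched as a pure function (names and signature are hypothetical; the real logic lives in `rest.go` and works against the API machinery types):

```go
package main

import "fmt"

// resolveAffinityClass picks the explicit class or the tenant default, then
// maps it to the HelmRelease label and the _scheduling values.
func resolveAffinityClass(specClass, tenantDefault string, classes map[string]map[string]string) (map[string]string, map[string]any, error) {
	name := specClass
	if name == "" {
		name = tenantDefault // fall back to the tenant's defaultAffinityClass
	}
	sel, ok := classes[name]
	if !ok {
		return nil, nil, fmt.Errorf("AffinityClass %q not found", name)
	}
	labels := map[string]string{"apps.cozystack.io/affinity-class": name}
	values := map[string]any{"_scheduling": map[string]any{
		"affinityClass": name,
		"nodeSelector":  sel,
	}}
	return labels, values, nil
}

func main() {
	classes := map[string]map[string]string{
		"dc1": {"topology.kubernetes.io/zone": "dc1"},
	}
	labels, _, _ := resolveAffinityClass("", "dc1", classes)
	fmt.Println(labels["apps.cozystack.io/affinity-class"]) // dc1
}
```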
### 5. How Charts Apply Scheduling
A helper is added to `cozy-lib`:
```yaml
{{- define "cozy-lib.scheduling.nodeSelector" -}}
{{- if .Values._scheduling }}
{{- if .Values._scheduling.nodeSelector }}
nodeSelector:
{{- .Values._scheduling.nodeSelector | toYaml | nindent 2 }}
{{- end }}
{{- end }}
{{- end -}}
```
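For example, with the `_scheduling` values shown in section 4, this helper should render approximately:

```yaml
nodeSelector:
  topology.kubernetes.io/zone: dc1
```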
Each app chart uses the helper when rendering Pod/StatefulSet/Deployment specs:
```yaml
# packages/apps/postgres/templates/db.yaml
spec:
  instances: {{ .Values.replicas }}
  {{- include "cozy-lib.scheduling.nodeSelector" . | nindent 2 }}
```
```yaml
# packages/apps/redis/templates/redis.yaml
spec:
  replicas: {{ .Values.replicas }}
  template:
    spec:
      {{- include "cozy-lib.scheduling.nodeSelector" . | nindent 6 }}
```
Charts **must** apply `_scheduling.nodeSelector`. If they don't, pods will be rejected by the webhook.
---
## Validation via Lineage Webhook
### Why Validation, Not Mutation
Mutation (injecting nodeSelector into a pod) creates problems:
- Requires merging with existing pod nodeSelector/affinity — complex logic with edge cases
- Operators (CNPG, Strimzi) may overwrite nodeSelector on pod restart
- Hidden behavior: pod is created with one spec but actually runs with another
Validation is simpler and more reliable:
- Webhook checks: "does this pod **have** the required nodeSelector?"
- If not, the pod is **rejected** with a clear error message
- The chart and operator are responsible for setting the correct spec
### What Already Exists in the Lineage Webhook
On every Pod creation, the lineage webhook (`internal/lineagecontrollerwebhook/webhook.go`):
1. Decodes the Pod
2. Walks the ownership graph (`lineage.WalkOwnershipGraph`) — finds the **owning HelmRelease**
3. Extracts labels from the HelmRelease: `apps.cozystack.io/application.kind`, `.group`, `.name`
4. Applies these labels to the Pod
**Key point:** the webhook already knows which HelmRelease owns each Pod.
### What Is Added
After computing lineage labels, a validation step is added:
```
Handle(pod):
  1. [existing] computeLabels(pod)        → finds owning HelmRelease
  2. [existing] applyLabels(pod, labels)  → mutates labels
  3. [NEW]      validateAffinity(pod, hr) → checks nodeSelector
  4. Return patch or Denied
```
The `validateAffinity` logic:
```go
func (h *LineageControllerWebhook) validateAffinity(
	ctx context.Context,
	pod *unstructured.Unstructured,
	hr *helmv2.HelmRelease,
) *admission.Response {
	// 1. Extract affinityClass from HelmRelease label
	affinityClassName, ok := hr.Labels["apps.cozystack.io/affinity-class"]
	if !ok {
		return nil // no affinityClass — no validation needed
	}

	// 2. Look up AffinityClass from cache
	affinityClass, ok := h.affinityClassMap[affinityClassName]
	if !ok {
		resp := admission.Denied(fmt.Sprintf(
			"AffinityClass %q not found", affinityClassName))
		return &resp
	}

	// 3. Check pod's nodeSelector
	podNodeSelector := extractNodeSelector(pod) // from pod.spec.nodeSelector
	for key, expected := range affinityClass.Spec.NodeSelector {
		actual, exists := podNodeSelector[key]
		if !exists || actual != expected {
			resp := admission.Denied(fmt.Sprintf(
				"pod %s/%s belongs to application with AffinityClass %q "+
					"but missing required nodeSelector %s=%s",
				pod.GetNamespace(), pod.GetName(),
				affinityClassName, key, expected))
			return &resp
		}
	}

	return nil // validation passed
}
```
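The core of step 3 is a plain map-subset test. A self-contained sketch (function name hypothetical):

```go
package main

import "fmt"

// hasRequiredNodeSelector reports whether every key=value pair required by the
// AffinityClass is present, with the same value, in the pod's nodeSelector.
func hasRequiredNodeSelector(required, podSelector map[string]string) bool {
	for key, expected := range required {
		if actual, ok := podSelector[key]; !ok || actual != expected {
			return false
		}
	}
	return true
}

func main() {
	required := map[string]string{"topology.kubernetes.io/zone": "dc1"}
	fmt.Println(hasRequiredNodeSelector(required, map[string]string{"topology.kubernetes.io/zone": "dc1"})) // true
	fmt.Println(hasRequiredNodeSelector(required, nil))                                                     // false
}
```

Note that extra keys in the pod's nodeSelector are allowed; only the required pairs are checked.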
### AffinityClass Caching
The lineage webhook controller already caches ApplicationDefinitions (`runtimeConfig.appCRDMap`). An AffinityClass cache is added in the same way:
```go
type runtimeConfig struct {
	appCRDMap        map[appRef]*cozyv1alpha1.ApplicationDefinition
	affinityClassMap map[string]*cozyv1alpha1.AffinityClass // NEW
}
```
The controller adds a watch on AffinityClass:
```go
func (c *LineageControllerWebhook) SetupWithManagerAsController(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&cozyv1alpha1.ApplicationDefinition{}).
		Watches(&cozyv1alpha1.AffinityClass{}, &handler.EnqueueRequestForObject{}).
		Complete(c)
}
```
When an AffinityClass changes, the cache is rebuilt.
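Assuming the rebuild follows the same replace-the-whole-map pattern as `appCRDMap`, it might look like this (a minimal standalone sketch; type and method names are hypothetical):

```go
package main

import (
	"fmt"
	"sync"
)

// AffinityClass is a trimmed stand-in for the CRD type.
type AffinityClass struct {
	Name         string
	NodeSelector map[string]string
}

// classCache swaps the whole map under a write lock on reconcile;
// Handle() reads through an RLock, so lookups never block each other.
type classCache struct {
	mu      sync.RWMutex
	classes map[string]*AffinityClass
}

func (c *classCache) rebuild(list []AffinityClass) {
	m := make(map[string]*AffinityClass, len(list))
	for i := range list {
		m[list[i].Name] = &list[i]
	}
	c.mu.Lock()
	c.classes = m
	c.mu.Unlock()
}

func (c *classCache) get(name string) (*AffinityClass, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	ac, ok := c.classes[name]
	return ac, ok
}

func main() {
	cache := &classCache{}
	cache.rebuild([]AffinityClass{{Name: "dc1", NodeSelector: map[string]string{"zone": "dc1"}}})
	ac, ok := cache.get("dc1")
	fmt.Println(ok, ac.NodeSelector["zone"])
}
```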
---
## End-to-End Flow
```
1. Admin creates AffinityClass "dc1" (nodeSelector: zone=dc1)
2. Admin creates Tenant "acme" (defaultAffinityClass: dc1, allowed: [dc1, dc2])
   → namespace tenant-acme
   → cozystack-values Secret with defaultAffinityClass
3. User creates Postgres "main-db" (affinityClass: dc1)
   → API server checks: dc1 ∈ allowedAffinityClasses? ✓
   → API server resolves AffinityClass → nodeSelector
   → HelmRelease is created with:
     - label: apps.cozystack.io/affinity-class=dc1
     - values: _scheduling.nodeSelector.topology.kubernetes.io/zone=dc1
4. FluxCD deploys HelmRelease → Helm renders the chart
   → Chart uses cozy-lib helper
   → CNPG Cluster is created with nodeSelector: {zone: dc1}
5. CNPG operator creates Pod
   → Pod has nodeSelector: {zone: dc1}
6. Lineage webhook intercepts the Pod:
   a. WalkOwnershipGraph → finds HelmRelease "main-db"
   b. HelmRelease label → affinityClass=dc1
   c. AffinityClass "dc1" → nodeSelector: {zone: dc1}
   d. Checks: pod.spec.nodeSelector contains zone=dc1? ✓
   e. Admits Pod (+ standard lineage labels)
7. Scheduler places the Pod on a node in dc1
```
### Error Scenario (chart forgot to apply nodeSelector):
```
5. CNPG operator creates Pod WITHOUT nodeSelector
6. Lineage webhook:
   d. Checks: pod.spec.nodeSelector contains zone=dc1? ✗
   e. REJECTS Pod:
      "pod main-db-1 belongs to application with AffinityClass dc1
       but missing required nodeSelector topology.kubernetes.io/zone=dc1"
7. Pod is not created. CNPG operator sees the error and retries.
   → Chart developer gets a signal that the chart does not support scheduling.
```
---
## Code Changes
### New Files
| File | Description |
|------------------------------------------------------|-------------------------|
| `api/v1alpha1/affinityclass_types.go` | AffinityClass CRD types |
| `config/crd/bases/cozystack.io_affinityclasses.yaml` | CRD manifest |
### Modified Files
| File | Change |
|-------------------------------------------------------|-------------------------------------------------------------------|
| `internal/lineagecontrollerwebhook/webhook.go` | Add `validateAffinity()` to `Handle()` |
| `internal/lineagecontrollerwebhook/config.go` | Add `affinityClassMap` to `runtimeConfig` |
| `internal/lineagecontrollerwebhook/controller.go` | Add watch on AffinityClass |
| `pkg/registry/apps/application/rest.go` | On Create/Update: resolve affinityClass, pass to values and label |
| `packages/apps/tenant/values.yaml` | Add `defaultAffinityClass`, `allowedAffinityClasses` |
| `packages/apps/tenant/templates/namespace.yaml` | Propagate to cozystack-values |
| `packages/system/tenant-rd/cozyrds/tenant.yaml` | Extend OpenAPI schema |
| `packages/library/cozy-lib/templates/_cozyconfig.tpl` | Add `cozy-lib.scheduling.nodeSelector` helper |
| `packages/apps/*/templates/*.yaml` | Each app chart: add helper usage |
---
## Open Questions
1. **AffinityClass outside Tenants**: Should AffinityClass work for applications outside tenant namespaces (system namespace)? Or only for tenant workloads?
2. **affinityClass validation on Application creation**: The API server should verify that the specified affinityClass exists and is included in the tenant's `allowedAffinityClasses`. Where should this be done — in the REST handler (`rest.go`) or in a separate validating webhook?
3. **Soft mode (warn vs deny)**: Is a mode needed where the webhook issues a warning instead of rejecting? This would simplify gradual adoption while not all charts support `_scheduling`.
4. **affinityClass inheritance**: If a child Tenant does not specify `defaultAffinityClass`, should it be inherited from the parent? The current `cozystack-values` architecture supports this inheritance natively.
5. **Multiple nodeSelectors**: Is OR-logic support needed (a pod may run in dc1 OR dc2)? Plain `nodeSelector` cannot express this; AffinityClass would need to be extended to `nodeAffinity`, at the cost of significantly more complex validation.
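For comparison, the OR case requires `nodeAffinity` in standard Kubernetes scheduling syntax, where an `In` expression (or multiple `nodeSelectorTerms`) provides the OR semantics:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values: ["dc1", "dc2"]
```

Validating that a pod's arbitrary `nodeAffinity` tree implies the class constraint is much harder than the current exact key=value check, which is the trade-off noted above.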

go.mod

@@ -29,7 +29,7 @@ require (
	k8s.io/kube-openapi v0.0.0-20250710124328-f3f2b991d03b
	k8s.io/utils v0.0.0-20250820121507-0af2bda4dd1d
	sigs.k8s.io/controller-runtime v0.22.4
	sigs.k8s.io/structured-merge-diff/v4 v4.7.0
	sigs.k8s.io/structured-merge-diff/v6 v6.3.0
)
require (
@@ -125,7 +125,6 @@ require (
	sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.31.2 // indirect
	sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 // indirect
	sigs.k8s.io/randfill v1.0.0 // indirect
	sigs.k8s.io/structured-merge-diff/v6 v6.3.0 // indirect
	sigs.k8s.io/yaml v1.6.0 // indirect
)

go.sum

@@ -81,7 +81,6 @@ github.com/google/cel-go v0.26.0 h1:DPGjXackMpJWH680oGY4lZhYjIameYmR+/6RBdDGmaI=
github.com/google/cel-go v0.26.0/go.mod h1:A9O8OU9rdvrK5MQyrqfIxo1a0u4g3sF8KB6PUIaryMM=
github.com/google/gnostic-models v0.7.0 h1:qwTtogB15McXDaNqTZdzPJRHvaVJlAl+HVQnLmJEJxo=
github.com/google/gnostic-models v0.7.0/go.mod h1:whL5G0m6dmc5cPxKc5bdKdEN3UjI7OUGxBlw57miDrQ=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
@@ -324,13 +323,9 @@ sigs.k8s.io/controller-runtime v0.22.4 h1:GEjV7KV3TY8e+tJ2LCTxUTanW4z/FmNB7l327U
sigs.k8s.io/controller-runtime v0.22.4/go.mod h1:+QX1XUpTXN4mLoblf4tqr5CQcyHPAki2HLXqQMY6vh8=
sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 h1:IpInykpT6ceI+QxKBbEflcR5EXP7sU1kvOlxwZh5txg=
sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730/go.mod h1:mdzfpAEoE6DHQEN0uh9ZbOCuHbLK5wOm7dK4ctXE9Tg=
sigs.k8s.io/randfill v0.0.0-20250304075658-069ef1bbf016/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU=
sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
sigs.k8s.io/structured-merge-diff/v4 v4.7.0 h1:qPeWmscJcXP0snki5IYF79Z8xrl8ETFxgMd7wez1XkI=
sigs.k8s.io/structured-merge-diff/v4 v4.7.0/go.mod h1:dDy58f92j70zLsuZVuUX5Wp9vtxXpaZnkPGWeqDfCps=
sigs.k8s.io/structured-merge-diff/v6 v6.3.0 h1:jTijUJbW353oVOd9oTlifJqOGEkUw2jB/fXCbTiQEco=
sigs.k8s.io/structured-merge-diff/v6 v6.3.0/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE=
sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=
sigs.k8s.io/yaml v1.6.0 h1:G8fkbMSAFqgEFgh4b1wmtzDnioxFCUgTZhlbj5P9QYs=
sigs.k8s.io/yaml v1.6.0/go.mod h1:796bPqUfzR/0jLAl6XjHl3Ck7MiyVv8dbTdyT3/pMf4=


@@ -83,6 +83,8 @@ modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//flux/flux-stats
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//kafka/strimzi-kafka.json
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//seaweedfs/seaweedfs.json
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//goldpinger/goldpinger.json
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//nats/nats-jetstream.json
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//nats/nats-server.json
EOT

hack/e2e-apps/harbor.bats (new file)

@@ -0,0 +1,74 @@
#!/usr/bin/env bats

@test "Create Harbor" {
  name='test'
  release="harbor-$name"

  # Clean up stale resources from previous failed runs
  kubectl -n tenant-test delete harbor.apps.cozystack.io $name 2>/dev/null || true
  kubectl -n tenant-test wait hr $release --timeout=60s --for=delete 2>/dev/null || true

  kubectl apply -f- <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Harbor
metadata:
  name: $name
  namespace: tenant-test
spec:
  host: ""
  storageClass: ""
  core:
    resources: {}
    resourcesPreset: "nano"
  registry:
    resources: {}
    resourcesPreset: "nano"
  jobservice:
    resources: {}
    resourcesPreset: "nano"
  trivy:
    enabled: false
    size: 2Gi
    resources: {}
    resourcesPreset: "nano"
  database:
    size: 2Gi
    replicas: 1
  redis:
    size: 1Gi
    replicas: 1
EOF

  sleep 5
  kubectl -n tenant-test wait hr $release --timeout=60s --for=condition=ready

  # Wait for COSI to provision bucket
  kubectl -n tenant-test wait bucketclaims.objectstorage.k8s.io $release-registry \
    --timeout=300s --for=jsonpath='{.status.bucketReady}'=true
  kubectl -n tenant-test wait bucketaccesses.objectstorage.k8s.io $release-registry \
    --timeout=60s --for=jsonpath='{.status.accessGranted}'=true

  kubectl -n tenant-test wait hr $release-system --timeout=600s --for=condition=ready || {
    echo "=== HelmRelease status ==="
    kubectl -n tenant-test get hr $release-system -o yaml 2>&1 || true
    echo "=== Pods ==="
    kubectl -n tenant-test get pods 2>&1 || true
    echo "=== Events ==="
    kubectl -n tenant-test get events --sort-by='.lastTimestamp' 2>&1 | tail -30 || true
    echo "=== ExternalArtifact ==="
    kubectl -n cozy-system get externalartifact cozystack-harbor-application-default-harbor-system -o yaml 2>&1 || true
    echo "=== BucketClaim status ==="
    kubectl -n tenant-test get bucketclaims.objectstorage.k8s.io $release-registry -o yaml 2>&1 || true
    echo "=== BucketAccess status ==="
    kubectl -n tenant-test get bucketaccesses.objectstorage.k8s.io $release-registry -o yaml 2>&1 || true
    echo "=== BucketAccess Secret ==="
    kubectl -n tenant-test get secret $release-registry-bucket -o jsonpath='{.data.BucketInfo}' 2>&1 | base64 -d 2>&1 || true
    false
  }

  kubectl -n tenant-test wait deploy $release-core --timeout=120s --for=condition=available
  kubectl -n tenant-test wait deploy $release-registry --timeout=120s --for=condition=available
  kubectl -n tenant-test wait deploy $release-portal --timeout=120s --for=condition=available
  kubectl -n tenant-test get secret $release-credentials -o jsonpath='{.data.admin-password}' | base64 --decode | grep -q '.'
  kubectl -n tenant-test get secret $release-credentials -o jsonpath='{.data.url}' | base64 --decode | grep -q 'https://'
  kubectl -n tenant-test get svc $release -o jsonpath='{.spec.ports[0].port}' | grep -q '80'

  kubectl -n tenant-test delete harbor.apps.cozystack.io $name
}


@@ -80,10 +80,10 @@ EOF
# Wait for the machine deployment to scale to 2 replicas (timeout after 1 minute)
kubectl wait machinedeployment kubernetes-${test_name}-md0 -n tenant-test --timeout=1m --for=jsonpath='{.status.replicas}'=2
# Get the admin kubeconfig and save it to a file
kubectl get secret kubernetes-${test_name}-admin-kubeconfig -ojsonpath='{.data.super-admin\.conf}' -n tenant-test | base64 -d > tenantkubeconfig-${test_name}
kubectl get secret kubernetes-${test_name}-admin-kubeconfig -ojsonpath='{.data.super-admin\.conf}' -n tenant-test | base64 -d > "tenantkubeconfig-${test_name}"
# Update the kubeconfig to use localhost for the API server
yq -i ".clusters[0].cluster.server = \"https://localhost:${port}\"" tenantkubeconfig-${test_name}
yq -i ".clusters[0].cluster.server = \"https://localhost:${port}\"" "tenantkubeconfig-${test_name}"
# Set up port forwarding to the Kubernetes API server for a 200 second timeout
@@ -98,8 +98,8 @@ EOF
done
'
# Verify the nodes are ready
kubectl --kubeconfig tenantkubeconfig-${test_name} wait node --all --timeout=2m --for=condition=Ready
kubectl --kubeconfig tenantkubeconfig-${test_name} get nodes -o wide
kubectl --kubeconfig "tenantkubeconfig-${test_name}" wait node --all --timeout=2m --for=condition=Ready
kubectl --kubeconfig "tenantkubeconfig-${test_name}" get nodes -o wide
# Verify the kubelet version matches what we expect
versions=$(kubectl --kubeconfig "tenantkubeconfig-${test_name}" \
@@ -125,15 +125,21 @@ EOF
fi
kubectl --kubeconfig tenantkubeconfig-${test_name} apply -f - <<EOF
kubectl --kubeconfig "tenantkubeconfig-${test_name}" apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-test
EOF
# Clean up backend resources from any previous failed attempt
kubectl delete deployment --kubeconfig "tenantkubeconfig-${test_name}" "${test_name}-backend" \
-n tenant-test --ignore-not-found --timeout=60s || true
kubectl delete service --kubeconfig "tenantkubeconfig-${test_name}" "${test_name}-backend" \
-n tenant-test --ignore-not-found --timeout=60s || true
# Backend 1
kubectl apply --kubeconfig tenantkubeconfig-${test_name} -f- <<EOF
kubectl apply --kubeconfig "tenantkubeconfig-${test_name}" -f- <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -165,7 +171,7 @@ spec:
EOF
# LoadBalancer Service
kubectl apply --kubeconfig tenantkubeconfig-${test_name} -f- <<EOF
kubectl apply --kubeconfig "tenantkubeconfig-${test_name}" -f- <<EOF
apiVersion: v1
kind: Service
metadata:
@@ -182,7 +188,7 @@ spec:
EOF
# Wait for pods readiness
kubectl wait deployment --kubeconfig tenantkubeconfig-${test_name} ${test_name}-backend -n tenant-test --for=condition=Available --timeout=90s
kubectl wait deployment --kubeconfig "tenantkubeconfig-${test_name}" "${test_name}-backend" -n tenant-test --for=condition=Available --timeout=300s
# Wait for LoadBalancer to be provisioned (IP or hostname)
timeout 90 sh -ec "
@@ -193,7 +199,7 @@ EOF
"
LB_ADDR=$(
kubectl get svc --kubeconfig tenantkubeconfig-${test_name} "${test_name}-backend" \
kubectl get svc --kubeconfig "tenantkubeconfig-${test_name}" "${test_name}-backend" \
-n tenant-test \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}{.status.loadBalancer.ingress[0].hostname}'
)
@@ -215,11 +221,17 @@ fi
fi
# Cleanup
kubectl delete deployment --kubeconfig tenantkubeconfig-${test_name} "${test_name}-backend" -n tenant-test
kubectl delete service --kubeconfig tenantkubeconfig-${test_name} "${test_name}-backend" -n tenant-test
kubectl delete deployment --kubeconfig "tenantkubeconfig-${test_name}" "${test_name}-backend" -n tenant-test
kubectl delete service --kubeconfig "tenantkubeconfig-${test_name}" "${test_name}-backend" -n tenant-test
# Clean up NFS test resources from any previous failed attempt
kubectl --kubeconfig "tenantkubeconfig-${test_name}" delete pod nfs-test-pod \
-n tenant-test --ignore-not-found --timeout=60s || true
kubectl --kubeconfig "tenantkubeconfig-${test_name}" delete pvc nfs-test-pvc \
-n tenant-test --ignore-not-found --timeout=60s || true
# Test RWX NFS mount in the tenant cluster (uses the KubeVirt CSI driver with RWX support)
kubectl --kubeconfig tenantkubeconfig-${test_name} apply -f - <<EOF
kubectl --kubeconfig "tenantkubeconfig-${test_name}" apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
@@ -235,10 +247,10 @@ spec:
EOF
# Wait for PVC to be bound
kubectl --kubeconfig tenantkubeconfig-${test_name} wait pvc nfs-test-pvc -n tenant-test --timeout=2m --for=jsonpath='{.status.phase}'=Bound
kubectl --kubeconfig "tenantkubeconfig-${test_name}" wait pvc nfs-test-pvc -n tenant-test --timeout=2m --for=jsonpath='{.status.phase}'=Bound
# Create Pod that writes and reads data from NFS volume
kubectl --kubeconfig tenantkubeconfig-${test_name} apply -f - <<EOF
kubectl --kubeconfig "tenantkubeconfig-${test_name}" apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
@@ -260,20 +272,20 @@ spec:
EOF
# Wait for Pod to complete successfully
kubectl --kubeconfig tenantkubeconfig-${test_name} wait pod nfs-test-pod -n tenant-test --timeout=5m --for=jsonpath='{.status.phase}'=Succeeded
kubectl --kubeconfig "tenantkubeconfig-${test_name}" wait pod nfs-test-pod -n tenant-test --timeout=5m --for=jsonpath='{.status.phase}'=Succeeded
# Verify NFS data integrity
nfs_result=$(kubectl --kubeconfig tenantkubeconfig-${test_name} logs nfs-test-pod -n tenant-test)
nfs_result=$(kubectl --kubeconfig "tenantkubeconfig-${test_name}" logs nfs-test-pod -n tenant-test)
if [ "$nfs_result" != "nfs-mount-ok" ]; then
echo "NFS mount test failed: expected 'nfs-mount-ok', got '$nfs_result'" >&2
kubectl --kubeconfig tenantkubeconfig-${test_name} delete pod nfs-test-pod -n tenant-test --wait=false 2>/dev/null || true
kubectl --kubeconfig tenantkubeconfig-${test_name} delete pvc nfs-test-pvc -n tenant-test --wait=false 2>/dev/null || true
kubectl --kubeconfig "tenantkubeconfig-${test_name}" delete pod nfs-test-pod -n tenant-test --wait=false 2>/dev/null || true
kubectl --kubeconfig "tenantkubeconfig-${test_name}" delete pvc nfs-test-pvc -n tenant-test --wait=false 2>/dev/null || true
exit 1
fi
# Cleanup NFS test resources in tenant cluster
kubectl --kubeconfig tenantkubeconfig-${test_name} delete pod nfs-test-pod -n tenant-test --wait
kubectl --kubeconfig tenantkubeconfig-${test_name} delete pvc nfs-test-pvc -n tenant-test
kubectl --kubeconfig "tenantkubeconfig-${test_name}" delete pod nfs-test-pod -n tenant-test --wait
kubectl --kubeconfig "tenantkubeconfig-${test_name}" delete pvc nfs-test-pvc -n tenant-test
# Wait for all machine deployment replicas to be ready (timeout after 10 minutes)
kubectl wait machinedeployment kubernetes-${test_name}-md0 -n tenant-test --timeout=10m --for=jsonpath='{.status.v1beta2.readyReplicas}'=2


@@ -1,25 +1,22 @@
#!/usr/bin/env bats
@test "Required installer assets exist" {
if [ ! -f _out/assets/cozystack-crds.yaml ]; then
echo "Missing: _out/assets/cozystack-crds.yaml" >&2
exit 1
fi
if [ ! -f _out/assets/cozystack-operator-talos.yaml ]; then
echo "Missing: _out/assets/cozystack-operator-talos.yaml" >&2
@test "Required installer chart exists" {
if [ ! -f packages/core/installer/Chart.yaml ]; then
echo "Missing: packages/core/installer/Chart.yaml" >&2
exit 1
fi
}
@test "Install Cozystack" {
# Create namespace
kubectl create namespace cozy-system --dry-run=client -o yaml | kubectl apply -f -
# Install cozy-installer chart (CRDs from crds/ are applied automatically)
helm upgrade installer packages/core/installer \
--install \
--namespace cozy-system \
--create-namespace \
--wait \
--timeout 2m
# Apply installer manifests (CRDs + operator)
kubectl apply -f _out/assets/cozystack-crds.yaml
kubectl apply -f _out/assets/cozystack-operator-talos.yaml
# Wait for the operator deployment to become available
# Verify the operator deployment is available
kubectl wait deployment/cozystack-operator -n cozy-system --timeout=1m --for=condition=Available
# Create platform Package with isp-full variant


@@ -24,7 +24,8 @@ API_KNOWN_VIOLATIONS_DIR="${API_KNOWN_VIOLATIONS_DIR:-"${SCRIPT_ROOT}/api/api-ru
UPDATE_API_KNOWN_VIOLATIONS="${UPDATE_API_KNOWN_VIOLATIONS:-true}"
CONTROLLER_GEN="go run sigs.k8s.io/controller-tools/cmd/controller-gen@v0.16.4"
TMPDIR=$(mktemp -d)
OPERATOR_CRDDIR=packages/core/installer/definitions
OPERATOR_CRDDIR=packages/core/installer/crds
OPERATOR_EMBEDDIR=internal/crdinstall/manifests
COZY_CONTROLLER_CRDDIR=packages/system/cozystack-controller/definitions
COZY_RD_CRDDIR=packages/system/application-definition-crd/definition
BACKUPS_CORE_CRDDIR=packages/system/backup-controller/definitions
@@ -34,7 +35,7 @@ trap 'rm -rf ${TMPDIR}' EXIT
source "${CODEGEN_PKG}/kube_codegen.sh"
THIS_PKG="k8s.io/sample-apiserver"
THIS_PKG="github.com/cozystack/cozystack"
kube::codegen::gen_helpers \
--boilerplate "${SCRIPT_ROOT}/hack/boilerplate.go.txt" \
@@ -60,12 +61,22 @@ kube::codegen::gen_openapi \
--boilerplate "${SCRIPT_ROOT}/hack/boilerplate.go.txt" \
"${SCRIPT_ROOT}/pkg/apis"
kube::codegen::gen_client \
--with-applyconfig \
--output-dir "${SCRIPT_ROOT}/pkg/generated" \
--output-pkg "${THIS_PKG}/pkg/generated" \
--boilerplate "${SCRIPT_ROOT}/hack/boilerplate.go.txt" \
"${SCRIPT_ROOT}/pkg/apis"
$CONTROLLER_GEN object:headerFile="hack/boilerplate.go.txt" paths="./api/..."
$CONTROLLER_GEN rbac:roleName=manager-role crd paths="./api/..." output:crd:artifacts:config=${TMPDIR}
mv ${TMPDIR}/cozystack.io_packages.yaml ${OPERATOR_CRDDIR}/cozystack.io_packages.yaml
mv ${TMPDIR}/cozystack.io_packagesources.yaml ${OPERATOR_CRDDIR}/cozystack.io_packagesources.yaml
cp ${OPERATOR_CRDDIR}/cozystack.io_packages.yaml ${OPERATOR_EMBEDDIR}/cozystack.io_packages.yaml
cp ${OPERATOR_CRDDIR}/cozystack.io_packagesources.yaml ${OPERATOR_EMBEDDIR}/cozystack.io_packagesources.yaml
mv ${TMPDIR}/cozystack.io_applicationdefinitions.yaml \
${COZY_RD_CRDDIR}/cozystack.io_applicationdefinitions.yaml


@@ -33,12 +33,12 @@ func (m *Manager) ensureBreadcrumb(ctx context.Context, crd *cozyv1alpha1.Applic
key := plural // e.g., "virtualmachines"
label := labelPlural
link := fmt.Sprintf("/openapi-ui/{clusterName}/{namespace}/api-table/%s/%s/%s", strings.ToLower(group), strings.ToLower(version), plural)
link := fmt.Sprintf("/openapi-ui/{cluster}/{namespace}/api-table/%s/%s/%s", strings.ToLower(group), strings.ToLower(version), plural)
// If this is a module, change the first breadcrumb item to "Tenant Modules"
if crd.Spec.Dashboard != nil && crd.Spec.Dashboard.Module {
key = "tenantmodules"
label = "Tenant Modules"
link = "/openapi-ui/{clusterName}/{namespace}/api-table/core.cozystack.io/v1alpha1/tenantmodules"
link = "/openapi-ui/{cluster}/{namespace}/api-table/core.cozystack.io/v1alpha1/tenantmodules"
}
items := []any{


@@ -84,6 +84,53 @@ func (m *Manager) ensureCustomFormsOverride(ctx context.Context, crd *cozyv1alph
return err
}
// ensureCFOMapping updates the CFOMapping resource to include a mapping for the given CRD
func (m *Manager) ensureCFOMapping(ctx context.Context, crd *cozyv1alpha1.ApplicationDefinition) error {
g, v, kind := pickGVK(crd)
plural := pickPlural(kind, crd)
resourcePath := fmt.Sprintf("/%s/%s/%s", g, v, plural)
customizationID := fmt.Sprintf("default-%s", resourcePath)
obj := &dashv1alpha1.CFOMapping{}
obj.SetName("cfomapping")
_, err := controllerutil.CreateOrUpdate(ctx, m.Client, obj, func() error {
// Parse existing mappings
mappings := make(map[string]string)
if obj.Spec.JSON.Raw != nil {
var spec map[string]any
if err := json.Unmarshal(obj.Spec.JSON.Raw, &spec); err == nil {
if m, ok := spec["mappings"].(map[string]any); ok {
for k, val := range m {
if s, ok := val.(string); ok {
mappings[k] = s
}
}
}
}
}
// Add/update the mapping for this CRD
mappings[resourcePath] = customizationID
specData := map[string]any{
"mappings": mappings,
}
b, err := json.Marshal(specData)
if err != nil {
return err
}
newSpec := dashv1alpha1.ArbitrarySpec{JSON: apiextv1.JSON{Raw: b}}
if !compareArbitrarySpecs(obj.Spec, newSpec) {
obj.Spec = newSpec
}
return nil
})
return err
}
// buildMultilineStringSchema parses the OpenAPI schema and builds a schema that applies multilineString
// to every string field inside spec that does not have an enum
func buildMultilineStringSchema(openAPISchema string) (map[string]any, error) {


@@ -47,7 +47,7 @@ func (m *Manager) ensureFactory(ctx context.Context, crd *cozyv1alpha1.Applicati
if prefix, ok := vncTabPrefix(kind); ok {
tabs = append(tabs, vncTab(prefix))
}
tabs = append(tabs, yamlTab(plural))
tabs = append(tabs, yamlTab(g, v, plural))
// Use unified factory creation
config := UnifiedResourceConfig{
@@ -160,11 +160,11 @@ func detailsTab(kind, endpoint, schemaJSON string, keysOrder [][]string) map[str
map[string]any{
"type": "EnrichedTable",
"data": map[string]any{
"id": "vpc-subnets-table",
"baseprefix": "/openapi-ui",
"clusterNamePartOfUrl": "{2}",
"customizationId": "virtualprivatecloud-subnets",
"fetchUrl": "/api/clusters/{2}/k8s/api/v1/namespaces/{3}/configmaps",
"id": "vpc-subnets-table",
"baseprefix": "/openapi-ui",
"cluster": "{2}",
"customizationId": "virtualprivatecloud-subnets",
"fetchUrl": "/api/clusters/{2}/k8s/api/v1/namespaces/{3}/configmaps",
"fieldSelector": map[string]any{
"metadata.name": "virtualprivatecloud-{6}-subnets",
},
@@ -188,12 +188,12 @@ func detailsTab(kind, endpoint, schemaJSON string, keysOrder [][]string) map[str
map[string]any{
"type": "EnrichedTable",
"data": map[string]any{
"id": "resource-quotas-table",
"baseprefix": "/openapi-ui",
"clusterNamePartOfUrl": "{2}",
"customizationId": "factory-resource-quotas",
"fetchUrl": "/api/clusters/{2}/k8s/api/v1/namespaces/{reqsJsonPath[0]['.status.namespace']}/resourcequotas",
"pathToItems": []any{`items`},
"id": "resource-quotas-table",
"baseprefix": "/openapi-ui",
"cluster": "{2}",
"customizationId": "factory-resource-quotas",
"fetchUrl": "/api/clusters/{2}/k8s/api/v1/namespaces/{reqsJsonPath[0]['.status.namespace']}/resourcequotas",
"pathToItems": []any{`items`},
},
},
}),
@@ -242,13 +242,13 @@ func detailsTab(kind, endpoint, schemaJSON string, keysOrder [][]string) map[str
map[string]any{
"type": "EnrichedTable",
"data": map[string]any{
"id": "conditions-table",
"fetchUrl": endpoint,
"clusterNamePartOfUrl": "{2}",
"customizationId": "factory-status-conditions",
"baseprefix": "/openapi-ui",
"withoutControls": true,
"pathToItems": []any{"status", "conditions"},
"id": "conditions-table",
"fetchUrl": endpoint,
"cluster": "{2}",
"customizationId": "factory-status-conditions",
"baseprefix": "/openapi-ui",
"withoutControls": true,
"pathToItems": []any{"status", "conditions"},
},
},
}),
@@ -264,12 +264,12 @@ func workloadsTab(kind string) map[string]any {
map[string]any{
"type": "EnrichedTable",
"data": map[string]any{
"id": "workloads-table",
"fetchUrl": "/api/clusters/{2}/k8s/apis/cozystack.io/v1alpha1/namespaces/{3}/workloadmonitors",
"clusterNamePartOfUrl": "{2}",
"baseprefix": "/openapi-ui",
"customizationId": "factory-details-v1alpha1.cozystack.io.workloadmonitors",
"pathToItems": []any{"items"},
"id": "workloads-table",
"fetchUrl": "/api/clusters/{2}/k8s/apis/cozystack.io/v1alpha1/namespaces/{3}/workloadmonitors",
"cluster": "{2}",
"baseprefix": "/openapi-ui",
"customizationId": "factory-details-v1alpha1.cozystack.io.workloadmonitors",
"pathToItems": []any{"items"},
"labelSelector": map[string]any{
"apps.cozystack.io/application.group": "apps.cozystack.io",
"apps.cozystack.io/application.kind": kind,
@@ -289,12 +289,12 @@ func servicesTab(kind string) map[string]any {
map[string]any{
"type": "EnrichedTable",
"data": map[string]any{
"id": "services-table",
"fetchUrl": "/api/clusters/{2}/k8s/api/v1/namespaces/{3}/services",
"clusterNamePartOfUrl": "{2}",
"baseprefix": "/openapi-ui",
"customizationId": "factory-details-v1.services",
"pathToItems": []any{"items"},
"id": "services-table",
"fetchUrl": "/api/clusters/{2}/k8s/api/v1/namespaces/{3}/services",
"cluster": "{2}",
"baseprefix": "/openapi-ui",
"customizationId": "factory-details-v1.services",
"pathToItems": []any{"items"},
"labelSelector": map[string]any{
"apps.cozystack.io/application.group": "apps.cozystack.io",
"apps.cozystack.io/application.kind": kind,
@@ -315,12 +315,12 @@ func ingressesTab(kind string) map[string]any {
map[string]any{
"type": "EnrichedTable",
"data": map[string]any{
"id": "ingresses-table",
"fetchUrl": "/api/clusters/{2}/k8s/apis/networking.k8s.io/v1/namespaces/{3}/ingresses",
"clusterNamePartOfUrl": "{2}",
"baseprefix": "/openapi-ui",
"customizationId": "factory-details-networking.k8s.io.v1.ingresses",
"pathToItems": []any{"items"},
"id": "ingresses-table",
"fetchUrl": "/api/clusters/{2}/k8s/apis/networking.k8s.io/v1/namespaces/{3}/ingresses",
"cluster": "{2}",
"baseprefix": "/openapi-ui",
"customizationId": "factory-details-networking.k8s.io.v1.ingresses",
"pathToItems": []any{"items"},
"labelSelector": map[string]any{
"apps.cozystack.io/application.group": "apps.cozystack.io",
"apps.cozystack.io/application.kind": kind,
@@ -341,12 +341,12 @@ func secretsTab(kind string) map[string]any {
map[string]any{
"type": "EnrichedTable",
"data": map[string]any{
"id": "secrets-table",
"fetchUrl": "/api/clusters/{2}/k8s/apis/core.cozystack.io/v1alpha1/namespaces/{3}/tenantsecrets",
"clusterNamePartOfUrl": "{2}",
"baseprefix": "/openapi-ui",
"customizationId": "factory-details-v1alpha1.core.cozystack.io.tenantsecrets",
"pathToItems": []any{"items"},
"id": "secrets-table",
"fetchUrl": "/api/clusters/{2}/k8s/apis/core.cozystack.io/v1alpha1/namespaces/{3}/tenantsecrets",
"cluster": "{2}",
"baseprefix": "/openapi-ui",
"customizationId": "factory-details-v1alpha1.core.cozystack.io.tenantsecrets",
"pathToItems": []any{"items"},
"labelSelector": map[string]any{
"apps.cozystack.io/application.group": "apps.cozystack.io",
"apps.cozystack.io/application.kind": kind,
@@ -358,7 +358,7 @@ func secretsTab(kind string) map[string]any {
}
}
func yamlTab(plural string) map[string]any {
func yamlTab(group, version, plural string) map[string]any {
return map[string]any{
"key": "yaml",
"label": "YAML",
@@ -369,8 +369,10 @@ func yamlTab(plural string) map[string]any {
"id": "yaml-editor",
"cluster": "{2}",
"isNameSpaced": true,
"type": "builtin",
"typeName": plural,
"type": "apis",
"apiGroup": group,
"apiVersion": version,
"plural": plural,
"prefillValuesRequestIndex": float64(0),
"readOnly": true,
"substractHeight": float64(400),
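The yaml-editor change above swaps the legacy `"type": "builtin"` / `typeName` addressing for an explicit `"type": "apis"` block with `apiGroup`, `apiVersion`, and `plural`. A minimal sketch of that migration (the function name is illustrative, not part of the codebase):

```go
package main

import "fmt"

// migrateYAMLEditorConfig converts a legacy {"type":"builtin","typeName":...}
// editor block into the new {"type":"apis","apiGroup":...,"apiVersion":...,
// "plural":...} shape, leaving all other keys untouched.
func migrateYAMLEditorConfig(cfg map[string]any, group, version string) map[string]any {
	out := make(map[string]any, len(cfg)+2)
	for k, v := range cfg {
		out[k] = v
	}
	if out["type"] == "builtin" {
		plural, _ := out["typeName"].(string)
		delete(out, "typeName")
		out["type"] = "apis"
		out["apiGroup"] = group
		out["apiVersion"] = version
		out["plural"] = plural
	}
	return out
}

func main() {
	legacy := map[string]any{"id": "yaml-editor", "type": "builtin", "typeName": "ingresses"}
	fmt.Println(migrateYAMLEditorConfig(legacy, "networking.k8s.io", "v1"))
}
```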


@@ -132,6 +132,10 @@ func (m *Manager) EnsureForAppDef(ctx context.Context, crd *cozyv1alpha1.Applica
return reconcile.Result{}, err
}
if err := m.ensureCFOMapping(ctx, crd); err != nil {
return reconcile.Result{}, err
}
if err := m.ensureSidebar(ctx, crd); err != nil {
return reconcile.Result{}, err
}
@@ -139,6 +143,10 @@ func (m *Manager) EnsureForAppDef(ctx context.Context, crd *cozyv1alpha1.Applica
if err := m.ensureFactory(ctx, crd); err != nil {
return reconcile.Result{}, err
}
if err := m.ensureNavigation(ctx, crd); err != nil {
return reconcile.Result{}, err
}
return reconcile.Result{}, nil
}


@@ -74,7 +74,7 @@ func (m *Manager) ensureMarketplacePanel(ctx context.Context, crd *cozyv1alpha1.
"type": "nonCrd",
"apiGroup": "apps.cozystack.io",
"apiVersion": "v1alpha1",
"typeName": app.Plural, // e.g., "buckets"
"plural": app.Plural, // e.g., "buckets"
"disabled": false,
"hidden": false,
"tags": tags,


@@ -0,0 +1,69 @@
package dashboard
import (
"context"
"encoding/json"
"fmt"
"strings"
dashv1alpha1 "github.com/cozystack/cozystack/api/dashboard/v1alpha1"
cozyv1alpha1 "github.com/cozystack/cozystack/api/v1alpha1"
apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)
// ensureNavigation updates the Navigation resource to include a baseFactoriesMapping entry for the given CRD
func (m *Manager) ensureNavigation(ctx context.Context, crd *cozyv1alpha1.ApplicationDefinition) error {
g, v, kind := pickGVK(crd)
plural := pickPlural(kind, crd)
lowerKind := strings.ToLower(kind)
factoryKey := fmt.Sprintf("%s-details", lowerKind)
// All CRD resources are namespaced API resources
mappingKey := fmt.Sprintf("base-factory-namespaced-api-%s-%s-%s", g, v, plural)
obj := &dashv1alpha1.Navigation{}
obj.SetName("navigation")
_, err := controllerutil.CreateOrUpdate(ctx, m.Client, obj, func() error {
// Parse existing spec
spec := make(map[string]any)
if obj.Spec.JSON.Raw != nil {
if err := json.Unmarshal(obj.Spec.JSON.Raw, &spec); err != nil {
spec = make(map[string]any)
}
}
// Get or create baseFactoriesMapping
var mappings map[string]string
if existing, ok := spec["baseFactoriesMapping"].(map[string]any); ok {
mappings = make(map[string]string, len(existing))
for k, val := range existing {
if s, ok := val.(string); ok {
mappings[k] = s
}
}
} else {
mappings = make(map[string]string)
}
// Add/update the mapping for this CRD
mappings[mappingKey] = factoryKey
spec["baseFactoriesMapping"] = mappings
b, err := json.Marshal(spec)
if err != nil {
return err
}
newSpec := dashv1alpha1.ArbitrarySpec{JSON: apiextv1.JSON{Raw: b}}
if !compareArbitrarySpecs(obj.Spec, newSpec) {
obj.Spec = newSpec
}
return nil
})
return err
}


@@ -22,8 +22,8 @@ import (
//
// Menu rules:
// - The first section is "Marketplace" with two hardcoded entries:
// - Marketplace (/openapi-ui/{clusterName}/{namespace}/factory/marketplace)
// - Tenant Info (/openapi-ui/{clusterName}/{namespace}/factory/info-details/info)
// - Marketplace (/openapi-ui/{cluster}/{namespace}/factory/marketplace)
// - Tenant Info (/openapi-ui/{cluster}/{namespace}/factory/info-details/info)
// - All other sections are built from CRDs where spec.dashboard != nil.
// - Categories are ordered strictly as:
// Marketplace, IaaS, PaaS, NaaS, <others A→Z>, Resources, Backups, Administration
@@ -91,7 +91,7 @@ func (m *Manager) ensureSidebar(ctx context.Context, crd *cozyv1alpha1.Applicati
// Weight (default 0)
weight := def.Spec.Dashboard.Weight
link := fmt.Sprintf("/openapi-ui/{clusterName}/{namespace}/api-table/%s/%s/%s", g, v, plural)
link := fmt.Sprintf("/openapi-ui/{cluster}/{namespace}/api-table/%s/%s/%s", g, v, plural)
categories[cat] = append(categories[cat], item{
Key: plural,
@@ -146,7 +146,7 @@ func (m *Manager) ensureSidebar(ctx context.Context, crd *cozyv1alpha1.Applicati
map[string]any{
"key": "marketplace",
"label": "Marketplace",
"link": "/openapi-ui/{clusterName}/{namespace}/factory/marketplace",
"link": "/openapi-ui/{cluster}/{namespace}/factory/marketplace",
},
},
},
@@ -205,12 +205,12 @@ func (m *Manager) ensureSidebar(ctx context.Context, crd *cozyv1alpha1.Applicati
map[string]any{
"key": "info",
"label": "Info",
"link": "/openapi-ui/{clusterName}/{namespace}/factory/info-details/info",
"link": "/openapi-ui/{cluster}/{namespace}/factory/info-details/info",
},
map[string]any{
"key": "modules",
"label": "Modules",
"link": "/openapi-ui/{clusterName}/{namespace}/api-table/core.cozystack.io/v1alpha1/tenantmodules",
"link": "/openapi-ui/{cluster}/{namespace}/api-table/core.cozystack.io/v1alpha1/tenantmodules",
},
map[string]any{
"key": "loadbalancer-services",
@@ -220,7 +220,7 @@ func (m *Manager) ensureSidebar(ctx context.Context, crd *cozyv1alpha1.Applicati
map[string]any{
"key": "tenants",
"label": "Tenants",
"link": "/openapi-ui/{clusterName}/{namespace}/api-table/apps.cozystack.io/v1alpha1/tenants",
"link": "/openapi-ui/{cluster}/{namespace}/api-table/apps.cozystack.io/v1alpha1/tenants",
},
},
})


@@ -1134,7 +1134,7 @@ func yamlEditor(id, cluster string, isNameSpaced bool, typeName string, prefillV
"cluster": cluster,
"isNameSpaced": isNameSpaced,
"type": "builtin",
"typeName": typeName,
"plural": typeName,
"prefillValuesRequestIndex": prefillValuesRequestIndex,
"substractHeight": float64(400),
},


@@ -49,6 +49,8 @@ func (m *Manager) ensureStaticResource(ctx context.Context, obj client.Object) e
resource.(*dashv1alpha1.Navigation).Spec = o.Spec
case *dashv1alpha1.TableUriMapping:
resource.(*dashv1alpha1.TableUriMapping).Spec = o.Spec
case *dashv1alpha1.CFOMapping:
resource.(*dashv1alpha1.CFOMapping).Spec = o.Spec
}
// Ensure labels are always set
m.addDashboardLabels(resource, nil, ResourceTypeStatic)


@@ -17,111 +17,111 @@ func CreateAllBreadcrumbs() []*dashboardv1alpha1.Breadcrumb {
return []*dashboardv1alpha1.Breadcrumb{
// Stock project factory configmap details
createBreadcrumb("stock-project-factory-configmap-details", []map[string]any{
createBreadcrumbItem("configmaps", "v1/configmaps", "/openapi-ui/{clusterName}/{namespace}/builtin-table/configmaps"),
createBreadcrumbItem("configmaps", "v1/configmaps", "/openapi-ui/{cluster}/{namespace}/builtin-table/configmaps"),
createBreadcrumbItem("configmap", "{6}"),
}),
// Stock cluster factory namespace details
createBreadcrumb("stock-cluster-factory-namespace-details", []map[string]any{
createBreadcrumbItem("namespaces", "v1/namespaces", "/openapi-ui/{clusterName}/builtin-table/namespaces"),
createBreadcrumbItem("namespaces", "v1/namespaces", "/openapi-ui/{cluster}/builtin-table/namespaces"),
createBreadcrumbItem("namespace", "{5}"),
}),
// Stock cluster factory node details
createBreadcrumb("stock-cluster-factory-node-details", []map[string]any{
createBreadcrumbItem("node", "v1/nodes", "/openapi-ui/{clusterName}/builtin-table/nodes"),
createBreadcrumbItem("node", "v1/nodes", "/openapi-ui/{cluster}/builtin-table/nodes"),
createBreadcrumbItem("node", "{5}"),
}),
// Stock project factory pod details
createBreadcrumb("stock-project-factory-pod-details", []map[string]any{
createBreadcrumbItem("pods", "v1/pods", "/openapi-ui/{clusterName}/{namespace}/builtin-table/pods"),
createBreadcrumbItem("pods", "v1/pods", "/openapi-ui/{cluster}/{namespace}/builtin-table/pods"),
createBreadcrumbItem("pod", "{6}"),
}),
// Stock project factory secret details
createBreadcrumb("stock-project-factory-kube-secret-details", []map[string]any{
createBreadcrumbItem("secrets", "v1/secrets", "/openapi-ui/{clusterName}/{namespace}/builtin-table/secrets"),
createBreadcrumbItem("secrets", "v1/secrets", "/openapi-ui/{cluster}/{namespace}/builtin-table/secrets"),
createBreadcrumbItem("secret", "{6}"),
}),
// Stock project factory service details
createBreadcrumb("stock-project-factory-kube-service-details", []map[string]any{
createBreadcrumbItem("services", "v1/services", "/openapi-ui/{clusterName}/{namespace}/builtin-table/services"),
createBreadcrumbItem("services", "v1/services", "/openapi-ui/{cluster}/{namespace}/builtin-table/services"),
createBreadcrumbItem("service", "{6}"),
}),
// Stock project factory ingress details
createBreadcrumb("stock-project-factory-kube-ingress-details", []map[string]any{
createBreadcrumbItem("ingresses", "networking.k8s.io/v1/ingresses", "/openapi-ui/{clusterName}/{namespace}/builtin-table/ingresses"),
createBreadcrumbItem("ingresses", "networking.k8s.io/v1/ingresses", "/openapi-ui/{cluster}/{namespace}/builtin-table/ingresses"),
createBreadcrumbItem("ingress", "{6}"),
}),
// Stock cluster api table
createBreadcrumb("stock-cluster-api-table", []map[string]any{
createBreadcrumbItem("api", "{apiGroup}/{apiVersion}/{typeName}"),
createBreadcrumbItem("api", "{apiGroup}/{apiVersion}/{plural}"),
}),
// Stock cluster api form
createBreadcrumb("stock-cluster-api-form", []map[string]any{
createBreadcrumbItem("create-api-res-namespaced-table", "{apiGroup}/{apiVersion}/{typeName}", "/openapi-ui/{clusterName}/api-table/{apiGroup}/{apiVersion}/{typeName}"),
createBreadcrumbItem("create-api-res-namespaced-table", "{apiGroup}/{apiVersion}/{plural}", "/openapi-ui/{cluster}/api-table/{apiGroup}/{apiVersion}/{plural}"),
createBreadcrumbItem("create-api-res-namespaced-typename", "Create"),
}),
// Stock cluster api form edit
createBreadcrumb("stock-cluster-api-form-edit", []map[string]any{
createBreadcrumbItem("create-api-res-namespaced-table", "{apiGroup}/{apiVersion}/{typeName}", "/openapi-ui/{clusterName}/api-table/{apiGroup}/{apiVersion}/{typeName}"),
createBreadcrumbItem("create-api-res-namespaced-table", "{apiGroup}/{apiVersion}/{plural}", "/openapi-ui/{cluster}/api-table/{apiGroup}/{apiVersion}/{plural}"),
createBreadcrumbItem("create-api-res-namespaced-typename", "Update"),
}),
// Stock cluster builtin table
createBreadcrumb("stock-cluster-builtin-table", []map[string]any{
createBreadcrumbItem("api", "v1/{typeName}"),
createBreadcrumbItem("api", "v1/{plural}"),
}),
// Stock cluster builtin form
createBreadcrumb("stock-cluster-builtin-form", []map[string]any{
createBreadcrumbItem("create-api-res-namespaced-table", "v1/{typeName}", "/openapi-ui/{clusterName}/builtin-table/{typeName}"),
createBreadcrumbItem("create-api-res-namespaced-table", "v1/{plural}", "/openapi-ui/{cluster}/builtin-table/{plural}"),
createBreadcrumbItem("create-api-res-namespaced-typename", "Create"),
}),
// Stock cluster builtin form edit
createBreadcrumb("stock-cluster-builtin-form-edit", []map[string]any{
createBreadcrumbItem("create-api-res-namespaced-table", "v1/{typeName}", "/openapi-ui/{clusterName}/builtin-table/{typeName}"),
createBreadcrumbItem("create-api-res-namespaced-table", "v1/{plural}", "/openapi-ui/{cluster}/builtin-table/{plural}"),
createBreadcrumbItem("create-api-res-namespaced-typename", "Update"),
}),
// Stock project api table
createBreadcrumb("stock-project-api-table", []map[string]any{
createBreadcrumbItem("api", "{apiGroup}/{apiVersion}/{typeName}"),
createBreadcrumbItem("api", "{apiGroup}/{apiVersion}/{plural}"),
}),
// Stock project api form
createBreadcrumb("stock-project-api-form", []map[string]any{
createBreadcrumbItem("create-api-res-namespaced-table", "{apiGroup}/{apiVersion}/{typeName}", "/openapi-ui/{clusterName}/{namespace}/api-table/{apiGroup}/{apiVersion}/{typeName}"),
createBreadcrumbItem("create-api-res-namespaced-table", "{apiGroup}/{apiVersion}/{plural}", "/openapi-ui/{cluster}/{namespace}/api-table/{apiGroup}/{apiVersion}/{plural}"),
createBreadcrumbItem("create-api-res-namespaced-typename", "Create"),
}),
// Stock project api form edit
createBreadcrumb("stock-project-api-form-edit", []map[string]any{
createBreadcrumbItem("create-api-res-namespaced-table", "{apiGroup}/{apiVersion}/{typeName}", "/openapi-ui/{clusterName}/{namespace}/api-table/{apiGroup}/{apiVersion}/{typeName}"),
createBreadcrumbItem("create-api-res-namespaced-table", "{apiGroup}/{apiVersion}/{plural}", "/openapi-ui/{cluster}/{namespace}/api-table/{apiGroup}/{apiVersion}/{plural}"),
createBreadcrumbItem("create-api-res-namespaced-typename", "Update"),
}),
// Stock project builtin table
createBreadcrumb("stock-project-builtin-table", []map[string]any{
createBreadcrumbItem("api", "v1/{typeName}"),
createBreadcrumbItem("api", "v1/{plural}"),
}),
// Stock project builtin form
createBreadcrumb("stock-project-builtin-form", []map[string]any{
createBreadcrumbItem("create-api-res-namespaced-table", "v1/{typeName}", "/openapi-ui/{clusterName}/{namespace}/builtin-table/{typeName}"),
createBreadcrumbItem("create-api-res-namespaced-table", "v1/{plural}", "/openapi-ui/{cluster}/{namespace}/builtin-table/{plural}"),
createBreadcrumbItem("create-api-res-namespaced-typename", "Create"),
}),
// Stock project builtin form edit
createBreadcrumb("stock-project-builtin-form-edit", []map[string]any{
createBreadcrumbItem("create-api-res-namespaced-table", "v1/{typeName}", "/openapi-ui/{clusterName}/{namespace}/builtin-table/{typeName}"),
createBreadcrumbItem("create-api-res-namespaced-table", "v1/{plural}", "/openapi-ui/{cluster}/{namespace}/builtin-table/{plural}"),
createBreadcrumbItem("create-api-res-namespaced-typename", "Update"),
}),
}
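The breadcrumb churn above is a mechanical rename of two URL-template placeholders: `{clusterName}` becomes `{cluster}` and `{typeName}` becomes `{plural}`. Under that assumption, the whole sweep can be expressed as a single string replacer (the variable name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// legacyToCurrent captures the placeholder renames in this change set:
// {clusterName} -> {cluster} and {typeName} -> {plural}.
var legacyToCurrent = strings.NewReplacer(
	"{clusterName}", "{cluster}",
	"{typeName}", "{plural}",
)

func main() {
	old := "/openapi-ui/{clusterName}/{namespace}/builtin-table/{typeName}"
	fmt.Println(legacyToCurrent.Replace(old))
	// -> /openapi-ui/{cluster}/{namespace}/builtin-table/{plural}
}
```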
@@ -535,14 +535,14 @@ func CreateAllFactories() []*dashboardv1alpha1.Factory {
}, []any{
map[string]any{
"data": map[string]any{
"baseApiVersion": "v1alpha1",
"baseprefix": "openapi-ui",
"clusterNamePartOfUrl": "{2}",
"id": 311,
"mpResourceKind": "MarketplacePanel",
"mpResourceName": "marketplacepanels",
"namespacePartOfUrl": "{3}",
"baseApiGroup": "dashboard.cozystack.io",
"baseApiVersion": "v1alpha1",
"baseprefix": "openapi-ui",
"cluster": "{2}",
"id": 311,
"marketplaceKind": "MarketplacePanel",
"marketplacePlural": "marketplacepanels",
"namespace": "{3}",
"baseApiGroup": "dashboard.cozystack.io",
},
"type": "MarketplaceCard",
},
@@ -855,7 +855,7 @@ func CreateAllFactories() []*dashboardv1alpha1.Factory {
"prefillValuesRequestIndex": 0,
"substractHeight": float64(400),
"type": "builtin",
"typeName": "secrets",
"plural": "secrets",
"readOnly": true,
},
},
@@ -1085,13 +1085,13 @@ func CreateAllFactories() []*dashboardv1alpha1.Factory {
map[string]any{
"type": "EnrichedTable",
"data": map[string]any{
"id": "service-port-mapping-table",
"baseprefix": "/openapi-ui",
"clusterNamePartOfUrl": "{2}",
"customizationId": "factory-kube-service-details-port-mapping",
"fetchUrl": "/api/clusters/{2}/k8s/api/v1/namespaces/{3}/services/{6}",
"pathToItems": ".spec.ports",
"withoutControls": true,
"id": "service-port-mapping-table",
"baseprefix": "/openapi-ui",
"cluster": "{2}",
"customizationId": "factory-kube-service-details-port-mapping",
"fetchUrl": "/api/clusters/{2}/k8s/api/v1/namespaces/{3}/services/{6}",
"pathToItems": ".spec.ports",
"withoutControls": true,
},
},
}),
@@ -1111,11 +1111,11 @@ func CreateAllFactories() []*dashboardv1alpha1.Factory {
map[string]any{
"type": "EnrichedTable",
"data": map[string]any{
"id": "service-pod-serving-table",
"baseprefix": "/openapi-ui",
"clusterNamePartOfUrl": "{2}",
"customizationId": "factory-kube-service-details-endpointslice",
"fetchUrl": "/api/clusters/{2}/k8s/apis/discovery.k8s.io/v1/namespaces/{3}/endpointslices",
"id": "service-pod-serving-table",
"baseprefix": "/openapi-ui",
"cluster": "{2}",
"customizationId": "factory-kube-service-details-endpointslice",
"fetchUrl": "/api/clusters/{2}/k8s/apis/discovery.k8s.io/v1/namespaces/{3}/endpointslices",
"labelSelector": map[string]any{
"kubernetes.io/service-name": "{reqsJsonPath[0]['.metadata.name']['-']}",
},
@@ -1145,7 +1145,7 @@ func CreateAllFactories() []*dashboardv1alpha1.Factory {
"prefillValuesRequestIndex": 0,
"substractHeight": float64(400),
"type": "builtin",
"typeName": "services",
"plural": "services",
},
},
},
@@ -1168,11 +1168,11 @@ func CreateAllFactories() []*dashboardv1alpha1.Factory {
map[string]any{
"type": "EnrichedTable",
"data": map[string]any{
"id": "pods-table",
"baseprefix": "/openapi-ui",
"clusterNamePartOfUrl": "{2}",
"customizationId": "factory-node-details-/v1/pods",
"fetchUrl": "/api/clusters/{2}/k8s/api/v1/namespaces/{3}/pods",
"id": "pods-table",
"baseprefix": "/openapi-ui",
"cluster": "{2}",
"customizationId": "factory-node-details-/v1/pods",
"fetchUrl": "/api/clusters/{2}/k8s/api/v1/namespaces/{3}/pods",
"labelSelectorFull": map[string]any{
"pathToLabels": ".spec.selector",
"reqIndex": 0,
@@ -1300,13 +1300,13 @@ func CreateAllFactories() []*dashboardv1alpha1.Factory {
map[string]any{
"type": "EnrichedTable",
"data": map[string]any{
"id": "rules-table",
"fetchUrl": "/api/clusters/{2}/k8s/apis/networking.k8s.io/v1/namespaces/{3}/ingresses/{6}",
"clusterNamePartOfUrl": "{2}",
"customizationId": "factory-kube-ingress-details-rules",
"baseprefix": "/openapi-ui",
"withoutControls": true,
"pathToItems": []any{"spec", "rules"},
"id": "rules-table",
"fetchUrl": "/api/clusters/{2}/k8s/apis/networking.k8s.io/v1/namespaces/{3}/ingresses/{6}",
"cluster": "{2}",
"customizationId": "factory-kube-ingress-details-rules",
"baseprefix": "/openapi-ui",
"withoutControls": true,
"pathToItems": []any{"spec", "rules"},
},
},
}),
@@ -1322,8 +1322,10 @@ func CreateAllFactories() []*dashboardv1alpha1.Factory {
"id": "yaml-editor",
"cluster": "{2}",
"isNameSpaced": true,
"type": "builtin",
"typeName": "ingresses",
"type": "apis",
"apiGroup": "networking.k8s.io",
"apiVersion": "v1",
"plural": "ingresses",
"prefillValuesRequestIndex": float64(0),
"substractHeight": float64(400),
},
@@ -1452,11 +1454,11 @@ func CreateAllFactories() []*dashboardv1alpha1.Factory {
map[string]any{
"type": "EnrichedTable",
"data": map[string]any{
"id": "workloads-table",
"baseprefix": "/openapi-ui",
"clusterNamePartOfUrl": "{2}",
"customizationId": "factory-details-v1alpha1.cozystack.io.workloads",
"fetchUrl": "/api/clusters/{2}/k8s/apis/cozystack.io/v1alpha1/namespaces/{3}/workloads",
"id": "workloads-table",
"baseprefix": "/openapi-ui",
"cluster": "{2}",
"customizationId": "factory-details-v1alpha1.cozystack.io.workloads",
"fetchUrl": "/api/clusters/{2}/k8s/apis/cozystack.io/v1alpha1/namespaces/{3}/workloads",
"labelSelector": map[string]any{
"workloads.cozystack.io/monitor": "{reqs[0]['metadata','name']}",
},
@@ -1477,8 +1479,10 @@ func CreateAllFactories() []*dashboardv1alpha1.Factory {
"isNameSpaced": true,
"prefillValuesRequestIndex": 0,
"substractHeight": float64(400),
"type": "builtin",
"typeName": "workloadmonitors",
"type": "apis",
"apiGroup": "cozystack.io",
"apiVersion": "v1alpha1",
"plural": "workloadmonitors",
},
},
},
@@ -1960,12 +1964,27 @@ func CreateAllFactories() []*dashboardv1alpha1.Factory {
// CreateAllNavigations creates all navigation resources using helper functions
func CreateAllNavigations() []*dashboardv1alpha1.Navigation {
// Build baseFactoriesMapping for static (built-in) factories
baseFactoriesMapping := map[string]string{
// Cluster-scoped builtin resources
"base-factory-clusterscoped-builtin-v1-namespaces": "namespace-details",
"base-factory-clusterscoped-builtin-v1-nodes": "node-details",
// Namespaced builtin resources
"base-factory-namespaced-builtin-v1-pods": "pod-details",
"base-factory-namespaced-builtin-v1-secrets": "kube-secret-details",
"base-factory-namespaced-builtin-v1-services": "kube-service-details",
// Namespaced API resources
"base-factory-namespaced-api-networking.k8s.io-v1-ingresses": "kube-ingress-details",
"base-factory-namespaced-api-cozystack.io-v1alpha1-workloadmonitors": "workloadmonitor-details",
}
return []*dashboardv1alpha1.Navigation{
createNavigation("navigation", map[string]any{
"namespaces": map[string]any{
"change": "/openapi-ui/{selectedCluster}/{value}/factory/marketplace",
"clear": "/openapi-ui/{selectedCluster}/api-table/core.cozystack.io/v1alpha1/tenantnamespaces",
},
"baseFactoriesMapping": baseFactoriesMapping,
}),
}
}
@@ -2342,6 +2361,51 @@ func createWorkloadmonitorHeader() map[string]any {
}
}
// CreateStaticCFOMapping creates the CFOMapping resource with mappings from static CustomFormsOverrides
func CreateStaticCFOMapping() *dashboardv1alpha1.CFOMapping {
// Build mappings from static CustomFormsOverrides
customFormsOverrides := CreateAllCustomFormsOverrides()
mappings := make(map[string]string, len(customFormsOverrides))
for _, cfo := range customFormsOverrides {
var spec map[string]any
if err := json.Unmarshal(cfo.Spec.JSON.Raw, &spec); err != nil {
continue
}
customizationID, ok := spec["customizationId"].(string)
if !ok {
continue
}
// Extract the resource path from customizationId (remove "default-" prefix)
resourcePath := strings.TrimPrefix(customizationID, "default-")
mappings[resourcePath] = customizationID
}
return createCFOMapping("cfomapping", mappings)
}
// createCFOMapping creates a CFOMapping resource
func createCFOMapping(name string, mappings map[string]string) *dashboardv1alpha1.CFOMapping {
spec := map[string]any{
"mappings": mappings,
}
jsonData, _ := json.Marshal(spec)
return &dashboardv1alpha1.CFOMapping{
TypeMeta: metav1.TypeMeta{
APIVersion: "dashboard.cozystack.io/v1alpha1",
Kind: "CFOMapping",
},
ObjectMeta: metav1.ObjectMeta{
Name: name,
},
Spec: dashboardv1alpha1.ArbitrarySpec{
JSON: v1.JSON{
Raw: jsonData,
},
},
}
}
// ---------------- Complete resource creation function ----------------
// CreateAllStaticResources creates all static dashboard resources using helper functions
@@ -2378,5 +2442,8 @@ func CreateAllStaticResources() []client.Object {
resources = append(resources, tableUriMapping)
}
// Add CFOMapping
resources = append(resources, CreateStaticCFOMapping())
return resources
}


@@ -43,7 +43,7 @@ func TestWorkloadReconciler_DeletesOnMissingMonitor(t *testing.T) {
Name: "pod-foo",
Namespace: "default",
Labels: map[string]string{
"workloadmonitor.cozystack.io/name": "missing-monitor",
"workloads.cozystack.io/monitor": "missing-monitor",
},
},
}
@@ -89,7 +89,7 @@ func TestWorkloadReconciler_KeepsWhenAllExist(t *testing.T) {
Name: "pod-foo",
Namespace: "default",
Labels: map[string]string{
"workloadmonitor.cozystack.io/name": "mon",
"workloads.cozystack.io/monitor": "mon",
},
},
}


@@ -0,0 +1,112 @@
/*
Copyright 2025 The Cozystack Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package crdinstall
import (
"context"
"fmt"
"os"
"path/filepath"
"strings"
"github.com/cozystack/cozystack/internal/manifestutil"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/log"
)
// Install applies Cozystack CRDs using embedded manifests.
// It extracts the manifests and applies them to the cluster using server-side apply,
// then waits for all CRDs to have the Established condition.
func Install(ctx context.Context, k8sClient client.Client, writeEmbeddedManifests func(string) error) error {
logger := log.FromContext(ctx)
tmpDir, err := os.MkdirTemp("", "crd-install-*")
if err != nil {
return fmt.Errorf("failed to create temp directory: %w", err)
}
defer os.RemoveAll(tmpDir)
manifestsDir := filepath.Join(tmpDir, "manifests")
if err := os.MkdirAll(manifestsDir, 0755); err != nil {
return fmt.Errorf("failed to create manifests directory: %w", err)
}
if err := writeEmbeddedManifests(manifestsDir); err != nil {
return fmt.Errorf("failed to extract embedded manifests: %w", err)
}
entries, err := os.ReadDir(manifestsDir)
if err != nil {
return fmt.Errorf("failed to read manifests directory: %w", err)
}
var manifestFiles []string
for _, entry := range entries {
if strings.HasSuffix(entry.Name(), ".yaml") {
manifestFiles = append(manifestFiles, filepath.Join(manifestsDir, entry.Name()))
}
}
if len(manifestFiles) == 0 {
return fmt.Errorf("no YAML manifest files found in directory")
}
var objects []*unstructured.Unstructured
for _, manifestPath := range manifestFiles {
objs, err := manifestutil.ParseManifestFile(manifestPath)
if err != nil {
return fmt.Errorf("failed to parse manifests from %s: %w", manifestPath, err)
}
objects = append(objects, objs...)
}
if len(objects) == 0 {
return fmt.Errorf("no objects found in manifests")
}
// Validate all objects are CRDs — reject anything else to prevent
// accidental force-apply of arbitrary resources.
for _, obj := range objects {
if obj.GetAPIVersion() != "apiextensions.k8s.io/v1" || obj.GetKind() != "CustomResourceDefinition" {
return fmt.Errorf("unexpected object %s %s/%s in CRD manifests, only apiextensions.k8s.io/v1 CustomResourceDefinition is allowed",
obj.GetAPIVersion(), obj.GetKind(), obj.GetName())
}
}
logger.Info("Applying Cozystack CRDs", "count", len(objects))
for _, obj := range objects {
patchOptions := &client.PatchOptions{
FieldManager: "cozystack-operator",
Force: func() *bool { b := true; return &b }(),
}
if err := k8sClient.Patch(ctx, obj, client.Apply, patchOptions); err != nil {
return fmt.Errorf("failed to apply CRD %s: %w", obj.GetName(), err)
}
logger.Info("Applied CRD", "name", obj.GetName())
}
crdNames := manifestutil.CollectCRDNames(objects)
if err := manifestutil.WaitForCRDsEstablished(ctx, k8sClient, crdNames); err != nil {
return fmt.Errorf("CRDs not established after apply: %w", err)
}
logger.Info("CRD installation completed successfully")
return nil
}


@@ -0,0 +1,302 @@
/*
Copyright 2025 The Cozystack Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package crdinstall
import (
"context"
"fmt"
"os"
"path/filepath"
"strings"
"testing"
"time"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"sigs.k8s.io/controller-runtime/pkg/client/interceptor"
"sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
)
func TestWriteEmbeddedManifests(t *testing.T) {
tmpDir := t.TempDir()
if err := WriteEmbeddedManifests(tmpDir); err != nil {
t.Fatalf("WriteEmbeddedManifests() error = %v", err)
}
entries, err := os.ReadDir(tmpDir)
if err != nil {
t.Fatalf("failed to read output dir: %v", err)
}
var yamlFiles []string
for _, e := range entries {
if strings.HasSuffix(e.Name(), ".yaml") {
yamlFiles = append(yamlFiles, e.Name())
}
}
if len(yamlFiles) == 0 {
t.Error("WriteEmbeddedManifests() produced no YAML files")
}
expectedFiles := []string{
"cozystack.io_packages.yaml",
"cozystack.io_packagesources.yaml",
}
for _, expected := range expectedFiles {
found := false
for _, actual := range yamlFiles {
if actual == expected {
found = true
break
}
}
if !found {
t.Errorf("expected file %q not found in output, got %v", expected, yamlFiles)
}
}
// Verify files are non-empty
for _, f := range yamlFiles {
data, err := os.ReadFile(filepath.Join(tmpDir, f))
if err != nil {
t.Errorf("failed to read %s: %v", f, err)
continue
}
if len(data) == 0 {
t.Errorf("file %s is empty", f)
}
}
}
func TestWriteEmbeddedManifests_filePermissions(t *testing.T) {
tmpDir := t.TempDir()
if err := WriteEmbeddedManifests(tmpDir); err != nil {
t.Fatalf("WriteEmbeddedManifests() error = %v", err)
}
entries, err := os.ReadDir(tmpDir)
if err != nil {
t.Fatalf("failed to read output dir: %v", err)
}
for _, e := range entries {
if !strings.HasSuffix(e.Name(), ".yaml") {
continue
}
info, err := e.Info()
if err != nil {
t.Errorf("failed to get info for %s: %v", e.Name(), err)
continue
}
perm := info.Mode().Perm()
if perm&0o077 != 0 {
t.Errorf("file %s has overly permissive mode %o, expected no group/other access", e.Name(), perm)
}
}
}
// newCRDManifestWriter returns a function that writes test CRD YAML files.
func newCRDManifestWriter(crds ...string) func(string) error {
return func(dir string) error {
for i, crd := range crds {
filename := filepath.Join(dir, fmt.Sprintf("crd%d.yaml", i+1))
if err := os.WriteFile(filename, []byte(crd), 0600); err != nil {
return err
}
}
return nil
}
}
var testCRD1 = `apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: packages.cozystack.io
spec:
group: cozystack.io
names:
kind: Package
plural: packages
scope: Namespaced
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
`
var testCRD2 = `apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: packagesources.cozystack.io
spec:
group: cozystack.io
names:
kind: PackageSource
plural: packagesources
scope: Namespaced
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
`
// establishedInterceptor simulates CRDs becoming Established in the API server.
func establishedInterceptor() interceptor.Funcs {
return interceptor.Funcs{
Get: func(ctx context.Context, c client.WithWatch, key client.ObjectKey, obj client.Object, opts ...client.GetOption) error {
if err := c.Get(ctx, key, obj, opts...); err != nil {
return err
}
u, ok := obj.(*unstructured.Unstructured)
if !ok {
return nil
}
if u.GetKind() == "CustomResourceDefinition" {
_ = unstructured.SetNestedSlice(u.Object, []interface{}{
map[string]interface{}{
"type": "Established",
"status": "True",
},
}, "status", "conditions")
}
return nil
},
}
}
func TestInstall_appliesAllCRDs(t *testing.T) {
log.SetLogger(zap.New(zap.UseDevMode(true)))
scheme := runtime.NewScheme()
if err := apiextensionsv1.AddToScheme(scheme); err != nil {
t.Fatalf("failed to add apiextensions to scheme: %v", err)
}
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithInterceptorFuncs(establishedInterceptor()).
Build()
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
ctx = log.IntoContext(ctx, log.FromContext(context.Background()))
err := Install(ctx, fakeClient, newCRDManifestWriter(testCRD1, testCRD2))
if err != nil {
t.Fatalf("Install() error = %v", err)
}
}
func TestInstall_noManifests(t *testing.T) {
log.SetLogger(zap.New(zap.UseDevMode(true)))
scheme := runtime.NewScheme()
fakeClient := fake.NewClientBuilder().WithScheme(scheme).Build()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
ctx = log.IntoContext(ctx, log.FromContext(context.Background()))
err := Install(ctx, fakeClient, func(string) error { return nil })
if err == nil {
t.Fatal("Install() expected error for empty manifests, got nil")
}
if !strings.Contains(err.Error(), "no YAML manifest files found") {
t.Errorf("Install() error = %v, want error containing 'no YAML manifest files found'", err)
}
}
func TestInstall_writeManifestsFails(t *testing.T) {
log.SetLogger(zap.New(zap.UseDevMode(true)))
scheme := runtime.NewScheme()
fakeClient := fake.NewClientBuilder().WithScheme(scheme).Build()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
ctx = log.IntoContext(ctx, log.FromContext(context.Background()))
err := Install(ctx, fakeClient, func(string) error { return os.ErrPermission })
if err == nil {
t.Error("Install() expected error when writeManifests fails, got nil")
}
}
func TestInstall_rejectsNonCRDObjects(t *testing.T) {
log.SetLogger(zap.New(zap.UseDevMode(true)))
scheme := runtime.NewScheme()
if err := apiextensionsv1.AddToScheme(scheme); err != nil {
t.Fatalf("failed to add apiextensions to scheme: %v", err)
}
fakeClient := fake.NewClientBuilder().WithScheme(scheme).Build()
nonCRD := `apiVersion: v1
kind: Namespace
metadata:
name: should-not-be-applied
`
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
ctx = log.IntoContext(ctx, log.FromContext(context.Background()))
err := Install(ctx, fakeClient, newCRDManifestWriter(nonCRD))
if err == nil {
t.Fatal("Install() expected error for non-CRD object, got nil")
}
if !strings.Contains(err.Error(), "unexpected object") {
t.Errorf("Install() error = %v, want error containing 'unexpected object'", err)
}
}
func TestInstall_crdNotEstablished(t *testing.T) {
log.SetLogger(zap.New(zap.UseDevMode(true)))
scheme := runtime.NewScheme()
if err := apiextensionsv1.AddToScheme(scheme); err != nil {
t.Fatalf("failed to add apiextensions to scheme: %v", err)
}
// No interceptor: CRDs will never get Established condition
fakeClient := fake.NewClientBuilder().WithScheme(scheme).Build()
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
defer cancel()
ctx = log.IntoContext(ctx, log.FromContext(context.Background()))
err := Install(ctx, fakeClient, newCRDManifestWriter(testCRD1))
if err == nil {
t.Fatal("Install() expected error when CRDs never become established, got nil")
}
if !strings.Contains(err.Error(), "CRDs not established") {
t.Errorf("Install() error = %v, want error containing 'CRDs not established'", err)
}
}


@@ -0,0 +1,51 @@
/*
Copyright 2025 The Cozystack Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package crdinstall
import (
"embed"
"fmt"
"io/fs"
"os"
"path"
"path/filepath"
)
//go:embed manifests/*.yaml
var embeddedCRDManifests embed.FS
// WriteEmbeddedManifests extracts embedded CRD manifests to a directory.
func WriteEmbeddedManifests(dir string) error {
manifests, err := fs.ReadDir(embeddedCRDManifests, "manifests")
if err != nil {
return fmt.Errorf("failed to read embedded manifests: %w", err)
}
for _, manifest := range manifests {
data, err := fs.ReadFile(embeddedCRDManifests, path.Join("manifests", manifest.Name()))
if err != nil {
return fmt.Errorf("failed to read file %s: %w", manifest.Name(), err)
}
outputPath := filepath.Join(dir, manifest.Name())
if err := os.WriteFile(outputPath, data, 0600); err != nil {
return fmt.Errorf("failed to write file %s: %w", outputPath, err)
}
}
return nil
}


@@ -17,18 +17,15 @@ limitations under the License.
package fluxinstall
import (
"bufio"
"bytes"
"context"
"fmt"
"io"
"os"
"path/filepath"
"strings"
"time"
"github.com/cozystack/cozystack/internal/manifestutil"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
k8syaml "k8s.io/apimachinery/pkg/util/yaml"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/log"
)
@@ -76,7 +73,7 @@ func Install(ctx context.Context, k8sClient client.Client, writeEmbeddedManifest
// Parse all manifest files
var objects []*unstructured.Unstructured
for _, manifestPath := range manifestFiles {
objs, err := parseManifests(manifestPath)
objs, err := manifestutil.ParseManifestFile(manifestPath)
if err != nil {
return fmt.Errorf("failed to parse manifests from %s: %w", manifestPath, err)
}
@@ -110,56 +107,6 @@ func Install(ctx context.Context, k8sClient client.Client, writeEmbeddedManifest
return nil
}
// parseManifests parses YAML manifests into unstructured objects.
func parseManifests(manifestPath string) ([]*unstructured.Unstructured, error) {
data, err := os.ReadFile(manifestPath)
if err != nil {
return nil, fmt.Errorf("failed to read manifest file: %w", err)
}
return readYAMLObjects(bytes.NewReader(data))
}
// readYAMLObjects parses multi-document YAML into unstructured objects.
func readYAMLObjects(reader io.Reader) ([]*unstructured.Unstructured, error) {
var objects []*unstructured.Unstructured
yamlReader := k8syaml.NewYAMLReader(bufio.NewReader(reader))
for {
doc, err := yamlReader.Read()
if err != nil {
if err == io.EOF {
break
}
return nil, fmt.Errorf("failed to read YAML document: %w", err)
}
// Skip empty documents
if len(bytes.TrimSpace(doc)) == 0 {
continue
}
obj := &unstructured.Unstructured{}
decoder := k8syaml.NewYAMLOrJSONDecoder(bytes.NewReader(doc), len(doc))
if err := decoder.Decode(obj); err != nil {
// Skip documents that can't be decoded (might be comments or empty)
if err == io.EOF {
continue
}
return nil, fmt.Errorf("failed to decode YAML document: %w", err)
}
// Skip empty objects (no kind)
if obj.GetKind() == "" {
continue
}
objects = append(objects, obj)
}
return objects, nil
}
// applyManifests applies Kubernetes objects using server-side apply.
func applyManifests(ctx context.Context, k8sClient client.Client, objects []*unstructured.Unstructured) error {
logger := log.FromContext(ctx)
@@ -183,8 +130,11 @@ func applyManifests(ctx context.Context, k8sClient client.Client, objects []*uns
return fmt.Errorf("failed to apply cluster definitions: %w", err)
}
// Wait a bit for CRDs to be registered
time.Sleep(2 * time.Second)
// Wait for CRDs to be established before applying dependent resources
crdNames := manifestutil.CollectCRDNames(stageOne)
if err := manifestutil.WaitForCRDsEstablished(ctx, k8sClient, crdNames); err != nil {
return fmt.Errorf("CRDs not established after apply: %w", err)
}
}
// Apply stage two (everything else)
@@ -215,7 +165,6 @@ func applyObjects(ctx context.Context, k8sClient client.Client, objects []*unstr
return nil
}
// extractNamespace extracts the namespace name from the Namespace object in the manifests.
func extractNamespace(objects []*unstructured.Unstructured) (string, error) {
for _, obj := range objects {
@@ -386,4 +335,3 @@ func setEnvVar(env []interface{}, name, value string) []interface{} {
return env
}


@@ -22,6 +22,7 @@ import (
"io/fs"
"os"
"path"
"path/filepath"
)
//go:embed manifests/*.yaml
@@ -40,8 +41,8 @@ func WriteEmbeddedManifests(dir string) error {
return fmt.Errorf("failed to read file %s: %w", manifest.Name(), err)
}
outputPath := path.Join(dir, manifest.Name())
if err := os.WriteFile(outputPath, data, 0666); err != nil {
outputPath := filepath.Join(dir, manifest.Name())
if err := os.WriteFile(outputPath, data, 0600); err != nil {
return fmt.Errorf("failed to write file %s: %w", outputPath, err)
}
}


@@ -0,0 +1,118 @@
/*
Copyright 2025 The Cozystack Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package manifestutil
import (
"context"
"fmt"
"time"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/log"
)
var crdGVK = schema.GroupVersionKind{
Group: "apiextensions.k8s.io",
Version: "v1",
Kind: "CustomResourceDefinition",
}
// WaitForCRDsEstablished polls the API server until all named CRDs have the
// Established condition set to True, or the context is cancelled.
func WaitForCRDsEstablished(ctx context.Context, k8sClient client.Client, crdNames []string) error {
if len(crdNames) == 0 {
return nil
}
logger := log.FromContext(ctx)
ticker := time.NewTicker(500 * time.Millisecond)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return fmt.Errorf("context cancelled while waiting for CRDs to be established: %w", ctx.Err())
default:
}
allEstablished := true
var pendingCRD string
for _, name := range crdNames {
crd := &unstructured.Unstructured{}
crd.SetGroupVersionKind(crdGVK)
if err := k8sClient.Get(ctx, types.NamespacedName{Name: name}, crd); err != nil {
allEstablished = false
pendingCRD = name
break
}
conditions, found, err := unstructured.NestedSlice(crd.Object, "status", "conditions")
if err != nil || !found {
allEstablished = false
pendingCRD = name
break
}
established := false
for _, c := range conditions {
cond, ok := c.(map[string]interface{})
if !ok {
continue
}
if cond["type"] == "Established" && cond["status"] == "True" {
established = true
break
}
}
if !established {
allEstablished = false
pendingCRD = name
break
}
}
if allEstablished {
logger.Info("All CRDs established", "count", len(crdNames))
return nil
}
logger.V(1).Info("Waiting for CRD to be established", "crd", pendingCRD)
select {
case <-ctx.Done():
return fmt.Errorf("context cancelled while waiting for CRD %q to be established: %w", pendingCRD, ctx.Err())
case <-ticker.C:
}
}
}
// CollectCRDNames returns the names of all CustomResourceDefinition objects
// from the given list of unstructured objects. Only objects with
// apiVersion "apiextensions.k8s.io/v1" and kind "CustomResourceDefinition"
// are matched.
func CollectCRDNames(objects []*unstructured.Unstructured) []string {
var names []string
for _, obj := range objects {
if obj.GetAPIVersion() == "apiextensions.k8s.io/v1" && obj.GetKind() == "CustomResourceDefinition" {
names = append(names, obj.GetName())
}
}
return names
}


@@ -0,0 +1,202 @@
/*
Copyright 2025 The Cozystack Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package manifestutil
import (
"context"
"strings"
"testing"
"time"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"sigs.k8s.io/controller-runtime/pkg/client/interceptor"
"sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
)
func TestCollectCRDNames(t *testing.T) {
objects := []*unstructured.Unstructured{
{Object: map[string]interface{}{
"apiVersion": "v1",
"kind": "Namespace",
"metadata": map[string]interface{}{"name": "test-ns"},
}},
{Object: map[string]interface{}{
"apiVersion": "apiextensions.k8s.io/v1",
"kind": "CustomResourceDefinition",
"metadata": map[string]interface{}{"name": "packages.cozystack.io"},
}},
{Object: map[string]interface{}{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": map[string]interface{}{"name": "test-deploy"},
}},
{Object: map[string]interface{}{
"apiVersion": "apiextensions.k8s.io/v1",
"kind": "CustomResourceDefinition",
"metadata": map[string]interface{}{"name": "packagesources.cozystack.io"},
}},
}
names := CollectCRDNames(objects)
if len(names) != 2 {
t.Fatalf("CollectCRDNames() returned %d names, want 2", len(names))
}
if names[0] != "packages.cozystack.io" {
t.Errorf("names[0] = %q, want %q", names[0], "packages.cozystack.io")
}
if names[1] != "packagesources.cozystack.io" {
t.Errorf("names[1] = %q, want %q", names[1], "packagesources.cozystack.io")
}
}
func TestCollectCRDNames_ignoresWrongAPIVersion(t *testing.T) {
objects := []*unstructured.Unstructured{
{Object: map[string]interface{}{
"apiVersion": "apiextensions.k8s.io/v1",
"kind": "CustomResourceDefinition",
"metadata": map[string]interface{}{"name": "real.crd.io"},
}},
{Object: map[string]interface{}{
"apiVersion": "apiextensions.k8s.io/v1beta1",
"kind": "CustomResourceDefinition",
"metadata": map[string]interface{}{"name": "legacy.crd.io"},
}},
}
names := CollectCRDNames(objects)
if len(names) != 1 {
t.Fatalf("CollectCRDNames() returned %d names, want 1", len(names))
}
if names[0] != "real.crd.io" {
t.Errorf("names[0] = %q, want %q", names[0], "real.crd.io")
}
}
func TestCollectCRDNames_noCRDs(t *testing.T) {
objects := []*unstructured.Unstructured{
{Object: map[string]interface{}{
"apiVersion": "v1",
"kind": "Namespace",
"metadata": map[string]interface{}{"name": "test"},
}},
}
names := CollectCRDNames(objects)
if len(names) != 0 {
t.Errorf("CollectCRDNames() returned %d names, want 0", len(names))
}
}
func TestWaitForCRDsEstablished_success(t *testing.T) {
log.SetLogger(zap.New(zap.UseDevMode(true)))
scheme := runtime.NewScheme()
if err := apiextensionsv1.AddToScheme(scheme); err != nil {
t.Fatalf("failed to add apiextensions to scheme: %v", err)
}
// Create a CRD object in the fake client
crd := &unstructured.Unstructured{Object: map[string]interface{}{
"apiVersion": "apiextensions.k8s.io/v1",
"kind": "CustomResourceDefinition",
"metadata": map[string]interface{}{"name": "packages.cozystack.io"},
}}
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(crd).
WithInterceptorFuncs(interceptor.Funcs{
Get: func(ctx context.Context, c client.WithWatch, key client.ObjectKey, obj client.Object, opts ...client.GetOption) error {
if err := c.Get(ctx, key, obj, opts...); err != nil {
return err
}
u, ok := obj.(*unstructured.Unstructured)
if !ok {
return nil
}
if u.GetKind() == "CustomResourceDefinition" {
_ = unstructured.SetNestedSlice(u.Object, []interface{}{
map[string]interface{}{
"type": "Established",
"status": "True",
},
}, "status", "conditions")
}
return nil
},
}).
Build()
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
ctx = log.IntoContext(ctx, log.FromContext(context.Background()))
err := WaitForCRDsEstablished(ctx, fakeClient, []string{"packages.cozystack.io"})
if err != nil {
t.Fatalf("WaitForCRDsEstablished() error = %v", err)
}
}
func TestWaitForCRDsEstablished_timeout(t *testing.T) {
log.SetLogger(zap.New(zap.UseDevMode(true)))
scheme := runtime.NewScheme()
if err := apiextensionsv1.AddToScheme(scheme); err != nil {
t.Fatalf("failed to add apiextensions to scheme: %v", err)
}
// CRD exists but never gets Established condition
crd := &unstructured.Unstructured{Object: map[string]interface{}{
"apiVersion": "apiextensions.k8s.io/v1",
"kind": "CustomResourceDefinition",
"metadata": map[string]interface{}{"name": "packages.cozystack.io"},
}}
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(crd).
Build()
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
defer cancel()
ctx = log.IntoContext(ctx, log.FromContext(context.Background()))
err := WaitForCRDsEstablished(ctx, fakeClient, []string{"packages.cozystack.io"})
if err == nil {
t.Fatal("WaitForCRDsEstablished() expected error on timeout, got nil")
}
if !strings.Contains(err.Error(), "packages.cozystack.io") {
t.Errorf("error should mention stuck CRD name, got: %v", err)
}
}
func TestWaitForCRDsEstablished_empty(t *testing.T) {
scheme := runtime.NewScheme()
fakeClient := fake.NewClientBuilder().WithScheme(scheme).Build()
ctx := context.Background()
err := WaitForCRDsEstablished(ctx, fakeClient, nil)
if err != nil {
t.Fatalf("WaitForCRDsEstablished() with empty names should return nil, got: %v", err)
}
}


@@ -0,0 +1,76 @@
/*
Copyright 2025 The Cozystack Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package manifestutil
import (
"bufio"
"bytes"
"fmt"
"io"
"os"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
k8syaml "k8s.io/apimachinery/pkg/util/yaml"
)
// ParseManifestFile reads a YAML file and parses it into unstructured objects.
func ParseManifestFile(manifestPath string) ([]*unstructured.Unstructured, error) {
data, err := os.ReadFile(manifestPath)
if err != nil {
return nil, fmt.Errorf("failed to read manifest file: %w", err)
}
return ReadYAMLObjects(bytes.NewReader(data))
}
// ReadYAMLObjects parses multi-document YAML from a reader into unstructured objects.
// Empty documents and documents without a kind are skipped.
func ReadYAMLObjects(reader io.Reader) ([]*unstructured.Unstructured, error) {
var objects []*unstructured.Unstructured
yamlReader := k8syaml.NewYAMLReader(bufio.NewReader(reader))
for {
doc, err := yamlReader.Read()
if err != nil {
if err == io.EOF {
break
}
return nil, fmt.Errorf("failed to read YAML document: %w", err)
}
if len(bytes.TrimSpace(doc)) == 0 {
continue
}
obj := &unstructured.Unstructured{}
decoder := k8syaml.NewYAMLOrJSONDecoder(bytes.NewReader(doc), len(doc))
if err := decoder.Decode(obj); err != nil {
if err == io.EOF {
continue
}
return nil, fmt.Errorf("failed to decode YAML document: %w", err)
}
if obj.GetKind() == "" {
continue
}
objects = append(objects, obj)
}
return objects, nil
}


@@ -0,0 +1,161 @@
/*
Copyright 2025 The Cozystack Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package manifestutil
import (
"os"
"path/filepath"
"strings"
"testing"
)
func TestReadYAMLObjects(t *testing.T) {
tests := []struct {
name string
input string
wantCount int
wantErr bool
}{
{
name: "single document",
input: `apiVersion: v1
kind: ConfigMap
metadata:
name: test
`,
wantCount: 1,
},
{
name: "multiple documents",
input: `apiVersion: v1
kind: ConfigMap
metadata:
name: test1
---
apiVersion: v1
kind: ConfigMap
metadata:
name: test2
`,
wantCount: 2,
},
{
name: "empty input",
input: "",
wantCount: 0,
},
{
name: "decoder rejects document without kind",
input: `apiVersion: v1
metadata:
name: test
`,
wantErr: true,
},
{
name: "whitespace-only document between separators is skipped",
input: `apiVersion: v1
kind: ConfigMap
metadata:
name: test1
---
---
apiVersion: v1
kind: ConfigMap
metadata:
name: test2
`,
wantCount: 2,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
objects, err := ReadYAMLObjects(strings.NewReader(tt.input))
if (err != nil) != tt.wantErr {
t.Errorf("ReadYAMLObjects() error = %v, wantErr %v", err, tt.wantErr)
return
}
if len(objects) != tt.wantCount {
t.Errorf("ReadYAMLObjects() returned %d objects, want %d", len(objects), tt.wantCount)
}
})
}
}
func TestReadYAMLObjects_preservesFields(t *testing.T) {
input := `apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: packages.cozystack.io
spec:
group: cozystack.io
`
objects, err := ReadYAMLObjects(strings.NewReader(input))
if err != nil {
t.Fatalf("ReadYAMLObjects() error = %v", err)
}
if len(objects) != 1 {
t.Fatalf("expected 1 object, got %d", len(objects))
}
obj := objects[0]
if obj.GetKind() != "CustomResourceDefinition" {
t.Errorf("kind = %q, want %q", obj.GetKind(), "CustomResourceDefinition")
}
if obj.GetName() != "packages.cozystack.io" {
t.Errorf("name = %q, want %q", obj.GetName(), "packages.cozystack.io")
}
if obj.GetAPIVersion() != "apiextensions.k8s.io/v1" {
t.Errorf("apiVersion = %q, want %q", obj.GetAPIVersion(), "apiextensions.k8s.io/v1")
}
}
func TestParseManifestFile(t *testing.T) {
tmpDir := t.TempDir()
manifestPath := filepath.Join(tmpDir, "test.yaml")
content := `apiVersion: v1
kind: ConfigMap
metadata:
name: cm1
---
apiVersion: v1
kind: ConfigMap
metadata:
name: cm2
`
if err := os.WriteFile(manifestPath, []byte(content), 0600); err != nil {
t.Fatalf("failed to write test manifest: %v", err)
}
objects, err := ParseManifestFile(manifestPath)
if err != nil {
t.Fatalf("ParseManifestFile() error = %v", err)
}
if len(objects) != 2 {
t.Errorf("ParseManifestFile() returned %d objects, want 2", len(objects))
}
}
func TestParseManifestFile_notFound(t *testing.T) {
_, err := ParseManifestFile("/nonexistent/path/test.yaml")
if err == nil {
t.Error("ParseManifestFile() expected error for nonexistent file, got nil")
}
}


@@ -0,0 +1,7 @@
apiVersion: v2
name: harbor
description: Managed Harbor container registry
icon: /logos/harbor.svg
type: application
version: 0.0.0 # Placeholder, the actual version will be automatically set during the build process
appVersion: "2.14.2"


@@ -0,0 +1,7 @@
NAME=harbor
include ../../../hack/package.mk
generate:
cozyvalues-gen -v values.yaml -s values.schema.json -r README.md
../../../hack/update-crd.sh


@@ -0,0 +1,47 @@
# Managed Harbor Container Registry
Harbor is an open source trusted cloud native registry project that stores, signs, and scans content.
## Parameters
### Common parameters
| Name | Description | Type | Value |
| -------------- | -------------------------------------------------------------------------------------------- | -------- | ----- |
| `host` | Hostname for external access to Harbor (defaults to 'harbor' subdomain for the tenant host). | `string` | `""` |
| `storageClass` | StorageClass used to store the data. | `string` | `""` |
### Component configuration
| Name | Description | Type | Value |
| ----------------------------- | -------------------------------------------------------------------------------------------------------- | ---------- | ------- |
| `core` | Core API server configuration. | `object` | `{}` |
| `core.resources` | Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
| `core.resources.cpu` | Number of CPU cores allocated. | `quantity` | `""` |
| `core.resources.memory` | Amount of memory allocated. | `quantity` | `""` |
| `core.resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `small` |
| `registry` | Container image registry configuration. | `object` | `{}` |
| `registry.resources` | Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
| `registry.resources.cpu` | Number of CPU cores allocated. | `quantity` | `""` |
| `registry.resources.memory` | Amount of memory allocated. | `quantity` | `""` |
| `registry.resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `small` |
| `jobservice` | Background job service configuration. | `object` | `{}` |
| `jobservice.resources` | Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
| `jobservice.resources.cpu` | Number of CPU cores allocated. | `quantity` | `""` |
| `jobservice.resources.memory` | Amount of memory allocated. | `quantity` | `""` |
| `jobservice.resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `nano` |
| `trivy` | Trivy vulnerability scanner configuration. | `object` | `{}` |
| `trivy.enabled` | Enable or disable the vulnerability scanner. | `bool` | `true` |
| `trivy.size` | Persistent Volume size for vulnerability database cache. | `quantity` | `5Gi` |
| `trivy.resources` | Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
| `trivy.resources.cpu` | Number of CPU cores allocated. | `quantity` | `""` |
| `trivy.resources.memory` | Amount of memory allocated. | `quantity` | `""` |
| `trivy.resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `nano` |
| `database` | PostgreSQL database configuration. | `object` | `{}` |
| `database.size` | Persistent Volume size for database storage. | `quantity` | `5Gi` |
| `database.replicas` | Number of database instances. | `int` | `2` |
| `redis` | Redis cache configuration. | `object` | `{}` |
| `redis.size` | Persistent Volume size for cache storage. | `quantity` | `1Gi` |
| `redis.replicas` | Number of Redis replicas. | `int` | `2` |
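The parameters above map directly onto the chart's `values.yaml`. As a sketch (the hostname and StorageClass name here are hypothetical), a configuration that pins explicit core resources, bumps the registry preset, and disables Trivy could look like:

```yaml
host: registry.example.org   # hypothetical tenant hostname
storageClass: replicated     # hypothetical StorageClass name
core:
  resources:
    cpu: "1"
    memory: 2Gi
registry:
  resourcesPreset: medium
trivy:
  enabled: false
  size: 5Gi
database:
  replicas: 2
  size: 10Gi
redis:
  replicas: 2
  size: 1Gi
```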


@@ -0,0 +1 @@
../../../library/cozy-lib

File diff suppressed because one or more lines are too long


@@ -0,0 +1,19 @@
{{- $seaweedfs := .Values._namespace.seaweedfs }}
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClaim
metadata:
name: {{ .Release.Name }}-registry
spec:
bucketClassName: {{ $seaweedfs }}
protocols:
- s3
---
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketAccess
metadata:
name: {{ .Release.Name }}-registry
spec:
bucketAccessClassName: {{ $seaweedfs }}
bucketClaimName: {{ .Release.Name }}-registry
credentialsSecretName: {{ .Release.Name }}-registry-bucket
protocol: s3


@@ -0,0 +1,46 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ .Release.Name }}-dashboard-resources
rules:
- apiGroups:
- ""
resources:
- services
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
- apiGroups:
- ""
resources:
- secrets
resourceNames:
- {{ .Release.Name }}-credentials
verbs: ["get", "list", "watch"]
- apiGroups:
- networking.k8s.io
resources:
- ingresses
resourceNames:
- {{ .Release.Name }}-ingress
verbs: ["get", "list", "watch"]
- apiGroups:
- cozystack.io
resources:
- workloadmonitors
resourceNames:
- {{ .Release.Name }}-core
- {{ .Release.Name }}-registry
- {{ .Release.Name }}-portal
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "super-admin" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io


@@ -0,0 +1,201 @@
{{- $host := .Values._namespace.host }}
{{- $harborHost := .Values.host | default (printf "%s.%s" .Release.Name $host) }}
{{- $existingSecret := lookup "v1" "Secret" .Release.Namespace (printf "%s-credentials" .Release.Name) }}
{{- $adminPassword := randAlphaNum 16 }}
{{- $redisPassword := randAlphaNum 32 }}
{{- if $existingSecret }}
{{- $adminPassword = index $existingSecret.data "admin-password" | b64dec }}
{{- if hasKey $existingSecret.data "redis-password" }}
{{- $redisPassword = index $existingSecret.data "redis-password" | b64dec }}
{{- end }}
{{- end }}
{{- $existingCoreSecret := lookup "v1" "Secret" .Release.Namespace (printf "%s-core" .Release.Name) }}
{{- $tokenKey := "" }}
{{- $tokenCert := "" }}
{{- if $existingCoreSecret }}
{{- if hasKey $existingCoreSecret.data "tls.key" }}
{{- $tokenKey = index $existingCoreSecret.data "tls.key" | b64dec }}
{{- end }}
{{- if hasKey $existingCoreSecret.data "tls.crt" }}
{{- $tokenCert = index $existingCoreSecret.data "tls.crt" | b64dec }}
{{- end }}
{{- end }}
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-credentials
stringData:
admin-password: {{ $adminPassword | quote }}
redis-password: {{ $redisPassword | quote }}
url: https://{{ $harborHost }}
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: {{ .Release.Name }}-system
labels:
sharding.fluxcd.io/key: tenants
spec:
chartRef:
kind: ExternalArtifact
name: cozystack-harbor-application-default-harbor-system
namespace: cozy-system
interval: 5m
timeout: 15m
install:
remediation:
retries: -1
upgrade:
force: true
remediation:
retries: -1
valuesFrom:
- kind: Secret
name: cozystack-values
- kind: Secret
name: {{ .Release.Name }}-credentials
valuesKey: redis-password
targetPath: redis.password
- kind: Secret
name: {{ .Release.Name }}-credentials
valuesKey: redis-password
targetPath: harbor.redis.external.password
values:
bucket:
secretName: {{ .Release.Name }}-registry-bucket
db:
replicas: {{ .Values.database.replicas }}
size: {{ .Values.database.size }}
{{- with .Values.storageClass }}
storageClass: {{ . }}
{{- end }}
redis:
replicas: {{ .Values.redis.replicas }}
size: {{ .Values.redis.size }}
{{- with .Values.storageClass }}
storageClass: {{ . }}
{{- end }}
harbor:
fullnameOverride: {{ .Release.Name }}
harborAdminPassword: {{ $adminPassword | quote }}
externalURL: https://{{ $harborHost }}
expose:
type: clusterIP
clusterIP:
name: {{ .Release.Name }}
tls:
enabled: false
persistence:
enabled: true
resourcePolicy: "keep"
imageChartStorage:
type: s3
s3:
existingSecret: {{ .Release.Name }}-registry-s3
region: us-east-1
bucket: {{ .Release.Name }}-registry
secure: false
v4auth: true
{{- if .Values.trivy.enabled }}
persistentVolumeClaim:
trivy:
size: {{ .Values.trivy.size }}
{{- with .Values.storageClass }}
storageClass: {{ . }}
{{- end }}
{{- end }}
portal:
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list "nano" (dict) $) | nindent 10 }}
core:
{{- if and $tokenKey $tokenCert }}
tokenKey: {{ $tokenKey | quote }}
tokenCert: {{ $tokenCert | quote }}
{{- end }}
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list .Values.core.resourcesPreset .Values.core.resources $) | nindent 10 }}
registry:
registry:
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list .Values.registry.resourcesPreset .Values.registry.resources $) | nindent 12 }}
controller:
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list .Values.registry.resourcesPreset .Values.registry.resources $) | nindent 12 }}
jobservice:
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list .Values.jobservice.resourcesPreset .Values.jobservice.resources $) | nindent 10 }}
trivy:
enabled: {{ .Values.trivy.enabled }}
{{- if .Values.trivy.enabled }}
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list .Values.trivy.resourcesPreset .Values.trivy.resources $) | nindent 10 }}
{{- end }}
database:
type: external
external:
host: "{{ .Release.Name }}-db-rw"
port: "5432"
username: app
coreDatabase: app
sslmode: require
existingSecret: "{{ .Release.Name }}-db-app"
redis:
type: external
external:
addr: "rfs-{{ .Release.Name }}-redis:26379"
sentinelMasterSet: "mymaster"
coreDatabaseIndex: "0"
jobserviceDatabaseIndex: "1"
registryDatabaseIndex: "2"
trivyAdapterIndex: "5"
metrics:
enabled: true
serviceMonitor:
enabled: true
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}-core
spec:
replicas: 1
minReplicas: 1
kind: harbor
type: core
selector:
release: {{ $.Release.Name }}-system
component: core
version: {{ $.Chart.Version }}
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}-registry
spec:
replicas: 1
minReplicas: 1
kind: harbor
type: registry
selector:
release: {{ $.Release.Name }}-system
component: registry
version: {{ $.Chart.Version }}
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}-portal
spec:
replicas: 1
minReplicas: 1
kind: harbor
type: portal
selector:
release: {{ $.Release.Name }}-system
component: portal
version: {{ $.Chart.Version }}


@@ -0,0 +1,36 @@
{{- $ingress := .Values._namespace.ingress }}
{{- $host := .Values._namespace.host }}
{{- $harborHost := .Values.host | default (printf "%s.%s" .Release.Name $host) }}
{{- $issuerType := (index .Values._cluster "clusterissuer") | default "http01" }}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Release.Name }}-ingress
annotations:
nginx.ingress.kubernetes.io/proxy-body-size: "0"
nginx.ingress.kubernetes.io/proxy-read-timeout: "900"
nginx.ingress.kubernetes.io/proxy-send-timeout: "900"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
{{- if ne $issuerType "cloudflare" }}
acme.cert-manager.io/http01-ingress-class: {{ $ingress }}
{{- end }}
cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
ingressClassName: {{ $ingress }}
tls:
- hosts:
- {{ $harborHost | quote }}
secretName: {{ .Release.Name }}-ingress-tls
rules:
- host: {{ $harborHost | quote }}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{ .Release.Name }}
port:
number: 80


@@ -0,0 +1,315 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"core": {
"description": "Core API server configuration.",
"type": "object",
"default": {},
"properties": {
"resources": {
"description": "Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied.",
"type": "object",
"default": {},
"properties": {
"cpu": {
"description": "Number of CPU cores allocated.",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
},
"memory": {
"description": "Amount of memory allocated.",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
}
}
},
"resourcesPreset": {
"description": "Default sizing preset used when `resources` is omitted.",
"type": "string",
"default": "small",
"enum": [
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
}
}
},
"database": {
"description": "PostgreSQL database configuration.",
"type": "object",
"default": {},
"required": [
"replicas",
"size"
],
"properties": {
"replicas": {
"description": "Number of database instances.",
"type": "integer",
"default": 2
},
"size": {
"description": "Persistent Volume size for database storage.",
"default": "5Gi",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
}
}
},
"host": {
"description": "Hostname for external access to Harbor (defaults to 'harbor' subdomain for the tenant host).",
"type": "string",
"default": ""
},
"jobservice": {
"description": "Background job service configuration.",
"type": "object",
"default": {},
"properties": {
"resources": {
"description": "Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied.",
"type": "object",
"default": {},
"properties": {
"cpu": {
"description": "Number of CPU cores allocated.",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
},
"memory": {
"description": "Amount of memory allocated.",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
}
}
},
"resourcesPreset": {
"description": "Default sizing preset used when `resources` is omitted.",
"type": "string",
"default": "nano",
"enum": [
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
}
}
},
"redis": {
"description": "Redis cache configuration.",
"type": "object",
"default": {},
"required": [
"replicas",
"size"
],
"properties": {
"replicas": {
"description": "Number of Redis replicas.",
"type": "integer",
"default": 2
},
"size": {
"description": "Persistent Volume size for cache storage.",
"default": "1Gi",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
}
}
},
"registry": {
"description": "Container image registry configuration.",
"type": "object",
"default": {},
"properties": {
"resources": {
"description": "Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied.",
"type": "object",
"default": {},
"properties": {
"cpu": {
"description": "Number of CPU cores allocated.",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
},
"memory": {
"description": "Amount of memory allocated.",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
}
}
},
"resourcesPreset": {
"description": "Default sizing preset used when `resources` is omitted.",
"type": "string",
"default": "small",
"enum": [
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
}
}
},
"storageClass": {
"description": "StorageClass used to store the data.",
"type": "string",
"default": ""
},
"trivy": {
"description": "Trivy vulnerability scanner configuration.",
"type": "object",
"default": {},
"required": [
"enabled",
"size"
],
"properties": {
"enabled": {
"description": "Enable or disable the vulnerability scanner.",
"type": "boolean",
"default": true
},
"resources": {
"description": "Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied.",
"type": "object",
"default": {},
"properties": {
"cpu": {
"description": "Number of CPU cores allocated.",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
},
"memory": {
"description": "Amount of memory allocated.",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
}
}
},
"resourcesPreset": {
"description": "Default sizing preset used when `resources` is omitted.",
"type": "string",
"default": "nano",
"enum": [
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
},
"size": {
"description": "Persistent Volume size for vulnerability database cache.",
"default": "5Gi",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
}
}
}
}
}


@@ -0,0 +1,84 @@
##
## @section Common parameters
##
## @param {string} [host] - Hostname for external access to Harbor (defaults to 'harbor' subdomain for the tenant host).
host: ""
## @param {string} storageClass - StorageClass used to store the data.
storageClass: ""
##
## @section Component configuration
##
## @typedef {struct} Resources - Resource configuration.
## @field {quantity} [cpu] - Number of CPU cores allocated.
## @field {quantity} [memory] - Amount of memory allocated.
## @enum {string} ResourcesPreset - Default sizing preset.
## @value nano
## @value micro
## @value small
## @value medium
## @value large
## @value xlarge
## @value 2xlarge
## @typedef {struct} Core - Core API server configuration.
## @field {Resources} [resources] - Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied.
## @field {ResourcesPreset} [resourcesPreset] - Default sizing preset used when `resources` is omitted.
## @param {Core} core - Core API server configuration.
core:
resources: {}
resourcesPreset: "small"
## @typedef {struct} Registry - Container image registry configuration.
## @field {Resources} [resources] - Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied.
## @field {ResourcesPreset} [resourcesPreset] - Default sizing preset used when `resources` is omitted.
## @param {Registry} registry - Container image registry configuration.
registry:
resources: {}
resourcesPreset: "small"
## @typedef {struct} Jobservice - Background job service configuration.
## @field {Resources} [resources] - Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied.
## @field {ResourcesPreset} [resourcesPreset] - Default sizing preset used when `resources` is omitted.
## @param {Jobservice} jobservice - Background job service configuration.
jobservice:
resources: {}
resourcesPreset: "nano"
## @typedef {struct} Trivy - Trivy vulnerability scanner configuration.
## @field {bool} enabled - Enable or disable the vulnerability scanner.
## @field {quantity} size - Persistent Volume size for vulnerability database cache.
## @field {Resources} [resources] - Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied.
## @field {ResourcesPreset} [resourcesPreset] - Default sizing preset used when `resources` is omitted.
## @param {Trivy} trivy - Trivy vulnerability scanner configuration.
trivy:
enabled: true
size: 5Gi
resources: {}
resourcesPreset: "nano"
## @typedef {struct} Database - PostgreSQL database configuration (provisioned via CloudNativePG).
## @field {quantity} size - Persistent Volume size for database storage.
## @field {int} replicas - Number of database instances.
## @param {Database} database - PostgreSQL database configuration.
database:
size: 5Gi
replicas: 2
## @typedef {struct} Redis - Redis cache configuration (provisioned via redis-operator).
## @field {quantity} size - Persistent Volume size for cache storage.
## @field {int} replicas - Number of Redis replicas.
## @param {Redis} redis - Redis cache configuration.
redis:
size: 1Gi
replicas: 2


@@ -1 +1 @@
ghcr.io/cozystack/cozystack/nginx-cache:0.0.0@sha256:9e34fd50393b418d9516aadb488067a3a63675b045811beb1c0afc9c61e149e8
ghcr.io/cozystack/cozystack/nginx-cache:0.0.0@sha256:cb25e40cb665b8bbeee8cb1ec39da4c9a7452ef3f2f371912bbc0d1b1e2d40a8


@@ -1 +1 @@
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:1997d623bb60a5b540027a4e0716e7ca84f668ee09f31290e9c43324f0003137
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:604561e23df1b8eb25c24cf73fd93c7aaa6d1e7c56affbbda5c6f0f83424e4b1


@@ -1 +1 @@
ghcr.io/cozystack/cozystack/ubuntu-container-disk:v1.33@sha256:71a74ca30f75967bae309be2758f19aa3d37c60b19426b9b622ff1c33a80362f
ghcr.io/cozystack/cozystack/ubuntu-container-disk:v1.33@sha256:19ee4c76f0b3b7b40b97995ca78988ad8c82f6e9c75288d8b7b4b88a64f75d50


@@ -95,10 +95,6 @@ spec:
{{- with .Values.storageClass }}
storageClassName: {{ . }}
{{- end }}
promExporter:
enabled: true
podMonitor:
enabled: true
{{- if .Values.external }}
service:
merge:


@@ -6,15 +6,20 @@ Tenants can be created recursively and are subject to the following rules:
### Tenant naming
Tenant names must be alphanumeric.
Using dashes (`-`) in tenant names is not allowed, unlike with other services.
This limitation exists to keep consistent naming in tenants, nested tenants, and services deployed in them.
Tenant names must follow DNS-1035 naming rules:
- Must start with a lowercase letter (`a-z`)
- Can only contain lowercase letters, numbers, and hyphens (`a-z`, `0-9`, `-`)
- Must end with a letter or number (not a hyphen)
- Maximum length depends on the cluster configuration (Helm release prefix and root domain)
**Note:** Using dashes (`-`) in tenant names is **allowed but discouraged**, unlike with other services.
This is to keep consistent naming in tenants, nested tenants, and services deployed in them.
Names with dashes (e.g., `foo-bar`) may lead to ambiguous parsing of internal resource names like `tenant-foo-bar`.
For example:
- The root tenant is named `root`, but internally it's referenced as `tenant-root`.
- A nested tenant could be named `foo`, which would result in `tenant-foo` in service names and URLs.
- However, a tenant cannot be named `foo-bar`, because parsing names such as `tenant-foo-bar` would be ambiguous.
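The DNS-1035 rules above can be checked with the same label regex Kubernetes uses for validation. A minimal sketch (`validTenantName` is a hypothetical helper; maximum-length enforcement is omitted since it depends on cluster configuration):

```go
package main

import (
	"fmt"
	"regexp"
)

// dns1035 matches RFC 1035 label names as enforced by Kubernetes:
// a lowercase letter first, then lowercase letters, digits, or
// hyphens, ending with a letter or digit.
var dns1035 = regexp.MustCompile(`^[a-z]([-a-z0-9]*[a-z0-9])?$`)

func validTenantName(name string) bool {
	return dns1035.MatchString(name)
}

func main() {
	for _, n := range []string{"foo", "foo-bar", "Foo", "-foo", "foo-"} {
		fmt.Printf("%s: %v\n", n, validTenantName(n))
	}
}
```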
### Unique domains


@@ -16,8 +16,8 @@ spec:
namespaces:
- {{ .Release.Namespace }}
{{- range $subnetName, $subnetConfig := .Values.subnets }}
{{- $subnetId := print "subnet-" (print $.Release.Namespace "/" $vpcId "/" $subnetName | sha256sum | trunc 8) }}
{{- range .Values.subnets }}
{{- $subnetId := print "subnet-" (print $.Release.Namespace "/" $vpcId "/" .name | sha256sum | trunc 8) }}
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
@@ -25,7 +25,7 @@ metadata:
name: {{ $subnetId }}
namespace: {{ $.Release.Namespace }}
labels:
cozystack.io/subnetName: {{ $subnetName }}
cozystack.io/subnetName: {{ .name }}
cozystack.io/vpcId: {{ $vpcId }}
cozystack.io/vpcName: {{ $.Release.Name }}
cozystack.io/tenantName: {{ $.Release.Namespace }}
@@ -42,13 +42,13 @@ kind: Subnet
metadata:
name: {{ $subnetId }}
labels:
cozystack.io/subnetName: {{ $subnetName }}
cozystack.io/subnetName: {{ .name }}
cozystack.io/vpcId: {{ $vpcId }}
cozystack.io/vpcName: {{ $.Release.Name }}
cozystack.io/tenantName: {{ $.Release.Namespace }}
spec:
vpc: {{ $vpcId }}
cidrBlock: {{ $subnetConfig.cidr }}
cidrBlock: {{ .cidr }}
provider: "{{ $subnetId }}.{{ $.Release.Namespace }}.ovn"
protocol: IPv4
enableLb: false
@@ -66,9 +66,9 @@ metadata:
cozystack.io/vpcId: {{ $vpcId }}
cozystack.io/tenantName: {{ $.Release.Namespace }}
data:
{{- range $subnetName, $subnetConfig := .Values.subnets }}
{{ $subnetName }}.ID: {{ print "subnet-" (print $.Release.Namespace "/" $vpcId "/" $subnetName | sha256sum | trunc 8) }}
{{ $subnetName }}.CIDR: {{ $subnetConfig.cidr }}
{{- range .Values.subnets }}
{{ .name }}.ID: {{ print "subnet-" (print $.Release.Namespace "/" $vpcId "/" .name | sha256sum | trunc 8) }}
{{ .name }}.CIDR: {{ .cidr }}
{{- end }}
---
apiVersion: rbac.authorization.k8s.io/v1
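The subnet ID in the template above is derived deterministically: sha256 of `<namespace>/<vpcId>/<name>`, hex-encoded, truncated to 8 characters, and prefixed with `subnet-`. A Go sketch of the same Sprig `sha256sum | trunc 8` pipeline (`subnetID` is a hypothetical helper):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// subnetID mirrors the template expression
// print "subnet-" (print ns "/" vpcID "/" name | sha256sum | trunc 8):
// hash the joined key, hex-encode, keep the first 8 characters.
func subnetID(namespace, vpcID, name string) string {
	sum := sha256.Sum256([]byte(fmt.Sprintf("%s/%s/%s", namespace, vpcID, name)))
	return "subnet-" + hex.EncodeToString(sum[:])[:8]
}

func main() {
	// Same inputs always yield the same ID, so re-rendering the
	// chart never renames existing NetworkAttachmentDefinitions.
	fmt.Println(subnetID("tenant-foo", "vpc-abc123", "mysubnet0"))
}
```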


@@ -4,14 +4,21 @@
"properties": {
"subnets": {
"description": "Subnets of a VPC",
"type": "object",
"default": {},
"additionalProperties": {
"type": "array",
"default": [],
"items": {
"type": "object",
"required": [
"name"
],
"properties": {
"cidr": {
"description": "IP address range",
"type": "string"
},
"name": {
"description": "Subnet name",
"type": "string"
}
}
}


@@ -3,13 +3,14 @@
##
## @typedef {struct} Subnet - Subnet of a VPC
## @field {string} name - Subnet name
## @field {string} [cidr] - IP address range
## @param {map[string]Subnet} subnets - Subnets of a VPC
subnets: {}
## @param {[]Subnet} subnets - Subnets of a VPC
subnets: []
## Example:
## subnets:
## mysubnet0:
## - name: mysubnet0
## cidr: "172.16.0.0/24"
## mysubnet1:
## - name: mysubnet1
## cidr: "172.16.1.0/24"


@@ -0,0 +1,9 @@
# VCS and IDE
.git
.gitignore
# Build artifacts
Makefile
images/
example/
*.tgz


@@ -15,7 +15,7 @@ apply:
diff:
cozyhr show --namespace $(NAMESPACE) $(NAME) --plain | kubectl diff -f -
image: pre-checks image-operator image-packages
image: pre-checks image-operator image-packages chart
image-operator:
docker buildx build -f images/cozystack-operator/Dockerfile ../../.. \
@@ -43,3 +43,9 @@ image-packages:
test -n "$$DIGEST" && \
yq -i '.cozystackOperator.platformSourceUrl = strenv(REPO)' values.yaml && \
yq -i '.cozystackOperator.platformSourceRef = "digest=" + strenv(DIGEST)' values.yaml
chart:
set -e; \
PKG=$$(helm package . --version $(COZYSTACK_VERSION) | awk '{print $$NF}'); \
trap 'rm -f "$$PKG"' EXIT; \
if [ "$(PUSH)" = "1" ]; then helm push "$$PKG" oci://$(REGISTRY); fi

View File

@@ -0,0 +1 @@
*.yaml linguist-generated


@@ -0,0 +1,171 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.16.4
name: packages.cozystack.io
spec:
group: cozystack.io
names:
kind: Package
listKind: PackageList
plural: packages
shortNames:
- pkg
- pkgs
singular: package
scope: Cluster
versions:
- additionalPrinterColumns:
- description: Selected variant
jsonPath: .spec.variant
name: Variant
type: string
- description: Ready status
jsonPath: .status.conditions[?(@.type=='Ready')].status
name: Ready
type: string
- description: Ready message
jsonPath: .status.conditions[?(@.type=='Ready')].message
name: Status
type: string
name: v1alpha1
schema:
openAPIV3Schema:
description: Package is the Schema for the packages API
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
description: PackageSpec defines the desired state of Package
properties:
components:
additionalProperties:
description: PackageComponent defines overrides for a specific component
properties:
enabled:
description: |-
Enabled indicates whether this component should be installed
If false, the component will be disabled even if it's defined in the PackageSource
type: boolean
values:
description: |-
Values contains Helm chart values as a JSON object
These values will be merged with the default values from the PackageSource
x-kubernetes-preserve-unknown-fields: true
type: object
description: |-
Components is a map of release name to component overrides
Allows overriding values and enabling/disabling specific components from the PackageSource
type: object
ignoreDependencies:
description: |-
IgnoreDependencies is a list of package source dependencies to ignore
Dependencies listed here will not be installed even if they are specified in the PackageSource
items:
type: string
type: array
variant:
description: |-
Variant is the name of the variant to use from the PackageSource
If not specified, defaults to "default"
type: string
type: object
status:
description: PackageStatus defines the observed state of Package
properties:
conditions:
description: Conditions represents the latest available observations
of a Package's state
items:
description: Condition contains details for one aspect of the current
state of this API Resource.
properties:
lastTransitionTime:
description: |-
lastTransitionTime is the last time the condition transitioned from one status to another.
This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable.
format: date-time
type: string
message:
description: |-
message is a human readable message indicating details about the transition.
This may be an empty string.
maxLength: 32768
type: string
observedGeneration:
description: |-
observedGeneration represents the .metadata.generation that the condition was set based upon.
For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date
with respect to the current state of the instance.
format: int64
minimum: 0
type: integer
reason:
description: |-
reason contains a programmatic identifier indicating the reason for the condition's last transition.
Producers of specific condition types may define expected values and meanings for this field,
and whether the values are considered a guaranteed API.
The value should be a CamelCase string.
This field may not be empty.
maxLength: 1024
minLength: 1
pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$
type: string
status:
description: status of the condition, one of True, False, Unknown.
enum:
- "True"
- "False"
- Unknown
type: string
type:
description: type of condition in CamelCase or in foo.example.com/CamelCase.
maxLength: 316
pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
type: string
required:
- lastTransitionTime
- message
- reason
- status
- type
type: object
type: array
dependencies:
additionalProperties:
description: DependencyStatus represents the readiness status of
a dependency
properties:
ready:
description: Ready indicates whether the dependency is ready
type: boolean
required:
- ready
type: object
description: |-
Dependencies tracks the readiness status of each dependency
Key is the dependency package name, value indicates if the dependency is ready
type: object
type: object
type: object
served: true
storage: true
subresources:
status: {}
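Tying the schema together, a minimal Package object under this CRD might look like the following (the component and dependency names are illustrative, not taken from a real bundle):

```yaml
apiVersion: cozystack.io/v1alpha1
kind: Package
metadata:
  name: example
spec:
  variant: default          # defaults to "default" when omitted
  ignoreDependencies:
    - cozystack.networking  # skip this PackageSource dependency
  components:
    example-component:
      enabled: true
      values:
        replicas: 2         # merged over the PackageSource defaults
```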

View File

@@ -0,0 +1,250 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.16.4
name: packagesources.cozystack.io
spec:
group: cozystack.io
names:
kind: PackageSource
listKind: PackageSourceList
plural: packagesources
shortNames:
- pks
singular: packagesource
scope: Cluster
versions:
- additionalPrinterColumns:
- description: Package variants (comma-separated)
jsonPath: .status.variants
name: Variants
type: string
- description: Ready status
jsonPath: .status.conditions[?(@.type=='Ready')].status
name: Ready
type: string
- description: Ready message
jsonPath: .status.conditions[?(@.type=='Ready')].message
name: Status
type: string
name: v1alpha1
schema:
openAPIV3Schema:
description: PackageSource is the Schema for the packagesources API
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
description: PackageSourceSpec defines the desired state of PackageSource
properties:
sourceRef:
description: SourceRef is the source reference for the package source
charts
properties:
kind:
description: Kind of the source reference
enum:
- GitRepository
- OCIRepository
type: string
name:
description: Name of the source reference
type: string
namespace:
description: Namespace of the source reference
type: string
path:
description: |-
Path is the base path where packages are located in the source.
For GitRepository, defaults to "packages" if not specified.
For OCIRepository, defaults to empty string (root) if not specified.
type: string
required:
- kind
- name
- namespace
type: object
variants:
description: |-
Variants is a list of package source variants
Each variant defines components, applications, dependencies, and libraries for a specific configuration
items:
description: Variant defines a single variant configuration
properties:
components:
description: Components is a list of Helm releases to be installed
as part of this variant
items:
description: Component defines a single Helm release component
within a package source
properties:
install:
description: Install defines installation parameters for
this component
properties:
dependsOn:
description: DependsOn is a list of component names
that must be installed before this component
items:
type: string
type: array
namespace:
description: Namespace is the Kubernetes namespace
where the release will be installed
type: string
privileged:
description: Privileged indicates whether this release
requires privileged access
type: boolean
releaseName:
description: |-
ReleaseName is the name of the HelmRelease resource that will be created
If not specified, defaults to the component Name field
type: string
type: object
libraries:
description: |-
Libraries is a list of library names that this component depends on
These libraries must be defined at the variant level
items:
type: string
type: array
name:
description: Name is the unique identifier for this component
within the package source
type: string
path:
description: Path is the path to the Helm chart directory
type: string
valuesFiles:
description: ValuesFiles is a list of values file names
to use
items:
type: string
type: array
required:
- name
- path
type: object
type: array
dependsOn:
description: |-
DependsOn is a list of package source dependencies
For example: "cozystack.networking"
items:
type: string
type: array
libraries:
description: Libraries is a list of Helm library charts used
by components in this variant
items:
description: Library defines a Helm library chart
properties:
name:
description: Name is the optional name for library placed
in charts
type: string
path:
description: Path is the path to the library chart directory
type: string
required:
- path
type: object
type: array
name:
description: Name is the unique identifier for this variant
type: string
required:
- name
type: object
type: array
type: object
status:
description: PackageSourceStatus defines the observed state of PackageSource
properties:
conditions:
description: Conditions represents the latest available observations
of a PackageSource's state
items:
description: Condition contains details for one aspect of the current
state of this API Resource.
properties:
lastTransitionTime:
description: |-
lastTransitionTime is the last time the condition transitioned from one status to another.
This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable.
format: date-time
type: string
message:
description: |-
message is a human readable message indicating details about the transition.
This may be an empty string.
maxLength: 32768
type: string
observedGeneration:
description: |-
observedGeneration represents the .metadata.generation that the condition was set based upon.
For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date
with respect to the current state of the instance.
format: int64
minimum: 0
type: integer
reason:
description: |-
reason contains a programmatic identifier indicating the reason for the condition's last transition.
Producers of specific condition types may define expected values and meanings for this field,
and whether the values are considered a guaranteed API.
The value should be a CamelCase string.
This field may not be empty.
maxLength: 1024
minLength: 1
pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$
type: string
status:
description: status of the condition, one of True, False, Unknown.
enum:
- "True"
- "False"
- Unknown
type: string
type:
description: type of condition in CamelCase or in foo.example.com/CamelCase.
maxLength: 316
pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
type: string
required:
- lastTransitionTime
- message
- reason
- status
- type
type: object
type: array
variants:
description: |-
Variants is a comma-separated list of package variant names
This field is populated by the controller based on spec.variants keys
type: string
type: object
type: object
served: true
storage: true
subresources:
status: {}

View File

@@ -53,8 +53,13 @@ spec:
args:
- --leader-elect=true
- --install-flux=true
# CRDs are also in crds/ for initial helm install, but Helm never updates
# them on upgrade and never deletes them on uninstall. The operator applies
# embedded CRDs via server-side apply on every startup, ensuring they stay
# up to date. To fully remove CRDs, delete them manually after helm uninstall.
- --install-crds=true
- --metrics-bind-address=0
- --health-probe-bind-address=
- --health-probe-bind-address=0
{{- if .Values.cozystackOperator.disableTelemetry }}
- --disable-telemetry
{{- end }}
@@ -72,20 +77,11 @@ spec:
value: "7445"
{{- else if eq .Values.cozystackOperator.variant "generic" }}
env:
# Generic Kubernetes: read from ConfigMap
# Create cozystack-operator-config ConfigMap before applying this manifest
# Generic Kubernetes: API server endpoint
- name: KUBERNETES_SERVICE_HOST
valueFrom:
configMapKeyRef:
name: cozystack-operator-config
key: KUBERNETES_SERVICE_HOST
optional: false
value: {{ required "cozystack.apiServerHost is required in generic mode" .Values.cozystack.apiServerHost | quote }}
- name: KUBERNETES_SERVICE_PORT
valueFrom:
configMapKeyRef:
name: cozystack-operator-config
key: KUBERNETES_SERVICE_PORT
optional: false
value: {{ .Values.cozystack.apiServerPort | quote }}
{{- else if eq .Values.cozystackOperator.variant "hosted" }}
# Hosted: use in-cluster service account, no env override needed
env: []

View File

@@ -1,3 +0,0 @@
{{- range $path, $_ := .Files.Glob "definitions/*.yaml" }}
{{ $.Files.Get $path }}
{{- end }}

View File

@@ -1,3 +1,7 @@
{{- $validVariants := list "talos" "generic" "hosted" -}}
{{- if not (has .Values.cozystackOperator.variant $validVariants) -}}
{{- fail (printf "Invalid cozystackOperator.variant %q: must be one of talos, generic, hosted" .Values.cozystackOperator.variant) -}}
{{- end -}}
---
apiVersion: cozystack.io/v1alpha1
kind: PackageSource

View File

@@ -1,6 +1,15 @@
cozystackOperator:
# Deployment variant: talos, generic, hosted
variant: talos
image: ghcr.io/cozystack/cozystack/cozystack-operator:v1.0.0-beta.4@sha256:322dd7358df369525f76e6e43512482e38caec5315d36a878399d2d60bf2f18d
image: ghcr.io/cozystack/cozystack/cozystack-operator:v1.0.0-beta.6@sha256:c7490da9c1ccb51bff4dd5657ca6a33a29ac71ad9861dfa8c72fdfc8b5765b93
platformSourceUrl: 'oci://ghcr.io/cozystack/cozystack/cozystack-packages'
platformSourceRef: 'digest=sha256:b88502242b535a31ab33c06ffc0a96d1c67230d2db2c5e873fa23f6523592ff6'
platformSourceRef: 'digest=sha256:b29b87d1a2b80452ffd4db7516a102c30c55121552dcdb237055d4124d12c55d'
# Generic variant configuration (only used when cozystackOperator.variant=generic)
cozystack:
# Kubernetes API server host (IP only, no protocol/port)
# Must be the INTERNAL IP of the control-plane node
# (the IP visible on the node's network interface, not a public/NAT IP)
# Used by the operator and networking components (cilium, kube-ovn)
apiServerHost: ""
# Kubernetes API server port
apiServerPort: "6443"

View File

@@ -0,0 +1,116 @@
#!/bin/bash
# Migration 30 --> 31
# Convert VPC subnets from map format to array format in HelmRelease values.
# Map format: subnets: {name: {cidr: x}}
# Array format: subnets: [{name: name, cidr: x}]
# Idempotent: skips if subnets is already an array or empty/null.
set -euo pipefail
# ============================================================
# STEP 1: Discover all VirtualPrivateCloud HelmReleases
# ============================================================
echo "=== Discovering VirtualPrivateCloud HelmReleases ==="
INSTANCES=()
while IFS=/ read -r ns name; do
[ -z "$ns" ] && continue
INSTANCES+=("${ns}/${name}")
echo " Found: ${ns}/${name}"
done < <(kubectl get hr -A -l "apps.cozystack.io/application.kind=VirtualPrivateCloud" \
-o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}' 2>/dev/null)
if [ ${#INSTANCES[@]} -eq 0 ]; then
echo " No VirtualPrivateCloud HelmReleases found. Nothing to migrate."
kubectl create configmap -n cozy-system cozystack-version \
--from-literal=version=31 --dry-run=client -o yaml | kubectl apply -f-
exit 0
fi
echo " Total: ${#INSTANCES[@]} instance(s)"
# ============================================================
# STEP 2: Migrate each instance
# ============================================================
for entry in "${INSTANCES[@]}"; do
NAMESPACE="${entry%%/*}"
HR_NAME="${entry#*/}"
echo ""
echo "======================================================================"
echo "=== Processing: ${HR_NAME} in ${NAMESPACE}"
echo "======================================================================"
# --- Find values Secret ---
VALUES_SECRET=$(kubectl -n "$NAMESPACE" get hr "$HR_NAME" -o json | \
jq -r '.spec.valuesFrom // [] | map(select(.kind == "Secret" and (.name | test("cozystack-values") | not))) | .[0].name // ""')
VALUES_KEY=$(kubectl -n "$NAMESPACE" get hr "$HR_NAME" -o json | \
jq -r '.spec.valuesFrom // [] | map(select(.kind == "Secret" and (.name | test("cozystack-values") | not))) | .[0].valuesKey // "values.yaml"')
if [ -z "$VALUES_SECRET" ]; then
echo " [SKIP] No values Secret found for hr/${HR_NAME}"
continue
fi
if ! kubectl -n "$NAMESPACE" get secret "$VALUES_SECRET" --no-headers 2>/dev/null | grep -q .; then
echo " [SKIP] Secret ${VALUES_SECRET} not found"
continue
fi
echo " Reading values from secret: ${VALUES_SECRET} (key: ${VALUES_KEY})"
# --- Decode current values ---
# Escape dots in the key (e.g. "values.yaml") so kubectl jsonpath
# does not treat them as field separators
JSONPATH_KEY="${VALUES_KEY//./\\.}"
VALUES_YAML=$(kubectl -n "$NAMESPACE" get secret "$VALUES_SECRET" \
-o jsonpath="{.data.${JSONPATH_KEY}}" 2>/dev/null | base64 -d 2>/dev/null || true)
# Note: no fallback to .stringData here, since that field is write-only
# and is never returned by the API server on reads.
if [ -z "$VALUES_YAML" ]; then
echo " [SKIP] Could not read values from secret"
continue
fi
# --- Check subnets type ---
SUBNETS_TYPE=$(echo "$VALUES_YAML" | yq -r '.subnets | type')
case "$SUBNETS_TYPE" in
"!!map"|"object")
echo " [CONVERT] subnets is a map, converting to array"
;;
"!!seq"|"array")
echo " [SKIP] subnets is already an array"
continue
;;
"!!null"|"null")
echo " [SKIP] subnets is null/empty"
continue
;;
*)
echo " [SKIP] subnets has unexpected type: ${SUBNETS_TYPE}"
continue
;;
esac
# --- Convert map to array ---
# {name: {cidr: x}} -> [{name: name, cidr: x}]
NEW_VALUES=$(echo "$VALUES_YAML" | yq '
.subnets = ([.subnets | to_entries[] | {"name": .key} + .value])
')
# --- Patch the Secret ---
NEW_VALUES_B64=$(echo "$NEW_VALUES" | base64 | tr -d '\n') # strip line wraps added by GNU base64
echo " [PATCH] Updating secret/${VALUES_SECRET}"
kubectl -n "$NAMESPACE" get secret "$VALUES_SECRET" -o json | \
jq --arg key "$VALUES_KEY" --arg val "$NEW_VALUES_B64" \
'.data[$key] = $val' | \
kubectl apply -f -
echo " [OK] Converted subnets for ${HR_NAME}"
done
echo ""
echo "=== Migration complete (${#INSTANCES[@]} instance(s)) ==="
# Stamp version
kubectl create configmap -n cozy-system cozystack-version \
--from-literal=version=31 --dry-run=client -o yaml | kubectl apply -f-
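The yq expression at the heart of the migration is compact; its effect, emulated in plain Python for clarity (a sketch, not part of the script):

```python
def subnets_map_to_array(subnets: dict) -> list:
    """Convert {name: {cidr: ...}} into [{"name": name, "cidr": ...}],
    matching: .subnets = ([.subnets | to_entries[] | {"name": .key} + .value])
    """
    # The map key becomes the "name" field; remaining attributes are merged in.
    return [{"name": name, **attrs} for name, attrs in subnets.items()]

old = {"mysubnet0": {"cidr": "172.16.0.0/24"}, "mysubnet1": {"cidr": "172.16.1.0/24"}}
print(subnets_map_to_array(old))
```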

View File

@@ -0,0 +1,45 @@
#!/bin/bash
# Migration 31 --> 32
# Adopt tenant-root resources into cozystack-basics Helm release.
#
# In v0.41.x tenant-root Namespace and HelmRelease were applied via
# kubectl apply (no Helm tracking). In v1.0 they are managed by the
# cozystack-basics Helm release. Without Helm ownership annotations
# the install of cozystack-basics fails because the resources already
# exist. This migration adds the required annotations and labels so
# Helm can adopt them.
set -euo pipefail
RELEASE_NAME="cozystack-basics"
RELEASE_NS="cozy-system"
# Adopt Namespace tenant-root
if kubectl get namespace tenant-root >/dev/null 2>&1; then
echo "Adopting Namespace tenant-root into $RELEASE_NAME"
kubectl annotate namespace tenant-root \
meta.helm.sh/release-name="$RELEASE_NAME" \
meta.helm.sh/release-namespace="$RELEASE_NS" \
--overwrite
kubectl label namespace tenant-root \
app.kubernetes.io/managed-by=Helm \
--overwrite
fi
# Adopt HelmRelease tenant-root
if kubectl get helmrelease -n tenant-root tenant-root >/dev/null 2>&1; then
echo "Adopting HelmRelease tenant-root into $RELEASE_NAME"
kubectl annotate helmrelease -n tenant-root tenant-root \
meta.helm.sh/release-name="$RELEASE_NAME" \
meta.helm.sh/release-namespace="$RELEASE_NS" \
helm.sh/resource-policy=keep \
--overwrite
kubectl label helmrelease -n tenant-root tenant-root \
app.kubernetes.io/managed-by=Helm \
sharding.fluxcd.io/key=tenants \
--overwrite
fi
# Stamp version
kubectl create configmap -n cozy-system cozystack-version \
--from-literal=version=32 --dry-run=client -o yaml | kubectl apply -f-
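After the script runs, the adopted Namespace carries the ownership metadata Helm checks before taking over an existing resource; the resulting object, abridged:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-root
  annotations:
    meta.helm.sh/release-name: cozystack-basics
    meta.helm.sh/release-namespace: cozy-system
  labels:
    app.kubernetes.io/managed-by: Helm
```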

View File

@@ -0,0 +1,32 @@
---
apiVersion: cozystack.io/v1alpha1
kind: PackageSource
metadata:
name: cozystack.harbor-application
spec:
sourceRef:
kind: OCIRepository
name: cozystack-packages
namespace: cozy-system
path: /
variants:
- name: default
dependsOn:
- cozystack.networking
- cozystack.postgres-operator
- cozystack.redis-operator
- cozystack.objectstorage-controller
libraries:
- name: cozy-lib
path: library/cozy-lib
components:
- name: harbor-system
path: system/harbor
- name: harbor
path: apps/harbor
libraries: ["cozy-lib"]
- name: harbor-rd
path: system/harbor-rd
install:
namespace: cozy-system
releaseName: harbor-rd

View File

@@ -1,22 +0,0 @@
---
apiVersion: cozystack.io/v1alpha1
kind: PackageSource
metadata:
name: cozystack.kilo
spec:
sourceRef:
kind: OCIRepository
name: cozystack-packages
namespace: cozy-system
path: /
variants:
- name: default
dependsOn:
- cozystack.networking
components:
- name: kilo
path: system/kilo
install:
privileged: true
namespace: cozy-kilo
releaseName: kilo

View File

@@ -33,6 +33,31 @@ spec:
releaseName: cilium-networkpolicy
dependsOn:
- cilium
- name: cilium-kilo
dependsOn: []
components:
- name: cilium
path: system/cilium
valuesFiles:
- values.yaml
- values-talos.yaml
- values-kilo.yaml
install:
privileged: true
namespace: cozy-cilium
releaseName: cilium
dependsOn: []
- name: kilo
path: system/kilo
valuesFiles:
- values.yaml
- values-cilium.yaml
install:
privileged: true
namespace: cozy-kilo
releaseName: kilo
dependsOn:
- cilium
# Generic Cilium variant for non-Talos clusters (kubeadm, k3s, RKE2, etc.)
- name: cilium-generic
dependsOn: []

View File

@@ -11,6 +11,7 @@
{{include "cozystack.platform.package.default" (list "cozystack.mongodb-operator" $) }}
{{include "cozystack.platform.package.default" (list "cozystack.clickhouse-application" $) }}
{{include "cozystack.platform.package.default" (list "cozystack.foundationdb-application" $) }}
{{include "cozystack.platform.package.default" (list "cozystack.harbor-application" $) }}
{{include "cozystack.platform.package.default" (list "cozystack.kafka-application" $) }}
{{include "cozystack.platform.package.default" (list "cozystack.mariadb-application" $) }}
{{include "cozystack.platform.package.default" (list "cozystack.mongodb-application" $) }}

View File

@@ -5,8 +5,8 @@ sourceRef:
path: /
migrations:
enabled: false
image: ghcr.io/cozystack/cozystack/platform-migrations:v1.0.0-beta.4@sha256:f404d05834907b9b2695bbb37732bd1ae2e79b03da77dc91bb3fbc0fbc53dcbf
targetVersion: 30
image: ghcr.io/cozystack/cozystack/platform-migrations:v1.0.0-beta.6@sha256:37c78dafcedbdad94acd9912550db0b4875897150666b8a06edfa894de99064e
targetVersion: 32
# Bundle deployment configuration
bundles:
system:

View File

@@ -25,24 +25,17 @@ image-e2e-sandbox:
yq -i '.e2e.image = strenv(IMAGE)' values.yaml
rm -f images/e2e-sandbox.json
test: test-cluster test-openapi test-apps ## Run the end-to-end tests in existing sandbox
test: test-openapi test-apps ## Run the end-to-end tests in existing sandbox
copy-nocloud-image:
docker cp ../../../_out/assets/nocloud-amd64.raw.xz "${SANDBOX_NAME}":/workspace/_out/assets/nocloud-amd64.raw.xz
copy-installer-manifest:
docker cp ../../../_out/assets/cozystack-crds.yaml "${SANDBOX_NAME}":/workspace/_out/assets/cozystack-crds.yaml
docker cp ../../../_out/assets/cozystack-operator-talos.yaml "${SANDBOX_NAME}":/workspace/_out/assets/cozystack-operator-talos.yaml
prepare-cluster: copy-nocloud-image
docker exec "${SANDBOX_NAME}" sh -c 'cd /workspace && hack/cozytest.sh hack/e2e-prepare-cluster.bats'
install-cozystack: copy-installer-manifest
install-cozystack:
docker exec "${SANDBOX_NAME}" sh -c 'cd /workspace && hack/cozytest.sh hack/e2e-install-cozystack.bats'
test-cluster: copy-nocloud-image copy-installer-manifest ## Run the end-to-end for creating a cluster
docker exec "${SANDBOX_NAME}" sh -c 'cd /workspace && hack/cozytest.sh hack/e2e-cluster.bats'
test-openapi:
docker exec "${SANDBOX_NAME}" sh -c 'cd /workspace && hack/cozytest.sh hack/e2e-test-openapi.bats'

View File

@@ -1,2 +1,2 @@
e2e:
image: ghcr.io/cozystack/cozystack/e2e-sandbox:v1.0.0-beta.4@sha256:eac71ef0de3450fce96255629e77903630c63ade62b81e7055f1a689f92ee153
image: ghcr.io/cozystack/cozystack/e2e-sandbox:v1.0.0-beta.6@sha256:09af5901abcbed2b612d2d93c163e8ad3948bc55a1d8beae714b4fb2b8f7d91d

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/matchbox:v1.0.0-beta.4@sha256:cba9a2761c40ae6645e28da4cbc6d91eb86ddc886f4523cf20b3c0e40597d593
ghcr.io/cozystack/cozystack/matchbox:v1.0.0-beta.6@sha256:212f624957447f5a932fd5d4564eb8c97694d336b7dc877a2833c1513c0d074d

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/objectstorage-sidecar:v1.0.0-beta.4@sha256:235b194a531b70e266a10ef78d2955d19f5b659513f23d8b3cfbbc0dff7fc1c0
ghcr.io/cozystack/cozystack/objectstorage-sidecar:v1.0.0-beta.6@sha256:235b194a531b70e266a10ef78d2955d19f5b659513f23d8b3cfbbc0dff7fc1c0

View File

@@ -60,7 +60,7 @@ spec:
type: string
kind:
description: Kind is the kind of the application (e.g.,
VirtualMachine, MySQL).
VirtualMachine, MariaDB).
type: string
required:
- kind

View File

@@ -1,5 +1,5 @@
backupController:
image: "ghcr.io/cozystack/cozystack/backup-controller:v1.0.0-beta.4@sha256:2dcd5347683ee88012b0e558b3a240dec9942230e9c673a359e0196f407a0833"
image: "ghcr.io/cozystack/cozystack/backup-controller:v1.0.0-beta.6@sha256:365214a74ffc34a9314a62a7d4b491590051fc5486f6bae9913c0c1289983d43"
replicas: 2
debug: false
metrics:

View File

@@ -1,5 +1,5 @@
backupStrategyController:
image: "ghcr.io/cozystack/cozystack/backupstrategy-controller:v1.0.0-beta.4@sha256:c2c150918e5609f1d6663f56b6dcbdac47bee33154524230001fcd04165f4268"
image: "ghcr.io/cozystack/cozystack/backupstrategy-controller:v1.0.0-beta.6@sha256:aa04ee61dce11950162606fc8db2d5cbc6f5b32ba700f790b3f1eee10d65efb1"
replicas: 2
debug: false
metrics:

View File

@@ -0,0 +1,3 @@
cilium:
hostFirewall:
enabled: false

View File

@@ -1,3 +1,3 @@
cozystackAPI:
image: ghcr.io/cozystack/cozystack/cozystack-api:v1.0.0-beta.4@sha256:42f8d8120f7fd3bfe37f114eedf5ca8df62ac69a0d922d91a2c392772f3b46ee
image: ghcr.io/cozystack/cozystack/cozystack-api:v1.0.0-beta.6@sha256:d89cf68fb622d0dbef7db98db09d352efc93c2cce448d11f2d73dcd363e911b7
replicas: 2

View File

@@ -0,0 +1,118 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.16.4
name: cfomappings.dashboard.cozystack.io
spec:
group: dashboard.cozystack.io
names:
kind: CFOMapping
listKind: CFOMappingList
plural: cfomappings
singular: cfomapping
scope: Cluster
versions:
- name: v1alpha1
schema:
openAPIV3Schema:
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
description: |-
ArbitrarySpec holds schemaless user data and preserves unknown fields.
We map the entire .spec to a single JSON payload to mirror the source CRDs.
NOTE: Using apiextensionsv1.JSON avoids losing arbitrary structure during round-trips.
type: object
x-kubernetes-preserve-unknown-fields: true
status:
description: CommonStatus is a generic Status block with Kubernetes conditions.
properties:
conditions:
description: Conditions represent the latest available observations
of an object's state.
items:
description: Condition contains details for one aspect of the current
state of this API Resource.
properties:
lastTransitionTime:
description: |-
lastTransitionTime is the last time the condition transitioned from one status to another.
This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable.
format: date-time
type: string
message:
description: |-
message is a human readable message indicating details about the transition.
This may be an empty string.
maxLength: 32768
type: string
observedGeneration:
description: |-
observedGeneration represents the .metadata.generation that the condition was set based upon.
For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date
with respect to the current state of the instance.
format: int64
minimum: 0
type: integer
reason:
description: |-
reason contains a programmatic identifier indicating the reason for the condition's last transition.
Producers of specific condition types may define expected values and meanings for this field,
and whether the values are considered a guaranteed API.
The value should be a CamelCase string.
This field may not be empty.
maxLength: 1024
minLength: 1
pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$
type: string
status:
description: status of the condition, one of True, False, Unknown.
enum:
- "True"
- "False"
- Unknown
type: string
type:
description: type of condition in CamelCase or in foo.example.com/CamelCase.
maxLength: 316
pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
type: string
required:
- lastTransitionTime
- message
- reason
- status
- type
type: object
type: array
observedGeneration:
description: ObservedGeneration reflects the most recent generation
observed by the controller.
format: int64
type: integer
type: object
required:
- spec
type: object
served: true
storage: true
subresources:
status: {}

View File

@@ -1,4 +1,4 @@
cozystackController:
image: ghcr.io/cozystack/cozystack/cozystack-controller:v1.0.0-beta.4@sha256:915d07ef61e1fc3bdf87e4bfc4b8ae3920e7e33d74082778c7735ba36a08cdef
image: ghcr.io/cozystack/cozystack/cozystack-controller:v1.0.0-beta.6@sha256:d55a3c288934b1f69a00321bc8a94776915556b5f882fe6ac615e9de2701c61f
debug: false
disableTelemetry: false

View File

@@ -1,9 +1,10 @@
# imported from https://github.com/cozystack/openapi-ui-k8s-bff
# imported from https://github.com/PRO-Robotech/openapi-ui-k8s-bff
ARG NODE_VERSION=20.18.1
FROM node:${NODE_VERSION}-alpine AS builder
WORKDIR /src
ARG COMMIT_REF=183dc9dcbb0f8a1833dad642c35faa385c71e58d
# release/1.4.0
ARG COMMIT_REF=92e4b618eb9ad17b19827b5a2b7ceab33e8cf534
RUN wget -O- https://github.com/PRO-Robotech/openapi-ui-k8s-bff/archive/${COMMIT_REF}.tar.gz | tar xzf - --strip-components=1
ENV PATH=/src/node_modules/.bin:$PATH

View File

@@ -1,12 +1,13 @@
ARG NODE_VERSION=20.18.1
# openapi-k8s-toolkit
# imported from https://github.com/cozystack/openapi-k8s-toolkit
# imported from https://github.com/PRO-Robotech/openapi-k8s-toolkit
FROM node:${NODE_VERSION}-alpine AS openapi-k8s-toolkit-builder
RUN apk add git
WORKDIR /src
ARG COMMIT=cb2f122caafaa2fd5455750213d9e633017ec555
RUN wget -O- https://github.com/cozystack/openapi-k8s-toolkit/archive/${COMMIT}.tar.gz | tar -xzvf- --strip-components=1
# release/1.4.0
ARG COMMIT=c67029cc7b7495c65ee0406033576e773a73bb01
RUN wget -O- https://github.com/PRO-Robotech/openapi-k8s-toolkit/archive/${COMMIT}.tar.gz | tar -xzvf- --strip-components=1
COPY openapi-k8s-toolkit/patches /patches
RUN git apply /patches/*.diff
@@ -17,12 +18,13 @@ RUN npm run build
# openapi-ui
# imported from https://github.com/cozystack/openapi-ui
# imported from https://github.com/PRO-Robotech/openapi-ui
FROM node:${NODE_VERSION}-alpine AS builder
#RUN apk add git
WORKDIR /src
ARG COMMIT_REF=3cfbbf2156b6a5e4a1f283a032019530c0c2d37d
# release/1.4.0
ARG COMMIT_REF=6addca6939264ef2e39801baa88c1460cc1aa53e
RUN wget -O- https://github.com/PRO-Robotech/openapi-ui/archive/${COMMIT_REF}.tar.gz | tar xzf - --strip-components=1
#COPY openapi-ui/patches /patches
@@ -56,5 +58,12 @@ COPY --from=builder2 /src/node_modules /app/node_modules
COPY --from=builder2 /src/build /app/build
EXPOSE 8080
RUN sed -i -e 's|OpenAPI UI|Cozystack|g' build/index.html
# Fix Factory component: return null while loading instead of showing "Factory Not Found" 404
RUN APP_JS=$(find build -name "App-react.js" -type f | head -1) && \
if [ -n "$APP_JS" ]; then \
sed -i 's|const { data: factoryData } = useK8sSmartResource({|const { data: factoryData, isLoading: factoryIsLoading } = useK8sSmartResource({|' "$APP_JS" && \
sed -i '/Factory Not Found/s/return /return factoryIsLoading ? null : /' "$APP_JS" && \
echo "Factory loading patch applied to $APP_JS"; \
fi
USER 1001
CMD ["node", "/app/build/index.js"]


@@ -1,90 +0,0 @@
diff --git a/src/components/molecules/EnrichedTable/organisms/EnrichedTableProvider/utils.ts b/src/components/molecules/EnrichedTable/organisms/EnrichedTableProvider/utils.ts
index 87a0f12..fb2e1cc 100644
--- a/src/components/molecules/EnrichedTable/organisms/EnrichedTableProvider/utils.ts
+++ b/src/components/molecules/EnrichedTable/organisms/EnrichedTableProvider/utils.ts
@@ -134,22 +134,6 @@ export const prepare = ({
// impossible in k8s
return {}
})
- if (customFields.length > 0) {
- dataSource = dataSource.map((el: TJSON) => {
- const newFieldsForComplexJsonPath: Record<string, TJSON> = {}
- customFields.forEach(({ dataIndex, jsonPath }) => {
- const jpQueryResult = jp.query(el, `$${jsonPath}`)
- newFieldsForComplexJsonPath[dataIndex] =
- Array.isArray(jpQueryResult) && jpQueryResult.length === 1 ? jpQueryResult[0] : jpQueryResult
- })
- if (typeof el === 'object') {
- return { ...el, ...newFieldsForComplexJsonPath }
- }
- // impossible in k8s
- return { ...newFieldsForComplexJsonPath }
- })
- }
-
// Handle flatMap: expand rows for map objects
// Process all flatMap columns sequentially
if (flatMapColumns.length > 0 && dataSource) {
@@ -204,6 +188,62 @@ export const prepare = ({
currentDataSource = expandedDataSource
})
dataSource = currentDataSource
+ }
+
+ if (customFields.length > 0) {
+ dataSource = dataSource.map((el: TJSON) => {
+ const newFieldsForComplexJsonPath: Record<string, TJSON> = {}
+ customFields.forEach(({ dataIndex, jsonPath }) => {
+ let fieldValue: TJSON = null
+ let handled = false
+
+ const flatMapMatch = jsonPath.match(/^(.*)\[(_flatMap[^\]]+_Key)\](.*)$/)
+ if (flatMapMatch && el && typeof el === 'object' && !Array.isArray(el)) {
+ const basePath = flatMapMatch[1]
+ const keyField = flatMapMatch[2]
+ const tailPath = flatMapMatch[3]
+ const keyValue = (el as Record<string, unknown>)[keyField]
+ if (keyValue !== null && keyValue !== undefined) {
+ const baseResult = jp.query(el, `$${basePath}`)[0]
+ if (baseResult && typeof baseResult === 'object' && !Array.isArray(baseResult)) {
+ const baseValue = (baseResult as Record<string, unknown>)[String(keyValue)]
+ if (tailPath) {
+ const normalizedTailPath =
+ tailPath.startsWith('.') || tailPath.startsWith('[') ? tailPath : `.${tailPath}`
+ const tailResult = jp.query(baseValue, `$${normalizedTailPath}`)
+ fieldValue = Array.isArray(tailResult) && tailResult.length === 1 ? tailResult[0] : tailResult
+ } else {
+ fieldValue = baseValue as TJSON
+ }
+ handled = true
+ }
+ }
+ }
+
+ if (!handled) {
+ let resolvedJsonPath = jsonPath
+ if (el && typeof el === 'object' && !Array.isArray(el)) {
+ resolvedJsonPath = jsonPath.replace(/\[(_flatMap[^\]]+_Key)\]/g, (match, keyField) => {
+ const keyValue = (el as Record<string, unknown>)[keyField]
+ if (keyValue === null || keyValue === undefined) {
+ return match
+ }
+ const escaped = String(keyValue).replace(/'/g, "\\'")
+ return `['${escaped}']`
+ })
+ }
+ const jpQueryResult = jp.query(el, `$${resolvedJsonPath}`)
+ fieldValue = Array.isArray(jpQueryResult) && jpQueryResult.length === 1 ? jpQueryResult[0] : jpQueryResult
+ }
+
+ newFieldsForComplexJsonPath[dataIndex] = fieldValue
+ })
+ if (typeof el === 'object') {
+ return { ...el, ...newFieldsForComplexJsonPath }
+ }
+ // impossible in k8s
+ return { ...newFieldsForComplexJsonPath }
+ })
}
} else {
dataSource = dataItems.map((el: TJSON) => {

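The patch above resolves jsonPaths containing a synthetic `[_flatMap…_Key]` segment per row, after flatMap expansion has stored the map key on the row itself. The core substitution step can be sketched standalone; this is a minimal illustration, not the UI's actual code — `jp.query` from the jsonpath package is omitted, and the field names (`_flatMapStatus_Key`, `status`) are hypothetical examples:

```javascript
// Minimal sketch of the per-row key substitution from the patch above.
// A jsonPath like ".status[_flatMapStatus_Key].ok" is rewritten into a
// concrete path by replacing the synthetic key segment with the row's
// stored map key, quoting and escaping it as a string index.
const resolveJsonPath = (jsonPath, row) =>
  jsonPath.replace(/\[(_flatMap[^\]]+_Key)\]/g, (match, keyField) => {
    const keyValue = row[keyField];
    // Leave the segment untouched when the row has no key for it.
    if (keyValue === null || keyValue === undefined) return match;
    const escaped = String(keyValue).replace(/'/g, "\\'");
    return `['${escaped}']`;
  });

const row = { _flatMapStatus_Key: 'ready', status: { ready: { ok: true } } };
console.log(resolveJsonPath('.status[_flatMapStatus_Key].ok', row));
// → .status['ready'].ok
```

The resolved path can then be handed to a jsonpath query (`jp.query(row, '$' + resolvedPath)`) as the patch does in its fallback branch.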

@@ -33,18 +33,18 @@ index 8bcef4d..2551e92 100644
if (key === 'edit') {
+ // Special case: redirect tenantmodules from core.cozystack.io to apps.cozystack.io with plural form
+ let apiGroupAndVersion = value.apiGroupAndVersion
+ let typeName = value.typeName
+ if (apiGroupAndVersion?.startsWith('core.cozystack.io/') && typeName === 'tenantmodules') {
+ let plural = value.plural
+ if (apiGroupAndVersion?.startsWith('core.cozystack.io/') && plural === 'tenantmodules') {
+ const appsApiVersion = apiGroupAndVersion.replace('core.cozystack.io/', 'apps.cozystack.io/')
+ const pluralTypeName = getPluralForm(value.entryName)
+ const pluralName = getPluralForm(value.name)
+ apiGroupAndVersion = appsApiVersion
+ typeName = pluralTypeName
+ plural = pluralName
+ }
navigate(
`${baseprefix}/${value.cluster}${value.namespace ? `/${value.namespace}` : ''}${
value.syntheticProject ? `/${value.syntheticProject}` : ''
- }/${value.pathPrefix}/${value.apiGroupAndVersion}/${value.typeName}/${value.entryName}?backlink=${
+ }/${value.pathPrefix}/${apiGroupAndVersion}/${typeName}/${value.entryName}?backlink=${
- }/${value.pathPrefix}/${value.apiGroupAndVersion}/${value.plural}/${value.name}?backlink=${
+ }/${value.pathPrefix}/${apiGroupAndVersion}/${plural}/${value.name}?backlink=${
value.backlink
}`,
)

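The redirect special case above can be condensed into a standalone sketch. This is an illustration under stated assumptions, not the dashboard's code: `getPluralForm` here is a naive `+ 's'` stand-in for the UI helper, and the returned path fragment omits the cluster/namespace prefix handled by `navigate` in the real component:

```javascript
// Stand-in pluralizer; the real getPluralForm helper lives in the UI codebase.
const getPluralForm = (name) => `${name}s`;

// Build the edit-navigation target, redirecting tenantmodules from
// core.cozystack.io to apps.cozystack.io with a per-entry plural form.
function redirectTarget(value) {
  let apiGroupAndVersion = value.apiGroupAndVersion;
  let plural = value.plural;
  if (apiGroupAndVersion?.startsWith('core.cozystack.io/') && plural === 'tenantmodules') {
    apiGroupAndVersion = apiGroupAndVersion.replace('core.cozystack.io/', 'apps.cozystack.io/');
    plural = getPluralForm(value.name);
  }
  return `${apiGroupAndVersion}/${plural}/${value.name}`;
}

console.log(redirectTarget({
  apiGroupAndVersion: 'core.cozystack.io/v1alpha1',
  plural: 'tenantmodules',
  name: 'mymodule',
}));
// → apps.cozystack.io/v1alpha1/mymodules/mymodule
```

Entries outside `core.cozystack.io` (or with any other plural) pass through unchanged, matching the patch's behavior of only rewriting the tenantmodules case.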

@@ -1,6 +1,6 @@
{{- $brandingConfig := .Values._cluster.branding | default dict }}
{{- $tenantText := "v1.0.0-beta.4" }}
{{- $tenantText := "v1.0.0-beta.6" }}
{{- $footerText := "Cozystack" }}
{{- $titleText := "Cozystack Dashboard" }}
{{- $logoText := "" }}


@@ -18,6 +18,8 @@ metadata:
nginx.ingress.kubernetes.io/proxy-body-size: 100m
nginx.ingress.kubernetes.io/proxy-buffer-size: 100m
nginx.ingress.kubernetes.io/proxy-buffers-number: "4"
nginx.ingress.kubernetes.io/proxy-read-timeout: "86400"
nginx.ingress.kubernetes.io/proxy-send-timeout: "86400"
name: dashboard-web-ingress
spec:
ingressClassName: {{ $exposeIngress }}


@@ -32,6 +32,19 @@ data:
proxy_pass http://incloud-web-nginx.{{ .Release.Namespace }}.svc:8080;
}
location /k8s/clusters/default/ {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
rewrite /k8s/clusters/default/(.*) /$1 break;
proxy_pass https://kubernetes.default.svc:443;
}
location /k8s {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
@@ -40,7 +53,7 @@ data:
proxy_set_header Host $host;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
rewrite /k8s/(.*) /$1 break;
proxy_pass https://kubernetes.default.svc:443;
}


@@ -45,6 +45,24 @@ spec:
value: v1alpha1
- name: BASE_NAMESPACE_FULL_PATH
value: "/apis/core.cozystack.io/v1alpha1/tenantnamespaces"
- name: BASE_NAVIGATION_RESOURCE_PLURAL
value: navigations
- name: BASE_NAVIGATION_RESOURCE_NAME
value: navigation
- name: BASE_FRONTEND_PREFIX
value: /openapi-ui
- name: BASE_FACTORY_NAMESPACED_API_KEY
value: base-factory-namespaced-api
- name: BASE_FACTORY_CLUSTERSCOPED_API_KEY
value: base-factory-clusterscoped-api
- name: BASE_FACTORY_NAMESPACED_BUILTIN_KEY
value: base-factory-namespaced-builtin
- name: BASE_FACTORY_CLUSTERSCOPED_BUILTIN_KEY
value: base-factory-clusterscoped-builtin
- name: BASE_NAMESPACE_FACTORY_KEY
value: base-factory-clusterscoped-builtin
- name: BASE_ALLOWED_AUTH_HEADERS
value: user-agent,accept,content-type,origin,referer,accept-encoding,cookie,authorization
- name: LOGGER
value: "true"
- name: LOGGER_WITH_HEADERS
@@ -73,7 +91,7 @@ spec:
name: bff
ports:
- containerPort: 64231
name: http
name: http-bff
protocol: TCP
resources:
limits:
@@ -98,7 +116,6 @@ spec:
type: RuntimeDefault
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts: null
- env:
- name: BASEPREFIX
value: /openapi-ui
@@ -108,40 +125,60 @@ spec:
value: dashboard.cozystack.io
- name: CUSTOMIZATION_API_VERSION
value: v1alpha1
- name: CUSTOMIZATION_NAVIGATION_RESOURCE
value: navigation
- name: CUSTOMIZATION_CFOMAPPING_RESOURCE_NAME
value: cfomapping
- name: CUSTOMIZATION_CFOMAPPING_RESOURCE_PLURAL
value: cfomappings
- name: CUSTOMIZATION_CFO_FALLBACK_ID
value: ""
- name: CUSTOMIZATION_NAVIGATION_RESOURCE_NAME
value: navigation
- name: CUSTOMIZATION_NAVIGATION_RESOURCE_PLURAL
value: navigations
- name: CUSTOMIZATION_SIDEBAR_FALLBACK_ID
value: stock-project-api-table
- name: CUSTOMIZATION_BREADCRUMBS_FALLBACK_ID
value: stock-project-api-table
- name: INSTANCES_API_GROUP
value: dashboard.cozystack.io
- name: INSTANCES_RESOURCE_NAME
value: instances
- name: INSTANCES_VERSION
- name: INSTANCES_API_VERSION
value: v1alpha1
- name: MARKETPLACE_GROUP
value: dashboard.cozystack.io
- name: INSTANCES_PLURAL
value: instances
- name: MARKETPLACE_KIND
value: MarketplacePanel
- name: MARKETPLACE_RESOURCE_NAME
- name: MARKETPLACE_PLURAL
value: marketplacepanels
- name: MARKETPLACE_VERSION
value: v1alpha1
- name: NAVIGATE_FROM_CLUSTERLIST
value: /openapi-ui/~recordValue~/api-table/core.cozystack.io/v1alpha1/tenantnamespaces
- name: PROJECTS_API_GROUP
value: core.cozystack.io
- name: PROJECTS_RESOURCE_NAME
value: tenantnamespaces
- name: PROJECTS_VERSION
- name: PROJECTS_API_VERSION
value: v1alpha1
- name: PROJECTS_PLURAL
value: tenantnamespaces
- name: CUSTOM_NAMESPACE_API_RESOURCE_API_GROUP
value: core.cozystack.io
- name: CUSTOM_NAMESPACE_API_RESOURCE_API_VERSION
value: v1alpha1
- name: CUSTOM_NAMESPACE_API_RESOURCE_RESOURCE_NAME
- name: CUSTOM_NAMESPACE_API_RESOURCE_PLURAL
value: tenantnamespaces
- name: BASE_FACTORY_NAMESPACED_API_KEY
value: base-factory-namespaced-api
- name: BASE_FACTORY_CLUSTERSCOPED_API_KEY
value: base-factory-clusterscoped-api
- name: BASE_FACTORY_NAMESPACED_BUILTIN_KEY
value: base-factory-namespaced-builtin
- name: BASE_FACTORY_CLUSTERSCOPED_BUILTIN_KEY
value: base-factory-clusterscoped-builtin
- name: BASE_NAMESPACE_FACTORY_KEY
value: base-factory-clusterscoped-builtin
- name: USE_NAMESPACE_NAV
value: "true"
- name: USE_NEW_NAVIGATION
value: "true"
- name: HIDE_NAVIGATION
value: "true"
- name: LOGIN_URL
value: "/oauth2/userinfo"
- name: LOGOUT_URL
@@ -225,17 +262,13 @@ spec:
type: RuntimeDefault
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts: null
dnsPolicy: ClusterFirst
enableServiceLinks: false
hostIPC: false
hostNetwork: false
hostPID: false
preemptionPolicy: null
priorityClassName: system-cluster-critical
restartPolicy: Always
runtimeClassName: null
schedulerName: default-scheduler
serviceAccountName: incloud-web-web
terminationGracePeriodSeconds: 30
volumes: null


@@ -1,6 +1,6 @@
openapiUI:
image: ghcr.io/cozystack/cozystack/openapi-ui:v1.0.0-beta.4@sha256:adbd07c7bde083fbf3a2fb2625ec4adbe6718a0f6f643b2505cc0029c2c0f724
image: ghcr.io/cozystack/cozystack/openapi-ui:v1.0.0-beta.6@sha256:c333637673a9e878f6c4ed0fc96db55967bbcf94b2434d075b0f0c6fcfcf9eff
openapiUIK8sBff:
image: ghcr.io/cozystack/cozystack/openapi-ui-k8s-bff:v1.0.0-beta.4@sha256:1f7827a1978bd9c81ac924dd0e78f6a3ce834a9a64af55047e220812bc15a944
image: ghcr.io/cozystack/cozystack/openapi-ui-k8s-bff:v1.0.0-beta.6@sha256:1b3ea6d4c7dbbe6a8def3b2807fffdfab2ac4afc39d7a846e57dd491fa168f92
tokenProxy:
image: ghcr.io/cozystack/cozystack/token-proxy:v1.0.0-beta.4@sha256:2e280991e07853ea48f97b0a42946afffa10d03d6a83d41099ed83e6ffc94fdc
image: ghcr.io/cozystack/cozystack/token-proxy:v1.0.0-beta.6@sha256:2e280991e07853ea48f97b0a42946afffa10d03d6a83d41099ed83e6ffc94fdc


@@ -1 +1 @@
ghcr.io/cozystack/cozystack/grafana-dashboards:v1.0.0-beta.4@sha256:e866b5b3874b9d390b341183d2ee070e1387440c14cfe51af831695def6dc2ec
ghcr.io/cozystack/cozystack/grafana-dashboards:v1.0.0-beta.6@sha256:7a3c9af59f8d74d5a23750bbc845c7de64610dbd4d4f84011e10be037b3ce2a0


@@ -0,0 +1,3 @@
apiVersion: v2
name: harbor-rd
version: 0.0.0 # Placeholder, the actual version will be automatically set during the build process


@@ -0,0 +1,4 @@
export NAME=harbor-rd
export NAMESPACE=cozy-system
include ../../../hack/package.mk

File diff suppressed because one or more lines are too long

Some files were not shown because too many files have changed in this diff.