Compare commits


1 commit

Author: Timofei Larkin
SHA1: 1dd27f6b23
Message: [cozystack-scheduler] Add custom scheduler as an optional system package
## What this PR does

Adds the cozystack-scheduler as an optional system package, vendored from
https://github.com/cozystack/cozystack-scheduler. The scheduler extends
the default kube-scheduler with SchedulingClass-aware affinity plugins,
allowing platform operators to define cluster-wide scheduling constraints
via a SchedulingClass CRD. Pods opt in via the
`scheduler.cozystack.io/scheduling-class` annotation.
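
A minimal sketch of the opt-in flow (the annotation key is the one this PR documents; the SchedulingClass schema and the scheduler name below are illustrative assumptions, not the chart's verified defaults):

```yaml
# Hypothetical SchedulingClass -- spec fields are assumed for illustration.
apiVersion: scheduler.cozystack.io/v1alpha1
kind: SchedulingClass
metadata:
  name: tenant-workloads
spec:
  nodeAffinity:                        # cluster-wide constraint (assumed schema)
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node-role.kubernetes.io/tenant
              operator: Exists
---
# A pod opts in via the annotation added in this PR.
apiVersion: v1
kind: Pod
metadata:
  name: demo
  annotations:
    scheduler.cozystack.io/scheduling-class: tenant-workloads
spec:
  schedulerName: cozystack-scheduler   # assumed scheduler profile name
  containers:
    - name: app
      image: nginx
```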

The package includes:
- Helm chart with RBAC, ConfigMap, Deployment, and CRD
- PackageSource definition for the cozystack package system
- Optional inclusion in the platform system bundle

### Release note

```release-note
[cozystack-scheduler] Add cozystack-scheduler as an optional system
package. The custom scheduler supports SchedulingClass CRDs for
cluster-wide node affinity, pod affinity, and topology spread constraints.
```

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
Date: 2026-03-10 22:43:41 +03:00
255 changed files with 31528 additions and 28143 deletions

.github/CODEOWNERS

@@ -1 +1 @@
-* @kvaps @lllamnyp @lexfrei @androndo @IvanHunters @sircthulhu
+* @kvaps @lllamnyp @lexfrei @androndo @IvanHunters

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,57 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v1.0.0-rc.2
-->
> **⚠️ Release Candidate Warning**: This is a release candidate intended for final validation before the stable v1.0.0 release. Breaking changes are not expected at this stage, but please test thoroughly before deploying to production.
## Features and Improvements
* **[keycloak] Allow custom Ingress hostname via values**: Added an `ingress.host` field to the cozy-keycloak chart values, allowing operators to override the default `keycloak.<root-host>` Ingress hostname. The custom hostname is applied to both the Ingress resource and the `KC_HOSTNAME` environment variable in the StatefulSet. When left empty, the original behavior is preserved (fully backward compatible) ([**@sircthulhu**](https://github.com/sircthulhu) in #2101).
## Fixes
* **[platform] Fix upgrade issues in migrations, etcd timeout, and migration script**: Fixed multiple upgrade failures discovered during v0.41.1 → v1.0 upgrade testing. Migration 26 now uses the `cozystack.io/ui=true` label (always present on v0.41.1) instead of the new label that depends on migration 22 having run, and adds robust Helm secret deletion with fallback and verification. Migrations 28 and 29 wrap `grep` calls to prevent `pipefail` exits and fix the reconcile annotation to use RFC3339 format. Migration 27 now skips missing CRDs and adds a name-pattern fallback for Helm secret deletion. The etcd HelmRelease timeout is increased from 10m to 30m to accommodate TLS cert rotation hooks. The `migrate-to-version-1.0.sh` script gains the missing `bundle-disable`, `bundle-enable`, `expose-ingress`, and `expose-services` field mappings ([**@kvaps**](https://github.com/kvaps) in #2096).
* **[platform] Fix orphaned -rd HelmReleases after application renames**: After the `ferretdb→mongodb`, `mysql→mariadb`, and `virtual-machine→vm-disk+vm-instance` renames, the system-level `-rd` HelmReleases in `cozy-system` (`ferretdb-rd`, `mysql-rd`, `virtual-machine-rd`) were left orphaned, referencing ExternalArtifacts that no longer exist and causing persistent reconciliation failures. Migrations 28 and 29 are updated to remove these resources, and migration 33 is added as a safety net for clusters that already passed those migrations ([**@kvaps**](https://github.com/kvaps) in #2102).
* **[monitoring-agents] Fix FQDN resolution regression in tenant workload clusters**: The fix introduced in #2075 used `_cluster.cluster-domain` references in `values.yaml`, but `_cluster` values are not accessible from Helm subchart contexts — meaning fluent-bit received empty hostnames and failed to forward logs. This PR replaces the `_cluster` references with a new `global.clusterDomain` variable (empty by default for management clusters, set to the cluster domain for tenant clusters), which is correctly shared with all subcharts ([**@kvaps**](https://github.com/kvaps) in #2086).
* **[dashboard] Fix legacy templating and cluster identifier in sidebar links**: Standardized the cluster identifier used across dashboard menu links, administration links, and API request paths, resolving incorrect or broken link targets for the Backups and External IPs sidebar sections ([**@androndo**](https://github.com/androndo) in #2093).
* **[dashboard] Fix backupjobs creation form and sidebar backup category identifier**: Fixed the backup job creation form configuration, adding the required Name, Namespace, Plan Name, Application, and Backup Class fields. Fixed the sidebar backup category identifier that was causing incorrect navigation ([**@androndo**](https://github.com/androndo) in #2103).
## Documentation
* **[website] Add Helm chart development principles guide**: Added a new developer guide section documenting Cozystack's four core Helm chart principles: easy upstream updates, local-first artifacts, local dev/test workflow, and no external dependencies ([**@kvaps**](https://github.com/kvaps) in cozystack/website#418).
* **[website] Add network architecture overview**: Added comprehensive network architecture documentation covering the multi-layered networking stack — MetalLB (L2/BGP), Cilium eBPF (kube-proxy replacement), Kube-OVN (centralized IPAM), and tenant isolation with identity-based eBPF policies — with Mermaid diagrams for all major traffic flows ([**@IvanHunters**](https://github.com/IvanHunters) in cozystack/website#422).
* **[website] Update documentation to use jsonpatch for service exposure**: Improved `kubectl patch` commands throughout installation and configuration guides to use JSON Patch `add` operations for extending arrays instead of replacing them wholesale, making the documented commands safer and more precise ([**@sircthulhu**](https://github.com/sircthulhu) in cozystack/website#427).
* **[website] Update certificates section in Platform Package documentation**: Updated the certificate configuration documentation to reflect the new `solver` and `issuerName` fields introduced in v1.0.0-rc.1, replacing the legacy `issuerType` references ([**@myasnikovdaniil**](https://github.com/myasnikovdaniil) in cozystack/website#429).
* **[website] Add tenant Kubernetes cluster log querying guide**: Added documentation for querying logs from tenant Kubernetes clusters in Grafana using VictoriaLogs labels (`tenant`, `kubernetes_namespace_name`, `kubernetes_pod_name`), including the `monitoringAgents` addon prerequisite and step-by-step filtering examples ([**@IvanHunters**](https://github.com/IvanHunters) in cozystack/website#430).
* **[website] Replace non-idempotent commands with idempotent alternatives**: Updated `helm install` to `helm upgrade --install`, `kubectl create -f` to `kubectl apply -f`, and `kubectl create ns` to the dry-run+apply pattern across all installation and deployment guides so commands can be safely re-run ([**@lexfrei**](https://github.com/lexfrei) in cozystack/website#431).
* **[website] Fix broken documentation links with `.md` suffix**: Fixed incorrect internal links with `.md` suffix across virtualization guides for both v0 and v1 documentation, standardizing link text to "Developer Guide" ([**@cheese**](https://github.com/cheese) in cozystack/website#432).
## Contributors
We'd like to thank all contributors who made this release possible:
* [**@androndo**](https://github.com/androndo)
* [**@cheese**](https://github.com/cheese)
* [**@IvanHunters**](https://github.com/IvanHunters)
* [**@kvaps**](https://github.com/kvaps)
* [**@lexfrei**](https://github.com/lexfrei)
* [**@myasnikovdaniil**](https://github.com/myasnikovdaniil)
* [**@sircthulhu**](https://github.com/sircthulhu)
### New Contributors
We're excited to welcome our first-time contributors:
* [**@cheese**](https://github.com/cheese) - First contribution!
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v1.0.0-rc.1...v1.0.0-rc.2


@@ -1,289 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v1.0.0
-->
# Cozystack v1.0.0 — "Stable"
We are thrilled to announce **Cozystack v1.0.0**, the first stable major release of the Cozystack platform. This milestone represents a fundamental architectural evolution from the v0.x series, introducing a fully operator-driven package management system, a comprehensive backup and restore framework, a redesigned virtual machine architecture, and a rich set of new managed applications — all hardened through an extensive alpha, beta, and release-candidate cycle.
## Feature Highlights
### Package-Based Architecture with Cozystack Operator
The most significant architectural change in v1.0.0 is the replacement of HelmRelease bundle deployments with a declarative **Package** and **PackageSource** model managed by the new `cozystack-operator`. Operators now define their platform configuration in a structured `values.yaml` and the operator reconciles the desired state by managing Package and PackageSource resources across the cluster.
The operator also takes ownership of CRD lifecycle — installing and updating CRDs from embedded manifests at every startup — eliminating the stale-CRD problem that affected Helm-only installations. Flux sharding has been added to distribute tenant HelmRelease reconciliation across multiple Flux controllers, providing horizontal scalability in large multi-tenant environments.
A migration script (`hack/migrate-to-version-1.0.sh`) is provided for upgrading existing v0.x clusters, along with 33 incremental migration steps that automate resource renaming, secret cleanup, and configuration conversion.
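As a rough sketch of the declarative model (the Package and PackageSource kinds are introduced in this release; the API group and spec fields below are illustrative assumptions):
```yaml
apiVersion: cozystack.io/v1alpha1        # assumed API group/version
kind: PackageSource
metadata:
  name: cozystack
spec:
  url: oci://ghcr.io/cozystack/cozystack # assumed source reference
---
apiVersion: cozystack.io/v1alpha1
kind: Package
metadata:
  name: monitoring
  namespace: cozy-monitoring
spec:
  sourceRef:
    name: cozystack                      # assumed field name
  suspend: false                         # preserved by the reconciler (see Fixes)
```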
### Comprehensive Backup and Restore System
v1.0.0 ships a fully featured, production-ready backup and restore framework built on Velero integration. Users can define **BackupClass** resources to describe backup storage targets, create **BackupPlan** schedules, and trigger **RestoreJob** resources for end-to-end application recovery.
Virtual machine backups are supported natively via the Velero KubeVirt plugin, which captures consistent VM disk snapshots alongside metadata. The backup controller and the backup strategy sub-controllers (including the VM-specific strategy) are installed by default, and a full dashboard UI allows users to monitor backup status, view backup job history, and initiate restore workflows.
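A hedged sketch of how the three resources relate (only the kinds are named in this release; the API group and spec fields are assumptions):
```yaml
apiVersion: backups.cozystack.io/v1alpha1   # assumed API group
kind: BackupClass
metadata:
  name: s3-default
spec:
  provider: velero                          # assumed: Velero-backed storage target
---
apiVersion: backups.cozystack.io/v1alpha1
kind: BackupPlan
metadata:
  name: nightly
spec:
  backupClassName: s3-default               # assumed field
  schedule: "0 3 * * *"                     # assumed cron-style schedule
---
apiVersion: backups.cozystack.io/v1alpha1
kind: RestoreJob
metadata:
  name: restore-demo
spec:
  backupName: nightly-20260310              # assumed reference to a completed backup
```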
### Redesigned Virtual Machine Architecture
The legacy `virtual-machine` application has been replaced with a two-resource architecture: **`vm-disk`** for managing persistent disks and **`vm-instance`** for managing VM lifecycle. This separation provides cleaner disk/instance management, allows disks to be reused across VM instances, and aligns with modern KubeVirt patterns.
New capabilities include: a `cpuModel` field for direct CPU model specification without using an instanceType; the ability to switch between `instanceType`-based and custom resource-based configurations; migration from the deprecated `running` field to `runStrategy`; and native **RWX (NFS) filesystem support** in the KubeVirt CSI driver, enabling multiple pods to mount the same persistent volume simultaneously.
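A minimal sketch of the two-resource split (`runStrategy` and `cpuModel` are the fields named above; the API group and remaining fields are assumptions):
```yaml
apiVersion: apps.cozystack.io/v1alpha1   # assumed API group
kind: VMDisk
metadata:
  name: web-disk
spec:
  storage: 20Gi                          # assumed field
---
apiVersion: apps.cozystack.io/v1alpha1
kind: VMInstance
metadata:
  name: web
spec:
  disks:
    - name: web-disk                     # disks are reusable across instances
  runStrategy: Always                    # replaces the deprecated `running` field
  cpuModel: host-passthrough             # direct CPU model, no instanceType required
```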
### New Managed Applications
v1.0.0 expands the application catalog significantly:
- **MongoDB**: A fully managed MongoDB replica set with persistent storage, monitoring integration, and unified user/database configuration API.
- **Qdrant**: A high-performance vector database for AI and machine learning workloads, supporting single-replica and clustered modes with API key authentication and optional external LoadBalancer access.
- **Harbor**: A fully managed OCI container registry backed by CloudNativePG, Redis operator, and COSI BucketClaim (SeaweedFS). Includes Trivy vulnerability scanner, auto-generated admin credentials, and TLS via cert-manager.
- **NATS**: Enhanced with full Grafana monitoring dashboards for JetStream and server metrics, Prometheus support with TLS-aware configuration, and updated image customization options.
- **MariaDB**: The `mysql` application is renamed to `mariadb`, accurately reflecting the underlying engine. An automatic migration (migration 27) converts all existing MySQL resources to use the `mariadb` naming.
FerretDB has been removed from the catalog as it is superseded by native MongoDB support.
### Multi-Location Networking with Kilo and cilium-kilo
Cozystack v1.0.0 introduces first-class support for multi-location clusters via the **Kilo** WireGuard mesh networking package. Kilo automatically establishes encrypted WireGuard tunnels between nodes in different network segments, enabling seamless cross-region communication.
A new integrated **`cilium-kilo`** networking variant combines Cilium eBPF CNI with Kilo's WireGuard overlay in a single platform configuration selection. This variant enables `enable-ipip-termination` in Cilium and deploys Kilo with `--compatibility=cilium`, allowing Cilium network policies to function correctly over the WireGuard mesh — without any manual configuration of the two components.
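For illustration, opting into the variant might look like this in the platform values (the exact key is an assumption; the Cilium and Kilo settings named above are wired up automatically):
```yaml
networking:
  variant: cilium-kilo   # assumed key; deploys Cilium eBPF CNI + Kilo WireGuard mesh
```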
### Flux Sharding for Scalable Multi-Tenancy
Tenant HelmRelease reconciliation is now distributed across multiple Flux controllers via sharding labels. Each tenant workload is assigned to a shard based on a deterministic hash, preventing a single Flux controller from becoming a bottleneck in large multi-tenant environments. The platform operator manages the shard assignment automatically, and new shards can be added by scaling the Flux deployment.
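Flux implements sharding by matching a label on reconciled objects, so a tenant HelmRelease assigned to a shard would carry something like this (the label key follows upstream Flux's sharding convention; the shard name is an assumption):
```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: tenant-app
  namespace: tenant-foo
  labels:
    sharding.fluxcd.io/key: shard-2   # assigned automatically by the platform operator
```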
## Major Features and Improvements
### Cozystack Operator
* **[cozystack-operator] Introduce Package and PackageSource APIs**: Added new CRDs for declarative package management, defining the full API for Package and PackageSource resources ([**@kvaps**](https://github.com/kvaps) in #1740, #1741, #1755, #1756, #1760, #1761).
* **[platform] Migrate from HelmRelease bundles to Package-based deployment**: Replaced HelmRelease bundle system with Package resources managed by cozystack-operator, including restructured values.yaml with full configuration support for networking, publishing, authentication, scheduling, branding, and resources ([**@kvaps**](https://github.com/kvaps) in #1816).
* **[cozystack-operator] Add automatic CRD installation at startup**: Added `--install-crds` flag to install embedded CRD manifests on every startup via server-side apply, ensuring CRDs and the PackageSource are always up to date ([**@lexfrei**](https://github.com/lexfrei) in #2060).
* **[installer] Remove CRDs from Helm chart, delegate lifecycle to operator**: The `cozy-installer` Helm chart no longer ships CRDs; CRD lifecycle is fully managed by the Cozystack operator ([**@lexfrei**](https://github.com/lexfrei) in #2074).
* **[cozystack-operator] Preserve existing suspend field in package reconciler**: Fixed package reconciler to properly preserve the suspend field state during reconciliation ([**@sircthulhu**](https://github.com/sircthulhu) in #2043).
* **[cozystack-operator] Fix namespace privileged flag resolution and field ownership**: Fixed operator to correctly check all Packages in a namespace when determining privileged status, and resolved SSA field ownership conflicts ([**@kvaps**](https://github.com/kvaps) in #2046).
* **[platform] Add flux-plunger controller**: Added flux-plunger controller to automatically fix stuck HelmRelease errors by cleaning up failed resources and retrying reconciliation ([**@kvaps**](https://github.com/kvaps) in #1843).
* **[installer] Add variant-aware templates for generic Kubernetes support**: Extended the installer to support generic and hosted Kubernetes deployments via the `cozystackOperator.variant=generic` parameter ([**@lexfrei**](https://github.com/lexfrei) in #2010).
* **[installer] Unify operator templates**: Merged separate operator templates into a single variant-based template supporting Talos and non-Talos deployments ([**@kvaps**](https://github.com/kvaps) in #2034).
### API and Platform
* **[api] Rename CozystackResourceDefinition to ApplicationDefinition**: Renamed CRD and all related types for clarity and consistency, with migration 24 handling the transition automatically ([**@kvaps**](https://github.com/kvaps) in #1864).
* **[platform] Add DNS-1035 validation for Application names**: Added dynamic DNS-1035 label validation for Application names at creation time, preventing resources with invalid names that would fail downstream ([**@lexfrei**](https://github.com/lexfrei) in #1771).
* **[platform] Make cluster issuer name and ACME solver configurable**: Added `publishing.certificates.solver` and `publishing.certificates.issuerName` parameters to allow pointing all ingress TLS annotations at any ClusterIssuer ([**@myasnikovdaniil**](https://github.com/myasnikovdaniil) in #2077).
* **[platform] Add cilium-kilo networking variant**: Added integrated `cilium-kilo` networking variant combining Cilium CNI with Kilo WireGuard mesh overlay ([**@kvaps**](https://github.com/kvaps) in #2064).
* **[cozystack-api] Switch from DaemonSet to Deployment**: Migrated cozystack-api to a Deployment with PreferClose topology spread constraints, reducing resource consumption while maintaining high availability ([**@kvaps**](https://github.com/kvaps) in #2041, #2048).
### Virtual Machines
* **[vm-instance] Complete migration from virtual-machine to vm-disk and vm-instance**: Fully migrated from `virtual-machine` to the new `vm-disk` and `vm-instance` architecture, with automatic migration script (migration 28) for existing VMs ([**@kvaps**](https://github.com/kvaps) in #2040).
* **[kubevirt-csi-driver] Add RWX Filesystem (NFS) support**: Added Read-Write-Many filesystem support to kubevirt-csi-driver via automatic NFS server deployment per PVC ([**@kvaps**](https://github.com/kvaps) in #2042).
* **[vm] Add cpuModel field to specify CPU model without instanceType**: Added cpuModel field to VirtualMachine API for granular CPU control ([**@sircthulhu**](https://github.com/sircthulhu) in #2007).
* **[vm] Allow switching between instancetype and custom resources**: Implemented atomic upgrade hook for switching between instanceType-based and custom resource VM configurations ([**@sircthulhu**](https://github.com/sircthulhu) in #2008).
* **[vm] Migrate to runStrategy instead of running**: Migrated VirtualMachine API from deprecated `running` field to `runStrategy` ([**@sircthulhu**](https://github.com/sircthulhu) in #2004).
* **[vm] Always expose VMs with a service**: Virtual machines are now always exposed with at least a ClusterIP service, ensuring in-cluster DNS names ([**@lllamnyp**](https://github.com/lllamnyp) in #1738, #1751).
* **[dashboard] VMInstance dropdowns for disks and instanceType**: VM instance creation form now renders API-backed dropdowns for `instanceType` and disk `name` fields ([**@sircthulhu**](https://github.com/sircthulhu) in #2071).
### Backup System
* **[backups] Implement comprehensive backup and restore functionality**: Core backup Plan controller, Velero strategy controller, RestoreJob resource with end-to-end restore workflows, and enhanced backup plans UI ([**@lllamnyp**](https://github.com/lllamnyp) in #1640, #1685, #1687, #1719, #1720, #1737, #1967; [**@androndo**](https://github.com/androndo) in #1762, #1967, #1968, #1811).
* **[backups] Add kubevirt plugin to velero**: Added KubeVirt plugin to Velero for consistent VM state and data snapshots ([**@lllamnyp**](https://github.com/lllamnyp) in #2017).
* **[backups] Install backupstrategy controller by default**: Enabled backupstrategy controller by default for automatic backup scheduling ([**@lllamnyp**](https://github.com/lllamnyp) in #2020).
* **[backups] Better selectors for VM strategy**: Improved VM backup strategy selectors for accurate and reliable backup targeting ([**@lllamnyp**](https://github.com/lllamnyp) in #2023).
* **[backups] Create RBAC for backup resources**: Added comprehensive RBAC configuration for backup operations and restore jobs ([**@lllamnyp**](https://github.com/lllamnyp) in #2018).
### Networking
* **[kilo] Introduce Kilo WireGuard mesh networking**: Added Kilo as a system package providing secure WireGuard-based VPN mesh for connecting Kubernetes nodes across different networks and regions ([**@kvaps**](https://github.com/kvaps) in #1691).
* **[kilo] Add Cilium compatibility variant**: Added `cilium` variant enabling Cilium-aware IPIP encapsulation for full network policy enforcement with Kilo mesh ([**@kvaps**](https://github.com/kvaps) in #2055).
* **[kilo] Update to v0.8.0 with configurable MTU**: Updated Kilo to v0.8.0 with configurable MTU parameter and performance improvements ([**@kvaps**](https://github.com/kvaps) in #2003, #2049, #2053).
* **[local-ccm] Add local-ccm package**: Added local cloud controller manager for managing load balancer services in bare-metal environments ([**@kvaps**](https://github.com/kvaps) in #1831).
* **[local-ccm] Add node-lifecycle-controller component**: Added optional node-lifecycle-controller that automatically deletes unreachable NotReady nodes, solving the "zombie" node problem in autoscaled clusters ([**@IvanHunters**](https://github.com/IvanHunters) in #1992).
* **[tenant] Allow egress to parent ingress pods**: Updated tenant network policies to allow egress traffic to parent cluster ingress pods ([**@lexfrei**](https://github.com/lexfrei) in #1765, #1776).
### New Applications
* **[mongodb] Add MongoDB managed application**: Added MongoDB as a fully managed database with replica sets, persistent storage, and unified user/database configuration ([**@lexfrei**](https://github.com/lexfrei) in #1822; [**@kvaps**](https://github.com/kvaps) in #1923).
* **[qdrant] Add Qdrant vector database**: Added Qdrant as a high-performance vector database for AI/ML workloads with API key authentication and optional LoadBalancer access ([**@lexfrei**](https://github.com/lexfrei) in #1987).
* **[harbor] Add managed Harbor container registry**: Added Harbor v2.14.2 as a managed tenant-level container registry with CloudNativePG, Redis operator, COSI BucketClaim storage, and Trivy scanner ([**@lexfrei**](https://github.com/lexfrei) in #2058).
* **[nats] Add monitoring**: Added Grafana dashboards for NATS JetStream and server metrics, Prometheus monitoring with TLS support ([**@klinch0**](https://github.com/klinch0) in #1381).
* **[mariadb] Rename mysql application to mariadb**: Renamed MySQL application to MariaDB with automatic migration (migration 27) for all existing resources ([**@kvaps**](https://github.com/kvaps) in #2026).
* **[ferretdb] Remove FerretDB application**: Removed FerretDB, superseded by native MongoDB support ([**@kvaps**](https://github.com/kvaps) in #2028).
### Kubernetes and System Components
* **[kubernetes] Update supported Kubernetes versions to v1.30–v1.35**: Updated the tenant Kubernetes version matrix, with v1.35 as the new default. Kamaji updated to edge-26.2.4 and CAPI Kamaji provider to v0.16.0 ([**@lexfrei**](https://github.com/lexfrei) in #2073).
* **[kubernetes] Auto-enable Gateway API support in cert-manager**: Added automatic Gateway API support in cert-manager for tenant clusters ([**@kvaps**](https://github.com/kvaps) in #1997).
* **[kubernetes] Use ingress-nginx nodeport service**: Changed tenant Kubernetes clusters to use ingress-nginx NodePort service for improved compatibility ([**@sircthulhu**](https://github.com/sircthulhu) in #1948).
* **[system] Add cluster-autoscaler for Hetzner and Azure**: Added cluster-autoscaler system package for automatically scaling management cluster nodes on Hetzner and Azure ([**@kvaps**](https://github.com/kvaps) in #1964).
* **[cluster-autoscaler] Enable enforce-node-group-min-size by default**: Ensures node groups are always scaled up to their configured minimum size ([**@kvaps**](https://github.com/kvaps) in #2050).
* **[system] Add clustersecret-operator package**: Added clustersecret-operator for managing secrets across multiple namespaces ([**@sircthulhu**](https://github.com/sircthulhu) in #2025).
### Monitoring
* **[monitoring] Enable monitoring for core components**: Enhanced monitoring capabilities with dashboards and metrics for core Cozystack components ([**@IvanHunters**](https://github.com/IvanHunters) in #1937).
* **[monitoring] Add SLACK_SEVERITY_FILTER and VMAgent for tenant monitoring**: Added SLACK_SEVERITY_FILTER for Slack alert filtering and VMAgent for tenant namespace metrics scraping ([**@IvanHunters**](https://github.com/IvanHunters) in #1712).
* **[monitoring-agents] Fix FQDN resolution for tenant workload clusters**: Fixed monitoring agents in tenant clusters to use full DNS names with cluster domain suffix ([**@IvanHunters**](https://github.com/IvanHunters) in #2075; [**@kvaps**](https://github.com/kvaps) in #2086).
### Storage
* **[linstor] Move CRDs to dedicated piraeus-operator-crds chart**: Moved LINSTOR CRDs to a dedicated chart, ensuring reliable installation of all CRDs including `linstorsatellites.io` ([**@kvaps**](https://github.com/kvaps) in #2036; [**@IvanHunters**](https://github.com/IvanHunters) in #1991).
* **[seaweedfs] Increase certificate duration to 10 years**: Increased SeaweedFS certificate validity to 10 years to reduce rotation overhead ([**@IvanHunters**](https://github.com/IvanHunters) in #1986).
## Improvements
* **[dashboard] Upgrade dashboard to version 1.4.0**: Updated Cozystack dashboard to v1.4.0 with new features and improvements ([**@sircthulhu**](https://github.com/sircthulhu) in #2051).
* **[dashboard] Hide Ingresses/Services/Secrets tabs when no selectors defined**: Tabs are now conditionally shown based on whether the ApplicationDefinition has resource selectors configured, reducing UI clutter ([**@kvaps**](https://github.com/kvaps) in #2087).
* **[dashboard] Add startupProbe to prevent container restarts on slow hardware**: Added startup probe to dashboard pods to prevent unnecessary restarts ([**@kvaps**](https://github.com/kvaps) in #1996).
* **[keycloak] Allow custom Ingress hostname via values**: Added `ingress.host` field to cozy-keycloak chart values for overriding the default `keycloak.<root-host>` hostname ([**@sircthulhu**](https://github.com/sircthulhu) in #2101).
* **[branding] Separate values for Keycloak**: Separated Keycloak branding values for better customization capabilities ([**@nbykov0**](https://github.com/nbykov0) in #1947).
* **[rbac] Use hierarchical naming scheme**: Refactored RBAC to use hierarchical naming for cluster roles and role bindings ([**@lllamnyp**](https://github.com/lllamnyp) in #2019).
* **[tenant,rbac] Use shared clusterroles**: Refactored tenant RBAC to use shared ClusterRoles for improved consistency ([**@lllamnyp**](https://github.com/lllamnyp) in #1999).
* **[kubernetes] Increase default apiServer resourcesPreset to large**: Increased kube-apiserver resource preset to `large` for more reliable operation under higher workloads ([**@kvaps**](https://github.com/kvaps) in #1875).
* **[kubernetes] Increase kube-apiserver startup probe threshold**: Increased startup probe threshold to allow more time for API server readiness ([**@kvaps**](https://github.com/kvaps) in #1876).
* **[etcd] Increase probe thresholds for better recovery**: Increased etcd probe thresholds to improve cluster resilience during temporary slowdowns ([**@kvaps**](https://github.com/kvaps) in #1874).
* **[etcd-operator] Add vertical-pod-autoscaler dependency**: Added VPA as a dependency to etcd-operator for proper resource scaling ([**@sircthulhu**](https://github.com/sircthulhu) in #2047).
* **[cilium] Change cilium-operator replicas to 1**: Reduced Cilium operator replicas to decrease resource consumption in smaller deployments ([**@IvanHunters**](https://github.com/IvanHunters) in #1784).
* **[keycloak-configure,dashboard] Enable insecure TLS verification by default**: Made SSL certificate verification configurable with insecure mode enabled by default for local development ([**@IvanHunters**](https://github.com/IvanHunters) in #2005).
* **[platform] Split telemetry between operator and controller**: Separated telemetry collection for better metrics isolation ([**@kvaps**](https://github.com/kvaps) in #1869).
* **[system] Add resource requests and limits to etcd-defrag**: Added resource requests and limits to etcd-defrag job to prevent resource contention ([**@matthieu-robin**](https://github.com/matthieu-robin) in #1785, #1786).
## Fixes
* **[dashboard] Fix sidebar visibility on cluster-level pages**: Fixed broken URLs with double `//` on cluster-level pages by hiding namespace-scoped sidebar items when no tenant is selected ([**@sircthulhu**](https://github.com/sircthulhu) in #2106).
* **[platform] Fix upgrade issues in migrations, etcd timeout, and migration script**: Fixed multiple upgrade failures discovered during v0.41.1 → v1.0 upgrade testing, including migration 26-29 fixes, RFC3339 format for annotations, and extended etcd HelmRelease timeout to 30m ([**@kvaps**](https://github.com/kvaps) in #2096).
* **[platform] Fix orphaned -rd HelmReleases after application renames**: Migrations 28-29 updated to remove orphaned `-rd` HelmReleases in `cozy-system` after `ferretdb→mongodb`, `mysql→mariadb`, and `virtual-machine→vm-disk+vm-instance` renames, with migration 33 as a safety net ([**@kvaps**](https://github.com/kvaps) in #2102).
* **[platform] Adopt tenant-root into cozystack-basics during migration**: Added migration 31 to adopt existing `tenant-root` Namespace and HelmRelease into `cozystack-basics` for a safe v0.41.x → v1.0 upgrade path ([**@kvaps**](https://github.com/kvaps) in #2065).
* **[platform] Preserve tenant-root HelmRelease during migration**: Fixed data-loss risk during migration where `tenant-root` HelmRelease could be deleted ([**@sircthulhu**](https://github.com/sircthulhu) in #2063).
* **[platform] Fix cozystack-values secret race condition**: Fixed race condition in cozystack-values secret creation that could cause initialization failures ([**@lllamnyp**](https://github.com/lllamnyp) in #2024).
* **[cozystack-basics] Preserve existing HelmRelease values during reconciliations**: Fixed data-loss bug where changes to `tenant-root` HelmRelease were dropped on the next reconciliation ([**@sircthulhu**](https://github.com/sircthulhu) in #2068).
* **[cozystack-basics] Deny resourcequotas deletion for tenant admin**: Fixed `cozy:tenant:admin:base` ClusterRole to explicitly deny deletion of ResourceQuota objects ([**@myasnikovdaniil**](https://github.com/myasnikovdaniil) in #2076).
* **[dashboard] Fix legacy templating and cluster identifier in sidebar links**: Standardized cluster identifier across dashboard menu links resolving broken link targets for Backups and External IPs ([**@androndo**](https://github.com/androndo) in #2093).
* **[dashboard] Fix backupjobs creation form and sidebar backup category identifier**: Fixed backup job creation form fields and fixed sidebar backup category identifier ([**@androndo**](https://github.com/androndo) in #2103).
* **[kubevirt] Update KubeVirt to v1.6.4 and CDI to v1.64.0, fix VM pod initialization**: Updated KubeVirt and CDI and disabled serial console logging globally to fix the `guest-console-log` init container blocking virt-launcher pods ([**@nbykov0**](https://github.com/nbykov0) in #1833; [**@kvaps**](https://github.com/kvaps)).
* **[linstor] Fix DRBD+LUKS+STORAGE resource creation failure**: Applied upstream fix for all newly created encrypted volumes failing due to missing `setExists(true)` call in `LuksLayer` ([**@kvaps**](https://github.com/kvaps) in #2072).
* **[platform] Clean up Helm secrets for removed releases**: Added cleanup logic to migration 23 to remove orphaned Helm secrets from removed `-rd` releases ([**@kvaps**](https://github.com/kvaps) in #2035).
* **[monitoring] Fix YAML parse error in vmagent template**: Fixed YAML parsing error in monitoring-agents vmagent template ([**@kvaps**](https://github.com/kvaps) in #2037).
* **[monitoring] Remove cozystack-controller dependency**: Fixed monitoring package to remove unnecessary cozystack-controller dependency ([**@IvanHunters**](https://github.com/IvanHunters) in #1990).
* **[monitoring] Remove duplicate dashboards.list**: Fixed duplicate dashboards.list configuration in extra/monitoring package ([**@IvanHunters**](https://github.com/IvanHunters) in #2016).
* **[linstor] Update piraeus-server patches with critical fixes**: Backported critical patches fixing edge cases in device management and DRBD resource handling ([**@kvaps**](https://github.com/kvaps) in #1850).
* **[apiserver] Fix Watch resourceVersion and bookmark handling**: Fixed Watch API handling of resourceVersion and bookmarks for proper event streaming ([**@kvaps**](https://github.com/kvaps) in #1860).
* **[bootbox] Auto-create bootbox-application as dependency**: Fixed bootbox package to automatically create required bootbox-application dependency ([**@kvaps**](https://github.com/kvaps) in #1974).
* **[postgres-operator] Correct PromQL syntax in CNPGClusterOffline alert**: Fixed incorrect PromQL syntax in the CNPGClusterOffline Prometheus alert ([**@mattia-eleuteri**](https://github.com/mattia-eleuteri) in #1981).
* **[coredns] Fix serviceaccount to match kubernetes bootstrap RBAC**: Fixed CoreDNS service account to correctly match Kubernetes bootstrap RBAC requirements ([**@mattia-eleuteri**](https://github.com/mattia-eleuteri) in #1958).
* **[dashboard] Verify JWT token**: Added JWT token verification to dashboard for improved security ([**@lllamnyp**](https://github.com/lllamnyp) in #1980).
* **[codegen] Fix missing gen_client in update-codegen.sh**: Fixed build error in `pkg/generated/applyconfiguration/utils.go` by including `gen_client` in the codegen script ([**@lexfrei**](https://github.com/lexfrei) in #2061).
* **[kubevirt-operator] Fix typo in VMNotRunningFor10Minutes alert**: Fixed typo in VM alert name ensuring proper alert triggering ([**@lexfrei**](https://github.com/lexfrei) in #1770, #1775).
## Security
* **[dashboard] Verify JWT token**: Added JWT token verification to the dashboard for improved authentication security ([**@lllamnyp**](https://github.com/lllamnyp) in #1980).
## Dependencies
* **[cilium] Update to v1.18.6**: Updated Cilium CNI to v1.18.6 with security fixes and performance improvements ([**@sircthulhu**](https://github.com/sircthulhu) in #1868).
* **[kube-ovn] Update to v1.15.3**: Updated Kube-OVN CNI to v1.15.3 with performance improvements and bug fixes ([**@kvaps**](https://github.com/kvaps) in #2022).
* **[kilo] Update to v0.8.0**: Updated Kilo WireGuard mesh to v0.8.0 with performance improvements and new compatibility features ([**@kvaps**](https://github.com/kvaps) in #2053).
* **Update Talos Linux to v1.12.1**: Updated Talos Linux to v1.12.1 with latest features and security patches ([**@kvaps**](https://github.com/kvaps) in #1877).
## System Configuration
* **[vpc] Migrate subnets definition from map to array format**: Migrated VPC subnets from `map[string]Subnet` to `[]Subnet` with explicit `name` field, with automatic migration via migration 30 ([**@kvaps**](https://github.com/kvaps) in #2052).
* **[migrations] Add migrations 23-33 for v1.0 upgrade path**: Added 11 incremental migrations handling CRD ownership, resource renaming, secret cleanup, Helm adoption, and configuration conversion for the v0.41.x → v1.0.0 upgrade path ([**@kvaps**](https://github.com/kvaps) in #1975, #2035, #2036, #2040, #2026, #2065, #2052, #2102).
* **[tenant] Run cleanup job from system namespace**: Moved tenant cleanup job to system namespace for improved security and resource isolation ([**@lllamnyp**](https://github.com/lllamnyp) in #1774, #1777).
## Development, Testing, and CI/CD
* **[ci] Use GitHub Copilot CLI for changelog generation**: Automated changelog generation using GitHub Copilot CLI ([**@androndo**](https://github.com/androndo) in #1753).
* **[ci] Choose runner conditional on label**: Added conditional runner selection in CI based on PR labels ([**@lllamnyp**](https://github.com/lllamnyp) in #1998).
* **[e2e] Use helm install instead of kubectl apply for cozystack installation**: Replaced static YAML apply flow with direct `helm upgrade --install` of the installer chart in E2E tests ([**@lexfrei**](https://github.com/lexfrei) in #2060).
* **[e2e] Make kubernetes test retries effective by cleaning up stale resources**: Fixed E2E test retries by adding pre-creation cleanup and increasing deployment wait timeout to 300s ([**@lexfrei**](https://github.com/lexfrei) in #2062).
* **[e2e] Increase HelmRelease readiness timeout for kubernetes test**: Increased HelmRelease readiness timeout to prevent false failures on slower hardware ([**@lexfrei**](https://github.com/lexfrei) in #2033).
* **[ci] Improve cozyreport functionality**: Enhanced cozyreport tool with improved reporting for CI/CD pipelines ([**@lllamnyp**](https://github.com/lllamnyp) in #2032).
* **feat(cozypkg): add cross-platform build targets with version injection**: Added cross-platform build targets for cozypkg/cozyhr tool for linux/amd64, linux/arm64, darwin/amd64, darwin/arm64 ([**@kvaps**](https://github.com/kvaps) in #1862).
* **refactor: move scripts to hack directory**: Reorganized scripts to the standard `hack/` location ([**@kvaps**](https://github.com/kvaps) in #1863).
* **Update CODEOWNERS**: Updated CODEOWNERS to include new maintainers ([**@lllamnyp**](https://github.com/lllamnyp) in #1972; [**@IvanHunters**](https://github.com/IvanHunters) in #2015).
* **[talm] Skip config loading for completion subcommands**: Fixed talm CLI to skip config loading for shell completion commands ([**@kitsunoff**](https://github.com/kitsunoff) in cozystack/talm#109).
* **[talm] Fix metadata.id type casting in physical_links_info**: Fixed Prometheus query to properly cast metadata.id to string for regexMatch operations ([**@kvaps**](https://github.com/kvaps) in cozystack/talm#110).
## Documentation
* **[website] Add documentation versioning**: Implemented comprehensive documentation versioning with separate v0 and v1 documentation trees and a version selector in the UI ([**@IvanStukov**](https://github.com/IvanStukov) in cozystack/website#415).
* **[website] Describe upgrade to v1.0**: Added detailed upgrade instructions for migrating from v0.x to v1.0 ([**@nbykov0**](https://github.com/nbykov0) in cozystack/website@21bbe84).
* **[website] Migrate ConfigMap references to Platform Package in v1 docs**: Updated entire v1 documentation to replace legacy ConfigMap-based configuration with the new Platform Package API ([**@sircthulhu**](https://github.com/sircthulhu) in cozystack/website#426).
* **[website] Add generic Kubernetes deployment guide for v1**: Added installation guide for deploying Cozystack on any generic Kubernetes cluster ([**@lexfrei**](https://github.com/lexfrei) in cozystack/website#408).
* **[website] Describe operator-based and HelmRelease-based package patterns**: Added development documentation explaining operator-based and HelmRelease-based package patterns ([**@kvaps**](https://github.com/kvaps) in cozystack/website#413).
* **[website] Add Helm chart development principles guide**: Added developer guide documenting Cozystack's four core Helm chart principles ([**@kvaps**](https://github.com/kvaps) in cozystack/website#418).
* **[website] Add network architecture overview**: Added comprehensive network architecture documentation covering the multi-layered networking stack with Mermaid diagrams ([**@IvanHunters**](https://github.com/IvanHunters) in cozystack/website#422).
* **[website] Add LINSTOR disk preparation guide**: Added comprehensive documentation for preparing disks for LINSTOR storage ([**@IvanHunters**](https://github.com/IvanHunters) in cozystack/website#411).
* **[website] Add Proxmox VM migration guide**: Added detailed guide for migrating virtual machines from Proxmox to Cozystack ([**@IvanHunters**](https://github.com/IvanHunters) in cozystack/website#410).
* **[website] Add cluster autoscaler documentation**: Added documentation for Hetzner setup with Talos, vSwitch, and Kilo mesh integration ([**@kvaps**](https://github.com/kvaps) in #1964).
* **[website] Improve Azure autoscaling troubleshooting guide**: Enhanced Azure autoscaling documentation with serial console instructions and `az vmss update --custom-data` guidance ([**@kvaps**](https://github.com/kvaps) in cozystack/website#424).
* **[website] Update multi-location documentation for cilium-kilo variant**: Updated multi-location networking docs to reflect the integrated `cilium-kilo` variant selection ([**@kvaps**](https://github.com/kvaps) in cozystack/website@02d63f0).
* **[website] Update documentation to use jsonpatch for service exposure**: Improved `kubectl patch` commands to use JSON Patch `add` operations ([**@sircthulhu**](https://github.com/sircthulhu) in cozystack/website#427).
* **[website] Update certificates section in Platform Package documentation**: Updated certificate configuration docs to reflect new `solver` and `issuerName` fields ([**@myasnikovdaniil**](https://github.com/myasnikovdaniil) in cozystack/website#429).
* **[website] Add tenant Kubernetes cluster log querying guide**: Added documentation for querying logs from tenant clusters in Grafana using VictoriaLogs labels ([**@IvanHunters**](https://github.com/IvanHunters) in cozystack/website#430).
* **[website] Replace non-idempotent commands with idempotent alternatives**: Updated `helm install` to `helm upgrade --install` and `kubectl create` to `kubectl apply` across all installation guides ([**@lexfrei**](https://github.com/lexfrei) in cozystack/website#431).
* **[website] Fix broken documentation links with .md suffix**: Fixed incorrect internal links across virtualization guides for v0 and v1 documentation ([**@cheese**](https://github.com/cheese) in cozystack/website#432).
* **[website] Refactor resource planning documentation**: Improved resource planning guide with clearer structure and more comprehensive coverage ([**@IvanStukov**](https://github.com/IvanStukov) in cozystack/website#423).
* **[website] Add ServiceAccount API access documentation and update FAQ**: Added documentation for ServiceAccount API access token configuration and updated FAQ ([**@IvanStukov**](https://github.com/IvanStukov) in cozystack/website#421).
* **[website] Update networking-mesh allowed-location-ips example**: Replaced provider-specific CLI with standard `kubectl` commands in multi-location networking guide ([**@kvaps**](https://github.com/kvaps) in cozystack/website#425).
* **[website] docs(storage): simplify NFS driver setup instructions**: Simplified NFS driver setup documentation ([**@kvaps**](https://github.com/kvaps) in cozystack/website#399).
* **[website] Add Hetzner RobotLB documentation**: Added documentation for configuring public IP with Hetzner RobotLB ([**@kvaps**](https://github.com/kvaps) in cozystack/website#394).
* **[website] Add documentation for creating and managing cloned VMs**: Added comprehensive guide for VM cloning operations ([**@sircthulhu**](https://github.com/sircthulhu) in cozystack/website#401).
* **[website] Update Talos installation docs for Hetzner and Servers.com**: Updated installation documentation for Hetzner and Servers.com environments ([**@kvaps**](https://github.com/kvaps) in cozystack/website#395).
* **[website] Add Hidora organization support details**: Added Hidora to the support page ([**@matthieu-robin**](https://github.com/matthieu-robin) in cozystack/website#397, cozystack/website#398).
* **[website] Check quotas before an upgrade**: Added troubleshooting documentation for checking resource quotas before upgrades ([**@nbykov0**](https://github.com/nbykov0) in cozystack/website#405).
* **[website] Update support documentation**: Updated support documentation with current contact information ([**@xrmtech-isk**](https://github.com/xrmtech-isk) in cozystack/website#420).
* **[website] Correct typo in kubeconfig reference in Kubernetes installation guide**: Fixed documentation typo in kubeconfig reference ([**@shkarface**](https://github.com/shkarface) in cozystack/website#414).
## Breaking Changes & Upgrade Notes
* **[api] CozystackResourceDefinition renamed to ApplicationDefinition**: The `CozystackResourceDefinition` CRD has been renamed to `ApplicationDefinition`. Migration 24 handles the transition automatically during upgrade ([**@kvaps**](https://github.com/kvaps) in #1864).
* **[platform] Certificate issuer configuration parameters renamed**: The `publishing.certificates.issuerType` field is renamed to `publishing.certificates.solver`, and the value `cloudflare` is renamed to `dns01`. A new `publishing.certificates.issuerName` field (default: `letsencrypt-prod`) is added. Migration 32 automatically converts existing configurations — no manual action required ([**@myasnikovdaniil**](https://github.com/myasnikovdaniil) in #2077). A before/after sketch follows this list.
* **[vpc] VPC subnets definition migrated from map to array format**: VPC subnets are now defined as `[]Subnet` with an explicit `name` field instead of `map[string]Subnet`. Migration 30 handles the conversion automatically ([**@kvaps**](https://github.com/kvaps) in #2052).
* **[vm] virtual-machine application replaced by vm-disk and vm-instance**: The legacy `virtual-machine` application has been fully replaced. Migration 28 automatically converts existing VMs to the new architecture ([**@kvaps**](https://github.com/kvaps) in #2040).
* **[mysql] mysql application renamed to mariadb**: Existing MySQL deployments are automatically renamed to MariaDB via migration 27 ([**@kvaps**](https://github.com/kvaps) in #2026).
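For the certificate rename above, a hedged before/after sketch of the platform values (nesting inferred from the dotted field paths):
```yaml
# Before (v0.x):
publishing:
  certificates:
    issuerType: cloudflare
# After (v1.0.0, converted automatically by migration 32):
publishing:
  certificates:
    solver: dns01
    issuerName: letsencrypt-prod
```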
### Upgrade Guide
To upgrade from v0.41.x to v1.0.0:
1. **Backup your cluster** before upgrading.
2. Run the provided migration script: `hack/migrate-to-version-1.0.sh`.
3. The 33 incremental migration steps will automatically handle all resource renaming, configuration conversion, CRD adoption, and secret cleanup.
4. Refer to the [upgrade documentation](https://cozystack.io/docs/v1/upgrade) for detailed instructions and troubleshooting.
## Contributors
We'd like to thank all contributors who made this release possible:
* [**@androndo**](https://github.com/androndo)
* [**@cheese**](https://github.com/cheese)
* [**@IvanHunters**](https://github.com/IvanHunters)
* [**@IvanStukov**](https://github.com/IvanStukov)
* [**@kitsunoff**](https://github.com/kitsunoff)
* [**@klinch0**](https://github.com/klinch0)
* [**@kvaps**](https://github.com/kvaps)
* [**@lexfrei**](https://github.com/lexfrei)
* [**@lllamnyp**](https://github.com/lllamnyp)
* [**@matthieu-robin**](https://github.com/matthieu-robin)
* [**@mattia-eleuteri**](https://github.com/mattia-eleuteri)
* [**@myasnikovdaniil**](https://github.com/myasnikovdaniil)
* [**@nbykov0**](https://github.com/nbykov0)
* [**@shkarface**](https://github.com/shkarface)
* [**@sircthulhu**](https://github.com/sircthulhu)
* [**@xrmtech-isk**](https://github.com/xrmtech-isk)
### New Contributors
We're excited to welcome our first-time contributors:
* [**@cheese**](https://github.com/cheese) - First contribution!
* [**@IvanStukov**](https://github.com/IvanStukov) - First contribution!
* [**@kitsunoff**](https://github.com/kitsunoff) - First contribution!
* [**@shkarface**](https://github.com/shkarface) - First contribution!
* [**@xrmtech-isk**](https://github.com/xrmtech-isk) - First contribution!
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.41.0...v1.0.0


@@ -1,21 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v1.0.1
-->
## Fixes
* **[platform] Prevent cozystack-version ConfigMap from deletion**: Added resource protection to prevent the `cozystack-version` ConfigMap from being accidentally deleted, improving platform stability and reliability ([**@myasnikovdaniil**](https://github.com/myasnikovdaniil) in #2112, #2114).
* **[installer] Add keep annotation to Namespace and update migration script**: Added `helm.sh/resource-policy: keep` annotation to the `cozy-system` Namespace in the installer Helm chart to prevent Helm from deleting the namespace (and all HelmReleases within it) when the installer release is removed. The v1.0 migration script is also updated to annotate the `cozy-system` namespace and `cozystack-version` ConfigMap with this policy before migration ([**@kvaps**](https://github.com/kvaps) in #2122, #2123).
* **[dashboard] Add FlowSchema to exempt BFF from API throttling**: Added a `cozy-dashboard-exempt` FlowSchema to exempt the dashboard Back-End-for-Frontend (BFF) service account from Kubernetes API Priority and Fairness throttling. Previously, the BFF fell under the `workload-low` priority level, causing 429 (Too Many Requests) errors under load, resulting in dashboard unresponsiveness ([**@kvaps**](https://github.com/kvaps) in #2121, #2124). A sketch of such a FlowSchema follows below.
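For reference, an exemption FlowSchema of this kind follows the standard `flowcontrol.apiserver.k8s.io` shape (the service account name and namespace below are assumptions):
```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1
kind: FlowSchema
metadata:
  name: cozy-dashboard-exempt
spec:
  priorityLevelConfiguration:
    name: exempt                       # bypasses priority-and-fairness queuing
  matchingPrecedence: 1000
  rules:
    - subjects:
        - kind: ServiceAccount
          serviceAccount:
            name: cozy-dashboard       # assumed name
            namespace: cozy-dashboard  # assumed namespace
      resourceRules:
        - verbs: ["*"]
          apiGroups: ["*"]
          resources: ["*"]
          namespaces: ["*"]
          clusterScope: true
```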
## Documentation
* **[website] Replace bundles documentation with variants**: Renamed the "Bundles" documentation section to "Variants" to match current Cozystack terminology. Removed deprecated variants (`iaas-full`, `distro-full`, `distro-hosted`) and added new variants: `default` (PackageSources only, for manual package management via cozypkg) and `isp-full-generic` (full PaaS/IaaS on k3s, kubeadm, or RKE2). Updated all cross-references throughout the documentation ([**@kvaps**](https://github.com/kvaps) in cozystack/website#433).
* **[website] Add step to protect namespace before upgrading**: Updated the cluster upgrade guide and v0.41→v1.0 migration guide with a required step to annotate the `cozy-system` namespace and `cozystack-version` ConfigMap with `helm.sh/resource-policy=keep` before running `helm upgrade`, preventing accidental namespace deletion ([**@kvaps**](https://github.com/kvaps) in cozystack/website#435).
---
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v1.0.0...v1.0.1


@@ -1,19 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v1.0.2
-->
## Fixes
* **[platform] Suspend cozy-proxy if it conflicts with installer release during migration**: Added a check in the v0.41→v1.0 migration script to detect and automatically suspend the `cozy-proxy` HelmRelease when its `releaseName` is set to `cozystack`, which conflicts with the installer release and would cause `cozystack-operator` deletion during the upgrade ([**@kvaps**](https://github.com/kvaps) in #2128, #2130).
* **[platform] Fix off-by-one error in run-migrations script**: Fixed a bug in the migration runner where the first required migration was always skipped due to an off-by-one error in the migration range calculation, ensuring all upgrade steps execute correctly ([**@myasnikovdaniil**](https://github.com/myasnikovdaniil) in #2126, #2132).
* **[system] Fix Keycloak proxy configuration for v26.x**: Replaced the deprecated `KC_PROXY=edge` environment variable with `KC_PROXY_HEADERS=xforwarded` and `KC_HTTP_ENABLED=true` in the Keycloak StatefulSet template. `KC_PROXY` was removed in Keycloak 26.x, previously causing "Non-secure context detected" warnings and broken cookie handling when running behind a reverse proxy with TLS termination ([**@sircthulhu**](https://github.com/sircthulhu) in #2125, #2134).
* **[dashboard] Allow clearing instanceType field and preserve newlines in secret copy**: Added `allowEmpty: true` to the `instanceType` field in the VMInstance form so users can explicitly clear it to use custom KubeVirt resources without a named instance type. Also fixed newline preservation when copying secrets with CMD+C ([**@sircthulhu**](https://github.com/sircthulhu) in #2135, #2137).
* **[dashboard] Restore stock-instance sidebars for namespace-level pages**: Restored `stock-instance-api-form`, `stock-instance-api-table`, `stock-instance-builtin-form`, and `stock-instance-builtin-table` sidebar resources that were inadvertently removed in #2106. Without these sidebars, namespace-level pages such as Backup Plans rendered as empty pages with no interactive content ([**@sircthulhu**](https://github.com/sircthulhu) in #2136, #2138).
---
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v1.0.1...v1.0.2


@@ -1,17 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v1.0.3
-->
## Fixes
* **[platform] Fix package name conversion in migration script**: Fixed the `migrate-to-version-1.0.sh` script to correctly prepend the `cozystack.` prefix when converting `BUNDLE_DISABLE` and `BUNDLE_ENABLE` package name lists, ensuring packages are properly identified during the v0.41→v1.0 upgrade ([**@myasnikovdaniil**](https://github.com/myasnikovdaniil) in #2144, #2148).
## Documentation
* **[website] Add white labeling guide**: Added a comprehensive guide for configuring white labeling (branding) in Cozystack v1, covering Dashboard fields (`titleText`, `footerText`, `tenantText`, `logoText`, `logoSvg`, `iconSvg`) and Keycloak fields (`brandName`, `brandHtmlName`). Includes SVG preparation workflow with theme-aware template variables, portable base64 encoding, and migration notes from the v0 ConfigMap approach ([**@lexfrei**](https://github.com/lexfrei) in cozystack/website#441).
* **[website] Actualize backup and recovery documentation**: Reworked the backup and recovery docs to be user-focused, separating operator and tenant workflows. Added tenant-facing documentation for `BackupJob` and `Plan` resources and status inspection commands, and added a new Velero administration guide for operators covering storage credentials and backup storage configuration ([**@androndo**](https://github.com/androndo) in cozystack/website#434).
---
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v1.0.2...v1.0.3


@@ -1,126 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v1.1.0
-->
# Cozystack v1.1.0
Cozystack v1.1.0 delivers a major expansion of the managed application catalog with **OpenBAO** (open-source HashiCorp Vault fork) for secrets management, comprehensive **tiered object storage** with SeaweedFS storage pools, a new bucket **user model** with per-user credentials and S3 login support, **RabbitMQ version selection**, and **MongoDB Grafana dashboards**. The dashboard gains storageClass dropdowns for all stateful apps. This release also incorporates all fixes from the v1.0.x patch series.
## Feature Highlights
### OpenBAO: Managed Secrets Management Service
Cozystack now ships **OpenBAO** as a fully managed PaaS application — an open-source fork of HashiCorp Vault providing enterprise-grade secrets management. Users can deploy OpenBAO instances in standalone mode (single replica with file storage) or in high-availability Raft mode (multiple replicas with integrated Raft consensus), with the mode switching automatically based on the `replicas` field.
Each OpenBAO instance gets TLS enabled by default via cert-manager self-signed certificates, with DNS SANs covering all service endpoints and pod addresses. The Vault injector and CSI provider are intentionally disabled (they are cluster-scoped components not safe for per-tenant use). OpenBAO requires manual initialization and unsealing by design — no auto-unseal is configured.
A full end-to-end E2E test covers the complete lifecycle: deploy, wait for certificate and API readiness, init, unseal, verify, and cleanup. OpenBAO is available in the application catalog for tenant namespaces.
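A sketch of the mode switch in the OpenBAO application values (`replicas` is the field named above; its top-level placement is assumed):
```yaml
# OpenBAO application values (sketch): `replicas` drives the mode switch.
replicas: 3   # 1 = standalone with file storage; >1 = HA with integrated Raft
```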
### SeaweedFS Tiered Storage Pools
SeaweedFS now supports **tiered storage pools** — operators can define separate storage pools per disk type (SSD, HDD, NVMe) in the `volume.pools` field (Simple topology) or `volume.zones[name].pools` (MultiZone topology). Each pool creates an additional Volume StatefulSet alongside the default one, with SeaweedFS distinguishing storage via the `-disk=<type>` flag on volume servers.
Each pool automatically generates its own set of COSI resources: a standard `BucketClass`, a `-lock` BucketClass (COMPLIANCE mode, 365-day retention), a read-write `BucketAccessClass`, and a `-readonly` BucketAccessClass. This allows applications to place data on specific storage tiers and request appropriate access policies per pool.
In MultiZone topology, pools are defined per zone and each zone × pool combination creates a dedicated StatefulSet (e.g., `us-east-ssd`, `us-west-hdd`), with nodes selected via `topology.kubernetes.io/zone` labels. Existing deployments with no pools defined produce output identical to previous versions — no migration is required.
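A hedged sketch of both topologies (`volume.pools` and `volume.zones[name].pools` are the paths named above; the per-pool field names are assumptions):
```yaml
# Simple topology: one extra Volume StatefulSet per pool
volume:
  pools:
    - name: ssd
      diskType: ssd    # assumed field; surfaces as -disk=ssd on volume servers
    - name: hdd
      diskType: hdd
---
# MultiZone topology: pools per zone (us-east + ssd -> StatefulSet us-east-ssd)
volume:
  zones:
    us-east:
      pools:
        - name: ssd
          diskType: ssd
```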
### Bucket User Model with S3 Login
The bucket application introduces a new **user model** for access management. Instead of a single implicit BucketAccess resource, operators now define a `users` map where each entry creates a dedicated `BucketAccess` with its own credentials secret and an optional `readonly` flag. The S3 Manager UI has been updated with a login screen that uses per-session credentials from the user's own secret, replacing the previous basic-auth approach.
Two new bucket parameters are available: `locking` provisions from the `-lock` BucketClass (COMPLIANCE mode, 365-day object lock retention) for write-once-read-many use cases, and `storagePool` selects a specific pool's BucketClass for tiered storage placement. The COSI driver has been updated to v0.3.0 to support the new `diskType` parameter.
**⚠️ Breaking change**: The implicit default BucketAccess resource is no longer created. Existing buckets that relied on the single auto-generated BucketAccess will need to explicitly define users in the `users` map after upgrading.
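In manifest form, following the shape exercised by the bucket E2E test later in this diff (names are illustrative; the optional `locking` and `storagePool` parameters are shown commented out):
```yaml
apiVersion: apps.cozystack.io/v1alpha1
kind: Bucket
metadata:
  name: media
  namespace: tenant-example
spec:
  # locking: true      # provision from the `-lock` BucketClass (object lock / WORM)
  # storagePool: ssd   # place the bucket on a specific storage pool
  users:
    admin: {}          # read-write; credentials appear in secret bucket-media-admin
    viewer:
      readonly: true   # read-only; credentials appear in secret bucket-media-viewer
```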
### RabbitMQ Version Selection
RabbitMQ instances now support a configurable **version selector** (`version` field with values: `v4.2`, `v4.1`, `v4.0`, `v3.13`; default `v4.2`). The chart validates the selection at deploy time and uses it to pin the runtime image, giving operators control over the RabbitMQ release channel per instance. An automatic migration backfills the `version` field on all existing RabbitMQ resources to `v4.2`.
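As a sketch, pinning the release channel on an instance needs only the new field; the `spec` placement here follows the pattern of the other managed apps:
```yaml
apiVersion: apps.cozystack.io/v1alpha1
kind: RabbitMQ
metadata:
  name: mq
  namespace: tenant-example
spec:
  version: v4.2   # one of v4.2 (default), v4.1, v4.0, v3.13
```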
## Major Features and Improvements
* **[apps] Add OpenBAO as a managed secrets management service**: Deployed as a PaaS application with standalone (file storage) and HA Raft modes, TLS enabled by default via cert-manager, injector and CSI provider disabled for tenant safety, and a full E2E lifecycle test ([**@lexfrei**](https://github.com/lexfrei) in #2059).
* **[seaweedfs] Add storage pools support for tiered storage**: Added `volume.pools` (Simple) and `volume.zones[name].pools` (MultiZone) for per-disk-type StatefulSets, zone overrides (`nodeSelector`, `storageClass`, `dataCenter`), per-pool COSI BucketClass and BucketAccessClass resources, and bumped seaweedfs-cosi-driver to v0.3.0 ([**@sircthulhu**](https://github.com/sircthulhu) in #2097).
* **[apps][system] Add bucket user model with locking and storage pool selection**: Replaced implicit BucketAccess with per-user `users` map, added `locking` and `storagePool` parameters, renamed COSI BucketClass suffix from `-worm` to `-lock`, added `-readonly` BucketAccessClass for all topologies, and updated S3 Manager with login screen using per-user credentials ([**@IvanHunters**](https://github.com/IvanHunters) in #2119).
* **[rabbitmq] Add version selection for RabbitMQ instances**: Added `version` field (`v4.2`, `v4.1`, `v4.0`, `v3.13`) with chart-level validation, default `v4.2`, and an automatic migration to backfill the field on existing instances ([**@myasnikovdaniil**](https://github.com/myasnikovdaniil) in #2092).
* **[system] Add MongoDB Overview and InMemory Details Grafana dashboards**: Added two comprehensive Grafana dashboards for MongoDB monitoring — Overview (command operations, connections, cursors, query efficiency, write time) and InMemory Details (WiredTiger cache, transactions, concurrency, eviction). Dashboards are registered in `dashboards.list` for automatic GrafanaDashboard CRD generation ([**@IvanHunters**](https://github.com/IvanHunters) in #2158).
* **[dashboard] Add storageClass dropdown for all stateful apps**: Replaced the free-text `storageClass` input with an API-backed dropdown listing available StorageClasses from the cluster. Affects ClickHouse, Harbor, HTTPCache, Kubernetes, MariaDB, MongoDB, NATS, OpenBAO, Postgres, Qdrant, RabbitMQ, Redis, VMDisk (top-level `storageClass`), FoundationDB (`storage.storageClass`), and Kafka (`kafka.storageClass`, `zookeeper.storageClass`) ([**@sircthulhu**](https://github.com/sircthulhu) in #2131).
* **[bucket] Add readonly S3 access credentials**: Added a readonly `BucketAccessClass` to the SeaweedFS COSI chart and updated the bucket application to automatically provision two sets of S3 credentials per bucket: read-write (for UI) and readonly ([**@IvanHunters**](https://github.com/IvanHunters) in #2105).
* **[dashboard] Hide sidebar on cluster-level pages when no tenant selected**: Fixed broken URLs with double `//` on the main cluster page (before tenant selection) by clearing `CUSTOMIZATION_SIDEBAR_FALLBACK_ID` so no sidebar renders when no namespace is selected ([**@sircthulhu**](https://github.com/sircthulhu) in #2106).
* **[cert-manager] Update cert-manager to v1.19.3**: Upgraded cert-manager with new CRDs moved into a dedicated CRD package, added global `nodeSelector` and `hostUsers` (pod user-namespace isolation), and renamed `ServiceMonitor` targetPort default to `http-metrics` ([**@myasnikovdaniil**](https://github.com/myasnikovdaniil) in #2070).
* **[dashboard] Add backupClasses dropdown to Plan/BackupJob forms**: Replaced free-text input for `backupClass` field with an API-backed dropdown populated with available BackupClass resources, making it easier to select the correct backup target ([**@androndo**](https://github.com/androndo) in #2104).
## Fixes
* **[platform] Fix package name conversion in migration script**: Fixed the `migrate-to-version-1.0.sh` script to correctly prepend the `cozystack.` prefix when converting `BUNDLE_DISABLE` and `BUNDLE_ENABLE` package name lists, ensuring packages are properly identified during the v0.41→v1.0 upgrade ([**@myasnikovdaniil**](https://github.com/myasnikovdaniil) in #2144, #2148).
* **[backups] Fix RBAC for backup controllers**: Updated RBAC permissions for the backup strategy controller to support enhanced backup and restore capabilities, including Velero integration and status management ([**@androndo**](https://github.com/androndo) in #2145).
* **[kubernetes] Set explicit MTU for Cilium in tenant clusters**: Set explicit MTU 1350 for Cilium in KubeVirt-based tenant Kubernetes clusters to prevent packet drops caused by VXLAN encapsulation overhead. Cilium's auto-detection does not account for VXLAN overhead (50 bytes) when the VM interface inherits MTU 1400 from the parent OVN/Geneve overlay, causing intermittent connectivity issues and HTTP 499 errors under load ([**@IvanHunters**](https://github.com/IvanHunters) in #2147). A values sketch follows this list.
* **[platform] Protect the cozystack-version ConfigMap from deletion**: Added resource protection annotations to prevent the `cozystack-version` ConfigMap from being accidentally deleted, improving platform stability ([**@myasnikovdaniil**](https://github.com/myasnikovdaniil) in #2112, #2114).
* **[installer] Add keep annotation to Namespace and update migration script**: Added `helm.sh/resource-policy: keep` annotation to the `cozy-system` Namespace in the installer Helm chart to prevent Helm from deleting the namespace and all HelmReleases within it when the installer release is removed. The v1.0 migration script is also updated to annotate the namespace and `cozystack-version` ConfigMap before migration ([**@kvaps**](https://github.com/kvaps) in #2122, #2123). A manifest sketch follows this list.
* **[dashboard] Add FlowSchema to exempt BFF from API throttling**: Added a `cozy-dashboard-exempt` FlowSchema to exempt the dashboard Backend-for-Frontend (BFF) service account from Kubernetes API Priority and Fairness throttling, preventing 429 errors under load ([**@kvaps**](https://github.com/kvaps) in #2121, #2124). A FlowSchema sketch follows this list.
* **[platform] Suspend cozy-proxy if it conflicts with installer release during migration**: Added a check in the v0.41→v1.0 migration script to detect and suspend the `cozy-proxy` HelmRelease when its `releaseName` is set to `cozystack`, which conflicts with the installer release and would cause `cozystack-operator` deletion during the upgrade ([**@kvaps**](https://github.com/kvaps) in #2128, #2130).
* **[platform] Fix off-by-one error in run-migrations script**: Fixed a bug in the migration runner where the first required migration was always skipped due to an off-by-one error in the migration range calculation ([**@myasnikovdaniil**](https://github.com/myasnikovdaniil) in #2126, #2132).
* **[system] Fix Keycloak proxy configuration for v26.x**: Replaced the deprecated `KC_PROXY=edge` environment variable with `KC_PROXY_HEADERS=xforwarded` and `KC_HTTP_ENABLED=true` in the Keycloak StatefulSet. `KC_PROXY` was removed in Keycloak 26.x; the stale setting previously caused "Non-secure context detected" warnings and broken cookie handling behind a TLS-terminating reverse proxy ([**@sircthulhu**](https://github.com/sircthulhu) in #2125, #2134). The replacement variables are sketched after this list.
* **[dashboard] Allow clearing instanceType field and preserve newlines in secret copy**: Added `allowEmpty: true` to the `instanceType` field in the VMInstance form so users can explicitly clear it to use custom KubeVirt resources without a named instance type. Also fixed newline preservation when copying secrets with CMD+C ([**@sircthulhu**](https://github.com/sircthulhu) in #2135, #2137).
* **[dashboard] Restore stock-instance sidebars for namespace-level pages**: Restored `stock-instance-api-form`, `stock-instance-api-table`, `stock-instance-builtin-form`, and `stock-instance-builtin-table` sidebar resources that were inadvertently removed in #2106. Without these sidebars, namespace-level pages such as Backup Plans rendered as empty pages ([**@sircthulhu**](https://github.com/sircthulhu) in #2136, #2138).
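For reference, minimal sketches of the configurations behind several of the fixes above. Each shows only the relevant fields; anything not named in the fix descriptions is an illustrative assumption.
The Cilium MTU pin in the tenant Kubernetes chart (the full values context appears in the diff further down):
```yaml
cilium:
  routingMode: tunnel
  MTU: 1350   # parent interface MTU 1400 minus 50 bytes of VXLAN overhead
```
The Helm keep annotation that protects the `cozy-system` Namespace (the same annotation is applied to the `cozystack-version` ConfigMap):
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: cozy-system
  annotations:
    helm.sh/resource-policy: keep   # Helm leaves this object in place on uninstall
```
The general shape of the APF exemption for the dashboard BFF; the service-account name and namespace here are assumptions, not taken from the chart:
```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1
kind: FlowSchema
metadata:
  name: cozy-dashboard-exempt
spec:
  priorityLevelConfiguration:
    name: exempt              # requests matching this schema bypass APF queuing
  matchingPrecedence: 1
  rules:
    - subjects:
        - kind: ServiceAccount
          serviceAccount:
            name: cozy-dashboard        # assumption
            namespace: cozy-dashboard   # assumption
      resourceRules:
        - verbs: ["*"]
          apiGroups: ["*"]
          resources: ["*"]
          clusterScope: true
          namespaces: ["*"]
```
The Keycloak proxy variables that replace the removed `KC_PROXY=edge` setting, as container environment on the StatefulSet:
```yaml
env:
  - name: KC_PROXY_HEADERS
    value: "xforwarded"
  - name: KC_HTTP_ENABLED
    value: "true"
```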
## System Configuration
* **[platform] Disable private key rotation in CA certs**: Set `rotationPolicy: Never` for all CA/root certificates used by system components (ingress-nginx, linstor, linstor-scheduler, seaweedfs, victoria-metrics-operator, kubeovn-webhook, lineage-controller-webhook, cozystack-api, etcd, linstor API/internal) to prevent trust chain problems when CA certificates are reissued ([**@myasnikovdaniil**](https://github.com/myasnikovdaniil) in #2113).
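In manifest terms this sets one field on each affected cert-manager Certificate (sketch; all other Certificate fields omitted):
```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
spec:
  privateKey:
    rotationPolicy: Never   # reuse the existing private key when the cert is reissued
```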
## Development, Testing, and CI/CD
* **[ci] Add debug improvements for CI tests**: Added extra debug commands for Kubernetes startup diagnostics and improved error output in CI test runs ([**@myasnikovdaniil**](https://github.com/myasnikovdaniil) in #2111).
## Documentation
* **[website] Add object storage guide (pools, buckets, users)**: Added a comprehensive guide covering SeaweedFS object storage configuration including storage pools for tiered storage, bucket creation with access classes, per-user credential management, and credential rotation procedures ([**@sircthulhu**](https://github.com/sircthulhu) in cozystack/website#438).
* **[website] Add Build Your Own Platform (BYOP) guide**: Added a new "Build Your Own Platform" guide and split the installation documentation into platform installation and BYOP sub-pages, with cross-references throughout the documentation ([**@kvaps**](https://github.com/kvaps) in cozystack/website#437).
* **[website] Add white labeling guide**: Added a comprehensive guide for configuring white labeling (branding) in Cozystack v1, covering Dashboard fields (`titleText`, `footerText`, `tenantText`, `logoText`, `logoSvg`, `iconSvg`) and Keycloak fields (`brandName`, `brandHtmlName`). Includes SVG preparation workflow with theme-aware template variables and portable base64 encoding ([**@lexfrei**](https://github.com/lexfrei) in cozystack/website#441).
* **[website] Update backup and recovery documentation**: Reworked the backup and recovery docs to be user-focused, separating operator and tenant workflows. Added tenant-facing documentation for `BackupJob` and `Plan` resources and a new Velero administration guide for operators ([**@androndo**](https://github.com/androndo) in cozystack/website#434).
* **[website] Add step to protect namespace before upgrading**: Updated the cluster upgrade guide and v0.41→v1.0 migration guide with a required step to annotate the `cozy-system` namespace and `cozystack-version` ConfigMap with `helm.sh/resource-policy=keep` before running `helm upgrade` ([**@kvaps**](https://github.com/kvaps) in cozystack/website#435).
* **[website] Replace bundles documentation with variants**: Renamed the "Bundles" documentation section to "Variants" to match current Cozystack terminology. Removed deprecated variants and added new ones: `default` and `isp-full-generic` ([**@kvaps**](https://github.com/kvaps) in cozystack/website#433).
* **[website] Fix component values override instructions**: Corrected the component values override documentation to reflect current configuration patterns ([**@kvaps**](https://github.com/kvaps) in cozystack/website#436).
## Breaking Changes & Upgrade Notes
* **[bucket] Bucket user model now requires explicit user definitions**: The implicit default `BucketAccess` resource is no longer created automatically. Existing buckets that relied on a single auto-generated credential secret will need to define users explicitly in the `users` map after upgrading. Each user entry creates its own `BucketAccess` resource and credential secret (optionally with `readonly: true`). The COSI BucketClass suffix has also been renamed from `-worm` to `-lock` ([**@IvanHunters**](https://github.com/IvanHunters) in #2119).
## Contributors
We'd like to thank all contributors who made this release possible:
* [**@androndo**](https://github.com/androndo)
* [**@IvanHunters**](https://github.com/IvanHunters)
* [**@kvaps**](https://github.com/kvaps)
* [**@lexfrei**](https://github.com/lexfrei)
* [**@myasnikovdaniil**](https://github.com/myasnikovdaniil)
* [**@sircthulhu**](https://github.com/sircthulhu)
---
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v1.0.0...v1.1.0

View File

@@ -10,11 +10,7 @@ PATTERN=${2:-*}
LINE='----------------------------------------------------------------'
cols() { stty size 2>/dev/null | awk '{print $2}' || echo 80; }
if [ -t 1 ]; then
MAXW=$(( $(cols) - 12 )); [ "$MAXW" -lt 40 ] && MAXW=70
else
MAXW=0 # no truncation when not a tty (e.g. CI)
fi
MAXW=$(( $(cols) - 12 )); [ "$MAXW" -lt 40 ] && MAXW=70
BEGIN=$(date +%s)
timestamp() { s=$(( $(date +%s) - BEGIN )); printf '[%02d:%02d]' $((s/60)) $((s%60)); }
@@ -49,7 +45,7 @@ run_one() {
*) out=$line ;;
esac
now=$(( $(date +%s) - START ))
[ "$MAXW" -gt 0 ] && [ ${#out} -gt "$MAXW" ] && out="$(printf '%.*s…' "$MAXW" "$out")"
[ ${#out} -gt "$MAXW" ] && out="$(printf '%.*s…' "$MAXW" "$out")"
printf '┊[%02d:%02d] %s\n' $((now/60)) $((now%60)) "$out"
done

View File

@@ -1,7 +1,7 @@
#!/usr/bin/env bats
@test "Create and Verify Seeweedfs Bucket" {
# Create the bucket resource with readwrite and readonly users
# Create the bucket resource
name='test'
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
@@ -9,29 +9,21 @@ kind: Bucket
metadata:
name: ${name}
namespace: tenant-test
spec:
users:
admin: {}
viewer:
readonly: true
spec: {}
EOF
# Wait for the bucket to be ready
kubectl -n tenant-test wait hr bucket-${name} --timeout=100s --for=condition=ready
kubectl -n tenant-test wait bucketclaims.objectstorage.k8s.io bucket-${name} --timeout=300s --for=jsonpath='{.status.bucketReady}'
kubectl -n tenant-test wait bucketaccesses.objectstorage.k8s.io bucket-${name}-admin --timeout=300s --for=jsonpath='{.status.accessGranted}'
kubectl -n tenant-test wait bucketaccesses.objectstorage.k8s.io bucket-${name}-viewer --timeout=300s --for=jsonpath='{.status.accessGranted}'
kubectl -n tenant-test wait bucketaccesses.objectstorage.k8s.io bucket-${name} --timeout=300s --for=jsonpath='{.status.accessGranted}'
# Get admin (readwrite) credentials
kubectl -n tenant-test get secret bucket-${name}-admin -ojsonpath='{.data.BucketInfo}' | base64 -d > bucket-admin-credentials.json
ADMIN_ACCESS_KEY=$(jq -r '.spec.secretS3.accessKeyID' bucket-admin-credentials.json)
ADMIN_SECRET_KEY=$(jq -r '.spec.secretS3.accessSecretKey' bucket-admin-credentials.json)
BUCKET_NAME=$(jq -r '.spec.bucketName' bucket-admin-credentials.json)
# Get and decode credentials
kubectl -n tenant-test get secret bucket-${name} -ojsonpath='{.data.BucketInfo}' | base64 -d > bucket-test-credentials.json
# Get viewer (readonly) credentials
kubectl -n tenant-test get secret bucket-${name}-viewer -ojsonpath='{.data.BucketInfo}' | base64 -d > bucket-viewer-credentials.json
VIEWER_ACCESS_KEY=$(jq -r '.spec.secretS3.accessKeyID' bucket-viewer-credentials.json)
VIEWER_SECRET_KEY=$(jq -r '.spec.secretS3.accessSecretKey' bucket-viewer-credentials.json)
# Get credentials from the secret
ACCESS_KEY=$(jq -r '.spec.secretS3.accessKeyID' bucket-test-credentials.json)
SECRET_KEY=$(jq -r '.spec.secretS3.accessSecretKey' bucket-test-credentials.json)
BUCKET_NAME=$(jq -r '.spec.bucketName' bucket-test-credentials.json)
# Start port-forwarding
bash -c 'timeout 100s kubectl port-forward service/seaweedfs-s3 -n tenant-root 8333:8333 > /dev/null 2>&1 &'
@@ -39,33 +31,17 @@ EOF
# Wait for port-forward to be ready
timeout 30 sh -ec 'until nc -z localhost 8333; do sleep 1; done'
# --- Test readwrite user (admin) ---
mc alias set rw-user https://localhost:8333 $ADMIN_ACCESS_KEY $ADMIN_SECRET_KEY --insecure
# Set up MinIO alias with error handling
mc alias set local https://localhost:8333 $ACCESS_KEY $SECRET_KEY --insecure
# Admin can upload
echo "readwrite test" > /tmp/rw-test.txt
mc cp --insecure /tmp/rw-test.txt rw-user/$BUCKET_NAME/rw-test.txt
# Upload file to bucket
mc cp bucket-test-credentials.json $BUCKET_NAME/bucket-test-credentials.json
# Admin can list
mc ls --insecure rw-user/$BUCKET_NAME/rw-test.txt
# Verify file was uploaded
mc ls $BUCKET_NAME/bucket-test-credentials.json
# Admin can download
mc cp --insecure rw-user/$BUCKET_NAME/rw-test.txt /tmp/rw-test-download.txt
# Clean up uploaded file
mc rm $BUCKET_NAME/bucket-test-credentials.json
# --- Test readonly user (viewer) ---
mc alias set ro-user https://localhost:8333 $VIEWER_ACCESS_KEY $VIEWER_SECRET_KEY --insecure
# Viewer can list
mc ls --insecure ro-user/$BUCKET_NAME/rw-test.txt
# Viewer can download
mc cp --insecure ro-user/$BUCKET_NAME/rw-test.txt /tmp/ro-test-download.txt
# Viewer cannot upload (must fail with Access Denied)
echo "readonly test" > /tmp/ro-test.txt
! mc cp --insecure /tmp/ro-test.txt ro-user/$BUCKET_NAME/ro-test.txt
# --- Cleanup ---
mc rm --insecure rw-user/$BUCKET_NAME/rw-test.txt
kubectl -n tenant-test delete bucket.apps.cozystack.io ${name}
}

View File

@@ -1,59 +0,0 @@
#!/usr/bin/env bats
@test "Create OpenBAO (standalone)" {
name='test'
kubectl apply -f- <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: OpenBAO
metadata:
name: $name
namespace: tenant-test
spec:
replicas: 1
size: 10Gi
storageClass: ""
resourcesPreset: "small"
resources: {}
external: false
ui: true
EOF
sleep 5
kubectl -n tenant-test wait hr openbao-$name --timeout=60s --for=condition=ready
kubectl -n tenant-test wait hr openbao-$name-system --timeout=120s --for=condition=ready
# Wait for container to be started (pod Running does not guarantee container is ready for exec on slow CI)
if ! timeout 120 sh -ec "until kubectl -n tenant-test get pod openbao-$name-0 --output jsonpath='{.status.containerStatuses[0].started}' 2>/dev/null | grep -q true; do sleep 5; done"; then
echo "=== DEBUG: Container did not start in time ===" >&2
kubectl -n tenant-test describe pod openbao-$name-0 >&2 || true
kubectl -n tenant-test logs openbao-$name-0 --previous >&2 || true
kubectl -n tenant-test logs openbao-$name-0 >&2 || true
return 1
fi
# Wait for OpenBAO API to accept connections
# bao status exit codes: 0 = unsealed, 1 = error/not ready, 2 = sealed but responsive
if ! timeout 60 sh -ec "until kubectl -n tenant-test exec openbao-$name-0 -- bao status >/dev/null 2>&1; rc=\$?; test \$rc -eq 0 -o \$rc -eq 2; do sleep 3; done"; then
echo "=== DEBUG: OpenBAO API did not become responsive ===" >&2
kubectl -n tenant-test describe pod openbao-$name-0 >&2 || true
kubectl -n tenant-test logs openbao-$name-0 --previous >&2 || true
kubectl -n tenant-test logs openbao-$name-0 >&2 || true
return 1
fi
# Initialize OpenBAO (single key share for testing simplicity)
init_output=$(kubectl -n tenant-test exec openbao-$name-0 -- bao operator init -key-shares=1 -key-threshold=1 -format=json)
unseal_key=$(echo "$init_output" | jq -r '.unseal_keys_b64[0]')
if [ -z "$unseal_key" ] || [ "$unseal_key" = "null" ]; then
echo "Failed to extract unseal key. Init output: $init_output" >&2
return 1
fi
# Unseal OpenBAO
kubectl -n tenant-test exec openbao-$name-0 -- bao operator unseal "$unseal_key"
# Now wait for pod to become ready (readiness probe checks seal status)
kubectl -n tenant-test wait sts openbao-$name --timeout=90s --for=jsonpath='{.status.readyReplicas}'=1
kubectl -n tenant-test wait pvc data-openbao-$name-0 --timeout=50s --for=jsonpath='{.status.phase}'=Bound
kubectl -n tenant-test delete openbao.apps.cozystack.io $name
kubectl -n tenant-test delete pvc data-openbao-$name-0 --ignore-not-found
}

View File

@@ -102,19 +102,15 @@ EOF
done
'
# Verify the nodes are ready
if ! kubectl --kubeconfig "tenantkubeconfig-${test_name}" wait node --all --timeout=2m --for=condition=Ready; then
# Additional debug messages
kubectl --kubeconfig "tenantkubeconfig-${test_name}" describe nodes
kubectl -n tenant-test get hr
fi
kubectl --kubeconfig "tenantkubeconfig-${test_name}" wait node --all --timeout=2m --for=condition=Ready
kubectl --kubeconfig "tenantkubeconfig-${test_name}" get nodes -o wide
# Verify the kubelet version matches what we expect
versions=$(kubectl --kubeconfig "tenantkubeconfig-${test_name}" \
get nodes -o jsonpath='{.items[*].status.nodeInfo.kubeletVersion}')
node_ok=true
for v in $versions; do
case "$v" in
"${k8s_version}" | "${k8s_version}".* | "${k8s_version}"-*)
@@ -197,7 +193,7 @@ EOF
# Wait for pods readiness
kubectl wait deployment --kubeconfig "tenantkubeconfig-${test_name}" "${test_name}-backend" -n tenant-test --for=condition=Available --timeout=300s
# Wait for LoadBalancer to be provisioned (IP or hostname)
timeout 90 sh -ec "
until kubectl get svc ${test_name}-backend --kubeconfig tenantkubeconfig-${test_name} -n tenant-test \

View File

@@ -32,54 +32,6 @@ if ! kubectl get namespace "$NAMESPACE" &> /dev/null; then
exit 1
fi
# Step 0: Annotate critical resources to prevent Helm from deleting them
echo "Step 0: Protect critical resources from Helm deletion"
echo ""
echo "The following resources will be annotated with helm.sh/resource-policy=keep"
echo "to prevent Helm from deleting them when the installer release is removed:"
echo " - Namespace: $NAMESPACE"
echo " - ConfigMap: $NAMESPACE/cozystack-version"
echo ""
read -p "Do you want to annotate these resources? (y/N) " -n 1 -r
echo ""
if [[ $REPLY =~ ^[Yy]$ ]]; then
echo "Annotating namespace $NAMESPACE..."
kubectl annotate namespace "$NAMESPACE" helm.sh/resource-policy=keep --overwrite
echo "Annotating ConfigMap cozystack-version..."
kubectl annotate configmap -n "$NAMESPACE" cozystack-version helm.sh/resource-policy=keep --overwrite 2>/dev/null || echo " ConfigMap cozystack-version not found, skipping."
echo ""
echo "Resources annotated successfully."
else
echo "WARNING: Skipping annotation. If you remove the Helm installer release,"
echo "the namespace and its contents may be deleted!"
fi
echo ""
# Step 1: Check for cozy-proxy HelmRelease with conflicting releaseName
# In v0.41.x, cozy-proxy was incorrectly configured with releaseName "cozystack",
# which conflicts with the installer helm release name. If not suspended, cozy-proxy
# HelmRelease will overwrite the installer release and delete cozystack-operator.
COZY_PROXY_RELEASE_NAME=$(kubectl get hr -n "$NAMESPACE" cozy-proxy -o jsonpath='{.spec.releaseName}' 2>/dev/null || true)
if [ "$COZY_PROXY_RELEASE_NAME" = "cozystack" ]; then
echo "WARNING: HelmRelease cozy-proxy has releaseName 'cozystack', which conflicts"
echo "with the installer release. It must be suspended before proceeding, otherwise"
echo "it will overwrite the installer and delete cozystack-operator."
echo ""
read -p "Suspend HelmRelease cozy-proxy? (y/N) " -n 1 -r
echo ""
if [[ $REPLY =~ ^[Yy]$ ]]; then
kubectl -n "$NAMESPACE" patch hr cozy-proxy --type=merge --field-manager=flux-client-side-apply -p '{"spec":{"suspend":true}}'
echo "HelmRelease cozy-proxy suspended."
else
echo "ERROR: Cannot proceed with conflicting cozy-proxy HelmRelease active."
echo "Please suspend it manually:"
echo " kubectl -n $NAMESPACE patch hr cozy-proxy --type=merge -p '{\"spec\":{\"suspend\":true}}'"
exit 1
fi
echo ""
fi
# Read ConfigMap cozystack
echo "Reading ConfigMap cozystack..."
COZYSTACK_CM=$(kubectl get configmap -n "$NAMESPACE" cozystack -o json 2>/dev/null || echo "{}")
@@ -100,10 +52,6 @@ OIDC_ENABLED=$(echo "$COZYSTACK_CM" | jq -r '.data["oidc-enabled"] // "false"')
KEYCLOAK_REDIRECTS=$(echo "$COZYSTACK_CM" | jq -r '.data["extra-keycloak-redirect-uri-for-dashboard"] // ""' )
TELEMETRY_ENABLED=$(echo "$COZYSTACK_CM" | jq -r '.data["telemetry-enabled"] // "true"')
BUNDLE_NAME=$(echo "$COZYSTACK_CM" | jq -r '.data["bundle-name"] // "paas-full"')
BUNDLE_DISABLE=$(echo "$COZYSTACK_CM" | jq -r '.data["bundle-disable"] // ""')
BUNDLE_ENABLE=$(echo "$COZYSTACK_CM" | jq -r '.data["bundle-enable"] // ""')
EXPOSE_INGRESS=$(echo "$COZYSTACK_CM" | jq -r '.data["expose-ingress"] // "tenant-root"')
EXPOSE_SERVICES=$(echo "$COZYSTACK_CM" | jq -r '.data["expose-services"] // ""')
# Certificate issuer configuration (old undocumented field: clusterissuer)
OLD_CLUSTER_ISSUER=$(echo "$COZYSTACK_CM" | jq -r '.data["clusterissuer"] // ""')
@@ -151,31 +99,28 @@ else
EXTERNAL_IPS=$(echo "$EXTERNAL_IPS" | sed 's/,/\n/g' | awk 'BEGIN{print}{print " - "$0}')
fi
# Convert comma-separated lists to YAML arrays
if [ -z "$BUNDLE_DISABLE" ]; then
DISABLED_PACKAGES="[]"
else
DISABLED_PACKAGES=$(echo "$BUNDLE_DISABLE" | sed 's/,/\n/g' | awk 'BEGIN{print}{print " - cozystack."$0}')
fi
if [ -z "$BUNDLE_ENABLE" ]; then
ENABLED_PACKAGES="[]"
else
ENABLED_PACKAGES=$(echo "$BUNDLE_ENABLE" | sed 's/,/\n/g' | awk 'BEGIN{print}{print " - cozystack."$0}')
fi
if [ -z "$EXPOSE_SERVICES" ]; then
EXPOSED_SERVICES_YAML="[]"
else
EXPOSED_SERVICES_YAML=$(echo "$EXPOSE_SERVICES" | sed 's/,/\n/g' | awk 'BEGIN{print}{print " - "$0}')
fi
# Determine bundle type
case "$BUNDLE_NAME" in
paas-full|distro-full)
SYSTEM_ENABLED="true"
SYSTEM_TYPE="full"
;;
paas-hosted|distro-hosted)
SYSTEM_ENABLED="false"
SYSTEM_TYPE="hosted"
;;
*)
SYSTEM_ENABLED="false"
SYSTEM_TYPE="hosted"
;;
esac
# Update bundle naming
BUNDLE_NAME=$(echo "$BUNDLE_NAME" | sed 's/paas/isp/')
# Extract branding if available
BRANDING=$(echo "$BRANDING_CM" | jq -r '.data // {} | to_entries[] | "\(.key): \"\(.value)\""')
if [ -z "$BRANDING" ]; then
if [ -z "$BRANDING" ]; then
BRANDING="{}"
else
BRANDING=$(echo "$BRANDING" | awk 'BEGIN{print}{print " " $0}')
@@ -196,6 +141,8 @@ echo " Root Host: $ROOT_HOST"
echo " API Server Endpoint: $API_SERVER_ENDPOINT"
echo " OIDC Enabled: $OIDC_ENABLED"
echo " Bundle Name: $BUNDLE_NAME"
echo " System Enabled: $SYSTEM_ENABLED"
echo " System Type: $SYSTEM_TYPE"
echo " Certificate Solver: ${SOLVER:-http01 (default)}"
echo " Issuer Name: ${ISSUER_NAME:-letsencrypt-prod (default)}"
echo ""
@@ -213,8 +160,15 @@ spec:
platform:
values:
bundles:
disabledPackages: $DISABLED_PACKAGES
enabledPackages: $ENABLED_PACKAGES
system:
enabled: $SYSTEM_ENABLED
type: "$SYSTEM_TYPE"
iaas:
enabled: true
paas:
enabled: true
naas:
enabled: true
networking:
clusterDomain: "$CLUSTER_DOMAIN"
podCIDR: "$POD_CIDR"
@@ -223,8 +177,6 @@ spec:
joinCIDR: "$JOIN_CIDR"
publishing:
host: "$ROOT_HOST"
ingressName: "$EXPOSE_INGRESS"
exposedServices: $EXPOSED_SERVICES_YAML
apiServerEndpoint: "$API_SERVER_ENDPOINT"
externalIPs: $EXTERNAL_IPS
${CERTIFICATES_SECTION}

View File

@@ -156,7 +156,7 @@ menuItems = append(menuItems, map[string]any{
map[string]any{
"key": "{plural}",
"label": "{ResourceLabel}",
"link": "/openapi-ui/{cluster}/{namespace}/api-table/{group}/{version}/{plural}",
"link": "/openapi-ui/{clusterName}/{namespace}/api-table/{group}/{version}/{plural}",
},
},
}),
@@ -174,7 +174,7 @@ menuItems = append(menuItems, map[string]any{
**Important Notes**:
- The sidebar tag (`{lowercase-kind}-sidebar`) must match what the Factory uses
- The link format: `/openapi-ui/{cluster}/{namespace}/api-table/{group}/{version}/{plural}`
- The link format: `/openapi-ui/{clusterName}/{namespace}/api-table/{group}/{version}/{plural}`
- All sidebars share the same `keysAndTags` and `menuItems`, so changes affect all sidebar instances
### Step 4: Verify Integration

View File

@@ -195,7 +195,6 @@ func applyListInputOverrides(schema map[string]any, kind string, openAPIProps ma
"valueUri": "/api/clusters/{cluster}/k8s/apis/instancetype.kubevirt.io/v1beta1/virtualmachineclusterinstancetypes",
"keysToValue": []any{"metadata", "name"},
"keysToLabel": []any{"metadata", "name"},
"allowEmpty": true,
},
}
if prop, _ := openAPIProps["instanceType"].(map[string]any); prop != nil {
@@ -215,34 +214,6 @@ func applyListInputOverrides(schema map[string]any, kind string, openAPIProps ma
"keysToLabel": []any{"metadata", "name"},
},
}
case "ClickHouse", "Harbor", "HTTPCache", "Kubernetes", "MariaDB", "MongoDB",
"NATS", "OpenBAO", "Postgres", "Qdrant", "RabbitMQ", "Redis", "VMDisk":
specProps := ensureSchemaPath(schema, "spec")
specProps["storageClass"] = storageClassListInput()
case "FoundationDB":
storageProps := ensureSchemaPath(schema, "spec", "storage")
storageProps["storageClass"] = storageClassListInput()
case "Kafka":
kafkaProps := ensureSchemaPath(schema, "spec", "kafka")
kafkaProps["storageClass"] = storageClassListInput()
zkProps := ensureSchemaPath(schema, "spec", "zookeeper")
zkProps["storageClass"] = storageClassListInput()
}
}
// storageClassListInput returns a listInput field config for a storageClass dropdown
// backed by the cluster's available StorageClasses.
func storageClassListInput() map[string]any {
return map[string]any{
"type": "listInput",
"customProps": map[string]any{
"valueUri": "/api/clusters/{cluster}/k8s/apis/storage.k8s.io/v1/storageclasses",
"keysToValue": []any{"metadata", "name"},
"keysToLabel": []any{"metadata", "name"},
},
}
}

View File

@@ -202,10 +202,6 @@ func TestApplyListInputOverrides_VMInstance(t *testing.T) {
t.Errorf("expected valueUri %s, got %v", expectedURI, customProps["valueUri"])
}
if customProps["allowEmpty"] != true {
t.Errorf("expected allowEmpty true, got %v", customProps["allowEmpty"])
}
// Check disks[].name is a listInput
disks, ok := specProps["disks"].(map[string]any)
if !ok {
@@ -236,72 +232,6 @@ func TestApplyListInputOverrides_VMInstance(t *testing.T) {
}
}
func TestApplyListInputOverrides_StorageClassSimple(t *testing.T) {
for _, kind := range []string{
"ClickHouse", "Harbor", "HTTPCache", "Kubernetes", "MariaDB", "MongoDB",
"NATS", "OpenBAO", "Postgres", "Qdrant", "RabbitMQ", "Redis", "VMDisk",
} {
t.Run(kind, func(t *testing.T) {
schema := map[string]any{}
applyListInputOverrides(schema, kind, map[string]any{})
specProps := schema["properties"].(map[string]any)["spec"].(map[string]any)["properties"].(map[string]any)
sc, ok := specProps["storageClass"].(map[string]any)
if !ok {
t.Fatalf("storageClass not found in spec.properties for kind %s", kind)
}
assertStorageClassListInput(t, sc)
})
}
}
func TestApplyListInputOverrides_StorageClassFoundationDB(t *testing.T) {
schema := map[string]any{}
applyListInputOverrides(schema, "FoundationDB", map[string]any{})
storageProps := schema["properties"].(map[string]any)["spec"].(map[string]any)["properties"].(map[string]any)["storage"].(map[string]any)["properties"].(map[string]any)
sc, ok := storageProps["storageClass"].(map[string]any)
if !ok {
t.Fatal("storageClass not found in spec.storage.properties")
}
assertStorageClassListInput(t, sc)
}
func TestApplyListInputOverrides_StorageClassKafka(t *testing.T) {
schema := map[string]any{}
applyListInputOverrides(schema, "Kafka", map[string]any{})
specProps := schema["properties"].(map[string]any)["spec"].(map[string]any)["properties"].(map[string]any)
kafkaSC, ok := specProps["kafka"].(map[string]any)["properties"].(map[string]any)["storageClass"].(map[string]any)
if !ok {
t.Fatal("storageClass not found in spec.kafka.properties")
}
assertStorageClassListInput(t, kafkaSC)
zkSC, ok := specProps["zookeeper"].(map[string]any)["properties"].(map[string]any)["storageClass"].(map[string]any)
if !ok {
t.Fatal("storageClass not found in spec.zookeeper.properties")
}
assertStorageClassListInput(t, zkSC)
}
// assertStorageClassListInput verifies that a field is a correctly configured storageClass listInput.
func assertStorageClassListInput(t *testing.T, field map[string]any) {
t.Helper()
if field["type"] != "listInput" {
t.Errorf("expected type listInput, got %v", field["type"])
}
customProps, ok := field["customProps"].(map[string]any)
if !ok {
t.Fatal("customProps not found")
}
expectedURI := "/api/clusters/{cluster}/k8s/apis/storage.k8s.io/v1/storageclasses"
if customProps["valueUri"] != expectedURI {
t.Errorf("expected valueUri %s, got %v", expectedURI, customProps["valueUri"])
}
}
func TestApplyListInputOverrides_UnknownKind(t *testing.T) {
schema := map[string]any{}
applyListInputOverrides(schema, "SomeOtherKind", map[string]any{})

View File

@@ -582,14 +582,15 @@ type factoryFlags struct {
Secrets bool
}
// factoryFeatureFlags determines which tabs to show based on whether the
// ApplicationDefinition has non-empty Include resource selectors.
// Workloads tab is always shown.
// factoryFeatureFlags tries several conventional locations so you can evolve the API
// without breaking the controller. Defaults are false (hidden).
func factoryFeatureFlags(crd *cozyv1alpha1.ApplicationDefinition) factoryFlags {
return factoryFlags{
Workloads: true,
Ingresses: len(crd.Spec.Ingresses.Include) > 0,
Services: len(crd.Spec.Services.Include) > 0,
Secrets: len(crd.Spec.Secrets.Include) > 0,
}
var f factoryFlags
f.Workloads = true
f.Ingresses = true
f.Services = true
f.Secrets = true
return f
}

View File

@@ -299,6 +299,10 @@ func (m *Manager) buildExpectedResourceSet(crds []cozyv1alpha1.ApplicationDefini
// Add other stock sidebars that are created for each CRD
stockSidebars := []string{
"stock-instance-api-form",
"stock-instance-api-table",
"stock-instance-builtin-form",
"stock-instance-builtin-table",
"stock-project-factory-marketplace",
"stock-project-factory-workloadmonitor-details",
"stock-project-api-form",
@@ -307,10 +311,6 @@ func (m *Manager) buildExpectedResourceSet(crds []cozyv1alpha1.ApplicationDefini
"stock-project-builtin-table",
"stock-project-crd-form",
"stock-project-crd-table",
"stock-instance-api-form",
"stock-instance-api-table",
"stock-instance-builtin-form",
"stock-instance-builtin-table",
}
for _, sidebarID := range stockSidebars {
expected["Sidebar"][sidebarID] = true

View File

@@ -17,7 +17,8 @@ import (
// ensureSidebar creates/updates multiple Sidebar resources that share the same menu:
// - The "details" sidebar tied to the current kind (stock-project-factory-<kind>-details)
// - The stock-project sidebars: api-form, api-table, builtin-form, builtin-table, crd-form, crd-table
// - The stock-instance sidebars: api-form, api-table, builtin-form, builtin-table
// - The stock-project sidebars: api-form, api-table, builtin-form, builtin-table, crd-form, crd-table
//
// Menu rules:
// - The first section is "Marketplace" with two hardcoded entries:
@@ -175,23 +176,23 @@ func (m *Manager) ensureSidebar(ctx context.Context, crd *cozyv1alpha1.Applicati
// Add hardcoded Backups section
menuItems = append(menuItems, map[string]any{
"key": "backups-category",
"key": "backups",
"label": "Backups",
"children": []any{
map[string]any{
"key": "plans",
"label": "Plans",
"link": "/openapi-ui/{cluster}/{namespace}/api-table/backups.cozystack.io/v1alpha1/plans",
"link": "/openapi-ui/{clusterName}/{namespace}/api-table/backups.cozystack.io/v1alpha1/plans",
},
map[string]any{
"key": "backupjobs",
"label": "BackupJobs",
"link": "/openapi-ui/{cluster}/{namespace}/api-table/backups.cozystack.io/v1alpha1/backupjobs",
"link": "/openapi-ui/{clusterName}/{namespace}/api-table/backups.cozystack.io/v1alpha1/backupjobs",
},
map[string]any{
"key": "backups",
"label": "Backups",
"link": "/openapi-ui/{cluster}/{namespace}/api-table/backups.cozystack.io/v1alpha1/backups",
"link": "/openapi-ui/{clusterName}/{namespace}/api-table/backups.cozystack.io/v1alpha1/backups",
},
},
})
@@ -214,7 +215,7 @@ func (m *Manager) ensureSidebar(ctx context.Context, crd *cozyv1alpha1.Applicati
map[string]any{
"key": "loadbalancer-services",
"label": "External IPs",
"link": "/openapi-ui/{cluster}/{namespace}/factory/external-ips",
"link": "/openapi-ui/{clusterName}/{namespace}/factory/external-ips",
},
map[string]any{
"key": "tenants",
@@ -227,7 +228,13 @@ func (m *Manager) ensureSidebar(ctx context.Context, crd *cozyv1alpha1.Applicati
// 6) Prepare the list of Sidebar IDs to upsert with the SAME content
// Create sidebars for ALL CRDs with dashboard config
targetIDs := []string{
// stock-project sidebars (namespace-level, full menu)
// stock-instance sidebars
"stock-instance-api-form",
"stock-instance-api-table",
"stock-instance-builtin-form",
"stock-instance-builtin-table",
// stock-project sidebars
"stock-project-factory-marketplace",
"stock-project-factory-workloadmonitor-details",
"stock-project-factory-kube-service-details",
@@ -243,11 +250,6 @@ func (m *Manager) ensureSidebar(ctx context.Context, crd *cozyv1alpha1.Applicati
"stock-project-builtin-table",
"stock-project-crd-form",
"stock-project-crd-table",
// stock-instance sidebars (namespace-level pages after namespace is selected)
"stock-instance-api-form",
"stock-instance-api-table",
"stock-instance-builtin-form",
"stock-instance-builtin-table",
}
// Add details sidebars for all CRDs with dashboard config

View File

@@ -503,27 +503,18 @@ func CreateAllCustomFormsOverrides() []*dashboardv1alpha1.CustomFormsOverride {
createFormItem("metadata.namespace", "Namespace", "text"),
createFormItem("spec.applicationRef.kind", "Application Kind", "text"),
createFormItem("spec.applicationRef.name", "Application Name", "text"),
createFormItemWithAPI("spec.backupClassName", "Backup Class", "select", map[string]any{
"api": map[string]any{
"fetchUrl": "/api/clusters/{clusterName}/k8s/apis/backups.cozystack.io/v1alpha1/backupclasses",
"pathToItems": []any{"items"},
"pathToValue": []any{"metadata", "name"},
"pathToLabel": []any{"metadata", "name"},
"clusterNameVar": "clusterName",
},
}),
createFormItem("spec.schedule.type", "Schedule Type", "text"),
createFormItem("spec.schedule.cron", "Schedule Cron", "text"),
},
"schema": createSchema(map[string]any{
"backupClassName": listInputScemaItemBackupClass(),
}),
}),
// BackupJobs form override - backups.cozystack.io/v1alpha1
createCustomFormsOverride("default-/backups.cozystack.io/v1alpha1/backupjobs", map[string]any{
"formItems": []any{
createFormItem("metadata.name", "Name", "text"),
createFormItem("metadata.namespace", "Namespace", "text"),
createFormItem("spec.planRef.name", "Plan Name (optional)", "text"),
createFormItem("spec.applicationRef.apiGroup", "Application API Group", "text"),
createFormItem("spec.applicationRef.kind", "Application Kind", "text"),
createFormItem("spec.applicationRef.name", "Application Name", "text"),
},
"schema": createSchema(map[string]any{
"backupClassName": listInputScemaItemBackupClass(),
}),
}),
}
}
@@ -2051,9 +2042,9 @@ func createCustomFormsOverride(customizationId string, spec map[string]any) *das
"strategy": "merge",
}
// Merge into newSpec caller-provided fields without: customizationId, hidden, strategy
// Merge caller-provided fields (like formItems) into newSpec
for key, value := range spec {
if key != "customizationId" && key != "hidden" && key != "strategy" {
if key != "customizationId" && key != "hidden" && key != "schema" && key != "strategy" {
newSpec[key] = value
}
}
@@ -2098,28 +2089,6 @@ func createNavigation(name string, spec map[string]any) *dashboardv1alpha1.Navig
}
}
func listInputScemaItemBackupClass() map[string]any {
return map[string]any{
"type": "listInput",
"customProps": map[string]any{
"valueUri": "/api/clusters/{cluster}/k8s/apis/backups.cozystack.io/v1alpha1/backupclasses",
"keysToValue": []any{"metadata", "name"},
"keysToLabel": []any{"metadata", "name"},
},
}
}
// backupClassSchema returns the schema for spec.backupClassName as listInput (BackupJob/Plan).
func createSchema(customProps map[string]any) map[string]any {
return map[string]any{
"properties": map[string]any{
"spec": map[string]any{
"properties": customProps,
},
},
}
}
// createFormItem creates a form item for CustomFormsOverride
func createFormItem(path, label, fieldType string) map[string]any {
return map[string]any{

View File

@@ -2,4 +2,5 @@ include ../../../hack/package.mk
generate:
cozyvalues-gen -v values.yaml -s values.schema.json -r README.md
yq -o json -i '.properties = {}' values.schema.json
../../../hack/update-crd.sh

View File

@@ -1,13 +1,3 @@
# S3 bucket
## Parameters
### Parameters
| Name | Description | Type | Value |
| ---------------------- | -------------------------------------------------------------------------- | ------------------- | ------- |
| `locking` | Provisions bucket from the `-lock` BucketClass (with object lock enabled). | `bool` | `false` |
| `storagePool` | Selects a specific BucketClass by storage pool name. | `string` | `""` |
| `users` | Users configuration map. | `map[string]object` | `{}` |
| `users[name].readonly` | Whether the user has read-only access. | `bool` | `false` |

View File

@@ -1,22 +1,19 @@
{{- $seaweedfs := .Values._namespace.seaweedfs }}
{{- $pool := .Values.storagePool }}
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClaim
metadata:
name: {{ .Release.Name }}
spec:
bucketClassName: {{ $seaweedfs }}{{- if $pool }}-{{ $pool }}{{- end }}{{- if .Values.locking }}-lock{{- end }}
bucketClassName: {{ $seaweedfs }}
protocols:
- s3
{{- range $name, $user := .Values.users }}
---
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketAccess
metadata:
name: {{ $.Release.Name }}-{{ $name }}
name: {{ .Release.Name }}
spec:
bucketAccessClassName: {{ $seaweedfs }}{{- if $pool }}-{{ $pool }}{{- end }}{{- if $user.readonly }}-readonly{{- end }}
bucketClaimName: {{ $.Release.Name }}
credentialsSecretName: {{ $.Release.Name }}-{{ $name }}
bucketAccessClassName: {{ $seaweedfs }}
bucketClaimName: {{ .Release.Name }}
credentialsSecretName: {{ .Release.Name }}
protocol: s3
{{- end }}

View File

@@ -8,9 +8,8 @@ rules:
resources:
- secrets
resourceNames:
{{- range $name, $user := .Values.users }}
- {{ $.Release.Name }}-{{ $name }}-credentials
{{- end }}
- {{ .Release.Name }}
- {{ .Release.Name }}-credentials
verbs: ["get", "list", "watch"]
- apiGroups:
- networking.k8s.io

View File

@@ -23,4 +23,3 @@ spec:
name: cozystack-values
values:
bucketName: {{ .Release.Name }}
users: {{ .Values.users | toJson }}

View File

@@ -1,30 +1,5 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"locking": {
"description": "Provisions bucket from the `-lock` BucketClass (with object lock enabled).",
"type": "boolean",
"default": false
},
"storagePool": {
"description": "Selects a specific BucketClass by storage pool name.",
"type": "string",
"default": ""
},
"users": {
"description": "Users configuration map.",
"type": "object",
"default": {},
"additionalProperties": {
"type": "object",
"properties": {
"readonly": {
"description": "Whether the user has read-only access.",
"type": "boolean"
}
}
}
}
}
}
"properties": {}
}

View File

@@ -1,11 +1 @@
## @param {bool} locking=false - Provisions bucket from the `-lock` BucketClass (with object lock enabled).
locking: false
## @param {string} [storagePool] - Selects a specific BucketClass by storage pool name.
storagePool: ""
## @typedef {struct} User - Bucket user configuration.
## @field {bool} [readonly] - Whether the user has read-only access.
## @param {map[string]User} users - Users configuration map.
users: {}
{}

View File

@@ -1,4 +1,4 @@
# Managed FoundationDB Service
# FoundationDB
A managed FoundationDB service for Cozystack.

View File

@@ -1,6 +1,6 @@
# Managed Harbor Container Registry
Harbor is an open-source trusted cloud-native registry project that stores, signs, and scans content.
Harbor is an open source trusted cloud native registry project that stores, signs, and scans content.
## Parameters

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/cluster-autoscaler:0.0.0@sha256:3753b735b0315bee90de54cb25cfebc63bd2cc90ad11ca4fdc0e70439abd5096
ghcr.io/cozystack/cozystack/cluster-autoscaler:0.0.0@sha256:7deeee117e7eec599cb453836ca95eadd131dfc8c875dc457ef29dc1433395e0

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:faaa6bcdb68196edb4baafe643679bd7d2ef35f910c639b71e06a4ecc034f232
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:604561e23df1b8eb25c24cf73fd93c7aaa6d1e7c56affbbda5c6f0f83424e4b1

View File

@@ -3,7 +3,6 @@ cilium:
k8sServiceHost: {{ .Release.Name }}.{{ .Release.Namespace }}.svc
k8sServicePort: 6443
routingMode: tunnel
MTU: 1350
enableIPv4Masquerade: true
ipv4NativeRoutingCIDR: ""
{{- if $.Values.addons.gatewayAPI.enabled }}

View File

@@ -15,7 +15,7 @@ This managed service is controlled by mariadb-operator, ensuring efficient manag
### How to switch master/slave replica
```bash
kubectl edit mariadb <instance>
kubectl edit mariadb <instnace>
```
update:
@@ -54,11 +54,11 @@ more details:
- **Replication can't be finished with various errors**
- **Replication can't be finished in case if `binlog` purged**
Until `mariadbbackup` is not used to bootstrap a node by mariadb-operator (this feature is not implemented yet), follow these manual steps to fix it:
Until `mariadbbackup` is not used to bootstrap a node by mariadb-operator (this feature is not inmplemented yet), follow these manual steps to fix it:
https://github.com/mariadb-operator/mariadb-operator/issues/141#issuecomment-1804760231
- **Corrupted indices**
Sometimes some indices can be corrupted on master replica, you can recover them from slave:
- **Corrupted indicies**
Sometimes some indecies can be corrupted on master replica, you can recover them from slave:
```bash
mysqldump -h <slave> -P 3306 -u<user> -p<password> --column-statistics=0 <database> <table> ~/tmp/fix-table.sql

View File

@@ -1,3 +0,0 @@
.helmignore
/logos
/Makefile

View File

@@ -1,7 +0,0 @@
apiVersion: v2
name: openbao
description: Managed OpenBAO secrets management service
icon: /logos/openbao.svg
type: application
version: 0.0.0 # Placeholder, the actual version will be automatically set during the build process
appVersion: "2.5.0"

View File

@@ -1,5 +0,0 @@
include ../../../hack/package.mk
generate:
cozyvalues-gen -v values.yaml -s values.schema.json -r README.md
../../../hack/update-crd.sh

View File

@@ -1,27 +0,0 @@
# Managed OpenBAO Service
OpenBAO is an open-source secrets management solution forked from HashiCorp Vault.
It provides identity-based secrets and encryption management for cloud infrastructure.
## Parameters
### Common parameters
| Name | Description | Type | Value |
| ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------- | ------- |
| `replicas` | Number of OpenBAO replicas. HA with Raft is automatically enabled when replicas > 1. Switching between standalone (file storage) and HA (Raft storage) modes requires data migration. | `int` | `1` |
| `resources` | Explicit CPU and memory configuration for each OpenBAO replica. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
| `resources.cpu` | CPU available to each replica. | `quantity` | `""` |
| `resources.memory` | Memory (RAM) available to each replica. | `quantity` | `""` |
| `resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `small` |
| `size` | Persistent Volume Claim size for data storage. | `quantity` | `10Gi` |
| `storageClass` | StorageClass used to store the data. | `string` | `""` |
| `external` | Enable external access from outside the cluster. | `bool` | `false` |
### Application-specific parameters
| Name | Description | Type | Value |
| ---- | -------------------------- | ------ | ------ |
| `ui` | Enable the OpenBAO web UI. | `bool` | `true` |

View File

@@ -1 +0,0 @@
../../../library/cozy-lib

View File

@@ -1,11 +0,0 @@
<svg width="144" height="144" viewBox="0 0 144 144" fill="none" xmlns="http://www.w3.org/2000/svg">
<rect width="144" height="144" rx="24" fill="url(#paint0_linear)"/>
<rect width="144" height="144" rx="24" fill="black" fill-opacity="0.3"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M72 30C53.222 30 38 45.222 38 64v8c-3.314 0-6 2.686-6 6v30c0 3.314 2.686 6 6 6h68c3.314 0 6-2.686 6-6V78c0-3.314-2.686-6-6-6v-8C106 45.222 90.778 30 72 30zm-8 42v-8c0-4.418 3.582-8 8-8s8 3.582 8 8v8H64zm26 0v-8c0-8.837-7.163-16-16-16s-16 7.163-16 16v8h-2v28h60V72H90zm-22 14a4 4 0 118 0 4 4 0 01-8 0zm4-8a8 8 0 100 16 8 8 0 000-16z" fill="white"/>
<defs>
<linearGradient id="paint0_linear" x1="10" y1="15.5" x2="144" y2="131.5" gradientUnits="userSpaceOnUse">
<stop stop-color="#87d6be"/>
<stop offset="1" stop-color="#79c0ab"/>
</linearGradient>
</defs>
</svg>


View File

@@ -1,49 +0,0 @@
{{/*
Copyright Broadcom, Inc. All Rights Reserved.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "128Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "256Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "512Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "1Gi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1" "memory" "2Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "2Gi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "2" "memory" "4Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "4Gi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "4" "memory" "8Gi" "ephemeral-storage" "50Mi")
"limits" (dict "memory" "8Gi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}

View File

@@ -1,31 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ .Release.Name }}-dashboard-resources
rules:
- apiGroups:
- ""
resources:
- services
resourceNames:
- {{ .Release.Name }}
- {{ .Release.Name }}-internal
verbs: ["get", "list", "watch"]
- apiGroups:
- cozystack.io
resources:
- workloadmonitors
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -1,99 +0,0 @@
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: {{ .Release.Name }}-system
labels:
sharding.fluxcd.io/key: tenants
spec:
chartRef:
kind: ExternalArtifact
name: cozystack-openbao-application-default-openbao-system
namespace: cozy-system
interval: 5m
timeout: 10m
install:
remediation:
retries: -1
upgrade:
force: true
remediation:
retries: -1
valuesFrom:
- kind: Secret
name: cozystack-values
values:
openbao:
fullnameOverride: {{ .Release.Name }}
global:
tlsDisable: true
server:
podManagementPolicy: Parallel
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list .Values.resourcesPreset .Values.resources $) | nindent 10 }}
dataStorage:
enabled: true
size: {{ .Values.size }}
{{- with .Values.storageClass }}
storageClass: {{ . }}
{{- end }}
{{- if gt (int .Values.replicas) 1 }}
standalone:
enabled: false
ha:
enabled: true
replicas: {{ .Values.replicas }}
raft:
enabled: true
setNodeId: true
config: |
ui = {{ .Values.ui }}
listener "tcp" {
address = "[::]:8200"
cluster_address = "[::]:8201"
tls_disable = true
}
storage "raft" {
path = "/openbao/data"
{{- range $i := until (int $.Values.replicas) }}
retry_join {
leader_api_addr = "http://{{ $.Release.Name }}-{{ $i }}.{{ $.Release.Name }}-internal:8200"
}
{{- end }}
}
service_registration "kubernetes" {}
{{- else }}
standalone:
enabled: true
config: |
ui = {{ .Values.ui }}
listener "tcp" {
address = "[::]:8200"
cluster_address = "[::]:8201"
tls_disable = true
}
storage "file" {
path = "/openbao/data"
}
# Note: service_registration "kubernetes" {} is intentionally omitted
# in standalone mode — it requires an HA-capable storage backend and
# causes a fatal error with storage "file".
ha:
enabled: false
{{- end }}
{{- if .Values.external }}
service:
type: LoadBalancer
{{- end }}
ui:
enabled: {{ .Values.ui }}
{{- if .Values.external }}
serviceType: LoadBalancer
{{- end }}
injector:
enabled: false
csi:
enabled: false

View File

@@ -1,13 +0,0 @@
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}
spec:
replicas: {{ .Values.replicas }}
minReplicas: 1
kind: openbao
type: openbao
selector:
app.kubernetes.io/instance: {{ $.Release.Name }}-system
version: {{ $.Chart.Version }}

View File

@@ -1,87 +0,0 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"external": {
"description": "Enable external access from outside the cluster.",
"type": "boolean",
"default": false
},
"replicas": {
"description": "Number of OpenBAO replicas. HA with Raft is automatically enabled when replicas \u003e 1. Switching between standalone (file storage) and HA (Raft storage) modes requires data migration.",
"type": "integer",
"default": 1
},
"resources": {
"description": "Explicit CPU and memory configuration for each OpenBAO replica. When omitted, the preset defined in `resourcesPreset` is applied.",
"type": "object",
"default": {},
"properties": {
"cpu": {
"description": "CPU available to each replica.",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
},
"memory": {
"description": "Memory (RAM) available to each replica.",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
}
}
},
"resourcesPreset": {
"description": "Default sizing preset used when `resources` is omitted.",
"type": "string",
"default": "small",
"enum": [
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
},
"size": {
"description": "Persistent Volume Claim size for data storage.",
"default": "10Gi",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
},
"storageClass": {
"description": "StorageClass used to store the data.",
"type": "string",
"default": ""
},
"ui": {
"description": "Enable the OpenBAO web UI.",
"type": "boolean",
"default": true
}
}
}

View File

@@ -1,41 +0,0 @@
##
## @section Common parameters
##
## @typedef {struct} Resources - Explicit CPU and memory configuration for each OpenBAO replica.
## @field {quantity} [cpu] - CPU available to each replica.
## @field {quantity} [memory] - Memory (RAM) available to each replica.
## @enum {string} ResourcesPreset - Default sizing preset.
## @value nano
## @value micro
## @value small
## @value medium
## @value large
## @value xlarge
## @value 2xlarge
## @param {int} replicas - Number of OpenBAO replicas. HA with Raft is automatically enabled when replicas > 1. Switching between standalone (file storage) and HA (Raft storage) modes requires data migration.
replicas: 1
## @param {Resources} [resources] - Explicit CPU and memory configuration for each OpenBAO replica. When omitted, the preset defined in `resourcesPreset` is applied.
resources: {}
## @param {ResourcesPreset} resourcesPreset="small" - Default sizing preset used when `resources` is omitted.
resourcesPreset: "small"
## @param {quantity} size - Persistent Volume Claim size for data storage.
size: 10Gi
## @param {string} storageClass - StorageClass used to store the data.
storageClass: ""
## @param {bool} external - Enable external access from outside the cluster.
external: false
##
## @section Application-specific parameters
##
## @param {bool} ui - Enable the OpenBAO web UI.
ui: true

View File

@@ -4,4 +4,4 @@ description: Managed RabbitMQ service
icon: /logos/rabbitmq.svg
type: application
version: 0.0.0 # Placeholder, the actual version will be automatically set during the build process
appVersion: "4.2.4"
appVersion: "3.13.2"

View File

@@ -3,7 +3,3 @@ include ../../../hack/package.mk
generate:
cozyvalues-gen -v values.yaml -s values.schema.json -r README.md
../../../hack/update-crd.sh
update:
hack/update-versions.sh
make generate

View File

@@ -23,7 +23,6 @@ The service utilizes official RabbitMQ operator. This ensures the reliability an
| `size` | Persistent Volume Claim size available for application data. | `quantity` | `10Gi` |
| `storageClass` | StorageClass used to store the data. | `string` | `""` |
| `external` | Enable external access from outside the cluster. | `bool` | `false` |
| `version` | RabbitMQ major.minor version to deploy | `string` | `v4.2` |
### Application-specific parameters

View File

@@ -1,4 +0,0 @@
"v4.2": "4.2.4"
"v4.1": "4.1.8"
"v4.0": "4.0.9"
"v3.13": "3.13.7"

View File

@@ -1,129 +0,0 @@
#!/usr/bin/env bash
set -o errexit
set -o nounset
set -o pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
RABBITMQ_DIR="$(cd "${SCRIPT_DIR}/.." && pwd)"
VALUES_FILE="${RABBITMQ_DIR}/values.yaml"
VERSIONS_FILE="${RABBITMQ_DIR}/files/versions.yaml"
GITHUB_API_URL="https://api.github.com/repos/rabbitmq/rabbitmq-server/releases"
# Check if jq is installed
if ! command -v jq &> /dev/null; then
echo "Error: jq is not installed. Please install jq and try again." >&2
exit 1
fi
# Fetch releases from GitHub API
echo "Fetching releases from GitHub API..."
RELEASES_JSON=$(curl -sSL "${GITHUB_API_URL}?per_page=100")
if [ -z "$RELEASES_JSON" ]; then
echo "Error: Could not fetch releases from GitHub API" >&2
exit 1
fi
# Extract stable release tags (format: v3.13.7, v4.0.3, etc.)
# Filter out pre-releases and draft releases
RELEASE_TAGS=$(echo "$RELEASES_JSON" | jq -r '.[] | select(.prerelease == false) | select(.draft == false) | .tag_name' | grep -E '^v[0-9]+\.[0-9]+\.[0-9]+$' | sort -V)
if [ -z "$RELEASE_TAGS" ]; then
echo "Error: Could not find any stable release tags" >&2
exit 1
fi
echo "Found release tags: $(echo "$RELEASE_TAGS" | tr '\n' ' ')"
# Supported major.minor versions (newest first)
# We support the last few minor releases of each active major
SUPPORTED_MAJORS=("4.2" "4.1" "4.0" "3.13")
# Build versions map: major.minor -> latest patch version
declare -A VERSION_MAP
MAJOR_VERSIONS=()
for major_minor in "${SUPPORTED_MAJORS[@]}"; do
# Find the latest patch version for this major.minor
MATCHING=$(echo "$RELEASE_TAGS" | grep -E "^v${major_minor//./\\.}\.[0-9]+$" | tail -n1)
if [ -n "$MATCHING" ]; then
# Strip the 'v' prefix for the value (Docker tag format is e.g. 3.13.7)
TAG_VERSION="${MATCHING#v}"
VERSION_MAP["v${major_minor}"]="${TAG_VERSION}"
MAJOR_VERSIONS+=("v${major_minor}")
echo "Found version: v${major_minor} -> ${TAG_VERSION}"
else
echo "Warning: No stable releases found for ${major_minor}, skipping..." >&2
fi
done
if [ ${#MAJOR_VERSIONS[@]} -eq 0 ]; then
echo "Error: No matching versions found" >&2
exit 1
fi
echo "Major versions to add: ${MAJOR_VERSIONS[*]}"
# Create/update versions.yaml file
echo "Updating $VERSIONS_FILE..."
{
for major_ver in "${MAJOR_VERSIONS[@]}"; do
echo "\"${major_ver}\": \"${VERSION_MAP[$major_ver]}\""
done
} > "$VERSIONS_FILE"
echo "Successfully updated $VERSIONS_FILE"
# Update values.yaml - enum with major.minor versions only
TEMP_FILE=$(mktemp)
trap "rm -f $TEMP_FILE" EXIT
# Build new version section
NEW_VERSION_SECTION="## @enum {string} Version"
for major_ver in "${MAJOR_VERSIONS[@]}"; do
NEW_VERSION_SECTION="${NEW_VERSION_SECTION}
## @value $major_ver"
done
NEW_VERSION_SECTION="${NEW_VERSION_SECTION}
## @param {Version} version - RabbitMQ major.minor version to deploy
version: ${MAJOR_VERSIONS[0]}"
# Check if version section already exists
if grep -q "^## @enum {string} Version" "$VALUES_FILE"; then
# Version section exists, update it using awk
echo "Updating existing version section in $VALUES_FILE..."
awk -v new_section="$NEW_VERSION_SECTION" '
/^## @enum {string} Version/ {
in_section = 1
print new_section
next
}
in_section && /^version: / {
in_section = 0
next
}
in_section {
next
}
{ print }
' "$VALUES_FILE" > "$TEMP_FILE.tmp"
mv "$TEMP_FILE.tmp" "$VALUES_FILE"
else
# Version section doesn't exist, insert it before the Application-specific parameters section
echo "Inserting new version section in $VALUES_FILE..."
awk -v new_section="$NEW_VERSION_SECTION" '
/^## @section Application-specific parameters/ {
print new_section
print ""
}
{ print }
' "$VALUES_FILE" > "$TEMP_FILE.tmp"
mv "$TEMP_FILE.tmp" "$VALUES_FILE"
fi
echo "Successfully updated $VALUES_FILE with major.minor versions: ${MAJOR_VERSIONS[*]}"

View File

@@ -1,8 +0,0 @@
{{- define "rabbitmq.versionMap" }}
{{- $versionMap := .Files.Get "files/versions.yaml" | fromYaml }}
{{- if not (hasKey $versionMap .Values.version) }}
{{- printf `RabbitMQ version %s is not supported, allowed versions are %s` $.Values.version (keys $versionMap) | fail }}
{{- end }}
{{- index $versionMap .Values.version }}
{{- end }}

View File

@@ -7,7 +7,6 @@ metadata:
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: {{ .Values.replicas }}
image: 'rabbitmq:{{ include "rabbitmq.versionMap" $ }}-management'
{{- if .Values.external }}
service:
type: LoadBalancer

View File

@@ -92,17 +92,6 @@
}
}
},
"version": {
"description": "RabbitMQ major.minor version to deploy",
"type": "string",
"default": "v4.2",
"enum": [
"v4.2",
"v4.1",
"v4.0",
"v3.13"
]
},
"vhosts": {
"description": "Virtual hosts configuration map.",
"type": "object",

View File

@@ -34,15 +34,6 @@ storageClass: ""
external: false
##
## @enum {string} Version
## @value v4.2
## @value v4.1
## @value v4.0
## @value v3.13
## @param {Version} version - RabbitMQ major.minor version to deploy
version: v4.2
## @section Application-specific parameters
##

View File

@@ -18,7 +18,7 @@ spec:
name: cozystack-etcd-application-default-etcd
namespace: cozy-system
interval: 5m
timeout: 30m
timeout: 10m
install:
remediation:
retries: -1

View File

@@ -6,7 +6,7 @@ metadata:
name: {{ include "virtual-machine.fullname" $ }}-ssh-keys
stringData:
{{- range $k, $v := .Values.sshKeys }}
key{{ $k }}: {{ quote $v }}
key{{ $k }}: {{ quote $v }}
{{- end }}
{{- end }}
{{- if or .Values.cloudInit .Values.sshKeys }}
@@ -27,21 +27,7 @@ stringData:
#cloud-config
ssh_authorized_keys:
{{- range .Values.sshKeys }}
- {{ quote . }}
- {{ quote . }}
{{- end }}
{{- end }}
networkdata: |
{{- /*
Provide network config without MAC addresses so the VM can be restored/cloned
with a new MAC without breaking DHCP. Interface names are stable by PCI slot:
enp1s0 = default (pod) NIC, enp2s0+ = additional subnet NICs.
*/}}
version: 2
ethernets:
enp1s0:
dhcp4: true
{{- range $i, $subnet := .Values.subnets }}
enp{{ add $i 2 }}s0:
dhcp4: true
{{- end }}
{{- end }}
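
For reference, a VM with a single additional subnet would have received the following networkdata from the removed template (rendered output reproduced from the logic above):

```bash
# Illustration only: enp1s0 is the default pod NIC, enp2s0 the first subnet NIC.
cat <<'EOF'
version: 2
ethernets:
  enp1s0:
    dhcp4: true
  enp2s0:
    dhcp4: true
EOF
```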

View File

@@ -113,8 +113,6 @@ spec:
cloudInitNoCloud:
secretRef:
name: {{ include "virtual-machine.fullname" . }}-cloud-init
networkDataSecretRef:
name: {{ include "virtual-machine.fullname" . }}-cloud-init
{{- end }}
networks:
- name: default

View File

@@ -10,8 +10,6 @@ metadata:
labels:
cozystack.io/system: "true"
pod-security.kubernetes.io/enforce: privileged
annotations:
helm.sh/resource-policy: keep
---
apiVersion: v1
kind: ServiceAccount

View File

@@ -1,9 +1,9 @@
cozystackOperator:
# Deployment variant: talos, generic, hosted
variant: talos
image: ghcr.io/cozystack/cozystack/cozystack-operator:v1.1.0@sha256:9367001a8d1d2dcf08ae74a42ac234eaa6af18f1af64ac28ce8a5946af9c5d3f
image: ghcr.io/cozystack/cozystack/cozystack-operator:v1.0.0-rc.1@sha256:5c0148116b2ab425106f6b86bbc1dfec593a83c993947c24eae92946d1c6116a
platformSourceUrl: 'oci://ghcr.io/cozystack/cozystack/cozystack-packages'
platformSourceRef: 'digest=sha256:7c6da38e7b99ec80d35ba2cef721ea1579f8a0824989454544fa85318bb7bf15'
platformSourceRef: 'digest=sha256:b4ee831911b9c259a073f00390559f0bd5d8c78e22e48427a64ef05ed90ca008'
# Generic variant configuration (only used when cozystackOperator.variant=generic)
cozystack:
# Kubernetes API server host (IP only, no protocol/port)

View File

@@ -2,7 +2,6 @@
# Migration 26 --> 27
# Migrate monitoring resources from extra/monitoring to system/monitoring
# This migration re-labels resources so they become owned by monitoring-system HelmRelease
# and deletes old helm release secrets so that helm does not diff old vs new chart manifests.
set -euo pipefail
@@ -36,39 +35,10 @@ relabel_resources() {
done
}
# Delete all helm release secrets for a given release name in a namespace.
# Uses both label selector and name-pattern matching to ensure complete cleanup.
delete_helm_secrets() {
local ns="$1"
local release="$2"
# Primary: delete by label selector
kubectl delete secrets -n "$ns" -l "name=${release},owner=helm" --ignore-not-found
# Fallback: find and delete by name pattern (in case labels were modified)
local remaining
remaining=$(kubectl get secrets -n "$ns" -o name | { grep "^secret/sh\.helm\.release\.v1\.${release}\." || true; })
if [ -n "$remaining" ]; then
echo " Found secrets not matched by label selector, deleting by name..."
echo "$remaining" | while IFS= read -r secret; do
echo " Deleting $secret"
kubectl delete -n "$ns" "$secret" --ignore-not-found
done
fi
# Verify all secrets are gone
remaining=$(kubectl get secrets -n "$ns" -o name | { grep "^secret/sh\.helm\.release\.v1\.${release}\." || true; })
if [ -n "$remaining" ]; then
echo " ERROR: Failed to delete helm release secrets:"
echo "$remaining"
return 1
fi
}
# Find all tenant namespaces with monitoring HelmRelease
echo "Finding tenant namespaces with monitoring HelmRelease..."
NAMESPACES=$(kubectl get hr --all-namespaces -l cozystack.io/ui=true --field-selector=metadata.name=monitoring \
-o jsonpath='{range .items[*]}{.metadata.namespace}{"\n"}{end}' | sort -u)
NAMESPACES=$(kubectl get hr --all-namespaces -l apps.cozystack.io/application.kind=Monitoring \
-o jsonpath='{range .items[*]}{.metadata.namespace}{"\n"}{end}' 2>/dev/null | sort -u || true)
if [ -z "$NAMESPACES" ]; then
echo "No monitoring HelmReleases found in tenant namespaces, skipping migration"
@@ -96,7 +66,7 @@ for ns in $NAMESPACES; do
# Step 1: Suspend the HelmRelease
echo ""
echo "Step 1: Suspending HelmRelease monitoring..."
kubectl patch hr -n "$ns" monitoring --type=merge -p '{"spec":{"suspend":true}}'
kubectl patch hr -n "$ns" monitoring --type=merge -p '{"spec":{"suspend":true}}' 2>/dev/null || true
# Wait a moment for reconciliation to stop
sleep 2
@@ -104,7 +74,7 @@ for ns in $NAMESPACES; do
# Step 2: Delete helm secrets for the monitoring release
echo ""
echo "Step 2: Deleting helm secrets for monitoring release..."
delete_helm_secrets "$ns" "monitoring"
kubectl delete secrets -n "$ns" -l name=monitoring,owner=helm --ignore-not-found
# Step 3: Relabel resources to be owned by monitoring-system
echo ""
@@ -151,9 +121,7 @@ for ns in $NAMESPACES; do
echo "Processing Cozystack resources..."
relabel_resources "$ns" "workloadmonitors.cozystack.io"
# Step 4: Delete the suspended HelmRelease
# Helm secrets are already gone, so flux finalizer will find no release to uninstall
# and will simply remove the finalizer without deleting any resources.
# Step 4: Delete the suspended HelmRelease (Flux won't delete resources when HR is suspended)
echo ""
echo "Step 4: Deleting suspended HelmRelease monitoring..."
kubectl delete hr -n "$ns" monitoring --ignore-not-found
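
A quick manual audit after this migration, using the same name pattern the script greps for (the tenant namespace is a placeholder):

```bash
kubectl get secrets -n tenant-example -o name \
  | grep '^secret/sh\.helm\.release\.v1\.monitoring\.' || echo "clean"
```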

View File

@@ -5,24 +5,10 @@ set -euo pipefail
# Migrate Piraeus CRDs to piraeus-operator-crds Helm release
for crd in linstorclusters.piraeus.io linstornodeconnections.piraeus.io linstorsatelliteconfigurations.piraeus.io linstorsatellites.piraeus.io; do
if kubectl get crd "$crd" >/dev/null 2>&1; then
echo " Relabeling CRD $crd"
kubectl annotate crd "$crd" meta.helm.sh/release-namespace=cozy-linstor meta.helm.sh/release-name=piraeus-operator-crds --overwrite
kubectl label crd "$crd" app.kubernetes.io/managed-by=Helm helm.toolkit.fluxcd.io/namespace=cozy-linstor helm.toolkit.fluxcd.io/name=piraeus-operator-crds --overwrite
else
echo " CRD $crd not found, skipping"
fi
kubectl annotate crd "$crd" meta.helm.sh/release-namespace=cozy-linstor meta.helm.sh/release-name=piraeus-operator-crds --overwrite
kubectl label crd "$crd" app.kubernetes.io/managed-by=Helm helm.toolkit.fluxcd.io/namespace=cozy-linstor helm.toolkit.fluxcd.io/name=piraeus-operator-crds --overwrite
done
# Delete old piraeus-operator helm secrets (by label and by name pattern)
kubectl delete secret -n cozy-linstor -l name=piraeus-operator,owner=helm --ignore-not-found
remaining=$(kubectl get secrets -n cozy-linstor -o name 2>/dev/null | { grep "^secret/sh\.helm\.release\.v1\.piraeus-operator\." || true; })
if [ -n "$remaining" ]; then
echo " Deleting remaining piraeus-operator helm secrets by name..."
echo "$remaining" | while IFS= read -r secret; do
kubectl delete -n cozy-linstor "$secret" --ignore-not-found
done
fi
# Stamp version
kubectl create configmap -n cozy-system cozystack-version \

View File

@@ -348,7 +348,7 @@ PVCEOF
# --- 3g: Clone Secrets ---
echo " --- Clone Secrets ---"
for secret in $(kubectl -n "$NAMESPACE" get secret -o name 2>/dev/null \
| { grep "secret/${OLD_NAME}" || true; } | { grep -v "sh.helm.release" || true; }); do
| grep "secret/${OLD_NAME}" | grep -v "sh.helm.release"); do
old_secret_name="${secret#secret/}"
new_secret_name="${NEW_NAME}${old_secret_name#${OLD_NAME}}"
clone_resource "$NAMESPACE" "secret" "$old_secret_name" "$new_secret_name" "$OLD_NAME" "$NEW_NAME"
@@ -357,7 +357,7 @@ PVCEOF
# --- 3h: Clone ConfigMaps ---
echo " --- Clone ConfigMaps ---"
for cm in $(kubectl -n "$NAMESPACE" get configmap -o name 2>/dev/null \
| { grep "configmap/${OLD_NAME}" || true; }); do
| grep "configmap/${OLD_NAME}"); do
old_cm_name="${cm#configmap/}"
new_cm_name="${NEW_NAME}${old_cm_name#${OLD_NAME}}"
clone_resource "$NAMESPACE" "configmap" "$old_cm_name" "$new_cm_name" "$OLD_NAME" "$NEW_NAME"
@@ -468,13 +468,13 @@ PVCEOF
fi
for secret in $(kubectl -n "$NAMESPACE" get secret -o name 2>/dev/null \
| { grep "secret/${OLD_NAME}" || true; } | { grep -v "sh.helm.release" || true; }); do
| grep "secret/${OLD_NAME}" | grep -v "sh.helm.release"); do
old_secret_name="${secret#secret/}"
delete_resource "$NAMESPACE" "secret" "$old_secret_name"
done
for cm in $(kubectl -n "$NAMESPACE" get configmap -o name 2>/dev/null \
| { grep "configmap/${OLD_NAME}" || true; }); do
| grep "configmap/${OLD_NAME}"); do
old_cm_name="${cm#configmap/}"
delete_resource "$NAMESPACE" "configmap" "$old_cm_name"
done
@@ -611,19 +611,6 @@ done
echo ""
echo "=== Migration complete (${#INSTANCES[@]} instance(s)) ==="
# ============================================================
# STEP 8: Clean up orphaned mysql-rd system HelmRelease
# ============================================================
echo ""
echo "--- Step 8: Clean up orphaned mysql-rd HelmRelease ---"
if kubectl -n cozy-system get hr mysql-rd --no-headers 2>/dev/null | grep -q .; then
echo " [DELETE] hr/mysql-rd"
kubectl -n cozy-system delete hr mysql-rd --wait=false
else
echo " [SKIP] hr/mysql-rd already gone"
fi
kubectl -n cozy-system delete secret -l "owner=helm,name=mysql-rd" --ignore-not-found
# Stamp version
kubectl create configmap -n cozy-system cozystack-version \
--from-literal=version=29 --dry-run=client -o yaml | kubectl apply -f-
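
The `{ grep ... || true; }` form seen on the removed lines matters under `set -euo pipefail`: grep exits 1 when nothing matches, and with pipefail that status propagates out of the command substitution. A self-contained illustration:

```bash
set -euo pipefail
out=$(printf 'secret/other\n' | { grep '^secret/absent' || true; })  # out="", script continues
# out=$(printf 'secret/other\n' | grep '^secret/absent')             # pipeline exits 1, shell aborts
```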

View File

@@ -9,6 +9,8 @@ set -euo pipefail
OLD_PREFIX="virtual-machine"
NEW_DISK_PREFIX="vm-disk"
NEW_INSTANCE_PREFIX="vm-instance"
PROTECTION_WEBHOOK_NAME="protection-webhook"
PROTECTION_WEBHOOK_NS="protection-webhook"
CDI_APISERVER_NS="cozy-kubevirt-cdi"
CDI_APISERVER_DEPLOY="cdi-apiserver"
CDI_VALIDATING_WEBHOOKS="cdi-api-datavolume-validate cdi-api-dataimportcron-validate cdi-api-populator-validate cdi-api-validate"
@@ -86,6 +88,7 @@ echo " Total: ${#INSTANCES[@]} instance(s)"
# STEP 2: Migrate each instance
# ============================================================
ALL_PV_NAMES=()
ALL_PROTECTED_RESOURCES=()
for entry in "${INSTANCES[@]}"; do
NAMESPACE="${entry%%/*}"
@@ -312,7 +315,7 @@ PVCEOF
# --- 2i: Clone Secrets ---
echo " --- Clone Secrets ---"
kubectl -n "$NAMESPACE" get secret -o name 2>/dev/null \
| { grep "secret/${OLD_NAME}" || true; } | { grep -v "sh.helm.release" || true; } | { grep -v "values" || true; } \
| grep "secret/${OLD_NAME}" | grep -v "sh.helm.release" | grep -v "values" \
| while IFS= read -r secret; do
old_secret_name="${secret#secret/}"
suffix="${old_secret_name#${OLD_NAME}}"
@@ -539,7 +542,7 @@ SVCEOF
# --- 2q: Delete old resources ---
echo " --- Delete old resources ---"
kubectl -n "$NAMESPACE" get secret -o name 2>/dev/null \
| { grep "secret/${OLD_NAME}" || true; } | { grep -v "sh.helm.release" || true; } | { grep -v "values" || true; } \
| grep "secret/${OLD_NAME}" | grep -v "sh.helm.release" | grep -v "values" \
| while IFS= read -r secret; do
old_secret_name="${secret#secret/}"
delete_resource "$NAMESPACE" "secret" "$old_secret_name"
@@ -561,17 +564,71 @@ SVCEOF
delete_resource "$NAMESPACE" "secret" "$VALUES_SECRET"
fi
# Delete the old service (if it exists)

# Collect protected resources for batch deletion
if resource_exists "$NAMESPACE" "svc" "$OLD_NAME"; then
delete_resource "$NAMESPACE" "svc" "$OLD_NAME"
ALL_PROTECTED_RESOURCES+=("${NAMESPACE}:svc/${OLD_NAME}")
fi
done
# ============================================================
# STEP 3: Restore PV reclaim policies
# STEP 3: Delete protected resources (Services)
# ============================================================
echo ""
echo "--- Step 3: Restore PV reclaim policies ---"
echo "--- Step 3: Delete protected resources ---"
if [ ${#ALL_PROTECTED_RESOURCES[@]} -gt 0 ]; then
WEBHOOK_EXISTS=false
if kubectl -n "$PROTECTION_WEBHOOK_NS" get deploy "$PROTECTION_WEBHOOK_NAME" --no-headers 2>/dev/null | grep -q .; then
WEBHOOK_EXISTS=true
fi
if [ "$WEBHOOK_EXISTS" = "true" ]; then
echo " --- Temporarily disabling protection-webhook ---"
WEBHOOK_REPLICAS=$(kubectl -n "$PROTECTION_WEBHOOK_NS" get deploy "$PROTECTION_WEBHOOK_NAME" \
-o jsonpath='{.spec.replicas}' 2>/dev/null || echo "1")
echo " [SCALE] ${PROTECTION_WEBHOOK_NAME} -> 0 (was ${WEBHOOK_REPLICAS})"
kubectl -n "$PROTECTION_WEBHOOK_NS" scale deploy "$PROTECTION_WEBHOOK_NAME" --replicas=0
echo " [PATCH] Set failurePolicy=Ignore on ValidatingWebhookConfiguration/${PROTECTION_WEBHOOK_NAME}"
kubectl get validatingwebhookconfiguration "$PROTECTION_WEBHOOK_NAME" -o json | \
jq '.webhooks[].failurePolicy = "Ignore"' | \
kubectl apply -f - 2>/dev/null || true
echo " Waiting for webhook pods to terminate..."
kubectl -n "$PROTECTION_WEBHOOK_NS" wait --for=delete pod \
-l app.kubernetes.io/name=protection-webhook --timeout=60s 2>/dev/null || true
sleep 3
fi
for entry in "${ALL_PROTECTED_RESOURCES[@]}"; do
ns="${entry%%:*}"
res="${entry#*:}"
echo " [DELETE] ${ns}/${res}"
kubectl -n "$ns" delete "$res" --wait=false 2>/dev/null || true
done
if [ "$WEBHOOK_EXISTS" = "true" ]; then
echo " [PATCH] Set failurePolicy=Fail on ValidatingWebhookConfiguration/${PROTECTION_WEBHOOK_NAME}"
kubectl get validatingwebhookconfiguration "$PROTECTION_WEBHOOK_NAME" -o json | \
jq '.webhooks[].failurePolicy = "Fail"' | \
kubectl apply -f - 2>/dev/null || true
echo " [SCALE] ${PROTECTION_WEBHOOK_NAME} -> ${WEBHOOK_REPLICAS}"
kubectl -n "$PROTECTION_WEBHOOK_NS" scale deploy "$PROTECTION_WEBHOOK_NAME" \
--replicas="$WEBHOOK_REPLICAS"
echo " --- protection-webhook restored ---"
fi
else
echo " [SKIP] No protected resources to delete"
fi
# ============================================================
# STEP 4: Restore PV reclaim policies
# ============================================================
echo ""
echo "--- Step 4: Restore PV reclaim policies ---"
for pv_name in "${ALL_PV_NAMES[@]}"; do
if [ -n "$pv_name" ]; then
current_policy=$(kubectl get pv "$pv_name" \
@@ -586,7 +643,7 @@ for pv_name in "${ALL_PV_NAMES[@]}"; do
done
# ============================================================
# STEP 4: Temporarily disable CDI datavolume webhooks
# STEP 5: Temporarily disable CDI datavolume webhooks
# ============================================================
# CDI's datavolume-validate webhook rejects DataVolume creation when a PVC
# with the same name already exists. We must disable it so that vm-disk
@@ -595,7 +652,7 @@ done
# cdi-apiserver (which serves the webhooks), then delete webhook configs.
# Both are restored after vm-disk HRs reconcile.
echo ""
echo "--- Step 4: Temporarily disable CDI webhooks ---"
echo "--- Step 5: Temporarily disable CDI webhooks ---"
CDI_OPERATOR_REPLICAS=$(kubectl -n "$CDI_APISERVER_NS" get deploy cdi-operator \
-o jsonpath='{.spec.replicas}' 2>/dev/null || echo "1")
@@ -628,10 +685,10 @@ done
sleep 2
# ============================================================
# STEP 5: Unsuspend vm-disk HelmReleases first
# STEP 6: Unsuspend vm-disk HelmReleases first
# ============================================================
echo ""
echo "--- Step 5: Unsuspend vm-disk HelmReleases ---"
echo "--- Step 6: Unsuspend vm-disk HelmReleases ---"
for entry in "${INSTANCES[@]}"; do
ns="${entry%%/*}"
instance="${entry#*/}"
@@ -648,7 +705,7 @@ for entry in "${INSTANCES[@]}"; do
# Force immediate reconciliation
echo " [TRIGGER] Reconcile ${ns}/hr/${disk_name}"
kubectl -n "$ns" annotate hr "$disk_name" --overwrite \
"reconcile.fluxcd.io/requestedAt=$(date -u +'%Y-%m-%dT%H:%M:%SZ')" 2>/dev/null || true
"reconcile.fluxcd.io/requestedAt=$(date +%s)" 2>/dev/null || true
fi
done
@@ -672,12 +729,12 @@ for entry in "${INSTANCES[@]}"; do
done
# ============================================================
# STEP 6: Restore CDI webhooks
# STEP 7: Restore CDI webhooks
# ============================================================
# Scale cdi-operator and cdi-apiserver back up.
# cdi-apiserver will recreate webhook configurations automatically on start.
echo ""
echo "--- Step 6: Restore CDI webhooks ---"
echo "--- Step 7: Restore CDI webhooks ---"
echo " [SCALE] cdi-operator -> ${CDI_OPERATOR_REPLICAS}"
kubectl -n "$CDI_APISERVER_NS" scale deploy cdi-operator \
@@ -692,10 +749,10 @@ kubectl -n "$CDI_APISERVER_NS" rollout status deploy "$CDI_APISERVER_DEPLOY" --t
echo " --- CDI webhooks restored ---"
# ============================================================
# STEP 7: Unsuspend vm-instance HelmReleases
# STEP 8: Unsuspend vm-instance HelmReleases
# ============================================================
echo ""
echo "--- Step 7: Unsuspend vm-instance HelmReleases ---"
echo "--- Step 8: Unsuspend vm-instance HelmReleases ---"
for entry in "${INSTANCES[@]}"; do
ns="${entry%%/*}"
instance="${entry#*/}"
@@ -715,19 +772,6 @@ done
echo ""
echo "=== Migration complete (${#INSTANCES[@]} instance(s)) ==="
# ============================================================
# STEP 8: Clean up orphaned virtual-machine-rd system HelmRelease
# ============================================================
echo ""
echo "--- Step 8: Clean up orphaned virtual-machine-rd HelmRelease ---"
if kubectl -n cozy-system get hr virtual-machine-rd --no-headers 2>/dev/null | grep -q .; then
echo " [DELETE] hr/virtual-machine-rd"
kubectl -n cozy-system delete hr virtual-machine-rd --wait=false
else
echo " [SKIP] hr/virtual-machine-rd already gone"
fi
kubectl -n cozy-system delete secret -l "owner=helm,name=virtual-machine-rd" --ignore-not-found
# Stamp version
kubectl create configmap -n cozy-system cozystack-version \
--from-literal=version=30 --dry-run=client -o yaml | kubectl apply -f-
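
A sketch for verifying the webhook round-trip performed in Step 3 (configuration name taken from the script):

```bash
kubectl get validatingwebhookconfiguration protection-webhook \
  -o jsonpath='{range .webhooks[*]}{.name}{"\t"}{.failurePolicy}{"\n"}{end}'
```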

View File

@@ -1,30 +0,0 @@
#!/bin/sh
# Migration 33 --> 34
# Clean up orphaned system -rd HelmReleases left after application renames.
#
# These HelmReleases reference ExternalArtifacts that no longer exist:
# ferretdb-rd -> replaced by mongodb-rd
# mysql-rd -> replaced by mariadb-rd (migration 28 handled user HRs only)
# virtual-machine-rd -> replaced by vm-disk-rd + vm-instance-rd (migration 29 handled user HRs only)
#
# Idempotent: safe to re-run.
set -euo pipefail
echo "=== Cleaning up orphaned -rd HelmReleases ==="
for hr_name in ferretdb-rd mysql-rd virtual-machine-rd; do
if kubectl -n cozy-system get hr "$hr_name" --no-headers 2>/dev/null | grep -q .; then
echo " [DELETE] hr/${hr_name}"
kubectl -n cozy-system delete hr "$hr_name" --wait=false
else
echo " [SKIP] hr/${hr_name} already gone"
fi
kubectl -n cozy-system delete secret -l "owner=helm,name=${hr_name}" --ignore-not-found
done
echo "=== Cleanup complete ==="
# Stamp version
kubectl create configmap -n cozy-system cozystack-version \
--from-literal=version=34 --dry-run=client -o yaml | kubectl apply -f-
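
Since the script is idempotent, a dry audit before or after running it is harmless:

```bash
# Lists any of the orphaned -rd HelmReleases still present; prints nothing when clean.
kubectl -n cozy-system get hr ferretdb-rd mysql-rd virtual-machine-rd --ignore-not-found
```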

View File

@@ -1,46 +0,0 @@
#!/bin/sh
# Migration 34 --> 35
# Backfill spec.version on rabbitmq.apps.cozystack.io resources.
#
# Before this migration RabbitMQ had no user-selectable version; the
# operator always used its built-in default image (v3.x). A version field
# was added in this release. Without this migration every existing cluster
# would be upgraded to the new default (v4.2) on the next reconcile.
#
# Set spec.version to "v3.13" for any rabbitmq app resource that does not
# already have it set.
set -euo pipefail
DEFAULT_VERSION="v3.13"
# Skip if the CRD does not exist (rabbitmq was never installed)
if ! kubectl api-resources --api-group=apps.cozystack.io -o name 2>/dev/null | grep -q '^rabbitmqs\.'; then
echo "CRD rabbitmqs.apps.cozystack.io not found, skipping migration"
kubectl create configmap -n cozy-system cozystack-version \
--from-literal=version=35 --dry-run=client -o yaml | kubectl apply -f-
exit 0
fi
RABBITMQS=$(kubectl get rabbitmqs.apps.cozystack.io -A -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}')
for resource in $RABBITMQS; do
NS="${resource%%/*}"
APP_NAME="${resource##*/}"
# Skip if spec.version is already set
CURRENT_VER=$(kubectl get rabbitmqs.apps.cozystack.io -n "$NS" "$APP_NAME" \
-o jsonpath='{.spec.version}')
if [ -n "$CURRENT_VER" ]; then
echo "SKIP $NS/$APP_NAME: spec.version already set to '$CURRENT_VER'"
continue
fi
echo "Patching rabbitmq/$APP_NAME in $NS: setting version=$DEFAULT_VERSION"
kubectl patch rabbitmqs.apps.cozystack.io -n "$NS" "$APP_NAME" --type=merge \
--patch "{\"spec\":{\"version\":\"${DEFAULT_VERSION}\"}}"
done
# Stamp version
kubectl create configmap -n cozy-system cozystack-version \
--from-literal=version=35 --dry-run=client -o yaml | kubectl apply -f-
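
A spot-check after the backfill, confirming every RabbitMQ app now pins a version:

```bash
kubectl get rabbitmqs.apps.cozystack.io -A \
  -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,VERSION:.spec.version'
```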

View File

@@ -24,7 +24,7 @@ if [ "$CURRENT_VERSION" -ge "$TARGET_VERSION" ]; then
fi
# Run migrations sequentially from current version to target version
for i in $(seq $CURRENT_VERSION $((TARGET_VERSION - 1))); do
for i in $(seq $((CURRENT_VERSION + 1)) $TARGET_VERSION); do
if [ -f "/migrations/$i" ]; then
echo "Running migration $i"
chmod +x /migrations/$i
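
The two `seq` bounds enumerate different script names for the same upgrade path; a concrete expansion makes the off-by-one visible:

```bash
# With CURRENT_VERSION=33 and TARGET_VERSION=35 the two bounds differ by one file name:
seq 33 $((35 - 1))    # removed form: 33 34
seq $((33 + 1)) 35    # restored form: 34 35
```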

View File

@@ -18,5 +18,5 @@ spec:
path: system/backupstrategy-controller
install:
privileged: true
namespace: cozy-backup-controller
namespace: cozy-backupstrategy-controller
releaseName: backupstrategy-controller

View File

@@ -0,0 +1,19 @@
---
apiVersion: cozystack.io/v1alpha1
kind: PackageSource
metadata:
name: cozystack.cozystack-scheduler
spec:
sourceRef:
kind: OCIRepository
name: cozystack-packages
namespace: cozy-system
path: /
variants:
- name: default
components:
- name: cozystack-scheduler
path: system/cozystack-scheduler
install:
namespace: kube-system
releaseName: cozystack-scheduler
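
A hypothetical post-install check for the new optional package (the label selector is an assumption, not taken from the chart):

```bash
kubectl -n kube-system get deploy -l app.kubernetes.io/name=cozystack-scheduler
```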

View File

@@ -1,29 +0,0 @@
---
apiVersion: cozystack.io/v1alpha1
kind: PackageSource
metadata:
name: cozystack.openbao-application
spec:
sourceRef:
kind: OCIRepository
name: cozystack-packages
namespace: cozy-system
path: /
variants:
- name: default
dependsOn:
- cozystack.networking
libraries:
- name: cozy-lib
path: library/cozy-lib
components:
- name: openbao-system
path: system/openbao
- name: openbao
path: apps/openbao
libraries: ["cozy-lib"]
- name: openbao-rd
path: system/openbao-rd
install:
namespace: cozy-system
releaseName: openbao-rd

View File

@@ -16,7 +16,6 @@
{{include "cozystack.platform.package.default" (list "cozystack.mariadb-application" $) }}
{{include "cozystack.platform.package.default" (list "cozystack.mongodb-application" $) }}
{{include "cozystack.platform.package.default" (list "cozystack.nats-application" $) }}
{{include "cozystack.platform.package.default" (list "cozystack.openbao-application" $) }}
{{include "cozystack.platform.package.default" (list "cozystack.postgres-application" $) }}
{{include "cozystack.platform.package.default" (list "cozystack.qdrant-application" $) }}
{{include "cozystack.platform.package.default" (list "cozystack.rabbitmq-application" $) }}

View File

@@ -155,5 +155,6 @@
{{include "cozystack.platform.package.default" (list "cozystack.bootbox" $) }}
{{- end }}
{{include "cozystack.platform.package.optional.default" (list "cozystack.hetzner-robotlb" $) }}
{{include "cozystack.platform.package.optional.default" (list "cozystack.cozystack-scheduler" $) }}
{{- end }}

View File

@@ -6,8 +6,6 @@ kind: ConfigMap
metadata:
name: cozystack-version
namespace: {{ .Release.Namespace }}
annotations:
helm.sh/resource-policy: keep
data:
version: {{ .Values.migrations.targetVersion | quote }}
{{- end }}

View File

@@ -5,8 +5,8 @@ sourceRef:
path: /
migrations:
enabled: false
image: ghcr.io/cozystack/cozystack/platform-migrations:v1.1.0@sha256:d7e8955c1ad8c8fbd4ce42b014c0f849d73d0c3faf0cedaac8e15d647fb2f663
targetVersion: 35
image: ghcr.io/cozystack/cozystack/platform-migrations:v1.0.0-rc.1@sha256:21a09c9f8dfd0a0c9b8c14c70029a39bfce021c66f1d4cacad9764c35dce6e8f
targetVersion: 33
# Bundle deployment configuration
bundles:
system:

View File

@@ -1,2 +1,2 @@
e2e:
image: ghcr.io/cozystack/cozystack/e2e-sandbox:v1.1.0@sha256:0eae9f519669667d60b160ebb93c127843c470ad9ca3447fceaa54604503a7ba
image: ghcr.io/cozystack/cozystack/e2e-sandbox:v1.0.0-rc.1@sha256:0eae9f519669667d60b160ebb93c127843c470ad9ca3447fceaa54604503a7ba

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/matchbox:v1.1.0@sha256:e4c872f6dadc2bbcb9200d04a1d9878f62502f74e979b4eae6c7203abc6d8fa6
ghcr.io/cozystack/cozystack/matchbox:v1.0.0-rc.1@sha256:3306de19f1ad49a02c735d16b82d7c2ec015c8e0563f120f216274e9a3804431

View File

@@ -104,7 +104,6 @@ spec:
- {{ .Release.Name }}
secretName: etcd-peer-ca-tls
privateKey:
rotationPolicy: Never
algorithm: RSA
size: 4096
issuerRef:
@@ -131,7 +130,6 @@ spec:
- {{ .Release.Name }}
secretName: etcd-ca-tls
privateKey:
rotationPolicy: Never
algorithm: RSA
size: 4096
issuerRef:

View File

@@ -1,4 +1,4 @@
# Managed SeaweedFS Service
# Managed NATS Service
## Parameters
@@ -13,68 +13,46 @@
### SeaweedFS Components Configuration
| Name | Description | Type | Value |
| ------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- | ------------------- | ------- |
| `db` | Database configuration. | `object` | `{}` |
| `db.replicas` | Number of database replicas. | `int` | `2` |
| `db.size` | Persistent Volume size. | `quantity` | `10Gi` |
| `db.storageClass` | StorageClass used to store the data. | `string` | `""` |
| `db.resources` | Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
| `db.resources.cpu` | Number of CPU cores allocated. | `quantity` | `""` |
| `db.resources.memory` | Amount of memory allocated. | `quantity` | `""` |
| `db.resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `small` |
| `master` | Master service configuration. | `object` | `{}` |
| `master.replicas` | Number of master replicas. | `int` | `3` |
| `master.resources` | Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
| `master.resources.cpu` | Number of CPU cores allocated. | `quantity` | `""` |
| `master.resources.memory` | Amount of memory allocated. | `quantity` | `""` |
| `master.resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `small` |
| `filer` | Filer service configuration. | `object` | `{}` |
| `filer.replicas` | Number of filer replicas. | `int` | `2` |
| `filer.resources` | Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
| `filer.resources.cpu` | Number of CPU cores allocated. | `quantity` | `""` |
| `filer.resources.memory` | Amount of memory allocated. | `quantity` | `""` |
| `filer.resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `small` |
| `filer.grpcHost` | The hostname used to expose or access the filer service externally. | `string` | `""` |
| `filer.grpcPort` | The port used to access the filer service externally. | `int` | `443` |
| `filer.whitelist` | A list of IP addresses or CIDR ranges that are allowed to access the filer service. | `[]string` | `[]` |
| `volume` | Volume service configuration. | `object` | `{}` |
| `volume.replicas` | Number of volume replicas. | `int` | `2` |
| `volume.size` | Persistent Volume size. | `quantity` | `10Gi` |
| `volume.storageClass` | StorageClass used to store the data. | `string` | `""` |
| `volume.diskType` | SeaweedFS disk type tag for the default volume servers (e.g., "hdd", "ssd"). | `string` | `""` |
| `volume.resources` | Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
| `volume.resources.cpu` | Number of CPU cores allocated. | `quantity` | `""` |
| `volume.resources.memory` | Amount of memory allocated. | `quantity` | `""` |
| `volume.resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `small` |
| `volume.zones` | A map of zones for MultiZone topology. Each zone can have its own number of replicas and size. | `map[string]object` | `{}` |
| `volume.zones[name].replicas` | Number of replicas in the zone. | `int` | `0` |
| `volume.zones[name].size` | Zone storage size. | `quantity` | `""` |
| `volume.zones[name].dataCenter` | SeaweedFS data center name for this zone. Defaults to the zone name. | `string` | `""` |
| `volume.zones[name].nodeSelector` | YAML nodeSelector for this zone (default: topology.kubernetes.io/zone: <zoneName>). | `string` | `""` |
| `volume.zones[name].storageClass` | StorageClass used to store zone data. Defaults to volume.storageClass. | `string` | `""` |
| `volume.zones[name].pools` | A map of storage pools for this zone. Each pool creates a separate Volume StatefulSet per zone. | `map[string]object` | `{}` |
| `volume.zones[name].pools[name].diskType` | SeaweedFS disk type tag (e.g., "ssd", "hdd", "nvme"). | `string` | `""` |
| `volume.zones[name].pools[name].replicas` | Number of volume replicas. Defaults to volume.replicas (Simple) or zone.replicas/volume.replicas (MultiZone). | `int` | `0` |
| `volume.zones[name].pools[name].size` | Persistent Volume size. Defaults to volume.size (Simple) or zone.size/volume.size (MultiZone). | `quantity` | `""` |
| `volume.zones[name].pools[name].storageClass` | Kubernetes StorageClass for the pool. Defaults to volume.storageClass. | `string` | `""` |
| `volume.zones[name].pools[name].resources` | Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
| `volume.zones[name].pools[name].resources.cpu` | Number of CPU cores allocated. | `quantity` | `""` |
| `volume.zones[name].pools[name].resources.memory` | Amount of memory allocated. | `quantity` | `""` |
| `volume.zones[name].pools[name].resourcesPreset` | Default sizing preset used when `resources` is omitted. Defaults to volume.resourcesPreset. | `string` | `{}` |
| `volume.pools` | A map of storage pools. Each pool creates a separate Volume StatefulSet with its own disk type. | `map[string]object` | `{}` |
| `volume.pools[name].diskType` | SeaweedFS disk type tag (e.g., "ssd", "hdd", "nvme"). | `string` | `""` |
| `volume.pools[name].replicas` | Number of volume replicas. Defaults to volume.replicas (Simple) or zone.replicas/volume.replicas (MultiZone). | `int` | `0` |
| `volume.pools[name].size` | Persistent Volume size. Defaults to volume.size (Simple) or zone.size/volume.size (MultiZone). | `quantity` | `""` |
| `volume.pools[name].storageClass` | Kubernetes StorageClass for the pool. Defaults to volume.storageClass. | `string` | `""` |
| `volume.pools[name].resources` | Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
| `volume.pools[name].resources.cpu` | Number of CPU cores allocated. | `quantity` | `""` |
| `volume.pools[name].resources.memory` | Amount of memory allocated. | `quantity` | `""` |
| `volume.pools[name].resourcesPreset` | Default sizing preset used when `resources` is omitted. Defaults to volume.resourcesPreset. | `string` | `{}` |
| `s3` | S3 service configuration. | `object` | `{}` |
| `s3.replicas` | Number of S3 replicas. | `int` | `2` |
| `s3.resources` | Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
| `s3.resources.cpu` | Number of CPU cores allocated. | `quantity` | `""` |
| `s3.resources.memory` | Amount of memory allocated. | `quantity` | `""` |
| `s3.resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `small` |
| Name | Description | Type | Value |
| ----------------------------- | -------------------------------------------------------------------------------------------------------- | ------------------- | ------- |
| `db` | Database configuration. | `object` | `{}` |
| `db.replicas` | Number of database replicas. | `int` | `2` |
| `db.size` | Persistent Volume size. | `quantity` | `10Gi` |
| `db.storageClass` | StorageClass used to store the data. | `string` | `""` |
| `db.resources` | Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
| `db.resources.cpu` | Number of CPU cores allocated. | `quantity` | `""` |
| `db.resources.memory` | Amount of memory allocated. | `quantity` | `""` |
| `db.resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `small` |
| `master` | Master service configuration. | `object` | `{}` |
| `master.replicas` | Number of master replicas. | `int` | `3` |
| `master.resources` | Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
| `master.resources.cpu` | Number of CPU cores allocated. | `quantity` | `""` |
| `master.resources.memory` | Amount of memory allocated. | `quantity` | `""` |
| `master.resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `small` |
| `filer` | Filer service configuration. | `object` | `{}` |
| `filer.replicas` | Number of filer replicas. | `int` | `2` |
| `filer.resources` | Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
| `filer.resources.cpu` | Number of CPU cores allocated. | `quantity` | `""` |
| `filer.resources.memory` | Amount of memory allocated. | `quantity` | `""` |
| `filer.resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `small` |
| `filer.grpcHost` | The hostname used to expose or access the filer service externally. | `string` | `""` |
| `filer.grpcPort` | The port used to access the filer service externally. | `int` | `443` |
| `filer.whitelist` | A list of IP addresses or CIDR ranges that are allowed to access the filer service. | `[]string` | `[]` |
| `volume` | Volume service configuration. | `object` | `{}` |
| `volume.replicas` | Number of volume replicas. | `int` | `2` |
| `volume.size` | Persistent Volume size. | `quantity` | `10Gi` |
| `volume.storageClass` | StorageClass used to store the data. | `string` | `""` |
| `volume.resources` | Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
| `volume.resources.cpu` | Number of CPU cores allocated. | `quantity` | `""` |
| `volume.resources.memory` | Amount of memory allocated. | `quantity` | `""` |
| `volume.resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `small` |
| `volume.zones` | A map of zones for MultiZone topology. Each zone can have its own number of replicas and size. | `map[string]object` | `{}` |
| `volume.zones[name].replicas` | Number of replicas in the zone. | `int` | `0` |
| `volume.zones[name].size` | Zone storage size. | `quantity` | `""` |
| `s3` | S3 service configuration. | `object` | `{}` |
| `s3.replicas` | Number of S3 replicas. | `int` | `2` |
| `s3.resources` | Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
| `s3.resources.cpu` | Number of CPU cores allocated. | `quantity` | `""` |
| `s3.resources.memory` | Amount of memory allocated. | `quantity` | `""` |
| `s3.resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `small` |

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/objectstorage-sidecar:v1.1.0@sha256:2a3595cd88b30af55b2000d3ca204899beecef0012b0e0402754c3914aad1f7f
ghcr.io/cozystack/cozystack/objectstorage-sidecar:v1.0.0-rc.1@sha256:235b194a531b70e266a10ef78d2955d19f5b659513f23d8b3cfbbc0dff7fc1c0

View File

@@ -1 +1 @@
ghcr.io/seaweedfs/seaweedfs-cosi-driver:v0.3.0
ghcr.io/seaweedfs/seaweedfs-cosi-driver:v0.2.0

View File

@@ -7,32 +7,10 @@ metadata:
driverName: {{ .Release.Namespace }}.seaweedfs.objectstorage.k8s.io
deletionPolicy: Delete
---
kind: BucketClass
apiVersion: objectstorage.k8s.io/v1alpha1
metadata:
name: {{ .Release.Namespace }}-lock
driverName: {{ .Release.Namespace }}.seaweedfs.objectstorage.k8s.io
deletionPolicy: Retain
parameters:
objectLockEnabled: "true"
objectLockRetentionMode: "COMPLIANCE"
objectLockRetentionDays: "365"
---
kind: BucketAccessClass
apiVersion: objectstorage.k8s.io/v1alpha1
metadata:
name: {{ .Release.Namespace }}
driverName: {{ .Release.Namespace }}.seaweedfs.objectstorage.k8s.io
authenticationType: KEY
parameters:
accessPolicy: readwrite
---
kind: BucketAccessClass
apiVersion: objectstorage.k8s.io/v1alpha1
metadata:
name: {{ .Release.Namespace }}-readonly
driverName: {{ .Release.Namespace }}.seaweedfs.objectstorage.k8s.io
authenticationType: KEY
parameters:
accessPolicy: readonly
{{- end }}
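
The `-lock` BucketClass removed above carried COMPLIANCE-mode retention. A sketch of how a tenant would have requested a bucket through it via the standard COSI claim API (claim name and namespace prefix are placeholders):

```bash
kubectl apply -f - <<'EOF'
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClaim
metadata:
  name: audit-logs
spec:
  bucketClassName: tenant-example-lock
  protocols: ["S3"]
EOF
```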

View File

@@ -25,21 +25,8 @@ rules:
resourceNames:
- {{ $.Release.Name }}-master
- {{ $.Release.Name }}-filer
- {{ $.Release.Name }}-db
- {{ $.Release.Name }}-s3
{{- if eq .Values.topology "Simple" }}
- {{ $.Release.Name }}-volume
{{- range $poolName, $pool := .Values.volume.pools }}
- {{ $.Release.Name }}-volume-{{ $poolName }}
{{- end }}
{{- else if eq .Values.topology "MultiZone" }}
{{- range $zoneName, $zone := .Values.volume.zones }}
- {{ $.Release.Name }}-volume-{{ $zoneName }}
{{- range $poolName, $pool := (dig "pools" dict $zone) }}
- {{ $.Release.Name }}-volume-{{ $zoneName }}-{{ $poolName }}
{{- end }}
{{- end }}
{{- end }}
- {{ $.Release.Name }}-db
verbs: ["get", "list", "watch"]
{{- end }}

View File

@@ -16,65 +16,6 @@
{{- fail "replicationFactor must be less than or equal to the number of zones defined in .Values.volume.zones." }}
{{- end }}
{{- end }}
{{- if and (eq .Values.topology "Client") (gt (len .Values.volume.pools) 0) }}
{{- fail "volume.pools is not supported with Client topology." }}
{{- end }}
{{- if and (eq .Values.topology "MultiZone") (gt (len .Values.volume.pools) 0) }}
{{- fail "volume.pools is not supported with MultiZone topology. Use volume.zones[name].pools instead." }}
{{- end }}
{{- if and .Values.volume.diskType (not (regexMatch "^[a-z0-9]+$" .Values.volume.diskType)) }}
{{- fail (printf "volume.diskType must be lowercase alphanumeric (got: %s)." .Values.volume.diskType) }}
{{- end }}
{{- /* Collect and validate all pools from volume.pools and zones[].pools */ -}}
{{- $allPools := dict }}
{{- range $poolName, $pool := .Values.volume.pools }}
{{- if not (regexMatch "^[a-z0-9]([a-z0-9-]*[a-z0-9])?$" $poolName) }}
{{- fail (printf "volume.pools key '%s' must be a valid DNS label (lowercase alphanumeric and hyphens, no dots)." $poolName) }}
{{- end }}
{{- if or (hasSuffix "-lock" $poolName) (hasSuffix "-readonly" $poolName) }}
{{- fail (printf "volume.pools key '%s' must not end with '-lock' or '-readonly' (reserved suffixes for COSI resources)." $poolName) }}
{{- end }}
{{- if not $pool.diskType }}
{{- fail (printf "volume.pools.%s.diskType is required." $poolName) }}
{{- end }}
{{- if not (regexMatch "^[a-z0-9]+$" $pool.diskType) }}
{{- fail (printf "volume.pools.%s.diskType must be lowercase alphanumeric (got: %s)." $poolName $pool.diskType) }}
{{- end }}
{{- if and $.Values.volume.diskType (eq $pool.diskType $.Values.volume.diskType) }}
{{- fail (printf "volume.pools.%s.diskType '%s' must differ from volume.diskType." $poolName $pool.diskType) }}
{{- end }}
{{- $_ := set $allPools $poolName $pool.diskType }}
{{- end }}
{{- if eq .Values.topology "MultiZone" }}
{{- range $zoneName, $zone := .Values.volume.zones }}
{{- range $poolName, $pool := (dig "pools" dict $zone) }}
{{- if not (regexMatch "^[a-z0-9]([a-z0-9-]*[a-z0-9])?$" $poolName) }}
{{- fail (printf "volume.zones.%s.pools key '%s' must be a valid DNS label." $zoneName $poolName) }}
{{- end }}
{{- if or (hasSuffix "-lock" $poolName) (hasSuffix "-readonly" $poolName) }}
{{- fail (printf "volume.zones.%s.pools key '%s' must not end with '-lock' or '-readonly' (reserved suffixes for COSI resources)." $zoneName $poolName) }}
{{- end }}
{{- if not $pool.diskType }}
{{- fail (printf "volume.zones.%s.pools.%s.diskType is required." $zoneName $poolName) }}
{{- end }}
{{- if not (regexMatch "^[a-z0-9]+$" $pool.diskType) }}
{{- fail (printf "volume.zones.%s.pools.%s.diskType must be lowercase alphanumeric (got: %s)." $zoneName $poolName $pool.diskType) }}
{{- end }}
{{- if and $.Values.volume.diskType (eq $pool.diskType $.Values.volume.diskType) }}
{{- fail (printf "volume.zones.%s.pools.%s.diskType '%s' must differ from volume.diskType." $zoneName $poolName $pool.diskType) }}
{{- end }}
{{- if and (hasKey $allPools $poolName) (ne (get $allPools $poolName) $pool.diskType) }}
{{- fail (printf "Pool '%s' has inconsistent diskType across zones (expected '%s', got '%s' in zone '%s')." $poolName (get $allPools $poolName) $pool.diskType $zoneName) }}
{{- end }}
{{- $_ := set $allPools $poolName $pool.diskType }}
{{- $composedName := printf "%s-%s" $zoneName $poolName }}
{{- if hasKey $.Values.volume.zones $composedName }}
{{- fail (printf "Composed volume name '%s' (from zone '%s' and pool '%s') collides with an existing zone name." $composedName $zoneName $poolName) }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- $detectedTopology := "Unknown" }}
{{- $configMap := lookup "v1" "ConfigMap" .Release.Namespace (printf "%s-deployed-topology" .Release.Name) }}
@@ -153,77 +94,30 @@ spec:
storageClass: {{ . }}
{{- end }}
maxVolumes: 0
{{- if .Values.volume.diskType }}
extraArgs:
- "-disk={{ .Values.volume.diskType }}"
{{- end }}
{{- if or (and (eq .Values.topology "Simple") (gt (len .Values.volume.pools) 0)) (eq .Values.topology "MultiZone") }}
{{ if eq .Values.topology "MultiZone" }}
volumes:
{{- if eq .Values.topology "Simple" }}
{{- range $poolName, $pool := .Values.volume.pools }}
{{ $poolName }}:
replicas: {{ ternary $pool.replicas $.Values.volume.replicas (hasKey $pool "replicas") }}
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list ($pool.resourcesPreset | default $.Values.volume.resourcesPreset) (default dict $pool.resources) $) | nindent 12 }}
dataDirs:
- name: data1
type: "persistentVolumeClaim"
size: "{{ $pool.size | default $.Values.volume.size }}"
{{- with ($pool.storageClass | default $.Values.volume.storageClass) }}
storageClass: "{{ . }}"
{{- end }}
maxVolumes: 0
extraArgs:
- "-disk={{ $pool.diskType }}"
{{- end }}
{{- else if eq .Values.topology "MultiZone" }}
{{- range $zoneName, $zone := .Values.volume.zones }}
{{ $zoneName }}:
replicas: {{ ternary $zone.replicas $.Values.volume.replicas (hasKey $zone "replicas") }}
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list $.Values.volume.resourcesPreset $.Values.volume.resources $) | nindent 12 }}
dataDirs:
- name: data1
type: "persistentVolumeClaim"
size: "{{ $zone.size | default $.Values.volume.size }}"
{{- with ($zone.storageClass | default $.Values.volume.storageClass) }}
storageClass: "{{ . }}"
{{- end }}
maxVolumes: 0
nodeSelector: |
{{- with $zone.nodeSelector }}
{{ . | indent 12 }}
{{- else }}
topology.kubernetes.io/zone: {{ $zoneName }}
{{- end }}
dataCenter: {{ $zone.dataCenter | default $zoneName }}
{{- if $.Values.volume.diskType }}
extraArgs:
- "-disk={{ $.Values.volume.diskType }}"
{{ with $zone.replicas }}
replicas: {{ . }}
{{- end }}
{{- end }}
{{- range $zoneName, $zone := .Values.volume.zones }}
{{- range $poolName, $pool := (dig "pools" dict $zone) }}
{{ $zoneName }}-{{ $poolName }}:
replicas: {{ ternary $pool.replicas (ternary $zone.replicas $.Values.volume.replicas (hasKey $zone "replicas")) (hasKey $pool "replicas") }}
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list ($pool.resourcesPreset | default $.Values.volume.resourcesPreset) (default dict $pool.resources) $) | nindent 12 }}
dataDirs:
- name: data1
type: "persistentVolumeClaim"
size: "{{ $pool.size | default $zone.size | default $.Values.volume.size }}"
{{- with ($pool.storageClass | default $zone.storageClass | default $.Values.volume.storageClass) }}
storageClass: "{{ . }}"
{{- if $zone.size }}
size: "{{ $zone.size }}"
{{- else }}
size: "{{ $.Values.volume.size }}"
{{- end }}
{{- if $zone.storageClass }}
storageClass: {{ $zone.storageClass }}
{{- else if $.Values.volume.storageClass }}
storageClass: {{ $.Values.volume.storageClass }}
{{- end }}
maxVolumes: 0
nodeSelector: |
{{- with $zone.nodeSelector }}
{{ . | indent 12 }}
{{- else }}
topology.kubernetes.io/zone: {{ $zoneName }}
{{- end }}
dataCenter: {{ $zone.dataCenter | default $zoneName }}
extraArgs:
- "-disk={{ $pool.diskType }}"
{{- end }}
{{- end }}
{{- end }}
{{- end }}
filer:
@@ -305,22 +199,6 @@ spec:
app.kubernetes.io/component: volume
app.kubernetes.io/name: seaweedfs
version: {{ $.Chart.Version }}
{{- range $poolName, $pool := .Values.volume.pools }}
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}-volume-{{ $poolName }}
spec:
replicas: {{ ternary $pool.replicas $.Values.volume.replicas (hasKey $pool "replicas") }}
minReplicas: 1
kind: seaweedfs
type: volume
selector:
app.kubernetes.io/component: volume-{{ $poolName }}
app.kubernetes.io/name: seaweedfs
version: {{ $.Chart.Version }}
{{- end }}
{{- else if eq .Values.topology "MultiZone" }}
{{- range $zoneName, $zoneSpec := .Values.volume.zones }}
---
@@ -329,7 +207,7 @@ kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}-volume-{{ $zoneName }}
spec:
replicas: {{ ternary $zoneSpec.replicas $.Values.volume.replicas (hasKey $zoneSpec "replicas") }}
replicas: {{ default $.Values.volume.replicas $zoneSpec.replicas }}
minReplicas: 1
kind: seaweedfs
type: volume
@@ -337,22 +215,6 @@ spec:
app.kubernetes.io/component: volume-{{ $zoneName }}
app.kubernetes.io/name: seaweedfs
version: {{ $.Chart.Version }}
{{- range $poolName, $pool := (dig "pools" dict $zoneSpec) }}
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}-volume-{{ $zoneName }}-{{ $poolName }}
spec:
replicas: {{ ternary $pool.replicas (ternary $zoneSpec.replicas $.Values.volume.replicas (hasKey $zoneSpec "replicas")) (hasKey $pool "replicas") }}
minReplicas: 1
kind: seaweedfs
type: volume
selector:
app.kubernetes.io/component: volume-{{ $zoneName }}-{{ $poolName }}
app.kubernetes.io/name: seaweedfs
version: {{ $.Chart.Version }}
{{- end }}
{{- end }}
{{- end }}
---

View File

@@ -1,55 +0,0 @@
{{- if ne .Values.topology "Client" }}
{{- /* Collect unique pools from volume.pools and zones[].pools */ -}}
{{- $uniquePools := dict }}
{{- range $poolName, $pool := .Values.volume.pools }}
{{- $_ := set $uniquePools $poolName $pool.diskType }}
{{- end }}
{{- if eq .Values.topology "MultiZone" }}
{{- range $zoneName, $zone := .Values.volume.zones }}
{{- range $poolName, $pool := (dig "pools" dict $zone) }}
{{- $_ := set $uniquePools $poolName $pool.diskType }}
{{- end }}
{{- end }}
{{- end }}
{{- range $poolName, $diskType := $uniquePools }}
---
kind: BucketClass
apiVersion: objectstorage.k8s.io/v1alpha1
metadata:
name: {{ $.Release.Namespace }}-{{ $poolName }}
driverName: {{ $.Release.Namespace }}.seaweedfs.objectstorage.k8s.io
deletionPolicy: Delete
parameters:
disk: {{ $diskType }}
---
kind: BucketClass
apiVersion: objectstorage.k8s.io/v1alpha1
metadata:
name: {{ $.Release.Namespace }}-{{ $poolName }}-lock
driverName: {{ $.Release.Namespace }}.seaweedfs.objectstorage.k8s.io
deletionPolicy: Retain
parameters:
disk: {{ $diskType }}
objectLockEnabled: "true"
objectLockRetentionMode: "COMPLIANCE"
objectLockRetentionDays: "365"
---
kind: BucketAccessClass
apiVersion: objectstorage.k8s.io/v1alpha1
metadata:
name: {{ $.Release.Namespace }}-{{ $poolName }}
driverName: {{ $.Release.Namespace }}.seaweedfs.objectstorage.k8s.io
authenticationType: KEY
parameters:
accessPolicy: readwrite
---
kind: BucketAccessClass
apiVersion: objectstorage.k8s.io/v1alpha1
metadata:
name: {{ $.Release.Namespace }}-{{ $poolName }}-readonly
driverName: {{ $.Release.Namespace }}.seaweedfs.objectstorage.k8s.io
authenticationType: KEY
parameters:
accessPolicy: readonly
{{- end }}
{{- end }}
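
Taken together with the validation rules removed earlier, a minimal values sketch for the pools feature (Simple topology) would have looked like this; note the pool name is a DNS label, `diskType` is required, and it must differ from `volume.diskType`:

```bash
cat > /tmp/seaweedfs-pools-values.yaml <<'EOF'
topology: Simple
volume:
  replicas: 2
  size: 10Gi
  diskType: hdd
  pools:
    fast:          # creates a separate Volume StatefulSet and per-pool COSI classes
      diskType: ssd
      size: 20Gi
EOF
```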

View File

@@ -300,94 +300,6 @@
"type": "object",
"default": {},
"properties": {
"diskType": {
"description": "SeaweedFS disk type tag for the default volume servers (e.g., \"hdd\", \"ssd\").",
"type": "string",
"default": ""
},
"pools": {
"description": "A map of storage pools. Each pool creates a separate Volume StatefulSet with its own disk type.",
"type": "object",
"default": {},
"additionalProperties": {
"type": "object",
"required": [
"diskType"
],
"properties": {
"diskType": {
"description": "SeaweedFS disk type tag (e.g., \"ssd\", \"hdd\", \"nvme\").",
"type": "string"
},
"replicas": {
"description": "Number of volume replicas. Defaults to volume.replicas (Simple) or zone.replicas/volume.replicas (MultiZone).",
"type": "integer"
},
"resources": {
"description": "Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied.",
"type": "object",
"properties": {
"cpu": {
"description": "Number of CPU cores allocated.",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
},
"memory": {
"description": "Amount of memory allocated.",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
}
}
},
"resourcesPreset": {
"description": "Default sizing preset used when `resources` is omitted. Defaults to volume.resourcesPreset.",
"type": "string",
"enum": [
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
},
"size": {
"description": "Persistent Volume size. Defaults to volume.size (Simple) or zone.size/volume.size (MultiZone).",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
},
"storageClass": {
"description": "Kubernetes StorageClass for the pool. Defaults to volume.storageClass.",
"type": "string"
}
}
}
},
"replicas": {
"description": "Number of volume replicas.",
"type": "integer",
@@ -466,96 +378,6 @@
"additionalProperties": {
"type": "object",
"properties": {
"dataCenter": {
"description": "SeaweedFS data center name for this zone. Defaults to the zone name.",
"type": "string"
},
"nodeSelector": {
"description": "YAML nodeSelector for this zone (default: topology.kubernetes.io/zone: \u003czoneName\u003e).",
"type": "string"
},
"pools": {
"description": "A map of storage pools for this zone. Each pool creates a separate Volume StatefulSet per zone.",
"type": "object",
"additionalProperties": {
"type": "object",
"required": [
"diskType"
],
"properties": {
"diskType": {
"description": "SeaweedFS disk type tag (e.g., \"ssd\", \"hdd\", \"nvme\").",
"type": "string"
},
"replicas": {
"description": "Number of volume replicas. Defaults to volume.replicas (Simple) or zone.replicas/volume.replicas (MultiZone).",
"type": "integer"
},
"resources": {
"description": "Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied.",
"type": "object",
"properties": {
"cpu": {
"description": "Number of CPU cores allocated.",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
},
"memory": {
"description": "Amount of memory allocated.",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
}
}
},
"resourcesPreset": {
"description": "Default sizing preset used when `resources` is omitted. Defaults to volume.resourcesPreset.",
"type": "string",
"enum": [
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
},
"size": {
"description": "Persistent Volume size. Defaults to volume.size (Simple) or zone.size/volume.size (MultiZone).",
"pattern": "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
"anyOf": [
{
"type": "integer"
},
{
"type": "string"
}
],
"x-kubernetes-int-or-string": true
},
"storageClass": {
"description": "Kubernetes StorageClass for the pool. Defaults to volume.storageClass.",
"type": "string"
}
}
}
},
"replicas": {
"description": "Number of replicas in the zone.",
"type": "integer"
@@ -572,10 +394,6 @@
}
],
"x-kubernetes-int-or-string": true
},
"storageClass": {
"description": "StorageClass used to store zone data. Defaults to volume.storageClass.",
"type": "string"
}
}
}

@@ -76,49 +76,26 @@ filer:
  grpcPort: 443
  whitelist: []
## @typedef {struct} StoragePool - Storage pool configuration for separating buckets by disk type.
## @field {string} diskType - SeaweedFS disk type tag (e.g., "ssd", "hdd", "nvme").
## @field {int} [replicas] - Number of volume replicas. Defaults to volume.replicas (Simple) or zone.replicas/volume.replicas (MultiZone).
## @field {quantity} [size] - Persistent Volume size. Defaults to volume.size (Simple) or zone.size/volume.size (MultiZone).
## @field {string} [storageClass] - Kubernetes StorageClass for the pool. Defaults to volume.storageClass.
## @field {Resources} [resources] - Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied.
## @field {ResourcesPreset} [resourcesPreset] - Default sizing preset used when `resources` is omitted. Defaults to volume.resourcesPreset.
## @typedef {struct} Zone - Zone configuration.
## @field {int} [replicas] - Number of replicas in the zone.
## @field {quantity} [size] - Zone storage size.
## @field {string} [dataCenter] - SeaweedFS data center name for this zone. Defaults to the zone name.
## @field {string} [nodeSelector] - YAML nodeSelector for this zone (default: topology.kubernetes.io/zone: <zoneName>).
## @field {string} [storageClass] - StorageClass used to store zone data. Defaults to volume.storageClass.
## @field {map[string]StoragePool} [pools] - A map of storage pools for this zone. Each pool creates a separate Volume StatefulSet per zone.
## NOTE: Zone-level resources/resourcesPreset are inherited from volume.* settings. Pools within a zone can define their own resources.
## @typedef {struct} Volume - Volume service configuration.
## @field {int} [replicas] - Number of volume replicas.
## @field {quantity} [size] - Persistent Volume size.
## @field {string} [storageClass] - StorageClass used to store the data.
## @field {string} [diskType] - SeaweedFS disk type tag for the default volume servers (e.g., "hdd", "ssd").
## @field {Resources} [resources] - Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied.
## @field {ResourcesPreset} [resourcesPreset] - Default sizing preset used when `resources` is omitted.
## @field {map[string]Zone} [zones] - A map of zones for MultiZone topology. Each zone can have its own number of replicas and size.
## @field {map[string]StoragePool} [pools] - A map of storage pools. Each pool creates a separate Volume StatefulSet with its own disk type.
## @param {Volume} [volume] - Volume service configuration.
volume:
  replicas: 2
  size: 10Gi
  storageClass: ""
  diskType: ""
  resources: {}
  resourcesPreset: "small"
  zones: {}
  pools: {}
  #pools:
  #  fast:
  #    diskType: ssd
  #    replicas: 2
  #    size: 50Gi
  #    storageClass: "local-nvme"
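  ## Hypothetical illustration (not a chart default): a MultiZone layout that
  ## combines the Zone and StoragePool fields documented above.
  #zones:
  #  zone-a:
  #    replicas: 2
  #    size: 50Gi
  #    pools:
  #      fast:
  #        diskType: ssd
  #        size: 100Gi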
## @typedef {struct} S3 - S3 service configuration.
## @field {int} [replicas] - Number of S3 replicas.

@@ -3,21 +3,30 @@ apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: backups.cozystack.io:core-controller
rules:
  # Plan: reconcile schedule and update status
  - apiGroups: ["backups.cozystack.io"]
    resources: ["plans"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["backups.cozystack.io"]
    resources: ["plans/status"]
    verbs: ["get", "update", "patch"]
  # BackupJob: create when schedule fires (status is updated by backupstrategy-controller)
  - apiGroups: ["backups.cozystack.io"]
    resources: ["backupjobs"]
    verbs: ["create", "get", "list", "watch", "update", "patch"]
  - apiGroups: ["backups.cozystack.io"]
    resources: ["backupjobs/status"]
    verbs: ["get", "update", "patch"]
  - apiGroups: ["backups.cozystack.io"]
    resources: ["backups"]
    verbs: ["create", "get", "list", "watch"]
  # Leader election (--leader-elect)
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: ["apps.cozystack.io"]
    resources: ["buckets", "bucketaccesses", "virtualmachines"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["objectstorage.k8s.io"]
    resources: ["buckets", "bucketaccesses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "list", "watch", "update", "patch"]
  - apiGroups: ["kubevirt.io"]
    resources: ["virtualmachines"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["velero.io"]
    resources: ["backups", "backupstoragelocations", "volumesnapshotlocations", "restores"]
    verbs: ["create", "get", "list", "watch", "update", "patch"]

@@ -1,5 +1,5 @@
backupController:
image: "ghcr.io/cozystack/cozystack/backup-controller:v1.1.0@sha256:8e42e29f5d30ecbef1f05cb0601c32703c5f9572b89d2c9032c1dff186e9a526"
image: "ghcr.io/cozystack/cozystack/backup-controller:v1.0.0-rc.1@sha256:0bb4173bdcd3d917a7bd358ecc2c6a053a06ab0bd1fcdb89d1016a66173e6dfb"
replicas: 2
debug: false
metrics:

@@ -3,38 +3,9 @@ apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: backups.cozystack.io:strategy-controller
rules:
  # Strategy types (Velero, Job)
  - apiGroups: ["strategy.backups.cozystack.io"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]
  # BackupClass: resolve strategy per application
  - apiGroups: ["backups.cozystack.io"]
    resources: ["backupclasses"]
    verbs: ["get", "list", "watch"]
  # BackupJob / RestoreJob: reconcile and update status
  - apiGroups: ["backups.cozystack.io"]
    resources: ["backupjobs", "restorejobs"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["backups.cozystack.io"]
    resources: ["backupjobs/status", "restorejobs/status"]
    verbs: ["get", "update", "patch"]
  # Backup: create after Velero backup completes
  - apiGroups: ["backups.cozystack.io"]
    resources: ["backups"]
    verbs: ["create", "get", "list", "watch"]
  # Application refs (e.g. VMInstance, VirtualMachine) for backup/restore scope
  - apiGroups: ["apps.cozystack.io"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]
  # Velero Backup/Restore in cozy-velero namespace
  - apiGroups: ["velero.io"]
    resources: ["backups", "restores"]
    verbs: ["create", "get", "list", "watch", "update", "patch"]
  # Events from Recorder.Event() calls
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
  # Leader election (--leader-elect)
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

@@ -1,5 +1,5 @@
backupStrategyController:
image: "ghcr.io/cozystack/cozystack/backupstrategy-controller:v1.1.0@sha256:508e3bd5a83a316732cfb84fe598064e3092482d941cfc53738ca21237642e6f"
image: "ghcr.io/cozystack/cozystack/backupstrategy-controller:v1.0.0-rc.1@sha256:c2d975574ea9edcd785b533e01add37909959a64ef815529162dfe1f472ea702"
replicas: 2
debug: false
metrics:

@@ -8,7 +8,7 @@ spec:
plural: buckets
singular: bucket
openAPISchema: |-
  {"title":"Chart Values","type":"object","properties":{"locking":{"description":"Provisions bucket from the `-lock` BucketClass (with object lock enabled).","type":"boolean","default":false},"storagePool":{"description":"Selects a specific BucketClass by storage pool name.","type":"string","default":""},"users":{"description":"Users configuration map.","type":"object","default":{},"additionalProperties":{"type":"object","properties":{"readonly":{"description":"Whether the user has read-only access.","type":"boolean"}}}}}}
  {"title":"Chart Values","type":"object","properties":{}}
release:
  prefix: bucket-
  labels:
@@ -26,14 +26,13 @@ spec:
tags:
  - storage
icon: PHN2ZyB3aWR0aD0iMTQ0IiBoZWlnaHQ9IjE0NCIgdmlld0JveD0iMCAwIDE0NCAxNDQiIGZpbGw9Im5vbmUiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+CjxyZWN0IHdpZHRoPSIxNDQiIGhlaWdodD0iMTQ0IiByeD0iMjQiIGZpbGw9InVybCgjcGFpbnQwX2xpbmVhcl82ODNfMzA5MSkiLz4KPHBhdGggZmlsbC1ydWxlPSJldmVub2RkIiBjbGlwLXJ1bGU9ImV2ZW5vZGQiIGQ9Ik03MiAzMC4xNjQxTDExNy45ODMgMzYuNzc4OVY0MC42NzM5QzExNy45ODMgNDYuNDY1MyA5Ny4zODYyIDUxLjEzMzIgNzEuOTgyNyA1MS4xMzMyQzQ2LjU3OTIgNTEuMTMzMiAyNiA0Ni40NjUzIDI2IDQwLjY3MzlWMzYuNDQzMUw3MiAzMC4xNjQxWk03MiA1OC4yNjc4QzkxLjIwODQgNTguMjY3OCAxMDcuNjU4IDU1LjU5ODYgMTE0LjU0NyA1MS44MDQ4TDExNi44MDMgNDguMTExTDExNy43MjMgNDQuNzUzVjQ4LjkxNzFMMTAyLjY3OSAxMTEuMDMzQzEwMi42NzkgMTE0Ljg5NSA4OC45NTMzIDExOCA3Mi4wMTcyIDExOEM1NS4wODEyIDExOCA0MS4zNzQzIDExNC44OTUgNDEuMzc0MyAxMTEuMDMzTDI2LjMzIDQ4LjkxNzFWNDQuODM2OUwyOS44MDA3IDUxLjkzODJDMzYuNzA2NSA1NS42NjUzIDUyLjk5OTcgNTguMjY3OCA3MiA1OC4yNjc4WiIgZmlsbD0iIzhDMzEyMyIvPgo8cGF0aCBmaWxsLXJ1bGU9ImV2ZW5vZGQiIGNsaXAtcnVsZT0iZXZlbm9kZCIgZD0iTTcyLjAwMDMgMjZDOTcuNDAzOCAyNiAxMTggMzAuNjgzOSAxMTggMzYuNDQyQzExOCA0Mi4yIDk3LjM4NjYgNDYuODUwNyA3Mi4wMDAzIDQ2Ljg1MDdDNDYuNjE0MSA0Ni44NTA3IDI2LjAxNzYgNDIuMjM0NSAyNi4wMTc2IDM2LjQ0MkMyNi4wMTc2IDMwLjY0OTQgNDYuNTk2OCAyNiA3Mi4wMDAzIDI2Wk03Mi4wMDAzIDU0LjEwMzdDOTUuNjg1NyA1NC4xMDM3IDExNS4xNzIgNTAuMDU4IDExNy43MDYgNDQuODE5N0wxMDIuNjYyIDEwNi45MzdDMTAyLjY2MiAxMTAuNzk5IDg4LjkzNjQgMTEzLjkwNSA3Mi4wMDAzIDExMy45MDVDNTUuMDY0MyAxMTMuOTA1IDQxLjMzOSAxMTAuODE2IDQxLjMzOSAxMDYuOTU0TDI2LjI5NTkgNDQuODM3QzI4Ljg0NjYgNTAuMDU4IDQ4LjMzMzMgNTQuMTAzNyA3Mi4wMDAzIDU0LjEwMzdaIiBmaWxsPSIjRTA1MjQzIi8+CjxwYXRoIGZpbGwtcnVsZT0iZXZlbm9kZCIgY2xpcC1ydWxlPSJldmVub2RkIiBkPSJNNjEuMTcyNSA2MC4wMjkzSDgxLjA5MjhWNzkuMTY3Nkg2MS4xNzI1VjYwLjAyOTNaTTQ1LjMzMDEgOTUuMzY4OEM0NS4zMzAxIDkwLjE0MiA0OS43MTA0IDg1LjkzNDIgNTUuMTUxMSA4NS45MzQyQzYwLjU5MTcgODUuOTM0MiA2NC45NzIxIDkwLjE0MiA2NC45NzIxIDk1LjM2ODhDNjQuOTcyMSAxMDAuNTk2IDYwLjU5MTcgMTA0LjgwMyA1NS4xNTExIDEwNC44MDNDNDkuNzEwNCAxMDQuODAzIDQ1LjMzMDEgMTAwLjU5NiA0NS4zMzAxIDk1LjM2ODhaTTk2LjQ0ODcgMTA0LjM2OEg3Ni43NzIyTDg2LjYxMDUgODYuNzczN0w5Ni40NDg3IDEwNC4zNjhaIiBmaWxsPSJ3aGl0ZSIvPgo8ZGVmcz4KPGxpbmVhckdyYWRpZW50IGlkPSJwYWludDBfbGluZWFyXzY4M18zMDkxIiB4MT0iMCIgeTE9IjAiIHgyPSIxNTEiIHkyPSIxODAiIGdyYWRpZW50VW5pdHM9InVzZXJTcGFjZU9uVXNlIj4KPHN0b3Agc3RvcC1jb2xvcj0iI0ZGRjBFRSIvPgo8c3RvcCBvZmZzZXQ9IjEiIHN0b3AtY29sb3I9IiNFQzg4N0QiLz4KPC9saW5lYXJHcmFkaWVudD4KPC9kZWZzPgo8L3N2Zz4K
keysOrder: [["apiVersion"], ["appVersion"], ["kind"], ["metadata"], ["metadata", "name"], ["spec", "locking"], ["spec", "storagePool"], ["spec", "users"]]
keysOrder: [["apiVersion"], ["appVersion"], ["kind"], ["metadata"], ["metadata", "name"]]
secrets:
  exclude: []
  include:
    - resourceNames:
        - bucket-{{ .name }}
        - bucket-{{ .name }}-credentials
    - matchLabels:
        apps.cozystack.io/user-secret: "true"
ingresses:
  exclude: []
  include:

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/s3manager:v0.5.0@sha256:5a7cae722ff6b424bdfbc4aba9d072c11b6930e2ee0f5fa97c3a565bd1c8dc88
ghcr.io/cozystack/cozystack/s3manager:v0.5.0@sha256:291427de7db54a1d19dc9c2c807bdcc664a14caa9538786f31317e8c01a4a008

@@ -9,7 +9,6 @@ WORKDIR /usr/src/app
RUN wget -O- https://github.com/cloudlena/s3manager/archive/9a7c8e446b422f8973b8c461990f39fdafee9c27.tar.gz | tar -xzf- --strip 1
ADD cozystack.patch /
RUN git apply /cozystack.patch
RUN go mod tidy
RUN GOOS=$TARGETOS GOARCH=$TARGETARCH CGO_ENABLED=0 go build -ldflags="-s -w" -a -installsuffix cgo -o bin/s3manager
FROM docker.io/library/alpine:latest

@@ -1,235 +1,3 @@
diff --git a/go.mod b/go.mod
index b5d8540..6ede8e8 100644
--- a/go.mod
+++ b/go.mod
@@ -1,10 +1,11 @@
module github.com/cloudlena/s3manager
-go 1.22.5
+go 1.23
require (
github.com/cloudlena/adapters v0.0.0-20240708203353-a39be02cc801
github.com/gorilla/mux v1.8.1
+ github.com/gorilla/sessions v1.4.0
github.com/matryer/is v1.4.1
github.com/minio/minio-go/v7 v7.0.74
github.com/spf13/viper v1.19.0
@@ -16,6 +17,7 @@ require (
github.com/go-ini/ini v1.67.0 // indirect
github.com/goccy/go-json v0.10.3 // indirect
github.com/google/uuid v1.6.0 // indirect
+ github.com/gorilla/securecookie v1.1.2 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/klauspost/compress v1.17.9 // indirect
github.com/klauspost/cpuid/v2 v2.2.8 // indirect
diff --git a/go.sum b/go.sum
index 1ea1b16..d7866ce 100644
--- a/go.sum
+++ b/go.sum
@@ -16,10 +16,16 @@ github.com/goccy/go-json v0.10.3 h1:KZ5WoDbxAIgm2HNbYckL0se1fHD6rz5j4ywS6ebzDqA=
github.com/goccy/go-json v0.10.3/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
+github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
+github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
+github.com/gorilla/securecookie v1.1.2 h1:YCIWL56dvtr73r6715mJs5ZvhtnY73hBvEF8kXD8ePA=
+github.com/gorilla/securecookie v1.1.2/go.mod h1:NfCASbcHqRSY+3a8tlWJwsQap2VX5pwzwo4h3eOamfo=
+github.com/gorilla/sessions v1.4.0 h1:kpIYOp/oi6MG/p5PgxApU8srsSw9tuFbt46Lt7auzqQ=
+github.com/gorilla/sessions v1.4.0/go.mod h1:FLWm50oby91+hl7p/wRxDth9bWSuk0qVL2emc7lT5ik=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/klauspost/compress v1.17.9 h1:6KIumPrER1LHsvBVuDa0r5xaG0Es51mhhB9BQB2qeMA=
diff --git a/main.go b/main.go
index 2ffe8ab..723a1b8 100644
--- a/main.go
+++ b/main.go
@@ -41,10 +41,12 @@ type configuration struct {
Timeout int32
SseType string
SseKey string
+ LoginMode bool
}
func parseConfiguration() configuration {
var accessKeyID, secretAccessKey, iamEndpoint string
+ var loginMode bool
viper.AutomaticEnv()
@@ -57,13 +59,10 @@ func parseConfiguration() configuration {
iamEndpoint = viper.GetString("IAM_ENDPOINT")
} else {
accessKeyID = viper.GetString("ACCESS_KEY_ID")
- if len(accessKeyID) == 0 {
- log.Fatal("please provide ACCESS_KEY_ID")
- }
-
secretAccessKey = viper.GetString("SECRET_ACCESS_KEY")
- if len(secretAccessKey) == 0 {
- log.Fatal("please provide SECRET_ACCESS_KEY")
+ if len(accessKeyID) == 0 || len(secretAccessKey) == 0 {
+ log.Println("ACCESS_KEY_ID or SECRET_ACCESS_KEY not set, starting in login mode")
+ loginMode = true
}
}
@@ -115,6 +114,7 @@ func parseConfiguration() configuration {
Timeout: timeout,
SseType: sseType,
SseKey: sseKey,
+ LoginMode: loginMode,
}
}
@@ -135,57 +135,96 @@ func main() {
log.Fatal(err)
}
- // Set up S3 client
- opts := &minio.Options{
- Secure: configuration.UseSSL,
- }
- if configuration.UseIam {
- opts.Creds = credentials.NewIAM(configuration.IamEndpoint)
- } else {
- var signatureType credentials.SignatureType
-
- switch configuration.SignatureType {
- case "V2":
- signatureType = credentials.SignatureV2
- case "V4":
- signatureType = credentials.SignatureV4
- case "V4Streaming":
- signatureType = credentials.SignatureV4Streaming
- case "Anonymous":
- signatureType = credentials.SignatureAnonymous
- default:
- log.Fatalf("Invalid SIGNATURE_TYPE: %s", configuration.SignatureType)
+ // Set up router
+ r := mux.NewRouter()
+ r.PathPrefix("/static/").Handler(http.StripPrefix("/static/", http.FileServer(http.FS(statics)))).Methods(http.MethodGet)
+
+ if configuration.LoginMode {
+ // Login mode: no pre-configured S3 client, per-session credentials
+ sessionCfg := &s3manager.SessionConfig{
+ Store: s3manager.NewSessionStore(),
+ Endpoint: configuration.Endpoint,
+ UseSSL: configuration.UseSSL,
+ SkipSSLVerify: configuration.SkipSSLVerification,
+ AllowDelete: configuration.AllowDelete,
+ ForceDownload: configuration.ForceDownload,
+ ListRecursive: configuration.ListRecursive,
+ SseInfo: sseType,
+ Templates: templates,
}
- opts.Creds = credentials.NewStatic(configuration.AccessKeyID, configuration.SecretAccessKey, "", signatureType)
- }
+ // Public routes (no auth required)
+ r.Handle("/login", s3manager.HandleLoginView(templates)).Methods(http.MethodGet)
+ r.Handle("/login", s3manager.HandleLogin(sessionCfg)).Methods(http.MethodPost)
+ r.Handle("/logout", s3manager.HandleLogout(sessionCfg)).Methods(http.MethodPost)
+
+ // Protected routes (auth required via middleware)
+ protected := mux.NewRouter()
+ protected.Handle("/", http.RedirectHandler("/buckets", http.StatusPermanentRedirect)).Methods(http.MethodGet)
+ protected.Handle("/buckets", s3manager.HandleBucketsViewDynamic(templates, configuration.AllowDelete)).Methods(http.MethodGet)
+ protected.PathPrefix("/buckets/").Handler(s3manager.HandleBucketViewDynamic(templates, configuration.AllowDelete, configuration.ListRecursive)).Methods(http.MethodGet)
+ protected.Handle("/api/buckets", s3manager.HandleCreateBucketDynamic()).Methods(http.MethodPost)
+ if configuration.AllowDelete {
+ protected.Handle("/api/buckets/{bucketName}", s3manager.HandleDeleteBucketDynamic()).Methods(http.MethodDelete)
+ }
+ protected.Handle("/api/buckets/{bucketName}/objects", s3manager.HandleCreateObjectDynamic(sseType)).Methods(http.MethodPost)
+ protected.Handle("/api/buckets/{bucketName}/objects/{objectName:.*}/url", s3manager.HandleGenerateUrlDynamic()).Methods(http.MethodGet)
+ protected.Handle("/api/buckets/{bucketName}/objects/{objectName:.*}", s3manager.HandleGetObjectDynamic(configuration.ForceDownload)).Methods(http.MethodGet)
+ if configuration.AllowDelete {
+ protected.Handle("/api/buckets/{bucketName}/objects/{objectName:.*}", s3manager.HandleDeleteObjectDynamic()).Methods(http.MethodDelete)
+ }
- if configuration.Region != "" {
- opts.Region = configuration.Region
- }
- if configuration.UseSSL && configuration.SkipSSLVerification {
- opts.Transport = &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}} //nolint:gosec
- }
- s3, err := minio.New(configuration.Endpoint, opts)
- if err != nil {
- log.Fatalln(fmt.Errorf("error creating s3 client: %w", err))
- }
+ r.PathPrefix("/").Handler(s3manager.RequireAuth(sessionCfg, protected))
+ } else {
+ // Pre-configured mode: existing behavior with static S3 client
+ opts := &minio.Options{
+ Secure: configuration.UseSSL,
+ }
+ if configuration.UseIam {
+ opts.Creds = credentials.NewIAM(configuration.IamEndpoint)
+ } else {
+ var signatureType credentials.SignatureType
+
+ switch configuration.SignatureType {
+ case "V2":
+ signatureType = credentials.SignatureV2
+ case "V4":
+ signatureType = credentials.SignatureV4
+ case "V4Streaming":
+ signatureType = credentials.SignatureV4Streaming
+ case "Anonymous":
+ signatureType = credentials.SignatureAnonymous
+ default:
+ log.Fatalf("Invalid SIGNATURE_TYPE: %s", configuration.SignatureType)
+ }
+
+ opts.Creds = credentials.NewStatic(configuration.AccessKeyID, configuration.SecretAccessKey, "", signatureType)
+ }
- // Set up router
- r := mux.NewRouter()
- r.Handle("/", http.RedirectHandler("/buckets", http.StatusPermanentRedirect)).Methods(http.MethodGet)
- r.PathPrefix("/static/").Handler(http.StripPrefix("/static/", http.FileServer(http.FS(statics)))).Methods(http.MethodGet)
- r.Handle("/buckets", s3manager.HandleBucketsView(s3, templates, configuration.AllowDelete)).Methods(http.MethodGet)
- r.PathPrefix("/buckets/").Handler(s3manager.HandleBucketView(s3, templates, configuration.AllowDelete, configuration.ListRecursive)).Methods(http.MethodGet)
- r.Handle("/api/buckets", s3manager.HandleCreateBucket(s3)).Methods(http.MethodPost)
- if configuration.AllowDelete {
- r.Handle("/api/buckets/{bucketName}", s3manager.HandleDeleteBucket(s3)).Methods(http.MethodDelete)
- }
- r.Handle("/api/buckets/{bucketName}/objects", s3manager.HandleCreateObject(s3, sseType)).Methods(http.MethodPost)
- r.Handle("/api/buckets/{bucketName}/objects/{objectName:.*}/url", s3manager.HandleGenerateUrl(s3)).Methods(http.MethodGet)
- r.Handle("/api/buckets/{bucketName}/objects/{objectName:.*}", s3manager.HandleGetObject(s3, configuration.ForceDownload)).Methods(http.MethodGet)
- if configuration.AllowDelete {
- r.Handle("/api/buckets/{bucketName}/objects/{objectName:.*}", s3manager.HandleDeleteObject(s3)).Methods(http.MethodDelete)
+ if configuration.Region != "" {
+ opts.Region = configuration.Region
+ }
+ if configuration.UseSSL && configuration.SkipSSLVerification {
+ opts.Transport = &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}} //nolint:gosec
+ }
+ s3, err := minio.New(configuration.Endpoint, opts)
+ if err != nil {
+ log.Fatalln(fmt.Errorf("error creating s3 client: %w", err))
+ }
+
+ r.Handle("/", http.RedirectHandler("/buckets", http.StatusPermanentRedirect)).Methods(http.MethodGet)
+ r.Handle("/buckets", s3manager.HandleBucketsView(s3, templates, configuration.AllowDelete)).Methods(http.MethodGet)
+ r.PathPrefix("/buckets/").Handler(s3manager.HandleBucketView(s3, templates, configuration.AllowDelete, configuration.ListRecursive)).Methods(http.MethodGet)
+ r.Handle("/api/buckets", s3manager.HandleCreateBucket(s3)).Methods(http.MethodPost)
+ if configuration.AllowDelete {
+ r.Handle("/api/buckets/{bucketName}", s3manager.HandleDeleteBucket(s3)).Methods(http.MethodDelete)
+ }
+ r.Handle("/api/buckets/{bucketName}/objects", s3manager.HandleCreateObject(s3, sseType)).Methods(http.MethodPost)
+ r.Handle("/api/buckets/{bucketName}/objects/{objectName:.*}/url", s3manager.HandleGenerateUrl(s3)).Methods(http.MethodGet)
+ r.Handle("/api/buckets/{bucketName}/objects/{objectName:.*}", s3manager.HandleGetObject(s3, configuration.ForceDownload)).Methods(http.MethodGet)
+ if configuration.AllowDelete {
+ r.Handle("/api/buckets/{bucketName}/objects/{objectName:.*}", s3manager.HandleDeleteObject(s3)).Methods(http.MethodDelete)
+ }
}
lr := logging.Handler(os.Stdout)(r)
diff --git a/web/template/bucket.html.tmpl b/web/template/bucket.html.tmpl
index e2f8d28..87add13 100644
--- a/web/template/bucket.html.tmpl
@@ -256,298 +24,3 @@ index c7ea184..fb1dce7 100644
</div>
</nav>
diff --git a/internal/app/s3manager/auth.go b/internal/app/s3manager/auth.go
new file mode 100644
index 0000000..58589e2
--- /dev/null
+++ b/internal/app/s3manager/auth.go
@@ -0,0 +1,237 @@
+package s3manager
+
+import (
+ "context"
+ "crypto/rand"
+ "crypto/tls"
+ "fmt"
+ "html/template"
+ "io/fs"
+ "log"
+ "net/http"
+
+ "github.com/gorilla/sessions"
+ "github.com/minio/minio-go/v7"
+ "github.com/minio/minio-go/v7/pkg/credentials"
+)
+
+type contextKey string
+
+const s3ContextKey contextKey = "s3client"
+
+// SessionConfig holds session store and S3 connection settings for login mode.
+type SessionConfig struct {
+ Store *sessions.CookieStore
+ Endpoint string
+ UseSSL bool
+ SkipSSLVerify bool
+ AllowDelete bool
+ ForceDownload bool
+ ListRecursive bool
+ SseInfo SSEType
+ Templates fs.FS
+}
+
+// NewSessionStore creates a CookieStore with a random encryption key.
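+// The key is generated once per process, so restarting the pod invalidates
+// existing sessions and replicas cannot decode each other's session cookies.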
+func NewSessionStore() *sessions.CookieStore {
+ key := make([]byte, 32)
+ if _, err := rand.Read(key); err != nil {
+ log.Fatal("failed to generate session key:", err)
+ }
+ store := sessions.NewCookieStore(key)
+ store.Options = &sessions.Options{
+ Path: "/",
+ MaxAge: 86400,
+ HttpOnly: true,
+ Secure: true,
+ SameSite: http.SameSiteLaxMode,
+ }
+ return store
+}
+
+// NewS3Client creates a minio client from user-provided credentials.
+func NewS3Client(endpoint, accessKey, secretKey string, useSSL, skipSSLVerify bool) (*minio.Client, error) {
+ opts := &minio.Options{
+ Creds: credentials.NewStaticV4(accessKey, secretKey, ""),
+ Secure: useSSL,
+ }
+ if useSSL && skipSSLVerify {
+ opts.Transport = &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}} //nolint:gosec
+ }
+ return minio.New(endpoint, opts)
+}
+
+// S3FromContext retrieves the S3 client stored in request context.
+func S3FromContext(ctx context.Context) S3 {
+ if s3, ok := ctx.Value(s3ContextKey).(S3); ok {
+ return s3
+ }
+ return nil
+}
+
+func contextWithS3(ctx context.Context, s3 S3) context.Context {
+ return context.WithValue(ctx, s3ContextKey, s3)
+}
+
+// RequireAuth is middleware that validates session credentials and injects
+// an S3 client into the request context. Redirects to /login if no session.
+func RequireAuth(cfg *SessionConfig, next http.Handler) http.Handler {
+ return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ session, _ := cfg.Store.Get(r, "s3session")
+ accessKey, ok1 := session.Values["accessKey"].(string)
+ secretKey, ok2 := session.Values["secretKey"].(string)
+ if !ok1 || !ok2 || accessKey == "" || secretKey == "" {
+ http.Redirect(w, r, "/login", http.StatusFound)
+ return
+ }
+
+ s3, err := NewS3Client(cfg.Endpoint, accessKey, secretKey, cfg.UseSSL, cfg.SkipSSLVerify)
+ if err != nil {
+ // Session has bad credentials — clear and redirect to login
+ session.Options.MaxAge = -1
+ _ = session.Save(r, w)
+ http.Redirect(w, r, "/login", http.StatusFound)
+ return
+ }
+
+ ctx := contextWithS3(r.Context(), s3)
+ next.ServeHTTP(w, r.WithContext(ctx))
+ })
+}
+
+// HandleLoginView renders the login page.
+func HandleLoginView(templates fs.FS) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ errorMsg := r.URL.Query().Get("error")
+
+ data := struct {
+ Error string
+ }{
+ Error: errorMsg,
+ }
+
+ t, err := template.ParseFS(templates, "layout.html.tmpl", "login.html.tmpl")
+ if err != nil {
+ handleHTTPError(w, fmt.Errorf("error parsing login template: %w", err))
+ return
+ }
+ err = t.ExecuteTemplate(w, "layout", data)
+ if err != nil {
+ handleHTTPError(w, fmt.Errorf("error executing login template: %w", err))
+ return
+ }
+ }
+}
+
+// HandleLogin processes the login form POST.
+func HandleLogin(cfg *SessionConfig) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ accessKey := r.FormValue("accessKey")
+ secretKey := r.FormValue("secretKey")
+
+ if accessKey == "" || secretKey == "" {
+ http.Redirect(w, r, "/login?error=credentials+required", http.StatusFound)
+ return
+ }
+
+ // Validate credentials by attempting ListBuckets
+ s3, err := NewS3Client(cfg.Endpoint, accessKey, secretKey, cfg.UseSSL, cfg.SkipSSLVerify)
+ if err != nil {
+ http.Redirect(w, r, "/login?error=connection+failed", http.StatusFound)
+ return
+ }
+ _, err = s3.ListBuckets(r.Context())
+ if err != nil {
+ http.Redirect(w, r, "/login?error=invalid+credentials", http.StatusFound)
+ return
+ }
+
+ // Save credentials to session
+ session, _ := cfg.Store.Get(r, "s3session")
+ session.Values["accessKey"] = accessKey
+ session.Values["secretKey"] = secretKey
+ err = session.Save(r, w)
+ if err != nil {
+ handleHTTPError(w, fmt.Errorf("error saving session: %w", err))
+ return
+ }
+
+ http.Redirect(w, r, "/buckets", http.StatusFound)
+ }
+}
+
+// HandleLogout destroys the session and redirects to login.
+func HandleLogout(cfg *SessionConfig) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ session, _ := cfg.Store.Get(r, "s3session")
+ session.Options.MaxAge = -1
+ _ = session.Save(r, w)
+ http.Redirect(w, r, "/login", http.StatusFound)
+ }
+}
+
+// Dynamic handler wrappers — extract S3 from context, delegate to original handlers.
+
+// HandleBucketsViewDynamic wraps HandleBucketsView for login mode.
+func HandleBucketsViewDynamic(templates fs.FS, allowDelete bool) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ s3 := S3FromContext(r.Context())
+ HandleBucketsView(s3, templates, allowDelete).ServeHTTP(w, r)
+ }
+}
+
+// HandleBucketViewDynamic wraps HandleBucketView for login mode.
+func HandleBucketViewDynamic(templates fs.FS, allowDelete bool, listRecursive bool) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ s3 := S3FromContext(r.Context())
+ HandleBucketView(s3, templates, allowDelete, listRecursive).ServeHTTP(w, r)
+ }
+}
+
+// HandleCreateBucketDynamic wraps HandleCreateBucket for login mode.
+func HandleCreateBucketDynamic() http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ s3 := S3FromContext(r.Context())
+ HandleCreateBucket(s3).ServeHTTP(w, r)
+ }
+}
+
+// HandleDeleteBucketDynamic wraps HandleDeleteBucket for login mode.
+func HandleDeleteBucketDynamic() http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ s3 := S3FromContext(r.Context())
+ HandleDeleteBucket(s3).ServeHTTP(w, r)
+ }
+}
+
+// HandleCreateObjectDynamic wraps HandleCreateObject for login mode.
+func HandleCreateObjectDynamic(sseInfo SSEType) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ s3 := S3FromContext(r.Context())
+ HandleCreateObject(s3, sseInfo).ServeHTTP(w, r)
+ }
+}
+
+// HandleGenerateUrlDynamic wraps HandleGenerateUrl for login mode.
+func HandleGenerateUrlDynamic() http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ s3 := S3FromContext(r.Context())
+ HandleGenerateUrl(s3).ServeHTTP(w, r)
+ }
+}
+
+// HandleGetObjectDynamic wraps HandleGetObject for login mode.
+func HandleGetObjectDynamic(forceDownload bool) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ s3 := S3FromContext(r.Context())
+ HandleGetObject(s3, forceDownload).ServeHTTP(w, r)
+ }
+}
+
+// HandleDeleteObjectDynamic wraps HandleDeleteObject for login mode.
+func HandleDeleteObjectDynamic() http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ s3 := S3FromContext(r.Context())
+ HandleDeleteObject(s3).ServeHTTP(w, r)
+ }
+}
diff --git a/web/template/login.html.tmpl b/web/template/login.html.tmpl
new file mode 100644
index 0000000..f153018
--- /dev/null
+++ b/web/template/login.html.tmpl
@@ -0,0 +1,46 @@
+{{ define "content" }}
+<nav>
+ <div class="nav-wrapper container">
+ <a href="/" class="brand-logo">Cozystack S3 Manager</a>
+ </div>
+</nav>
+
+<div class="container">
+ <div class="section">
+ <div class="row">
+ <div class="col l6 offset-l3 m8 offset-m2 s12">
+ <div class="card">
+ <div class="card-content">
+ <span class="card-title">Sign In</span>
+ <p>Enter your S3 credentials to access the bucket manager.</p>
+ <br>
+
+ {{ if .Error }}
+ <div class="card-panel red lighten-4 red-text text-darken-4">
+ <i class="material-icons tiny">error</i> {{ .Error }}
+ </div>
+ {{ end }}
+
+ <form method="POST" action="/login">
+ <div class="input-field">
+ <i class="material-icons prefix">vpn_key</i>
+ <input id="accessKey" name="accessKey" type="text" required>
+ <label for="accessKey">Access Key ID</label>
+ </div>
+ <div class="input-field">
+ <i class="material-icons prefix">lock</i>
+ <input id="secretKey" name="secretKey" type="password" required>
+ <label for="secretKey">Secret Access Key</label>
+ </div>
+ <br>
+ <button type="submit" class="btn waves-effect waves-light" style="width:100%;">
+ Sign In <i class="material-icons right">send</i>
+ </button>
+ </form>
+ </div>
+ </div>
+ </div>
+ </div>
+ </div>
+</div>
+{{ end }}
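
A practical consequence of the patch: login mode is selected purely by the environment, so omitting the static credential variables is all it takes. A minimal sketch, with invented values:

```yaml
# Sketch only, values invented: with no ACCESS_KEY_ID / SECRET_ACCESS_KEY,
# the patched s3manager logs "starting in login mode" and serves /login.
env:
  - name: ENDPOINT
    value: "s3.example.org"   # hypothetical endpoint
  # ACCESS_KEY_ID and SECRET_ACCESS_KEY deliberately unset:
  # credentials are collected per session through the login form
```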

@@ -17,6 +17,19 @@ spec:
image: "{{ $.Files.Get "images/s3manager.tag" | trim }}"
env:
- name: ENDPOINT
value: "s3.{{ .Values._namespace.host }}"
valueFrom:
secretKeyRef:
name: {{ .Values.bucketName }}-credentials
key: endpoint
- name: SKIP_SSL_VERIFICATION
value: "true"
- name: ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: {{ .Values.bucketName }}-credentials
key: accessKey
- name: SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: {{ .Values.bucketName }}-credentials
key: secretKey

@@ -8,6 +8,9 @@ kind: Ingress
metadata:
  name: {{ .Values.bucketName }}-ui
  annotations:
    nginx.ingress.kubernetes.io/auth-type: "basic"
    nginx.ingress.kubernetes.io/auth-secret: "{{ .Values.bucketName }}-ui-auth"
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "99999"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "99999"

@@ -1,2 +1,24 @@
{{/* Secrets previously used for s3manager credential injection and nginx basic auth */}}
{{/* are no longer needed — s3manager now handles authentication via its own login page */}}
{{- $existingSecret := lookup "v1" "Secret" .Release.Namespace .Values.bucketName }}
{{- $bucketInfo := fromJson (b64dec (index $existingSecret.data "BucketInfo")) }}
{{- $accessKeyID := index $bucketInfo.spec.secretS3 "accessKeyID" }}
{{- $accessSecretKey := index $bucketInfo.spec.secretS3 "accessSecretKey" }}
{{- $endpoint := index $bucketInfo.spec.secretS3 "endpoint" }}
{{- $bucketName := index $bucketInfo.spec "bucketName" }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.bucketName }}-credentials
type: Opaque
stringData:
  accessKey: {{ $accessKeyID | quote }}
  secretKey: {{ $accessSecretKey | quote }}
  endpoint: {{ trimPrefix "https://" $endpoint }}
  bucketName: {{ $bucketName | quote }}
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.bucketName }}-ui-auth
data:
  auth: {{ htpasswd $accessKeyID $accessSecretKey | b64enc | quote }}
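
For readability, here is a hypothetical rendering of the first Secret above, using the chart's default `bucketName` and invented key material:

```yaml
# Hypothetical rendered output; the credentials shown are invented.
apiVersion: v1
kind: Secret
metadata:
  name: cozystack-credentials
type: Opaque
stringData:
  accessKey: "EXAMPLEACCESSKEYID"
  secretKey: "exampleSecretAccessKey"
  endpoint: "s3.example.org"   # the template trims the "https://" prefix
  bucketName: "cozystack"
```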

@@ -1,20 +0,0 @@
{{- range $name, $user := .Values.users }}
{{- $secretName := printf "%s-%s" $.Values.bucketName $name }}
{{- $existingSecret := lookup "v1" "Secret" $.Release.Namespace $secretName }}
{{- if $existingSecret }}
{{- $bucketInfo := fromJson (b64dec (index $existingSecret.data "BucketInfo")) }}
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ $secretName }}-credentials
  labels:
    apps.cozystack.io/user-secret: "true"
type: Opaque
stringData:
  accessKey: {{ index $bucketInfo.spec.secretS3 "accessKeyID" | quote }}
  secretKey: {{ index $bucketInfo.spec.secretS3 "accessSecretKey" | quote }}
  endpoint: {{ trimPrefix "https://" (index $bucketInfo.spec.secretS3 "endpoint") }}
  bucketName: {{ index $bucketInfo.spec "bucketName" | quote }}
{{- end }}
{{- end }}

@@ -1,2 +1 @@
bucketName: "cozystack"
users: {}
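## Hypothetical illustration of the users map accepted by the Bucket schema
## shown earlier; each key names a user, and readonly toggles the access level.
#users:
#  alice:
#    readonly: true
#  uploader:
#    readonly: false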

@@ -1,5 +1,8 @@
export NAME=cert-manager-crds
export NAMESPACE=cozy-cert-manager
include ../../../hack/package.mk
update:
	rm -rf charts
	helm repo add jetstack https://charts.jetstack.io
	helm repo update jetstack
	helm pull jetstack/cert-manager --untar --untardir charts
	rm -f -- `find charts/cert-manager/templates -maxdepth 1 -mindepth 1 | grep -v 'crds.yaml\|_helpers.tpl'`

Some files were not shown because too many files have changed in this diff.