Compare commits


16 Commits

Author SHA1 Message Date
Andrei Kvapil
725f70399f refactor: remove assets-server and switch to cozystack-operator
- Remove cozystack-assets-server and related manifests
- Move grafana dashboards to dedicated pod in grafana-operator
- Remove old installer (cozystack.yaml) and switch to cozystack-operator
- Remove cozystackOperator.enabled flag (operator is now always used)
- Remove obsolete platform templates (helmrepos, helmreleases, apps, namespaces)
- Update Makefile to build cozystack-operator and cozystack-packages
- Remove installer.sh script (replaced by operator)

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-01-06 17:42:29 +01:00
Andrei Kvapil
49f7ac9192 refactor: move migrations to platform chart
- Move migrations from scripts/migrations/ to packages/core/platform/images/migrations/
- Create migration-hook.yaml template to run migrations as Helm pre-upgrade/pre-install hook
- Add Dockerfile and run-migrations.sh for migrations image
- Remove old cozystack-assets image directory
- Update platform Makefile to build migrations image instead of assets

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-01-06 17:40:56 +01:00
Andrei Kvapil
ab8cc5ffd4 refactor: update CozystackResourceDefinition to use chartRef instead of chart
This commit replaces the `chart` field with `chartRef` in CozystackResourceDefinition
to support direct OCIRepository references instead of HelmRepository lookups.

Changes:
- Update API types to use ChartRef structure with Kind, Name, Namespace fields
- Modify HelmRelease reconciler to set spec.chartRef instead of spec.chart
- Update config and registry to use new ChartRef configuration
- Add backward compatibility in lineage mapper for both chartRef and legacy chart
- Update all CozystackResourceDefinition YAML files to use new format
- Regenerate CRDs and deepcopy functions
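Based on the field names in the regenerated CRD (kind, name, namespace on a CrossNamespaceSourceReference), the manifest change would look roughly like this; the chart and source names are hypothetical:

```yaml
# Before: chart resolved through a HelmRepository lookup
release:
  chart:
    name: example-app            # hypothetical chart name
    sourceRef:
      kind: HelmRepository
      name: cozystack-apps       # hypothetical repository name
      namespace: cozy-public

# After: chartRef pointing directly at an OCI source
release:
  chartRef:
    kind: OCIRepository
    name: example-app            # hypothetical source name
    namespace: cozy-public
```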

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-01-06 17:39:08 +01:00
Andrei Kvapil
f5e6107e3a feat(api): show only hash in version column for applications and modules
Fix getVersion to parse "0.1.4+abcdef" format (with "+" separator)
instead of incorrectly looking for "sha256:" prefix.
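A minimal sketch of the described parsing, assuming the real `getVersion` in cozystack-api behaves like this (the function body here is illustrative, not the actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// getVersion returns the hash component of a chart version such as
// "0.1.4+abcdef", splitting on the "+" separator. If there is no
// separator, the version is returned unchanged.
func getVersion(v string) string {
	if _, hash, ok := strings.Cut(v, "+"); ok {
		return hash
	}
	return v
}

func main() {
	fmt.Println(getVersion("0.1.4+abcdef")) // abcdef
	fmt.Println(getVersion("0.1.4"))        // 0.1.4
}
```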

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-01-06 17:38:17 +01:00
Andrei Kvapil
0fb02e6470 [operator] Add valuesFrom injection to HelmReleases (#1810)
## What this PR does

Add automatic `valuesFrom` injection with `cozystack-values` secret into
HelmReleases created by Package reconciler. This enables charts to
access cluster and namespace configuration via `.Values._cluster` and
`.Values._namespace`.

**Changes:**

- **Package Reconciler:** Inject `valuesFrom` referencing
`cozystack-values` secret, add `Owns(&HelmRelease{})` to reconcile on
changes
- **SecretReplicatorReconciler (new):** Replicate secret to namespaces
with active HelmReleases, cleanup orphaned secrets
- **PackageSource Reconciler:** Replace Watches with
`Owns(&ArtifactGenerator{})`
- **Platform:** Add `operator.cozystack.io/skip-cozystack-values`
annotation to platform PackageSource
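The injected reference on a reconciled HelmRelease would look approximately like the following (the release name is hypothetical; the secret name is the default and is configurable via operator flags):

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: example-app          # hypothetical release
spec:
  valuesFrom:
    - kind: Secret
      name: cozystack-values # injected by the Package reconciler
```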

### Release note

```release-note
[operator] Add valuesFrom injection to HelmReleases created by Package reconciler with automatic secret replication
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Configurable secret replication that copies designated values secrets
into namespaces selected by labels.
* Annotation to opt out of injecting cozystack-values into specific
PackageSource resources.
* New runtime flags to customize secret name, namespace, and namespace
selector.

* **Refactor**
* Simplified ownership/watcher setup to improve event handling and
reconciliation efficiency.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-06 17:35:34 +01:00
Timofei Larkin
3a1e7fdd8f Improve Reconcile function
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2026-01-06 18:53:17 +03:00
Andrei Kvapil
05b2244b36 feat(operator): add secret replicator and reconciler improvements
Add namespace-based secret replication with label selector approach.
The implementation uses configurable secret name, namespace, and
target namespace selector. Cache filtering optimizes memory usage.
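A namespace matching the default selector (`cozystack.io/system=true`, per the operator flags) would receive a replica of the secret; the namespace name here is hypothetical:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: cozy-monitoring            # hypothetical target namespace
  labels:
    cozystack.io/system: "true"    # matches the default replication selector
```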

Co-Authored-By: Timofei Larkin <lllamnyp@gmail.com>
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-01-06 16:31:40 +01:00
Andrei Kvapil
d2fc6f8470 [linstor] Build linstor-server with custom patches (#1726)
## What this PR does

Build piraeus-server (linstor-server) from source with custom patches:

- **adjust-on-resfile-change.diff** — Use actual device path in res file
during toggle-disk; fix LUKS data offset
- Upstream: [#473](https://github.com/LINBIT/linstor-server/pull/473),
[#472](https://github.com/LINBIT/linstor-server/pull/472)
- **allow-toggle-disk-retry.diff** — Allow retry and cancellation of
failed toggle-disk operations
  - Upstream: [#475](https://github.com/LINBIT/linstor-server/pull/475)
- **force-metadata-check-on-disk-add.diff** — Create metadata during
toggle-disk from diskless to diskful
  - Upstream: [#474](https://github.com/LINBIT/linstor-server/pull/474)
- **skip-adjust-when-device-inaccessible.diff** — Skip DRBD adjust/res
file regeneration when child layer device is inaccessible
  - Upstream: [#471](https://github.com/LINBIT/linstor-server/pull/471)

Also updates plunger-satellite script and values.yaml for the new build.

### Release note

```release-note
[linstor] Build linstor-server with custom patches for improved disk handling
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **New Features**
* Added automatic DRBD stall detection and recovery, improving storage
resync resilience without manual intervention.
* Introduced configurable container image references via Helm values for
streamlined deployment.

* **Bug Fixes**
* Enhanced disk toggle operations with retry and cancellation support
for better error handling.
  * Improved metadata creation during disk state transitions.
* Added device accessibility checks to prevent errors when underlying
storage devices are unavailable.
* Fixed LUKS encryption header sizing for consistent deployment across
nodes.


<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-06 16:17:25 +01:00
Andrei Kvapil
daa91fd2f3 feat(linstor): build linstor-server with custom patches
Add patches for piraeus-server:
- adjust-on-resfile-change: handle resource file changes
- allow-toggle-disk-retry: enable disk operation retries
- force-metadata-check-on-disk-add: verify metadata on disk addition

Update plunger-satellite script and values.yaml for new build.

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-01-06 10:50:21 +01:00
Timofei Larkin
43df3a1b70 [testing] Add aliases and autocomplete (#1803)
## What this PR does

Adds a `k=kubectl` alias and bash completion for kubectl to the
e2e-testing sandbox container so that maintainers have an easier time
exec'ing into the CI container when something needs to be debugged.

### Release note

```release-note
[testing] Add k=kubectl alias and enable kubectl completion in the CI
container.
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Chores**
* Enhanced the e2e sandbox image to enable shell bash-completion and
kubectl command completion.
* Added an alias (k) and completion wiring for kubectl to improve
interactive command use.
* These changes augment the test environment shell during image build to
provide a smoother developer/testing experience.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-05 22:48:01 +04:00
Andrei Kvapil
7b75903cee feat(operator): add valuesFrom injection to HelmReleases
Add automatic injection of cozystack-values secret reference into
HelmReleases created by Package reconciler. This enables charts to
access cluster and namespace configuration via .Values._cluster and
.Values._namespace.

Add annotation operator.cozystack.io/skip-cozystack-values to disable
injection for specific PackageSources (used for platform PackageSource
to avoid circular dependency).

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-01-05 18:51:18 +01:00
Andrei Kvapil
07b406e9bc [kubernetes] Fix endpoints for cilium-gateway (#1729)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


## What this PR does

Integrate fix
- https://github.com/kubevirt/cloud-provider-kubevirt/pull/379

### Release note


```release-note
[kubernetes] Fix endpoints for cilium-gateway
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Bug Fixes**
* Improved handling of incomplete endpoint data by introducing fallback
detection mechanisms.
* Enhanced service discovery to gather endpoints from all available
resources when standard detection fails.
* Updated logging to provide better visibility into fallback operations
and current resource status.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-05 18:11:41 +01:00
Andrei Kvapil
5f36396ccc [platform] Replace Helm lookup with valuesFrom mechanism (#1787)
## What this PR does

Replaces Helm lookup functions with FluxCD valuesFrom mechanism for
passing configuration to HelmReleases. This provides cleaner config
propagation and eliminates the need for force reconcile controllers.

### Changes:

**Platform/Tenant charts:**
- Add Secret `cozystack-values` creation in platform chart (for
tenant-root and system namespaces)
- Add Secret `cozystack-values` creation in tenant chart (for child
namespaces)

**cozystack-api:**
- Add `valuesFrom` references to HelmRelease when creating applications
- Filter keys starting with `_` when returning Application specs
- Validate that user values don't contain `_` prefixed keys

**cozystack-controller:**
- Add validation that HelmRelease contains correct valuesFrom
configuration
- Remove `CozystackConfigReconciler` (no longer needed)
- Remove `TenantHelmReconciler` (no longer needed)

**Helm charts (40+ files):**
- Add helper templates in cozy-lib for `_cluster`/`_namespace` access
- Replace ConfigMap lookups with `.Values._cluster.*`
- Replace Namespace annotation lookups with `.Values._namespace.*`

### Architecture:

```
Secret cozystack-values (in each namespace)
├── _cluster: YAML with data from ConfigMaps (cozystack, cozystack-branding, cozystack-scheduling)
└── _namespace: YAML with namespace service references (etcd, host, ingress, monitoring, seaweedfs)

HelmRelease
└── spec.valuesFrom:
    ├── Secret/cozystack-values → _namespace → .Values._namespace
    └── Secret/cozystack-values → _cluster → .Values._cluster
```
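In chart templates, the lookup replacement reads roughly like this (the `ipv4PodCidr` key is a hypothetical example, not a confirmed value name):

```yaml
# Before: live lookup against the cluster at render time
# {{ (lookup "v1" "ConfigMap" "cozy-system" "cozystack").data.ipv4PodCidr }}

# After: value delivered through the cozystack-values Secret via valuesFrom
# {{ .Values._cluster.ipv4PodCidr }}
```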

### Release note

```release-note
[platform] Replace Helm lookup functions with FluxCD valuesFrom mechanism for configuration propagation
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Helm releases and namespaces now source centralized cluster/namespace
configuration via a new secret (cozystack-values), and many templates
read values from chart-provided _cluster/_namespace entries.

* **Bug Fixes**
* API now rejects application specs containing reserved keys prefixed
with "_" to prevent invalid configurations.

* **Refactor**
* Two background reconciler controllers were removed from startup,
simplifying controller initialization.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2026-01-05 17:53:33 +01:00
Andrei Kvapil
2e61810547 refactor: replace Helm lookup with valuesFrom mechanism
Replace Helm lookup functions with FluxCD valuesFrom mechanism for
reading cluster and namespace configuration.

Changes:
- Create Secret cozystack-values in each namespace with values.yaml key
  containing _cluster and _namespace configuration as nested YAML
- Configure HelmReleases to read from this Secret via valuesFrom
  (valuesKey defaults to values.yaml, so it can be omitted)
- Update cozy-lib helpers to access config via .Values._cluster
- Add default values for required _cluster keys to ensure all fields exist
- Update Go code (cozystack-api and helm reconciler) to use new format

This eliminates the need for Helm lookup functions while maintaining
the same configuration interface for charts.
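Under that description, the Secret created in each namespace would be shaped roughly as follows; the namespace and the nested keys are hypothetical placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cozystack-values
  namespace: tenant-root        # one such Secret per namespace
stringData:
  values.yaml: |                # default valuesKey, so HelmReleases may omit it
    _cluster:
      branding: {}              # hypothetical nested configuration
    _namespace:
      monitoring: ""            # hypothetical service reference
```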

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-01-05 16:10:55 +01:00
Timofei Larkin
bf1928c96f [testing] Add aliases and autocomplete
## What this PR does

Adds a `k=kubectl` alias and bash completion for kubectl to the
e2e-testing sandbox container so that maintainers have an easier time
exec'ing into the CI container when something needs to be debugged.

### Release note

```release-note
[testing] Add k=kubectl alias and enable kubectl completion in the CI
container.
```

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2026-01-05 16:29:39 +03:00
Andrei Kvapil
f59665208c [kubernetes] Fix endpoints for cilium-gateway
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-01-04 10:35:16 +01:00
168 changed files with 2944 additions and 1640 deletions

View File

@@ -1,4 +1,4 @@
.PHONY: manifests repos assets unit-tests helm-unit-tests
.PHONY: manifests assets unit-tests helm-unit-tests
build-deps:
@command -V find docker skopeo jq gh helm > /dev/null
@@ -18,6 +18,7 @@ build: build-deps
make -C packages/system/backup-controller image
make -C packages/system/lineage-controller-webhook image
make -C packages/system/cilium image
make -C packages/system/linstor image
make -C packages/system/kubeovn-webhook image
make -C packages/system/kubeovn-plunger image
make -C packages/system/dashboard image
@@ -25,21 +26,15 @@ build: build-deps
make -C packages/system/kamaji image
make -C packages/system/bucket image
make -C packages/system/objectstorage-controller image
make -C packages/system/grafana-operator image
make -C packages/core/testing image
make -C packages/core/talos image
make -C packages/core/platform image
make -C packages/core/installer image
make manifests
repos:
rm -rf _out
make -C packages/system repo
make -C packages/apps repo
make -C packages/extra repo
manifests:
mkdir -p _out/assets
(cd packages/core/installer/; helm template -n cozy-installer installer .) > _out/assets/cozystack-installer.yaml
(cd packages/core/installer/; helm template --namespace cozy-installer installer .) > _out/assets/cozystack-installer.yaml
assets:
make -C packages/core/talos assets

View File

@@ -17,6 +17,7 @@ limitations under the License.
package v1alpha1
import (
helmv2 "github.com/fluxcd/helm-controller/api/v2"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
@@ -61,24 +62,6 @@ type CozystackResourceDefinitionSpec struct {
Dashboard *CozystackResourceDefinitionDashboard `json:"dashboard,omitempty"`
}
type CozystackResourceDefinitionChart struct {
// Name of the Helm chart
Name string `json:"name"`
// Source reference for the Helm chart
SourceRef SourceRef `json:"sourceRef"`
}
type SourceRef struct {
// Kind of the source reference
// +kubebuilder:default:="HelmRepository"
Kind string `json:"kind"`
// Name of the source reference
Name string `json:"name"`
// Namespace of the source reference
// +kubebuilder:default:="cozy-public"
Namespace string `json:"namespace"`
}
type CozystackResourceDefinitionApplication struct {
// Kind of the application, used for UI and API
Kind string `json:"kind"`
@@ -91,9 +74,8 @@ type CozystackResourceDefinitionApplication struct {
}
type CozystackResourceDefinitionRelease struct {
// Helm chart configuration
// +optional
Chart CozystackResourceDefinitionChart `json:"chart,omitempty"`
// Reference to the chart source
ChartRef *helmv2.CrossNamespaceSourceReference `json:"chartRef"`
// Labels for the release
Labels map[string]string `json:"labels,omitempty"`
// Prefix for the release name

View File

@@ -21,6 +21,7 @@ limitations under the License.
package v1alpha1
import (
"github.com/fluxcd/helm-controller/api/v2"
"k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -118,22 +119,6 @@ func (in *CozystackResourceDefinitionApplication) DeepCopy() *CozystackResourceD
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CozystackResourceDefinitionChart) DeepCopyInto(out *CozystackResourceDefinitionChart) {
*out = *in
out.SourceRef = in.SourceRef
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CozystackResourceDefinitionChart.
func (in *CozystackResourceDefinitionChart) DeepCopy() *CozystackResourceDefinitionChart {
if in == nil {
return nil
}
out := new(CozystackResourceDefinitionChart)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CozystackResourceDefinitionDashboard) DeepCopyInto(out *CozystackResourceDefinitionDashboard) {
*out = *in
@@ -205,7 +190,11 @@ func (in *CozystackResourceDefinitionList) DeepCopyObject() runtime.Object {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CozystackResourceDefinitionRelease) DeepCopyInto(out *CozystackResourceDefinitionRelease) {
*out = *in
out.Chart = in.Chart
if in.ChartRef != nil {
in, out := &in.ChartRef, &out.ChartRef
*out = new(v2.CrossNamespaceSourceReference)
**out = **in
}
if in.Labels != nil {
in, out := &in.Labels, &out.Labels
*out = make(map[string]string, len(*in))
@@ -622,21 +611,6 @@ func (in Selector) DeepCopy() Selector {
return *out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SourceRef) DeepCopyInto(out *SourceRef) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SourceRef.
func (in *SourceRef) DeepCopy() *SourceRef {
if in == nil {
return nil
}
out := new(SourceRef)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Variant) DeepCopyInto(out *Variant) {
*out = *in

View File

@@ -1,29 +0,0 @@
package main
import (
"flag"
"log"
"net/http"
"path/filepath"
)
func main() {
addr := flag.String("address", ":8123", "Address to listen on")
dir := flag.String("dir", "/cozystack/assets", "Directory to serve files from")
flag.Parse()
absDir, err := filepath.Abs(*dir)
if err != nil {
log.Fatalf("Error getting absolute path for %s: %v", *dir, err)
}
fs := http.FileServer(http.Dir(absDir))
http.Handle("/", fs)
log.Printf("Server starting on %s, serving directory %s", *addr, absDir)
err = http.ListenAndServe(*addr, nil)
if err != nil {
log.Fatalf("Server failed to start: %v", err)
}
}

View File

@@ -200,22 +200,6 @@ func main() {
os.Exit(1)
}
if err = (&controller.TenantHelmReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "TenantHelmReconciler")
os.Exit(1)
}
if err = (&controller.CozystackConfigReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "CozystackConfigReconciler")
os.Exit(1)
}
cozyAPIKind := "DaemonSet"
if reconcileDeployment {
cozyAPIKind = "Deployment"

View File

@@ -32,12 +32,16 @@ import (
helmv2 "github.com/fluxcd/helm-controller/api/v2"
sourcev1 "github.com/fluxcd/source-controller/api/v1"
sourcewatcherv1beta1 "github.com/fluxcd/source-watcher/api/v2/v1beta1"
corev1 "k8s.io/api/core/v1"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/cache"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/healthz"
"sigs.k8s.io/controller-runtime/pkg/log"
@@ -45,6 +49,7 @@ import (
metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
"sigs.k8s.io/controller-runtime/pkg/webhook"
"github.com/cozystack/cozystack/internal/cozyvaluesreplicator"
"github.com/cozystack/cozystack/internal/fluxinstall"
"github.com/cozystack/cozystack/internal/operator"
// +kubebuilder:scaffold:imports
@@ -73,6 +78,9 @@ func main() {
var enableHTTP2 bool
var installFlux bool
var cozystackVersion string
var cozyValuesSecretName string
var cozyValuesSecretNamespace string
var cozyValuesNamespaceSelector string
var platformSourceURL string
var platformSourceName string
var platformSourceRef string
@@ -92,6 +100,9 @@ func main() {
flag.StringVar(&platformSourceURL, "platform-source-url", "", "Platform source URL (oci:// or https://). If specified, generates OCIRepository or GitRepository resource.")
flag.StringVar(&platformSourceName, "platform-source-name", "cozystack-packages", "Name for the generated platform source resource (default: cozystack-packages)")
flag.StringVar(&platformSourceRef, "platform-source-ref", "", "Reference specification as key=value pairs (e.g., 'branch=main' or 'digest=sha256:...,tag=v1.0'). For OCI: digest, semver, semverFilter, tag. For Git: branch, tag, semver, name, commit.")
flag.StringVar(&cozyValuesSecretName, "cozy-values-secret-name", "cozystack-values", "The name of the secret containing cluster-wide configuration values.")
flag.StringVar(&cozyValuesSecretNamespace, "cozy-values-secret-namespace", "cozy-system", "The namespace of the secret containing cluster-wide configuration values.")
flag.StringVar(&cozyValuesNamespaceSelector, "cozy-values-namespace-selector", "cozystack.io/system=true", "The label selector for namespaces where the cluster-wide configuration values must be replicated.")
opts := zap.Options{
Development: true,
@@ -110,10 +121,29 @@ func main() {
os.Exit(1)
}
targetNSSelector, err := labels.Parse(cozyValuesNamespaceSelector)
if err != nil {
setupLog.Error(err, "could not parse namespace label selector")
os.Exit(1)
}
// Start the controller manager
setupLog.Info("Starting controller manager")
mgr, err := ctrl.NewManager(config, ctrl.Options{
Scheme: scheme,
Cache: cache.Options{
ByObject: map[client.Object]cache.ByObject{
// Cache only Secrets named <secretName> (in any namespace)
&corev1.Secret{}: {
Field: fields.OneTermEqualSelector("metadata.name", cozyValuesSecretName),
},
// Cache only Namespaces that match a label selector
&corev1.Namespace{}: {
Label: targetNSSelector,
},
},
},
Metrics: metricsserver.Options{
BindAddress: metricsAddr,
SecureServing: secureMetrics,
@@ -187,6 +217,18 @@ func main() {
os.Exit(1)
}
// Setup CozyValuesReplicator reconciler
if err := (&cozyvaluesreplicator.SecretReplicatorReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
SourceNamespace: cozyValuesSecretNamespace,
SecretName: cozyValuesSecretName,
TargetNamespaceSelector: targetNSSelector,
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "CozyValuesReplicator")
os.Exit(1)
}
// +kubebuilder:scaffold:builder
if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {

View File

@@ -73,7 +73,7 @@ func (r *CozystackResourceDefinitionHelmReconciler) updateHelmReleasesForCRD(ctx
labelSelector := client.MatchingLabels{
"apps.cozystack.io/application.kind": applicationKind,
"apps.cozystack.io/application.group": applicationGroup,
"cozystack.io/ui": "true",
"cozystack.io/ui": "true",
}
// List all HelmReleases with matching labels
@@ -97,65 +97,75 @@ func (r *CozystackResourceDefinitionHelmReconciler) updateHelmReleasesForCRD(ctx
return nil
}
// updateHelmReleaseChart updates the chart in HelmRelease based on CozystackResourceDefinition
// expectedValuesFrom returns the expected valuesFrom configuration for HelmReleases
func expectedValuesFrom() []helmv2.ValuesReference {
return []helmv2.ValuesReference{
{
Kind: "Secret",
Name: "cozystack-values",
},
}
}
// valuesFromEqual compares two ValuesReference slices
func valuesFromEqual(a, b []helmv2.ValuesReference) bool {
if len(a) != len(b) {
return false
}
for i := range a {
if a[i].Kind != b[i].Kind ||
a[i].Name != b[i].Name ||
a[i].ValuesKey != b[i].ValuesKey ||
a[i].TargetPath != b[i].TargetPath ||
a[i].Optional != b[i].Optional {
return false
}
}
return true
}
// updateHelmReleaseChart updates the chart and valuesFrom in HelmRelease based on CozystackResourceDefinition
func (r *CozystackResourceDefinitionHelmReconciler) updateHelmReleaseChart(ctx context.Context, hr *helmv2.HelmRelease, crd *cozyv1alpha1.CozystackResourceDefinition) error {
logger := log.FromContext(ctx)
hrCopy := hr.DeepCopy()
updated := false
// Validate Chart configuration exists
if crd.Spec.Release.Chart.Name == "" {
logger.V(4).Info("Skipping HelmRelease chart update: Chart.Name is empty", "crd", crd.Name)
// Validate ChartRef configuration exists
if crd.Spec.Release.ChartRef == nil ||
crd.Spec.Release.ChartRef.Kind == "" ||
crd.Spec.Release.ChartRef.Name == "" ||
crd.Spec.Release.ChartRef.Namespace == "" {
logger.Error(fmt.Errorf("invalid ChartRef in CRD"), "Skipping HelmRelease chartRef update: ChartRef is nil or incomplete",
"crd", crd.Name)
return nil
}
// Validate SourceRef fields
if crd.Spec.Release.Chart.SourceRef.Kind == "" ||
crd.Spec.Release.Chart.SourceRef.Name == "" ||
crd.Spec.Release.Chart.SourceRef.Namespace == "" {
logger.Error(fmt.Errorf("invalid SourceRef in CRD"), "Skipping HelmRelease chart update: SourceRef fields are incomplete",
"crd", crd.Name,
"kind", crd.Spec.Release.Chart.SourceRef.Kind,
"name", crd.Spec.Release.Chart.SourceRef.Name,
"namespace", crd.Spec.Release.Chart.SourceRef.Namespace)
return nil
}
// Use ChartRef directly from CRD
expectedChartRef := crd.Spec.Release.ChartRef
// Get version and reconcileStrategy from CRD or use defaults
version := ">= 0.0.0-0"
reconcileStrategy := "Revision"
// TODO: Add Version and ReconcileStrategy fields to CozystackResourceDefinitionChart if needed
// Build expected SourceRef
expectedSourceRef := helmv2.CrossNamespaceObjectReference{
Kind: crd.Spec.Release.Chart.SourceRef.Kind,
Name: crd.Spec.Release.Chart.SourceRef.Name,
Namespace: crd.Spec.Release.Chart.SourceRef.Namespace,
}
if hrCopy.Spec.Chart == nil {
// Need to create Chart spec
hrCopy.Spec.Chart = &helmv2.HelmChartTemplate{
Spec: helmv2.HelmChartTemplateSpec{
Chart: crd.Spec.Release.Chart.Name,
Version: version,
ReconcileStrategy: reconcileStrategy,
SourceRef: expectedSourceRef,
},
}
// Check if chartRef needs to be updated
if hrCopy.Spec.ChartRef == nil {
hrCopy.Spec.ChartRef = expectedChartRef
// Clear the old chart field when switching to chartRef
hrCopy.Spec.Chart = nil
updated = true
} else if hrCopy.Spec.ChartRef.Kind != expectedChartRef.Kind ||
hrCopy.Spec.ChartRef.Name != expectedChartRef.Name ||
hrCopy.Spec.ChartRef.Namespace != expectedChartRef.Namespace {
hrCopy.Spec.ChartRef = expectedChartRef
updated = true
}
// Check and update valuesFrom configuration
expected := expectedValuesFrom()
if !valuesFromEqual(hrCopy.Spec.ValuesFrom, expected) {
logger.V(4).Info("Updating HelmRelease valuesFrom", "name", hr.Name, "namespace", hr.Namespace)
hrCopy.Spec.ValuesFrom = expected
updated = true
} else {
// Update existing Chart spec
if hrCopy.Spec.Chart.Spec.Chart != crd.Spec.Release.Chart.Name ||
hrCopy.Spec.Chart.Spec.SourceRef != expectedSourceRef {
hrCopy.Spec.Chart.Spec.Chart = crd.Spec.Release.Chart.Name
hrCopy.Spec.Chart.Spec.SourceRef = expectedSourceRef
updated = true
}
}
if updated {
logger.V(4).Info("Updating HelmRelease chart", "name", hr.Name, "namespace", hr.Namespace)
logger.V(4).Info("Updating HelmRelease chartRef", "name", hr.Name, "namespace", hr.Namespace)
if err := r.Update(ctx, hrCopy); err != nil {
return fmt.Errorf("failed to update HelmRelease: %w", err)
}
@@ -163,4 +173,3 @@ func (r *CozystackResourceDefinitionHelmReconciler) updateHelmReleaseChart(ctx c
return nil
}

View File

@@ -1,140 +0,0 @@
package controller
import (
"context"
"crypto/sha256"
"encoding/hex"
"fmt"
"sort"
"time"
helmv2 "github.com/fluxcd/helm-controller/api/v2"
corev1 "k8s.io/api/core/v1"
kerrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/predicate"
)
type CozystackConfigReconciler struct {
client.Client
Scheme *runtime.Scheme
}
var configMapNames = []string{"cozystack", "cozystack-branding", "cozystack-scheduling"}
const configMapNamespace = "cozy-system"
const digestAnnotation = "cozystack.io/cozy-config-digest"
const forceReconcileKey = "reconcile.fluxcd.io/forceAt"
const requestedAt = "reconcile.fluxcd.io/requestedAt"
func (r *CozystackConfigReconciler) Reconcile(ctx context.Context, _ ctrl.Request) (ctrl.Result, error) {
log := log.FromContext(ctx)
time.Sleep(2 * time.Second)
digest, err := r.computeDigest(ctx)
if err != nil {
log.Error(err, "failed to compute config digest")
return ctrl.Result{}, nil
}
var helmList helmv2.HelmReleaseList
if err := r.List(ctx, &helmList); err != nil {
return ctrl.Result{}, fmt.Errorf("failed to list HelmReleases: %w", err)
}
now := time.Now().Format(time.RFC3339Nano)
updated := 0
for _, hr := range helmList.Items {
isSystemApp := hr.Labels["cozystack.io/system-app"] == "true"
isTenantRoot := hr.Namespace == "tenant-root" && hr.Name == "tenant-root"
if !isSystemApp && !isTenantRoot {
continue
}
patchTarget := hr.DeepCopy()
if hr.Annotations == nil {
hr.Annotations = map[string]string{}
}
if hr.Annotations[digestAnnotation] == digest {
continue
}
patchTarget.Annotations[digestAnnotation] = digest
patchTarget.Annotations[forceReconcileKey] = now
patchTarget.Annotations[requestedAt] = now
patch := client.MergeFrom(hr.DeepCopy())
if err := r.Patch(ctx, patchTarget, patch); err != nil {
log.Error(err, "failed to patch HelmRelease", "name", hr.Name, "namespace", hr.Namespace)
continue
}
updated++
log.Info("patched HelmRelease with new config digest", "name", hr.Name, "namespace", hr.Namespace)
}
log.Info("finished reconciliation", "updatedHelmReleases", updated)
return ctrl.Result{}, nil
}
func (r *CozystackConfigReconciler) computeDigest(ctx context.Context) (string, error) {
hash := sha256.New()
for _, name := range configMapNames {
var cm corev1.ConfigMap
err := r.Get(ctx, client.ObjectKey{Namespace: configMapNamespace, Name: name}, &cm)
if err != nil {
if kerrors.IsNotFound(err) {
continue // ignore missing
}
return "", err
}
// Sort keys for consistent hashing
var keys []string
for k := range cm.Data {
keys = append(keys, k)
}
sort.Strings(keys)
for _, k := range keys {
v := cm.Data[k]
fmt.Fprintf(hash, "%s:%s=%s\n", name, k, v)
}
}
return hex.EncodeToString(hash.Sum(nil)), nil
}
func (r *CozystackConfigReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
WithEventFilter(predicate.Funcs{
UpdateFunc: func(e event.UpdateEvent) bool {
cm, ok := e.ObjectNew.(*corev1.ConfigMap)
return ok && cm.Namespace == configMapNamespace && contains(configMapNames, cm.Name)
},
CreateFunc: func(e event.CreateEvent) bool {
cm, ok := e.Object.(*corev1.ConfigMap)
return ok && cm.Namespace == configMapNamespace && contains(configMapNames, cm.Name)
},
DeleteFunc: func(e event.DeleteEvent) bool {
cm, ok := e.Object.(*corev1.ConfigMap)
return ok && cm.Namespace == configMapNamespace && contains(configMapNames, cm.Name)
},
}).
For(&corev1.ConfigMap{}).
Complete(r)
}
func contains(slice []string, val string) bool {
for _, s := range slice {
if s == val {
return true
}
}
return false
}
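The digest above is deterministic because ConfigMap keys are hashed in sorted order before being written to the hash. A minimal standalone sketch of the same idea (hypothetical `digestOf` helper, not part of the controller):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
)

// digestOf mimics the controller's approach: hash keys in sorted
// order so the same data always yields the same digest, regardless
// of Go's randomized map iteration order.
func digestOf(name string, data map[string]string) string {
	h := sha256.New()
	keys := make([]string, 0, len(data))
	for k := range data {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Fprintf(h, "%s:%s=%s\n", name, k, data[k])
	}
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	a := digestOf("cozystack", map[string]string{"b": "2", "a": "1"})
	b := digestOf("cozystack", map[string]string{"a": "1", "b": "2"})
	fmt.Println(a == b) // insertion order does not affect the digest
}
```

Any change to a key or value produces a new digest, which is what triggers the forced HelmRelease reconciliation above.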


@@ -1,159 +0,0 @@
package controller
import (
"context"
"fmt"
"strings"
"time"
e "errors"
helmv2 "github.com/fluxcd/helm-controller/api/v2"
"gopkg.in/yaml.v2"
corev1 "k8s.io/api/core/v1"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/log"
)
type TenantHelmReconciler struct {
client.Client
Scheme *runtime.Scheme
}
func (r *TenantHelmReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
logger := log.FromContext(ctx)
time.Sleep(2 * time.Second)
hr := &helmv2.HelmRelease{}
if err := r.Get(ctx, req.NamespacedName, hr); err != nil {
if errors.IsNotFound(err) {
return ctrl.Result{}, nil
}
logger.Error(err, "unable to fetch HelmRelease")
return ctrl.Result{}, err
}
if !strings.HasPrefix(hr.Name, "tenant-") {
return ctrl.Result{}, nil
}
if len(hr.Status.Conditions) == 0 || hr.Status.Conditions[0].Type != "Ready" {
return ctrl.Result{}, nil
}
if len(hr.Status.History) == 0 {
logger.Info("no history in HelmRelease status", "name", hr.Name)
return ctrl.Result{}, nil
}
if hr.Status.History[0].Status != "deployed" {
return ctrl.Result{}, nil
}
newDigest := hr.Status.History[0].Digest
var hrList helmv2.HelmReleaseList
childNamespace := getChildNamespace(hr.Namespace, hr.Name)
if childNamespace == "tenant-root" && hr.Name == "tenant-root" {
if hr.Spec.Values == nil {
logger.Error(e.New("hr.Spec.Values is nil"), "cannot annotate tenant-root ns")
return ctrl.Result{}, nil
}
err := annotateTenantRootNs(*hr.Spec.Values, r.Client)
if err != nil {
logger.Error(err, "cannot annotate tenant-root ns")
return ctrl.Result{}, nil
}
logger.Info("namespace 'tenant-root' annotated")
}
if err := r.List(ctx, &hrList, client.InNamespace(childNamespace)); err != nil {
logger.Error(err, "unable to list HelmReleases in namespace", "namespace", childNamespace)
return ctrl.Result{}, err
}
for _, item := range hrList.Items {
if item.Name == hr.Name {
continue
}
oldDigest := item.GetAnnotations()["cozystack.io/tenant-config-digest"]
if oldDigest == newDigest {
continue
}
patchTarget := item.DeepCopy()
if patchTarget.Annotations == nil {
patchTarget.Annotations = map[string]string{}
}
ts := time.Now().Format(time.RFC3339Nano)
patchTarget.Annotations["cozystack.io/tenant-config-digest"] = newDigest
patchTarget.Annotations["reconcile.fluxcd.io/forceAt"] = ts
patchTarget.Annotations["reconcile.fluxcd.io/requestedAt"] = ts
patch := client.MergeFrom(item.DeepCopy())
if err := r.Patch(ctx, patchTarget, patch); err != nil {
logger.Error(err, "failed to patch HelmRelease", "name", patchTarget.Name)
continue
}
logger.Info("patched HelmRelease with new digest", "name", patchTarget.Name, "digest", newDigest, "version", hr.Status.History[0].Version)
}
return ctrl.Result{}, nil
}
func (r *TenantHelmReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&helmv2.HelmRelease{}).
Complete(r)
}
func getChildNamespace(currentNamespace, hrName string) string {
tenantName := strings.TrimPrefix(hrName, "tenant-")
switch {
case currentNamespace == "tenant-root" && hrName == "tenant-root":
// 1) root tenant inside root namespace
return "tenant-root"
case currentNamespace == "tenant-root":
// 2) any other tenant in root namespace
return fmt.Sprintf("tenant-%s", tenantName)
default:
// 3) tenant in a dedicated namespace
return fmt.Sprintf("%s-%s", currentNamespace, tenantName)
}
}
func annotateTenantRootNs(values apiextensionsv1.JSON, c client.Client) error {
var data map[string]interface{}
if err := yaml.Unmarshal(values.Raw, &data); err != nil {
return fmt.Errorf("failed to parse HelmRelease values: %w", err)
}
host, ok := data["host"].(string)
if !ok || host == "" {
return fmt.Errorf("host field not found or not a string")
}
var ns corev1.Namespace
if err := c.Get(context.TODO(), client.ObjectKey{Name: "tenant-root"}, &ns); err != nil {
return fmt.Errorf("failed to get namespace tenant-root: %w", err)
}
if ns.Annotations == nil {
ns.Annotations = map[string]string{}
}
ns.Annotations["namespace.cozystack.io/host"] = host
if err := c.Update(context.TODO(), &ns); err != nil {
return fmt.Errorf("failed to update namespace: %w", err)
}
return nil
}
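The removed `getChildNamespace` helper encodes three naming rules (root tenant, top-level tenant, nested tenant). A standalone sketch of the same logic, assuming a hypothetical `childNamespace` copy for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// childNamespace mirrors the removed getChildNamespace helper:
// tenant-root stays in place, top-level tenants get "tenant-<name>",
// and nested tenants append the suffix to the parent namespace.
func childNamespace(currentNamespace, hrName string) string {
	tenantName := strings.TrimPrefix(hrName, "tenant-")
	switch {
	case currentNamespace == "tenant-root" && hrName == "tenant-root":
		return "tenant-root"
	case currentNamespace == "tenant-root":
		return "tenant-" + tenantName
	default:
		return currentNamespace + "-" + tenantName
	}
}

func main() {
	fmt.Println(childNamespace("tenant-root", "tenant-root")) // tenant-root
	fmt.Println(childNamespace("tenant-root", "tenant-foo"))  // tenant-foo
	fmt.Println(childNamespace("tenant-foo", "tenant-bar"))   // tenant-foo-bar
}
```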


@@ -0,0 +1,272 @@
/*
Copyright 2025 The Cozystack Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cozyvaluesreplicator
import (
"context"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
)
// SecretReplicatorReconciler replicates a source secret to namespaces matching a label selector.
type SecretReplicatorReconciler struct {
client.Client
Scheme *runtime.Scheme
// Source of truth:
SourceNamespace string
SecretName string
// Namespaces to replicate into:
// (e.g. labels.SelectorFromSet(labels.Set{"tenant":"true"}), or metav1.LabelSelectorAsSelector(...))
TargetNamespaceSelector labels.Selector
}
func (r *SecretReplicatorReconciler) SetupWithManager(mgr ctrl.Manager) error {
// 1) Primary watch for requirement (b):
// Reconcile any Secret named r.SecretName in any namespace (includes source too).
// This keeps Secrets in cache and causes "copy changed -> reconcile it" to happen.
secretNameOnly := predicate.NewPredicateFuncs(func(obj client.Object) bool {
return obj.GetName() == r.SecretName
})
// 2) Secondary watch for requirement (c):
// When the *source* Secret changes, fan-out reconcile requests to every matching namespace.
onlySourceSecret := predicate.Funcs{
CreateFunc: func(e event.CreateEvent) bool { return isSourceSecret(e.Object, r) },
UpdateFunc: func(e event.UpdateEvent) bool { return isSourceSecret(e.ObjectNew, r) },
DeleteFunc: func(e event.DeleteEvent) bool { return isSourceSecret(e.Object, r) },
GenericFunc: func(e event.GenericEvent) bool {
return isSourceSecret(e.Object, r)
},
}
// Fan-out mapper for source Secret events -> one request per matching target namespace.
fanOutOnSourceSecret := handler.EnqueueRequestsFromMapFunc(func(ctx context.Context, _ client.Object) []reconcile.Request {
// List namespaces *from the cache* (because we also watch Namespaces below).
var nsList corev1.NamespaceList
if err := r.List(ctx, &nsList); err != nil {
// If list fails, best-effort: return nothing; reconcile will be retried by next event.
return nil
}
reqs := make([]reconcile.Request, 0, len(nsList.Items))
for i := range nsList.Items {
ns := &nsList.Items[i]
if ns.Name == r.SourceNamespace {
continue
}
if r.TargetNamespaceSelector != nil && !r.TargetNamespaceSelector.Matches(labels.Set(ns.Labels)) {
continue
}
reqs = append(reqs, reconcile.Request{
NamespacedName: types.NamespacedName{
Namespace: ns.Name,
Name: r.SecretName,
},
})
}
return reqs
})
// 3) Namespace watch for requirement (a):
// When a namespace is created/updated to match selector, enqueue reconcile for the Secret copy in that namespace.
enqueueOnNamespaceMatch := handler.EnqueueRequestsFromMapFunc(func(ctx context.Context, obj client.Object) []reconcile.Request {
ns, ok := obj.(*corev1.Namespace)
if !ok {
return nil
}
if ns.Name == r.SourceNamespace {
return nil
}
if r.TargetNamespaceSelector != nil && !r.TargetNamespaceSelector.Matches(labels.Set(ns.Labels)) {
return nil
}
return []reconcile.Request{{
NamespacedName: types.NamespacedName{
Namespace: ns.Name,
Name: r.SecretName,
},
}}
})
// Only trigger on namespace events where the selector matches now or matched before.
// (Firing on extra updates is harmless since Reconcile is idempotent.)
namespaceMayMatter := predicate.Funcs{
CreateFunc: func(e event.CreateEvent) bool {
ns, ok := e.Object.(*corev1.Namespace)
return ok && (r.TargetNamespaceSelector == nil || r.TargetNamespaceSelector.Matches(labels.Set(ns.Labels)))
},
UpdateFunc: func(e event.UpdateEvent) bool {
oldNS, okOld := e.ObjectOld.(*corev1.Namespace)
newNS, okNew := e.ObjectNew.(*corev1.Namespace)
if !okOld || !okNew {
return false
}
// Fire if it matches now OR matched before (covers transitions both ways; reconcile can decide what to do).
oldMatch := r.TargetNamespaceSelector == nil || r.TargetNamespaceSelector.Matches(labels.Set(oldNS.Labels))
newMatch := r.TargetNamespaceSelector == nil || r.TargetNamespaceSelector.Matches(labels.Set(newNS.Labels))
return oldMatch || newMatch
},
DeleteFunc: func(event.DeleteEvent) bool { return false }, // nothing to do on namespace delete
GenericFunc: func(event.GenericEvent) bool { return false },
}
return ctrl.NewControllerManagedBy(mgr).
// (b) Watch all Secrets with the chosen name; this also ensures Secret objects are cached.
For(&corev1.Secret{}, builder.WithPredicates(secretNameOnly)).
// (c) Add a second watch on Secret, but only for the source secret, and fan-out to all namespaces.
Watches(
&corev1.Secret{},
fanOutOnSourceSecret,
builder.WithPredicates(onlySourceSecret),
).
// (a) Watch Namespaces so they're cached and so "namespace appears / starts matching" enqueues reconcile.
Watches(
&corev1.Namespace{},
enqueueOnNamespaceMatch,
builder.WithPredicates(namespaceMayMatter),
).
Complete(r)
}
func isSourceSecret(obj client.Object, r *SecretReplicatorReconciler) bool {
if obj == nil {
return false
}
return obj.GetNamespace() == r.SourceNamespace && obj.GetName() == r.SecretName
}
func (r *SecretReplicatorReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
logger := log.FromContext(ctx)
// Ignore requests that don't match our secret name or are for the source namespace
if req.Name != r.SecretName || req.Namespace == r.SourceNamespace {
return ctrl.Result{}, nil
}
// Verify the target namespace still exists and matches the selector
targetNamespace := &corev1.Namespace{}
if err := r.Get(ctx, types.NamespacedName{Name: req.Namespace}, targetNamespace); err != nil {
if apierrors.IsNotFound(err) {
// Namespace doesn't exist, nothing to do
return ctrl.Result{}, nil
}
logger.Error(err, "Failed to get target namespace", "namespace", req.Namespace)
return ctrl.Result{}, err
}
// Check if namespace still matches the selector
if r.TargetNamespaceSelector != nil && !r.TargetNamespaceSelector.Matches(labels.Set(targetNamespace.Labels)) {
// Namespace no longer matches selector, delete the replicated secret if it exists
replicatedSecret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Namespace: req.Namespace,
Name: req.Name,
},
}
if err := r.Delete(ctx, replicatedSecret); err != nil && !apierrors.IsNotFound(err) {
logger.Error(err, "Failed to delete replicated secret from non-matching namespace",
"namespace", req.Namespace, "secret", req.Name)
return ctrl.Result{}, err
}
return ctrl.Result{}, nil
}
// Get the source secret
originalSecret := &corev1.Secret{}
if err := r.Get(ctx, types.NamespacedName{Namespace: r.SourceNamespace, Name: r.SecretName}, originalSecret); err != nil {
if apierrors.IsNotFound(err) {
// Source secret doesn't exist, delete the replicated secret if it exists
replicatedSecret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Namespace: req.Namespace,
Name: req.Name,
},
}
if err := r.Delete(ctx, replicatedSecret); err != nil && !apierrors.IsNotFound(err) {
logger.Error(err, "Failed to delete replicated secret after source secret deletion",
"namespace", req.Namespace, "secret", req.Name)
return ctrl.Result{}, err
}
return ctrl.Result{}, nil
}
logger.Error(err, "Failed to get source secret",
"namespace", r.SourceNamespace, "secret", r.SecretName)
return ctrl.Result{}, err
}
// Create or update the replicated secret
replicatedSecret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Namespace: req.Namespace,
Name: req.Name,
},
}
_, err := controllerutil.CreateOrUpdate(ctx, r.Client, replicatedSecret, func() error {
// Copy the secret data and type from the source
replicatedSecret.Data = make(map[string][]byte)
for k, v := range originalSecret.Data {
replicatedSecret.Data[k] = v
}
replicatedSecret.Type = originalSecret.Type
// Copy labels and annotations from source (if any)
if originalSecret.Labels != nil {
if replicatedSecret.Labels == nil {
replicatedSecret.Labels = make(map[string]string)
}
for k, v := range originalSecret.Labels {
replicatedSecret.Labels[k] = v
}
}
if originalSecret.Annotations != nil {
if replicatedSecret.Annotations == nil {
replicatedSecret.Annotations = make(map[string]string)
}
for k, v := range originalSecret.Annotations {
replicatedSecret.Annotations[k] = v
}
}
return nil
})
if err != nil {
logger.Error(err, "Failed to create or update replicated secret",
"namespace", req.Namespace, "secret", req.Name)
return ctrl.Result{}, err
}
return ctrl.Result{}, nil
}
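The fan-out mapper above emits one reconcile request per namespace that matches the selector, skipping the source namespace itself. A plain-Go sketch of that selection step, using subset label matching as a stand-in for `labels.Selector` (hypothetical `fanOut` helper, simplified to strings):

```go
package main

import (
	"fmt"
	"sort"
)

// matches reports whether nsLabels contains every key/value pair in
// selector — the subset semantics of labels.SelectorFromSet.
func matches(nsLabels, selector map[string]string) bool {
	for k, v := range selector {
		if nsLabels[k] != v {
			return false
		}
	}
	return true
}

// fanOut returns one target per matching namespace, skipping the
// source, mirroring the EnqueueRequestsFromMapFunc above.
func fanOut(namespaces map[string]map[string]string, source string, selector map[string]string) []string {
	var targets []string
	for name, lbls := range namespaces {
		if name == source || !matches(lbls, selector) {
			continue
		}
		targets = append(targets, name)
	}
	sort.Strings(targets) // map order is random; sort for stable output
	return targets
}

func main() {
	nss := map[string]map[string]string{
		"cozy-system": {},
		"tenant-a":    {"tenant": "true"},
		"tenant-b":    {"tenant": "true"},
		"kube-system": {},
	}
	fmt.Println(fanOut(nss, "cozy-system", map[string]string{"tenant": "true"}))
}
```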


@@ -37,6 +37,14 @@ import (
"sigs.k8s.io/controller-runtime/pkg/reconcile"
)
const (
// AnnotationSkipCozystackValues disables injection of the cozystack-values secret into the generated HelmRelease.
// Set this annotation on the PackageSource.
AnnotationSkipCozystackValues = "operator.cozystack.io/skip-cozystack-values"
// SecretCozystackValues is the name of the secret containing cluster and namespace configuration
SecretCozystackValues = "cozystack-values"
)
// PackageReconciler reconciles Package resources
type PackageReconciler struct {
client.Client
@@ -215,6 +223,16 @@ func (r *PackageReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ct
},
}
// Add valuesFrom for cozystack-values secret unless disabled by annotation on PackageSource
if packageSource.GetAnnotations()[AnnotationSkipCozystackValues] != "true" {
hr.Spec.ValuesFrom = []helmv2.ValuesReference{
{
Kind: "Secret",
Name: SecretCozystackValues,
},
}
}
// Set ownerReference
gvk, err := apiutil.GVKForObject(pkg, r.Scheme)
if err != nil {
@@ -869,6 +887,7 @@ func (r *PackageReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
Named("cozystack-package").
For(&cozyv1alpha1.Package{}).
Owns(&helmv2.HelmRelease{}).
Watches(
&cozyv1alpha1.PackageSource{},
handler.EnqueueRequestsFromMapFunc(func(ctx context.Context, obj client.Object) []reconcile.Request {


@@ -31,9 +31,7 @@ import (
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/apiutil"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
)
// PackageSourceReconciler reconciles PackageSource resources
@@ -409,26 +407,7 @@ func (r *PackageSourceReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
Named("cozystack-packagesource").
For(&cozyv1alpha1.PackageSource{}).
Watches(
&sourcewatcherv1beta1.ArtifactGenerator{},
handler.EnqueueRequestsFromMapFunc(func(ctx context.Context, obj client.Object) []reconcile.Request {
ag, ok := obj.(*sourcewatcherv1beta1.ArtifactGenerator)
if !ok {
return nil
}
// Find the PackageSource that owns this ArtifactGenerator by ownerReference
for _, ownerRef := range ag.OwnerReferences {
if ownerRef.Kind == "PackageSource" {
return []reconcile.Request{{
NamespacedName: types.NamespacedName{
Name: ownerRef.Name,
},
}}
}
}
return nil
}),
).
Owns(&sourcewatcherv1beta1.ArtifactGenerator{}).
Complete(r)
}


@@ -1,5 +1,4 @@
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $seaweedfs := index $myNS.metadata.annotations "namespace.cozystack.io/seaweedfs" }}
{{- $seaweedfs := .Values._namespace.seaweedfs }}
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClaim
metadata:


@@ -21,5 +21,8 @@ spec:
force: true
remediation:
retries: -1
valuesFrom:
- kind: Secret
name: cozystack-values
values:
bucketName: {{ .Release.Name }}


@@ -1,5 +1,4 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $clusterDomain := (index $cozyConfig.data "cluster-domain") | default "cozy.local" }}
{{- $clusterDomain := (index .Values._cluster "cluster-domain") | default "cozy.local" }}
{{- if .Values.clickhouseKeeper.enabled }}
apiVersion: "clickhouse-keeper.altinity.com/v1"


@@ -1,5 +1,4 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $clusterDomain := (index $cozyConfig.data "cluster-domain") | default "cozy.local" }}
{{- $clusterDomain := (index .Values._cluster "cluster-domain") | default "cozy.local" }}
{{- $existingSecret := lookup "v1" "Secret" .Release.Namespace (printf "%s-credentials" .Release.Name) }}
{{- $passwords := dict }}
{{- $users := .Values.users }}


@@ -50,9 +50,8 @@ spec:
postgresUID: 999
postgresGID: 999
enableSuperuserAccess: true
{{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }}
{{- if $configMap }}
{{- $rawConstraints := get $configMap.data "globalAppTopologySpreadConstraints" }}
{{- if .Values._cluster.scheduling }}
{{- $rawConstraints := get .Values._cluster.scheduling "globalAppTopologySpreadConstraints" }}
{{- if $rawConstraints }}
{{- $rawConstraints | fromYaml | toYaml | nindent 2 }}
labelSelector:


@@ -1,5 +1,4 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" | default (dict "data" (dict)) }}
{{- $clusterDomain := index $cozyConfig.data "cluster-domain" | default "cozy.local" }}
{{- $clusterDomain := index .Values._cluster "cluster-domain" | default "cozy.local" }}
---
apiVersion: apps.foundationdb.org/v1beta2
kind: FoundationDBCluster


@@ -1,5 +1,5 @@
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller.go b/pkg/controller/kubevirteps/kubevirteps_controller.go
index 53388eb8e..28644236f 100644
index 53388eb8e..873060251 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller.go
@@ -12,7 +12,6 @@ import (
@@ -10,12 +10,17 @@ index 53388eb8e..28644236f 100644
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
@@ -669,35 +668,50 @@ func (c *Controller) getDesiredEndpoints(service *v1.Service, tenantSlices []*di
@@ -666,38 +665,62 @@ func (c *Controller) getDesiredEndpoints(service *v1.Service, tenantSlices []*di
// for extracting the nodes it does not matter what type of address we are dealing with
// all nodes with an endpoint for a corresponding slice will be selected.
nodeSet := sets.Set[string]{}
+ hasEndpointsWithoutNodeName := false
for _, slice := range tenantSlices {
for _, endpoint := range slice.Endpoints {
// find all unique nodes that correspond to an endpoint in a tenant slice
+ if endpoint.NodeName == nil {
+ klog.Warningf("Skipping endpoint without NodeName in slice %s/%s", slice.Namespace, slice.Name)
+ hasEndpointsWithoutNodeName = true
+ continue
+ }
nodeSet.Insert(*endpoint.NodeName)
@@ -23,6 +28,13 @@ index 53388eb8e..28644236f 100644
}
- klog.Infof("Desired nodes for service %s in namespace %s: %v", service.Name, service.Namespace, sets.List(nodeSet))
+ // Fallback: if no endpoints with NodeName were found, but there are endpoints without NodeName,
+ // distribute traffic to all VMIs (similar to ExternalTrafficPolicy=Cluster behavior)
+ if nodeSet.Len() == 0 && hasEndpointsWithoutNodeName {
+ klog.Infof("No endpoints with NodeName found for service %s/%s, falling back to all VMIs", service.Namespace, service.Name)
+ return c.getAllVMIEndpoints()
+ }
+
+ klog.Infof("Desired nodes for service %s/%s: %v", service.Namespace, service.Name, sets.List(nodeSet))
for _, node := range sets.List(nodeSet) {
@@ -68,7 +80,7 @@ index 53388eb8e..28644236f 100644
desiredEndpoints = append(desiredEndpoints, &discovery.Endpoint{
Addresses: []string{i.IP},
Conditions: discovery.EndpointConditions{
@@ -705,9 +719,9 @@ func (c *Controller) getDesiredEndpoints(service *v1.Service, tenantSlices []*di
@@ -705,9 +728,9 @@ func (c *Controller) getDesiredEndpoints(service *v1.Service, tenantSlices []*di
Serving: &serving,
Terminating: &terminating,
},
@@ -80,6 +92,71 @@ index 53388eb8e..28644236f 100644
}
}
}
@@ -716,6 +739,64 @@ func (c *Controller) getDesiredEndpoints(service *v1.Service, tenantSlices []*di
return desiredEndpoints
}
+// getAllVMIEndpoints returns endpoints for all VMIs in the infra namespace.
+// This is used as a fallback when tenant endpoints don't have NodeName specified,
+// similar to ExternalTrafficPolicy=Cluster behavior where traffic is distributed to all nodes.
+func (c *Controller) getAllVMIEndpoints() []*discovery.Endpoint {
+ var endpoints []*discovery.Endpoint
+
+ // List all VMIs in the infra namespace
+ vmiList, err := c.infraDynamic.
+ Resource(kubevirtv1.VirtualMachineInstanceGroupVersionKind.GroupVersion().WithResource("virtualmachineinstances")).
+ Namespace(c.infraNamespace).
+ List(context.TODO(), metav1.ListOptions{})
+ if err != nil {
+ klog.Errorf("Failed to list VMIs in namespace %q: %v", c.infraNamespace, err)
+ return endpoints
+ }
+
+ for _, obj := range vmiList.Items {
+ vmi := &kubevirtv1.VirtualMachineInstance{}
+ err = runtime.DefaultUnstructuredConverter.FromUnstructured(obj.Object, vmi)
+ if err != nil {
+ klog.Errorf("Failed to convert Unstructured to VirtualMachineInstance: %v", err)
+ continue
+ }
+
+ if vmi.Status.NodeName == "" {
+ klog.Warningf("Skipping VMI %s/%s: NodeName is empty", vmi.Namespace, vmi.Name)
+ continue
+ }
+ nodeNamePtr := &vmi.Status.NodeName
+
+ ready := vmi.Status.Phase == kubevirtv1.Running
+ serving := vmi.Status.Phase == kubevirtv1.Running
+ terminating := vmi.Status.Phase == kubevirtv1.Failed || vmi.Status.Phase == kubevirtv1.Succeeded
+
+ for _, i := range vmi.Status.Interfaces {
+ if i.Name == "default" {
+ if i.IP == "" {
+ klog.Warningf("VMI %s/%s interface %q has no IP, skipping", vmi.Namespace, vmi.Name, i.Name)
+ continue
+ }
+ endpoints = append(endpoints, &discovery.Endpoint{
+ Addresses: []string{i.IP},
+ Conditions: discovery.EndpointConditions{
+ Ready: &ready,
+ Serving: &serving,
+ Terminating: &terminating,
+ },
+ NodeName: nodeNamePtr,
+ })
+ break
+ }
+ }
+ }
+
+ klog.Infof("Fallback: created %d endpoints from all VMIs in namespace %s", len(endpoints), c.infraNamespace)
+ return endpoints
+}
+
func (c *Controller) ensureEndpointSliceLabels(slice *discovery.EndpointSlice, svc *v1.Service) (map[string]string, bool) {
labels := make(map[string]string)
labelsChanged := false
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller_test.go b/pkg/controller/kubevirteps/kubevirteps_controller_test.go
index 1c97035b4..d205d0bed 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller_test.go


@@ -1,7 +1,6 @@
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $etcd := index $myNS.metadata.annotations "namespace.cozystack.io/etcd" }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}
{{- $etcd := .Values._namespace.etcd }}
{{- $ingress := .Values._namespace.ingress }}
{{- $host := .Values._namespace.host }}
{{- $kubevirtmachinetemplateNames := list }}
{{- define "kubevirtmachinetemplate" -}}
spec:
@@ -31,9 +30,8 @@ spec:
{{- end }}
cluster.x-k8s.io/deployment-name: {{ $.Release.Name }}-{{ .groupName }}
spec:
{{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }}
{{- if $configMap }}
{{- $rawConstraints := get $configMap.data "globalAppTopologySpreadConstraints" }}
{{- if .Values._cluster.scheduling }}
{{- $rawConstraints := get .Values._cluster.scheduling "globalAppTopologySpreadConstraints" }}
{{- if $rawConstraints }}
{{- $rawConstraints | fromYaml | toYaml | nindent 10 }}
labelSelector:


@@ -1,5 +1,4 @@
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $targetTenant := index $myNS.metadata.annotations "namespace.cozystack.io/monitoring" }}
{{- $targetTenant := .Values._namespace.monitoring }}
{{- if .Values.addons.monitoringAgents.enabled }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease


@@ -1,8 +1,6 @@
{{- define "cozystack.defaultVPAValues" -}}
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $clusterDomain := (index $cozyConfig.data "cluster-domain") | default "cozy.local" }}
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $targetTenant := index $myNS.metadata.annotations "namespace.cozystack.io/monitoring" }}
{{- $clusterDomain := (index .Values._cluster "cluster-domain") | default "cozy.local" }}
{{- $targetTenant := .Values._namespace.monitoring }}
vpaForVPA: false
vertical-pod-autoscaler:
recommender:


@@ -1,5 +1,4 @@
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- $ingress := .Values._namespace.ingress }}
{{- if and (eq .Values.addons.ingressNginx.exposeMethod "Proxied") .Values.addons.ingressNginx.hosts }}
---
apiVersion: networking.k8s.io/v1


@@ -1,5 +1,4 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $clusterDomain := (index $cozyConfig.data "cluster-domain") | default "cozy.local" }}
{{- $clusterDomain := (index .Values._cluster "cluster-domain") | default "cozy.local" }}
{{- $existingSecret := lookup "v1" "Secret" .Release.Namespace (printf "%s-credentials" .Release.Name) }}
{{- $passwords := dict }}
@@ -53,6 +52,9 @@ spec:
force: true
remediation:
retries: -1
valuesFrom:
- kind: Secret
name: cozystack-values
values:
nats:
container:


@@ -46,9 +46,8 @@ spec:
imageName: ghcr.io/cloudnative-pg/postgresql:{{ include "postgres.versionMap" $ | trimPrefix "v" }}
enableSuperuserAccess: true
{{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }}
{{- if $configMap }}
{{- $rawConstraints := get $configMap.data "globalAppTopologySpreadConstraints" }}
{{- if .Values._cluster.scheduling }}
{{- $rawConstraints := get .Values._cluster.scheduling "globalAppTopologySpreadConstraints" }}
{{- if $rawConstraints }}
{{- $rawConstraints | fromYaml | toYaml | nindent 2 }}
labelSelector:


@@ -31,4 +31,7 @@ spec:
force: true
remediation:
retries: -1
valuesFrom:
- kind: Secret
name: cozystack-values
{{- end }}


@@ -30,3 +30,6 @@ spec:
force: true
remediation:
retries: -1
valuesFrom:
- kind: Secret
name: cozystack-values


@@ -31,4 +31,7 @@ spec:
force: true
remediation:
retries: -1
valuesFrom:
- kind: Secret
name: cozystack-values
{{- end }}


@@ -1,5 +1,4 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $oidcEnabled := index $cozyConfig.data "oidc-enabled" }}
{{- $oidcEnabled := index .Values._cluster "oidc-enabled" }}
{{- if eq $oidcEnabled "true" }}
{{- if .Capabilities.APIVersions.Has "v1.edp.epam.com/v1" }}
apiVersion: v1.edp.epam.com/v1


@@ -31,4 +31,7 @@ spec:
force: true
remediation:
retries: -1
valuesFrom:
- kind: Secret
name: cozystack-values
{{- end }}


@@ -1,46 +1,63 @@
{{- define "cozystack.namespace-anotations" }}
{{- $context := index . 0 }}
{{- $existingNS := index . 1 }}
{{- range $x := list "etcd" "monitoring" "ingress" "seaweedfs" }}
{{- if (index $context.Values $x) }}
namespace.cozystack.io/{{ $x }}: "{{ include "tenant.name" $context }}"
{{- else }}
namespace.cozystack.io/{{ $x }}: "{{ index $existingNS.metadata.annotations (printf "namespace.cozystack.io/%s" $x) | required (printf "namespace %s has no namespace.cozystack.io/%s annotation" $context.Release.Namespace $x) }}"
{{- end }}
{{- end }}
{{- end }}
{{/* Lookup for namespace uid (needed for ownerReferences) */}}
{{- $existingNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- if not $existingNS }}
{{- fail (printf "failed to look up existing namespace: %s" .Release.Namespace) }}
{{- end }}
{{- if ne (include "tenant.name" .) "tenant-root" }}
{{/* Compute namespace values once for use in both Secret and labels */}}
{{- $tenantName := include "tenant.name" . }}
{{- $parentNamespace := .Values._namespace | default dict }}
{{- $parentHost := $parentNamespace.host | default "" }}
{{/* Compute host */}}
{{- $computedHost := "" }}
{{- if .Values.host }}
{{- $computedHost = .Values.host }}
{{- else if $parentHost }}
{{- $computedHost = printf "%s.%s" (splitList "-" $tenantName | last) $parentHost }}
{{- end }}
{{/* Compute service references */}}
{{- $etcd := $parentNamespace.etcd | default "" }}
{{- if .Values.etcd }}
{{- $etcd = $tenantName }}
{{- end }}
{{- $ingress := $parentNamespace.ingress | default "" }}
{{- if .Values.ingress }}
{{- $ingress = $tenantName }}
{{- end }}
{{- $monitoring := $parentNamespace.monitoring | default "" }}
{{- if .Values.monitoring }}
{{- $monitoring = $tenantName }}
{{- end }}
{{- $seaweedfs := $parentNamespace.seaweedfs | default "" }}
{{- if .Values.seaweedfs }}
{{- $seaweedfs = $tenantName }}
{{- end }}
---
apiVersion: v1
kind: Namespace
metadata:
name: {{ include "tenant.name" . }}
name: {{ $tenantName }}
{{- if hasPrefix "tenant-" .Release.Namespace }}
annotations:
{{- if .Values.host }}
namespace.cozystack.io/host: "{{ .Values.host }}"
{{- else }}
{{ $parentHost := index $existingNS.metadata.annotations "namespace.cozystack.io/host" | required (printf "namespace %s has no namespace.cozystack.io/host annotation" .Release.Namespace) }}
namespace.cozystack.io/host: "{{ splitList "-" (include "tenant.name" .) | last }}.{{ $parentHost }}"
{{- end }}
{{- include "cozystack.namespace-anotations" (list . $existingNS) | nindent 4 }}
labels:
tenant.cozystack.io/{{ include "tenant.name" $ }}: ""
{{- if hasPrefix "tenant-" .Release.Namespace }}
tenant.cozystack.io/{{ $tenantName }}: ""
{{- $parts := splitList "-" .Release.Namespace }}
{{- range $i, $v := $parts }}
{{- if ne $i 0 }}
tenant.cozystack.io/{{ join "-" (slice $parts 0 (add $i 1)) }}: ""
{{- end }}
{{- end }}
{{- end }}
{{- include "cozystack.namespace-anotations" (list $ $existingNS) | nindent 4 }}
{{/* Labels for network policies */}}
namespace.cozystack.io/etcd: {{ $etcd | quote }}
namespace.cozystack.io/ingress: {{ $ingress | quote }}
namespace.cozystack.io/monitoring: {{ $monitoring | quote }}
namespace.cozystack.io/seaweedfs: {{ $seaweedfs | quote }}
namespace.cozystack.io/host: {{ $computedHost | quote }}
alpha.kubevirt.io/auto-memory-limits-ratio: "1.0"
ownerReferences:
- apiVersion: v1
@@ -50,4 +67,23 @@ metadata:
name: {{ .Release.Namespace }}
uid: {{ $existingNS.metadata.uid }}
{{- end }}
---
apiVersion: v1
kind: Secret
metadata:
name: cozystack-values
namespace: {{ $tenantName }}
labels:
reconcile.fluxcd.io/watch: Enabled
type: Opaque
stringData:
values.yaml: |
_cluster:
{{- .Values._cluster | toYaml | nindent 6 }}
_namespace:
etcd: {{ $etcd | quote }}
ingress: {{ $ingress | quote }}
monitoring: {{ $monitoring | quote }}
seaweedfs: {{ $seaweedfs | quote }}
host: {{ $computedHost | quote }}
{{- end }}
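
Taken together, the values computed above land in the per-tenant Secret. For a hypothetical child tenant `tenant-foo-bar` (parent `tenant-foo`, parent host `foo.example.org`) that enables only `monitoring`, the template would render roughly this (all names illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cozystack-values
  namespace: tenant-foo-bar
  labels:
    reconcile.fluxcd.io/watch: Enabled
type: Opaque
stringData:
  values.yaml: |
    _cluster: {}                    # parent's _cluster values, passed through verbatim
    _namespace:
      etcd: "tenant-foo"            # inherited from the parent
      ingress: "tenant-foo"         # inherited from the parent
      monitoring: "tenant-foo-bar"  # this tenant runs its own monitoring
      seaweedfs: "tenant-foo"       # inherited from the parent
      host: "bar.foo.example.org"   # <last name segment>.<parent host>
```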


@@ -31,4 +31,7 @@ spec:
force: true
remediation:
retries: -1
valuesFrom:
- kind: Secret
name: cozystack-values
{{- end }}


@@ -74,9 +74,8 @@ Generate a stable UUID for cloud-init re-initialization upon upgrade.
Node Affinity for Windows VMs
*/}}
{{- define "virtual-machine.nodeAffinity" -}}
{{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" -}}
{{- if $configMap -}}
{{- $dedicatedNodesForWindowsVMs := get $configMap.data "dedicatedNodesForWindowsVMs" -}}
{{- if .Values._cluster.scheduling -}}
{{- $dedicatedNodesForWindowsVMs := get .Values._cluster.scheduling "dedicatedNodesForWindowsVMs" -}}
{{- if eq $dedicatedNodesForWindowsVMs "true" -}}
{{- $isWindows := hasPrefix "windows" (toString .Values.instanceProfile) -}}
affinity:
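
This rewrite replaces a live ConfigMap lookup with chart values, so scheduling flags now have to arrive through the propagated `_cluster` values. A hypothetical fragment that would enable the Windows node affinity (the helper compares the value against the string `"true"`, so it must be quoted):

```yaml
_cluster:
  scheduling:
    # Same key that previously lived in the cozy-system/cozystack-scheduling
    # ConfigMap; compared as a string by the helper above.
    dedicatedNodesForWindowsVMs: "true"
```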


@@ -74,9 +74,8 @@ Generate a stable UUID for cloud-init re-initialization upon upgrade.
Node Affinity for Windows VMs
*/}}
{{- define "virtual-machine.nodeAffinity" -}}
{{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" -}}
{{- if $configMap -}}
{{- $dedicatedNodesForWindowsVMs := get $configMap.data "dedicatedNodesForWindowsVMs" -}}
{{- if .Values._cluster.scheduling -}}
{{- $dedicatedNodesForWindowsVMs := get .Values._cluster.scheduling "dedicatedNodesForWindowsVMs" -}}
{{- if eq $dedicatedNodesForWindowsVMs "true" -}}
{{- $isWindows := hasPrefix "windows" (toString .Values.instanceProfile) -}}
affinity:


@@ -1,5 +1,4 @@
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}
{{- $host := .Values._namespace.host }}
{{- $existingSecret := lookup "v1" "Secret" .Release.Namespace (printf "%s-vpn" .Release.Name) }}
{{- $accessKeys := list }}
{{- $passwords := dict }}


@@ -7,51 +7,42 @@ pre-checks:
../../../hack/pre-checks.sh
show:
cozyhr show -n $(NAMESPACE) $(NAME) --plain
cozyhr show --namespace $(NAMESPACE) $(NAME) --plain
apply:
cozyhr show -n $(NAMESPACE) $(NAME) --plain | kubectl apply -f-
cozyhr show --namespace $(NAMESPACE) $(NAME) --plain | kubectl apply --filename -
diff:
cozyhr show -n $(NAMESPACE) $(NAME) --plain | kubectl diff -f -
cozyhr show --namespace $(NAMESPACE) $(NAME) --plain | kubectl diff --filename -
image: pre-checks image-cozystack
image-cozystack:
docker buildx build -f images/cozystack/Dockerfile ../../.. \
--tag $(REGISTRY)/installer:$(call settag,$(TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/installer:latest \
--cache-to type=inline \
--metadata-file images/installer.json \
$(BUILDX_ARGS)
IMAGE="$(REGISTRY)/installer:$(call settag,$(TAG))@$$(yq e '."containerimage.digest"' images/installer.json -o json -r)" \
yq -i '.cozystack.image = strenv(IMAGE)' values.yaml
rm -f images/installer.json
image: pre-checks image-operator image-packages
update-version:
TAG="$(call settag,$(TAG))" \
yq -i '.cozystackOperator.cozystackVersion = strenv(TAG)' values.yaml
yq --inplace '.cozystackOperator.cozystackVersion = strenv(TAG)' values.yaml
image-operator:
docker buildx build -f images/cozystack-operator/Dockerfile ../../.. \
--tag $(REGISTRY)/cozystack-operator:$(call settag,$(TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/cozystack-operator:latest \
--cache-to type=inline \
--metadata-file images/cozystack-operator.json \
$(BUILDX_ARGS)
IMAGE="$(REGISTRY)/cozystack-operator:$(call settag,$(TAG))@$$(yq e '."containerimage.digest"' images/cozystack-operator.json -o json -r)" \
yq -i '.cozystackOperator.image = strenv(IMAGE)' values.yaml
docker buildx build --file images/cozystack-operator/Dockerfile ../../.. \
--tag $(REGISTRY)/cozystack-operator:$(call settag,$(TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/cozystack-operator:latest \
--cache-to type=inline \
--metadata-file images/cozystack-operator.json \
$(BUILDX_ARGS)
IMAGE="$(REGISTRY)/cozystack-operator:$(call settag,$(TAG))@$$(yq --exit-status '.["containerimage.digest"]' images/cozystack-operator.json --output-format json --raw-output)" \
yq --inplace '.cozystackOperator.image = strenv(IMAGE)' values.yaml
rm -f images/cozystack-operator.json
image-packages: update-version
mkdir -p ../../../_out/assets images
flux push artifact \
oci://$(REGISTRY)/platform-packages:$(call settag,$(TAG)) \
--path=../../../packages \
--source=https://github.com/cozystack/cozystack \
--revision="$$(git describe --tags):$$(git rev-parse HEAD)" \
2>&1 | tee images/cozystack-packages.log
export REPO="oci://$(REGISTRY)/platform-packages"; \
export DIGEST=$$(awk -F@ '/artifact successfully pushed/ {print $$2}' images/cozystack-packages.log; rm -f images/cozystack-packages.log); \
test -n "$$DIGEST" && yq -i '.cozystackOperator.platformSource = (strenv(REPO) + "@" + strenv(DIGEST))' values.yaml
oci://$(REGISTRY)/cozystack-packages:$(call settag,$(TAG)) \
--path=../../../packages \
--source=https://github.com/cozystack/cozystack \
--revision="$$(git describe --tags):$$(git rev-parse HEAD)" \
2>&1 | tee images/cozystack-packages.log
REPO="oci://$(REGISTRY)/cozystack-packages" \
DIGEST=$$(awk --field-separator @ '/artifact successfully pushed/ {print $$2}' images/cozystack-packages.log) && \
rm -f images/cozystack-packages.log && \
test -n "$$DIGEST" && \
yq --inplace '.cozystackOperator.platformSourceUrl = strenv(REPO)' values.yaml && \
yq --inplace '.cozystackOperator.platformSourceRef = "digest=" + strenv(DIGEST)' values.yaml
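
The digest plumbing at the end of `image-packages` can be checked in isolation. A minimal sketch, assuming the `artifact successfully pushed` log line format implied by the awk pattern in the recipe (the sample registry path is illustrative):

```shell
# Simulated `flux push artifact` log line; the exact wording is an assumption
# based on the awk pattern used in the Makefile above.
log='artifact successfully pushed to ghcr.io/example/cozystack-packages:latest@sha256:0576491291b33936cdf770a5c5b5692add97339c1505fc67a92df9d69dfbfdf6'

# Split on "@" and take the second field, exactly as the recipe's awk call does.
# The short -F form also works with busybox awk, which does not accept the
# long --field-separator spelling.
DIGEST=$(printf '%s\n' "$log" | awk -F@ '/artifact successfully pushed/ {print $2}')
echo "$DIGEST"
```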


@@ -1,41 +0,0 @@
FROM golang:1.24-alpine AS k8s-await-election-builder
ARG K8S_AWAIT_ELECTION_GITREPO=https://github.com/LINBIT/k8s-await-election
ARG K8S_AWAIT_ELECTION_VERSION=0.4.1
# TARGETARCH is a docker special variable: https://docs.docker.com/engine/reference/builder/#automatic-platform-args-in-the-global-scope
ARG TARGETARCH
RUN apk add --no-cache git make
RUN git clone ${K8S_AWAIT_ELECTION_GITREPO} /usr/local/go/k8s-await-election/ \
&& cd /usr/local/go/k8s-await-election \
&& git reset --hard v${K8S_AWAIT_ELECTION_VERSION} \
&& make \
&& mv ./out/k8s-await-election-${TARGETARCH} /k8s-await-election
FROM golang:1.25-alpine AS builder
ARG TARGETOS
ARG TARGETARCH
RUN apk add --no-cache make git
RUN apk add helm --repository=https://dl-cdn.alpinelinux.org/alpine/edge/community
COPY . /src/
WORKDIR /src
RUN go mod download
FROM alpine:3.22
RUN wget -O- https://github.com/cozystack/cozyhr/raw/refs/heads/main/hack/install.sh | sh -s -- -v 1.5.0
RUN apk add --no-cache make kubectl helm coreutils git jq openssl
COPY --from=builder /src/scripts /cozystack/scripts
COPY --from=builder /src/packages/core /cozystack/packages/core
COPY --from=builder /src/packages/system /cozystack/packages/system
COPY --from=k8s-await-election-builder /k8s-await-election /usr/bin/k8s-await-election
WORKDIR /cozystack
ENTRYPOINT ["/usr/bin/k8s-await-election", "/cozystack/scripts/installer.sh" ]


@@ -1,4 +1,3 @@
{{- if .Values.cozystackOperator.enabled }}
---
apiVersion: v1
kind: Namespace
@@ -83,6 +82,8 @@ apiVersion: cozystack.io/v1alpha1
kind: PackageSource
metadata:
name: cozystack.cozystack-platform
annotations:
operator.cozystack.io/skip-cozystack-values: "true"
spec:
sourceRef:
kind: OCIRepository
@@ -119,4 +120,3 @@ spec:
valuesFiles:
- values.yaml
- values-isp-hosted.yaml
{{- end }}


@@ -1,81 +0,0 @@
{{- if not .Values.cozystackOperator.enabled }}
---
apiVersion: v1
kind: Namespace
metadata:
name: cozy-system
labels:
cozystack.io/system: "true"
pod-security.kubernetes.io/enforce: privileged
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: cozystack
namespace: cozy-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cozystack
subjects:
- kind: ServiceAccount
name: cozystack
namespace: cozy-system
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cozystack
namespace: cozy-system
spec:
replicas: 1
selector:
matchLabels:
app: cozystack
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
template:
metadata:
labels:
app: cozystack
spec:
hostNetwork: true
serviceAccountName: cozystack
containers:
- name: cozystack
image: "{{ .Values.cozystack.image }}"
env:
- name: KUBERNETES_SERVICE_HOST
value: localhost
- name: INSTALL_FLUX
value: "true"
- name: KUBERNETES_SERVICE_PORT
value: "7445"
- name: K8S_AWAIT_ELECTION_ENABLED
value: "1"
- name: K8S_AWAIT_ELECTION_NAME
value: cozystack
- name: K8S_AWAIT_ELECTION_LOCK_NAME
value: cozystack
- name: K8S_AWAIT_ELECTION_LOCK_NAMESPACE
value: cozy-system
- name: K8S_AWAIT_ELECTION_IDENTITY
valueFrom:
fieldRef:
fieldPath: metadata.name
tolerations:
- key: "node.kubernetes.io/not-ready"
operator: "Exists"
effect: "NoSchedule"
- key: "node.cilium.io/agent-not-ready"
operator: "Exists"
effect: "NoSchedule"
{{- end }}


@@ -1,8 +1,5 @@
cozystack:
image: ghcr.io/cozystack/cozystack/installer:v0.38.2@sha256:9ff92b655de6f9bea3cba4cd42dcffabd9aace6966dcfb1cc02dda2420ea4a15
cozystackOperator:
enabled: false
image: ghcr.io/cozystack/cozystack/cozystack-operator:latest@sha256:f7f6e0fd9e896b7bfa642d0bfa4378bc14e646bc5c2e86e2e09a82770ef33181
platformSourceUrl: 'oci://ghcr.io/cozystack/cozystack/platform-packages'
platformSourceUrl: 'oci://ghcr.io/cozystack/cozystack/cozystack-packages'
platformSourceRef: 'digest=sha256:0576491291b33936cdf770a5c5b5692add97339c1505fc67a92df9d69dfbfdf6'
cozystackVersion: latest


@@ -4,31 +4,26 @@ NAMESPACE=cozy-system
include ../../../scripts/common-envs.mk
show:
cozyhr show -n $(NAMESPACE) $(NAME) --plain
cozyhr show --namespace $(NAMESPACE) $(NAME) --plain
apply:
cozyhr show -n $(NAMESPACE) $(NAME) --plain | kubectl apply -f-
kubectl delete helmreleases.helm.toolkit.fluxcd.io -l cozystack.io/marked-for-deletion=true -A
cozyhr show --namespace $(NAMESPACE) $(NAME) --plain | kubectl apply --filename -
kubectl delete helmreleases.helm.toolkit.fluxcd.io --selector cozystack.io/marked-for-deletion=true --all-namespaces
reconcile: apply
namespaces-show:
cozyhr show -n $(NAMESPACE) $(NAME) --plain -s templates/namespaces.yaml
namespaces-apply:
cozyhr show -n $(NAMESPACE) $(NAME) --plain -s templates/namespaces.yaml | kubectl apply -f-
diff:
cozyhr show -n $(NAMESPACE) $(NAME) --plain | kubectl diff -f-
cozyhr show --namespace $(NAMESPACE) $(NAME) --plain | kubectl diff --filename -
image: image-assets
image-assets:
docker buildx build -f images/cozystack-assets/Dockerfile ../../.. \
--tag $(REGISTRY)/cozystack-assets:$(call settag,$(TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/cozystack-assets:latest \
image: image-migrations
image-migrations:
docker buildx build --file images/migrations/Dockerfile . \
--tag $(REGISTRY)/platform-migrations:$(call settag,$(TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/platform-migrations:latest \
--cache-to type=inline \
--metadata-file images/cozystack-assets.json \
--metadata-file images/migrations.json \
$(BUILDX_ARGS)
IMAGE="$(REGISTRY)/cozystack-assets:$(call settag,$(TAG))@$$(yq e '."containerimage.digest"' images/cozystack-assets.json -o json -r)" \
yq -i '.assets.image = strenv(IMAGE)' values.yaml
rm -f images/cozystack-assets.json
IMAGE="$(REGISTRY)/platform-migrations:$(call settag,$(TAG))@$$(yq --exit-status '.["containerimage.digest"]' images/migrations.json --output-format json --raw-output)" \
yq --inplace '.migrations.image = strenv(IMAGE)' values.yaml
rm -f images/migrations.json
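
The same pin-by-digest pattern appears here with the buildx metadata file. Since `--metadata-file` writes plain JSON, the extraction can be sketched without yq; a dependency-free approximation using sed (file contents and registry name are illustrative):

```shell
# Fake buildx metadata file, shaped like the --metadata-file output above.
TMP=$(mktemp)
cat > "$TMP" <<'EOF'
{"containerimage.digest": "sha256:9ff92b655de6f9bea3cba4cd42dcffabd9aace6966dcfb1cc02dda2420ea4a15"}
EOF

# Pull out the digest (the Makefile uses yq for this) and compose the
# pinned image reference that gets written into values.yaml.
DIGEST=$(sed -n 's/.*"containerimage.digest": *"\([^"]*\)".*/\1/p' "$TMP")
IMAGE="registry.example/platform-migrations:latest@${DIGEST}"
rm -f "$TMP"
echo "$IMAGE"
```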


@@ -0,0 +1,638 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.20.0
name: cozystackresourcedefinitions.cozystack.io
spec:
group: cozystack.io
names:
kind: CozystackResourceDefinition
listKind: CozystackResourceDefinitionList
plural: cozystackresourcedefinitions
singular: cozystackresourcedefinition
scope: Cluster
versions:
- name: v1alpha1
schema:
openAPIV3Schema:
description: CozystackResourceDefinition is the Schema for the cozystackresourcedefinitions
API
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
properties:
application:
description: Application configuration
properties:
kind:
description: Kind of the application, used for UI and API
type: string
openAPISchema:
description: OpenAPI schema for the application, used for API
validation
type: string
plural:
description: Plural name of the application, used for UI and API
type: string
singular:
description: Singular name of the application, used for UI and
API
type: string
required:
- kind
- openAPISchema
- plural
- singular
type: object
dashboard:
description: Dashboard configuration for this resource
properties:
category:
description: Category used to group resources in the UI (e.g.,
"Storage", "Networking")
type: string
description:
description: Short description shown in catalogs or headers (e.g.,
"S3 compatible storage")
type: string
icon:
description: Icon encoded as a string (e.g., inline SVG, base64,
or data URI)
type: string
keysOrder:
description: Order of keys in the YAML view
items:
items:
type: string
type: array
type: array
module:
description: Whether this resource is a module (tenant module)
type: boolean
name:
description: Hard-coded name used in the UI (e.g., "bucket")
type: string
plural:
description: Plural human-readable name (e.g., "Buckets")
type: string
singular:
description: Human-readable name shown in the UI (e.g., "Bucket")
type: string
singularResource:
description: Whether this resource is singular (not a collection)
in the UI
type: boolean
tabs:
description: Which tabs to show for this resource
items:
description: DashboardTab enumerates allowed UI tabs.
enum:
- workloads
- ingresses
- services
- secrets
- yaml
type: string
type: array
tags:
description: Free-form tags for search and filtering
items:
type: string
type: array
weight:
description: Order weight for sorting resources in the UI (lower
first)
type: integer
required:
- category
- plural
- singular
type: object
ingresses:
description: Ingress selectors
properties:
exclude:
description: |-
Exclude contains an array of resource selectors that target resources.
If a resource matches the selector in any of the elements in the array, it is
hidden from the user, regardless of the matches in the include array.
items:
description: "CozystackResourceDefinitionResourceSelector extends
metav1.LabelSelector with resourceNames support.\nA resource
matches this selector only if it satisfies ALL criteria:\n-
Label selector conditions (matchExpressions and matchLabels)\n-
AND has a name that matches one of the names in resourceNames
(if specified)\n\nThe resourceNames field supports Go templates
with the following variables available:\n- {{ .name }}: The
name of the managing application (from apps.cozystack.io/application.name)\n-
{{ .kind }}: The lowercased kind of the managing application
(from apps.cozystack.io/application.kind)\n- {{ .namespace
}}: The namespace of the resource being processed\n\nExample
YAML:\n\n\tsecrets:\n\t include:\n\t - matchExpressions:\n\t
\ - key: badlabel\n\t operator: DoesNotExist\n\t matchLabels:\n\t
\ goodlabel: goodvalue\n\t resourceNames:\n\t -
\"{{ .name }}-secret\"\n\t - \"{{ .kind }}-{{ .name }}-tls\"\n\t
\ - \"specificname\""
properties:
matchExpressions:
description: matchExpressions is a list of label selector
requirements. The requirements are ANDed.
items:
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description: key is the label key that the selector
applies to.
type: string
operator:
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
resourceNames:
description: |-
ResourceNames is a list of resource names to match
If specified, the resource must have one of these exact names to match the selector
items:
type: string
type: array
type: object
x-kubernetes-map-type: atomic
type: array
include:
description: |-
Include contains an array of resource selectors that target resources.
If a resource matches the selector in any of the elements in the array, and
matches none of the selectors in the exclude array that resource is marked
as a tenant resource and is visible to users.
items:
description: "CozystackResourceDefinitionResourceSelector extends
metav1.LabelSelector with resourceNames support.\nA resource
matches this selector only if it satisfies ALL criteria:\n-
Label selector conditions (matchExpressions and matchLabels)\n-
AND has a name that matches one of the names in resourceNames
(if specified)\n\nThe resourceNames field supports Go templates
with the following variables available:\n- {{ .name }}: The
name of the managing application (from apps.cozystack.io/application.name)\n-
{{ .kind }}: The lowercased kind of the managing application
(from apps.cozystack.io/application.kind)\n- {{ .namespace
}}: The namespace of the resource being processed\n\nExample
YAML:\n\n\tsecrets:\n\t include:\n\t - matchExpressions:\n\t
\ - key: badlabel\n\t operator: DoesNotExist\n\t matchLabels:\n\t
\ goodlabel: goodvalue\n\t resourceNames:\n\t -
\"{{ .name }}-secret\"\n\t - \"{{ .kind }}-{{ .name }}-tls\"\n\t
\ - \"specificname\""
properties:
matchExpressions:
description: matchExpressions is a list of label selector
requirements. The requirements are ANDed.
items:
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description: key is the label key that the selector
applies to.
type: string
operator:
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
resourceNames:
description: |-
ResourceNames is a list of resource names to match
If specified, the resource must have one of these exact names to match the selector
items:
type: string
type: array
type: object
x-kubernetes-map-type: atomic
type: array
type: object
release:
description: Release configuration
properties:
chartRef:
description: Reference to the chart source
properties:
apiVersion:
description: APIVersion of the referent.
type: string
kind:
description: Kind of the referent.
enum:
- OCIRepository
- HelmChart
- ExternalArtifact
type: string
name:
description: Name of the referent.
maxLength: 253
minLength: 1
type: string
namespace:
description: |-
Namespace of the referent, defaults to the namespace of the Kubernetes
resource object that contains the reference.
maxLength: 63
minLength: 1
type: string
required:
- kind
- name
type: object
labels:
additionalProperties:
type: string
description: Labels for the release
type: object
prefix:
description: Prefix for the release name
type: string
required:
- chartRef
- prefix
type: object
secrets:
description: Secret selectors
properties:
exclude:
description: |-
Exclude contains an array of resource selectors that target resources.
If a resource matches the selector in any of the elements in the array, it is
hidden from the user, regardless of the matches in the include array.
items:
description: "CozystackResourceDefinitionResourceSelector extends
metav1.LabelSelector with resourceNames support.\nA resource
matches this selector only if it satisfies ALL criteria:\n-
Label selector conditions (matchExpressions and matchLabels)\n-
AND has a name that matches one of the names in resourceNames
(if specified)\n\nThe resourceNames field supports Go templates
with the following variables available:\n- {{ .name }}: The
name of the managing application (from apps.cozystack.io/application.name)\n-
{{ .kind }}: The lowercased kind of the managing application
(from apps.cozystack.io/application.kind)\n- {{ .namespace
}}: The namespace of the resource being processed\n\nExample
YAML:\n\n\tsecrets:\n\t include:\n\t - matchExpressions:\n\t
\ - key: badlabel\n\t operator: DoesNotExist\n\t matchLabels:\n\t
\ goodlabel: goodvalue\n\t resourceNames:\n\t -
\"{{ .name }}-secret\"\n\t - \"{{ .kind }}-{{ .name }}-tls\"\n\t
\ - \"specificname\""
properties:
matchExpressions:
description: matchExpressions is a list of label selector
requirements. The requirements are ANDed.
items:
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description: key is the label key that the selector
applies to.
type: string
operator:
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
resourceNames:
description: |-
ResourceNames is a list of resource names to match
If specified, the resource must have one of these exact names to match the selector
items:
type: string
type: array
type: object
x-kubernetes-map-type: atomic
type: array
include:
description: |-
Include contains an array of resource selectors that target resources.
If a resource matches the selector in any of the elements in the array, and
matches none of the selectors in the exclude array that resource is marked
as a tenant resource and is visible to users.
items:
description: "CozystackResourceDefinitionResourceSelector extends
metav1.LabelSelector with resourceNames support.\nA resource
matches this selector only if it satisfies ALL criteria:\n-
Label selector conditions (matchExpressions and matchLabels)\n-
AND has a name that matches one of the names in resourceNames
(if specified)\n\nThe resourceNames field supports Go templates
with the following variables available:\n- {{ .name }}: The
name of the managing application (from apps.cozystack.io/application.name)\n-
{{ .kind }}: The lowercased kind of the managing application
(from apps.cozystack.io/application.kind)\n- {{ .namespace
}}: The namespace of the resource being processed\n\nExample
YAML:\n\n\tsecrets:\n\t include:\n\t - matchExpressions:\n\t
\ - key: badlabel\n\t operator: DoesNotExist\n\t matchLabels:\n\t
\ goodlabel: goodvalue\n\t resourceNames:\n\t -
\"{{ .name }}-secret\"\n\t - \"{{ .kind }}-{{ .name }}-tls\"\n\t
\ - \"specificname\""
properties:
matchExpressions:
description: matchExpressions is a list of label selector
requirements. The requirements are ANDed.
items:
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description: key is the label key that the selector
applies to.
type: string
operator:
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
resourceNames:
description: |-
ResourceNames is a list of resource names to match
If specified, the resource must have one of these exact names to match the selector
items:
type: string
type: array
type: object
x-kubernetes-map-type: atomic
type: array
type: object
services:
description: Service selectors
properties:
exclude:
description: |-
Exclude contains an array of resource selectors that target resources.
If a resource matches the selector in any of the elements in the array, it is
hidden from the user, regardless of the matches in the include array.
items:
description: "CozystackResourceDefinitionResourceSelector extends
metav1.LabelSelector with resourceNames support.\nA resource
matches this selector only if it satisfies ALL criteria:\n-
Label selector conditions (matchExpressions and matchLabels)\n-
AND has a name that matches one of the names in resourceNames
(if specified)\n\nThe resourceNames field supports Go templates
with the following variables available:\n- {{ .name }}: The
name of the managing application (from apps.cozystack.io/application.name)\n-
{{ .kind }}: The lowercased kind of the managing application
(from apps.cozystack.io/application.kind)\n- {{ .namespace
}}: The namespace of the resource being processed\n\nExample
YAML:\n\n\tsecrets:\n\t include:\n\t - matchExpressions:\n\t
\ - key: badlabel\n\t operator: DoesNotExist\n\t matchLabels:\n\t
\ goodlabel: goodvalue\n\t resourceNames:\n\t -
\"{{ .name }}-secret\"\n\t - \"{{ .kind }}-{{ .name }}-tls\"\n\t
\ - \"specificname\""
properties:
matchExpressions:
description: matchExpressions is a list of label selector
requirements. The requirements are ANDed.
items:
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description: key is the label key that the selector
applies to.
type: string
operator:
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
resourceNames:
description: |-
ResourceNames is a list of resource names to match
If specified, the resource must have one of these exact names to match the selector
items:
type: string
type: array
type: object
x-kubernetes-map-type: atomic
type: array
include:
description: |-
Include contains an array of resource selectors that target resources.
If a resource matches the selector in any of the elements in the array, and
matches none of the selectors in the exclude array that resource is marked
as a tenant resource and is visible to users.
items:
description: "CozystackResourceDefinitionResourceSelector extends
metav1.LabelSelector with resourceNames support.\nA resource
matches this selector only if it satisfies ALL criteria:\n-
Label selector conditions (matchExpressions and matchLabels)\n-
AND has a name that matches one of the names in resourceNames
(if specified)\n\nThe resourceNames field supports Go templates
with the following variables available:\n- {{ .name }}: The
name of the managing application (from apps.cozystack.io/application.name)\n-
{{ .kind }}: The lowercased kind of the managing application
(from apps.cozystack.io/application.kind)\n- {{ .namespace
}}: The namespace of the resource being processed\n\nExample
YAML:\n\n\tsecrets:\n\t include:\n\t - matchExpressions:\n\t
\ - key: badlabel\n\t operator: DoesNotExist\n\t matchLabels:\n\t
\ goodlabel: goodvalue\n\t resourceNames:\n\t -
\"{{ .name }}-secret\"\n\t - \"{{ .kind }}-{{ .name }}-tls\"\n\t
\ - \"specificname\""
properties:
matchExpressions:
description: matchExpressions is a list of label selector
requirements. The requirements are ANDed.
items:
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description: key is the label key that the selector
applies to.
type: string
operator:
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
resourceNames:
description: |-
ResourceNames is a list of resource names to match
If specified, the resource must have one of these exact names to match the selector
items:
type: string
type: array
type: object
x-kubernetes-map-type: atomic
type: array
type: object
required:
- application
- release
type: object
type: object
served: true
storage: true


@@ -1,25 +0,0 @@
FROM golang:1.25-alpine AS builder
ARG TARGETOS
ARG TARGETARCH
RUN apk add --no-cache make git
RUN apk add helm --repository=https://dl-cdn.alpinelinux.org/alpine/edge/community
COPY . /src/
WORKDIR /src
RUN go mod download
RUN go build -o /cozystack-assets-server -ldflags '-extldflags "-static" -w -s' ./cmd/cozystack-assets-server
RUN make repos
FROM alpine:3.22
COPY --from=builder /src/_out/repos /cozystack/assets/repos
COPY --from=builder /cozystack-assets-server /usr/bin/cozystack-assets-server
COPY --from=builder /src/dashboards /cozystack/assets/dashboards
WORKDIR /cozystack
ENTRYPOINT ["/usr/bin/cozystack-assets-server"]


@@ -0,0 +1,12 @@
FROM alpine:3.22
RUN wget -O- https://github.com/cozystack/cozyhr/raw/refs/heads/main/hack/install.sh | sh -s -- -v 1.5.0
RUN apk add --no-cache kubectl helm coreutils git jq ca-certificates bash curl
COPY migrations /migrations
COPY run-migrations.sh /usr/bin/run-migrations.sh
WORKDIR /migrations
ENTRYPOINT ["/usr/bin/run-migrations.sh"]


@@ -0,0 +1,41 @@
#!/bin/sh
set -euo pipefail
NAMESPACE="${NAMESPACE:-cozy-system}"
CURRENT_VERSION="${CURRENT_VERSION:-0}"
TARGET_VERSION="${TARGET_VERSION:-0}"
echo "Starting migrations from version $CURRENT_VERSION to $TARGET_VERSION"
# Check if ConfigMap exists
if ! kubectl get configmap --namespace "$NAMESPACE" cozystack-version >/dev/null 2>&1; then
echo "ConfigMap cozystack-version does not exist, creating it with version $TARGET_VERSION"
kubectl create configmap --namespace "$NAMESPACE" cozystack-version \
--from-literal=version="$TARGET_VERSION" \
--dry-run=client --output yaml | kubectl apply --filename -
echo "ConfigMap created with version $TARGET_VERSION"
exit 0
fi
# If current version is already at target, nothing to do
if [ "$CURRENT_VERSION" -ge "$TARGET_VERSION" ]; then
echo "Current version $CURRENT_VERSION is already at or above target version $TARGET_VERSION"
exit 0
fi
# Run migrations sequentially from current version to target version
for i in $(seq $((CURRENT_VERSION + 1)) $TARGET_VERSION); do
if [ -f "/migrations/$i" ]; then
echo "Running migration $i"
chmod +x /migrations/$i
/migrations/$i || {
echo "Migration $i failed"
exit 1
}
echo "Migration $i completed successfully"
else
echo "Migration $i not found, skipping"
fi
done
echo "All migrations completed successfully"
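
The sequential loop in run-migrations.sh can be sketched in isolation. This is a minimal demo against a throwaway directory with two hypothetical numbered migration scripts, not the real /migrations contents:

```shell
# Sketch of the sequential loop in run-migrations.sh, using a throwaway
# directory with two hypothetical numbered migration scripts.
demo_dir=$(mktemp -d)
printf '#!/bin/sh\necho "migration 21 applied"\n' > "$demo_dir/21"
printf '#!/bin/sh\necho "migration 22 applied"\n' > "$demo_dir/22"

CURRENT_VERSION=20
TARGET_VERSION=22
# Run every script numbered strictly after CURRENT_VERSION,
# up to and including TARGET_VERSION; stop on the first failure.
for i in $(seq $((CURRENT_VERSION + 1)) "$TARGET_VERSION"); do
  if [ -f "$demo_dir/$i" ]; then
    chmod +x "$demo_dir/$i"
    "$demo_dir/$i" || { echo "migration $i failed"; exit 1; }
  else
    echo "migration $i not found, skipping"
  fi
done
```

Because the loop starts at CURRENT_VERSION + 1, a cluster already at the target version runs nothing, and gaps in the numbering are skipped rather than treated as errors.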


@@ -1,6 +1,22 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $cozystackBranding := lookup "v1" "ConfigMap" "cozy-system" "cozystack-branding" }}
{{- $cozystackScheduling := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }}
{{- $kubeRootCa := lookup "v1" "ConfigMap" "kube-system" "kube-root-ca.crt" }}
{{- $bundleName := index $cozyConfig.data "bundle-name" }}
{{- $bundle := tpl (.Files.Get (printf "bundles/%s.yaml" $bundleName)) . | fromYaml }}
{{/* Default values for _cluster config to ensure all required keys exist */}}
{{- $clusterDefaults := dict
"root-host" ""
"bundle-name" ""
"clusterissuer" "http01"
"oidc-enabled" "false"
"expose-services" ""
"expose-ingress" "tenant-root"
"expose-external-ips" ""
"cluster-domain" "cozy.local"
"api-server-endpoint" ""
}}
{{- $clusterConfig := mergeOverwrite $clusterDefaults ($cozyConfig.data | default dict) }}
{{- $host := "example.org" }}
{{- if $cozyConfig.data }}
@@ -22,6 +38,8 @@ kind: Namespace
metadata:
annotations:
helm.sh/resource-policy: keep
labels:
tenant.cozystack.io/tenant-root: ""
namespace.cozystack.io/etcd: tenant-root
namespace.cozystack.io/monitoring: tenant-root
namespace.cozystack.io/ingress: tenant-root
@@ -29,6 +47,36 @@ metadata:
namespace.cozystack.io/host: "{{ $host }}"
name: tenant-root
---
apiVersion: v1
kind: Secret
metadata:
name: cozystack-values
namespace: tenant-root
labels:
reconcile.fluxcd.io/watch: Enabled
type: Opaque
stringData:
values.yaml: |
_cluster:
{{- $clusterConfig | toYaml | nindent 6 }}
{{- with $cozystackBranding.data }}
branding:
{{- . | toYaml | nindent 8 }}
{{- end }}
{{- with $cozystackScheduling.data }}
scheduling:
{{- . | toYaml | nindent 8 }}
{{- end }}
{{- with $kubeRootCa.data }}
kube-root-ca: {{ index . "ca.crt" | b64enc | quote }}
{{- end }}
_namespace:
etcd: tenant-root
monitoring: tenant-root
ingress: tenant-root
seaweedfs: tenant-root
host: {{ $host | quote }}
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
@@ -56,6 +104,9 @@ spec:
kind: HelmRepository
name: cozystack-apps
namespace: cozy-public
valuesFrom:
- kind: Secret
name: cozystack-values
values:
host: "{{ $host }}"
dependsOn:


@@ -1,73 +0,0 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: cozystack-assets
namespace: cozy-system
labels:
app: cozystack-assets
spec:
serviceName: cozystack-assets
replicas: 1
selector:
matchLabels:
app: cozystack-assets
template:
metadata:
labels:
app: cozystack-assets
spec:
hostNetwork: true
containers:
- name: assets-server
image: "{{ .Values.assets.image }}"
args:
- "-dir=/cozystack/assets"
- "-address=:8123"
ports:
- name: http
containerPort: 8123
hostPort: 8123
tolerations:
- operator: Exists
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: cozystack-assets-reader
namespace: cozy-system
rules:
- apiGroups: [""]
resources:
- pods/proxy
resourceNames:
- cozystack-assets-0
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: cozystack-assets-reader
namespace: cozy-system
subjects:
- kind: User
name: cozystack-assets-reader
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: cozystack-assets-reader
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
name: cozystack-assets
namespace: cozy-system
spec:
ports:
- name: http
port: 80
targetPort: 8123
selector:
app: cozystack-assets
type: ClusterIP


@@ -83,6 +83,9 @@ spec:
values:
{{- toYaml . | nindent 4}}
{{- end }}
valuesFrom:
- kind: Secret
name: cozystack-values
{{- with $x.dependsOn }}
dependsOn:


@@ -1,40 +0,0 @@
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: cozystack-system
namespace: cozy-system
labels:
cozystack.io/repository: system
spec:
interval: 5m0s
url: https://{{ include "cozystack.kubernetesAPIEndpoint" . }}/api/v1/namespaces/cozy-system/pods/cozystack-assets-0/proxy/repos/system
certSecretRef:
name: cozystack-assets-tls
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: cozystack-apps
namespace: cozy-public
labels:
cozystack.io/ui: "true"
cozystack.io/repository: apps
spec:
interval: 5m0s
url: https://{{ include "cozystack.kubernetesAPIEndpoint" . }}/api/v1/namespaces/cozy-system/pods/cozystack-assets-0/proxy/repos/apps
certSecretRef:
name: cozystack-assets-tls
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: cozystack-extra
namespace: cozy-public
labels:
cozystack.io/repository: extra
spec:
interval: 5m0s
url: https://{{ include "cozystack.kubernetesAPIEndpoint" . }}/api/v1/namespaces/cozy-system/pods/cozystack-assets-0/proxy/repos/extra
certSecretRef:
name: cozystack-assets-tls


@@ -0,0 +1,69 @@
{{- $shouldRunMigrationHook := false }}
{{- $currentVersion := 0 }}
{{- $targetVersion := .Values.migrations.targetVersion | int }}
{{- $configMap := lookup "v1" "ConfigMap" .Release.Namespace "cozystack-version" }}
{{- if $configMap }}
{{- $currentVersion = dig "data" "version" "0" $configMap | int }}
{{- if lt $currentVersion $targetVersion }}
{{- $shouldRunMigrationHook = true }}
{{- end }}
{{- else }}
{{- $shouldRunMigrationHook = true }}
{{- end }}
{{- if $shouldRunMigrationHook }}
---
apiVersion: batch/v1
kind: Job
metadata:
name: cozystack-migration-hook
annotations:
helm.sh/hook: pre-upgrade,pre-install
helm.sh/hook-weight: "1"
helm.sh/hook-delete-policy: before-hook-creation
spec:
backoffLimit: 3
template:
metadata:
labels:
policy.cozystack.io/allow-to-apiserver: "true"
spec:
serviceAccountName: cozystack-migration-hook
containers:
- name: migration
image: {{ .Values.migrations.image }}
env:
- name: NAMESPACE
value: {{ .Release.Namespace | quote }}
- name: CURRENT_VERSION
value: {{ $currentVersion | quote }}
- name: TARGET_VERSION
value: {{ $targetVersion | quote }}
restartPolicy: Never
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
helm.sh/hook: pre-upgrade,pre-install
helm.sh/hook-weight: "1"
helm.sh/hook-delete-policy: hook-succeeded,before-hook-creation
name: cozystack-migration-hook
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: cozystack-migration-hook
namespace: {{ .Release.Namespace | quote }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: cozystack-migration-hook
annotations:
helm.sh/hook: pre-upgrade,pre-install
helm.sh/hook-weight: "1"
helm.sh/hook-delete-policy: hook-succeeded,before-hook-creation
{{- end }}
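
The gate at the top of this template boils down to: run the hook when no cozystack-version ConfigMap exists yet, or when the recorded version is below the target. A plain-shell sketch of that same decision (version numbers here are hypothetical; the template reads them from the ConfigMap and .Values.migrations.targetVersion):

```shell
# Same decision the migration-hook template makes, expressed in shell.
# Arguments: whether the ConfigMap exists ("yes"/"no"), current version,
# target version. Prints "true" if the hook should run.
should_run_hook() {
  configmap_exists=$1
  current=$2
  target=$3
  if [ "$configmap_exists" = "no" ]; then
    echo true    # first install: the hook creates the ConfigMap
  elif [ "$current" -lt "$target" ]; then
    echo true    # pending migrations
  else
    echo false   # already at or above target
  fi
}
should_run_hook yes 21 22   # prints true
should_run_hook yes 22 22   # prints false
should_run_hook no 0 22     # prints true
```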


@@ -1,5 +1,20 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $cozystackBranding := lookup "v1" "ConfigMap" "cozy-system" "cozystack-branding" }}
{{- $cozystackScheduling := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }}
{{- $bundleName := index $cozyConfig.data "bundle-name" }}
{{/* Default values for _cluster config to ensure all required keys exist */}}
{{- $clusterDefaults := dict
"root-host" ""
"bundle-name" ""
"clusterissuer" "http01"
"oidc-enabled" "false"
"expose-services" ""
"expose-ingress" "tenant-root"
"expose-external-ips" ""
"cluster-domain" "cozy.local"
"api-server-endpoint" ""
}}
{{- $clusterConfig := mergeOverwrite $clusterDefaults ($cozyConfig.data | default dict) }}
{{- $bundle := tpl (.Files.Get (printf "bundles/%s.yaml" $bundleName)) . | fromYaml }}
{{- $disabledComponents := splitList "," ((index $cozyConfig.data "bundle-disable") | default "") }}
{{- $enabledComponents := splitList "," ((index $cozyConfig.data "bundle-enable") | default "") }}
@@ -37,4 +52,25 @@ metadata:
pod-security.kubernetes.io/enforce: privileged
{{- end }}
name: {{ $namespace }}
---
apiVersion: v1
kind: Secret
metadata:
name: cozystack-values
namespace: {{ $namespace }}
labels:
reconcile.fluxcd.io/watch: Enabled
type: Opaque
stringData:
values.yaml: |
_cluster:
{{- $clusterConfig | toYaml | nindent 6 }}
{{- with $cozystackBranding.data }}
branding:
{{- . | toYaml | nindent 8 }}
{{- end }}
{{- with $cozystackScheduling.data }}
scheduling:
{{- . | toYaml | nindent 8 }}
{{- end }}
{{- end }}


@@ -1,6 +0,0 @@
{{/*
{{- range $path, $_ := .Files.Glob "sources/*.yaml" }}
---
{{ $.Files.Get $path }}
{{- end }}
*/}}


@@ -1,2 +1,3 @@
assets:
image: ghcr.io/cozystack/cozystack/cozystack-assets:latest@sha256:19b166819d0205293c85d8351a3e038dc4c146b876a8e2ae21dce1d54f0b9e33
migrations:
image: ghcr.io/cozystack/cozystack/platform-migrations:latest
targetVersion: 22

packages/core/testing/Chart.yaml Executable file → Normal file

packages/core/testing/Makefile Executable file → Normal file

packages/core/testing/images/e2e-sandbox/Dockerfile Executable file → Normal file

@@ -9,7 +9,7 @@ ARG TARGETOS
ARG TARGETARCH
RUN apt update -q
RUN apt install -yq --no-install-recommends psmisc genisoimage ca-certificates qemu-kvm qemu-utils iproute2 iptables wget xz-utils netcat curl jq make git
RUN apt install -yq --no-install-recommends psmisc genisoimage ca-certificates qemu-kvm qemu-utils iproute2 iptables wget xz-utils netcat curl jq make git bash-completion
RUN curl -sSL "https://github.com/siderolabs/talos/releases/download/v${TALOSCTL_VERSION}/talosctl-${TARGETOS}-${TARGETARCH}" -o /usr/local/bin/talosctl \
&& chmod +x /usr/local/bin/talosctl
RUN curl -sSL "https://dl.k8s.io/release/v${KUBECTL_VERSION}/bin/${TARGETOS}/${TARGETARCH}/kubectl" -o /usr/local/bin/kubectl \
@@ -21,5 +21,13 @@ RUN curl -sSL "https://fluxcd.io/install.sh" | bash
RUN curl -sSL "https://github.com/cozystack/cozyhr/raw/refs/heads/main/hack/install.sh" | sh -s -- -v "${COZYHR_VERSION}"
RUN curl https://dl.min.io/client/mc/release/${TARGETOS}-${TARGETARCH}/mc --create-dirs -o /usr/local/bin/mc \
&& chmod +x /usr/local/bin/mc
RUN <<'EOF'
cat <<'EOT' >> /etc/bash.bashrc
. /etc/bash_completion
. <(kubectl completion bash)
alias k=kubectl
complete -F __start_kubectl k
EOT
EOF
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]

packages/core/testing/values.yaml Executable file → Normal file


@@ -1,9 +1,6 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $issuerType := (index $cozyConfig.data "clusterissuer") | default "http01" }}
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}
{{- $issuerType := (index .Values._cluster "clusterissuer") | default "http01" }}
{{- $ingress := .Values._namespace.ingress }}
{{- $host := .Values._namespace.host }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:


@@ -1,9 +1,4 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $issuerType := (index $cozyConfig.data "clusterissuer") | default "http01" }}
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}
{{- $host := .Values._namespace.host }}
{{ range $m := .Values.machines }}
---


@@ -49,10 +49,9 @@ spec:
{{- with .Values.resources }}
resources: {{- include "cozy-lib.resources.sanitize" (list . $) | nindent 10 }}
{{- end }}
{{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }}
{{- $rawConstraints := "" }}
{{- if $configMap }}
{{- $rawConstraints = get $configMap.data "globalAppTopologySpreadConstraints" }}
{{- if .Values._cluster.scheduling }}
{{- $rawConstraints = get .Values._cluster.scheduling "globalAppTopologySpreadConstraints" }}
{{- end }}
{{- if $rawConstraints }}
{{- $rawConstraints | fromYaml | toYaml | nindent 6 }}


@@ -1,5 +1,4 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $oidcEnabled := index $cozyConfig.data "oidc-enabled" }}
{{- $oidcEnabled := index .Values._cluster "oidc-enabled" }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:


@@ -1,23 +1,14 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $host := index $cozyConfig.data "root-host" }}
{{- $host := .Values._namespace.host | default (index .Values._cluster "root-host") }}
{{- $k8sClientSecret := lookup "v1" "Secret" "cozy-keycloak" "k8s-client" }}
{{- if $k8sClientSecret }}
{{- $apiServerEndpoint := index $cozyConfig.data "api-server-endpoint" }}
{{- $managementKubeconfigEndpoint := default "" (get $cozyConfig.data "management-kubeconfig-endpoint") }}
{{- $apiServerEndpoint := index .Values._cluster "api-server-endpoint" }}
{{- $managementKubeconfigEndpoint := default "" (index .Values._cluster "management-kubeconfig-endpoint") }}
{{- if and $managementKubeconfigEndpoint (ne $managementKubeconfigEndpoint "") }}
{{- $apiServerEndpoint = $managementKubeconfigEndpoint }}
{{- end }}
{{- $k8sClient := index $k8sClientSecret.data "client-secret-key" | b64dec }}
{{- $rootSaConfigMap := lookup "v1" "ConfigMap" "kube-system" "kube-root-ca.crt" }}
{{- $k8sCa := index $rootSaConfigMap.data "ca.crt" | b64enc }}
{{- if .Capabilities.APIVersions.Has "helm.toolkit.fluxcd.io/v2" }}
{{- $tenantRoot := lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" "tenant-root" "tenant-root" }}
{{- if and $tenantRoot $tenantRoot.spec $tenantRoot.spec.values $tenantRoot.spec.values.host }}
{{- $host = $tenantRoot.spec.values.host }}
{{- end }}
{{- end }}
{{- $k8sCa := index .Values._cluster "kube-root-ca" }}
---
apiVersion: v1
kind: Secret


@@ -1,6 +1,5 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $exposeIngress := index $cozyConfig.data "expose-ingress" | default "tenant-root" }}
{{- $exposeExternalIPs := (index $cozyConfig.data "expose-external-ips") | default "" | nospace }}
{{- $exposeIngress := (index .Values._cluster "expose-ingress") | default "tenant-root" }}
{{- $exposeExternalIPs := (index .Values._cluster "expose-external-ips") | default "" | nospace }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
@@ -24,6 +23,9 @@ spec:
force: true
remediation:
retries: -1
valuesFrom:
- kind: Secret
name: cozystack-values
values:
ingress-nginx:
fullnameOverride: {{ trimPrefix "tenant-" .Release.Namespace }}-ingress


@@ -5,9 +5,8 @@ metadata:
name: alerta-db
spec:
instances: 2
{{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }}
{{- if $configMap }}
{{- $rawConstraints := get $configMap.data "globalAppTopologySpreadConstraints" }}
{{- if .Values._cluster.scheduling }}
{{- $rawConstraints := get .Values._cluster.scheduling "globalAppTopologySpreadConstraints" }}
{{- if $rawConstraints }}
{{- $rawConstraints | fromYaml | toYaml | nindent 2 }}
labelSelector:


@@ -1,9 +1,6 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $issuerType := (index $cozyConfig.data "clusterissuer") | default "http01" }}
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}
{{- $issuerType := (index .Values._cluster "clusterissuer") | default "http01" }}
{{- $ingress := .Values._namespace.ingress }}
{{- $host := .Values._namespace.host }}
{{- $apiKey := randAlphaNum 32 }}
{{- $existingSecret := lookup "v1" "Secret" .Release.Namespace "alerta" }}


@@ -1,6 +1,6 @@
{{- range (split "\n" (.Files.Get "dashboards.list")) }}
{{- $parts := split "/" . }}
{{- if eq (len $parts) 2 }}
{{- if eq (len $parts) 2 }}
---
apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDashboard
@@ -11,6 +11,6 @@ spec:
instanceSelector:
matchLabels:
dashboards: grafana
url: http://cozystack-assets.cozy-system.svc/dashboards/{{ . }}.json
url: http://grafana-dashboards.cozy-grafana-operator.svc/{{ . }}.json
{{- end }}
{{- end }}


@@ -6,9 +6,8 @@ spec:
instances: 2
storage:
size: {{ .Values.grafana.db.size }}
{{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }}
{{- if $configMap }}
{{- $rawConstraints := get $configMap.data "globalAppTopologySpreadConstraints" }}
{{- if .Values._cluster.scheduling }}
{{- $rawConstraints := get .Values._cluster.scheduling "globalAppTopologySpreadConstraints" }}
{{- if $rawConstraints }}
{{- $rawConstraints | fromYaml | toYaml | nindent 2 }}
labelSelector:


@@ -1,9 +1,6 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $issuerType := (index $cozyConfig.data "clusterissuer") | default "http01" }}
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}
{{- $issuerType := (index .Values._cluster "clusterissuer") | default "http01" }}
{{- $ingress := .Values._namespace.ingress }}
{{- $host := .Values._namespace.host }}
---
apiVersion: grafana.integreatly.org/v1beta1
kind: Grafana


@@ -1,7 +1,5 @@
{{- if eq .Values.topology "Client" }}
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}
{{- $host := .Values._namespace.host }}
---
apiVersion: apps/v1
kind: Deployment


@@ -1,9 +1,5 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $issuerType := (index $cozyConfig.data "clusterissuer") | default "http01" }}
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}
{{- $ingress := .Values._namespace.ingress }}
{{- $host := .Values._namespace.host }}
{{- if and (not (eq .Values.topology "Client")) (.Values.filer.grpcHost) }}
---
apiVersion: networking.k8s.io/v1


@@ -34,9 +34,8 @@
{{- end }}
{{- if not (eq .Values.topology "Client") }}
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}
{{- $ingress := .Values._namespace.ingress }}
{{- $host := .Values._namespace.host }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
@@ -60,6 +59,9 @@ spec:
force: true
remediation:
retries: -1
valuesFrom:
- kind: Secret
name: cozystack-values
values:
global:
serviceAccountName: "{{ .Release.Namespace }}-seaweedfs"


@@ -1,7 +1,130 @@
{{/*
Cluster-wide configuration helpers.
These helpers read from .Values._cluster which is populated via valuesFrom from Secret cozystack-values.
*/}}
{{/*
Get the root host for the cluster.
Usage: {{ include "cozy-lib.root-host" . }}
*/}}
{{- define "cozy-lib.root-host" -}}
{{- (index .Values._cluster "root-host") | default "" }}
{{- end }}
{{/*
Get the bundle name for the cluster.
Usage: {{ include "cozy-lib.bundle-name" . }}
*/}}
{{- define "cozy-lib.bundle-name" -}}
{{- (index .Values._cluster "bundle-name") | default "" }}
{{- end }}
{{/*
Get the images registry.
Usage: {{ include "cozy-lib.images-registry" . }}
*/}}
{{- define "cozy-lib.images-registry" -}}
{{- (index .Values._cluster "images-registry") | default "" }}
{{- end }}
{{/*
Get the ipv4 cluster CIDR.
Usage: {{ include "cozy-lib.ipv4-cluster-cidr" . }}
*/}}
{{- define "cozy-lib.ipv4-cluster-cidr" -}}
{{- (index .Values._cluster "ipv4-cluster-cidr") | default "" }}
{{- end }}
{{/*
Get the ipv4 service CIDR.
Usage: {{ include "cozy-lib.ipv4-service-cidr" . }}
*/}}
{{- define "cozy-lib.ipv4-service-cidr" -}}
{{- (index .Values._cluster "ipv4-service-cidr") | default "" }}
{{- end }}
{{/*
Get the ipv4 join CIDR.
Usage: {{ include "cozy-lib.ipv4-join-cidr" . }}
*/}}
{{- define "cozy-lib.ipv4-join-cidr" -}}
{{- (index .Values._cluster "ipv4-join-cidr") | default "" }}
{{- end }}
{{/*
Get scheduling configuration.
Usage: {{ include "cozy-lib.scheduling" . }}
Returns: YAML string of scheduling configuration
*/}}
{{- define "cozy-lib.scheduling" -}}
{{- if .Values._cluster.scheduling }}
{{- .Values._cluster.scheduling | toYaml }}
{{- end }}
{{- end }}
{{/*
Get branding configuration.
Usage: {{ include "cozy-lib.branding" . }}
Returns: YAML string of branding configuration
*/}}
{{- define "cozy-lib.branding" -}}
{{- if .Values._cluster.branding }}
{{- .Values._cluster.branding | toYaml }}
{{- end }}
{{- end }}
{{/*
Namespace-specific configuration helpers.
These helpers read from .Values._namespace which is populated via valuesFrom from Secret cozystack-values.
*/}}
{{/*
Get the host for this namespace.
Usage: {{ include "cozy-lib.ns-host" . }}
*/}}
{{- define "cozy-lib.ns-host" -}}
{{- .Values._namespace.host | default "" }}
{{- end }}
{{/*
Get the etcd namespace reference.
Usage: {{ include "cozy-lib.ns-etcd" . }}
*/}}
{{- define "cozy-lib.ns-etcd" -}}
{{- .Values._namespace.etcd | default "" }}
{{- end }}
{{/*
Get the ingress namespace reference.
Usage: {{ include "cozy-lib.ns-ingress" . }}
*/}}
{{- define "cozy-lib.ns-ingress" -}}
{{- .Values._namespace.ingress | default "" }}
{{- end }}
{{/*
Get the monitoring namespace reference.
Usage: {{ include "cozy-lib.ns-monitoring" . }}
*/}}
{{- define "cozy-lib.ns-monitoring" -}}
{{- .Values._namespace.monitoring | default "" }}
{{- end }}
{{/*
Get the seaweedfs namespace reference.
Usage: {{ include "cozy-lib.ns-seaweedfs" . }}
*/}}
{{- define "cozy-lib.ns-seaweedfs" -}}
{{- .Values._namespace.seaweedfs | default "" }}
{{- end }}
{{/*
Legacy helper - kept for backward compatibility during migration.
Loads config into context. Deprecated: use direct .Values._cluster access instead.
*/}}
{{- define "cozy-lib.loadCozyConfig" }}
{{- include "cozy-lib.checkInput" . }}
{{- if not (hasKey (index . 1) "cozyConfig") }}
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $_ := set (index . 1) "cozyConfig" $cozyConfig }}
{{- $_ := set (index . 1) "cozyConfig" (dict "data" ((index . 1).Values._cluster | default dict)) }}
{{- end }}
{{- end }}


@@ -13,12 +13,10 @@ spec:
prefix: ""
labels:
cozystack.io/ui: "true"
chart:
name: bootbox
sourceRef:
kind: HelmRepository
name: cozystack-extra
namespace: cozy-public
chartRef:
kind: OCIRepository
name: bootbox-rd
namespace: cozy-system
dashboard:
category: Administration
singular: BootBox


@@ -13,12 +13,10 @@ spec:
prefix: bucket-
labels:
cozystack.io/ui: "true"
chart:
name: bucket
sourceRef:
kind: HelmRepository
name: cozystack-apps
namespace: cozy-public
chartRef:
kind: OCIRepository
name: bucket-rd
namespace: cozy-system
dashboard:
singular: Bucket
plural: Buckets


@@ -1,8 +1,6 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- $issuerType := (index $cozyConfig.data "clusterissuer") | default "http01" }}
{{- $host := .Values._namespace.host }}
{{- $ingress := .Values._namespace.ingress }}
{{- $issuerType := (index .Values._cluster "clusterissuer") | default "http01" }}
apiVersion: networking.k8s.io/v1
kind: Ingress


@@ -1,5 +1,4 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $issuerType := (index $cozyConfig.data "clusterissuer") | default "http01" }}
{{- $issuerType := (index .Values._cluster "clusterissuer") | default "http01" }}
apiVersion: cert-manager.io/v1
kind: ClusterIssuer


@@ -13,12 +13,10 @@ spec:
prefix: clickhouse-
labels:
cozystack.io/ui: "true"
chart:
name: clickhouse
sourceRef:
kind: HelmRepository
name: cozystack-apps
namespace: cozy-public
chartRef:
kind: OCIRepository
name: clickhouse-rd
namespace: cozy-system
dashboard:
category: PaaS
singular: ClickHouse

Some files were not shown because too many files have changed in this diff.