Mirror of https://github.com/cozystack/cozystack.git (synced 2026-03-11 17:38:55 +00:00)

Compare commits: 52 commits, feat/cozys...v1.0.2
Commits (52), newest first; author and date columns were empty in the mirror view:

48ce08f584, 2675ff326a, 51a5073175, 05d1c02eff, f06817d4e8, 0fefaa246f,
7cea11e57b, 31ae2bb826, c976378f2f, edc32eec51, e51f05d850, dac1a375e2,
2a956eb0f9, 6fbe026927, 4c3a6987c5, 30c5696541, 42780f26d2, e9e2121153,
3033e718dd, aa8a7eae47, 5a14dc6f54, 2b59d4fc97, ab26d71cc7, 66a61bd63e,
f887e34206, 7a107296e5, e16d987403, c63fcf50b3, 4417cc35a0, 78cc4c0955,
65c6936e95, cd3643b8cc, 2024ec3a8b, f282f19c1b, 7427bbdaa3, da89203a32,
e0dfc8a321, 9cbd948b08, e5f7bc5c53, e89ba43c39, a20951def3, 4c73ac54a0,
cfb5914cdd, 948346ef6d, da597225d1, 7871d425dd, a9adda5e88, 880b99f3f7,
c7290f3521, 2b1b5e8fa9, 4f2578a32b, 211e01bd87
.github/CODEOWNERS (vendored, 2 lines changed)
@@ -1 +1 @@
-* @kvaps @lllamnyp @lexfrei @androndo @IvanHunters
+* @kvaps @lllamnyp @lexfrei @androndo @IvanHunters @sircthulhu

docs/changelogs/v1.0.0-rc.2.md (new file, 57 lines)
@@ -0,0 +1,57 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v1.0.0-rc.2
-->

> **⚠️ Release Candidate Warning**: This is a release candidate intended for final validation before the stable v1.0.0 release. Breaking changes are not expected at this stage, but please test thoroughly before deploying to production.

## Features and Improvements

* **[keycloak] Allow custom Ingress hostname via values**: Added an `ingress.host` field to the cozy-keycloak chart values, allowing operators to override the default `keycloak.<root-host>` Ingress hostname. The custom hostname is applied to both the Ingress resource and the `KC_HOSTNAME` environment variable in the StatefulSet. When left empty, the original behavior is preserved (fully backward compatible) ([**@sircthulhu**](https://github.com/sircthulhu) in #2101).
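The `ingress.host` override described above can be sketched as a values file. The hostname, file path, and the helm release/chart names in the comment are illustrative assumptions, not taken from the release notes:

```shell
# Hypothetical values file overriding the default keycloak.<root-host> hostname.
# Hostname and path are illustrative; an empty host keeps the default behavior.
cat > /tmp/keycloak-values.yaml <<'EOF'
ingress:
  host: auth.example.org   # leave empty to keep keycloak.<root-host>
EOF
cat /tmp/keycloak-values.yaml

# Sketch of applying it (release and chart names assumed; requires a cluster):
#   helm upgrade --install keycloak <cozy-keycloak-chart> -f /tmp/keycloak-values.yaml
```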
## Fixes

* **[platform] Fix upgrade issues in migrations, etcd timeout, and migration script**: Fixed multiple upgrade failures discovered during v0.41.1 → v1.0 upgrade testing. Migration 26 now uses the `cozystack.io/ui=true` label (always present on v0.41.1) instead of the new label that depends on migration 22 having run, and adds robust Helm secret deletion with fallback and verification. Migrations 28 and 29 wrap `grep` calls to prevent `pipefail` exits and fix the reconcile annotation to use RFC3339 format. Migration 27 now skips missing CRDs and adds a name-pattern fallback for Helm secret deletion. The etcd HelmRelease timeout is increased from 10m to 30m to accommodate TLS cert rotation hooks. The `migrate-to-version-1.0.sh` script gains the missing `bundle-disable`, `bundle-enable`, `expose-ingress`, and `expose-services` field mappings ([**@kvaps**](https://github.com/kvaps) in #2096).
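The RFC3339 format mentioned for the reconcile annotation can be produced with plain `date`; `reconcile.fluxcd.io/requestedAt` is Flux's standard annotation for requesting a reconcile, though the HelmRelease name and namespace in the comment are illustrative:

```shell
# RFC3339 UTC timestamp, e.g. 2026-03-11T17:38:55Z
TS="$(date -u +"%Y-%m-%dT%H:%M:%SZ")"
echo "$TS"

# Sketch of requesting a reconcile with it (requires a cluster; names illustrative):
#   kubectl annotate hr -n tenant-root monitoring \
#     reconcile.fluxcd.io/requestedAt="$TS" --overwrite
```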
* **[platform] Fix orphaned -rd HelmReleases after application renames**: After the `ferretdb→mongodb`, `mysql→mariadb`, and `virtual-machine→vm-disk+vm-instance` renames, the system-level `-rd` HelmReleases in `cozy-system` (`ferretdb-rd`, `mysql-rd`, `virtual-machine-rd`) were left orphaned, referencing ExternalArtifacts that no longer exist and causing persistent reconciliation failures. Migrations 28 and 29 are updated to remove these resources, and migration 33 is added as a safety net for clusters that already passed those migrations ([**@kvaps**](https://github.com/kvaps) in #2102).

* **[monitoring-agents] Fix FQDN resolution regression in tenant workload clusters**: The fix introduced in #2075 used `_cluster.cluster-domain` references in `values.yaml`, but `_cluster` values are not accessible from Helm subchart contexts — meaning fluent-bit received empty hostnames and failed to forward logs. This PR replaces the `_cluster` references with a new `global.clusterDomain` variable (empty by default for management clusters, set to the cluster domain for tenant clusters), which is correctly shared with all subcharts ([**@kvaps**](https://github.com/kvaps) in #2086).

* **[dashboard] Fix legacy templating and cluster identifier in sidebar links**: Standardized the cluster identifier used across dashboard menu links, administration links, and API request paths, resolving incorrect or broken link targets for the Backups and External IPs sidebar sections ([**@androndo**](https://github.com/androndo) in #2093).

* **[dashboard] Fix backupjobs creation form and sidebar backup category identifier**: Fixed the backup job creation form configuration, adding the required Name, Namespace, Plan Name, Application, and Backup Class fields. Fixed the sidebar backup category identifier that was causing incorrect navigation ([**@androndo**](https://github.com/androndo) in #2103).

## Documentation

* **[website] Add Helm chart development principles guide**: Added a new developer guide section documenting Cozystack's four core Helm chart principles: easy upstream updates, local-first artifacts, local dev/test workflow, and no external dependencies ([**@kvaps**](https://github.com/kvaps) in cozystack/website#418).

* **[website] Add network architecture overview**: Added comprehensive network architecture documentation covering the multi-layered networking stack — MetalLB (L2/BGP), Cilium eBPF (kube-proxy replacement), Kube-OVN (centralized IPAM), and tenant isolation with identity-based eBPF policies — with Mermaid diagrams for all major traffic flows ([**@IvanHunters**](https://github.com/IvanHunters) in cozystack/website#422).

* **[website] Update documentation to use jsonpatch for service exposure**: Improved `kubectl patch` commands throughout installation and configuration guides to use JSON Patch `add` operations for extending arrays instead of replacing them wholesale, making the documented commands safer and more precise ([**@sircthulhu**](https://github.com/sircthulhu) in cozystack/website#427).
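The JSON Patch `add` operation described above appends to an array instead of overwriting it. A minimal sketch — the field path, value, and target resource are illustrative:

```shell
# A JSON Patch that appends one element to an array field: the "-" index
# means "end of array", so existing entries are preserved rather than replaced.
# Path and value are illustrative.
PATCH='[{"op":"add","path":"/spec/exposedServices/-","value":"dashboard"}]'
echo "$PATCH"

# Sketch of using it (requires a cluster; resource kind/name assumed):
#   kubectl patch <kind> <name> --type=json -p "$PATCH"
```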
* **[website] Update certificates section in Platform Package documentation**: Updated the certificate configuration documentation to reflect the new `solver` and `issuerName` fields introduced in v1.0.0-rc.1, replacing the legacy `issuerType` references ([**@myasnikovdaniil**](https://github.com/myasnikovdaniil) in cozystack/website#429).

* **[website] Add tenant Kubernetes cluster log querying guide**: Added documentation for querying logs from tenant Kubernetes clusters in Grafana using VictoriaLogs labels (`tenant`, `kubernetes_namespace_name`, `kubernetes_pod_name`), including the `monitoringAgents` addon prerequisite and step-by-step filtering examples ([**@IvanHunters**](https://github.com/IvanHunters) in cozystack/website#430).

* **[website] Replace non-idempotent commands with idempotent alternatives**: Updated `helm install` to `helm upgrade --install`, `kubectl create -f` to `kubectl apply -f`, and `kubectl create ns` to the dry-run+apply pattern across all installation and deployment guides so commands can be safely re-run ([**@lexfrei**](https://github.com/lexfrei) in cozystack/website#431).
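The idempotent replacements above can be summarized side by side; the commands are shown as strings here so the sketch runs without a cluster, and the namespace name is illustrative:

```shell
# Non-idempotent commands fail on re-run (AlreadyExists); the idempotent
# equivalents converge instead. Namespace name is illustrative.
BEFORE='kubectl create ns tenant-demo'
AFTER='kubectl create ns tenant-demo --dry-run=client -o yaml | kubectl apply -f -'
echo "before: $BEFORE"
echo "after:  $AFTER"
# Similarly:
#   helm install <release> <chart>   ->  helm upgrade --install <release> <chart>
#   kubectl create -f manifest.yaml  ->  kubectl apply -f manifest.yaml
```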
* **[website] Fix broken documentation links with `.md` suffix**: Fixed incorrect internal links with `.md` suffix across virtualization guides for both v0 and v1 documentation, standardizing link text to "Developer Guide" ([**@cheese**](https://github.com/cheese) in cozystack/website#432).

## Contributors

We'd like to thank all contributors who made this release possible:

* [**@androndo**](https://github.com/androndo)
* [**@cheese**](https://github.com/cheese)
* [**@IvanHunters**](https://github.com/IvanHunters)
* [**@kvaps**](https://github.com/kvaps)
* [**@lexfrei**](https://github.com/lexfrei)
* [**@myasnikovdaniil**](https://github.com/myasnikovdaniil)
* [**@sircthulhu**](https://github.com/sircthulhu)

### New Contributors

We're excited to welcome our first-time contributors:

* [**@cheese**](https://github.com/cheese) - First contribution!

**Full Changelog**: https://github.com/cozystack/cozystack/compare/v1.0.0-rc.1...v1.0.0-rc.2
@@ -32,6 +32,54 @@ if ! kubectl get namespace "$NAMESPACE" &> /dev/null; then
  exit 1
fi

+# Step 0: Annotate critical resources to prevent Helm from deleting them
+echo "Step 0: Protect critical resources from Helm deletion"
+echo ""
+echo "The following resources will be annotated with helm.sh/resource-policy=keep"
+echo "to prevent Helm from deleting them when the installer release is removed:"
+echo "  - Namespace: $NAMESPACE"
+echo "  - ConfigMap: $NAMESPACE/cozystack-version"
+echo ""
+read -p "Do you want to annotate these resources? (y/N) " -n 1 -r
+echo ""
+
+if [[ $REPLY =~ ^[Yy]$ ]]; then
+  echo "Annotating namespace $NAMESPACE..."
+  kubectl annotate namespace "$NAMESPACE" helm.sh/resource-policy=keep --overwrite
+  echo "Annotating ConfigMap cozystack-version..."
+  kubectl annotate configmap -n "$NAMESPACE" cozystack-version helm.sh/resource-policy=keep --overwrite 2>/dev/null || echo "  ConfigMap cozystack-version not found, skipping."
+  echo ""
+  echo "Resources annotated successfully."
+else
+  echo "WARNING: Skipping annotation. If you remove the Helm installer release,"
+  echo "the namespace and its contents may be deleted!"
+fi
+echo ""
+
+# Step 1: Check for cozy-proxy HelmRelease with conflicting releaseName
+# In v0.41.x, cozy-proxy was incorrectly configured with releaseName "cozystack",
+# which conflicts with the installer helm release name. If not suspended, cozy-proxy
+# HelmRelease will overwrite the installer release and delete cozystack-operator.
+COZY_PROXY_RELEASE_NAME=$(kubectl get hr -n "$NAMESPACE" cozy-proxy -o jsonpath='{.spec.releaseName}' 2>/dev/null || true)
+if [ "$COZY_PROXY_RELEASE_NAME" = "cozystack" ]; then
+  echo "WARNING: HelmRelease cozy-proxy has releaseName 'cozystack', which conflicts"
+  echo "with the installer release. It must be suspended before proceeding, otherwise"
+  echo "it will overwrite the installer and delete cozystack-operator."
+  echo ""
+  read -p "Suspend HelmRelease cozy-proxy? (y/N) " -n 1 -r
+  echo ""
+  if [[ $REPLY =~ ^[Yy]$ ]]; then
+    kubectl -n "$NAMESPACE" patch hr cozy-proxy --type=merge --field-manager=flux-client-side-apply -p '{"spec":{"suspend":true}}'
+    echo "HelmRelease cozy-proxy suspended."
+  else
+    echo "ERROR: Cannot proceed with conflicting cozy-proxy HelmRelease active."
+    echo "Please suspend it manually:"
+    echo "  kubectl -n $NAMESPACE patch hr cozy-proxy --type=merge -p '{\"spec\":{\"suspend\":true}}'"
+    exit 1
+  fi
+  echo ""
+fi
+
# Read ConfigMap cozystack
echo "Reading ConfigMap cozystack..."
COZYSTACK_CM=$(kubectl get configmap -n "$NAMESPACE" cozystack -o json 2>/dev/null || echo "{}")
@@ -52,6 +100,10 @@ OIDC_ENABLED=$(echo "$COZYSTACK_CM" | jq -r '.data["oidc-enabled"] // "false"')
KEYCLOAK_REDIRECTS=$(echo "$COZYSTACK_CM" | jq -r '.data["extra-keycloak-redirect-uri-for-dashboard"] // ""')
TELEMETRY_ENABLED=$(echo "$COZYSTACK_CM" | jq -r '.data["telemetry-enabled"] // "true"')
BUNDLE_NAME=$(echo "$COZYSTACK_CM" | jq -r '.data["bundle-name"] // "paas-full"')
+BUNDLE_DISABLE=$(echo "$COZYSTACK_CM" | jq -r '.data["bundle-disable"] // ""')
+BUNDLE_ENABLE=$(echo "$COZYSTACK_CM" | jq -r '.data["bundle-enable"] // ""')
+EXPOSE_INGRESS=$(echo "$COZYSTACK_CM" | jq -r '.data["expose-ingress"] // "tenant-root"')
+EXPOSE_SERVICES=$(echo "$COZYSTACK_CM" | jq -r '.data["expose-services"] // ""')

# Certificate issuer configuration (old undocumented field: clusterissuer)
OLD_CLUSTER_ISSUER=$(echo "$COZYSTACK_CM" | jq -r '.data["clusterissuer"] // ""')
@@ -99,21 +151,24 @@ else
  EXTERNAL_IPS=$(echo "$EXTERNAL_IPS" | sed 's/,/\n/g' | awk 'BEGIN{print}{print "    - "$0}')
fi

# Determine bundle type
case "$BUNDLE_NAME" in
  paas-full|distro-full)
    SYSTEM_ENABLED="true"
    SYSTEM_TYPE="full"
    ;;
  paas-hosted|distro-hosted)
    SYSTEM_ENABLED="false"
    SYSTEM_TYPE="hosted"
    ;;
  *)
    SYSTEM_ENABLED="false"
    SYSTEM_TYPE="hosted"
    ;;
esac

# Convert comma-separated lists to YAML arrays
if [ -z "$BUNDLE_DISABLE" ]; then
  DISABLED_PACKAGES="[]"
else
  DISABLED_PACKAGES=$(echo "$BUNDLE_DISABLE" | sed 's/,/\n/g' | awk 'BEGIN{print}{print "    - "$0}')
fi

if [ -z "$BUNDLE_ENABLE" ]; then
  ENABLED_PACKAGES="[]"
else
  ENABLED_PACKAGES=$(echo "$BUNDLE_ENABLE" | sed 's/,/\n/g' | awk 'BEGIN{print}{print "    - "$0}')
fi

if [ -z "$EXPOSE_SERVICES" ]; then
  EXPOSED_SERVICES_YAML="[]"
else
  EXPOSED_SERVICES_YAML=$(echo "$EXPOSE_SERVICES" | sed 's/,/\n/g' | awk 'BEGIN{print}{print "    - "$0}')
fi

# Update bundle naming
BUNDLE_NAME=$(echo "$BUNDLE_NAME" | sed 's/paas/isp/')
@@ -141,8 +196,6 @@ echo "  Root Host: $ROOT_HOST"
echo "  API Server Endpoint: $API_SERVER_ENDPOINT"
echo "  OIDC Enabled: $OIDC_ENABLED"
echo "  Bundle Name: $BUNDLE_NAME"
echo "  System Enabled: $SYSTEM_ENABLED"
echo "  System Type: $SYSTEM_TYPE"
echo "  Certificate Solver: ${SOLVER:-http01 (default)}"
echo "  Issuer Name: ${ISSUER_NAME:-letsencrypt-prod (default)}"
echo ""
@@ -160,15 +213,8 @@ spec:
  platform:
    values:
      bundles:
        system:
          enabled: $SYSTEM_ENABLED
          type: "$SYSTEM_TYPE"
        iaas:
          enabled: true
        paas:
          enabled: true
        naas:
          enabled: true
      disabledPackages: $DISABLED_PACKAGES
      enabledPackages: $ENABLED_PACKAGES
      networking:
        clusterDomain: "$CLUSTER_DOMAIN"
        podCIDR: "$POD_CIDR"
@@ -177,6 +223,8 @@ spec:
        joinCIDR: "$JOIN_CIDR"
      publishing:
        host: "$ROOT_HOST"
+       ingressName: "$EXPOSE_INGRESS"
+       exposedServices: $EXPOSED_SERVICES_YAML
        apiServerEndpoint: "$API_SERVER_ENDPOINT"
        externalIPs: $EXTERNAL_IPS
${CERTIFICATES_SECTION}
||||
@@ -156,7 +156,7 @@ menuItems = append(menuItems, map[string]any{
		map[string]any{
			"key":   "{plural}",
			"label": "{ResourceLabel}",
-			"link":  "/openapi-ui/{clusterName}/{namespace}/api-table/{group}/{version}/{plural}",
+			"link":  "/openapi-ui/{cluster}/{namespace}/api-table/{group}/{version}/{plural}",
		},
	},
}),

@@ -174,7 +174,7 @@ menuItems = append(menuItems, map[string]any{

**Important Notes**:
- The sidebar tag (`{lowercase-kind}-sidebar`) must match what the Factory uses
-- The link format: `/openapi-ui/{clusterName}/{namespace}/api-table/{group}/{version}/{plural}`
+- The link format: `/openapi-ui/{cluster}/{namespace}/api-table/{group}/{version}/{plural}`
- All sidebars share the same `keysAndTags` and `menuItems`, so changes affect all sidebar instances

### Step 4: Verify Integration
@@ -195,6 +195,7 @@ func applyListInputOverrides(schema map[string]any, kind string, openAPIProps ma
			"valueUri":    "/api/clusters/{cluster}/k8s/apis/instancetype.kubevirt.io/v1beta1/virtualmachineclusterinstancetypes",
			"keysToValue": []any{"metadata", "name"},
			"keysToLabel": []any{"metadata", "name"},
+			"allowEmpty":  true,
		},
	}
	if prop, _ := openAPIProps["instanceType"].(map[string]any); prop != nil {
@@ -202,6 +202,10 @@ func TestApplyListInputOverrides_VMInstance(t *testing.T) {
		t.Errorf("expected valueUri %s, got %v", expectedURI, customProps["valueUri"])
	}

+	if customProps["allowEmpty"] != true {
+		t.Errorf("expected allowEmpty true, got %v", customProps["allowEmpty"])
+	}
+
	// Check disks[].name is a listInput
	disks, ok := specProps["disks"].(map[string]any)
	if !ok {
@@ -582,15 +582,14 @@ type factoryFlags struct {
	Secrets   bool
}

-// factoryFeatureFlags tries several conventional locations so you can evolve the API
-// without breaking the controller. Defaults are false (hidden).
+// factoryFeatureFlags determines which tabs to show based on whether the
+// ApplicationDefinition has non-empty Include resource selectors.
+// Workloads tab is always shown.
func factoryFeatureFlags(crd *cozyv1alpha1.ApplicationDefinition) factoryFlags {
-	var f factoryFlags
-
-	f.Workloads = true
-	f.Ingresses = true
-	f.Services = true
-	f.Secrets = true
-
-	return f
+	return factoryFlags{
+		Workloads: true,
+		Ingresses: len(crd.Spec.Ingresses.Include) > 0,
+		Services:  len(crd.Spec.Services.Include) > 0,
+		Secrets:   len(crd.Spec.Secrets.Include) > 0,
+	}
}
@@ -299,10 +299,6 @@ func (m *Manager) buildExpectedResourceSet(crds []cozyv1alpha1.ApplicationDefini

	// Add other stock sidebars that are created for each CRD
	stockSidebars := []string{
-		"stock-instance-api-form",
-		"stock-instance-api-table",
-		"stock-instance-builtin-form",
-		"stock-instance-builtin-table",
		"stock-project-factory-marketplace",
		"stock-project-factory-workloadmonitor-details",
		"stock-project-api-form",

@@ -311,6 +307,10 @@ func (m *Manager) buildExpectedResourceSet(crds []cozyv1alpha1.ApplicationDefini
		"stock-project-builtin-table",
		"stock-project-crd-form",
		"stock-project-crd-table",
+		"stock-instance-api-form",
+		"stock-instance-api-table",
+		"stock-instance-builtin-form",
+		"stock-instance-builtin-table",
	}
	for _, sidebarID := range stockSidebars {
		expected["Sidebar"][sidebarID] = true
@@ -17,8 +17,7 @@ import (

// ensureSidebar creates/updates multiple Sidebar resources that share the same menu:
//   - The "details" sidebar tied to the current kind (stock-project-factory-<kind>-details)
//   - The stock-instance sidebars: api-form, api-table, builtin-form, builtin-table
-//   - The stock-project sidebars: api-form, api-table, builtin-form, builtin-table, crd-form, crd-table
//   - The stock-project sidebars: api-form, api-table, builtin-form, builtin-table, crd-form, crd-table
//
// Menu rules:
//   - The first section is "Marketplace" with two hardcoded entries:
@@ -176,23 +175,23 @@ func (m *Manager) ensureSidebar(ctx context.Context, crd *cozyv1alpha1.Applicati

	// Add hardcoded Backups section
	menuItems = append(menuItems, map[string]any{
-		"key":   "backups",
+		"key":   "backups-category",
		"label": "Backups",
		"children": []any{
			map[string]any{
				"key":   "plans",
				"label": "Plans",
-				"link":  "/openapi-ui/{clusterName}/{namespace}/api-table/backups.cozystack.io/v1alpha1/plans",
+				"link":  "/openapi-ui/{cluster}/{namespace}/api-table/backups.cozystack.io/v1alpha1/plans",
			},
			map[string]any{
				"key":   "backupjobs",
				"label": "BackupJobs",
-				"link":  "/openapi-ui/{clusterName}/{namespace}/api-table/backups.cozystack.io/v1alpha1/backupjobs",
+				"link":  "/openapi-ui/{cluster}/{namespace}/api-table/backups.cozystack.io/v1alpha1/backupjobs",
			},
			map[string]any{
				"key":   "backups",
				"label": "Backups",
-				"link":  "/openapi-ui/{clusterName}/{namespace}/api-table/backups.cozystack.io/v1alpha1/backups",
+				"link":  "/openapi-ui/{cluster}/{namespace}/api-table/backups.cozystack.io/v1alpha1/backups",
			},
		},
	})
@@ -215,7 +214,7 @@ func (m *Manager) ensureSidebar(ctx context.Context, crd *cozyv1alpha1.Applicati
		map[string]any{
			"key":   "loadbalancer-services",
			"label": "External IPs",
-			"link":  "/openapi-ui/{clusterName}/{namespace}/factory/external-ips",
+			"link":  "/openapi-ui/{cluster}/{namespace}/factory/external-ips",
		},
		map[string]any{
			"key": "tenants",
@@ -228,13 +227,7 @@ func (m *Manager) ensureSidebar(ctx context.Context, crd *cozyv1alpha1.Applicati
	// 6) Prepare the list of Sidebar IDs to upsert with the SAME content
	// Create sidebars for ALL CRDs with dashboard config
	targetIDs := []string{
-		// stock-instance sidebars
-		"stock-instance-api-form",
-		"stock-instance-api-table",
-		"stock-instance-builtin-form",
-		"stock-instance-builtin-table",
-
-		// stock-project sidebars
+		// stock-project sidebars (namespace-level, full menu)
		"stock-project-factory-marketplace",
		"stock-project-factory-workloadmonitor-details",
		"stock-project-factory-kube-service-details",

@@ -250,6 +243,11 @@ func (m *Manager) ensureSidebar(ctx context.Context, crd *cozyv1alpha1.Applicati
		"stock-project-builtin-table",
		"stock-project-crd-form",
		"stock-project-crd-table",
+		// stock-instance sidebars (namespace-level pages after namespace is selected)
+		"stock-instance-api-form",
+		"stock-instance-api-table",
+		"stock-instance-builtin-form",
+		"stock-instance-builtin-table",
	}

	// Add details sidebars for all CRDs with dashboard config
@@ -505,7 +505,7 @@ func CreateAllCustomFormsOverrides() []*dashboardv1alpha1.CustomFormsOverride {
			createFormItem("spec.applicationRef.name", "Application Name", "text"),
			createFormItemWithAPI("spec.backupClassName", "Backup Class", "select", map[string]any{
				"api": map[string]any{
-					"fetchUrl":    "/api/clusters/{clusterName}/k8s/apis/backups.cozystack.io/v1alpha1/backupclasses",
+					"fetchUrl":    "/api/clusters/{cluster}/k8s/apis/backups.cozystack.io/v1alpha1/backupclasses",
					"pathToItems": []any{"items"},
					"pathToValue": []any{"metadata", "name"},
					"pathToLabel": []any{"metadata", "name"},

@@ -516,6 +516,27 @@ func CreateAllCustomFormsOverrides() []*dashboardv1alpha1.CustomFormsOverride {
			createFormItem("spec.schedule.cron", "Schedule Cron", "text"),
		},
	}),

+	// BackupJobs form override - backups.cozystack.io/v1alpha1
+	createCustomFormsOverride("default-/backups.cozystack.io/v1alpha1/backupjobs", map[string]any{
+		"formItems": []any{
+			createFormItem("metadata.name", "Name", "text"),
+			createFormItem("metadata.namespace", "Namespace", "text"),
+			createFormItem("spec.planRef.name", "Plan Name (optional)", "text"),
+			createFormItem("spec.applicationRef.apiGroup", "Application API Group", "text"),
+			createFormItem("spec.applicationRef.kind", "Application Kind", "text"),
+			createFormItem("spec.applicationRef.name", "Application Name", "text"),
+			createFormItemWithAPI("spec.backupClassName", "Backup Class", "select", map[string]any{
+				"api": map[string]any{
+					"fetchUrl":       "/api/clusters/{cluster}/k8s/apis/backups.cozystack.io/v1alpha1/backupclasses",
+					"pathToItems":    []any{"items"},
+					"pathToValue":    []any{"metadata", "name"},
+					"pathToLabel":    []any{"metadata", "name"},
+					"clusterNameVar": "clusterName",
+				},
+			}),
+		},
+	}),
	}
}
@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/cluster-autoscaler:0.0.0@sha256:7deeee117e7eec599cb453836ca95eadd131dfc8c875dc457ef29dc1433395e0
+ghcr.io/cozystack/cozystack/cluster-autoscaler:0.0.0@sha256:3753b735b0315bee90de54cb25cfebc63bd2cc90ad11ca4fdc0e70439abd5096

@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:604561e23df1b8eb25c24cf73fd93c7aaa6d1e7c56affbbda5c6f0f83424e4b1
+ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:434aa3b8e2a3cbf6681426b174e1c4fde23bafd12a6cccd046b5cb1749092ec4
@@ -18,7 +18,7 @@ spec:
    name: cozystack-etcd-application-default-etcd
    namespace: cozy-system
  interval: 5m
-  timeout: 10m
+  timeout: 30m
  install:
    remediation:
      retries: -1
@@ -10,6 +10,8 @@ metadata:
  labels:
    cozystack.io/system: "true"
    pod-security.kubernetes.io/enforce: privileged
+  annotations:
+    helm.sh/resource-policy: keep
---
apiVersion: v1
kind: ServiceAccount
@@ -1,9 +1,9 @@
cozystackOperator:
  # Deployment variant: talos, generic, hosted
  variant: talos
-  image: ghcr.io/cozystack/cozystack/cozystack-operator:v1.0.0-rc.1@sha256:5c0148116b2ab425106f6b86bbc1dfec593a83c993947c24eae92946d1c6116a
+  image: ghcr.io/cozystack/cozystack/cozystack-operator:v1.0.2@sha256:cf29eaa954ec75088ab9faf79dd402f63ae43cdf31325b0664ec7c35e8b09035
  platformSourceUrl: 'oci://ghcr.io/cozystack/cozystack/cozystack-packages'
-  platformSourceRef: 'digest=sha256:b4ee831911b9c259a073f00390559f0bd5d8c78e22e48427a64ef05ed90ca008'
+  platformSourceRef: 'digest=sha256:dd8956454160be6323e7afae32e16a460ea0373a1982496dadd831b75c571129'
  # Generic variant configuration (only used when cozystackOperator.variant=generic)
  cozystack:
    # Kubernetes API server host (IP only, no protocol/port)
@@ -2,6 +2,7 @@
# Migration 26 --> 27
# Migrate monitoring resources from extra/monitoring to system/monitoring
# This migration re-labels resources so they become owned by monitoring-system HelmRelease
+# and deletes old helm release secrets so that helm does not diff old vs new chart manifests.

set -euo pipefail
@@ -35,10 +36,39 @@ relabel_resources() {
  done
}

+# Delete all helm release secrets for a given release name in a namespace.
+# Uses both label selector and name-pattern matching to ensure complete cleanup.
+delete_helm_secrets() {
+  local ns="$1"
+  local release="$2"
+
+  # Primary: delete by label selector
+  kubectl delete secrets -n "$ns" -l "name=${release},owner=helm" --ignore-not-found
+
+  # Fallback: find and delete by name pattern (in case labels were modified)
+  local remaining
+  remaining=$(kubectl get secrets -n "$ns" -o name | { grep "^secret/sh\.helm\.release\.v1\.${release}\." || true; })
+  if [ -n "$remaining" ]; then
+    echo "  Found secrets not matched by label selector, deleting by name..."
+    echo "$remaining" | while IFS= read -r secret; do
+      echo "    Deleting $secret"
+      kubectl delete -n "$ns" "$secret" --ignore-not-found
+    done
+  fi
+
+  # Verify all secrets are gone
+  remaining=$(kubectl get secrets -n "$ns" -o name | { grep "^secret/sh\.helm\.release\.v1\.${release}\." || true; })
+  if [ -n "$remaining" ]; then
+    echo "  ERROR: Failed to delete helm release secrets:"
+    echo "$remaining"
+    return 1
+  fi
+}
+
# Find all tenant namespaces with monitoring HelmRelease
echo "Finding tenant namespaces with monitoring HelmRelease..."
-NAMESPACES=$(kubectl get hr --all-namespaces -l apps.cozystack.io/application.kind=Monitoring \
-  -o jsonpath='{range .items[*]}{.metadata.namespace}{"\n"}{end}' 2>/dev/null | sort -u || true)
+NAMESPACES=$(kubectl get hr --all-namespaces -l cozystack.io/ui=true --field-selector=metadata.name=monitoring \
+  -o jsonpath='{range .items[*]}{.metadata.namespace}{"\n"}{end}' | sort -u)

if [ -z "$NAMESPACES" ]; then
  echo "No monitoring HelmReleases found in tenant namespaces, skipping migration"
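The `{ grep … || true; }` wrapping seen in these migrations matters because the scripts run under `set -euo pipefail`: `grep` exits 1 when nothing matches, which would otherwise abort the whole script. A minimal sketch of the pattern with illustrative input:

```shell
set -euo pipefail

# Under pipefail, an unmatched grep (exit status 1) would kill the script.
# Wrapping it as `{ grep ... || true; }` turns "no match" into an empty
# result instead of a fatal error.
remaining=$(printf 'secret/foo\n' | { grep '^secret/sh\.helm\.release\.' || true; })
if [ -z "$remaining" ]; then
  echo "no helm release secrets found"
fi
```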
@@ -66,7 +96,7 @@ for ns in $NAMESPACES; do
  # Step 1: Suspend the HelmRelease
  echo ""
  echo "Step 1: Suspending HelmRelease monitoring..."
-  kubectl patch hr -n "$ns" monitoring --type=merge -p '{"spec":{"suspend":true}}' 2>/dev/null || true
+  kubectl patch hr -n "$ns" monitoring --type=merge -p '{"spec":{"suspend":true}}'

  # Wait a moment for reconciliation to stop
  sleep 2

@@ -74,7 +104,7 @@ for ns in $NAMESPACES; do
  # Step 2: Delete helm secrets for the monitoring release
  echo ""
  echo "Step 2: Deleting helm secrets for monitoring release..."
-  kubectl delete secrets -n "$ns" -l name=monitoring,owner=helm --ignore-not-found
+  delete_helm_secrets "$ns" "monitoring"

  # Step 3: Relabel resources to be owned by monitoring-system
  echo ""

@@ -121,7 +151,9 @@ for ns in $NAMESPACES; do
  echo "Processing Cozystack resources..."
  relabel_resources "$ns" "workloadmonitors.cozystack.io"

-  # Step 4: Delete the suspended HelmRelease (Flux won't delete resources when HR is suspended)
+  # Step 4: Delete the suspended HelmRelease
+  # Helm secrets are already gone, so flux finalizer will find no release to uninstall
+  # and will simply remove the finalizer without deleting any resources.
  echo ""
  echo "Step 4: Deleting suspended HelmRelease monitoring..."
  kubectl delete hr -n "$ns" monitoring --ignore-not-found
@@ -5,10 +5,24 @@ set -euo pipefail

# Migrate Piraeus CRDs to piraeus-operator-crds Helm release
for crd in linstorclusters.piraeus.io linstornodeconnections.piraeus.io linstorsatelliteconfigurations.piraeus.io linstorsatellites.piraeus.io; do
-  kubectl annotate crd "$crd" meta.helm.sh/release-namespace=cozy-linstor meta.helm.sh/release-name=piraeus-operator-crds --overwrite
-  kubectl label crd "$crd" app.kubernetes.io/managed-by=Helm helm.toolkit.fluxcd.io/namespace=cozy-linstor helm.toolkit.fluxcd.io/name=piraeus-operator-crds --overwrite
+  if kubectl get crd "$crd" >/dev/null 2>&1; then
+    echo "  Relabeling CRD $crd"
+    kubectl annotate crd "$crd" meta.helm.sh/release-namespace=cozy-linstor meta.helm.sh/release-name=piraeus-operator-crds --overwrite
+    kubectl label crd "$crd" app.kubernetes.io/managed-by=Helm helm.toolkit.fluxcd.io/namespace=cozy-linstor helm.toolkit.fluxcd.io/name=piraeus-operator-crds --overwrite
+  else
+    echo "  CRD $crd not found, skipping"
+  fi
done

+# Delete old piraeus-operator helm secrets (by label and by name pattern)
+kubectl delete secret -n cozy-linstor -l name=piraeus-operator,owner=helm --ignore-not-found
+remaining=$(kubectl get secrets -n cozy-linstor -o name 2>/dev/null | { grep "^secret/sh\.helm\.release\.v1\.piraeus-operator\." || true; })
+if [ -n "$remaining" ]; then
+  echo "  Deleting remaining piraeus-operator helm secrets by name..."
+  echo "$remaining" | while IFS= read -r secret; do
+    kubectl delete -n cozy-linstor "$secret" --ignore-not-found
+  done
+fi
+
# Stamp version
kubectl create configmap -n cozy-system cozystack-version \
@@ -348,7 +348,7 @@ PVCEOF
 # --- 3g: Clone Secrets ---
 echo " --- Clone Secrets ---"
 for secret in $(kubectl -n "$NAMESPACE" get secret -o name 2>/dev/null \
-  | grep "secret/${OLD_NAME}" | grep -v "sh.helm.release"); do
+  | { grep "secret/${OLD_NAME}" || true; } | { grep -v "sh.helm.release" || true; }); do
   old_secret_name="${secret#secret/}"
   new_secret_name="${NEW_NAME}${old_secret_name#${OLD_NAME}}"
   clone_resource "$NAMESPACE" "secret" "$old_secret_name" "$new_secret_name" "$OLD_NAME" "$NEW_NAME"
@@ -357,7 +357,7 @@ PVCEOF
 # --- 3h: Clone ConfigMaps ---
 echo " --- Clone ConfigMaps ---"
 for cm in $(kubectl -n "$NAMESPACE" get configmap -o name 2>/dev/null \
-  | grep "configmap/${OLD_NAME}"); do
+  | { grep "configmap/${OLD_NAME}" || true; }); do
   old_cm_name="${cm#configmap/}"
   new_cm_name="${NEW_NAME}${old_cm_name#${OLD_NAME}}"
   clone_resource "$NAMESPACE" "configmap" "$old_cm_name" "$new_cm_name" "$OLD_NAME" "$NEW_NAME"
@@ -468,13 +468,13 @@ PVCEOF
 fi

 for secret in $(kubectl -n "$NAMESPACE" get secret -o name 2>/dev/null \
-  | grep "secret/${OLD_NAME}" | grep -v "sh.helm.release"); do
+  | { grep "secret/${OLD_NAME}" || true; } | { grep -v "sh.helm.release" || true; }); do
   old_secret_name="${secret#secret/}"
   delete_resource "$NAMESPACE" "secret" "$old_secret_name"
 done

 for cm in $(kubectl -n "$NAMESPACE" get configmap -o name 2>/dev/null \
-  | grep "configmap/${OLD_NAME}"); do
+  | { grep "configmap/${OLD_NAME}" || true; }); do
   old_cm_name="${cm#configmap/}"
   delete_resource "$NAMESPACE" "configmap" "$old_cm_name"
 done
@@ -611,6 +611,19 @@ done
 echo ""
 echo "=== Migration complete (${#INSTANCES[@]} instance(s)) ==="

+# ============================================================
+# STEP 8: Clean up orphaned mysql-rd system HelmRelease
+# ============================================================
+echo ""
+echo "--- Step 8: Clean up orphaned mysql-rd HelmRelease ---"
+if kubectl -n cozy-system get hr mysql-rd --no-headers 2>/dev/null | grep -q .; then
+  echo " [DELETE] hr/mysql-rd"
+  kubectl -n cozy-system delete hr mysql-rd --wait=false
+else
+  echo " [SKIP] hr/mysql-rd already gone"
+fi
+kubectl -n cozy-system delete secret -l "owner=helm,name=mysql-rd" --ignore-not-found

 # Stamp version
 kubectl create configmap -n cozy-system cozystack-version \
   --from-literal=version=29 --dry-run=client -o yaml | kubectl apply -f-
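The recurring `{ grep … || true; }` change above is a `set -euo pipefail` fix: a `grep` that matches nothing exits with status 1, which would abort the whole migration. A standalone sketch of the failure mode and the guard (the variable names here are illustrative, not from the repo):

```shell
#!/bin/sh
# Standalone repro: with errexit enabled, a non-matching grep (exit 1)
# would kill the script; `{ ... || true; }` forces a zero exit status
# while keeping the (possibly empty) output.
set -eu
# pipefail is a bash/zsh option; enable it only where supported:
(set -o pipefail) 2>/dev/null && set -o pipefail

names="secret/app-main
secret/app-extra"

# Safe even when the pattern matches nothing:
matches=$(printf '%s\n' "$names" | { grep '^secret/other-' || true; })

if [ -z "$matches" ]; then
  echo "no matches, script is still running"
fi
```

Without the guard, the same pipeline under `set -e`/`pipefail` would terminate the script before the `if` ever ran.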
@@ -9,8 +9,6 @@ set -euo pipefail
 OLD_PREFIX="virtual-machine"
 NEW_DISK_PREFIX="vm-disk"
 NEW_INSTANCE_PREFIX="vm-instance"
-PROTECTION_WEBHOOK_NAME="protection-webhook"
-PROTECTION_WEBHOOK_NS="protection-webhook"
 CDI_APISERVER_NS="cozy-kubevirt-cdi"
 CDI_APISERVER_DEPLOY="cdi-apiserver"
 CDI_VALIDATING_WEBHOOKS="cdi-api-datavolume-validate cdi-api-dataimportcron-validate cdi-api-populator-validate cdi-api-validate"
@@ -88,7 +86,6 @@ echo " Total: ${#INSTANCES[@]} instance(s)"
 # STEP 2: Migrate each instance
 # ============================================================
 ALL_PV_NAMES=()
-ALL_PROTECTED_RESOURCES=()

 for entry in "${INSTANCES[@]}"; do
   NAMESPACE="${entry%%/*}"
@@ -315,7 +312,7 @@ PVCEOF
 # --- 2i: Clone Secrets ---
 echo " --- Clone Secrets ---"
 kubectl -n "$NAMESPACE" get secret -o name 2>/dev/null \
-  | grep "secret/${OLD_NAME}" | grep -v "sh.helm.release" | grep -v "values" \
+  | { grep "secret/${OLD_NAME}" || true; } | { grep -v "sh.helm.release" || true; } | { grep -v "values" || true; } \
   | while IFS= read -r secret; do
     old_secret_name="${secret#secret/}"
     suffix="${old_secret_name#${OLD_NAME}}"
@@ -542,7 +539,7 @@ SVCEOF
 # --- 2q: Delete old resources ---
 echo " --- Delete old resources ---"
 kubectl -n "$NAMESPACE" get secret -o name 2>/dev/null \
-  | grep "secret/${OLD_NAME}" | grep -v "sh.helm.release" | grep -v "values" \
+  | { grep "secret/${OLD_NAME}" || true; } | { grep -v "sh.helm.release" || true; } | { grep -v "values" || true; } \
   | while IFS= read -r secret; do
     old_secret_name="${secret#secret/}"
     delete_resource "$NAMESPACE" "secret" "$old_secret_name"
@@ -564,71 +561,17 @@ SVCEOF
   delete_resource "$NAMESPACE" "secret" "$VALUES_SECRET"
 fi

-# Collect protected resources for batch deletion
+# Delete old service (if exists)
 if resource_exists "$NAMESPACE" "svc" "$OLD_NAME"; then
-  ALL_PROTECTED_RESOURCES+=("${NAMESPACE}:svc/${OLD_NAME}")
+  delete_resource "$NAMESPACE" "svc" "$OLD_NAME"
 fi
 done

 # ============================================================
-# STEP 3: Delete protected resources (Services)
+# STEP 3: Restore PV reclaim policies
 # ============================================================
 echo ""
-echo "--- Step 3: Delete protected resources ---"
-
-if [ ${#ALL_PROTECTED_RESOURCES[@]} -gt 0 ]; then
-  WEBHOOK_EXISTS=false
-  if kubectl -n "$PROTECTION_WEBHOOK_NS" get deploy "$PROTECTION_WEBHOOK_NAME" --no-headers 2>/dev/null | grep -q .; then
-    WEBHOOK_EXISTS=true
-  fi
-
-  if [ "$WEBHOOK_EXISTS" = "true" ]; then
-    echo " --- Temporarily disabling protection-webhook ---"
-
-    WEBHOOK_REPLICAS=$(kubectl -n "$PROTECTION_WEBHOOK_NS" get deploy "$PROTECTION_WEBHOOK_NAME" \
-      -o jsonpath='{.spec.replicas}' 2>/dev/null || echo "1")
-
-    echo " [SCALE] ${PROTECTION_WEBHOOK_NAME} -> 0 (was ${WEBHOOK_REPLICAS})"
-    kubectl -n "$PROTECTION_WEBHOOK_NS" scale deploy "$PROTECTION_WEBHOOK_NAME" --replicas=0
-
-    echo " [PATCH] Set failurePolicy=Ignore on ValidatingWebhookConfiguration/${PROTECTION_WEBHOOK_NAME}"
-    kubectl get validatingwebhookconfiguration "$PROTECTION_WEBHOOK_NAME" -o json | \
-      jq '.webhooks[].failurePolicy = "Ignore"' | \
-      kubectl apply -f - 2>/dev/null || true
-
-    echo " Waiting for webhook pods to terminate..."
-    kubectl -n "$PROTECTION_WEBHOOK_NS" wait --for=delete pod \
-      -l app.kubernetes.io/name=protection-webhook --timeout=60s 2>/dev/null || true
-    sleep 3
-  fi
-
-  for entry in "${ALL_PROTECTED_RESOURCES[@]}"; do
-    ns="${entry%%:*}"
-    res="${entry#*:}"
-    echo " [DELETE] ${ns}/${res}"
-    kubectl -n "$ns" delete "$res" --wait=false 2>/dev/null || true
-  done
-
-  if [ "$WEBHOOK_EXISTS" = "true" ]; then
-    echo " [PATCH] Set failurePolicy=Fail on ValidatingWebhookConfiguration/${PROTECTION_WEBHOOK_NAME}"
-    kubectl get validatingwebhookconfiguration "$PROTECTION_WEBHOOK_NAME" -o json | \
-      jq '.webhooks[].failurePolicy = "Fail"' | \
-      kubectl apply -f - 2>/dev/null || true
-
-    echo " [SCALE] ${PROTECTION_WEBHOOK_NAME} -> ${WEBHOOK_REPLICAS}"
-    kubectl -n "$PROTECTION_WEBHOOK_NS" scale deploy "$PROTECTION_WEBHOOK_NAME" \
-      --replicas="$WEBHOOK_REPLICAS"
-    echo " --- protection-webhook restored ---"
-  fi
-else
-  echo " [SKIP] No protected resources to delete"
-fi
-
-# ============================================================
-# STEP 4: Restore PV reclaim policies
-# ============================================================
-echo ""
-echo "--- Step 4: Restore PV reclaim policies ---"
+echo "--- Step 3: Restore PV reclaim policies ---"
 for pv_name in "${ALL_PV_NAMES[@]}"; do
   if [ -n "$pv_name" ]; then
     current_policy=$(kubectl get pv "$pv_name" \
@@ -643,7 +586,7 @@ for pv_name in "${ALL_PV_NAMES[@]}"; do
 done

 # ============================================================
-# STEP 5: Temporarily disable CDI datavolume webhooks
+# STEP 4: Temporarily disable CDI datavolume webhooks
 # ============================================================
 # CDI's datavolume-validate webhook rejects DataVolume creation when a PVC
 # with the same name already exists. We must disable it so that vm-disk
@@ -652,7 +595,7 @@ done
 # cdi-apiserver (which serves the webhooks), then delete webhook configs.
 # Both are restored after vm-disk HRs reconcile.
 echo ""
-echo "--- Step 5: Temporarily disable CDI webhooks ---"
+echo "--- Step 4: Temporarily disable CDI webhooks ---"

 CDI_OPERATOR_REPLICAS=$(kubectl -n "$CDI_APISERVER_NS" get deploy cdi-operator \
   -o jsonpath='{.spec.replicas}' 2>/dev/null || echo "1")
@@ -685,10 +628,10 @@ done
 sleep 2

 # ============================================================
-# STEP 6: Unsuspend vm-disk HelmReleases first
+# STEP 5: Unsuspend vm-disk HelmReleases first
 # ============================================================
 echo ""
-echo "--- Step 6: Unsuspend vm-disk HelmReleases ---"
+echo "--- Step 5: Unsuspend vm-disk HelmReleases ---"
 for entry in "${INSTANCES[@]}"; do
   ns="${entry%%/*}"
   instance="${entry#*/}"
@@ -705,7 +648,7 @@ for entry in "${INSTANCES[@]}"; do
   # Force immediate reconciliation
   echo " [TRIGGER] Reconcile ${ns}/hr/${disk_name}"
   kubectl -n "$ns" annotate hr "$disk_name" --overwrite \
-    "reconcile.fluxcd.io/requestedAt=$(date +%s)" 2>/dev/null || true
+    "reconcile.fluxcd.io/requestedAt=$(date -u +'%Y-%m-%dT%H:%M:%SZ')" 2>/dev/null || true
   fi
 done
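The `requestedAt` change above swaps a Unix epoch for an RFC3339 UTC timestamp. Flux triggers a reconcile whenever the annotation value *changes*, so either token works; the RFC3339 form matches what the `flux reconcile` CLI itself writes and is easier to read in `kubectl describe`. A standalone sketch of the two token formats (not taken from the repo):

```shell
#!/bin/sh
# Compare the two annotation-token formats used before/after the change.
epoch=$(date +%s)
rfc3339=$(date -u +'%Y-%m-%dT%H:%M:%SZ')

echo "epoch token:   $epoch"
echo "rfc3339 token: $rfc3339"

# Shape check: RFC3339 has a date, a 'T' separator, a time, and a 'Z' suffix.
case "$rfc3339" in
  [0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]T[0-9][0-9]:[0-9][0-9]:[0-9][0-9]Z)
    echo "looks like RFC3339" ;;
  *)
    echo "unexpected format" ;;
esac
```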
@@ -729,12 +672,12 @@ for entry in "${INSTANCES[@]}"; do
 done

 # ============================================================
-# STEP 7: Restore CDI webhooks
+# STEP 6: Restore CDI webhooks
 # ============================================================
 # Scale cdi-operator and cdi-apiserver back up.
 # cdi-apiserver will recreate webhook configurations automatically on start.
 echo ""
-echo "--- Step 7: Restore CDI webhooks ---"
+echo "--- Step 6: Restore CDI webhooks ---"

 echo " [SCALE] cdi-operator -> ${CDI_OPERATOR_REPLICAS}"
 kubectl -n "$CDI_APISERVER_NS" scale deploy cdi-operator \
@@ -749,10 +692,10 @@ kubectl -n "$CDI_APISERVER_NS" rollout status deploy "$CDI_APISERVER_DEPLOY" --t
 echo " --- CDI webhooks restored ---"

 # ============================================================
-# STEP 8: Unsuspend vm-instance HelmReleases
+# STEP 7: Unsuspend vm-instance HelmReleases
 # ============================================================
 echo ""
-echo "--- Step 8: Unsuspend vm-instance HelmReleases ---"
+echo "--- Step 7: Unsuspend vm-instance HelmReleases ---"
 for entry in "${INSTANCES[@]}"; do
   ns="${entry%%/*}"
   instance="${entry#*/}"
@@ -772,6 +715,19 @@ done
 echo ""
 echo "=== Migration complete (${#INSTANCES[@]} instance(s)) ==="

+# ============================================================
+# STEP 8: Clean up orphaned virtual-machine-rd system HelmRelease
+# ============================================================
+echo ""
+echo "--- Step 8: Clean up orphaned virtual-machine-rd HelmRelease ---"
+if kubectl -n cozy-system get hr virtual-machine-rd --no-headers 2>/dev/null | grep -q .; then
+  echo " [DELETE] hr/virtual-machine-rd"
+  kubectl -n cozy-system delete hr virtual-machine-rd --wait=false
+else
+  echo " [SKIP] hr/virtual-machine-rd already gone"
+fi
+kubectl -n cozy-system delete secret -l "owner=helm,name=virtual-machine-rd" --ignore-not-found

 # Stamp version
 kubectl create configmap -n cozy-system cozystack-version \
   --from-literal=version=30 --dry-run=client -o yaml | kubectl apply -f-
packages/core/platform/images/migrations/migrations/33 (new executable file, 30 lines)
@@ -0,0 +1,30 @@
+#!/bin/sh
+# Migration 33 --> 34
+# Clean up orphaned system -rd HelmReleases left after application renames.
+#
+# These HelmReleases reference ExternalArtifacts that no longer exist:
+# ferretdb-rd -> replaced by mongodb-rd
+# mysql-rd -> replaced by mariadb-rd (migration 28 handled user HRs only)
+# virtual-machine-rd -> replaced by vm-disk-rd + vm-instance-rd (migration 29 handled user HRs only)
+#
+# Idempotent: safe to re-run.
+
+set -euo pipefail
+
+echo "=== Cleaning up orphaned -rd HelmReleases ==="
+
+for hr_name in ferretdb-rd mysql-rd virtual-machine-rd; do
+  if kubectl -n cozy-system get hr "$hr_name" --no-headers 2>/dev/null | grep -q .; then
+    echo " [DELETE] hr/${hr_name}"
+    kubectl -n cozy-system delete hr "$hr_name" --wait=false
+  else
+    echo " [SKIP] hr/${hr_name} already gone"
+  fi
+  kubectl -n cozy-system delete secret -l "owner=helm,name=${hr_name}" --ignore-not-found
+done
+
+echo "=== Cleanup complete ==="
+
+# Stamp version
+kubectl create configmap -n cozy-system cozystack-version \
+  --from-literal=version=34 --dry-run=client -o yaml | kubectl apply -f-
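The existence test used in this migration, `kubectl get … --no-headers 2>/dev/null | grep -q .`, relies on `grep -q .` exiting 0 only when its input contains at least one non-empty line. A standalone illustration with the kubectl output simulated (the function and sample strings are hypothetical):

```shell
#!/bin/sh
# Standalone illustration: `grep -q .` distinguishes "resource listed"
# (some output) from "no output at all" without parsing the line.
exists_check() {
  # $1 simulates the output of `kubectl get ... --no-headers`
  if printf '%s' "$1" | grep -q .; then
    echo "present"
  else
    echo "already gone"
  fi
}

exists_check "mysql-rd   True   Ready"   # present
exists_check ""                          # already gone
```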
@@ -24,7 +24,7 @@ if [ "$CURRENT_VERSION" -ge "$TARGET_VERSION" ]; then
 fi

 # Run migrations sequentially from current version to target version
-for i in $(seq $((CURRENT_VERSION + 1)) $TARGET_VERSION); do
+for i in $(seq $CURRENT_VERSION $((TARGET_VERSION - 1))); do
   if [ -f "/migrations/$i" ]; then
     echo "Running migration $i"
     chmod +x /migrations/$i
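The loop-bounds fix above pins down the numbering convention: migration script N takes the cluster *from* version N to N+1 (migration file 33 stamps version 34), so the runner must start at the current version and stop one short of the target. A sketch with assumed version numbers:

```shell
#!/bin/sh
# Sketch (assumed values): script N upgrades version N -> N+1, so an
# upgrade from 33 to 34 must run exactly one script, number 33.
CURRENT_VERSION=33
TARGET_VERSION=34

ran=""
for i in $(seq "$CURRENT_VERSION" $((TARGET_VERSION - 1))); do
  ran="$ran $i"
done
echo "would run:$ran"   # would run: 33
```

The previous bounds, `seq $((CURRENT_VERSION + 1)) $TARGET_VERSION`, would have run script 34 instead and skipped 33 entirely.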
@@ -1,19 +0,0 @@
----
-apiVersion: cozystack.io/v1alpha1
-kind: PackageSource
-metadata:
-  name: cozystack.cozystack-scheduler
-spec:
-  sourceRef:
-    kind: OCIRepository
-    name: cozystack-packages
-    namespace: cozy-system
-    path: /
-  variants:
-    - name: default
-      components:
-        - name: cozystack-scheduler
-          path: system/cozystack-scheduler
-          install:
-            namespace: kube-system
-            releaseName: cozystack-scheduler
@@ -155,6 +155,5 @@
 {{include "cozystack.platform.package.default" (list "cozystack.bootbox" $) }}
 {{- end }}
 {{include "cozystack.platform.package.optional.default" (list "cozystack.hetzner-robotlb" $) }}
-{{include "cozystack.platform.package.optional.default" (list "cozystack.cozystack-scheduler" $) }}

 {{- end }}
@@ -6,6 +6,8 @@ kind: ConfigMap
 metadata:
   name: cozystack-version
   namespace: {{ .Release.Namespace }}
+  annotations:
+    helm.sh/resource-policy: keep
 data:
   version: {{ .Values.migrations.targetVersion | quote }}
 {{- end }}
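The new `helm.sh/resource-policy: keep` annotation tells Helm to leave the cozystack-version ConfigMap in place when the release would otherwise delete it, so the recorded migration version survives. A toy model of that decision, explicitly not Helm's actual code:

```shell
#!/bin/sh
# Toy model (assumption, not Helm internals): on uninstall, resources
# annotated helm.sh/resource-policy=keep are skipped instead of deleted.
decide() {
  case "$1" in
    *helm.sh/resource-policy=keep*) echo "skip" ;;
    *) echo "delete" ;;
  esac
}

decide "helm.sh/resource-policy=keep"   # skip
decide ""                               # delete
```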
@@ -5,8 +5,8 @@ sourceRef:
   path: /
 migrations:
   enabled: false
-  image: ghcr.io/cozystack/cozystack/platform-migrations:v1.0.0-rc.1@sha256:21a09c9f8dfd0a0c9b8c14c70029a39bfce021c66f1d4cacad9764c35dce6e8f
-  targetVersion: 33
+  image: ghcr.io/cozystack/cozystack/platform-migrations:v1.0.2@sha256:d43c6f26c65edd448f586a29969ff76718338f1f1f78b24d3ad54c6c8977c748
+  targetVersion: 34
 # Bundle deployment configuration
 bundles:
   system:
@@ -1,2 +1,2 @@
 e2e:
-  image: ghcr.io/cozystack/cozystack/e2e-sandbox:v1.0.0-rc.1@sha256:0eae9f519669667d60b160ebb93c127843c470ad9ca3447fceaa54604503a7ba
+  image: ghcr.io/cozystack/cozystack/e2e-sandbox:v1.0.2@sha256:0eae9f519669667d60b160ebb93c127843c470ad9ca3447fceaa54604503a7ba
@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/matchbox:v1.0.0-rc.1@sha256:3306de19f1ad49a02c735d16b82d7c2ec015c8e0563f120f216274e9a3804431
+ghcr.io/cozystack/cozystack/matchbox:v1.0.2@sha256:a093fdcd86eb8d4fbff740dd0fa2f5cd87247fa46dbae14cbbd391851270f9f0
@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/objectstorage-sidecar:v1.0.0-rc.1@sha256:235b194a531b70e266a10ef78d2955d19f5b659513f23d8b3cfbbc0dff7fc1c0
+ghcr.io/cozystack/cozystack/objectstorage-sidecar:v1.0.2@sha256:2a3595cd88b30af55b2000d3ca204899beecef0012b0e0402754c3914aad1f7f
@@ -1,5 +1,5 @@
 backupController:
-  image: "ghcr.io/cozystack/cozystack/backup-controller:v1.0.0-rc.1@sha256:0bb4173bdcd3d917a7bd358ecc2c6a053a06ab0bd1fcdb89d1016a66173e6dfb"
+  image: "ghcr.io/cozystack/cozystack/backup-controller:v1.0.2@sha256:ebf42dfc4c0d857cc2cb4339e556703d34d0c4086de5e7f78a771dc423da535d"
   replicas: 2
   debug: false
   metrics:
@@ -1,5 +1,5 @@
 backupStrategyController:
-  image: "ghcr.io/cozystack/cozystack/backupstrategy-controller:v1.0.0-rc.1@sha256:c2d975574ea9edcd785b533e01add37909959a64ef815529162dfe1f472ea702"
+  image: "ghcr.io/cozystack/cozystack/backupstrategy-controller:v1.0.2@sha256:485d9994c672776f7ce1a74243d4b3073f6575ddfdabb8587797053b43230d67"
   replicas: 2
   debug: false
   metrics:
@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/s3manager:v0.5.0@sha256:291427de7db54a1d19dc9c2c807bdcc664a14caa9538786f31317e8c01a4a008
+ghcr.io/cozystack/cozystack/s3manager:v0.5.0@sha256:279008f87460d709e99ed25ee8a1e4568a290bb9afa0e3dd3a06d524163a132b
@@ -1,3 +1,3 @@
 cozystackAPI:
-  image: ghcr.io/cozystack/cozystack/cozystack-api:v1.0.0-rc.1@sha256:9195ac6ef1d29aba3e1903f29a9aeeb72f5bf7ba3dbd7ebd866a06a4433e5915
+  image: ghcr.io/cozystack/cozystack/cozystack-api:v1.0.2@sha256:3ce477e373aee817fc9746ebf33ae48ff7abc68a71cef587a2a1841e04759ea3
   replicas: 2
@@ -1,4 +1,4 @@
 cozystackController:
-  image: ghcr.io/cozystack/cozystack/cozystack-controller:v1.0.0-rc.1@sha256:0ac4b6d55daf79e2a7a045cd21b48ba598ac2628442fd9c4d0556cf9af6b83be
+  image: ghcr.io/cozystack/cozystack/cozystack-controller:v1.0.2@sha256:c2e9c3f68124465d5b01dd8179d089689e5f130f685b92483a6a3167dd87e70f
   debug: false
   disableTelemetry: false
@@ -1,3 +0,0 @@
-apiVersion: v2
-name: cozy-cozystack-scheduler
-version: 0.1.0
@@ -1,10 +0,0 @@
-export NAME=cozystack-scheduler
-export NAMESPACE=kube-system
-
-include ../../../hack/package.mk
-
-update:
-	rm -rf crds templates values.yaml Chart.yaml
-	tag=$$(git ls-remote --tags --sort="v:refname" https://github.com/cozystack/cozystack-scheduler | awk -F'[/^]' 'END{print $$3}') && \
-	curl -sSL https://github.com/cozystack/cozystack-scheduler/archive/refs/tags/$${tag}.tar.gz | \
-	tar xzvf - --strip 2 cozystack-scheduler-$${tag#*v}/chart
File diff suppressed because it is too large
@@ -1,9 +0,0 @@
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRole
-metadata:
-  name: cozystack-scheduler
-rules:
-  - apiGroups: ["cozystack.io"]
-    resources:
-      - schedulingclasses
-    verbs: ["get", "list", "watch"]
@@ -1,38 +0,0 @@
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
-  name: cozystack-scheduler:kube-scheduler
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: system:kube-scheduler
-subjects:
-  - kind: ServiceAccount
-    name: cozystack-scheduler
-    namespace: {{ .Release.Namespace }}
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
-  name: cozystack-scheduler:volume-scheduler
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: system:volume-scheduler
-subjects:
-  - kind: ServiceAccount
-    name: cozystack-scheduler
-    namespace: {{ .Release.Namespace }}
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
-  name: cozystack-scheduler
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: cozystack-scheduler
-subjects:
-  - kind: ServiceAccount
-    name: cozystack-scheduler
-    namespace: {{ .Release.Namespace }}
@@ -1,54 +0,0 @@
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: cozystack-scheduler-config
-  namespace: {{ .Release.Namespace }}
-data:
-  scheduler-config.yaml: |
-    apiVersion: kubescheduler.config.k8s.io/v1
-    kind: KubeSchedulerConfiguration
-    leaderElection:
-      leaderElect: true
-      resourceNamespace: {{ .Release.Namespace }}
-      resourceName: cozystack-scheduler
-    profiles:
-      - schedulerName: cozystack-scheduler
-        plugins:
-          preFilter:
-            disabled:
-              - name: InterPodAffinity
-              - name: NodeAffinity
-              - name: PodTopologySpread
-            enabled:
-              - name: CozystackInterPodAffinity
-              - name: CozystackNodeAffinity
-              - name: CozystackPodTopologySpread
-              - name: CozystackSchedulingClass
-          filter:
-            disabled:
-              - name: InterPodAffinity
-              - name: NodeAffinity
-              - name: PodTopologySpread
-            enabled:
-              - name: CozystackInterPodAffinity
-              - name: CozystackNodeAffinity
-              - name: CozystackPodTopologySpread
-              - name: CozystackSchedulingClass
-          preScore:
-            disabled:
-              - name: InterPodAffinity
-              - name: NodeAffinity
-              - name: PodTopologySpread
-            enabled:
-              - name: CozystackInterPodAffinity
-              - name: CozystackNodeAffinity
-              - name: CozystackPodTopologySpread
-          score:
-            disabled:
-              - name: InterPodAffinity
-              - name: NodeAffinity
-              - name: PodTopologySpread
-            enabled:
-              - name: CozystackInterPodAffinity
-              - name: CozystackNodeAffinity
-              - name: CozystackPodTopologySpread
@@ -1,37 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: cozystack-scheduler
-  namespace: {{ .Release.Namespace }}
-spec:
-  replicas: {{ .Values.replicas }}
-  selector:
-    matchLabels:
-      app: cozystack-scheduler
-  template:
-    metadata:
-      labels:
-        app: cozystack-scheduler
-    spec:
-      serviceAccountName: cozystack-scheduler
-      containers:
-        - name: cozystack-scheduler
-          image: {{ .Values.image }}
-          command:
-            - /cozystack-scheduler
-            - --config=/etc/kubernetes/scheduler-config.yaml
-          livenessProbe:
-            httpGet:
-              path: /healthz
-              port: 10259
-              scheme: HTTPS
-            initialDelaySeconds: 15
-          volumeMounts:
-            - name: config
-              mountPath: /etc/kubernetes/scheduler-config.yaml
-              subPath: scheduler-config.yaml
-              readOnly: true
-      volumes:
-        - name: config
-          configMap:
-            name: cozystack-scheduler-config
@@ -1,40 +0,0 @@
-apiVersion: rbac.authorization.k8s.io/v1
-kind: RoleBinding
-metadata:
-  name: cozystack-scheduler:extension-apiserver-authentication-reader
-  namespace: kube-system
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: Role
-  name: extension-apiserver-authentication-reader
-subjects:
-  - kind: ServiceAccount
-    name: cozystack-scheduler
-    namespace: {{ .Release.Namespace }}
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: Role
-metadata:
-  name: cozystack-scheduler:leader-election
-  namespace: {{ .Release.Namespace }}
-rules:
-  - apiGroups: ["coordination.k8s.io"]
-    resources: ["leases"]
-    verbs: ["create", "get", "list", "update", "watch"]
-  - apiGroups: ["coordination.k8s.io"]
-    resources: ["leasecandidates"]
-    verbs: ["create", "get", "list", "update", "watch"]
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: RoleBinding
-metadata:
-  name: cozystack-scheduler:leader-election
-  namespace: {{ .Release.Namespace }}
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: Role
-  name: cozystack-scheduler:leader-election
-subjects:
-  - kind: ServiceAccount
-    name: cozystack-scheduler
-    namespace: {{ .Release.Namespace }}
@@ -1,5 +0,0 @@
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: cozystack-scheduler
-  namespace: {{ .Release.Namespace }}
@@ -1,2 +0,0 @@
-image: ghcr.io/cozystack/cozystack/cozystack-scheduler:v0.1.0@sha256:5f7150c82177478467ff80628acb5a400291aff503364aa9e26fc346d79a73cf
-replicas: 1
@@ -6,7 +6,7 @@ FROM node:${NODE_VERSION}-alpine AS openapi-k8s-toolkit-builder
 RUN apk add git
 WORKDIR /src
 # release/1.4.0
-ARG COMMIT=c67029cc7b7495c65ee0406033576e773a73bb01
+ARG COMMIT=d6b9e4ad0d1eb9d3730f7f0c664792c8dda3214d
 RUN wget -O- https://github.com/PRO-Robotech/openapi-k8s-toolkit/archive/${COMMIT}.tar.gz | tar -xzvf- --strip-components=1

 COPY openapi-k8s-toolkit/patches /patches
@@ -0,0 +1,37 @@
diff --git a/src/localTypes/formExtensions.ts b/src/localTypes/formExtensions.ts
--- a/src/localTypes/formExtensions.ts
+++ b/src/localTypes/formExtensions.ts
@@ -59,2 +59,4 @@
   relatedValuePath?: string
+  allowEmpty?: boolean
+  persistType?: 'str' | 'number' | 'arr' | 'obj'
 }
diff --git a/src/components/molecules/BlackholeForm/molecules/FormListInput/FormListInput.tsx b/src/components/molecules/BlackholeForm/molecules/FormListInput/FormListInput.tsx
--- a/src/components/molecules/BlackholeForm/molecules/FormListInput/FormListInput.tsx
+++ b/src/components/molecules/BlackholeForm/molecules/FormListInput/FormListInput.tsx
@@ -149,3 +149,10 @@
 }, [relatedPath, form, arrName, fixedName, relatedFieldValue, onValuesChangeCallBack, isTouchedPeristed])

+  // When allowEmpty is set, auto-persist the field so the BFF preserves empty values
+  useEffect(() => {
+    if (customProps.allowEmpty) {
+      persistedControls.onPersistMark(persistName || name, customProps.persistType ?? 'str')
+    }
+  }, [customProps.allowEmpty, customProps.persistType, persistedControls, persistName, name])
+
 const uri = prepareTemplate({
@@ -267,5 +274,14 @@
   validateTrigger="onBlur"
   hasFeedback={designNewLayout ? { icons: feedbackIcons } : true}
   style={{ flex: 1 }}
+  normalize={(value: unknown) => {
+    if (customProps.allowEmpty && (value === undefined || value === null)) {
+      if (customProps.persistType === 'number') return 0
+      if (customProps.persistType === 'arr') return []
+      if (customProps.persistType === 'obj') return {}
+      return ''
+    }
+    return value
+  }}
 >
 <Select
@@ -1,49 +0,0 @@
diff --git a/src/components/molecules/BlackholeForm/molecules/FormListInput/FormListInput.tsx b/src/components/molecules/BlackholeForm/molecules/FormListInput/FormListInput.tsx
index d5e5230..9038dbb 100644
--- a/src/components/molecules/BlackholeForm/molecules/FormListInput/FormListInput.tsx
+++ b/src/components/molecules/BlackholeForm/molecules/FormListInput/FormListInput.tsx
@@ -259,14 +259,15 @@ export const FormListInput: FC<TFormListInputProps> = ({
             <PersistedCheckbox formName={persistName || name} persistedControls={persistedControls} type="arr" />
           </Flex>
         </Flex>
-        <ResetedFormItem
-          key={arrKey !== undefined ? arrKey : Array.isArray(name) ? name.slice(-1)[0] : name}
-          name={arrName || fixedName}
-          rules={[getRequiredRule(forceNonRequired === false && !!required?.includes(getStringByName(name)), name)]}
-          validateTrigger="onBlur"
-          hasFeedback={designNewLayout ? { icons: feedbackIcons } : true}
-        >
-          <Flex gap={8} align="center">
+        <Flex gap={8} align="center">
+          <ResetedFormItem
+            key={arrKey !== undefined ? arrKey : Array.isArray(name) ? name.slice(-1)[0] : name}
+            name={arrName || fixedName}
+            rules={[getRequiredRule(forceNonRequired === false && !!required?.includes(getStringByName(name)), name)]}
+            validateTrigger="onBlur"
+            hasFeedback={designNewLayout ? { icons: feedbackIcons } : true}
+            style={{ flex: 1 }}
+          >
             <Select
               mode={customProps.mode}
               placeholder="Select"
@@ -277,13 +278,13 @@ export const FormListInput: FC<TFormListInputProps> = ({
               showSearch
               style={{ width: '100%' }}
             />
-            {relatedValueTooltip && (
-              <Tooltip title={relatedValueTooltip}>
-                <QuestionCircleOutlined />
-              </Tooltip>
-            )}
-          </Flex>
-        </ResetedFormItem>
+          </ResetedFormItem>
+          {relatedValueTooltip && (
+            <Tooltip title={relatedValueTooltip}>
+              <QuestionCircleOutlined />
+            </Tooltip>
+          )}
+        </Flex>
       </HiddenContainer>
     )
 }
@@ -0,0 +1,29 @@
diff --git a/src/components/organisms/DynamicComponents/molecules/SecretBase64Plain/SecretBase64Plain.tsx b/src/components/organisms/DynamicComponents/molecules/SecretBase64Plain/SecretBase64Plain.tsx
--- a/src/components/organisms/DynamicComponents/molecules/SecretBase64Plain/SecretBase64Plain.tsx
+++ b/src/components/organisms/DynamicComponents/molecules/SecretBase64Plain/SecretBase64Plain.tsx
@@ -145,6 +145,12 @@
       <Styled.DisabledInput
         $hidden={effectiveHidden}
         onClick={e => handleInputClick(e, effectiveHidden, value)}
+        onCopy={e => {
+          if (!effectiveHidden) {
+            e.preventDefault()
+            e.clipboardData?.setData('text/plain', value)
+          }
+        }}
         value={shownValue}
         readOnly
       />
@@ -161,6 +167,12 @@
       <Styled.DisabledInput
         $hidden={effectiveHidden}
        onClick={e => handleInputClick(e, effectiveHidden, value)}
+        onCopy={e => {
+          if (!effectiveHidden) {
+            e.preventDefault()
+            e.clipboardData?.setData('text/plain', value)
+          }
+        }}
         value={shownValue}
         readOnly
       />
@@ -1,6 +1,6 @@
 {{- $brandingConfig := .Values._cluster.branding | default dict }}

-{{- $tenantText := "v1.0.0-rc.1" }}
+{{- $tenantText := "v1.0.2" }}
 {{- $footerText := "Cozystack" }}
 {{- $titleText := "Cozystack Dashboard" }}
 {{- $logoText := "" }}
packages/system/dashboard/templates/flowschema.yaml (new file, 20 lines)
@@ -0,0 +1,20 @@
+apiVersion: flowcontrol.apiserver.k8s.io/v1
+kind: FlowSchema
+metadata:
+  name: cozy-dashboard-exempt
+spec:
+  matchingPrecedence: 2
+  priorityLevelConfiguration:
+    name: exempt
+  rules:
+    - subjects:
+        - kind: ServiceAccount
+          serviceAccount:
+            name: incloud-web-web
+            namespace: {{ .Release.Namespace }}
+      resourceRules:
+        - verbs: ["*"]
+          apiGroups: ["*"]
+          resources: ["*"]
+          namespaces: ["*"]
+          clusterScope: true
@@ -136,7 +136,7 @@ spec:
         - name: CUSTOMIZATION_NAVIGATION_RESOURCE_PLURAL
           value: navigations
         - name: CUSTOMIZATION_SIDEBAR_FALLBACK_ID
-          value: stock-project-api-table
+          value: ""
         - name: CUSTOMIZATION_BREADCRUMBS_FALLBACK_ID
           value: stock-project-api-table
         - name: INSTANCES_API_GROUP
@@ -1,6 +1,6 @@
 openapiUI:
-  image: ghcr.io/cozystack/cozystack/openapi-ui:v1.0.0-rc.1@sha256:26e787b259ab8722d61f89e07eda200ef2180ed10b0df8596d75bba15526feb0
+  image: ghcr.io/cozystack/cozystack/openapi-ui:v1.0.2@sha256:9b8b357eb49c4dfc7502047117d18559281e60e853793277f339fd73e4ab9102
 openapiUIK8sBff:
-  image: ghcr.io/cozystack/cozystack/openapi-ui-k8s-bff:v1.0.0-rc.1@sha256:0f508427bfa5a650eda6c5ef01ea32a586ac485a54902d7649ec49cc84f676f7
+  image: ghcr.io/cozystack/cozystack/openapi-ui-k8s-bff:v1.0.2@sha256:c938fee904acd948800d4dc5e121c4c5cd64cb4a3160fb8d2f9dbff0e5168740
 tokenProxy:
-  image: ghcr.io/cozystack/cozystack/token-proxy:v1.0.0-rc.1@sha256:2e280991e07853ea48f97b0a42946afffa10d03d6a83d41099ed83e6ffc94fdc
+  image: ghcr.io/cozystack/cozystack/token-proxy:v1.0.2@sha256:2e280991e07853ea48f97b0a42946afffa10d03d6a83d41099ed83e6ffc94fdc
@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/grafana-dashboards:v1.0.0-rc.1@sha256:7a3c9af59f8d74d5a23750bbc845c7de64610dbd4d4f84011e10be037b3ce2a0
+ghcr.io/cozystack/cozystack/grafana-dashboards:v1.0.2@sha256:7a3c9af59f8d74d5a23750bbc845c7de64610dbd4d4f84011e10be037b3ce2a0

@@ -3,7 +3,7 @@ kamaji:
   deploy: false
   image:
     pullPolicy: IfNotPresent
-    tag: v1.0.0-rc.1@sha256:0ca47c0d72a198f52f029bea30e89557680eac65c7914369b092d8ed9cc9997b
+    tag: v1.0.2@sha256:50db517ebe7698083dd32223a96c987b6ed0c88d3a093969beb571e4a96d18e4
     repository: ghcr.io/cozystack/cozystack/kamaji
   resources:
     limits:
@@ -13,4 +13,4 @@ kamaji:
       cpu: 100m
       memory: 100Mi
   extraArgs:
-    - --migrate-image=ghcr.io/cozystack/cozystack/kamaji:v1.0.0-rc.1@sha256:0ca47c0d72a198f52f029bea30e89557680eac65c7914369b092d8ed9cc9997b
+    - --migrate-image=ghcr.io/cozystack/cozystack/kamaji:v1.0.2@sha256:50db517ebe7698083dd32223a96c987b6ed0c88d3a093969beb571e4a96d18e4

@@ -1,4 +1,5 @@
 {{- $host := index .Values._cluster "root-host" }}
+{{- $ingressHost := .Values.ingress.host | default (printf "keycloak.%s" $host) }}
 {{- $solver := (index .Values._cluster "solver") | default "http01" }}
 {{- $clusterIssuer := (index .Values._cluster "issuer-name") | default "letsencrypt-prod" }}
 {{- $exposeIngress := (index .Values._cluster "expose-ingress") | default "tenant-root" }}
@@ -19,10 +20,10 @@ spec:
   ingressClassName: {{ $exposeIngress }}
   tls:
     - hosts:
-        - keycloak.{{ $host }}
+        - {{ $ingressHost }}
       secretName: web-tls
   rules:
-    - host: keycloak.{{ $host }}
+    - host: {{ $ingressHost }}
      http:
        paths:
          - path: /

@@ -1,4 +1,5 @@
 {{- $host := index .Values._cluster "root-host" }}
+{{- $ingressHost := .Values.ingress.host | default (printf "keycloak.%s" $host) }}
 {{- $clusterDomain := (index .Values._cluster "cluster-domain") | default "cozy.local" }}
 
 {{- $existingPassword := lookup "v1" "Secret" "cozy-keycloak" (printf "%s-credentials" .Release.Name) }}
@@ -81,8 +82,10 @@ spec:
             value: "ispn"
           - name: KC_CACHE_STACK
             value: "kubernetes"
-          - name: KC_PROXY
-            value: "edge"
+          - name: KC_PROXY_HEADERS
+            value: "xforwarded"
+          - name: KC_HTTP_ENABLED
+            value: "true"
           - name: KEYCLOAK_ADMIN
             value: admin
           - name: KEYCLOAK_ADMIN_PASSWORD
@@ -120,7 +123,7 @@ spec:
           - name: KC_FEATURES
             value: "docker"
           - name: KC_HOSTNAME
-            value: https://keycloak.{{ $host }}
+            value: https://{{ $ingressHost }}
          - name: JAVA_OPTS_APPEND
            value: "-Djgroups.dns.query=keycloak-headless.cozy-keycloak.svc.{{ $clusterDomain }}"
          ports:

@@ -1,6 +1,10 @@
 image: quay.io/keycloak/keycloak:26.0.4
 
 ingress:
+  # Custom hostname for the Keycloak Ingress.
+  # If set, this value will be used as the Ingress hostname (e.g., "auth.example.com").
+  # If left empty, defaults to "keycloak.<root-host>" based on the cluster root-host setting.
+  host: ""
   annotations:
     nginx.ingress.kubernetes.io/affinity: "cookie"
     nginx.ingress.kubernetes.io/session-cookie-expires: "86400"

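Tying the keycloak changes together: per the release notes, setting `ingress.host` in the cozy-keycloak chart values overrides the default `keycloak.<root-host>` hostname for both the Ingress resource and the `KC_HOSTNAME` environment variable. A minimal values override might look like the sketch below (the hostname `auth.example.com` is an illustrative placeholder, not a value from this changeset):

```yaml
# Hypothetical cozy-keycloak values override.
# ingress.host feeds both the Ingress rule/TLS host and KC_HOSTNAME;
# leaving it as "" keeps the default keycloak.<root-host> behavior.
ingress:
  host: "auth.example.com"
```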
@@ -1,4 +1,4 @@
 portSecurity: true
 routes: ""
-image: ghcr.io/cozystack/cozystack/kubeovn-plunger:v1.0.0-rc.1@sha256:6be9aa4b2dda15bf7300bd961cbc71c8bbf9ce97bc3cf613ef5a012d329b4e70
+image: ghcr.io/cozystack/cozystack/kubeovn-plunger:v1.0.2@sha256:893d385b0831171cef25750b500f4d8a3853df3df3f3baabcbf2a7842235941e
 ovnCentralName: ovn-central

@@ -1,3 +1,3 @@
 portSecurity: true
 routes: ""
-image: ghcr.io/cozystack/cozystack/kubeovn-webhook:v1.0.0-rc.1@sha256:e18f9fd679e38f65362a8d0042f25468272f6d081136ad47027168d8e7e07a4a
+image: ghcr.io/cozystack/cozystack/kubeovn-webhook:v1.0.2@sha256:e18f9fd679e38f65362a8d0042f25468272f6d081136ad47027168d8e7e07a4a

@@ -1,3 +1,3 @@
 storageClass: replicated
 csiDriver:
-  image: ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:604561e23df1b8eb25c24cf73fd93c7aaa6d1e7c56affbbda5c6f0f83424e4b1
+  image: ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:434aa3b8e2a3cbf6681426b174e1c4fde23bafd12a6cccd046b5cb1749092ec4

@@ -1,5 +1,5 @@
 lineageControllerWebhook:
-  image: ghcr.io/cozystack/cozystack/lineage-controller-webhook:v1.0.0-rc.1@sha256:473ed28fcd7ddc35319a0fd33dd0fff3e56491b572677545a6e317b53578c53d
+  image: ghcr.io/cozystack/cozystack/lineage-controller-webhook:v1.0.2@sha256:0720cf65d7a74727020fd90649ab954e89b8e5e3db21fa3c83558029557ade7f
 debug: false
 localK8sAPIEndpoint:
   enabled: true

@@ -1,7 +1,7 @@
 piraeusServer:
   image:
     repository: ghcr.io/cozystack/cozystack/piraeus-server
-    tag: 1.32.3@sha256:59806529a090cb42e2fa2696e09814282b80e76a299b5fe9feec46772edd6876
+    tag: 1.32.3@sha256:aa97f39d90c0726b587f0a376504f13d1f308adeb42db7d98cec9ac7de237361
 # Talos-specific workarounds (disable for generic Linux like Ubuntu/Debian)
 talos:
   enabled: true
@@ -13,4 +13,4 @@ linstor:
 linstorCSI:
   image:
     repository: ghcr.io/cozystack/cozystack/linstor-csi
-    tag: v1.10.5@sha256:6e6cf48cb994f3918df946e02ec454ac64916678b3e60d78c136b431f1a26155
+    tag: v1.10.5@sha256:3d93a5f30923815c7d7f2de106f88956ebcf9ed52d6f67a7cb714fb571bd7378

@@ -1,3 +1,3 @@
 objectstorage:
   controller:
-    image: "ghcr.io/cozystack/cozystack/objectstorage-controller:v1.0.0-rc.1@sha256:b4c972769afda76c48b58e7acf0ac66a0abf16a622f245c60338f432872f640a"
+    image: "ghcr.io/cozystack/cozystack/objectstorage-controller:v1.0.2@sha256:e40e94f3014cfd04cce4230597315a1acfcca2daa8051b987614d0c05da6d928"

@@ -177,7 +177,7 @@ seaweedfs:
   bucketClassName: "seaweedfs"
   region: ""
 sidecar:
-  image: "ghcr.io/cozystack/cozystack/objectstorage-sidecar:v1.0.0-rc.1@sha256:235b194a531b70e266a10ef78d2955d19f5b659513f23d8b3cfbbc0dff7fc1c0"
+  image: "ghcr.io/cozystack/cozystack/objectstorage-sidecar:v1.0.2@sha256:2a3595cd88b30af55b2000d3ca204899beecef0012b0e0402754c3914aad1f7f"
 certificates:
   commonName: "SeaweedFS CA"
   ipAddresses: []