Compare commits


3 Commits

Author SHA1 Message Date
IvanHunters
d21373e780 fix(migration): delete helm release secrets to protect keycloak data
Replace HelmRelease annotation approach with helm secret deletion.
Annotating HelmReleases with resource-policy=keep does not prevent
FluxCD from running helm uninstall when spec.chart changes to
spec.chartRef. Deleting helm release secrets ensures FluxCD has
nothing to uninstall and performs helm install instead, adopting
existing resources.

Update manual migration script with the same protection and fix
error handling for cozy-keycloak namespace check.

Signed-off-by: IvanHunters <xorokhotnikov@gmail.com>
2026-03-02 21:00:21 +03:00
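The cleanup this commit describes can be sketched as a dry run (release names are from the commit message; the `echo` prefix keeps it from touching a cluster — drop it to actually run; Helm 3 stores release state in Secrets named `sh.helm.release.v1.<release>.v<N>`, labeled `owner=helm` and `name=<release>`):

```shell
# Dry-run sketch of the helm release secret cleanup described above.
# Helm 3 labels its release-state Secrets with owner=helm and name=<release>,
# so a label selector finds every revision of each release at once.
KUBECTL="echo kubectl"   # drop the "echo" to actually run
for release in keycloak keycloak-operator keycloak-configure; do
  $KUBECTL delete secrets -n cozy-keycloak \
    -l "name=${release},owner=helm" --ignore-not-found
done
```

With the secrets gone, FluxCD finds no prior release record and performs `helm install`, adopting the existing in-cluster resources instead of uninstalling them.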
IvanHunters
87fca4dfe2 fix(migration): protect keycloak HelmReleases during v1.0 upgrade
In v0.41.x keycloak HelmReleases were created directly by the platform
chart (helmreleases.yaml). In v1.0 this template was removed and keycloak
is managed via Package API. Without protection, Helm deletes these
HelmReleases during upgrade, triggering FluxCD to run helm uninstall and
destroying all keycloak data (credentials, realm config, PostgreSQL DB).

Add migration 34 that annotates keycloak, keycloak-operator, and
keycloak-configure HelmReleases with helm.sh/resource-policy=keep before
the platform chart upgrade. This allows the Package operator to adopt
existing HelmReleases and FluxCD to perform helm upgrade instead of
a fresh install, preserving all data.

Signed-off-by: IvanHunters <xorokhotnikov@gmail.com>
2026-03-02 20:07:01 +03:00
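The annotation approach from migration 34 can be sketched as a dry run (release names are from the commit message; the `cozy-keycloak` namespace is an assumption here — the commit does not state where the HelmRelease objects live; drop the `echo` to actually run):

```shell
# Dry-run sketch of migration 34: mark the keycloak HelmReleases with
# helm.sh/resource-policy=keep so Helm leaves them in place when the
# platform chart no longer renders them.
KUBECTL="echo kubectl"   # drop the "echo" to actually run
for hr in keycloak keycloak-operator keycloak-configure; do
  $KUBECTL annotate helmrelease -n cozy-keycloak "$hr" \
    helm.sh/resource-policy=keep --overwrite
done
```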
IvanHunters
41f2c64241 fix(migration): protect cozy-keycloak namespace during v1.0 migration
The migration script only annotated cozy-system namespace with
helm.sh/resource-policy=keep, leaving cozy-keycloak unprotected.
This could cause Helm to delete the namespace and all its secrets
(including keycloak credentials and realm data) during the migration
to Package-based configuration.

Signed-off-by: IvanHunters <xorokhotnikov@gmail.com>
2026-03-02 16:26:01 +03:00
96 changed files with 348 additions and 4461 deletions

File diff suppressed because it is too large.

File diff suppressed because it is too large.

View File

@@ -1,19 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v1.0.2
-->
## Fixes
* **[platform] Suspend cozy-proxy if it conflicts with installer release during migration**: Added a check in the v0.41→v1.0 migration script to detect and automatically suspend the `cozy-proxy` HelmRelease when its `releaseName` is set to `cozystack`, which conflicts with the installer release and would cause `cozystack-operator` deletion during the upgrade ([**@kvaps**](https://github.com/kvaps) in #2128, #2130).
* **[platform] Fix off-by-one error in run-migrations script**: Fixed a bug in the migration runner where the first required migration was always skipped due to an off-by-one error in the migration range calculation, ensuring all upgrade steps execute correctly ([**@myasnikovdaniil**](https://github.com/myasnikovdaniil) in #2126, #2132).
* **[system] Fix Keycloak proxy configuration for v26.x**: Replaced the deprecated `KC_PROXY=edge` environment variable with `KC_PROXY_HEADERS=xforwarded` and `KC_HTTP_ENABLED=true` in the Keycloak StatefulSet template. `KC_PROXY` was removed in Keycloak 26.x, so the stale setting caused "Non-secure context detected" warnings and broken cookie handling when running behind a reverse proxy with TLS termination ([**@sircthulhu**](https://github.com/sircthulhu) in #2125, #2134).
* **[dashboard] Allow clearing instanceType field and preserve newlines in secret copy**: Added `allowEmpty: true` to the `instanceType` field in the VMInstance form so users can explicitly clear it to use custom KubeVirt resources without a named instance type. Also fixed newline preservation when copying secrets with CMD+C ([**@sircthulhu**](https://github.com/sircthulhu) in #2135, #2137).
* **[dashboard] Restore stock-instance sidebars for namespace-level pages**: Restored `stock-instance-api-form`, `stock-instance-api-table`, `stock-instance-builtin-form`, and `stock-instance-builtin-table` sidebar resources that were inadvertently removed in #2106. Without these sidebars, namespace-level pages such as Backup Plans rendered as empty pages with no interactive content ([**@sircthulhu**](https://github.com/sircthulhu) in #2136, #2138).
---
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v1.0.1...v1.0.2
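The `cozy-proxy` conflict check described in the first fix above can be sketched as follows (the `releaseName` value is stubbed here; the real script reads it with `kubectl get hr -n cozy-system cozy-proxy -o jsonpath='{.spec.releaseName}'`, and the patch command is printed rather than executed):

```shell
# Sketch of the cozy-proxy releaseName conflict check. A releaseName of
# "cozystack" collides with the installer release, so the HelmRelease must
# be suspended before the upgrade proceeds.
RELEASE_NAME="cozystack"   # stub; read from the HelmRelease spec in practice
PATCH_CMD=""
if [ "$RELEASE_NAME" = "cozystack" ]; then
  PATCH_CMD="kubectl -n cozy-system patch hr cozy-proxy --type=merge -p '{\"spec\":{\"suspend\":true}}'"
  echo "$PATCH_CMD"
fi
```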

View File

@@ -1,17 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v1.0.3
-->
## Fixes
* **[platform] Fix package name conversion in migration script**: Fixed the `migrate-to-version-1.0.sh` script to correctly prepend the `cozystack.` prefix when converting `BUNDLE_DISABLE` and `BUNDLE_ENABLE` package name lists, ensuring packages are properly identified during the v0.41→v1.0 upgrade ([**@myasnikovdaniil**](https://github.com/myasnikovdaniil) in #2144, #2148).
## Documentation
* **[website] Add white labeling guide**: Added a comprehensive guide for configuring white labeling (branding) in Cozystack v1, covering Dashboard fields (`titleText`, `footerText`, `tenantText`, `logoText`, `logoSvg`, `iconSvg`) and Keycloak fields (`brandName`, `brandHtmlName`). Includes SVG preparation workflow with theme-aware template variables, portable base64 encoding, and migration notes from the v0 ConfigMap approach ([**@lexfrei**](https://github.com/lexfrei) in cozystack/website#441).
* **[website] Actualize backup and recovery documentation**: Reworked the backup and recovery docs to be user-focused, separating operator and tenant workflows. Added tenant-facing documentation for `BackupJob` and `Plan` resources and status inspection commands, and added a new Velero administration guide for operators covering storage credentials and backup storage configuration ([**@androndo**](https://github.com/androndo) in cozystack/website#434).
---
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v1.0.2...v1.0.3

View File

@@ -1,7 +1,7 @@
#!/usr/bin/env bats
@test "Create and Verify SeaweedFS Bucket" {
# Create the bucket resource with readwrite and readonly users
# Create the bucket resource
name='test'
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
@@ -9,29 +9,21 @@ kind: Bucket
metadata:
name: ${name}
namespace: tenant-test
spec:
users:
admin: {}
viewer:
readonly: true
spec: {}
EOF
# Wait for the bucket to be ready
kubectl -n tenant-test wait hr bucket-${name} --timeout=100s --for=condition=ready
kubectl -n tenant-test wait bucketclaims.objectstorage.k8s.io bucket-${name} --timeout=300s --for=jsonpath='{.status.bucketReady}'
kubectl -n tenant-test wait bucketaccesses.objectstorage.k8s.io bucket-${name}-admin --timeout=300s --for=jsonpath='{.status.accessGranted}'
kubectl -n tenant-test wait bucketaccesses.objectstorage.k8s.io bucket-${name}-viewer --timeout=300s --for=jsonpath='{.status.accessGranted}'
kubectl -n tenant-test wait bucketaccesses.objectstorage.k8s.io bucket-${name} --timeout=300s --for=jsonpath='{.status.accessGranted}'
# Get admin (readwrite) credentials
kubectl -n tenant-test get secret bucket-${name}-admin -ojsonpath='{.data.BucketInfo}' | base64 -d > bucket-admin-credentials.json
ADMIN_ACCESS_KEY=$(jq -r '.spec.secretS3.accessKeyID' bucket-admin-credentials.json)
ADMIN_SECRET_KEY=$(jq -r '.spec.secretS3.accessSecretKey' bucket-admin-credentials.json)
BUCKET_NAME=$(jq -r '.spec.bucketName' bucket-admin-credentials.json)
# Get and decode credentials
kubectl -n tenant-test get secret bucket-${name} -ojsonpath='{.data.BucketInfo}' | base64 -d > bucket-test-credentials.json
# Get viewer (readonly) credentials
kubectl -n tenant-test get secret bucket-${name}-viewer -ojsonpath='{.data.BucketInfo}' | base64 -d > bucket-viewer-credentials.json
VIEWER_ACCESS_KEY=$(jq -r '.spec.secretS3.accessKeyID' bucket-viewer-credentials.json)
VIEWER_SECRET_KEY=$(jq -r '.spec.secretS3.accessSecretKey' bucket-viewer-credentials.json)
# Get credentials from the secret
ACCESS_KEY=$(jq -r '.spec.secretS3.accessKeyID' bucket-test-credentials.json)
SECRET_KEY=$(jq -r '.spec.secretS3.accessSecretKey' bucket-test-credentials.json)
BUCKET_NAME=$(jq -r '.spec.bucketName' bucket-test-credentials.json)
# Start port-forwarding
bash -c 'timeout 100s kubectl port-forward service/seaweedfs-s3 -n tenant-root 8333:8333 > /dev/null 2>&1 &'
@@ -39,33 +31,17 @@ EOF
# Wait for port-forward to be ready
timeout 30 sh -ec 'until nc -z localhost 8333; do sleep 1; done'
# --- Test readwrite user (admin) ---
mc alias set rw-user https://localhost:8333 $ADMIN_ACCESS_KEY $ADMIN_SECRET_KEY --insecure
# Set up MinIO alias with error handling
mc alias set local https://localhost:8333 $ACCESS_KEY $SECRET_KEY --insecure
# Admin can upload
echo "readwrite test" > /tmp/rw-test.txt
mc cp --insecure /tmp/rw-test.txt rw-user/$BUCKET_NAME/rw-test.txt
# Upload file to bucket
mc cp --insecure bucket-test-credentials.json local/$BUCKET_NAME/bucket-test-credentials.json
# Admin can list
mc ls --insecure rw-user/$BUCKET_NAME/rw-test.txt
# Verify file was uploaded
mc ls --insecure local/$BUCKET_NAME/bucket-test-credentials.json
# Admin can download
mc cp --insecure rw-user/$BUCKET_NAME/rw-test.txt /tmp/rw-test-download.txt
# Clean up uploaded file
mc rm --insecure local/$BUCKET_NAME/bucket-test-credentials.json
# --- Test readonly user (viewer) ---
mc alias set ro-user https://localhost:8333 $VIEWER_ACCESS_KEY $VIEWER_SECRET_KEY --insecure
# Viewer can list
mc ls --insecure ro-user/$BUCKET_NAME/rw-test.txt
# Viewer can download
mc cp --insecure ro-user/$BUCKET_NAME/rw-test.txt /tmp/ro-test-download.txt
# Viewer cannot upload (must fail with Access Denied)
echo "readonly test" > /tmp/ro-test.txt
! mc cp --insecure /tmp/ro-test.txt ro-user/$BUCKET_NAME/ro-test.txt
# --- Cleanup ---
mc rm --insecure rw-user/$BUCKET_NAME/rw-test.txt
kubectl -n tenant-test delete bucket.apps.cozystack.io ${name}
}

View File

@@ -39,6 +39,7 @@ echo "The following resources will be annotated with helm.sh/resource-policy=kee
echo "to prevent Helm from deleting them when the installer release is removed:"
echo " - Namespace: $NAMESPACE"
echo " - ConfigMap: $NAMESPACE/cozystack-version"
echo " - Namespace: cozy-keycloak"
echo ""
read -p "Do you want to annotate these resources? (y/N) " -n 1 -r
echo ""
@@ -48,6 +49,14 @@ if [[ $REPLY =~ ^[Yy]$ ]]; then
kubectl annotate namespace "$NAMESPACE" helm.sh/resource-policy=keep --overwrite
echo "Annotating ConfigMap cozystack-version..."
kubectl annotate configmap -n "$NAMESPACE" cozystack-version helm.sh/resource-policy=keep --overwrite 2>/dev/null || echo " ConfigMap cozystack-version not found, skipping."
echo "Annotating namespace cozy-keycloak..."
if kubectl get namespace cozy-keycloak >/dev/null 2>&1; then
kubectl annotate namespace cozy-keycloak helm.sh/resource-policy=keep --overwrite
else
echo " Namespace cozy-keycloak not found, skipping."
fi
echo ""
echo "Resources annotated successfully."
else
@@ -56,29 +65,45 @@ else
fi
echo ""
# Step 1: Check for cozy-proxy HelmRelease with conflicting releaseName
# In v0.41.x, cozy-proxy was incorrectly configured with releaseName "cozystack",
# which conflicts with the installer helm release name. If not suspended, cozy-proxy
# HelmRelease will overwrite the installer release and delete cozystack-operator.
COZY_PROXY_RELEASE_NAME=$(kubectl get hr -n "$NAMESPACE" cozy-proxy -o jsonpath='{.spec.releaseName}' 2>/dev/null || true)
if [ "$COZY_PROXY_RELEASE_NAME" = "cozystack" ]; then
echo "WARNING: HelmRelease cozy-proxy has releaseName 'cozystack', which conflicts"
echo "with the installer release. It must be suspended before proceeding, otherwise"
echo "it will overwrite the installer and delete cozystack-operator."
echo ""
read -p "Suspend HelmRelease cozy-proxy? (y/N) " -n 1 -r
# Step 0.5: Delete keycloak helm release secrets to prevent FluxCD from running helm uninstall
# When the Package operator recreates keycloak HelmReleases with spec.chartRef
# (replacing spec.chart), FluxCD sees an incompatible change and runs helm uninstall,
# destroying all keycloak data. Without helm secrets, FluxCD has nothing to uninstall
# and performs helm install instead, adopting existing resources.
echo "Step 0.5: Protect keycloak data from destruction"
echo ""
echo "This will delete Helm release secrets for keycloak releases in cozy-keycloak"
echo "namespace. Without these secrets, FluxCD cannot run helm uninstall and will"
echo "adopt existing resources instead of recreating them."
echo ""
if kubectl get namespace cozy-keycloak >/dev/null 2>&1; then
read -p "Do you want to delete keycloak helm release secrets? (y/N) " -n 1 -r
echo ""
if [[ $REPLY =~ ^[Yy]$ ]]; then
kubectl -n "$NAMESPACE" patch hr cozy-proxy --type=merge --field-manager=flux-client-side-apply -p '{"spec":{"suspend":true}}'
echo "HelmRelease cozy-proxy suspended."
for release in keycloak keycloak-operator keycloak-configure; do
echo " Deleting helm release secrets for ${release}..."
kubectl delete secrets -n cozy-keycloak -l "name=${release},owner=helm" --ignore-not-found
# Fallback: delete by name pattern
remaining=$(kubectl get secrets -n cozy-keycloak -o name | { grep "^secret/sh\.helm\.release\.v1\.${release}\." || true; })
if [ -n "$remaining" ]; then
echo "$remaining" | while IFS= read -r secret; do
echo " Deleting $secret"
kubectl delete -n cozy-keycloak "$secret" --ignore-not-found
done
fi
done
echo ""
echo "Keycloak helm release secrets deleted."
else
echo "ERROR: Cannot proceed with conflicting cozy-proxy HelmRelease active."
echo "Please suspend it manually:"
echo " kubectl -n $NAMESPACE patch hr cozy-proxy --type=merge -p '{\"spec\":{\"suspend\":true}}'"
exit 1
echo "WARNING: Skipping. Keycloak data may be lost during upgrade if FluxCD"
echo "runs helm uninstall due to chart source change."
fi
echo ""
else
echo " Namespace cozy-keycloak does not exist, keycloak not deployed — skipping."
fi
echo ""
# Read ConfigMap cozystack
echo "Reading ConfigMap cozystack..."
@@ -155,13 +180,13 @@ fi
if [ -z "$BUNDLE_DISABLE" ]; then
DISABLED_PACKAGES="[]"
else
DISABLED_PACKAGES=$(echo "$BUNDLE_DISABLE" | sed 's/,/\n/g' | awk 'BEGIN{print}{print " - cozystack."$0}')
DISABLED_PACKAGES=$(echo "$BUNDLE_DISABLE" | sed 's/,/\n/g' | awk 'BEGIN{print}{print " - "$0}')
fi
if [ -z "$BUNDLE_ENABLE" ]; then
ENABLED_PACKAGES="[]"
else
ENABLED_PACKAGES=$(echo "$BUNDLE_ENABLE" | sed 's/,/\n/g' | awk 'BEGIN{print}{print " - cozystack."$0}')
ENABLED_PACKAGES=$(echo "$BUNDLE_ENABLE" | sed 's/,/\n/g' | awk 'BEGIN{print}{print " - "$0}')
fi
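The `BUNDLE_DISABLE`/`BUNDLE_ENABLE` conversion above turns a comma-separated list into an indented YAML sequence. A portable sketch of the same transformation (using `tr` instead of `sed 's/,/\n/g'`, since `\n` in a `sed` replacement is GNU-specific; whether entries get the `cozystack.` prefix depends on the target version, so it is omitted here, as is the script's leading `BEGIN{print}` newline):

```shell
# Convert a comma-separated package list into an indented YAML sequence.
# Package names are illustrative.
BUNDLE_ENABLE="monitoring,etcd"
ENABLED_PACKAGES=$(echo "$BUNDLE_ENABLE" | tr ',' '\n' | awk '{print "    - "$0}')
echo "$ENABLED_PACKAGES"
```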
if [ -z "$EXPOSE_SERVICES" ]; then
@@ -175,7 +200,7 @@ BUNDLE_NAME=$(echo "$BUNDLE_NAME" | sed 's/paas/isp/')
# Extract branding if available
BRANDING=$(echo "$BRANDING_CM" | jq -r '.data // {} | to_entries[] | "\(.key): \"\(.value)\""')
if [ -z "$BRANDING" ]; then
BRANDING="{}"
else
BRANDING=$(echo "$BRANDING" | awk 'BEGIN{print}{print " " $0}')
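The re-indentation step above prepares the extracted branding lines for embedding under a parent key in the generated YAML; the `BEGIN{print}` emits a leading newline so the block starts on a fresh line when substituted into the document. A minimal sketch with an illustrative value:

```shell
# Re-indent extracted "key: \"value\"" branding lines by four spaces for
# embedding under a YAML parent key. BEGIN{print} emits a leading newline
# so the substituted block starts on its own line.
BRANDING='logoText: "ACME"'
BRANDING=$(echo "$BRANDING" | awk 'BEGIN{print}{print "    " $0}')
```

Note that `$( … )` strips only trailing newlines, so the leading newline from `BEGIN{print}` survives in the result.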

View File

@@ -195,7 +195,6 @@ func applyListInputOverrides(schema map[string]any, kind string, openAPIProps ma
"valueUri": "/api/clusters/{cluster}/k8s/apis/instancetype.kubevirt.io/v1beta1/virtualmachineclusterinstancetypes",
"keysToValue": []any{"metadata", "name"},
"keysToLabel": []any{"metadata", "name"},
"allowEmpty": true,
},
}
if prop, _ := openAPIProps["instanceType"].(map[string]any); prop != nil {
@@ -215,34 +214,6 @@ func applyListInputOverrides(schema map[string]any, kind string, openAPIProps ma
"keysToLabel": []any{"metadata", "name"},
},
}
case "ClickHouse", "Harbor", "HTTPCache", "Kubernetes", "MariaDB", "MongoDB",
"NATS", "OpenBAO", "Postgres", "Qdrant", "RabbitMQ", "Redis", "VMDisk":
specProps := ensureSchemaPath(schema, "spec")
specProps["storageClass"] = storageClassListInput()
case "FoundationDB":
storageProps := ensureSchemaPath(schema, "spec", "storage")
storageProps["storageClass"] = storageClassListInput()
case "Kafka":
kafkaProps := ensureSchemaPath(schema, "spec", "kafka")
kafkaProps["storageClass"] = storageClassListInput()
zkProps := ensureSchemaPath(schema, "spec", "zookeeper")
zkProps["storageClass"] = storageClassListInput()
}
}
// storageClassListInput returns a listInput field config for a storageClass dropdown
// backed by the cluster's available StorageClasses.
func storageClassListInput() map[string]any {
return map[string]any{
"type": "listInput",
"customProps": map[string]any{
"valueUri": "/api/clusters/{cluster}/k8s/apis/storage.k8s.io/v1/storageclasses",
"keysToValue": []any{"metadata", "name"},
"keysToLabel": []any{"metadata", "name"},
},
}
}

View File

@@ -202,10 +202,6 @@ func TestApplyListInputOverrides_VMInstance(t *testing.T) {
t.Errorf("expected valueUri %s, got %v", expectedURI, customProps["valueUri"])
}
if customProps["allowEmpty"] != true {
t.Errorf("expected allowEmpty true, got %v", customProps["allowEmpty"])
}
// Check disks[].name is a listInput
disks, ok := specProps["disks"].(map[string]any)
if !ok {
@@ -236,72 +232,6 @@ func TestApplyListInputOverrides_VMInstance(t *testing.T) {
}
}
func TestApplyListInputOverrides_StorageClassSimple(t *testing.T) {
for _, kind := range []string{
"ClickHouse", "Harbor", "HTTPCache", "Kubernetes", "MariaDB", "MongoDB",
"NATS", "OpenBAO", "Postgres", "Qdrant", "RabbitMQ", "Redis", "VMDisk",
} {
t.Run(kind, func(t *testing.T) {
schema := map[string]any{}
applyListInputOverrides(schema, kind, map[string]any{})
specProps := schema["properties"].(map[string]any)["spec"].(map[string]any)["properties"].(map[string]any)
sc, ok := specProps["storageClass"].(map[string]any)
if !ok {
t.Fatalf("storageClass not found in spec.properties for kind %s", kind)
}
assertStorageClassListInput(t, sc)
})
}
}
func TestApplyListInputOverrides_StorageClassFoundationDB(t *testing.T) {
schema := map[string]any{}
applyListInputOverrides(schema, "FoundationDB", map[string]any{})
storageProps := schema["properties"].(map[string]any)["spec"].(map[string]any)["properties"].(map[string]any)["storage"].(map[string]any)["properties"].(map[string]any)
sc, ok := storageProps["storageClass"].(map[string]any)
if !ok {
t.Fatal("storageClass not found in spec.storage.properties")
}
assertStorageClassListInput(t, sc)
}
func TestApplyListInputOverrides_StorageClassKafka(t *testing.T) {
schema := map[string]any{}
applyListInputOverrides(schema, "Kafka", map[string]any{})
specProps := schema["properties"].(map[string]any)["spec"].(map[string]any)["properties"].(map[string]any)
kafkaSC, ok := specProps["kafka"].(map[string]any)["properties"].(map[string]any)["storageClass"].(map[string]any)
if !ok {
t.Fatal("storageClass not found in spec.kafka.properties")
}
assertStorageClassListInput(t, kafkaSC)
zkSC, ok := specProps["zookeeper"].(map[string]any)["properties"].(map[string]any)["storageClass"].(map[string]any)
if !ok {
t.Fatal("storageClass not found in spec.zookeeper.properties")
}
assertStorageClassListInput(t, zkSC)
}
// assertStorageClassListInput verifies that a field is a correctly configured storageClass listInput.
func assertStorageClassListInput(t *testing.T, field map[string]any) {
t.Helper()
if field["type"] != "listInput" {
t.Errorf("expected type listInput, got %v", field["type"])
}
customProps, ok := field["customProps"].(map[string]any)
if !ok {
t.Fatal("customProps not found")
}
expectedURI := "/api/clusters/{cluster}/k8s/apis/storage.k8s.io/v1/storageclasses"
if customProps["valueUri"] != expectedURI {
t.Errorf("expected valueUri %s, got %v", expectedURI, customProps["valueUri"])
}
}
func TestApplyListInputOverrides_UnknownKind(t *testing.T) {
schema := map[string]any{}
applyListInputOverrides(schema, "SomeOtherKind", map[string]any{})

View File

@@ -307,10 +307,6 @@ func (m *Manager) buildExpectedResourceSet(crds []cozyv1alpha1.ApplicationDefini
"stock-project-builtin-table",
"stock-project-crd-form",
"stock-project-crd-table",
"stock-instance-api-form",
"stock-instance-api-table",
"stock-instance-builtin-form",
"stock-instance-builtin-table",
}
for _, sidebarID := range stockSidebars {
expected["Sidebar"][sidebarID] = true

View File

@@ -68,46 +68,31 @@ func (m *Manager) ensureMarketplacePanel(ctx context.Context, crd *cozyv1alpha1.
tags[i] = t
}
_, err := controllerutil.CreateOrUpdate(ctx, m.Client, mp, func() error {
specMap := map[string]any{
"description": d.Description,
"name": displayName,
"type": "nonCrd",
"apiGroup": "apps.cozystack.io",
"apiVersion": "v1alpha1",
"plural": app.Plural, // e.g., "buckets"
"disabled": false,
"hidden": false,
"tags": tags,
"icon": d.Icon,
}
specBytes, err := json.Marshal(specMap)
if err != nil {
return reconcile.Result{}, err
}
_, err = controllerutil.CreateOrUpdate(ctx, m.Client, mp, func() error {
if err := controllerutil.SetOwnerReference(crd, mp, m.Scheme); err != nil {
return err
}
// Add dashboard labels to dynamic resources
m.addDashboardLabels(mp, crd, ResourceTypeDynamic)
// Preserve user-set disabled/hidden values from existing resource
disabled := false
hidden := false
if mp.Spec.Raw != nil {
var existing map[string]any
if err := json.Unmarshal(mp.Spec.Raw, &existing); err == nil {
if v, ok := existing["disabled"].(bool); ok {
disabled = v
}
if v, ok := existing["hidden"].(bool); ok {
hidden = v
}
}
}
specMap := map[string]any{
"description": d.Description,
"name": displayName,
"type": "nonCrd",
"apiGroup": "apps.cozystack.io",
"apiVersion": "v1alpha1",
"plural": app.Plural, // e.g., "buckets"
"disabled": disabled,
"hidden": hidden,
"tags": tags,
"icon": d.Icon,
}
specBytes, err := json.Marshal(specMap)
if err != nil {
return err
}
// Only update spec if it's different to avoid unnecessary updates
newSpec := dashv1alpha1.ArbitrarySpec{
JSON: apiextv1.JSON{Raw: specBytes},

View File

@@ -38,23 +38,6 @@ func (m *Manager) ensureSidebar(ctx context.Context, crd *cozyv1alpha1.Applicati
}
all = crdList.Items
// 1b) Fetch all MarketplacePanels to determine which resources are hidden
hiddenResources := map[string]bool{}
var mpList dashv1alpha1.MarketplacePanelList
if err := m.List(ctx, &mpList, &client.ListOptions{}); err == nil {
for i := range mpList.Items {
mp := &mpList.Items[i]
if mp.Spec.Raw != nil {
var spec map[string]any
if err := json.Unmarshal(mp.Spec.Raw, &spec); err == nil {
if hidden, ok := spec["hidden"].(bool); ok && hidden {
hiddenResources[mp.Name] = true
}
}
}
}
}
// 2) Build category -> []item map (only for CRDs with spec.dashboard != nil)
type item struct {
Key string
@@ -80,11 +63,6 @@ func (m *Manager) ensureSidebar(ctx context.Context, crd *cozyv1alpha1.Applicati
plural := pickPlural(kind, def)
lowerKind := strings.ToLower(kind)
// Skip resources hidden via MarketplacePanel
if hiddenResources[def.Name] {
continue
}
// Check if this resource is a module
if def.Spec.Dashboard.Module {
// Special case: info should have its own keysAndTags, not be in modules
@@ -265,11 +243,6 @@ func (m *Manager) ensureSidebar(ctx context.Context, crd *cozyv1alpha1.Applicati
"stock-project-builtin-table",
"stock-project-crd-form",
"stock-project-crd-table",
// stock-instance sidebars (namespace-level pages after namespace is selected)
"stock-instance-api-form",
"stock-instance-api-table",
"stock-instance-builtin-form",
"stock-instance-builtin-table",
}
// Add details sidebars for all CRDs with dashboard config

View File

@@ -1924,12 +1924,12 @@ func CreateAllFactories() []*dashboardv1alpha1.Factory {
map[string]any{
"type": "EnrichedTable",
"data": map[string]any{
"id": "external-ips-table",
"fetchUrl": "/api/clusters/{2}/k8s/api/v1/namespaces/{3}/services",
"cluster": "{2}",
"baseprefix": "/openapi-ui",
"customizationId": "factory-details-v1.services",
"pathToItems": ".items",
"id": "external-ips-table",
"fetchUrl": "/api/clusters/{2}/k8s/api/v1/namespaces/{3}/services",
"clusterNamePartOfUrl": "{2}",
"baseprefix": "/openapi-ui",
"customizationId": "factory-details-v1.services",
"pathToItems": []any{"items"},
"fieldSelector": map[string]any{
"spec.type": "LoadBalancer",
},

View File

@@ -2,4 +2,5 @@ include ../../../hack/package.mk
generate:
cozyvalues-gen -v values.yaml -s values.schema.json -r README.md
yq -o json -i '.properties = {}' values.schema.json
../../../hack/update-crd.sh

View File

@@ -1,13 +1,3 @@
# S3 bucket
## Parameters
### Parameters
| Name | Description | Type | Value |
| ---------------------- | -------------------------------------------------------------------------- | ------------------- | ------- |
| `locking` | Provisions bucket from the `-lock` BucketClass (with object lock enabled). | `bool` | `false` |
| `storagePool` | Selects a specific BucketClass by storage pool name. | `string` | `""` |
| `users` | Users configuration map. | `map[string]object` | `{}` |
| `users[name].readonly` | Whether the user has read-only access. | `bool` | `false` |

View File

@@ -1,22 +1,29 @@
{{- $seaweedfs := .Values._namespace.seaweedfs }}
{{- $pool := .Values.storagePool }}
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClaim
metadata:
name: {{ .Release.Name }}
spec:
bucketClassName: {{ $seaweedfs }}{{- if $pool }}-{{ $pool }}{{- end }}{{- if .Values.locking }}-lock{{- end }}
bucketClassName: {{ $seaweedfs }}
protocols:
- s3
{{- range $name, $user := .Values.users }}
---
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketAccess
metadata:
name: {{ $.Release.Name }}-{{ $name }}
name: {{ .Release.Name }}
spec:
bucketAccessClassName: {{ $seaweedfs }}{{- if $pool }}-{{ $pool }}{{- end }}{{- if $user.readonly }}-readonly{{- end }}
bucketClaimName: {{ $.Release.Name }}
credentialsSecretName: {{ $.Release.Name }}-{{ $name }}
bucketAccessClassName: {{ $seaweedfs }}
bucketClaimName: {{ .Release.Name }}
credentialsSecretName: {{ .Release.Name }}
protocol: s3
---
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketAccess
metadata:
name: {{ .Release.Name }}-readonly
spec:
bucketAccessClassName: {{ $seaweedfs }}-readonly
bucketClaimName: {{ .Release.Name }}
credentialsSecretName: {{ .Release.Name }}-readonly
protocol: s3
{{- end }}

View File

@@ -8,9 +8,9 @@ rules:
resources:
- secrets
resourceNames:
{{- range $name, $user := .Values.users }}
- {{ $.Release.Name }}-{{ $name }}-credentials
{{- end }}
- {{ .Release.Name }}
- {{ .Release.Name }}-credentials
- {{ .Release.Name }}-readonly
verbs: ["get", "list", "watch"]
- apiGroups:
- networking.k8s.io

View File

@@ -23,4 +23,3 @@ spec:
name: cozystack-values
values:
bucketName: {{ .Release.Name }}
users: {{ .Values.users | toJson }}

View File

@@ -1,30 +1,5 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"locking": {
"description": "Provisions bucket from the `-lock` BucketClass (with object lock enabled).",
"type": "boolean",
"default": false
},
"storagePool": {
"description": "Selects a specific BucketClass by storage pool name.",
"type": "string",
"default": ""
},
"users": {
"description": "Users configuration map.",
"type": "object",
"default": {},
"additionalProperties": {
"type": "object",
"properties": {
"readonly": {
"description": "Whether the user has read-only access.",
"type": "boolean"
}
}
}
}
}
}
"properties": {}
}

View File

@@ -1,11 +1 @@
## @param {bool} locking=false - Provisions bucket from the `-lock` BucketClass (with object lock enabled).
locking: false
## @param {string} [storagePool] - Selects a specific BucketClass by storage pool name.
storagePool: ""
## @typedef {struct} User - Bucket user configuration.
## @field {bool} [readonly] - Whether the user has read-only access.
## @param {map[string]User} users - Users configuration map.
users: {}
{}

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:1c8c842277f45f189a5c645fcf7b2023c8ed7189f44029ce8b988019000da14c
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:434aa3b8e2a3cbf6681426b174e1c4fde23bafd12a6cccd046b5cb1749092ec4

View File

@@ -3,7 +3,6 @@ cilium:
k8sServiceHost: {{ .Release.Name }}.{{ .Release.Namespace }}.svc
k8sServicePort: 6443
routingMode: tunnel
MTU: 1350
enableIPv4Masquerade: true
ipv4NativeRoutingCIDR: ""
{{- if $.Values.addons.gatewayAPI.enabled }}

View File

@@ -4,4 +4,4 @@ description: Managed RabbitMQ service
icon: /logos/rabbitmq.svg
type: application
version: 0.0.0 # Placeholder, the actual version will be automatically set during the build process
appVersion: "4.2.4"
appVersion: "3.13.2"

View File

@@ -3,7 +3,3 @@ include ../../../hack/package.mk
generate:
cozyvalues-gen -v values.yaml -s values.schema.json -r README.md
../../../hack/update-crd.sh
update:
hack/update-versions.sh
make generate

View File

@@ -23,7 +23,6 @@ The service utilizes official RabbitMQ operator. This ensures the reliability an
| `size` | Persistent Volume Claim size available for application data. | `quantity` | `10Gi` |
| `storageClass` | StorageClass used to store the data. | `string` | `""` |
| `external` | Enable external access from outside the cluster. | `bool` | `false` |
| `version` | RabbitMQ major.minor version to deploy | `string` | `v4.2` |
### Application-specific parameters

View File

@@ -1,4 +0,0 @@
"v4.2": "4.2.4"
"v4.1": "4.1.8"
"v4.0": "4.0.9"
"v3.13": "3.13.7"

View File

@@ -1,129 +0,0 @@
#!/usr/bin/env bash
set -o errexit
set -o nounset
set -o pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
RABBITMQ_DIR="$(cd "${SCRIPT_DIR}/.." && pwd)"
VALUES_FILE="${RABBITMQ_DIR}/values.yaml"
VERSIONS_FILE="${RABBITMQ_DIR}/files/versions.yaml"
GITHUB_API_URL="https://api.github.com/repos/rabbitmq/rabbitmq-server/releases"
# Check if jq is installed
if ! command -v jq &> /dev/null; then
echo "Error: jq is not installed. Please install jq and try again." >&2
exit 1
fi
# Fetch releases from GitHub API
echo "Fetching releases from GitHub API..."
RELEASES_JSON=$(curl -sSL "${GITHUB_API_URL}?per_page=100")
if [ -z "$RELEASES_JSON" ]; then
echo "Error: Could not fetch releases from GitHub API" >&2
exit 1
fi
# Extract stable release tags (format: v3.13.7, v4.0.3, etc.)
# Filter out pre-releases and draft releases
RELEASE_TAGS=$(echo "$RELEASES_JSON" | jq -r '.[] | select(.prerelease == false) | select(.draft == false) | .tag_name' | grep -E '^v[0-9]+\.[0-9]+\.[0-9]+$' | sort -V)
if [ -z "$RELEASE_TAGS" ]; then
echo "Error: Could not find any stable release tags" >&2
exit 1
fi
echo "Found release tags: $(echo "$RELEASE_TAGS" | tr '\n' ' ')"
# Supported major.minor versions (newest first)
# We support the last few minor releases of each active major
SUPPORTED_MAJORS=("4.2" "4.1" "4.0" "3.13")
# Build versions map: major.minor -> latest patch version
declare -A VERSION_MAP
MAJOR_VERSIONS=()
for major_minor in "${SUPPORTED_MAJORS[@]}"; do
# Find the latest patch version for this major.minor
MATCHING=$(echo "$RELEASE_TAGS" | grep -E "^v${major_minor//./\\.}\.[0-9]+$" | tail -n1)
if [ -n "$MATCHING" ]; then
# Strip the 'v' prefix for the value (Docker tag format is e.g. 3.13.7)
TAG_VERSION="${MATCHING#v}"
VERSION_MAP["v${major_minor}"]="${TAG_VERSION}"
MAJOR_VERSIONS+=("v${major_minor}")
echo "Found version: v${major_minor} -> ${TAG_VERSION}"
else
echo "Warning: No stable releases found for ${major_minor}, skipping..." >&2
fi
done
if [ ${#MAJOR_VERSIONS[@]} -eq 0 ]; then
echo "Error: No matching versions found" >&2
exit 1
fi
echo "Major versions to add: ${MAJOR_VERSIONS[*]}"
# Create/update versions.yaml file
echo "Updating $VERSIONS_FILE..."
{
for major_ver in "${MAJOR_VERSIONS[@]}"; do
echo "\"${major_ver}\": \"${VERSION_MAP[$major_ver]}\""
done
} > "$VERSIONS_FILE"
echo "Successfully updated $VERSIONS_FILE"
# Update values.yaml - enum with major.minor versions only
TEMP_FILE=$(mktemp)
trap "rm -f $TEMP_FILE" EXIT
# Build new version section
NEW_VERSION_SECTION="## @enum {string} Version"
for major_ver in "${MAJOR_VERSIONS[@]}"; do
NEW_VERSION_SECTION="${NEW_VERSION_SECTION}
## @value $major_ver"
done
NEW_VERSION_SECTION="${NEW_VERSION_SECTION}
## @param {Version} version - RabbitMQ major.minor version to deploy
version: ${MAJOR_VERSIONS[0]}"
# Check if version section already exists
if grep -q "^## @enum {string} Version" "$VALUES_FILE"; then
# Version section exists, update it using awk
echo "Updating existing version section in $VALUES_FILE..."
awk -v new_section="$NEW_VERSION_SECTION" '
/^## @enum {string} Version/ {
in_section = 1
print new_section
next
}
in_section && /^version: / {
in_section = 0
next
}
in_section {
next
}
{ print }
' "$VALUES_FILE" > "$TEMP_FILE.tmp"
mv "$TEMP_FILE.tmp" "$VALUES_FILE"
else
# Version section doesn't exist, insert it before Application-specific parameters section
echo "Inserting new version section in $VALUES_FILE..."
awk -v new_section="$NEW_VERSION_SECTION" '
/^## @section Application-specific parameters/ {
print new_section
print ""
}
{ print }
' "$VALUES_FILE" > "$TEMP_FILE.tmp"
mv "$TEMP_FILE.tmp" "$VALUES_FILE"
fi
echo "Successfully updated $VALUES_FILE with major.minor versions: ${MAJOR_VERSIONS[*]}"
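A minimal, self-contained sketch of the awk-based section replacement above, run against a tiny hypothetical values.yaml (the file content and versions here are made up for illustration):

```shell
# Hypothetical input file standing in for $VALUES_FILE.
cat > /tmp/values.yaml <<'EOF'
replicas: 1
## @enum {string} Version
## @value v4.1
## @param {Version} version - RabbitMQ major.minor version to deploy
version: v4.1
## @section Application-specific parameters
EOF

# Hypothetical replacement section (built by the loop in the real script).
NEW_VERSION_SECTION='## @enum {string} Version
## @value v4.2
## @value v4.1
## @param {Version} version - RabbitMQ major.minor version to deploy
version: v4.2'

# Print the new section at the enum marker, then swallow every line up to
# and including the old "version:" line; pass everything else through.
awk -v new_section="$NEW_VERSION_SECTION" '
  /^## @enum \{string\} Version/ { in_section = 1; print new_section; next }
  in_section && /^version: /     { in_section = 0; next }
  in_section                     { next }
  { print }
' /tmp/values.yaml
```

The replacement happens in place: the old enum block and `version:` line are dropped, and the surrounding lines (`replicas:`, `## @section ...`) are untouched.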


@@ -1,8 +0,0 @@
-{{- define "rabbitmq.versionMap" }}
-{{- $versionMap := .Files.Get "files/versions.yaml" | fromYaml }}
-{{- if not (hasKey $versionMap .Values.version) }}
-{{- printf `RabbitMQ version %s is not supported, allowed versions are %s` $.Values.version (keys $versionMap) | fail }}
-{{- end }}
-{{- index $versionMap .Values.version }}
-{{- end }}


@@ -7,7 +7,6 @@ metadata:
 app.kubernetes.io/managed-by: {{ .Release.Service }}
 spec:
 replicas: {{ .Values.replicas }}
-image: 'rabbitmq:{{ include "rabbitmq.versionMap" $ }}-management'
 {{- if .Values.external }}
 service:
 type: LoadBalancer


@@ -92,17 +92,6 @@
 }
 }
 },
-"version": {
-"description": "RabbitMQ major.minor version to deploy",
-"type": "string",
-"default": "v4.2",
-"enum": [
-"v4.2",
-"v4.1",
-"v4.0",
-"v3.13"
-]
-},
 "vhosts": {
 "description": "Virtual hosts configuration map.",
 "type": "object",


@@ -34,15 +34,6 @@ storageClass: ""
 external: false
 ##
-## @enum {string} Version
-## @value v4.2
-## @value v4.1
-## @value v4.0
-## @value v3.13
-## @param {Version} version - RabbitMQ major.minor version to deploy
-version: v4.2
 ## @section Application-specific parameters
 ##


@@ -34,12 +34,6 @@ spec:
 metadata:
 annotations:
 kubevirt.io/allow-pod-bridge-network-live-migration: "true"
-{{- $ovnIPName := printf "%s.%s" (include "virtual-machine.fullname" .) .Release.Namespace }}
-{{- $ovnIP := lookup "kubeovn.io/v1" "IP" "" $ovnIPName }}
-{{- if $ovnIP }}
-ovn.kubernetes.io/mac_address: {{ $ovnIP.spec.macAddress | quote }}
-ovn.kubernetes.io/ip_address: {{ $ovnIP.spec.ipAddress | quote }}
-{{- end }}
 labels:
 {{- include "virtual-machine.labels" . | nindent 8 }}
 spec:


@@ -1,9 +1,9 @@
 cozystackOperator:
 # Deployment variant: talos, generic, hosted
 variant: talos
-image: ghcr.io/cozystack/cozystack/cozystack-operator:v1.1.1@sha256:1b2b9ca8592799488814472e2d33d8b42fcad73c6ff6dd459c09472f308fb59d
+image: ghcr.io/cozystack/cozystack/cozystack-operator:v1.0.0@sha256:9e5229764b6077809a1c16566881a524c33e8986e36597e6833f8857a7e6a335
 platformSourceUrl: 'oci://ghcr.io/cozystack/cozystack/cozystack-packages'
-platformSourceRef: 'digest=sha256:b11e4ee8e968ee0b039f19a13568273ba922ae01cb8c2c107ca9595cea2d3b53'
+platformSourceRef: 'digest=sha256:ef3e4ba7d21572a61794d8be594805f063aa04f4a8c3753351fc89c7804d337e'
 # Generic variant configuration (only used when cozystackOperator.variant=generic)
 cozystack:
 # Kubernetes API server host (IP only, no protocol/port)


@@ -1,46 +1,90 @@
 #!/bin/sh
 # Migration 34 --> 35
-# Backfill spec.version on rabbitmq.apps.cozystack.io resources.
+# Protect keycloak data from destruction during v1.0 upgrade.
 #
-# Before this migration RabbitMQ had no user-selectable version; the
-# operator always used its built-in default image (v3.x). A version field
-# was added in this release. Without this migration every existing cluster
-# would be upgraded to the new default (v4.2) on the next reconcile.
+# Problem:
+# In v0.41.x keycloak HelmReleases (keycloak, keycloak-operator, keycloak-configure)
+# were created via platform chart template (helmreleases.yaml) with spec.chart
+# pointing to a HelmRepository. In v1.0 the Package operator recreates these
+# HelmReleases with spec.chartRef pointing to an ExternalArtifact.
+# FluxCD sees an incompatible chart source change and runs helm uninstall,
+# destroying all keycloak resources (PVCs, PostgreSQL databases, credentials,
+# realm configuration) before doing a fresh helm install.
 #
-# Set spec.version to "v3.13" for any rabbitmq app resource that does not
-# already have it set.
+# Solution:
+# Delete Helm release secrets (sh.helm.release.v1.<name>.*) BEFORE the chart
+# source change happens. Without these secrets FluxCD has no release to uninstall,
+# so it performs helm install directly. Existing resources with correct
+# meta.helm.sh/release-name annotations are adopted by the new release.
 #
+# Pattern from migration 26 (monitoring migration).
 set -euo pipefail
-DEFAULT_VERSION="v3.13"
+NAMESPACE="cozy-keycloak"
-# Skip if the CRD does not exist (rabbitmq was never installed)
-if ! kubectl api-resources --api-group=apps.cozystack.io -o name 2>/dev/null | grep -q '^rabbitmqs\.'; then
-echo "CRD rabbitmqs.apps.cozystack.io not found, skipping migration"
+# Delete all helm release secrets for a given release name in a namespace.
+# Uses both label selector and name-pattern matching to ensure complete cleanup.
+delete_helm_secrets() {
+local ns="$1"
+local release="$2"
+# Primary: delete by label selector
+kubectl delete secrets -n "$ns" -l "name=${release},owner=helm" --ignore-not-found
+# Fallback: find and delete by name pattern (in case labels were modified)
+local remaining
+remaining=$(kubectl get secrets -n "$ns" -o name | { grep "^secret/sh\.helm\.release\.v1\.${release}\." || true; })
+if [ -n "$remaining" ]; then
+echo " Found secrets not matched by label selector, deleting by name..."
+echo "$remaining" | while IFS= read -r secret; do
+echo " Deleting $secret"
+kubectl delete -n "$ns" "$secret" --ignore-not-found
+done
+fi
+# Verify all secrets are gone
+remaining=$(kubectl get secrets -n "$ns" -o name | { grep "^secret/sh\.helm\.release\.v1\.${release}\." || true; })
+if [ -n "$remaining" ]; then
+echo " ERROR: Failed to delete helm release secrets:"
+echo "$remaining"
+return 1
+fi
+}
+echo "=== Protecting keycloak data from destruction during upgrade ==="
+# Check if namespace exists; if not, keycloak was never deployed — skip
+if ! kubectl get namespace "$NAMESPACE" >/dev/null 2>&1; then
+echo " Namespace $NAMESPACE does not exist, keycloak not deployed — skipping"
 kubectl create configmap -n cozy-system cozystack-version \
 --from-literal=version=35 --dry-run=client -o yaml | kubectl apply -f-
 exit 0
 fi
-RABBITMQS=$(kubectl get rabbitmqs.apps.cozystack.io -A -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}')
-for resource in $RABBITMQS; do
-NS="${resource%%/*}"
-APP_NAME="${resource##*/}"
+for release in keycloak keycloak-operator keycloak-configure; do
+# Check if HelmRelease exists (handle missing CRD gracefully)
+out=$(kubectl get helmrelease "$release" -n "$NAMESPACE" -o name 2>&1) && found=true || found=false
-# Skip if spec.version is already set
-CURRENT_VER=$(kubectl get rabbitmqs.apps.cozystack.io -n "$NS" "$APP_NAME" \
--o jsonpath='{.spec.version}')
-if [ -n "$CURRENT_VER" ]; then
-echo "SKIP $NS/$APP_NAME: spec.version already set to '$CURRENT_VER'"
-continue
+if [ "$found" = "true" ]; then
+echo " [DELETE SECRETS] Removing helm release secrets for ${release} in ${NAMESPACE}"
+delete_helm_secrets "$NAMESPACE" "$release"
+else
+# Distinguish "not found" from real errors
+case "$out" in
+*"NotFound"*|*"not found"*|*"the server doesn't have a resource type"*|*"no matches for kind"*)
+echo " [SKIP] hr/${release} not found in ${NAMESPACE}"
+;;
+*)
+echo " [ERROR] Failed to query hr/${release}: $out" >&2
+exit 1
+;;
+esac
 fi
-echo "Patching rabbitmq/$APP_NAME in $NS: setting version=$DEFAULT_VERSION"
-kubectl patch rabbitmqs.apps.cozystack.io -n "$NS" "$APP_NAME" --type=merge \
---patch "{\"spec\":{\"version\":\"${DEFAULT_VERSION}\"}}"
 done
+echo "=== Done ==="
 # Stamp version
 kubectl create configmap -n cozy-system cozystack-version \
 --from-literal=version=35 --dry-run=client -o yaml | kubectl apply -f-
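The name-pattern fallback in `delete_helm_secrets` can be exercised without a cluster; the sample secret names below are hypothetical stand-ins for `kubectl get secrets -o name` output:

```shell
release="keycloak"
# Feed the same grep used by the migration a fake secret listing.
matched=$(printf '%s\n' \
  'secret/sh.helm.release.v1.keycloak.v1' \
  'secret/sh.helm.release.v1.keycloak.v2' \
  'secret/sh.helm.release.v1.keycloak-operator.v1' \
  'secret/keycloak-credentials' |
  { grep "^secret/sh\.helm\.release\.v1\.${release}\." || true; })
echo "$matched"
```

Only the two `keycloak.v1`/`keycloak.v2` revision secrets match: the trailing `\.` in the pattern requires a literal dot right after the release name, which keeps the `keycloak` pattern from also matching `keycloak-operator` secrets.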


@@ -24,7 +24,7 @@ if [ "$CURRENT_VERSION" -ge "$TARGET_VERSION" ]; then
 fi
 # Run migrations sequentially from current version to target version
-for i in $(seq $CURRENT_VERSION $((TARGET_VERSION - 1))); do
+for i in $(seq $((CURRENT_VERSION + 1)) $TARGET_VERSION); do
 if [ -f "/migrations/$i" ]; then
 echo "Running migration $i"
 chmod +x /migrations/$i
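The loop-range change above, illustrated with hypothetical values (CURRENT_VERSION=34 already stamped, TARGET_VERSION=35 pending): the old range selects migration file 34, while the new range selects file 35.

```shell
CURRENT_VERSION=34
TARGET_VERSION=35
# Old range: seq 34 34 -> selects migration file "34"
echo "old: $(seq "$CURRENT_VERSION" $((TARGET_VERSION - 1)))"
# New range: seq 35 35 -> selects migration file "35"
echo "new: $(seq $((CURRENT_VERSION + 1)) "$TARGET_VERSION")"
```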


@@ -5,7 +5,7 @@ sourceRef:
 path: /
 migrations:
 enabled: false
-image: ghcr.io/cozystack/cozystack/platform-migrations:v1.1.1@sha256:bcbe612879cecd2ae1cef91dfff6d34d009c2f7de6592145c04a2d6d21b28f4b
+image: ghcr.io/cozystack/cozystack/platform-migrations:v1.0.0@sha256:68dabdebc38ac439228ae07031cc70e0fa184a24bd4e5b3b22c17466b2a55201
 targetVersion: 35
 # Bundle deployment configuration
 bundles:


@@ -1,2 +1,2 @@
 e2e:
-image: ghcr.io/cozystack/cozystack/e2e-sandbox:v1.1.1@sha256:0eae9f519669667d60b160ebb93c127843c470ad9ca3447fceaa54604503a7ba
+image: ghcr.io/cozystack/cozystack/e2e-sandbox:v1.0.0@sha256:0eae9f519669667d60b160ebb93c127843c470ad9ca3447fceaa54604503a7ba


@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/matchbox:v1.1.1@sha256:15e85d2740b9337cb73aeb8117fc9132c0552ca010aeabd8ec67b7c053d0eab2
+ghcr.io/cozystack/cozystack/matchbox:v1.0.0@sha256:c48eb7b23f01a8ff58d409fdb51c88e771f819cb914eee03da89471e62302f33


@@ -104,7 +104,6 @@ spec:
 - {{ .Release.Name }}
 secretName: etcd-peer-ca-tls
 privateKey:
-rotationPolicy: Never
 algorithm: RSA
 size: 4096
 issuerRef:
@@ -131,7 +130,6 @@ spec:
 - {{ .Release.Name }}
 secretName: etcd-ca-tls
 privateKey:
-rotationPolicy: Never
 algorithm: RSA
 size: 4096
 issuerRef:


@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/objectstorage-sidecar:v1.1.1@sha256:2a3595cd88b30af55b2000d3ca204899beecef0012b0e0402754c3914aad1f7f
+ghcr.io/cozystack/cozystack/objectstorage-sidecar:v1.0.0@sha256:2a3595cd88b30af55b2000d3ca204899beecef0012b0e0402754c3914aad1f7f


@@ -7,32 +7,10 @@ metadata:
driverName: {{ .Release.Namespace }}.seaweedfs.objectstorage.k8s.io
deletionPolicy: Delete
---
kind: BucketClass
apiVersion: objectstorage.k8s.io/v1alpha1
metadata:
name: {{ .Release.Namespace }}-lock
driverName: {{ .Release.Namespace }}.seaweedfs.objectstorage.k8s.io
deletionPolicy: Retain
parameters:
objectLockEnabled: "true"
objectLockRetentionMode: "COMPLIANCE"
objectLockRetentionDays: "365"
---
kind: BucketAccessClass
apiVersion: objectstorage.k8s.io/v1alpha1
metadata:
name: {{ .Release.Namespace }}
driverName: {{ .Release.Namespace }}.seaweedfs.objectstorage.k8s.io
authenticationType: KEY
parameters:
accessPolicy: readwrite
---
kind: BucketAccessClass
apiVersion: objectstorage.k8s.io/v1alpha1
metadata:
name: {{ .Release.Namespace }}-readonly
driverName: {{ .Release.Namespace }}.seaweedfs.objectstorage.k8s.io
authenticationType: KEY
parameters:
accessPolicy: readonly
{{- end }}


@@ -32,8 +32,8 @@
 {{- if not (regexMatch "^[a-z0-9]([a-z0-9-]*[a-z0-9])?$" $poolName) }}
 {{- fail (printf "volume.pools key '%s' must be a valid DNS label (lowercase alphanumeric and hyphens, no dots)." $poolName) }}
 {{- end }}
-{{- if or (hasSuffix "-lock" $poolName) (hasSuffix "-readonly" $poolName) }}
-{{- fail (printf "volume.pools key '%s' must not end with '-lock' or '-readonly' (reserved suffixes for COSI resources)." $poolName) }}
+{{- if or (hasSuffix "-worm" $poolName) (hasSuffix "-readonly" $poolName) }}
+{{- fail (printf "volume.pools key '%s' must not end with '-worm' or '-readonly' (reserved suffixes for COSI resources)." $poolName) }}
 {{- end }}
 {{- if not $pool.diskType }}
 {{- fail (printf "volume.pools.%s.diskType is required." $poolName) }}
@@ -52,8 +52,8 @@
 {{- if not (regexMatch "^[a-z0-9]([a-z0-9-]*[a-z0-9])?$" $poolName) }}
 {{- fail (printf "volume.zones.%s.pools key '%s' must be a valid DNS label." $zoneName $poolName) }}
 {{- end }}
-{{- if or (hasSuffix "-lock" $poolName) (hasSuffix "-readonly" $poolName) }}
-{{- fail (printf "volume.zones.%s.pools key '%s' must not end with '-lock' or '-readonly' (reserved suffixes for COSI resources)." $zoneName $poolName) }}
+{{- if or (hasSuffix "-worm" $poolName) (hasSuffix "-readonly" $poolName) }}
+{{- fail (printf "volume.zones.%s.pools key '%s' must not end with '-worm' or '-readonly' (reserved suffixes for COSI resources)." $zoneName $poolName) }}
 {{- end }}
 {{- if not $pool.diskType }}
 {{- fail (printf "volume.zones.%s.pools.%s.diskType is required." $zoneName $poolName) }}
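The reserved-suffix validation above uses Helm's `hasSuffix`; the same check sketched in plain shell, with hypothetical pool names:

```shell
# Reject pool names ending in the suffixes reserved for COSI resources.
check_pool() {
  case "$1" in
    *-worm|*-readonly) echo "reject $1" ;;
    *)                 echo "accept $1" ;;
  esac
}
check_pool archive-worm
check_pool cache-readonly
check_pool fast
```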


@@ -25,14 +25,14 @@ parameters:
 kind: BucketClass
 apiVersion: objectstorage.k8s.io/v1alpha1
 metadata:
-name: {{ $.Release.Namespace }}-{{ $poolName }}-lock
+name: {{ $.Release.Namespace }}-{{ $poolName }}-worm
 driverName: {{ $.Release.Namespace }}.seaweedfs.objectstorage.k8s.io
 deletionPolicy: Retain
 parameters:
 disk: {{ $diskType }}
 objectLockEnabled: "true"
-objectLockRetentionMode: "COMPLIANCE"
-objectLockRetentionDays: "365"
+objectLockRetentionMode: COMPLIANCE
+objectLockRetentionDays: "36500"
 ---
 kind: BucketAccessClass
 apiVersion: objectstorage.k8s.io/v1alpha1


@@ -3,14 +3,30 @@ apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: backups.cozystack.io:core-controller
rules:
# Plan: reconcile schedule and update status
- apiGroups: ["backups.cozystack.io"]
resources: ["plans"]
verbs: ["get", "list", "watch"]
- apiGroups: ["backups.cozystack.io"]
resources: ["plans/status"]
verbs: ["get", "update", "patch"]
# BackupJob: create when schedule fires (status is updated by backupstrategy-controller)
- apiGroups: ["backups.cozystack.io"]
resources: ["backupjobs"]
verbs: ["create", "get", "list", "watch", "update", "patch"]
- apiGroups: ["backups.cozystack.io"]
resources: ["backupjobs/status"]
verbs: ["get", "update", "patch"]
- apiGroups: ["backups.cozystack.io"]
resources: ["backups"]
verbs: ["create", "get", "list", "watch"]
- apiGroups: ["apps.cozystack.io"]
resources: ["buckets", "bucketaccesses", "virtualmachines"]
verbs: ["get", "list", "watch"]
- apiGroups: ["objectstorage.k8s.io"]
resources: ["buckets", "bucketaccesses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create", "get", "list", "watch", "update", "patch"]
- apiGroups: ["kubevirt.io"]
resources: ["virtualmachines"]
verbs: ["get", "list", "watch"]
- apiGroups: ["velero.io"]
resources: ["backups", "backupstoragelocations", "volumesnapshotlocations", "restores"]
verbs: ["create", "get", "list", "watch", "update", "patch"]


@@ -1,5 +1,5 @@
 backupController:
-image: "ghcr.io/cozystack/cozystack/backup-controller:v1.1.1@sha256:628a8e36fe1fbd6bd7631f0ab68c54647b4247a6f3168fec8ed9c07c9369f888"
+image: "ghcr.io/cozystack/cozystack/backup-controller:v1.0.0@sha256:e1a6c8ac7ba64442812464b59c53e782e373a339c18b379c2692921b44c6edb5"
 replicas: 2
 debug: false
 metrics:


@@ -3,34 +3,9 @@ apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: backups.cozystack.io:strategy-controller
rules:
# Strategy types (Velero, Job)
- apiGroups: ["strategy.backups.cozystack.io"]
resources: ["*"]
verbs: ["get", "list", "watch"]
# BackupClass: resolve strategy per application
- apiGroups: ["backups.cozystack.io"]
resources: ["backupclasses"]
verbs: ["get", "list", "watch"]
# BackupJob / RestoreJob: reconcile and update status
- apiGroups: ["backups.cozystack.io"]
resources: ["backupjobs", "restorejobs"]
verbs: ["get", "list", "watch"]
- apiGroups: ["backups.cozystack.io"]
resources: ["backupjobs/status", "restorejobs/status"]
verbs: ["get", "update", "patch"]
# Backup: create after Velero backup completes
- apiGroups: ["backups.cozystack.io"]
resources: ["backups"]
verbs: ["create", "get", "list", "watch"]
# Application refs (e.g. VMInstance, VirtualMachine) for backup/restore scope
- apiGroups: ["apps.cozystack.io"]
resources: ["*"]
verbs: ["get", "list", "watch"]
# Velero Backup/Restore in cozy-velero namespace
- apiGroups: ["velero.io"]
resources: ["backups", "restores"]
verbs: ["create", "get", "list", "watch", "update", "patch"]
# Leader election (--leader-elect)
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["get", "list", "watch", "create", "update", "patch"]


@@ -1,5 +1,5 @@
 backupStrategyController:
-image: "ghcr.io/cozystack/cozystack/backupstrategy-controller:v1.1.1@sha256:5902db0bd64e416eacea4cd42b76cb86698276cfc9eadcb2df63a0e630d19100"
+image: "ghcr.io/cozystack/cozystack/backupstrategy-controller:v1.0.0@sha256:29735d945c69c6bbaab21068bf4ea30f6b63f4c71a7a8d95590f370abcb4b328"
 replicas: 2
 debug: false
 metrics:


@@ -8,7 +8,7 @@ spec:
 plural: buckets
 singular: bucket
 openAPISchema: |-
-{"title":"Chart Values","type":"object","properties":{"locking":{"description":"Provisions bucket from the `-lock` BucketClass (with object lock enabled).","type":"boolean","default":false},"storagePool":{"description":"Selects a specific BucketClass by storage pool name.","type":"string","default":""},"users":{"description":"Users configuration map.","type":"object","default":{},"additionalProperties":{"type":"object","properties":{"readonly":{"description":"Whether the user has read-only access.","type":"boolean"}}}}}}
+{"title":"Chart Values","type":"object","properties":{}}
 release:
 prefix: bucket-
 labels:
@@ -26,14 +26,14 @@ spec:
tags:
- storage
icon: PHN2ZyB3aWR0aD0iMTQ0IiBoZWlnaHQ9IjE0NCIgdmlld0JveD0iMCAwIDE0NCAxNDQiIGZpbGw9Im5vbmUiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+CjxyZWN0IHdpZHRoPSIxNDQiIGhlaWdodD0iMTQ0IiByeD0iMjQiIGZpbGw9InVybCgjcGFpbnQwX2xpbmVhcl82ODNfMzA5MSkiLz4KPHBhdGggZmlsbC1ydWxlPSJldmVub2RkIiBjbGlwLXJ1bGU9ImV2ZW5vZGQiIGQ9Ik03MiAzMC4xNjQxTDExNy45ODMgMzYuNzc4OVY0MC42NzM5QzExNy45ODMgNDYuNDY1MyA5Ny4zODYyIDUxLjEzMzIgNzEuOTgyNyA1MS4xMzMyQzQ2LjU3OTIgNTEuMTMzMiAyNiA0Ni40NjUzIDI2IDQwLjY3MzlWMzYuNDQzMUw3MiAzMC4xNjQxWk03MiA1OC4yNjc4QzkxLjIwODQgNTguMjY3OCAxMDcuNjU4IDU1LjU5ODYgMTE0LjU0NyA1MS44MDQ4TDExNi44MDMgNDguMTExTDExNy43MjMgNDQuNzUzVjQ4LjkxNzFMMTAyLjY3OSAxMTEuMDMzQzEwMi42NzkgMTE0Ljg5NSA4OC45NTMzIDExOCA3Mi4wMTcyIDExOEM1NS4wODEyIDExOCA0MS4zNzQzIDExNC44OTUgNDEuMzc0MyAxMTEuMDMzTDI2LjMzIDQ4LjkxNzFWNDQuODM2OUwyOS44MDA3IDUxLjkzODJDMzYuNzA2NSA1NS42NjUzIDUyLjk5OTcgNTguMjY3OCA3MiA1OC4yNjc4WiIgZmlsbD0iIzhDMzEyMyIvPgo8cGF0aCBmaWxsLXJ1bGU9ImV2ZW5vZGQiIGNsaXAtcnVsZT0iZXZlbm9kZCIgZD0iTTcyLjAwMDMgMjZDOTcuNDAzOCAyNiAxMTggMzAuNjgzOSAxMTggMzYuNDQyQzExOCA0Mi4yIDk3LjM4NjYgNDYuODUwNyA3Mi4wMDAzIDQ2Ljg1MDdDNDYuNjE0MSA0Ni44NTA3IDI2LjAxNzYgNDIuMjM0NSAyNi4wMTc2IDM2LjQ0MkMyNi4wMTc2IDMwLjY0OTQgNDYuNTk2OCAyNiA3Mi4wMDAzIDI2Wk03Mi4wMDAzIDU0LjEwMzdDOTUuNjg1NyA1NC4xMDM3IDExNS4xNzIgNTAuMDU4IDExNy43MDYgNDQuODE5N0wxMDIuNjYyIDEwNi45MzdDMTAyLjY2MiAxMTAuNzk5IDg4LjkzNjQgMTEzLjkwNSA3Mi4wMDAzIDExMy45MDVDNTUuMDY0MyAxMTMuOTA1IDQxLjMzOSAxMTAuODE2IDQxLjMzOSAxMDYuOTU0TDI2LjI5NTkgNDQuODM3QzI4Ljg0NjYgNTAuMDU4IDQ4LjMzMzMgNTQuMTAzNyA3Mi4wMDAzIDU0LjEwMzdaIiBmaWxsPSIjRTA1MjQzIi8+CjxwYXRoIGZpbGwtcnVsZT0iZXZlbm9kZCIgY2xpcC1ydWxlPSJldmVub2RkIiBkPSJNNjEuMTcyNSA2MC4wMjkzSDgxLjA5MjhWNzkuMTY3Nkg2MS4xNzI1VjYwLjAyOTNaTTQ1LjMzMDEgOTUuMzY4OEM0NS4zMzAxIDkwLjE0MiA0OS43MTA0IDg1LjkzNDIgNTUuMTUxMSA4NS45MzQyQzYwLjU5MTcgODUuOTM0MiA2NC45NzIxIDkwLjE0MiA2NC45NzIxIDk1LjM2ODhDNjQuOTcyMSAxMDAuNTk2IDYwLjU5MTcgMTA0LjgwMyA1NS4xNTExIDEwNC44MDNDNDkuNzEwNCAxMDQuODAzIDQ1LjMzMDEgMTAwLjU5NiA0NS4zMzAxIDk1LjM2ODhaTTk2LjQ0ODcgMTA0LjM2OEg3Ni43NzIyTDg2LjYxMDUgODYuNzczN0w5Ni40NDg3ID
EwNC4zNjhaIiBmaWxsPSJ3aGl0ZSIvPgo8ZGVmcz4KPGxpbmVhckdyYWRpZW50IGlkPSJwYWludDBfbGluZWFyXzY4M18zMDkxIiB4MT0iMCIgeTE9IjAiIHgyPSIxNTEiIHkyPSIxODAiIGdyYWRpZW50VW5pdHM9InVzZXJTcGFjZU9uVXNlIj4KPHN0b3Agc3RvcC1jb2xvcj0iI0ZGRjBFRSIvPgo8c3RvcCBvZmZzZXQ9IjEiIHN0b3AtY29sb3I9IiNFQzg4N0QiLz4KPC9saW5lYXJHcmFkaWVudD4KPC9kZWZzPgo8L3N2Zz4K
keysOrder: [["apiVersion"], ["appVersion"], ["kind"], ["metadata"], ["metadata", "name"], ["spec", "locking"], ["spec", "storagePool"], ["spec", "users"]]
keysOrder: [["apiVersion"], ["appVersion"], ["kind"], ["metadata"], ["metadata", "name"]]
secrets:
exclude: []
include:
- resourceNames:
- bucket-{{ .name }}
- bucket-{{ .name }}-credentials
- matchLabels:
apps.cozystack.io/user-secret: "true"
- bucket-{{ .name }}-readonly
ingresses:
exclude: []
include:


@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/s3manager:v0.5.0@sha256:5a7cae722ff6b424bdfbc4aba9d072c11b6930e2ee0f5fa97c3a565bd1c8dc88
+ghcr.io/cozystack/cozystack/s3manager:v0.5.0@sha256:279008f87460d709e99ed25ee8a1e4568a290bb9afa0e3dd3a06d524163a132b


@@ -9,7 +9,6 @@ WORKDIR /usr/src/app
 RUN wget -O- https://github.com/cloudlena/s3manager/archive/9a7c8e446b422f8973b8c461990f39fdafee9c27.tar.gz | tar -xzf- --strip 1
 ADD cozystack.patch /
 RUN git apply /cozystack.patch
-RUN go mod tidy
 RUN GOOS=$TARGETOS GOARCH=$TARGETARCH CGO_ENABLED=0 go build -ldflags="-s -w" -a -installsuffix cgo -o bin/s3manager
 FROM docker.io/library/alpine:latest


@@ -1,235 +1,3 @@
diff --git a/go.mod b/go.mod
index b5d8540..6ede8e8 100644
--- a/go.mod
+++ b/go.mod
@@ -1,10 +1,11 @@
module github.com/cloudlena/s3manager
-go 1.22.5
+go 1.23
require (
github.com/cloudlena/adapters v0.0.0-20240708203353-a39be02cc801
github.com/gorilla/mux v1.8.1
+ github.com/gorilla/sessions v1.4.0
github.com/matryer/is v1.4.1
github.com/minio/minio-go/v7 v7.0.74
github.com/spf13/viper v1.19.0
@@ -16,6 +17,7 @@ require (
github.com/go-ini/ini v1.67.0 // indirect
github.com/goccy/go-json v0.10.3 // indirect
github.com/google/uuid v1.6.0 // indirect
+ github.com/gorilla/securecookie v1.1.2 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/klauspost/compress v1.17.9 // indirect
github.com/klauspost/cpuid/v2 v2.2.8 // indirect
diff --git a/go.sum b/go.sum
index 1ea1b16..d7866ce 100644
--- a/go.sum
+++ b/go.sum
@@ -16,10 +16,16 @@ github.com/goccy/go-json v0.10.3 h1:KZ5WoDbxAIgm2HNbYckL0se1fHD6rz5j4ywS6ebzDqA=
github.com/goccy/go-json v0.10.3/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
+github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
+github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
+github.com/gorilla/securecookie v1.1.2 h1:YCIWL56dvtr73r6715mJs5ZvhtnY73hBvEF8kXD8ePA=
+github.com/gorilla/securecookie v1.1.2/go.mod h1:NfCASbcHqRSY+3a8tlWJwsQap2VX5pwzwo4h3eOamfo=
+github.com/gorilla/sessions v1.4.0 h1:kpIYOp/oi6MG/p5PgxApU8srsSw9tuFbt46Lt7auzqQ=
+github.com/gorilla/sessions v1.4.0/go.mod h1:FLWm50oby91+hl7p/wRxDth9bWSuk0qVL2emc7lT5ik=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/klauspost/compress v1.17.9 h1:6KIumPrER1LHsvBVuDa0r5xaG0Es51mhhB9BQB2qeMA=
diff --git a/main.go b/main.go
index 2ffe8ab..723a1b8 100644
--- a/main.go
+++ b/main.go
@@ -41,10 +41,12 @@ type configuration struct {
Timeout int32
SseType string
SseKey string
+ LoginMode bool
}
func parseConfiguration() configuration {
var accessKeyID, secretAccessKey, iamEndpoint string
+ var loginMode bool
viper.AutomaticEnv()
@@ -57,13 +59,10 @@ func parseConfiguration() configuration {
iamEndpoint = viper.GetString("IAM_ENDPOINT")
} else {
accessKeyID = viper.GetString("ACCESS_KEY_ID")
- if len(accessKeyID) == 0 {
- log.Fatal("please provide ACCESS_KEY_ID")
- }
-
secretAccessKey = viper.GetString("SECRET_ACCESS_KEY")
- if len(secretAccessKey) == 0 {
- log.Fatal("please provide SECRET_ACCESS_KEY")
+ if len(accessKeyID) == 0 || len(secretAccessKey) == 0 {
+ log.Println("ACCESS_KEY_ID or SECRET_ACCESS_KEY not set, starting in login mode")
+ loginMode = true
}
}
@@ -115,6 +114,7 @@ func parseConfiguration() configuration {
Timeout: timeout,
SseType: sseType,
SseKey: sseKey,
+ LoginMode: loginMode,
}
}
@@ -135,57 +135,96 @@ func main() {
log.Fatal(err)
}
- // Set up S3 client
- opts := &minio.Options{
- Secure: configuration.UseSSL,
- }
- if configuration.UseIam {
- opts.Creds = credentials.NewIAM(configuration.IamEndpoint)
- } else {
- var signatureType credentials.SignatureType
-
- switch configuration.SignatureType {
- case "V2":
- signatureType = credentials.SignatureV2
- case "V4":
- signatureType = credentials.SignatureV4
- case "V4Streaming":
- signatureType = credentials.SignatureV4Streaming
- case "Anonymous":
- signatureType = credentials.SignatureAnonymous
- default:
- log.Fatalf("Invalid SIGNATURE_TYPE: %s", configuration.SignatureType)
+ // Set up router
+ r := mux.NewRouter()
+ r.PathPrefix("/static/").Handler(http.StripPrefix("/static/", http.FileServer(http.FS(statics)))).Methods(http.MethodGet)
+
+ if configuration.LoginMode {
+ // Login mode: no pre-configured S3 client, per-session credentials
+ sessionCfg := &s3manager.SessionConfig{
+ Store: s3manager.NewSessionStore(),
+ Endpoint: configuration.Endpoint,
+ UseSSL: configuration.UseSSL,
+ SkipSSLVerify: configuration.SkipSSLVerification,
+ AllowDelete: configuration.AllowDelete,
+ ForceDownload: configuration.ForceDownload,
+ ListRecursive: configuration.ListRecursive,
+ SseInfo: sseType,
+ Templates: templates,
}
- opts.Creds = credentials.NewStatic(configuration.AccessKeyID, configuration.SecretAccessKey, "", signatureType)
- }
+ // Public routes (no auth required)
+ r.Handle("/login", s3manager.HandleLoginView(templates)).Methods(http.MethodGet)
+ r.Handle("/login", s3manager.HandleLogin(sessionCfg)).Methods(http.MethodPost)
+ r.Handle("/logout", s3manager.HandleLogout(sessionCfg)).Methods(http.MethodPost)
+
+ // Protected routes (auth required via middleware)
+ protected := mux.NewRouter()
+ protected.Handle("/", http.RedirectHandler("/buckets", http.StatusPermanentRedirect)).Methods(http.MethodGet)
+ protected.Handle("/buckets", s3manager.HandleBucketsViewDynamic(templates, configuration.AllowDelete)).Methods(http.MethodGet)
+ protected.PathPrefix("/buckets/").Handler(s3manager.HandleBucketViewDynamic(templates, configuration.AllowDelete, configuration.ListRecursive)).Methods(http.MethodGet)
+ protected.Handle("/api/buckets", s3manager.HandleCreateBucketDynamic()).Methods(http.MethodPost)
+ if configuration.AllowDelete {
+ protected.Handle("/api/buckets/{bucketName}", s3manager.HandleDeleteBucketDynamic()).Methods(http.MethodDelete)
+ }
+ protected.Handle("/api/buckets/{bucketName}/objects", s3manager.HandleCreateObjectDynamic(sseType)).Methods(http.MethodPost)
+ protected.Handle("/api/buckets/{bucketName}/objects/{objectName:.*}/url", s3manager.HandleGenerateUrlDynamic()).Methods(http.MethodGet)
+ protected.Handle("/api/buckets/{bucketName}/objects/{objectName:.*}", s3manager.HandleGetObjectDynamic(configuration.ForceDownload)).Methods(http.MethodGet)
+ if configuration.AllowDelete {
+ protected.Handle("/api/buckets/{bucketName}/objects/{objectName:.*}", s3manager.HandleDeleteObjectDynamic()).Methods(http.MethodDelete)
+ }
- if configuration.Region != "" {
- opts.Region = configuration.Region
- }
- if configuration.UseSSL && configuration.SkipSSLVerification {
- opts.Transport = &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}} //nolint:gosec
- }
- s3, err := minio.New(configuration.Endpoint, opts)
- if err != nil {
- log.Fatalln(fmt.Errorf("error creating s3 client: %w", err))
- }
+ r.PathPrefix("/").Handler(s3manager.RequireAuth(sessionCfg, protected))
+ } else {
+ // Pre-configured mode: existing behavior with static S3 client
+ opts := &minio.Options{
+ Secure: configuration.UseSSL,
+ }
+ if configuration.UseIam {
+ opts.Creds = credentials.NewIAM(configuration.IamEndpoint)
+ } else {
+ var signatureType credentials.SignatureType
+
+ switch configuration.SignatureType {
+ case "V2":
+ signatureType = credentials.SignatureV2
+ case "V4":
+ signatureType = credentials.SignatureV4
+ case "V4Streaming":
+ signatureType = credentials.SignatureV4Streaming
+ case "Anonymous":
+ signatureType = credentials.SignatureAnonymous
+ default:
+ log.Fatalf("Invalid SIGNATURE_TYPE: %s", configuration.SignatureType)
+ }
+
+ opts.Creds = credentials.NewStatic(configuration.AccessKeyID, configuration.SecretAccessKey, "", signatureType)
+ }
- // Set up router
- r := mux.NewRouter()
- r.Handle("/", http.RedirectHandler("/buckets", http.StatusPermanentRedirect)).Methods(http.MethodGet)
- r.PathPrefix("/static/").Handler(http.StripPrefix("/static/", http.FileServer(http.FS(statics)))).Methods(http.MethodGet)
- r.Handle("/buckets", s3manager.HandleBucketsView(s3, templates, configuration.AllowDelete)).Methods(http.MethodGet)
- r.PathPrefix("/buckets/").Handler(s3manager.HandleBucketView(s3, templates, configuration.AllowDelete, configuration.ListRecursive)).Methods(http.MethodGet)
- r.Handle("/api/buckets", s3manager.HandleCreateBucket(s3)).Methods(http.MethodPost)
- if configuration.AllowDelete {
- r.Handle("/api/buckets/{bucketName}", s3manager.HandleDeleteBucket(s3)).Methods(http.MethodDelete)
- }
- r.Handle("/api/buckets/{bucketName}/objects", s3manager.HandleCreateObject(s3, sseType)).Methods(http.MethodPost)
- r.Handle("/api/buckets/{bucketName}/objects/{objectName:.*}/url", s3manager.HandleGenerateUrl(s3)).Methods(http.MethodGet)
- r.Handle("/api/buckets/{bucketName}/objects/{objectName:.*}", s3manager.HandleGetObject(s3, configuration.ForceDownload)).Methods(http.MethodGet)
- if configuration.AllowDelete {
- r.Handle("/api/buckets/{bucketName}/objects/{objectName:.*}", s3manager.HandleDeleteObject(s3)).Methods(http.MethodDelete)
+ if configuration.Region != "" {
+ opts.Region = configuration.Region
+ }
+ if configuration.UseSSL && configuration.SkipSSLVerification {
+ opts.Transport = &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}} //nolint:gosec
+ }
+ s3, err := minio.New(configuration.Endpoint, opts)
+ if err != nil {
+ log.Fatalln(fmt.Errorf("error creating s3 client: %w", err))
+ }
+
+ r.Handle("/", http.RedirectHandler("/buckets", http.StatusPermanentRedirect)).Methods(http.MethodGet)
+ r.Handle("/buckets", s3manager.HandleBucketsView(s3, templates, configuration.AllowDelete)).Methods(http.MethodGet)
+ r.PathPrefix("/buckets/").Handler(s3manager.HandleBucketView(s3, templates, configuration.AllowDelete, configuration.ListRecursive)).Methods(http.MethodGet)
+ r.Handle("/api/buckets", s3manager.HandleCreateBucket(s3)).Methods(http.MethodPost)
+ if configuration.AllowDelete {
+ r.Handle("/api/buckets/{bucketName}", s3manager.HandleDeleteBucket(s3)).Methods(http.MethodDelete)
+ }
+ r.Handle("/api/buckets/{bucketName}/objects", s3manager.HandleCreateObject(s3, sseType)).Methods(http.MethodPost)
+ r.Handle("/api/buckets/{bucketName}/objects/{objectName:.*}/url", s3manager.HandleGenerateUrl(s3)).Methods(http.MethodGet)
+ r.Handle("/api/buckets/{bucketName}/objects/{objectName:.*}", s3manager.HandleGetObject(s3, configuration.ForceDownload)).Methods(http.MethodGet)
+ if configuration.AllowDelete {
+ r.Handle("/api/buckets/{bucketName}/objects/{objectName:.*}", s3manager.HandleDeleteObject(s3)).Methods(http.MethodDelete)
+ }
}
lr := logging.Handler(os.Stdout)(r)
diff --git a/web/template/bucket.html.tmpl b/web/template/bucket.html.tmpl
index e2f8d28..87add13 100644
--- a/web/template/bucket.html.tmpl
@@ -256,298 +24,3 @@ index c7ea184..fb1dce7 100644
</div>
</nav>
diff --git a/internal/app/s3manager/auth.go b/internal/app/s3manager/auth.go
new file mode 100644
index 0000000..58589e2
--- /dev/null
+++ b/internal/app/s3manager/auth.go
@@ -0,0 +1,237 @@
+package s3manager
+
+import (
+ "context"
+ "crypto/rand"
+ "crypto/tls"
+ "fmt"
+ "html/template"
+ "io/fs"
+ "log"
+ "net/http"
+
+ "github.com/gorilla/sessions"
+ "github.com/minio/minio-go/v7"
+ "github.com/minio/minio-go/v7/pkg/credentials"
+)
+
+type contextKey string
+
+const s3ContextKey contextKey = "s3client"
+
+// SessionConfig holds session store and S3 connection settings for login mode.
+type SessionConfig struct {
+ Store *sessions.CookieStore
+ Endpoint string
+ UseSSL bool
+ SkipSSLVerify bool
+ AllowDelete bool
+ ForceDownload bool
+ ListRecursive bool
+ SseInfo SSEType
+ Templates fs.FS
+}
+
+// NewSessionStore creates a CookieStore with a random encryption key.
+func NewSessionStore() *sessions.CookieStore {
+ key := make([]byte, 32)
+ if _, err := rand.Read(key); err != nil {
+ log.Fatal("failed to generate session key:", err)
+ }
+ store := sessions.NewCookieStore(key)
+ store.Options = &sessions.Options{
+ Path: "/",
+ MaxAge: 86400,
+ HttpOnly: true,
+ Secure: true,
+ SameSite: http.SameSiteLaxMode,
+ }
+ return store
+}
+
+// NewS3Client creates a minio client from user-provided credentials.
+func NewS3Client(endpoint, accessKey, secretKey string, useSSL, skipSSLVerify bool) (*minio.Client, error) {
+ opts := &minio.Options{
+ Creds: credentials.NewStaticV4(accessKey, secretKey, ""),
+ Secure: useSSL,
+ }
+ if useSSL && skipSSLVerify {
+ opts.Transport = &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}} //nolint:gosec
+ }
+ return minio.New(endpoint, opts)
+}
+
+// S3FromContext retrieves the S3 client stored in request context.
+func S3FromContext(ctx context.Context) S3 {
+ if s3, ok := ctx.Value(s3ContextKey).(S3); ok {
+ return s3
+ }
+ return nil
+}
+
+func contextWithS3(ctx context.Context, s3 S3) context.Context {
+ return context.WithValue(ctx, s3ContextKey, s3)
+}
+
+// RequireAuth is middleware that validates session credentials and injects
+// an S3 client into the request context. Redirects to /login if no session.
+func RequireAuth(cfg *SessionConfig, next http.Handler) http.Handler {
+ return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ session, _ := cfg.Store.Get(r, "s3session")
+ accessKey, ok1 := session.Values["accessKey"].(string)
+ secretKey, ok2 := session.Values["secretKey"].(string)
+ if !ok1 || !ok2 || accessKey == "" || secretKey == "" {
+ http.Redirect(w, r, "/login", http.StatusFound)
+ return
+ }
+
+ s3, err := NewS3Client(cfg.Endpoint, accessKey, secretKey, cfg.UseSSL, cfg.SkipSSLVerify)
+ if err != nil {
+ // Session has bad credentials — clear and redirect to login
+ session.Options.MaxAge = -1
+ _ = session.Save(r, w)
+ http.Redirect(w, r, "/login", http.StatusFound)
+ return
+ }
+
+ ctx := contextWithS3(r.Context(), s3)
+ next.ServeHTTP(w, r.WithContext(ctx))
+ })
+}
+
+// HandleLoginView renders the login page.
+func HandleLoginView(templates fs.FS) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ errorMsg := r.URL.Query().Get("error")
+
+ data := struct {
+ Error string
+ }{
+ Error: errorMsg,
+ }
+
+ t, err := template.ParseFS(templates, "layout.html.tmpl", "login.html.tmpl")
+ if err != nil {
+ handleHTTPError(w, fmt.Errorf("error parsing login template: %w", err))
+ return
+ }
+ err = t.ExecuteTemplate(w, "layout", data)
+ if err != nil {
+ handleHTTPError(w, fmt.Errorf("error executing login template: %w", err))
+ return
+ }
+ }
+}
+
+// HandleLogin processes the login form POST.
+func HandleLogin(cfg *SessionConfig) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ accessKey := r.FormValue("accessKey")
+ secretKey := r.FormValue("secretKey")
+
+ if accessKey == "" || secretKey == "" {
+ http.Redirect(w, r, "/login?error=credentials+required", http.StatusFound)
+ return
+ }
+
+ // Validate credentials by attempting ListBuckets
+ s3, err := NewS3Client(cfg.Endpoint, accessKey, secretKey, cfg.UseSSL, cfg.SkipSSLVerify)
+ if err != nil {
+ http.Redirect(w, r, "/login?error=connection+failed", http.StatusFound)
+ return
+ }
+ _, err = s3.ListBuckets(r.Context())
+ if err != nil {
+ http.Redirect(w, r, "/login?error=invalid+credentials", http.StatusFound)
+ return
+ }
+
+ // Save credentials to session
+ session, _ := cfg.Store.Get(r, "s3session")
+ session.Values["accessKey"] = accessKey
+ session.Values["secretKey"] = secretKey
+ err = session.Save(r, w)
+ if err != nil {
+ handleHTTPError(w, fmt.Errorf("error saving session: %w", err))
+ return
+ }
+
+ http.Redirect(w, r, "/buckets", http.StatusFound)
+ }
+}
+
+// HandleLogout destroys the session and redirects to login.
+func HandleLogout(cfg *SessionConfig) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ session, _ := cfg.Store.Get(r, "s3session")
+ session.Options.MaxAge = -1
+ _ = session.Save(r, w)
+ http.Redirect(w, r, "/login", http.StatusFound)
+ }
+}
+
+// Dynamic handler wrappers — extract S3 from context, delegate to original handlers.
+
+// HandleBucketsViewDynamic wraps HandleBucketsView for login mode.
+func HandleBucketsViewDynamic(templates fs.FS, allowDelete bool) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ s3 := S3FromContext(r.Context())
+ HandleBucketsView(s3, templates, allowDelete).ServeHTTP(w, r)
+ }
+}
+
+// HandleBucketViewDynamic wraps HandleBucketView for login mode.
+func HandleBucketViewDynamic(templates fs.FS, allowDelete bool, listRecursive bool) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ s3 := S3FromContext(r.Context())
+ HandleBucketView(s3, templates, allowDelete, listRecursive).ServeHTTP(w, r)
+ }
+}
+
+// HandleCreateBucketDynamic wraps HandleCreateBucket for login mode.
+func HandleCreateBucketDynamic() http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ s3 := S3FromContext(r.Context())
+ HandleCreateBucket(s3).ServeHTTP(w, r)
+ }
+}
+
+// HandleDeleteBucketDynamic wraps HandleDeleteBucket for login mode.
+func HandleDeleteBucketDynamic() http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ s3 := S3FromContext(r.Context())
+ HandleDeleteBucket(s3).ServeHTTP(w, r)
+ }
+}
+
+// HandleCreateObjectDynamic wraps HandleCreateObject for login mode.
+func HandleCreateObjectDynamic(sseInfo SSEType) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ s3 := S3FromContext(r.Context())
+ HandleCreateObject(s3, sseInfo).ServeHTTP(w, r)
+ }
+}
+
+// HandleGenerateUrlDynamic wraps HandleGenerateUrl for login mode.
+func HandleGenerateUrlDynamic() http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ s3 := S3FromContext(r.Context())
+ HandleGenerateUrl(s3).ServeHTTP(w, r)
+ }
+}
+
+// HandleGetObjectDynamic wraps HandleGetObject for login mode.
+func HandleGetObjectDynamic(forceDownload bool) http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ s3 := S3FromContext(r.Context())
+ HandleGetObject(s3, forceDownload).ServeHTTP(w, r)
+ }
+}
+
+// HandleDeleteObjectDynamic wraps HandleDeleteObject for login mode.
+func HandleDeleteObjectDynamic() http.HandlerFunc {
+ return func(w http.ResponseWriter, r *http.Request) {
+ s3 := S3FromContext(r.Context())
+ HandleDeleteObject(s3).ServeHTTP(w, r)
+ }
+}
diff --git a/web/template/login.html.tmpl b/web/template/login.html.tmpl
new file mode 100644
index 0000000..f153018
--- /dev/null
+++ b/web/template/login.html.tmpl
@@ -0,0 +1,46 @@
+{{ define "content" }}
+<nav>
+ <div class="nav-wrapper container">
+ <a href="/" class="brand-logo">Cozystack S3 Manager</a>
+ </div>
+</nav>
+
+<div class="container">
+ <div class="section">
+ <div class="row">
+ <div class="col l6 offset-l3 m8 offset-m2 s12">
+ <div class="card">
+ <div class="card-content">
+ <span class="card-title">Sign In</span>
+ <p>Enter your S3 credentials to access the bucket manager.</p>
+ <br>
+
+ {{ if .Error }}
+ <div class="card-panel red lighten-4 red-text text-darken-4">
+ <i class="material-icons tiny">error</i> {{ .Error }}
+ </div>
+ {{ end }}
+
+ <form method="POST" action="/login">
+ <div class="input-field">
+ <i class="material-icons prefix">vpn_key</i>
+ <input id="accessKey" name="accessKey" type="text" required>
+ <label for="accessKey">Access Key ID</label>
+ </div>
+ <div class="input-field">
+ <i class="material-icons prefix">lock</i>
+ <input id="secretKey" name="secretKey" type="password" required>
+ <label for="secretKey">Secret Access Key</label>
+ </div>
+ <br>
+ <button type="submit" class="btn waves-effect waves-light" style="width:100%;">
+ Sign In <i class="material-icons right">send</i>
+ </button>
+ </form>
+ </div>
+ </div>
+ </div>
+ </div>
+ </div>
+</div>
+{{ end }}


@@ -17,6 +17,19 @@ spec:
         image: "{{ $.Files.Get "images/s3manager.tag" | trim }}"
         env:
         - name: ENDPOINT
-          value: "s3.{{ .Values._namespace.host }}"
+          valueFrom:
+            secretKeyRef:
+              name: {{ .Values.bucketName }}-credentials
+              key: endpoint
         - name: SKIP_SSL_VERIFICATION
           value: "true"
+        - name: ACCESS_KEY_ID
+          valueFrom:
+            secretKeyRef:
+              name: {{ .Values.bucketName }}-credentials
+              key: accessKey
+        - name: SECRET_ACCESS_KEY
+          valueFrom:
+            secretKeyRef:
+              name: {{ .Values.bucketName }}-credentials
+              key: secretKey


@@ -8,6 +8,9 @@ kind: Ingress
 metadata:
   name: {{ .Values.bucketName }}-ui
   annotations:
+    nginx.ingress.kubernetes.io/auth-type: "basic"
+    nginx.ingress.kubernetes.io/auth-secret: "{{ .Values.bucketName }}-ui-auth"
+    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
     nginx.ingress.kubernetes.io/proxy-body-size: "0"
     nginx.ingress.kubernetes.io/proxy-read-timeout: "99999"
     nginx.ingress.kubernetes.io/proxy-send-timeout: "99999"


@@ -1,2 +1,24 @@
-{{/* Secrets previously used for s3manager credential injection and nginx basic auth */}}
-{{/* are no longer needed — s3manager now handles authentication via its own login page */}}
+{{- $existingSecret := lookup "v1" "Secret" .Release.Namespace .Values.bucketName }}
+{{- $bucketInfo := fromJson (b64dec (index $existingSecret.data "BucketInfo")) }}
+{{- $accessKeyID := index $bucketInfo.spec.secretS3 "accessKeyID" }}
+{{- $accessSecretKey := index $bucketInfo.spec.secretS3 "accessSecretKey" }}
+{{- $endpoint := index $bucketInfo.spec.secretS3 "endpoint" }}
+{{- $bucketName := index $bucketInfo.spec "bucketName" }}
+apiVersion: v1
+kind: Secret
+metadata:
+  name: {{ .Values.bucketName }}-credentials
+type: Opaque
+stringData:
+  accessKey: {{ $accessKeyID | quote }}
+  secretKey: {{ $accessSecretKey | quote }}
+  endpoint: {{ trimPrefix "https://" $endpoint }}
+  bucketName: {{ $bucketName | quote }}
+---
+apiVersion: v1
+kind: Secret
+metadata:
+  name: {{ .Values.bucketName }}-ui-auth
+data:
+  auth: {{ htpasswd $accessKeyID $accessSecretKey | b64enc | quote }}


@@ -1,20 +0,0 @@
-{{- range $name, $user := .Values.users }}
-{{- $secretName := printf "%s-%s" $.Values.bucketName $name }}
-{{- $existingSecret := lookup "v1" "Secret" $.Release.Namespace $secretName }}
-{{- if $existingSecret }}
-{{- $bucketInfo := fromJson (b64dec (index $existingSecret.data "BucketInfo")) }}
----
-apiVersion: v1
-kind: Secret
-metadata:
-  name: {{ $secretName }}-credentials
-  labels:
-    apps.cozystack.io/user-secret: "true"
-type: Opaque
-stringData:
-  accessKey: {{ index $bucketInfo.spec.secretS3 "accessKeyID" | quote }}
-  secretKey: {{ index $bucketInfo.spec.secretS3 "accessSecretKey" | quote }}
-  endpoint: {{ trimPrefix "https://" (index $bucketInfo.spec.secretS3 "endpoint") }}
-  bucketName: {{ index $bucketInfo.spec "bucketName" | quote }}
-{{- end }}
-{{- end }}


@@ -1,2 +1 @@
 bucketName: "cozystack"
-users: {}


@@ -18,8 +18,6 @@ spec:
   issuerRef:
     name: cozystack-api-selfsigned
   isCA: true
-  privateKey:
-    rotationPolicy: Never
 ---
 apiVersion: cert-manager.io/v1
 kind: Issuer

View File

@@ -1,3 +1,3 @@
 cozystackAPI:
-  image: ghcr.io/cozystack/cozystack/cozystack-api:v1.1.1@sha256:07a5437746c8dca8511ea545defc88d88d11ddf1ac4c989d276d261509514360
+  image: ghcr.io/cozystack/cozystack/cozystack-api:v1.0.0@sha256:bd70ecb944bde9a0d6b88114aea89bdbbe2d07e33f03175cfd885de013e88294
   replicas: 2


@@ -1,4 +1,4 @@
 cozystackController:
-  image: ghcr.io/cozystack/cozystack/cozystack-controller:v1.1.1@sha256:01a242eb2b1edb2c19662205c69db4415e684f6ff84496d10b82712e3ef8ead0
+  image: ghcr.io/cozystack/cozystack/cozystack-controller:v1.0.0@sha256:da01085026a4a01514ae435c7bfb48cca2cf00eb17feb2ed7ae88711f82693e0
   debug: false
   disableTelemetry: false


@@ -6,7 +6,7 @@ FROM node:${NODE_VERSION}-alpine AS openapi-k8s-toolkit-builder
 RUN apk add git
 WORKDIR /src
 # release/1.4.0
-ARG COMMIT=d6b9e4ad0d1eb9d3730f7f0c664792c8dda3214d
+ARG COMMIT=c67029cc7b7495c65ee0406033576e773a73bb01
 RUN wget -O- https://github.com/PRO-Robotech/openapi-k8s-toolkit/archive/${COMMIT}.tar.gz | tar -xzvf- --strip-components=1
 COPY openapi-k8s-toolkit/patches /patches


@@ -1,37 +0,0 @@
-diff --git a/src/localTypes/formExtensions.ts b/src/localTypes/formExtensions.ts
---- a/src/localTypes/formExtensions.ts
-+++ b/src/localTypes/formExtensions.ts
-@@ -59,2 +59,4 @@
-   relatedValuePath?: string
-+  allowEmpty?: boolean
-+  persistType?: 'str' | 'number' | 'arr' | 'obj'
- }
-diff --git a/src/components/molecules/BlackholeForm/molecules/FormListInput/FormListInput.tsx b/src/components/molecules/BlackholeForm/molecules/FormListInput/FormListInput.tsx
---- a/src/components/molecules/BlackholeForm/molecules/FormListInput/FormListInput.tsx
-+++ b/src/components/molecules/BlackholeForm/molecules/FormListInput/FormListInput.tsx
-@@ -149,3 +149,10 @@
-   }, [relatedPath, form, arrName, fixedName, relatedFieldValue, onValuesChangeCallBack, isTouchedPeristed])
-+  // When allowEmpty is set, auto-persist the field so the BFF preserves empty values
-+  useEffect(() => {
-+    if (customProps.allowEmpty) {
-+      persistedControls.onPersistMark(persistName || name, customProps.persistType ?? 'str')
-+    }
-+  }, [customProps.allowEmpty, customProps.persistType, persistedControls, persistName, name])
-+
-   const uri = prepareTemplate({
-@@ -267,5 +274,14 @@
-   validateTrigger="onBlur"
-   hasFeedback={designNewLayout ? { icons: feedbackIcons } : true}
-   style={{ flex: 1 }}
-+  normalize={(value: unknown) => {
-+    if (customProps.allowEmpty && (value === undefined || value === null)) {
-+      if (customProps.persistType === 'number') return 0
-+      if (customProps.persistType === 'arr') return []
-+      if (customProps.persistType === 'obj') return {}
-+      return ''
-+    }
-+    return value
-+  }}
- >
-   <Select


@@ -0,0 +1,49 @@
+diff --git a/src/components/molecules/BlackholeForm/molecules/FormListInput/FormListInput.tsx b/src/components/molecules/BlackholeForm/molecules/FormListInput/FormListInput.tsx
+index d5e5230..9038dbb 100644
+--- a/src/components/molecules/BlackholeForm/molecules/FormListInput/FormListInput.tsx
++++ b/src/components/molecules/BlackholeForm/molecules/FormListInput/FormListInput.tsx
+@@ -259,14 +259,15 @@ export const FormListInput: FC<TFormListInputProps> = ({
+       <PersistedCheckbox formName={persistName || name} persistedControls={persistedControls} type="arr" />
+     </Flex>
+   </Flex>
+-  <ResetedFormItem
+-    key={arrKey !== undefined ? arrKey : Array.isArray(name) ? name.slice(-1)[0] : name}
+-    name={arrName || fixedName}
+-    rules={[getRequiredRule(forceNonRequired === false && !!required?.includes(getStringByName(name)), name)]}
+-    validateTrigger="onBlur"
+-    hasFeedback={designNewLayout ? { icons: feedbackIcons } : true}
+-  >
+-    <Flex gap={8} align="center">
++  <Flex gap={8} align="center">
++    <ResetedFormItem
++      key={arrKey !== undefined ? arrKey : Array.isArray(name) ? name.slice(-1)[0] : name}
++      name={arrName || fixedName}
++      rules={[getRequiredRule(forceNonRequired === false && !!required?.includes(getStringByName(name)), name)]}
++      validateTrigger="onBlur"
++      hasFeedback={designNewLayout ? { icons: feedbackIcons } : true}
++      style={{ flex: 1 }}
++    >
+       <Select
+         mode={customProps.mode}
+         placeholder="Select"
+@@ -277,13 +278,13 @@ export const FormListInput: FC<TFormListInputProps> = ({
+         showSearch
+         style={{ width: '100%' }}
+       />
+-      {relatedValueTooltip && (
+-        <Tooltip title={relatedValueTooltip}>
+-          <QuestionCircleOutlined />
+-        </Tooltip>
+-      )}
+-    </Flex>
+-  </ResetedFormItem>
++    </ResetedFormItem>
++    {relatedValueTooltip && (
++      <Tooltip title={relatedValueTooltip}>
++        <QuestionCircleOutlined />
++      </Tooltip>
++    )}
++  </Flex>
+ </HiddenContainer>
+ )
+}


@@ -1,29 +0,0 @@
-diff --git a/src/components/organisms/DynamicComponents/molecules/SecretBase64Plain/SecretBase64Plain.tsx b/src/components/organisms/DynamicComponents/molecules/SecretBase64Plain/SecretBase64Plain.tsx
---- a/src/components/organisms/DynamicComponents/molecules/SecretBase64Plain/SecretBase64Plain.tsx
-+++ b/src/components/organisms/DynamicComponents/molecules/SecretBase64Plain/SecretBase64Plain.tsx
-@@ -145,6 +145,12 @@
-   <Styled.DisabledInput
-     $hidden={effectiveHidden}
-     onClick={e => handleInputClick(e, effectiveHidden, value)}
-+    onCopy={e => {
-+      if (!effectiveHidden) {
-+        e.preventDefault()
-+        e.clipboardData?.setData('text/plain', value)
-+      }
-+    }}
-     value={shownValue}
-     readOnly
-   />
-@@ -161,6 +167,12 @@
-   <Styled.DisabledInput
-     $hidden={effectiveHidden}
-     onClick={e => handleInputClick(e, effectiveHidden, value)}
-+    onCopy={e => {
-+      if (!effectiveHidden) {
-+        e.preventDefault()
-+        e.clipboardData?.setData('text/plain', value)
-+      }
-+    }}
-     value={shownValue}
-     readOnly
-   />


@@ -1,6 +1,6 @@
 {{- $brandingConfig := .Values._cluster.branding | default dict }}
-{{- $tenantText := "v1.1.1" }}
+{{- $tenantText := "v1.0.0" }}
 {{- $footerText := "Cozystack" }}
 {{- $titleText := "Cozystack Dashboard" }}
 {{- $logoText := "" }}


@@ -1,6 +1,6 @@
 openapiUI:
-  image: ghcr.io/cozystack/cozystack/openapi-ui:v1.1.1@sha256:0c27362f075f9637a1fc4f716229ab6dab16ffa2b3c858b3e8c542502d6b244c
+  image: ghcr.io/cozystack/cozystack/openapi-ui:v1.0.0@sha256:73a8bd4283a46a99d22536eece9c2059fa2fb1c17b43ddefe6716e8960e4731e
 openapiUIK8sBff:
-  image: ghcr.io/cozystack/cozystack/openapi-ui-k8s-bff:v1.1.1@sha256:c938fee904acd948800d4dc5e121c4c5cd64cb4a3160fb8d2f9dbff0e5168740
+  image: ghcr.io/cozystack/cozystack/openapi-ui-k8s-bff:v1.0.0@sha256:c938fee904acd948800d4dc5e121c4c5cd64cb4a3160fb8d2f9dbff0e5168740
 tokenProxy:
-  image: ghcr.io/cozystack/cozystack/token-proxy:v1.1.1@sha256:2e280991e07853ea48f97b0a42946afffa10d03d6a83d41099ed83e6ffc94fdc
+  image: ghcr.io/cozystack/cozystack/token-proxy:v1.0.0@sha256:2e280991e07853ea48f97b0a42946afffa10d03d6a83d41099ed83e6ffc94fdc


@@ -38,8 +38,8 @@
 | kubeRbacProxy.args[2] | string | `"--logtostderr=true"` | |
 | kubeRbacProxy.args[3] | string | `"--v=0"` | |
 | kubeRbacProxy.image.pullPolicy | string | `"IfNotPresent"` | Image pull policy |
-| kubeRbacProxy.image.repository | string | `"quay.io/brancz/kube-rbac-proxy"` | Image repository |
-| kubeRbacProxy.image.tag | string | `"v0.18.1"` | Version of image |
+| kubeRbacProxy.image.repository | string | `"gcr.io/kubebuilder/kube-rbac-proxy"` | Image repository |
+| kubeRbacProxy.image.tag | string | `"v0.16.0"` | Version of image |
 | kubeRbacProxy.livenessProbe | object | `{}` | https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/ |
 | kubeRbacProxy.readinessProbe | object | `{}` | https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/ |
 | kubeRbacProxy.resources | object | `{"limits":{"cpu":"250m","memory":"128Mi"},"requests":{"cpu":"100m","memory":"64Mi"}}` | ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |


@@ -98,13 +98,13 @@ kubeRbacProxy:
   image:
     # -- Image repository
-    repository: quay.io/brancz/kube-rbac-proxy
+    repository: gcr.io/kubebuilder/kube-rbac-proxy
     # -- Image pull policy
     pullPolicy: IfNotPresent
     # -- Version of image
-    tag: v0.18.1
+    tag: v0.16.0
   args:
     - --secure-listen-address=0.0.0.0:8443


@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/grafana-dashboards:v1.1.1@sha256:2c9aa0b48e2bf6167db198f4d15882bfe51700108edf2e9f6d0942940a2c1204
+ghcr.io/cozystack/cozystack/grafana-dashboards:v1.0.0@sha256:7a3c9af59f8d74d5a23750bbc845c7de64610dbd4d4f84011e10be037b3ce2a0


@@ -8,6 +8,5 @@ update:
 	helm repo update ingress-nginx
 	helm pull ingress-nginx/ingress-nginx --untar --untardir charts
 	patch --no-backup-if-mismatch -p 3 < patches/add-metrics2.patch
-	patch --no-backup-if-mismatch -p4 < patches/disable-ca-key-rotation.patch
 	rm -f charts/ingress-nginx/templates/controller-deployment.yaml.orig
 	rm -rf charts/ingress-nginx/changelog/


@@ -23,8 +23,6 @@ spec:
     name: {{ include "ingress-nginx.fullname" . }}-self-signed-issuer
   commonName: "ca.webhook.ingress-nginx"
   isCA: true
-  privateKey:
-    rotationPolicy: Never
   subject:
     organizations:
       - ingress-nginx


@@ -1,13 +0,0 @@
-diff --git a/packages/system/ingress-nginx/charts/ingress-nginx/templates/admission-webhooks/cert-manager.yaml b/packages/system/ingress-nginx/charts/ingress-nginx/templates/admission-webhooks/cert-manager.yaml
-index db2946c3..fab1e02e 100644
---- a/packages/system/ingress-nginx/charts/ingress-nginx/templates/admission-webhooks/cert-manager.yaml
-+++ b/packages/system/ingress-nginx/charts/ingress-nginx/templates/admission-webhooks/cert-manager.yaml
-@@ -23,6 +23,8 @@ spec:
-     name: {{ include "ingress-nginx.fullname" . }}-self-signed-issuer
-   commonName: "ca.webhook.ingress-nginx"
-   isCA: true
-+  privateKey:
-+    rotationPolicy: Never
-   subject:
-     organizations:
-       - ingress-nginx


@@ -3,7 +3,7 @@ kamaji:
   deploy: false
   image:
     pullPolicy: IfNotPresent
-    tag: v1.1.1@sha256:914d04f7442f0faecf18f8282c192dee9fe244a711494a8c892e2f9e2ad415f7
+    tag: v1.0.0@sha256:50db517ebe7698083dd32223a96c987b6ed0c88d3a093969beb571e4a96d18e4
     repository: ghcr.io/cozystack/cozystack/kamaji
   resources:
     limits:
@@ -13,4 +13,4 @@ kamaji:
       cpu: 100m
       memory: 100Mi
   extraArgs:
-    - --migrate-image=ghcr.io/cozystack/cozystack/kamaji:v1.1.1@sha256:914d04f7442f0faecf18f8282c192dee9fe244a711494a8c892e2f9e2ad415f7
+    - --migrate-image=ghcr.io/cozystack/cozystack/kamaji:v1.0.0@sha256:50db517ebe7698083dd32223a96c987b6ed0c88d3a093969beb571e4a96d18e4


@@ -76,18 +76,14 @@ spec:
         {{- end }}
         - name: KC_METRICS_ENABLED
           value: "true"
-        - name: KC_HEALTH_ENABLED
-          value: "true"
         - name: KC_LOG_LEVEL
           value: "info"
         - name: KC_CACHE
           value: "ispn"
         - name: KC_CACHE_STACK
           value: "kubernetes"
-        - name: KC_PROXY_HEADERS
-          value: "xforwarded"
-        - name: KC_HTTP_ENABLED
-          value: "true"
+        - name: KC_PROXY
+          value: "edge"
         - name: KEYCLOAK_ADMIN
           value: admin
         - name: KEYCLOAK_ADMIN_PASSWORD
@@ -132,27 +128,16 @@ spec:
         - name: http
           containerPort: 8080
           protocol: TCP
-        - name: management
-          containerPort: 9000
-          protocol: TCP
-      startupProbe:
-        httpGet:
-          path: /health/ready
-          port: management
-        failureThreshold: 30
-        periodSeconds: 10
       livenessProbe:
         httpGet:
-          path: /health/live
-          port: management
-        periodSeconds: 15
+          path: /
+          port: http
+        initialDelaySeconds: 120
+        timeoutSeconds: 5
+        failureThreshold: 5
       readinessProbe:
         httpGet:
-          path: /health/ready
-          port: management
-        periodSeconds: 10
-        timeoutSeconds: 5
-        failureThreshold: 3
+          path: /realms/master
+          port: http
+        initialDelaySeconds: 60
+        timeoutSeconds: 1
       terminationGracePeriodSeconds: 60


@@ -1,4 +1,4 @@
 portSecurity: true
 routes: ""
-image: ghcr.io/cozystack/cozystack/kubeovn-plunger:v1.1.1@sha256:79bfdea16ad23c3e7121b0ec0abf016ba1d841af0d955e95d258a2f4da28f285
+image: ghcr.io/cozystack/cozystack/kubeovn-plunger:v1.0.0@sha256:b6045fdb4f324b9b1cb44a218c40422aafbbc600b085c819ff58809bb6e97220
 ovnCentralName: ovn-central


@@ -18,8 +18,6 @@ spec:
   issuerRef:
     name: {{ include "namespace-annotation-webhook.fullname" . }}-selfsigned-issuer
   isCA: true
-  privateKey:
-    rotationPolicy: Never
 ---
 apiVersion: cert-manager.io/v1
 kind: Issuer


@@ -1,3 +1,3 @@
 portSecurity: true
 routes: ""
-image: ghcr.io/cozystack/cozystack/kubeovn-webhook:v1.1.1@sha256:e18f9fd679e38f65362a8d0042f25468272f6d081136ad47027168d8e7e07a4a
+image: ghcr.io/cozystack/cozystack/kubeovn-webhook:v1.0.0@sha256:e18f9fd679e38f65362a8d0042f25468272f6d081136ad47027168d8e7e07a4a


@@ -1,3 +1,3 @@
 storageClass: replicated
 csiDriver:
-  image: ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:1c8c842277f45f189a5c645fcf7b2023c8ed7189f44029ce8b988019000da14c
+  image: ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:434aa3b8e2a3cbf6681426b174e1c4fde23bafd12a6cccd046b5cb1749092ec4


@@ -18,8 +18,6 @@ spec:
   issuerRef:
     name: lineage-controller-webhook-selfsigned
   isCA: true
-  privateKey:
-    rotationPolicy: Never
 ---
 apiVersion: cert-manager.io/v1
 kind: Issuer


@@ -1,5 +1,5 @@
 lineageControllerWebhook:
-  image: ghcr.io/cozystack/cozystack/lineage-controller-webhook:v1.1.1@sha256:f2c0f41a8d5bdbddc38c4f27f9242e581a3d503e039597866d0899de41fde7bb
+  image: ghcr.io/cozystack/cozystack/lineage-controller-webhook:v1.0.0@sha256:af765c2829db4f513084522a384710acc321bd4a332eaf7fe814fecacea1022f
   debug: false
   localK8sAPIEndpoint:
     enabled: true

View File

@@ -8,4 +8,3 @@ update:
 	helm repo add piraeus-charts https://piraeus.io/helm-charts/
 	helm repo update piraeus-charts
 	helm pull piraeus-charts/linstor-scheduler --untar --untardir charts
-	patch --no-backup-if-mismatch -p4 < patches/disable-ca-key-rotation.patch


@@ -24,8 +24,6 @@ spec:
   issuerRef:
     name: {{ include "linstor-scheduler.fullname" . }}-admission-selfsigned
   isCA: true
-  privateKey:
-    rotationPolicy: Never
 ---
 apiVersion: cert-manager.io/v1
 kind: Issuer


@@ -1,13 +0,0 @@
-diff --git a/packages/system/linstor-scheduler/charts/linstor-scheduler/templates/certmanager.yaml b/packages/system/linstor-scheduler/charts/linstor-scheduler/templates/certmanager.yaml
-index 3942555b..760a9d5d 100644
---- a/packages/system/linstor-scheduler/charts/linstor-scheduler/templates/certmanager.yaml
-+++ b/packages/system/linstor-scheduler/charts/linstor-scheduler/templates/certmanager.yaml
-@@ -24,6 +24,8 @@ spec:
-   issuerRef:
-     name: {{ include "linstor-scheduler.fullname" . }}-admission-selfsigned
-   isCA: true
-+  privateKey:
-+    rotationPolicy: Never
- ---
- apiVersion: cert-manager.io/v1
- kind: Issuer


@@ -9,8 +9,6 @@ spec:
   secretName: linstor-api-ca
   duration: 87600h # 10 years
   isCA: true
-  privateKey:
-    rotationPolicy: Never
   usages:
     - signing
     - key encipherment


@@ -9,8 +9,6 @@ spec:
   secretName: linstor-internal-ca
   duration: 87600h # 10 years
   isCA: true
-  privateKey:
-    rotationPolicy: Never
   usages:
     - signing
     - key encipherment


@@ -13,4 +13,4 @@ linstor:
 linstorCSI:
   image:
     repository: ghcr.io/cozystack/cozystack/linstor-csi
-    tag: v1.10.5@sha256:21d48617cff1448e759be8fb9a9cc3d03ded97e2a7045b37f3558d317e966741
+    tag: v1.10.5@sha256:c87b6f6dadaa6e3a3643d3279e81742830147f6c38f99e9232d9780abbcac897


@@ -45,5 +45,3 @@ hubble/l7-http-metrics
 hubble/network-overview
 nats/nats-jetstream
 nats/nats-server
-mongodb/mongodb-overview
-mongodb/mongodb-inmemory


@@ -1,3 +1,3 @@
 objectstorage:
   controller:
-    image: "ghcr.io/cozystack/cozystack/objectstorage-controller:v1.1.1@sha256:e40e94f3014cfd04cce4230597315a1acfcca2daa8051b987614d0c05da6d928"
+    image: "ghcr.io/cozystack/cozystack/objectstorage-controller:v1.0.0@sha256:e40e94f3014cfd04cce4230597315a1acfcca2daa8051b987614d0c05da6d928"


@@ -8,7 +8,7 @@ spec:
     plural: rabbitmqs
     singular: rabbitmq
   openAPISchema: |-
-    {"title":"Chart Values","type":"object","properties":{"external":{"description":"Enable external access from outside the cluster.","type":"boolean","default":false},"replicas":{"description":"Number of RabbitMQ replicas.","type":"integer","default":3},"resources":{"description":"Explicit CPU and memory configuration for each RabbitMQ replica. When omitted, the preset defined in `resourcesPreset` is applied.","type":"object","default":{},"properties":{"cpu":{"description":"CPU available to each replica.","pattern":"^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$","anyOf":[{"type":"integer"},{"type":"string"}],"x-kubernetes-int-or-string":true},"memory":{"description":"Memory (RAM) available to each replica.","pattern":"^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$","anyOf":[{"type":"integer"},{"type":"string"}],"x-kubernetes-int-or-string":true}}},"resourcesPreset":{"description":"Default sizing preset used when `resources` is omitted.","type":"string","default":"nano","enum":["nano","micro","small","medium","large","xlarge","2xlarge"]},"size":{"description":"Persistent Volume Claim size available for application data.","default":"10Gi","pattern":"^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$","anyOf":[{"type":"integer"},{"type":"string"}],"x-kubernetes-int-or-string":true},"storageClass":{"description":"StorageClass used to store the data.","type":"string","default":""},"users":{"description":"Users configuration map.","type":"object","default":{},"additionalProperties":{"type":"object","properties":{"password":{"description":"Password for the user.","type":"string"}}}},"version":{"description":"RabbitMQ major.minor version to deploy","type":"string","default":"v4.2","enum":["v4.2","v4.1","v4.0","v3.13"]},"vhosts":{"description":"Virtual hosts configuration map.","type":"object","default":{},"additionalProperties":{"type":"object","required":["roles"],"properties":{"roles":{"description":"Virtual host roles list.","type":"object","properties":{"admin":{"description":"List of admin users.","type":"array","items":{"type":"string"}},"readonly":{"description":"List of readonly users.","type":"array","items":{"type":"string"}}}}}}}}}
+    {"title":"Chart Values","type":"object","properties":{"external":{"description":"Enable external access from outside the cluster.","type":"boolean","default":false},"replicas":{"description":"Number of RabbitMQ replicas.","type":"integer","default":3},"resources":{"description":"Explicit CPU and memory configuration for each RabbitMQ replica. When omitted, the preset defined in `resourcesPreset` is applied.","type":"object","default":{},"properties":{"cpu":{"description":"CPU available to each replica.","pattern":"^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$","anyOf":[{"type":"integer"},{"type":"string"}],"x-kubernetes-int-or-string":true},"memory":{"description":"Memory (RAM) available to each replica.","pattern":"^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$","anyOf":[{"type":"integer"},{"type":"string"}],"x-kubernetes-int-or-string":true}}},"resourcesPreset":{"description":"Default sizing preset used when `resources` is omitted.","type":"string","default":"nano","enum":["nano","micro","small","medium","large","xlarge","2xlarge"]},"size":{"description":"Persistent Volume Claim size available for application data.","default":"10Gi","pattern":"^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$","anyOf":[{"type":"integer"},{"type":"string"}],"x-kubernetes-int-or-string":true},"storageClass":{"description":"StorageClass used to store the data.","type":"string","default":""},"users":{"description":"Users configuration map.","type":"object","default":{},"additionalProperties":{"type":"object","properties":{"password":{"description":"Password for the user.","type":"string"}}}},"vhosts":{"description":"Virtual hosts configuration map.","type":"object","default":{},"additionalProperties":{"type":"object","required":["roles"],"properties":{"roles":{"description":"Virtual host roles list.","type":"object","properties":{"admin":{"description":"List of admin users.","type":"array","items":{"type":"string"}},"readonly":{"description":"List of readonly users.","type":"array","items":{"type":"string"}}}}}}}}}
   release:
     prefix: rabbitmq-
     labels:
@@ -25,7 +25,7 @@ spec:
tags:
- messaging
icon: PHN2ZyB3aWR0aD0iMTQ0IiBoZWlnaHQ9IjE0NCIgdmlld0JveD0iMCAwIDE0NCAxNDQiIGZpbGw9Im5vbmUiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+CjxyZWN0IHg9Ii0wLjAwMTk1MzEyIiB3aWR0aD0iMTQ0IiBoZWlnaHQ9IjE0NCIgcng9IjI0IiBmaWxsPSJ1cmwoI3BhaW50MF9saW5lYXJfNjgzXzI5NzIpIi8+CjxwYXRoIGQ9Ik0xMTEuNDExIDYyLjhIODIuNDkzOUM4MS43OTY5IDYyLjc5OTcgODEuMTI4NSA2Mi41MjI4IDgwLjYzNTYgNjIuMDMwMUM4MC4xNDI3IDYxLjUzNzMgNzkuODY1NiA2MC44NjkxIDc5Ljg2NTMgNjAuMTcyMlYzMC4wNDEyQzc5Ljg2NTMgMjcuODEgNzguMDU1NiAyNiA3NS44MjQ5IDI2SDY1LjUwMjFDNjMuMjcgMjYgNjEuNDYxNCAyNy44MSA2MS40NjE0IDMwLjA0MTJWNTkuOTg5OEM2MS40NjE0IDYxLjU0MzUgNjAuMjA1IDYyLjgwNjUgNTguNjUwOCA2Mi44MTMzTDQ5LjE3NDMgNjIuODU4NEM0Ny42MDY5IDYyLjg2NjkgNDYuMzMzNiA2MS41OTU1IDQ2LjMzNjYgNjAuMDI5OEw0Ni4zOTU0IDMwLjA0OEM0Ni40MDA1IDI3LjgxMzQgNDQuNTkwMiAyNiA0Mi4zNTUgMjZIMzIuMDQwN0MyOS44MDgzIDI2IDI4IDI3LjgxIDI4IDMwLjA0MTJWMTE0LjQxMkMyOCAxMTYuMzk0IDI5LjYwNjEgMTE4IDMxLjU4NzEgMTE4SDExMS40MTFDMTEzLjM5NCAxMTggMTE1IDExNi4zOTQgMTE1IDExNC40MTJWNjYuMzg4QzExNSA2NC40MDU4IDExMy4zOTQgNjIuOCAxMTEuNDExIDYyLjhaTTk3Ljg1MDggOTQuNDc3OUM5Ny44NTA4IDk3LjA3NTUgOTUuNzQ0NSA5OS4xODE3IDkzLjE0NjQgOTkuMTgxN0g4NC45ODg0QzgyLjM5IDk5LjE4MTcgODAuMjgzNiA5Ny4wNzU1IDgwLjI4MzYgOTQuNDc3OVY4Ni4zMjE3QzgwLjI4MzYgODMuNzIzOCA4Mi4zOSA4MS42MTc5IDg0Ljk4ODQgODEuNjE3OUg5My4xNDY0Qzk1Ljc0NDUgODEuNjE3OSA5Ny44NTA4IDgzLjcyMzggOTcuODUwOCA4Ni4zMjE3Vjk0LjQ3NzlaIiBmaWxsPSJ3aGl0ZSIvPgo8ZGVmcz4KPGxpbmVhckdyYWRpZW50IGlkPSJwYWludDBfbGluZWFyXzY4M18yOTcyIiB4MT0iNSIgeTE9Ii03LjUiIHgyPSIxNDEiIHkyPSIxMjQuNSIgZ3JhZGllbnRVbml0cz0idXNlclNwYWNlT25Vc2UiPgo8c3RvcCBzdG9wLWNvbG9yPSIjRkY4MjJGIi8+CjxzdG9wIG9mZnNldD0iMSIgc3RvcC1jb2xvcj0iI0ZGNjYwMCIvPgo8L2xpbmVhckdyYWRpZW50Pgo8L2RlZnM+Cjwvc3ZnPgo=
- keysOrder: [["apiVersion"], ["appVersion"], ["kind"], ["metadata"], ["metadata", "name"], ["spec", "replicas"], ["spec", "resources"], ["spec", "resourcesPreset"], ["spec", "size"], ["spec", "storageClass"], ["spec", "external"], ["spec", "version"], ["spec", "users"], ["spec", "vhosts"]]
+ keysOrder: [["apiVersion"], ["appVersion"], ["kind"], ["metadata"], ["metadata", "name"], ["spec", "replicas"], ["spec", "resources"], ["spec", "resourcesPreset"], ["spec", "size"], ["spec", "storageClass"], ["spec", "external"], ["spec", "users"], ["spec", "vhosts"]]
secrets:
exclude: []
include:


@@ -11,6 +11,5 @@ update:
tar xzvf - --strip 3 -C charts seaweedfs-$${version}/k8s/charts/seaweedfs
patch --no-backup-if-mismatch -p4 < patches/resize-api-server-annotation.diff
patch --no-backup-if-mismatch -p4 < patches/s3-traffic-distribution.patch
- patch --no-backup-if-mismatch -p4 < patches/disable-ca-key-rotation.patch
#patch --no-backup-if-mismatch -p4 < patches/retention-policy-delete.yaml


@@ -13,8 +13,6 @@ spec:
secretName: {{ template "seaweedfs.name" . }}-ca-cert
commonName: "{{ template "seaweedfs.name" . }}-root-ca"
isCA: true
- privateKey:
- rotationPolicy: Never
{{- if .Values.certificates.ca.duration }}
duration: {{ .Values.certificates.ca.duration }}
{{- end }}
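
Note on the removal above: cert-manager treats an omitted `rotationPolicy` as `Never` up to v1.17 but as `Always` from v1.18 onward, so dropping the explicit field re-enables private-key rotation for this CA on newer cert-manager releases. Pinning the old behaviour explicitly would look like this (a sketch showing only the relevant `cert-manager.io/v1` Certificate fields):

```yaml
# Sketch: relevant Certificate fields only (cert-manager.io/v1).
spec:
  isCA: true
  privateKey:
    # Never = reuse the same CA key on renewal; a regenerated CA key
    # breaks verification of leaf certs against the old trust bundle.
    rotationPolicy: Never
```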


@@ -7,25 +7,12 @@ metadata:
driverName: {{ .Values.cosi.driverName }}
deletionPolicy: Delete
---
- kind: BucketClass
- apiVersion: objectstorage.k8s.io/v1alpha1
- metadata:
- name: {{ .Values.cosi.bucketClassName }}-lock
- driverName: {{ .Values.cosi.driverName }}
- deletionPolicy: Retain
- parameters:
- objectLockEnabled: "true"
- objectLockRetentionMode: "COMPLIANCE"
- objectLockRetentionDays: "365"
- ---
kind: BucketAccessClass
apiVersion: objectstorage.k8s.io/v1alpha1
metadata:
name: {{ .Values.cosi.bucketClassName }}
driverName: {{ .Values.cosi.driverName }}
authenticationType: KEY
parameters:
accessPolicy: readwrite
---
kind: BucketAccessClass
apiVersion: objectstorage.k8s.io/v1alpha1
@@ -34,5 +21,5 @@ metadata:
driverName: {{ .Values.cosi.driverName }}
authenticationType: KEY
parameters:
- accessPolicy: readonly
+ accessPolicy: "readonly"
{{- end }}
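
With the object-lock variant removed, `{{ .Values.cosi.bucketClassName }}` is the only remaining `BucketClass`, and its `deletionPolicy: Delete` means the backing bucket is deleted together with its claim. A consumer would request a bucket from it roughly like this (a sketch against the COSI `objectstorage.k8s.io/v1alpha1` API; the claim name is illustrative):

```yaml
# Sketch: a BucketClaim against the remaining class (name illustrative).
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClaim
metadata:
  name: example-bucket
spec:
  bucketClassName: seaweedfs   # matches the chart's default bucketClassName
  protocols: ["S3"]
```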


@@ -1546,7 +1546,7 @@ allInOne:
# For more information, visit: https://container-object-storage-interface.github.io/docs/deployment-guide
cosi:
enabled: false
- image: "ghcr.io/seaweedfs/seaweedfs-cosi-driver:v0.3.0"
+ image: "ghcr.io/seaweedfs/seaweedfs-cosi-driver:v0.1.2"
driverName: "seaweedfs.objectstorage.k8s.io"
bucketClassName: "seaweedfs"
endpoint: ""
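
Since `cosi.enabled` defaults to `false`, the driver is only deployed when switched on via a values override, e.g. (sketch; only the flag is required, the other keys keep the defaults shown above):

```yaml
# Sketch: minimal values override enabling the bundled COSI driver.
cosi:
  enabled: true
```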


@@ -1,13 +0,0 @@
diff --git a/packages/system/seaweedfs/charts/seaweedfs/templates/cert/ca-cert.yaml b/packages/system/seaweedfs/charts/seaweedfs/templates/cert/ca-cert.yaml
index b01a8dcc..a38287de 100644
--- a/packages/system/seaweedfs/charts/seaweedfs/templates/cert/ca-cert.yaml
+++ b/packages/system/seaweedfs/charts/seaweedfs/templates/cert/ca-cert.yaml
@@ -13,6 +13,8 @@ spec:
secretName: {{ template "seaweedfs.name" . }}-ca-cert
commonName: "{{ template "seaweedfs.name" . }}-root-ca"
isCA: true
+ privateKey:
+ rotationPolicy: Never
{{- if .Values.certificates.ca.duration }}
duration: {{ .Values.certificates.ca.duration }}
{{- end }}


@@ -177,7 +177,7 @@ seaweedfs:
bucketClassName: "seaweedfs"
region: ""
sidecar:
- image: "ghcr.io/cozystack/cozystack/objectstorage-sidecar:v1.1.1@sha256:2a3595cd88b30af55b2000d3ca204899beecef0012b0e0402754c3914aad1f7f"
+ image: "ghcr.io/cozystack/cozystack/objectstorage-sidecar:v1.0.0@sha256:2a3595cd88b30af55b2000d3ca204899beecef0012b0e0402754c3914aad1f7f"
certificates:
commonName: "SeaweedFS CA"
ipAddresses: []


@@ -9,4 +9,3 @@ update:
helm repo add vm https://victoriametrics.github.io/helm-charts/
helm repo update vm
helm pull vm/victoria-metrics-operator --untar --untardir charts
- patch --no-backup-if-mismatch -p4 < patches/disable-ca-key-rotation.patch


@@ -78,8 +78,6 @@ spec:
name: {{ $fullname }}-root
commonName: {{ $certManager.ca.commonName }}
isCA: true
- privateKey:
- rotationPolicy: Never
---
apiVersion: cert-manager.io/v1
kind: Issuer


@@ -1,13 +0,0 @@
diff --git a/packages/system/victoria-metrics-operator/charts/victoria-metrics-operator/templates/webhook.yaml b/packages/system/victoria-metrics-operator/charts/victoria-metrics-operator/templates/webhook.yaml
index 2e027ab4..1bead84d 100644
--- a/packages/system/victoria-metrics-operator/charts/victoria-metrics-operator/templates/webhook.yaml
+++ b/packages/system/victoria-metrics-operator/charts/victoria-metrics-operator/templates/webhook.yaml
@@ -78,6 +78,8 @@ spec:
name: {{ $fullname }}-root
commonName: {{ $certManager.ca.commonName }}
isCA: true
+ privateKey:
+ rotationPolicy: Never
---
apiVersion: cert-manager.io/v1
kind: Issuer