Compare commits

1 commit

Andrei Kvapil · 58292e6095 · Draft AGENTS.md file · 2025-11-08 00:18:44 +01:00
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

76 changed files with 1093 additions and 4096 deletions

View File

@@ -33,9 +33,6 @@ jobs:
fetch-depth: 0
fetch-tags: true
- name: Run unit tests
run: make unit-tests
- name: Set up Docker config
run: |
if [ -d ~/.docker ]; then

AGENTS.md (new file, 449 lines)

@@ -0,0 +1,449 @@
# AGENTS.md
This file provides structured guidance for AI coding assistants and agents
working with the **Cozystack** project.
## Project Overview
Cozystack is an open-source Kubernetes-based platform and framework for building cloud infrastructure. It provides:
- **Managed Services**: Databases, VMs, Kubernetes clusters, object storage, and more
- **Multi-tenancy**: Full isolation and self-service for tenants
- **GitOps-driven**: FluxCD-based continuous delivery
- **Modular Architecture**: Extensible with custom packages and services
- **Developer Experience**: Simplified local development with cozypkg tool
The platform exposes infrastructure services via the Kubernetes API with ready-made configs, built-in monitoring, and alerts.
## Code Layout
```
.
├── packages/                    # Main directory for cozystack packages
│   ├── core/                    # Core platform logic charts (installer, platform)
│   ├── system/                  # System charts (CSI, CNI, operators, etc.)
│   ├── apps/                    # User-facing charts shown in dashboard catalog
│   └── extra/                   # Tenant-specific applications
├── dashboards/                  # Grafana dashboards for monitoring
├── hack/                        # Helper scripts for local development
│   └── e2e-apps/                # End-to-end application tests
├── scripts/                     # Scripts used by cozystack container
│   └── migrations/              # Version migration scripts
├── docs/                        # Documentation
│   └── changelogs/              # Release changelogs
├── cmd/                         # Go command entry points
│   ├── cozystack-api/
│   ├── cozystack-controller/
│   └── cozystack-assets-server/
├── internal/                    # Internal Go packages
│   ├── controller/              # Controller implementations
│   └── lineagecontrollerwebhook/
├── pkg/                         # Public Go packages
│   ├── apis/
│   ├── apiserver/
│   └── registry/
└── api/                         # Kubernetes API definitions (CRDs)
    └── v1alpha1/
```
### Package Structure
Every package is a Helm chart following the umbrella chart pattern:
```
packages/<category>/<package-name>/
├── Chart.yaml                       # Chart definition and parameter docs
├── Makefile                         # Development workflow targets
├── charts/                          # Vendored upstream charts
├── images/                          # Dockerfiles and image build context
├── patches/                         # Optional upstream chart patches
├── templates/                       # Additional manifests
│   └── dashboard-resourcemap.yaml   # Dashboard resource mapping
├── values.yaml                      # Override values for upstream
└── values.schema.json               # JSON schema for validation and UI
```
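For illustration, a minimal umbrella `Chart.yaml` might look like the sketch below. The chart name and versions are hypothetical; the key point is that the upstream chart is resolved from the local `charts/` directory, not a remote repository:

```yaml
apiVersion: v2
name: example-operator          # hypothetical package name
description: Umbrella chart wrapping the upstream example-operator chart
version: 1.0.0
dependencies:
  - name: example-operator      # must match the chart vendored under charts/
    version: "2.3.4"            # hypothetical upstream chart version
    repository: ""              # empty/omitted: resolved from local charts/
```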
## Conventions
### Helm Charts
- Follow **umbrella chart** pattern for system components
- Include upstream charts in `charts/` directory (vendored, not referenced)
- Override configuration in root `values.yaml`
- Use `values.schema.json` for input validation and dashboard UI rendering
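As a hedged illustration of the last point, a `values.schema.json` typically describes user-visible parameters with standard JSON Schema keywords; the dashboard uses `title`/`description` hints for rendering (the field names here are made up):

```json
{
  "$schema": "http://json-schema.org/schema#",
  "type": "object",
  "properties": {
    "replicas": {
      "type": "integer",
      "title": "Replicas",
      "description": "Number of application replicas",
      "default": 1,
      "minimum": 1
    },
    "size": {
      "type": "string",
      "title": "Disk size",
      "description": "Persistent volume size",
      "default": "10Gi"
    }
  }
}
```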
### Go Code
- Follow standard **Go conventions** and idioms
- Use **controller-runtime** patterns for Kubernetes controllers
- Package import paths follow the pattern `github.com/cozystack/cozystack/<path>`
- Add proper error handling and structured logging
- Rely on Go's static type system; prefer concrete types over `interface{}`/`any`
### Git Commits
- Use format: `[component] Description`
- Reference PR numbers when available
- Keep commits atomic and focused
- Follow conventional commit format for changelogs
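The commit conventions above can be exercised in a throwaway repository (the component name and PR number below are made up):

```shell
# Demonstrate the "[component] Description" format in a scratch repository.
cd "$(mktemp -d)"
git init -q .
git -c user.name=agent -c user.email=agent@example.com commit -q --allow-empty \
  -m "[cilium] Update vendored chart to v1.16.1" \
  -m "Refs cozystack/cozystack#1234"   # reference the PR when available
git log -1 --format=%s                 # prints the "[component] Description" subject
```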
### Documentation
- Keep README files current
- Document breaking changes clearly
- Update relevant docs when making changes
- Use clear, concise language with code examples
## Development Workflow
### Standard Make Targets
Every package includes a `Makefile` with these targets:
```bash
make update # Update Helm chart and versions from upstream
make image # Build Docker images used in the package
make show # Show rendered Helm templates
make diff # Diff Helm release against live cluster objects
make apply # Apply Helm release to Kubernetes cluster
```
### Using cozypkg
The `cozypkg` tool wraps Helm and Flux for local development:
```bash
cozypkg show # Render manifests (helm template)
cozypkg diff # Show live vs desired manifests
cozypkg apply # Upgrade/install HelmRelease and sync
cozypkg suspend # Suspend Flux reconciliation
cozypkg resume # Resume Flux reconciliation
cozypkg get # Get HelmRelease resources
cozypkg list # List all HelmReleases
cozypkg delete # Uninstall release
cozypkg reconcile # Trigger Flux reconciliation
```
### Example: Updating a Component
```bash
cd packages/system/cilium # Navigate to package
make update # Pull latest upstream
make image # Build images
git diff . # Review manifest changes
make diff # Compare with cluster
make apply # Deploy to cluster
kubectl get pod -n cozy-cilium # Verify deployment
git commit -m "[cilium] Update to vX.Y.Z"
```
## Adding New Packages
### For System Components (operators, CNI, CSI, etc.)
1. Create directory: `packages/system/<component-name>/`
2. Create `Chart.yaml` with component metadata
3. Add upstream chart to `charts/` directory
4. Create `values.yaml` with overrides
5. Generate `values.schema.json` using `readme-generator`
6. Add `Makefile` using `scripts/package.mk`
7. Create `images/` directory if custom images needed
8. Add to bundle configuration in `packages/core/platform/`
9. Write tests in `hack/e2e-apps/`
10. Update documentation
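Steps 1–7 above can be sketched as shell commands. All names and file contents below are placeholders for illustration, not a real Cozystack package:

```shell
# Scaffold the skeleton of a hypothetical system package.
cd "$(mktemp -d)"                       # stand-in for the repository root
pkg=packages/system/example-operator
mkdir -p "$pkg/charts" "$pkg/images"    # vendored charts and custom image contexts
cat > "$pkg/Chart.yaml" <<'EOF'
apiVersion: v2
name: example-operator
version: 1.0.0
EOF
printf 'exampleOperator:\n  replicas: 1\n' > "$pkg/values.yaml"
printf 'include ../../../scripts/package.mk\n' > "$pkg/Makefile"
ls -A "$pkg"
```

From there, `values.schema.json` would be generated with `readme-generator`, and the package registered in the bundle configuration under `packages/core/platform/`.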
### For User Applications (apps catalog)
1. Create directory: `packages/apps/<app-name>/`
2. Define minimal user-facing parameters in `values.schema.json`
3. Use Cozystack API for high-level resources
4. Add `templates/dashboard-resourcemap.yaml` for UI display
5. Keep business logic in system operators, not in app charts
6. Test deployment through dashboard
7. Document usage in README
### For Extra/Tenant Applications
1. Create in `packages/extra/<app-name>/`
2. Follow same structure as apps
3. Not shown in catalog
4. Installable only as tenant component
5. One application type per tenant namespace
## Tests and CI
### Local Testing
- **Unit tests**: Go tests in `*_test.go` files
- **Integration tests**: BATS scripts in `hack/e2e-apps/`
- **E2E tests**: Full platform tests via `hack/e2e.sh`
### Running E2E Tests
```bash
cd packages/core/testing
make apply # Create testing sandbox in cluster
make test # Run end-to-end tests
make delete # Remove testing sandbox
# Or locally with QEMU VMs:
./hack/e2e.sh
```
### CI Pipeline
- Automated tests run on every PR
- Image builds for changed packages
- Manifest diff generation
- E2E tests on full platform
- Release packaging and publishing
### Testing Environment Commands
```bash
make exec # Interactive shell in sandbox
make login # Download kubeconfig (requires mirrord)
make proxy # Enable SOCKS5 proxy (requires mirrord + gost)
```
## Things Agents Should Not Do
### Never Edit These
- Do not modify files in `/vendor/` (Go dependencies)
- Do not edit generated files: `zz_generated.*.go`
- Do not change `go.mod`/`go.sum` manually (use `go get`)
- Do not edit upstream charts in `packages/*/charts/` directly (use patches)
- Do not modify image digests in `values.yaml` (generated by build)
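The patch-based approach implied by the last two rules might look like this sketch (paths and values are illustrative, and the exact patch flags used by the build may differ):

```shell
# Keep upstream charts pristine; record changes as patches instead.
cd "$(mktemp -d)"
mkdir -p charts/upstream patches
printf 'replicas: 1\n' > charts/upstream/values.yaml       # pretend vendored file
# Generate a patch describing the desired change (diff exits 1 on differences).
diff -u charts/upstream/values.yaml - <<'EOF' > patches/replicas.patch || true
replicas: 2
EOF
# Re-apply the checked-in patch after each `make update`.
patch -s charts/upstream/values.yaml < patches/replicas.patch
cat charts/upstream/values.yaml                            # now: replicas: 2
```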
### Version Control
- Do not commit built artifacts from `packages/*/build/`
- Do not commit generated dashboards
- Do not commit test artifacts or temporary files
### Git Operations
- Do not force push to main/master
- Do not skip hooks (--no-verify, --no-gpg-sign)
- Do not update git config
- Do not perform destructive operations without explicit request
### Changelogs
- Do not manually edit `docs/changelogs/*.md` outside of changelog workflow
- Follow changelog agent rules in `.cursor/changelog-agent.md`
- Use structured format from templates
### Core Components
- Do not modify `packages/core/installer/installer.sh` without understanding migration impact
- Do not change `packages/core/platform/` logic without testing full bootstrap
- Do not alter FluxCD configurations without considering reconciliation loops
## Special Workflows
### Changelog Generation
When working with changelogs (see `.cursor/changelog-agent.md` for details):
1. **Activation**: Automatic when user mentions "changelog" or works in `docs/changelogs/`
2. **Commands**:
- "Create changelog for vX.Y.Z" → Generate from git history
- "Review changelog vX.Y.Z" → Analyze quality
- "Update changelog with PR #XXXX" → Add entry
3. **Process**:
- Extract version and range
- Run git log between versions
- Categorize by BMAD framework
- Generate structured output
- Validate against checklist
4. **Templates**: Use `patch-template.md` or `template.md`
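The "extract version and range" step boils down to a git log between release tags. In a scratch repository with two hypothetical tags:

```shell
# List commits that belong to a release range: the raw input for a changelog.
cd "$(mktemp -d)"
git init -q .
commit() { git -c user.name=a -c user.email=a@example.com commit -q --allow-empty -m "$1"; }
commit "[platform] Initial release"
git tag v0.37.0
commit "[cilium] Update vendored chart"
commit "[docs] Fix typo in README"
git tag v0.38.0
git log --oneline v0.37.0..v0.38.0 | wc -l   # two commits fall in the range
```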
### Building Cozystack Container
```bash
cd packages/core/installer
make image-cozystack # Build cozystack image
make apply # Apply to cluster
kubectl get pod -n cozy-system
kubectl get hr -A # Check HelmRelease objects
```
### Building with Custom Registry
```bash
export REGISTRY=my-registry.example.com/cozystack
cd packages/system/component-name
make image
make apply
```
## Buildx Configuration
Install and configure Docker buildx for multi-arch builds:
```bash
# Kubernetes driver (build in cluster)
docker buildx create \
--bootstrap \
--name=buildkit \
--driver=kubernetes \
--driver-opt=namespace=tenant-kvaps,replicas=2 \
--platform=linux/amd64 \
--platform=linux/arm64 \
--use
# Or use local Docker (omit --driver* options)
docker buildx create --bootstrap --name=local --use
```
## References
- [Cozystack Documentation](https://cozystack.io/docs/)
- [Developer Guide](https://cozystack.io/docs/development/)
- [GitHub Repository](https://github.com/cozystack/cozystack)
- [Helm Documentation](https://helm.sh/docs/)
- [FluxCD Documentation](https://fluxcd.io/flux/)
- [cozypkg Tool](https://github.com/cozystack/cozypkg)
- [Kubernetes Operator Patterns](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/)
- [controller-runtime](https://github.com/kubernetes-sigs/controller-runtime)
## Community
- [Telegram](https://t.me/cozystack)
- [Slack](https://kubernetes.slack.com/archives/C06L3CPRVN1)
- [Community Calendar](https://calendar.google.com/calendar?cid=ZTQzZDIxZTVjOWI0NWE5NWYyOGM1ZDY0OWMyY2IxZTFmNDMzZTJlNjUzYjU2ZGJiZGE3NGNhMzA2ZjBkMGY2OEBncm91cC5jYWxlbmRhci5nb29nbGUuY29t)
---
## Machine-Readable Summary
```yaml
project: Cozystack
type: kubernetes-platform
description: Open-source platform for building cloud infrastructure
architecture: kubernetes-based, gitops-driven, multi-tenant
layout:
  packages/:
    core/: platform bootstrap and configuration
    system/: cluster-wide components (CSI, CNI, operators)
    apps/: user-facing applications (catalog)
    extra/: tenant-specific applications
  dashboards/: grafana monitoring dashboards
  hack/: development scripts and e2e tests
  scripts/: runtime scripts and migrations
  cmd/: go command entry points
  internal/: internal go packages
  pkg/: public go packages
  api/: kubernetes api definitions (CRDs)
  docs/: documentation and changelogs
package_structure:
  Chart.yaml: helm chart definition
  Makefile: development workflow targets
  charts/: vendored upstream charts
  images/: docker image sources
  patches/: upstream chart patches
  templates/: additional manifests
  values.yaml: configuration overrides
  values.schema.json: validation schema and UI hints
workflow:
  development_tool: cozypkg
  commands:
    - update: pull upstream charts
    - image: build docker images
    - show: render manifests
    - diff: compare with cluster
    - apply: deploy to cluster
  gitops_engine: FluxCD
  package_manager: Helm
conventions:
  helm:
    pattern: umbrella chart
    upstream: vendored in charts/
    overrides: root values.yaml
  go:
    style: standard go conventions
    framework: controller-runtime
    namespace: github.com/cozystack/cozystack
  git:
    commit_format: "[component] Description"
    reference_prs: true
    atomic_commits: true
testing:
  unit: go test
  integration: bats scripts (hack/e2e-apps/)
  e2e: hack/e2e.sh
  sandbox:
    location: packages/core/testing
    commands: [apply, test, delete, exec, login, proxy]
ci:
  triggers: every PR
  checks:
    - automated tests
    - image builds
    - manifest diffs
    - e2e tests
    - packaging
special_agents:
  changelog:
    activation:
      - files in docs/changelogs/
      - user mentions "changelog"
      - changelog-related requests
    config_file: .cursor/changelog-agent.md
    templates:
      - docs/changelogs/patch-template.md
      - docs/changelogs/template.md
    framework: BMAD categorization
do_not_edit:
  - vendor/
  - zz_generated.*.go
  - packages/*/charts/* (use patches)
  - go.mod manually
  - go.sum manually
  - image digests in values.yaml
  - built artifacts
tools:
  required:
    - kubectl
    - helm
    - docker buildx
    - make
    - go
  recommended:
    - cozypkg
    - mirrord
    - gost
    - readme-generator
core_components:
  bootstrap:
    - packages/core/installer (installer.sh, assets server)
    - packages/core/platform (flux config, reconciliation)
  api:
    - cmd/cozystack-api (api server)
    - cmd/cozystack-controller (main controller)
    - api/v1alpha1 (CRD definitions)
  delivery:
    - FluxCD Helm Controller
    - HelmRelease custom resources
bundle_system:
  definition: packages/core/platform/
  components_from: packages/system/
  user_applications: packages/apps/ + packages/extra/
  tenant_isolation: namespace-based
  one_app_type_per_tenant: true
image_management:
  location: packages/*/images/
  build: make image
  injection: automatic to values.yaml
  format: path + digest
  registry: configurable via REGISTRY env var
multi_arch:
  tool: docker buildx
  platforms: [linux/amd64, linux/arm64]
  driver_options: [kubernetes, docker]
```


@@ -1,4 +1,4 @@
.PHONY: manifests repos assets unit-tests helm-unit-tests
.PHONY: manifests repos assets
build-deps:
@command -V find docker skopeo jq gh helm > /dev/null
@@ -46,11 +46,6 @@ test:
make -C packages/core/testing apply
make -C packages/core/testing test
unit-tests: helm-unit-tests
helm-unit-tests:
hack/helm-unit-tests.sh
prepare-env:
make -C packages/core/testing apply
make -C packages/core/testing prepare-cluster


@@ -80,41 +80,58 @@ EOF
# Wait for the machine deployment to scale to 2 replicas (timeout after 1 minute)
kubectl wait machinedeployment kubernetes-${test_name}-md0 -n tenant-test --timeout=1m --for=jsonpath='{.status.replicas}'=2
# Get the admin kubeconfig and save it to a file
kubectl get secret kubernetes-${test_name}-admin-kubeconfig -ojsonpath='{.data.super-admin\.conf}' -n tenant-test | base64 -d > tenantkubeconfig-${test_name}
kubectl get secret kubernetes-${test_name}-admin-kubeconfig -ojsonpath='{.data.super-admin\.conf}' -n tenant-test | base64 -d > tenantkubeconfig
# Update the kubeconfig to use localhost for the API server
yq -i ".clusters[0].cluster.server = \"https://localhost:${port}\"" tenantkubeconfig-${test_name}
yq -i ".clusters[0].cluster.server = \"https://localhost:${port}\"" tenantkubeconfig
# Set up port forwarding to the Kubernetes API server for a 200 second timeout
bash -c 'timeout 300s kubectl port-forward service/kubernetes-'"${test_name}"' -n tenant-test '"${port}"':6443 > /dev/null 2>&1 &'
# Verify the Kubernetes version matches what we expect (retry for up to 20 seconds)
timeout 20 sh -ec 'until kubectl --kubeconfig tenantkubeconfig-'"${test_name}"' version 2>/dev/null | grep -Fq "Server Version: ${k8s_version}"; do sleep 5; done'
timeout 20 sh -ec 'until kubectl --kubeconfig tenantkubeconfig version 2>/dev/null | grep -Fq "Server Version: ${k8s_version}"; do sleep 5; done'
# Wait for the nodes to be ready (timeout after 2 minutes)
timeout 3m bash -c '
until [ "$(kubectl --kubeconfig tenantkubeconfig-'"${test_name}"' get nodes -o jsonpath="{.items[*].metadata.name}" | wc -w)" -eq 2 ]; do
until [ "$(kubectl --kubeconfig tenantkubeconfig get nodes -o jsonpath="{.items[*].metadata.name}" | wc -w)" -eq 2 ]; do
sleep 2
done
'
# Verify the nodes are ready
kubectl --kubeconfig tenantkubeconfig-${test_name} wait node --all --timeout=2m --for=condition=Ready
kubectl --kubeconfig tenantkubeconfig-${test_name} get nodes -o wide
kubectl --kubeconfig tenantkubeconfig wait node --all --timeout=2m --for=condition=Ready
kubectl --kubeconfig tenantkubeconfig get nodes -o wide
# Verify the kubelet version matches what we expect
versions=$(kubectl --kubeconfig "tenantkubeconfig-${test_name}" \
get nodes -o jsonpath='{.items[*].status.nodeInfo.kubeletVersion}')
versions=$(kubectl --kubeconfig tenantkubeconfig get nodes -o jsonpath='{.items[*].status.nodeInfo.kubeletVersion}')
node_ok=true
case "$k8s_version" in
v1.32*)
echo "⚠️ TODO: Temporary stub — allowing nodes with v1.33 while k8s_version is v1.32"
;;
esac
for v in $versions; do
case "$v" in
"${k8s_version}" | "${k8s_version}".* | "${k8s_version}"-*)
# acceptable
case "$k8s_version" in
v1.32|v1.32.*)
case "$v" in
v1.32 | v1.32.* | v1.32-* | v1.33 | v1.33.* | v1.33-*)
;;
*)
node_ok=false
break
;;
esac
;;
*)
node_ok=false
break
case "$v" in
"${k8s_version}" | "${k8s_version}".* | "${k8s_version}"-*)
;;
*)
node_ok=false
break
;;
esac
;;
esac
done


@@ -19,26 +19,3 @@
curl -sS --fail 'http://localhost:21234/openapi/v2?timeout=32s' -H 'Accept: application/com.github.proto-openapi.spec.v2@v1.0+protobuf' > /dev/null
)
}
@test "Test kinds" {
val=$(kubectl get --raw /apis/apps.cozystack.io/v1alpha1/tenants | jq -r '.kind')
if [ "$val" != "TenantList" ]; then
echo "Expected kind to be TenantList, got $val"
exit 1
fi
val=$(kubectl get --raw /apis/apps.cozystack.io/v1alpha1/tenants | jq -r '.items[0].kind')
if [ "$val" != "Tenant" ]; then
echo "Expected kind to be Tenant, got $val"
exit 1
fi
val=$(kubectl get --raw /apis/apps.cozystack.io/v1alpha1/ingresses | jq -r '.kind')
if [ "$val" != "IngressList" ]; then
echo "Expected kind to be IngressList, got $val"
exit 1
fi
val=$(kubectl get --raw /apis/apps.cozystack.io/v1alpha1/ingresses | jq -r '.items[0].kind')
if [ "$val" != "Ingress" ]; then
echo "Expected kind to be Ingress, got $val"
exit 1
fi
}


@@ -1,59 +0,0 @@
#!/bin/sh
set -eu
# Script to run unit tests for all Helm charts.
# It iterates through directories in packages/apps, packages/extra,
# packages/system, and packages/library and runs the 'test' Makefile
# target if it exists.
FAILED_DIRS_FILE="$(mktemp)"
trap 'rm -f "$FAILED_DIRS_FILE"' EXIT
tests_found=0
check_and_run_test() {
dir="$1"
makefile="$dir/Makefile"
if [ ! -f "$makefile" ]; then
return 0
fi
if make -C "$dir" -n test >/dev/null 2>&1; then
echo "Running tests in $dir"
tests_found=$((tests_found + 1))
if ! make -C "$dir" test; then
printf '%s\n' "$dir" >> "$FAILED_DIRS_FILE"
return 1
fi
fi
return 0
}
for package_dir in packages/apps packages/extra packages/system packages/library; do
if [ ! -d "$package_dir" ]; then
echo "Warning: Directory $package_dir does not exist, skipping..." >&2
continue
fi
for dir in "$package_dir"/*; do
[ -d "$dir" ] || continue
check_and_run_test "$dir" || true
done
done
if [ "$tests_found" -eq 0 ]; then
echo "No directories with 'test' Makefile targets found."
exit 0
fi
if [ -s "$FAILED_DIRS_FILE" ]; then
echo "ERROR: Tests failed in the following directories:" >&2
while IFS= read -r dir; do
echo " - $dir" >&2
done < "$FAILED_DIRS_FILE"
exit 1
fi
echo "All Helm unit tests passed."


@@ -44,9 +44,6 @@ func (m *Manager) ensureFactory(ctx context.Context, crd *cozyv1alpha1.Cozystack
if flags.Secrets {
tabs = append(tabs, secretsTab(kind))
}
if prefix, ok := vncTabPrefix(kind); ok {
tabs = append(tabs, vncTab(prefix))
}
tabs = append(tabs, yamlTab(plural))
// Use unified factory creation
@@ -153,27 +150,6 @@ func detailsTab(kind, endpoint, schemaJSON string, keysOrder [][]string) map[str
}),
paramsList,
}
if kind == "VirtualPrivateCloud" {
rightColStack = append(rightColStack,
antdFlexVertical("vpc-subnets-block", 4, []any{
antdText("vpc-subnets-label", true, "Subnets", nil),
map[string]any{
"type": "EnrichedTable",
"data": map[string]any{
"id": "vpc-subnets-table",
"baseprefix": "/openapi-ui",
"clusterNamePartOfUrl": "{2}",
"customizationId": "virtualprivatecloud-subnets",
"fetchUrl": "/api/clusters/{2}/k8s/api/v1/namespaces/{3}/configmaps",
"fieldSelector": map[string]any{
"metadata.name": "virtualprivatecloud-{6}-subnets",
},
"pathToItems": []any{"items"},
},
},
}),
)
}
return map[string]any{
"key": "details",
@@ -355,36 +331,6 @@ func yamlTab(plural string) map[string]any {
}
}
func vncTabPrefix(kind string) (string, bool) {
switch kind {
case "VirtualMachine":
return "virtual-machine", true
case "VMInstance":
return "vm-instance", true
default:
return "", false
}
}
func vncTab(prefix string) map[string]any {
return map[string]any{
"key": "vnc",
"label": "VNC",
"children": []any{
map[string]any{
"type": "VMVNC",
"data": map[string]any{
"id": "vm-vnc",
"cluster": "{2}",
"namespace": "{reqsJsonPath[0]['.metadata.namespace']['-']}",
"substractHeight": float64(400),
"vmName": fmt.Sprintf("%s-{reqsJsonPath[0]['.metadata.name']['-']}", prefix),
},
},
},
}
}
// ---------------- OpenAPI → Right column ----------------
func buildOpenAPIParamsBlocks(schemaJSON string, keysOrder [][]string) []any {


@@ -182,13 +182,6 @@ func CreateAllCustomColumnsOverrides() []*dashboardv1alpha1.CustomColumnsOverrid
createTimestampColumn("Created", ".metadata.creationTimestamp"),
}),
// Virtual private cloud subnets
createCustomColumnsOverride("virtualprivatecloud-subnets", []any{
createFlatMapColumn("Data", ".data"),
createStringColumn("Subnet Parameters", "_flatMapData_Key"),
createStringColumn("Values", "_flatMapData_Value"),
}),
// Factory ingress details rules
createCustomColumnsOverride("factory-kube-ingress-details-rules", []any{
createStringColumn("Host", ".host"),


@@ -1 +1 @@
ghcr.io/cozystack/cozystack/nginx-cache:0.0.0@sha256:b7633717cd7449c0042ae92d8ca9b36e4d69566561f5c7d44e21058e7d05c6d5
ghcr.io/cozystack/cozystack/nginx-cache:0.0.0@sha256:50ac1581e3100bd6c477a71161cb455a341ffaf9e5e2f6086802e4e25271e8af


@@ -1 +1 @@
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:d5c836ba33cf5dbed7e6f866784f668f80ffe69179e7c75847b680111984eefb
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:c8b08084a86251cdd18e237de89b695bca0e4f7eb1f1f6ddc2b903b4d74ea5ff


@@ -182,33 +182,6 @@ metadata:
spec:
template:
spec:
files:
- path: /usr/bin/update-k8s.sh
owner: root:root
permissions: "0755"
content: |
#!/usr/bin/env bash
set -euo pipefail
# Expected to be passed in via preKubeadmCommands
: "${KUBELET_VERSION:?KUBELET_VERSION must be set, e.g. v1.31.0}"
ARCH="$(uname -m)"
case "${ARCH}" in
x86_64) ARCH=amd64 ;;
aarch64) ARCH=arm64 ;;
esac
# Use your internal mirror here for real-world use.
BASE_URL="https://dl.k8s.io/release/${KUBELET_VERSION}/bin/linux/${ARCH}"
echo "Installing kubelet and kubeadm ${KUBELET_VERSION} for ${ARCH}..."
curl -fsSL "${BASE_URL}/kubelet" -o /root/kubelet
curl -fsSL "${BASE_URL}/kubeadm" -o /root/kubeadm
chmod 0755 /root/kubelet
chmod 0755 /root/kubeadm
if /root/kubelet --version ; then mv /root/kubelet /usr/bin/kubelet ; fi
if /root/kubeadm version ; then mv /root/kubeadm /usr/bin/kubeadm ; fi
diskSetup:
filesystems:
- device: /dev/vdb
@@ -232,7 +205,6 @@ spec:
{{- end }}
{{- end }}
preKubeadmCommands:
- KUBELET_VERSION={{ include "kubernetes.versionMap" $}} /usr/bin/update-k8s.sh || true
- sed -i 's|root:x:|root::|' /etc/passwd
- systemctl stop containerd.service
- mkdir -p /ephemeral/kubelet /ephemeral/containerd


@@ -6,11 +6,11 @@ metadata:
"helm.sh/hook": post-delete
"helm.sh/hook-weight": "10"
"helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation,hook-failed
name: {{ .Release.Name }}-cleanup
name: {{ .Release.Name }}-datavolume-cleanup
spec:
template:
spec:
serviceAccountName: {{ .Release.Name }}-cleanup
serviceAccountName: {{ .Release.Name }}-datavolume-cleanup
restartPolicy: Never
tolerations:
- key: CriticalAddonsOnly
@@ -28,17 +28,12 @@ spec:
-l "cluster.x-k8s.io/cluster-name={{ .Release.Name }}"
--ignore-not-found=true
kubectl -n {{ .Release.Namespace }} delete services
-l "cluster.x-k8s.io/cluster-name={{ .Release.Name }}"
--field-selector spec.type=LoadBalancer
--ignore-not-found=true
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Release.Name }}-cleanup
name: {{ .Release.Name }}-datavolume-cleanup
annotations:
helm.sh/hook: post-delete
helm.sh/hook-delete-policy: before-hook-creation,hook-failed,hook-succeeded
@@ -51,7 +46,7 @@ metadata:
"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation,hook-failed
"helm.sh/hook-weight": "5"
name: {{ .Release.Name }}-cleanup
name: {{ .Release.Name }}-datavolume-cleanup
rules:
- apiGroups:
- "cdi.kubevirt.io"
@@ -61,14 +56,6 @@ rules:
- get
- list
- delete
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
@@ -77,13 +64,13 @@ metadata:
"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation,hook-failed
"helm.sh/hook-weight": "5"
name: {{ .Release.Name }}-cleanup
name: {{ .Release.Name }}-datavolume-cleanup
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ .Release.Name }}-cleanup
name: {{ .Release.Name }}-datavolume-cleanup
subjects:
- kind: ServiceAccount
name: {{ .Release.Name }}-cleanup
name: {{ .Release.Name }}-datavolume-cleanup
namespace: {{ .Release.Namespace }}


@@ -37,10 +37,6 @@ spec:
# automaticFailover: true
{{- end }}
podMetadata:
labels:
"policy.cozystack.io/allow-to-apiserver": "true"
metrics:
enabled: true
exporter:


@@ -122,7 +122,7 @@ metadata:
name: {{ include "tenant.name" . }}-view
namespace: {{ include "tenant.name" . }}
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "view" (include "tenant.name" .)) | nindent 2 }}
{{ include "cozy-lib.rbac.subjectsForTenant" (list "view" (include "tenant.name" .)) | nindent 2 }}
roleRef:
kind: Role
name: {{ include "tenant.name" . }}-view
@@ -200,7 +200,7 @@ metadata:
name: {{ include "tenant.name" . }}-use
namespace: {{ include "tenant.name" . }}
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" (include "tenant.name" .)) | nindent 2 }}
{{ include "cozy-lib.rbac.subjectsForTenant" (list "use" (include "tenant.name" .)) | nindent 2 }}
roleRef:
kind: Role
name: {{ include "tenant.name" . }}-use
@@ -299,7 +299,7 @@ metadata:
name: {{ include "tenant.name" . }}-admin
namespace: {{ include "tenant.name" . }}
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "admin" (include "tenant.name" .)) | nindent 2 }}
{{ include "cozy-lib.rbac.subjectsForTenant" (list "admin" (include "tenant.name" .)) | nindent 2 }}
roleRef:
kind: Role
name: {{ include "tenant.name" . }}-admin
@@ -373,7 +373,7 @@ metadata:
name: {{ include "tenant.name" . }}-super-admin
namespace: {{ include "tenant.name" . }}
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "super-admin" (include "tenant.name" .) ) | nindent 2 }}
{{ include "cozy-lib.rbac.subjectsForTenant" (list "super-admin" (include "tenant.name" .) ) | nindent 2 }}
roleRef:
kind: Role
name: {{ include "tenant.name" . }}-super-admin


@@ -19,7 +19,7 @@ Subnet name and ip address range must be unique within a VPC.
Subnet ip address space must not overlap with the default management network ip address range, subsets of 172.16.0.0/12 are recommended.
Currently there are no fail-safe checks, however they are planned for the future.
Different VPCs may have subnets with overlapping ip address ranges.
Different VPCs may have subnets with ovelapping ip address ranges.
A VM or a pod may be connected to multiple secondary Subnets at once. Each secondary connection will be represented as an additional network interface.


@@ -1 +0,0 @@
../../../library/cozy-lib


@@ -60,33 +60,13 @@ kind: ConfigMap
metadata:
name: {{ $.Release.Name }}-subnets
labels:
apps.cozystack.io/application.group: apps.cozystack.io
apps.cozystack.io/application.kind: VirtualPrivateCloud
apps.cozystack.io/application.name: {{ trimPrefix "virtualprivatecloud-" .Release.Name }}
cozystack.io/vpcId: {{ $vpcId }}
cozystack.io/tenantName: {{ $.Release.Namespace }}
data:
{{- range $subnetName, $subnetConfig := .Values.subnets }}
{{ $subnetName }}.ID: {{ print "subnet-" (print $.Release.Namespace "/" $vpcId "/" $subnetName | sha256sum | trunc 8) }}
{{ $subnetName }}.CIDR: {{ $subnetConfig.cidr }}
{{ $subnetName }}: |-
subnetName: {{ $subnetName }}
subnetId: {{ print "subnet-" (print $.Release.Namespace "/" $vpcId "/" $subnetName | sha256sum | trunc 8) }}
subnetCIDR: {{ $subnetConfig.cidr }}
{{- end }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: "{{ .Release.Name }}-subnets"
subjects: {{- include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "view" .Release.Namespace ) | nindent 2 }}
roleRef:
kind: Role
name: "{{ .Release.Name }}-subnets"
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: "{{ .Release.Name }}-subnets"
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get","list","watch"]
resourceNames: ["{{ .Release.Name }}-subnets"]


@@ -1,2 +1,2 @@
cozystack:
image: ghcr.io/cozystack/cozystack/installer:v0.38.0@sha256:1a902ebd15fe375079098c088dd5b40475926c8d9576faf6348433f0fd86a963
image: ghcr.io/cozystack/cozystack/installer:v0.37.0@sha256:256c5a0f0ae2fc3ad6865b9fda74c42945b38a5384240fa29554617185b60556


@@ -1,2 +1,2 @@
e2e:
image: ghcr.io/cozystack/cozystack/e2e-sandbox:v0.38.0@sha256:cb17739b46eca263b2a31c714a3cb211da6f9de259b1641c2fc72c91bdfc93bb
image: ghcr.io/cozystack/cozystack/e2e-sandbox:v0.37.0@sha256:10afd0a6c39248ec41d0e59ff1bc6c29bd0075b7cc9a512b01cf603ef39c33ea


@@ -1 +1 @@
ghcr.io/cozystack/cozystack/matchbox:v0.38.0@sha256:9ff2bdcf802445f6c1cabdf0e6fc32ee10043b1067945232a91088abad63f583
ghcr.io/cozystack/cozystack/matchbox:v0.37.0@sha256:5cca5f56b755285aefa11b1052fe55e1aa83b25bae34aef80cdb77ff63091044


@@ -1,6 +1,6 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $exposeIngress := index $cozyConfig.data "expose-ingress" | default "tenant-root" }}
{{- $exposeExternalIPs := (index $cozyConfig.data "expose-external-ips") | default "" | nospace }}
{{- $exposeExternalIPs := (index $cozyConfig.data "expose-external-ips") | default "" }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:


@@ -1 +1 @@
ghcr.io/cozystack/cozystack/objectstorage-sidecar:v0.38.0@sha256:4548d85e7e69150aaf52fbb17fb9487e9714bdd8407aff49762cf39b9d0ab29c
ghcr.io/cozystack/cozystack/objectstorage-sidecar:v0.37.0@sha256:f166f09cdc9cdbb758209883819ab8261a3793bc1d7a6b6685efd5a2b2930847


@@ -4,5 +4,3 @@ include ../../../scripts/package.mk
generate:
cozyvalues-gen -v values.yaml -s values.schema.json -r README.md
test:
$(MAKE) -C ../../tests/cozy-lib-tests/ test


@@ -174,46 +174,15 @@
{{- end }}
{{- define "cozy-lib.resources.flatten" -}}
{{- $out := dict -}}
{{- $res := include "cozy-lib.resources.sanitize" . | fromYaml -}}
{{- range $section, $values := $res }}
{{- range $k, $v := $values }}
{{- with include "cozy-lib.resources.flattenResource" (list $section $k) }}
{{- $_ := set $out . $v }}
{{- end }}
{{- end }}
{{- end }}
{{- $out | toYaml }}
{{- $out := dict -}}
{{- $res := include "cozy-lib.resources.sanitize" . | fromYaml -}}
{{- range $section, $values := $res }}
{{- range $k, $v := $values }}
{{- $key := printf "%s.%s" $section $k }}
{{- if ne $key "limits.storage" }}
{{- $_ := set $out $key $v }}
{{- end }}
{{- end }}
{{- end }}
{{/*
This is a helper function that takes an argument like `list "limits" "services.loadbalancers"`
or `list "limits" "storage"` or `list "requests" "cpu"` and returns "services.loadbalancers",
"", and "requests.cpu", respectively, thus transforming them to an acceptable format for k8s
ResourceQuotas objects.
*/}}
{{- define "cozy-lib.resources.flattenResource" }}
{{- $rawQuotaKeys := list
"pods"
"services"
"services.loadbalancers"
"services.nodeports"
"services.clusterip"
"configmaps"
"secrets"
"persistentvolumeclaims"
"replicationcontrollers"
"resourcequotas"
-}}
{{- $section := index . 0 }}
{{- $type := index . 1 }}
{{- $out := "" }}
{{- if and (eq $section "limits") (eq $type "storage") }}
{{- $out = "" }}
{{- else if and (eq $section "limits") (has $type $rawQuotaKeys) }}
{{- $out = $type }}
{{- else if not (has $type $rawQuotaKeys) }}
{{- $out = printf "%s.%s" $section $type }}
{{- end }}
{{- $out -}}
{{- $out | toYaml }}
{{- end }}
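The branching in `cozy-lib.resources.flattenResource` above can be sketched as plain Go (a hypothetical standalone port for illustration, not part of the chart; the function and variable names are mine):

```go
package main

import "fmt"

// rawQuotaKeys lists resource types that appear in ResourceQuota specs
// without a "limits."/"requests." prefix (object-count quotas),
// mirroring $rawQuotaKeys in the template.
var rawQuotaKeys = map[string]bool{
	"pods": true, "services": true, "services.loadbalancers": true,
	"services.nodeports": true, "services.clusterip": true,
	"configmaps": true, "secrets": true, "persistentvolumeclaims": true,
	"replicationcontrollers": true, "resourcequotas": true,
}

// flattenResource maps a (section, type) pair to a ResourceQuota-compatible
// key, or "" when the pair must be dropped (limits.storage has no quota key).
func flattenResource(section, typ string) string {
	switch {
	case section == "limits" && typ == "storage":
		return "" // dropped: storage limits are not a ResourceQuota key
	case section == "limits" && rawQuotaKeys[typ]:
		return typ // object-count quotas keep the bare name
	case !rawQuotaKeys[typ]:
		return section + "." + typ // e.g. requests.cpu, limits.memory
	}
	return "" // e.g. ("requests", "pods"): no valid quota key
}

func main() {
	fmt.Println(flattenResource("limits", "services.loadbalancers")) // services.loadbalancers
	fmt.Println(flattenResource("limits", "storage"))                // (empty)
	fmt.Println(flattenResource("requests", "cpu"))                  // requests.cpu
}
```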

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/s3manager:v0.5.0@sha256:f21b1c37872221323cee0490f9c58e04fa360c2b8c68700ab0455bc39f3ad160
ghcr.io/cozystack/cozystack/s3manager:v0.5.0@sha256:7348bec610f08bd902c88c9a9f28fdd644727e2728a1e4103f88f0c99febd5e7

View File

@@ -1 +0,0 @@
apiserver.local.config/

View File

@@ -4,18 +4,6 @@ NAMESPACE=cozy-system
include ../../../scripts/common-envs.mk
include ../../../scripts/package.mk
run-local:
openssl req -nodes -new -x509 -keyout /tmp/ca.key -out /tmp/ca.crt -subj "/CN=kube-ca"
openssl req -out /tmp/client.csr -new -newkey rsa:2048 -nodes -keyout /tmp/client.key -subj "/C=US/ST=SomeState/L=L/OU=Dev/CN=development/O=system:masters"
openssl x509 -req -days 365 -in /tmp/client.csr -CA /tmp/ca.crt -CAkey /tmp/ca.key -set_serial 01 -sha256 -out /tmp/client.crt
openssl req -out /tmp/apiserver.csr -new -newkey rsa:2048 -nodes -keyout /tmp/apiserver.key -subj "/CN=cozystack-api" -config cozystack-api-openssl.cnf
openssl x509 -req -days 365 -in /tmp/apiserver.csr -CA /tmp/ca.crt -CAkey /tmp/ca.key -set_serial 01 -sha256 -out /tmp/apiserver.crt -extensions v3_req -extfile cozystack-api-openssl.cnf
CGO_ENABLED=0 go build -o /tmp/cozystack-api ../../../cmd/cozystack-api/main.go
/tmp/cozystack-api --client-ca-file /tmp/ca.crt --tls-cert-file /tmp/apiserver.crt --tls-private-key-file /tmp/apiserver.key --secure-port 6443 --kubeconfig $(KUBECONFIG) --authorization-kubeconfig $(KUBECONFIG) --authentication-kubeconfig $(KUBECONFIG)
debug:
dlv debug ../../../cmd/cozystack-api/main.go -- --client-ca-file /tmp/ca.crt --tls-cert-file /tmp/apiserver.crt --tls-private-key-file /tmp/apiserver.key --secure-port 6443 --kubeconfig $(KUBECONFIG) --authorization-kubeconfig $(KUBECONFIG) --authentication-kubeconfig $(KUBECONFIG)
image: image-cozystack-api
image-cozystack-api:

View File

@@ -1,13 +0,0 @@
[ req ]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
[ req_distinguished_name ]
CN = cozystack-api
[ v3_req ]
subjectAltName = @alt_names
[ alt_names ]
IP.1 = 127.0.0.1

View File

@@ -1,5 +1,5 @@
cozystackAPI:
image: ghcr.io/cozystack/cozystack/cozystack-api:v0.38.0@sha256:5eb5d6369c7c7ba0fa6b34b7c5022faa15c860b72e441b5fbde3eceda94efc88
image: ghcr.io/cozystack/cozystack/cozystack-api:v0.37.0@sha256:19d89e8afb90ce38ab7e42ecedfc28402f7c0b56f30957db957c5415132ff6ca
localK8sAPIEndpoint:
enabled: true
replicas: 2

View File

@@ -1,6 +1,6 @@
cozystackController:
image: ghcr.io/cozystack/cozystack/cozystack-controller:v0.38.0@sha256:4628a3711b6a6fc2e446255ee172cd268b28b07c65e98c302ea8897574dcbf22
image: ghcr.io/cozystack/cozystack/cozystack-controller:v0.37.0@sha256:845b8e68cbc277c2303080bcd55597e4334610d396dad258ad56fd906530acc3
debug: false
disableTelemetry: false
cozystackVersion: "v0.38.0"
cozystackVersion: "v0.37.0"
cozystackAPIKind: "DaemonSet"

View File

@@ -5,7 +5,7 @@ ARG NODE_VERSION=20.18.1
FROM node:${NODE_VERSION}-alpine AS openapi-k8s-toolkit-builder
RUN apk add git
WORKDIR /src
ARG COMMIT=cb2f122caafaa2fd5455750213d9e633017ec555
ARG COMMIT=7bd5380c6c4606640dd3bac68bf9dce469470518
RUN wget -O- https://github.com/cozystack/openapi-k8s-toolkit/archive/${COMMIT}.tar.gz | tar -xzvf- --strip-components=1
COPY openapi-k8s-toolkit/patches /patches
@@ -22,7 +22,7 @@ FROM node:${NODE_VERSION}-alpine AS builder
#RUN apk add git
WORKDIR /src
ARG COMMIT_REF=3cfbbf2156b6a5e4a1f283a032019530c0c2d37d
ARG COMMIT_REF=0c3629b2ce8545e81f7ece4d65372a188c802dfc
RUN wget -O- https://github.com/PRO-Robotech/openapi-ui/archive/${COMMIT_REF}.tar.gz | tar xzf - --strip-components=1
#COPY openapi-ui/patches /patches

View File

@@ -1,6 +1,6 @@
{{- $brandingConfig:= lookup "v1" "ConfigMap" "cozy-system" "cozystack-branding" }}
{{- $tenantText := "v0.38.0" }}
{{- $tenantText := "latest" }}
{{- $footerText := "Cozystack" }}
{{- $titleText := "Cozystack Dashboard" }}
{{- $logoText := "" }}

View File

@@ -34,14 +34,6 @@ data:
}
location /k8s {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
rewrite /k8s/(.*) /$1 break;
proxy_pass https://kubernetes.default.svc:443;
}

View File

@@ -1,6 +1,6 @@
openapiUI:
image: ghcr.io/cozystack/cozystack/openapi-ui:v0.38.0@sha256:78570edb9f4e329ffed0f8da3942acee1536323169d56324e57360df66044c28
image: ghcr.io/cozystack/cozystack/openapi-ui:latest@sha256:77991f2482c0026d082582b22a8ffb191f3ba6fc948b2f125ef9b1081538f865
openapiUIK8sBff:
image: ghcr.io/cozystack/cozystack/openapi-ui-k8s-bff:v0.38.0@sha256:b7f18b86913d94338f1ceb93fca6409d19f565e35d6d6e683ca93441920fec71
image: ghcr.io/cozystack/cozystack/openapi-ui-k8s-bff:latest@sha256:8386f0747266726afb2b30db662092d66b0af0370e3becd8bee9684125fa9cc9
tokenProxy:
image: ghcr.io/cozystack/cozystack/token-proxy:v0.38.0@sha256:fad27112617bb17816702571e1f39d0ac3fe5283468d25eb12f79906cdab566b
image: ghcr.io/cozystack/cozystack/token-proxy:latest@sha256:fad27112617bb17816702571e1f39d0ac3fe5283468d25eb12f79906cdab566b

View File

@@ -1,7 +1,4 @@
strimzi-kafka-operator:
watchAnyNamespace: true
generateNetworkPolicy: false
kubernetesServiceDnsDomain: cozy.local
resources:
limits:
memory: 512Mi
kubernetesServiceDnsDomain: cozy.local

View File

@@ -3,7 +3,7 @@ kamaji:
deploy: false
image:
pullPolicy: IfNotPresent
tag: v0.38.0@sha256:125e4e6a8b86418e891416d29353053ab8b65182b7e443f221b557c11a385280
tag: v0.37.0@sha256:9f4fd5045ede2909fbaf2572e4138fcbd8921071ecf8f08446257fddd0e6f655
repository: ghcr.io/cozystack/cozystack/kamaji
resources:
limits:
@@ -13,4 +13,4 @@ kamaji:
cpu: 100m
memory: 100Mi
extraArgs:
- --migrate-image=ghcr.io/cozystack/cozystack/kamaji:v0.38.0@sha256:125e4e6a8b86418e891416d29353053ab8b65182b7e443f221b557c11a385280
- --migrate-image=ghcr.io/cozystack/cozystack/kamaji:v0.37.0@sha256:9f4fd5045ede2909fbaf2572e4138fcbd8921071ecf8f08446257fddd0e6f655

View File

@@ -1,4 +1,4 @@
portSecurity: true
routes: ""
image: ghcr.io/cozystack/cozystack/kubeovn-plunger:v0.38.0@sha256:a140bdcc300bcfb63a5d64884d02d802d7669ba96dc65292a06f3b200ff627f8
image: ghcr.io/cozystack/cozystack/kubeovn-plunger:v0.37.0@sha256:9950614571ea77a55925eba0839b6b12c8e5a7a30b8858031a8c6050f261af1a
ovnCentralName: ovn-central

View File

@@ -1,3 +1,3 @@
portSecurity: true
routes: ""
image: ghcr.io/cozystack/cozystack/kubeovn-webhook:v0.38.0@sha256:7bfd458299a507f2cf82cddb65941ded6991fd4ba92fd46010cbc8c363126085
image: ghcr.io/cozystack/cozystack/kubeovn-webhook:v0.37.0@sha256:7e63205708e607ce2cedfe2a2cafd323ca51e3ebc71244a21ff6f9016c6c87bc

View File

@@ -44,7 +44,7 @@ kube-ovn:
memory: "50Mi"
limits:
cpu: "1000m"
memory: "2Gi"
memory: "1Gi"
kube-ovn-pinger:
requests:
cpu: "10m"
@@ -65,4 +65,4 @@ global:
images:
kubeovn:
repository: kubeovn
tag: v1.14.11@sha256:1b0f472cf30d5806e3afd10439ce8f9cfe8a004322dbd1911f7d69171fe936e5
tag: v1.14.5@sha256:af10da442a0c6dc7df47a0ef752e2eb5c247bb0b43069fdfcb2aa51511185ea2

View File

@@ -1,3 +1,3 @@
storageClass: replicated
csiDriver:
image: ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:d5c836ba33cf5dbed7e6f866784f668f80ffe69179e7c75847b680111984eefb
image: ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.0.0@sha256:c8b08084a86251cdd18e237de89b695bca0e4f7eb1f1f6ddc2b903b4d74ea5ff

View File

@@ -1,5 +1,5 @@
lineageControllerWebhook:
image: ghcr.io/cozystack/cozystack/lineage-controller-webhook:v0.38.0@sha256:fc2b04f59757904ec1557a39529b84b595114b040ef95d677fd7f21ac3958e0a
image: ghcr.io/cozystack/cozystack/lineage-controller-webhook:v0.37.0@sha256:845b8e68cbc277c2303080bcd55597e4334610d396dad258ad56fd906530acc3
debug: false
localK8sAPIEndpoint:
enabled: true

View File

@@ -1,6 +1,6 @@
dependencies:
- name: mariadb-operator-crds
repository: file://../mariadb-operator-crds
version: 25.10.2
digest: sha256:01b102dbdb92970e38346df382ed3e5cd93d02a3b642029e94320256c9bfad42
generated: "2025-10-28T11:29:04.951947063Z"
version: 0.38.1
digest: sha256:0f2ff90b83955a060f581b7db4a0c746338ae3a50d9766877c346c7f61d74cde
generated: "2025-04-15T16:54:07.813989419Z"

View File

@@ -1,10 +1,10 @@
apiVersion: v2
appVersion: 25.10.2
appVersion: 0.38.1
dependencies:
- condition: crds.enabled
name: mariadb-operator-crds
repository: file://../mariadb-operator-crds
version: 25.10.2
version: 0.38.1
description: Run and operate MariaDB in a cloud native way
home: https://github.com/mariadb-operator/mariadb-operator
icon: https://mariadb-operator.github.io/mariadb-operator/assets/mariadb_profile.svg
@@ -21,4 +21,4 @@ maintainers:
name: mmontes11
name: mariadb-operator
type: application
version: 25.10.2
version: 0.38.1

View File

@@ -2,7 +2,7 @@
[//]: # (README.md generated by gotmpl. DO NOT EDIT.)
![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 25.10.2](https://img.shields.io/badge/Version-25.10.2-informational?style=flat-square) ![AppVersion: 25.10.2](https://img.shields.io/badge/AppVersion-25.10.2-informational?style=flat-square)
![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![Version: 0.38.1](https://img.shields.io/badge/Version-0.38.1-informational?style=flat-square) ![AppVersion: 0.38.1](https://img.shields.io/badge/AppVersion-0.38.1-informational?style=flat-square)
Run and operate MariaDB in a cloud native way
@@ -16,7 +16,7 @@ helm install mariadb-operator-crds mariadb-operator/mariadb-operator-crds
helm install mariadb-operator mariadb-operator/mariadb-operator
```
Refer to the [helm documentation](https://github.com/mariadb-operator/mariadb-operator/blob/main/docs/helm.md) for further detail.
Refer to the [helm documentation](https://github.com/mariadb-operator/mariadb-operator/blob/main/docs/HELM.md) for further detail.
## Values
@@ -60,15 +60,14 @@ Refer to the [helm documentation](https://github.com/mariadb-operator/mariadb-op
| certController.tolerations | list | `[]` | Tolerations to add to cert-controller container |
| certController.topologySpreadConstraints | list | `[]` | topologySpreadConstraints to add to cert-controller container |
| clusterName | string | `"cluster.local"` | Cluster DNS name |
| config | object | `{"exporterImage":"prom/mysqld-exporter:v0.15.1","exporterMaxscaleImage":"docker-registry2.mariadb.com/mariadb/maxscale-prometheus-exporter-ubi:v0.0.1","galeraLibPath":"/usr/lib/galera/libgalera_smm.so","mariadbDefaultVersion":"11.8","mariadbImage":"docker-registry1.mariadb.com/library/mariadb:11.8.2","mariadbImageName":"docker-registry1.mariadb.com/library/mariadb","maxscaleImage":"docker-registry2.mariadb.com/mariadb/maxscale:23.08.5"}` | Operator configuration |
| config | object | `{"exporterImage":"prom/mysqld-exporter:v0.15.1","exporterMaxscaleImage":"docker-registry2.mariadb.com/mariadb/maxscale-prometheus-exporter-ubi:v0.0.1","galeraLibPath":"/usr/lib/galera/libgalera_smm.so","mariadbDefaultVersion":"11.4","mariadbImage":"docker-registry1.mariadb.com/library/mariadb:11.4.5","maxscaleImage":"docker-registry2.mariadb.com/mariadb/maxscale:23.08.5"}` | Operator configuration |
| config.exporterImage | string | `"prom/mysqld-exporter:v0.15.1"` | Default MariaDB exporter image |
| config.exporterMaxscaleImage | string | `"docker-registry2.mariadb.com/mariadb/maxscale-prometheus-exporter-ubi:v0.0.1"` | Default MaxScale exporter image |
| config.galeraLibPath | string | `"/usr/lib/galera/libgalera_smm.so"` | Galera library path to be used with MariaDB Galera |
| config.mariadbDefaultVersion | string | `"11.8"` | Default MariaDB version to be used when unable to infer it via image tag |
| config.mariadbImage | string | `"docker-registry1.mariadb.com/library/mariadb:11.8.2"` | Default MariaDB image |
| config.mariadbImageName | string | `"docker-registry1.mariadb.com/library/mariadb"` | Default MariaDB image name |
| config.mariadbDefaultVersion | string | `"11.4"` | Default MariaDB version to be used when unable to infer it via image tag |
| config.mariadbImage | string | `"docker-registry1.mariadb.com/library/mariadb:11.4.5"` | Default MariaDB image |
| config.maxscaleImage | string | `"docker-registry2.mariadb.com/mariadb/maxscale:23.08.5"` | Default MaxScale image |
| crds | object | `{"enabled":false}` | CRDs |
| crds | object | `{"enabled":false}` | - CRDs |
| crds.enabled | bool | `false` | Whether the helm chart should create and update the CRDs. It is false by default, which implies that the CRDs must be managed independently with the mariadb-operator-crds helm chart. **WARNING** This should only be set to true during the initial deployment. If this chart manages the CRDs and is later uninstalled, all MariaDB instances will be DELETED. |
| currentNamespaceOnly | bool | `false` | Whether the operator should watch CRDs only in its own namespace or not. |
| extrArgs | list | `[]` | Extra arguments to be passed to the controller entrypoint |

View File

@@ -17,6 +17,6 @@ helm install mariadb-operator-crds mariadb-operator/mariadb-operator-crds
helm install mariadb-operator mariadb-operator/mariadb-operator
```
Refer to the [helm documentation](https://github.com/mariadb-operator/mariadb-operator/blob/main/docs/helm.md) for further detail.
Refer to the [helm documentation](https://github.com/mariadb-operator/mariadb-operator/blob/main/docs/HELM.md) for further detail.
{{ template "chart.valuesSection" . }}

View File

@@ -16,4 +16,4 @@ maintainers:
name: mmontes11
name: mariadb-operator-crds
type: application
version: 25.10.2
version: 0.38.1

View File

@@ -1,4 +1,4 @@
mariadb-operator has been successfully deployed! 🦭
Not sure what to do next? 😅 Check out:
https://github.com/mariadb-operator/mariadb-operator/blob/main/docs/quickstart.md
https://github.com/mariadb-operator/mariadb-operator/blob/main/docs/QUICKSTART.md

View File

@@ -51,10 +51,10 @@ rules:
- patch
- watch
- apiGroups:
- discovery.k8s.io
- ""
resources:
- endpointslices
- endpointslices/restricted
- endpoints
- endpoints/restricted
verbs:
- get
- list

View File

@@ -4,7 +4,6 @@ data:
MARIADB_GALERA_LIB_PATH: "{{ .Values.config.galeraLibPath }}"
MARIADB_DEFAULT_VERSION: "{{ .Values.config.mariadbDefaultVersion }}"
RELATED_IMAGE_MARIADB: "{{ .Values.config.mariadbImage }}"
RELATED_IMAGE_MARIADB_NAME: "{{ .Values.config.mariadbImageName }}"
RELATED_IMAGE_MAXSCALE: "{{ .Values.config.maxscaleImage }}"
RELATED_IMAGE_EXPORTER: "{{ .Values.config.exporterImage }}"
RELATED_IMAGE_EXPORTER_MAXSCALE: "{{ .Values.config.exporterMaxscaleImage }}"

View File

@@ -36,6 +36,17 @@ rules:
verbs:
- create
- patch
- apiGroups:
- ""
resources:
- endpoints
- endpoints/restricted
verbs:
- create
- get
- list
- patch
- watch
- apiGroups:
- ""
resources:
@@ -54,7 +65,6 @@ rules:
- persistentvolumeclaims
verbs:
- create
- delete
- deletecollection
- list
- patch
@@ -111,7 +121,6 @@ rules:
verbs:
- create
- delete
- get
- list
- patch
- watch
@@ -124,17 +133,6 @@ rules:
- list
- patch
- watch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
- endpointslices/restricted
verbs:
- create
- get
- list
- patch
- watch
- apiGroups:
- k8s.mariadb.com
resources:
@@ -143,9 +141,7 @@ rules:
- databases
- grants
- mariadbs
- externalmariadbs
- maxscales
- physicalbackups
- restores
- sqljobs
- users
@@ -165,9 +161,7 @@ rules:
- databases/finalizers
- grants/finalizers
- mariadbs/finalizers
- externalmariadbs/finalizers
- maxscales/finalizers
- physicalbackups/finalizers
- restores/finalizers
- sqljobs/finalizers
- users/finalizers
@@ -181,9 +175,7 @@ rules:
- databases/status
- grants/status
- mariadbs/status
- externalmariadbs/status
- maxscales/status
- physicalbackups/status
- restores/status
- sqljobs/status
- users/status
@@ -228,17 +220,6 @@ rules:
- list
- patch
- watch
- apiGroups:
- snapshot.storage.k8s.io
resources:
- volumesnapshots
verbs:
- create
- delete
- get
- list
- patch
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding

View File

@@ -53,6 +53,17 @@ rules:
- list
- patch
- watch
- apiGroups:
- ""
resources:
- endpoints
- endpoints/restricted
verbs:
- create
- get
- list
- patch
- watch
- apiGroups:
- ""
resources:
@@ -71,7 +82,6 @@ rules:
- persistentvolumeclaims
verbs:
- create
- delete
- deletecollection
- list
- patch
@@ -140,7 +150,6 @@ rules:
verbs:
- create
- delete
- get
- list
- patch
- watch
@@ -153,17 +162,6 @@ rules:
- list
- patch
- watch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
- endpointslices/restricted
verbs:
- create
- get
- list
- patch
- watch
- apiGroups:
- k8s.mariadb.com
resources:
@@ -172,9 +170,7 @@ rules:
- databases
- grants
- mariadbs
- externalmariadbs
- maxscales
- physicalbackups
- restores
- sqljobs
- users
@@ -194,9 +190,7 @@ rules:
- databases/finalizers
- grants/finalizers
- mariadbs/finalizers
- externalmariadbs/finalizers
- maxscales/finalizers
- physicalbackups/finalizers
- restores/finalizers
- sqljobs/finalizers
- users/finalizers
@@ -210,9 +204,7 @@ rules:
- databases/status
- grants/status
- mariadbs/status
- externalmariadbs/status
- maxscales/status
- physicalbackups/status
- restores/status
- sqljobs/status
- users/status
@@ -258,17 +250,6 @@ rules:
- list
- patch
- watch
- apiGroups:
- snapshot.storage.k8s.io
resources:
- volumesnapshots
verbs:
- create
- delete
- get
- list
- patch
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding

View File

@@ -1,5 +1,41 @@
{{ if and (not .Values.currentNamespaceOnly) .Values.webhook.enabled }}
{{ $fullName := include "mariadb-operator.fullname" . }}
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
name: {{ $fullName }}-webhook
labels:
{{- include "mariadb-operator-webhook.labels" . | nindent 4 }}
annotations:
{{- if .Values.webhook.cert.certManager.enabled }}
cert-manager.io/inject-ca-from: {{ .Release.Namespace }}/{{ include "mariadb-operator.fullname" . }}-webhook-cert
{{- else }}
k8s.mariadb.com/webhook: ""
{{- end }}
{{- with .Values.webhook.annotations }}
{{ toYaml . | indent 4 }}
{{- end }}
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
name: {{ $fullName }}-webhook
namespace: {{ .Release.Namespace }}
path: /mutate-k8s-mariadb-com-v1alpha1-mariadb
failurePolicy: Fail
name: mmariadb-v1alpha1.kb.io
rules:
- apiGroups:
- k8s.mariadb.com
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- mariadbs
sideEffects: None
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
@@ -37,26 +73,6 @@ webhooks:
resources:
- backups
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:
service:
name: {{ $fullName }}-webhook
namespace: {{ .Release.Namespace }}
path: /validate-k8s-mariadb-com-v1alpha1-physicalbackup
failurePolicy: Fail
name: vphysicalbackup-v1alpha1.kb.io
rules:
- apiGroups:
- k8s.mariadb.com
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- physicalbackups
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:
@@ -217,4 +233,4 @@ webhooks:
resources:
- users
sideEffects: None
{{- end }}
{{- end }}

View File

@@ -1,6 +1,6 @@
nameOverride: ""
fullnameOverride: ""
# -- CRDs
# --- CRDs
crds:
# -- Whether the helm chart should create and update the CRDs. It is false by default, which implies that the CRDs must be
# managed independently with the mariadb-operator-crds helm chart.
@@ -310,11 +310,9 @@ config:
# -- Galera library path to be used with MariaDB Galera
galeraLibPath: /usr/lib/galera/libgalera_smm.so
# -- Default MariaDB version to be used when unable to infer it via image tag
mariadbDefaultVersion: "11.8"
mariadbDefaultVersion: "11.4"
# -- Default MariaDB image
mariadbImage: docker-registry1.mariadb.com/library/mariadb:11.8.2
# -- Default MariaDB image name
mariadbImageName: docker-registry1.mariadb.com/library/mariadb
mariadbImage: docker-registry1.mariadb.com/library/mariadb:11.4.5
# -- Default MaxScale image
maxscaleImage: docker-registry2.mariadb.com/mariadb/maxscale:23.08.5
# -- Default MariaDB exporter image

View File

@@ -154,7 +154,7 @@ spec:
serviceAccountName: multus
containers:
- name: kube-multus
image: ghcr.io/k8snetworkplumbingwg/multus-cni:v4.2.3-thick
image: ghcr.io/k8snetworkplumbingwg/multus-cni:v4.2.2-thick
command: [ "/usr/src/multus-cni/bin/multus-daemon" ]
resources:
requests:
@@ -162,7 +162,7 @@ spec:
memory: "100Mi"
limits:
cpu: "100m"
memory: "300Mi"
memory: "100Mi"
securityContext:
privileged: true
terminationMessagePolicy: FallbackToLogsOnError
@@ -201,13 +201,11 @@ spec:
fieldPath: spec.nodeName
initContainers:
- name: install-multus-binary
image: ghcr.io/k8snetworkplumbingwg/multus-cni:v4.2.3-thick
image: ghcr.io/k8snetworkplumbingwg/multus-cni:v4.2.2-thick
command:
- "/usr/src/multus-cni/bin/install_multus"
- "-d"
- "/host/opt/cni/bin"
- "-t"
- "thick"
- "sh"
- "-c"
- "cp /usr/src/multus-cni/bin/multus-shim /host/opt/cni/bin/multus-shim && cp /usr/src/multus-cni/bin/passthru /host/opt/cni/bin/passthru"
resources:
requests:
cpu: "10m"

View File

@@ -1,3 +1,3 @@
objectstorage:
controller:
image: "ghcr.io/cozystack/cozystack/objectstorage-controller:v0.38.0@sha256:7d37495cce46d30d4613ecfacaa7b7f140e7ea8f3dbcc3e8c976e271de6cc71b"
image: "ghcr.io/cozystack/cozystack/objectstorage-controller:v0.37.0@sha256:5f2eed05d19ba971806374834cb16ca49282aac76130194c00b213c79ce3e10d"

View File

@@ -3,8 +3,8 @@ name: piraeus
description: |
The Piraeus Operator manages software defined storage clusters using LINSTOR in Kubernetes.
type: application
version: 2.10.1
appVersion: "v2.10.1"
version: 2.9.1
appVersion: "v2.9.1"
maintainers:
- name: Piraeus Datastore
url: https://piraeus.io

View File

@@ -3,8 +3,33 @@
Deploys the [Piraeus Operator](https://github.com/piraeusdatastore/piraeus-operator) which deploys and manages a simple
and resilient storage solution for Kubernetes.
The main deployment method for Piraeus Operator switched to [`kustomize`](https://piraeus.io/docs/stable/tutorial/get-started/)
The main deployment method for Piraeus Operator switched to [`kustomize`](../../docs/tutorial)
in release `v2.0.0`. This chart is intended for users who want to continue using Helm.
This chart **only** configures the Operator, but does not create the `LinstorCluster` resource creating the actual
storage system. Refer to the [how-to guide](https://piraeus.io/docs/stable/how-to/helm/).
storage system. Refer to the existing [tutorials](../../docs/tutorial)
and [how-to guides](../../docs/how-to).
## Deploying Piraeus Operator
To deploy Piraeus Operator with Helm, clone this repository and deploy the chart:
```
$ git clone --branch v2 https://github.com/piraeusdatastore/piraeus-operator
$ cd piraeus-operator
$ helm install piraeus-operator charts/piraeus-operator --create-namespace -n piraeus-datastore
```
Follow the instructions printed by Helm to create your storage cluster:
```
$ kubectl apply -f - <<EOF
apiVersion: piraeus.io/v1
kind: LinstorCluster
metadata:
name: linstorcluster
spec: {}
EOF
```
Check out our [documentation](../../docs) for more information.

View File

@@ -14,7 +14,7 @@ Piraeus Operator installed.
` }}
{{- end }}
{{- if and (not (.Capabilities.APIVersions.Has "piraeus.io/v1/LinstorCluster")) (not .Values.installCRDs) }}
{{- if not (.Capabilities.APIVersions.Has "piraeus.io/v1/LinstorCluster") }}
It looks like the necessary CRDs for Piraeus Operator are still missing.
To apply them via helm now use:
@@ -23,12 +23,18 @@ To apply them via helm now use:
Alternatively, you can manage them manually:
kubectl apply --server-side -k "https://github.com/piraeusdatastore/piraeus-operator//config/crd?ref={{ .Chart.AppVersion }}"
kubectl apply --server-side -k "https://github.com/piraeusdatastore/piraeus-operator//config/crd?ref=v2"
{{- end }}
To get started with Piraeus Datastore deploy the "linstor-cluster" chart:
To get started with Piraeus, simply run:
$ helm upgrade --install --namespace {{ .Release.Namespace }} linstor-cluster oci://ghcr.io/piraeusdatastore/helm-charts/linstor-cluster
$ kubectl apply -f - <<EOF
apiVersion: piraeus.io/v1
kind: LinstorCluster
metadata:
name: linstorcluster
spec: {}
EOF
For next steps, check out our documentation at https://piraeus.io/docs/{{ .Chart.AppVersion }}
For next steps, check out our documentation at https://github.com/piraeusdatastore/piraeus-operator/tree/v2/docs

View File

@@ -23,26 +23,20 @@ data:
tag: v1.32.3
image: piraeus-server
linstor-csi:
tag: v1.10.2
tag: v1.9.0
image: piraeus-csi
nfs-server:
tag: v1.10.2
image: piraeus-csi-nfs-server
drbd-reactor:
tag: v1.10.0
tag: v1.9.0
image: drbd-reactor
ha-controller:
tag: v1.3.1
tag: v1.3.0
image: piraeus-ha-controller
drbd-shutdown-guard:
tag: v1.1.1
tag: v1.0.0
image: drbd-shutdown-guard
ktls-utils:
tag: v1.2.1
image: ktls-utils
linstor-affinity-controller:
tag: v1.3.0
image: linstor-affinity-controller
drbd-module-loader:
tag: v9.2.15
# The special "match" attribute is used to select an image based on the node's reported OS.
@@ -99,13 +93,13 @@ data:
tag: v2.17.0
image: livenessprobe
csi-provisioner:
tag: v6.0.0
tag: v5.3.0
image: csi-provisioner
csi-snapshotter:
tag: v8.4.0
tag: v8.3.0
image: csi-snapshotter
csi-resizer:
tag: v2.0.0
tag: v1.14.0
image: csi-resizer
csi-external-health-monitor-controller:
tag: v0.16.0

View File

@@ -15,41 +15,7 @@ spec:
singular: linstorcluster
scope: Cluster
versions:
- additionalPrinterColumns:
- description: If the LINSTOR Cluster is available
jsonPath: .status.conditions[?(@.type=='Available')].status
name: Available
type: string
- description: If the LINSTOR Cluster is fully configured
jsonPath: .status.conditions[?(@.type=='Configured')].status
name: Configured
type: string
- description: The version of the LINSTOR Cluster
jsonPath: .status.version
name: Version
priority: 10
type: string
- description: The number of running/expected Satellites
jsonPath: .status.satellites
name: Satellites
type: string
- description: The used capacity in all storage pools
jsonPath: .status.capacity
name: Used Capacity
type: string
- description: The number of volumes in the cluster
jsonPath: .status.numberOfVolumes
name: Volumes
type: integer
- description: The number of snapshots in the cluster
jsonPath: .status.numberOfSnapshots
name: Snapshots
priority: 10
type: integer
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1
- name: v1
schema:
openAPIV3Schema:
description: LinstorCluster is the Schema for the linstorclusters API
@@ -74,30 +40,6 @@ spec:
spec:
description: LinstorClusterSpec defines the desired state of LinstorCluster
properties:
affinityController:
description: AffinityController controls the deployment of the Affinity
Controller Deployment.
properties:
enabled:
default: true
description: Enable the component.
type: boolean
podTemplate:
description: |-
Template to apply to Pods of the component.
The template is applied as a patch to the default deployment, so it can be "sparse", not listing any
containers or volumes that should remain unchanged.
See https://kubernetes.io/docs/concepts/workloads/pods/#pod-templates
type: object
x-kubernetes-map-type: atomic
x-kubernetes-preserve-unknown-fields: true
replicas:
description: Number of desired pods. Defaults to 1.
format: int32
minimum: 0
type: integer
type: object
apiTLS:
description: |-
ApiTLS secures the LINSTOR API.
@@ -105,11 +47,6 @@ spec:
This configures the TLS key and certificate used to secure the LINSTOR API.
nullable: true
properties:
affinityControllerSecretName:
description: |-
AffinityControllerSecretName references a secret holding the TLS key and certificate used by the Affinity
Controller to monitor volume state. Defaults to "linstor-affinity-controller-tls".
type: string
apiSecretName:
description: |-
ApiSecretName references a secret holding the TLS key and certificate used to protect the API.
@@ -177,11 +114,6 @@ spec:
CsiNodeSecretName references a secret holding the TLS key and certificate used by the CSI Nodes to query
the volume state. Defaults to "linstor-csi-node-tls".
type: string
nfsServerSecretName:
description: |-
NFSServerSecretName references a secret holding the TLS key and certificate used by the NFS Server to query
the cluster state. Defaults to "linstor-csi-nfs-server-tls".
type: string
type: object
controller:
description: Controller controls the deployment of the LINSTOR Controller
@@ -220,11 +152,6 @@ spec:
type: object
x-kubernetes-map-type: atomic
x-kubernetes-preserve-unknown-fields: true
replicas:
description: Number of desired pods. Defaults to 1.
format: int32
minimum: 0
type: integer
type: object
csiNode:
description: CSINode controls the deployment of the CSI Node DaemonSet.
@@ -345,25 +272,6 @@ spec:
* Store credentials for accessing remotes for backups.
See https://linbit.com/drbd-user-guide/linstor-guide-1_0-en/#s-encrypt_commands for more information.
type: string
nfsServer:
description: NFSServer controls the deployment of the LINSTOR CSI
NFS Server DaemonSet.
properties:
enabled:
default: true
description: Enable the component.
type: boolean
podTemplate:
description: |-
Template to apply to Pods of the component.
The template is applied as a patch to the default deployment, so it can be "sparse", not listing any
containers or volumes that should remain unchanged.
See https://kubernetes.io/docs/concepts/workloads/pods/#pod-templates
type: object
x-kubernetes-map-type: atomic
x-kubernetes-preserve-unknown-fields: true
type: object
nodeAffinity:
description: |-
NodeAffinity selects the nodes on which LINSTOR Satellites will be deployed.
@@ -471,16 +379,9 @@ spec:
JSON patch and its targets.
properties:
options:
additionalProperties:
type: boolean
description: Options is a list of options for the patch
properties:
allowKindChange:
description: AllowKindChange allows kind changes to the
resource.
type: boolean
allowNameChange:
description: AllowNameChange allows name changes to the
resource.
type: boolean
type: object
patch:
description: Patch is the content of a patch.
@@ -592,15 +493,6 @@ spec:
status:
description: LinstorClusterStatus defines the observed state of LinstorCluster
properties:
availableCapacityBytes:
description: The number of bytes in total in all storage pools in
the LINSTOR Cluster.
format: int64
type: integer
capacity:
description: Capacity mirrors the information from TotalCapacityBytes
and FreeCapacityBytes in a human-readable string
type: string
conditions:
description: Current LINSTOR Cluster state
items:
@@ -661,35 +553,6 @@ spec:
x-kubernetes-list-map-keys:
- type
x-kubernetes-list-type: map
freeCapacityBytes:
description: The number of bytes free in all storage pools in the
LINSTOR Cluster.
format: int64
type: integer
numberOfSnapshots:
description: The number of snapshots in the LINSTOR Cluster.
format: int32
type: integer
numberOfVolumes:
description: The number of volumes in the LINSTOR Cluster.
format: int32
type: integer
runningSatellites:
description: The number of LINSTOR Satellites currently running.
format: int32
type: integer
satellites:
description: Satellites mirrors the information from ScheduledSatellites
and RunningSatellites in a human-readable string
type: string
scheduledSatellites:
description: The number of LINSTOR Satellites that are expected to
run.
format: int32
type: integer
version:
description: The Version of the LINSTOR Cluster.
type: string
type: object
type: object
served: true
@@ -712,15 +575,7 @@ spec:
singular: linstornodeconnection
scope: Cluster
versions:
- additionalPrinterColumns:
- description: If the LINSTOR Node Connection is fully configured
jsonPath: .status.conditions[?(@.type=='Configured')].status
name: Configured
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1
- name: v1
schema:
openAPIV3Schema:
description: LinstorNodeConnection is the Schema for the linstornodeconnections
@@ -922,28 +777,7 @@ spec:
singular: linstorsatelliteconfiguration
scope: Cluster
versions:
- additionalPrinterColumns:
- description: The node selector used
jsonPath: .spec.nodeSelector
name: Selector
type: string
- description: If the Configuration was applied
jsonPath: .status.conditions[?(@.type=='Applied')].status
name: Applied
type: string
- description: Number of Satellites this Configuration has been applied to
jsonPath: .status.matched
name: Matched
type: integer
- description: Satellites this Configuration has been applied to
jsonPath: .status.appliedTo
name: Satellites
priority: 10
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1
- name: v1
schema:
openAPIV3Schema:
description: LinstorSatelliteConfiguration is the Schema for the linstorsatelliteconfigurations
@@ -1179,16 +1013,9 @@ spec:
JSON patch and its targets.
properties:
options:
additionalProperties:
type: boolean
description: Options is a list of options for the patch
properties:
allowKindChange:
description: AllowKindChange allows kind changes to the
resource.
type: boolean
allowNameChange:
description: AllowNameChange allows name changes to the
resource.
type: boolean
type: object
patch:
description: Patch is the content of a patch.
@@ -1447,12 +1274,6 @@ spec:
description: LinstorSatelliteConfigurationStatus defines the observed
state of LinstorSatelliteConfiguration
properties:
appliedTo:
description: AppliedTo lists the LinstorSatellite resource this configuration
was applied to
items:
type: string
type: array
conditions:
description: Current LINSTOR Satellite Config state
items:
@@ -1513,10 +1334,6 @@ spec:
x-kubernetes-list-map-keys:
- type
x-kubernetes-list-type: map
matched:
description: Number of configured LinstorSatellite resource.
format: int64
type: integer
type: object
type: object
served: true
@@ -1539,51 +1356,7 @@ spec:
singular: linstorsatellite
scope: Cluster
versions:
- additionalPrinterColumns:
- description: If the LINSTOR Satellite is connected
jsonPath: .status.conditions[?(@.type=='Available')].status
name: Connected
type: string
- description: If the LINSTOR Satellite is fully configured
jsonPath: .status.conditions[?(@.type=='Configured')].status
name: Configured
type: string
- description: The Satellite Configurations applied to this Satellite
jsonPath: .metadata.annotations.piraeus\.io/applied-configurations
name: Applied Configurations
priority: 10
type: string
- description: The deletion policy of the Satellite
jsonPath: .spec.deletionPolicy
name: Deletion Policy
type: string
- description: The used capacity on the node
jsonPath: .status.capacity
name: Used Capacity
type: string
- description: The number of volumes on the node
jsonPath: .status.numberOfVolumes
name: Volumes
type: integer
- description: The number of snapshots on the node
jsonPath: .status.numberOfSnapshots
name: Snapshots
priority: 10
type: integer
- description: The storage providers supported by the node
jsonPath: .status.storageProviders
name: Storage Providers
priority: 10
type: string
- description: The device layers supported by the node
jsonPath: .status.deviceLayers
name: Device Layers
priority: 10
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1
- name: v1
schema:
openAPIV3Schema:
description: LinstorSatellite is the Schema for the linstorsatellites API
@@ -1773,16 +1546,9 @@ spec:
JSON patch and its targets.
properties:
options:
additionalProperties:
type: boolean
description: Options is a list of options for the patch
properties:
allowKindChange:
description: AllowKindChange allows kind changes to the
resource.
type: boolean
allowNameChange:
description: AllowNameChange allows name changes to the
resource.
type: boolean
type: object
patch:
description: Patch is the content of a patch.
@@ -2035,15 +1801,6 @@ spec:
status:
description: LinstorSatelliteStatus defines the observed state of LinstorSatellite
properties:
availableCapacityBytes:
description: The number of bytes in total in all storage pools on
this Satellite.
format: int64
type: integer
capacity:
description: Capacity mirrors the information from TotalCapacityBytes
and FreeCapacityBytes in a human-readable string.
type: string
conditions:
description: Current LINSTOR Satellite state
items:
@@ -2104,31 +1861,6 @@ spec:
x-kubernetes-list-map-keys:
- type
x-kubernetes-list-type: map
deviceLayers:
description: DeviceLayers lists the device layers (LUKS, CACHE, etc...)
this Satellite supports.
items:
type: string
type: array
freeCapacityBytes:
description: The number of bytes free in all storage pools on this
Satellite.
format: int64
type: integer
numberOfSnapshots:
description: The number of snapshots on this Satellite.
format: int32
type: integer
numberOfVolumes:
description: The number of volumes on this Satellite.
format: int32
type: integer
storageProviders:
description: StorageProviders lists the storage providers (LVM, ZFS,
etc...) this Satellite supports.
items:
type: string
type: array
type: object
type: object
served: true


@@ -13,15 +13,9 @@ spec:
template:
metadata:
labels:
{{- include "piraeus-operator.selectorLabels" . | nindent 8 }}
{{- with .Values.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- include "piraeus-operator.selectorLabels" . | nindent 8 }}
annotations:
kubectl.kubernetes.io/default-container: manager
{{- with .Values.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
containers:
- args:
@@ -96,9 +90,6 @@ spec:
{{- end }}
securityContext:
runAsNonRoot: true
{{- with .Values.podSecurityContext }}
{{ toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "piraeus-operator.serviceAccountName" . }}
terminationGracePeriodSeconds: 10
priorityClassName: {{ .Values.priorityClassName | default "system-cluster-critical" }}


@@ -30,18 +30,8 @@ rules:
- ""
resources:
- nodes
verbs:
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- persistentvolumeclaims
verbs:
- delete
- get
- list
- patch
@@ -102,21 +92,6 @@ rules:
- patch
- update
- watch
- apiGroups:
- cluster.x-k8s.io
resources:
- machines
verbs:
- get
- update
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
- endpointslices/restricted
verbs:
- create
- delete
- apiGroups:
- events.k8s.io
resources:
@@ -128,32 +103,6 @@ rules:
- patch
- update
- watch
- apiGroups:
- groupsnapshot.storage.k8s.io
resources:
- volumegroupsnapshotclasses
- volumesnapshots
verbs:
- get
- list
- watch
- apiGroups:
- groupsnapshot.storage.k8s.io
resources:
- volumegroupsnapshotcontents
verbs:
- get
- list
- patch
- update
- watch
- apiGroups:
- groupsnapshot.storage.k8s.io
resources:
- volumegroupsnapshotcontents/status
verbs:
- patch
- update
- apiGroups:
- internal.linstor.linbit.com
resources:
@@ -245,6 +194,7 @@ rules:
resources:
- volumesnapshotcontents
verbs:
- delete
- get
- list
- patch


@@ -1,4 +1,3 @@
---
replicaCount: 1
installCRDs: false
@@ -19,7 +18,6 @@ operator:
options:
leaderElect: true
#clusterApiKubeconfig: "" # set to "<none>" to disable ClusterAPI integration
resources: { }
# limits:
@@ -82,7 +80,6 @@ rbac:
create: true
podAnnotations: { }
podLabels: {}
podSecurityContext: {}
# fsGroup: 2000


@@ -124,7 +124,7 @@ seaweedfs:
bucketClassName: "seaweedfs"
region: ""
sidecar:
image: "ghcr.io/cozystack/cozystack/objectstorage-sidecar:v0.38.0@sha256:4548d85e7e69150aaf52fbb17fb9487e9714bdd8407aff49762cf39b9d0ab29c"
image: "ghcr.io/cozystack/cozystack/objectstorage-sidecar:latest@sha256:f4ec2b5ec8183870e710b32fa11b3af5d97fa664572df5edcff4b593b00f9a7b"
certificates:
commonName: "SeaweedFS CA"
ipAddresses: []


@@ -1,23 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -1,24 +0,0 @@
apiVersion: v2
name: quota
description: Testing chart for cozy-lib
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"


@@ -1,2 +0,0 @@
test:
helm unittest .


@@ -1 +0,0 @@
../../../library/cozy-lib


@@ -1,8 +0,0 @@
apiVersion: v1
kind: ResourceQuota
metadata:
name: {{ .Release.Name }}
spec:
{{- with .Values.quota }}
hard: {{- include "cozy-lib.resources.flatten" (list . $) | nindent 4 }}
{{- end }}


@@ -1,50 +0,0 @@
# ./tests/quota_test.yaml
suite: quota helper
templates:
- templates/tests/quota.yaml
tests:
- it: correctly interprets special kubernetes quota types
values:
- quota_values.yaml
release:
name: myrelease
namespace: default
revision: 1
isUpgrade: false
asserts:
- equal:
path: spec.hard["limits.cpu"]
value: "20"
- equal:
path: spec.hard["requests.cpu"]
value: "2"
- equal:
path: spec.hard["limits.foobar"]
value: "3"
- equal:
path: spec.hard["requests.foobar"]
value: "3"
- equal:
path: spec.hard["services.loadbalancers"]
value: "2"
- equal:
path: spec.hard["requests.storage"]
value: "5Gi"
- notExists:
path: spec.hard["limits.storage"]
- notExists:
path: spec.hard["limits.services.loadbalancers"]
- notExists:
path: spec.hard["requests.services.loadbalancers"]


@@ -1,5 +0,0 @@
quota:
services.loadbalancers: "2"
cpu: "20"
storage: "5Gi"
foobar: "3"


@@ -30,16 +30,14 @@ import (
"k8s.io/apimachinery/pkg/runtime/serializer"
"k8s.io/apiserver/pkg/registry/rest"
genericapiserver "k8s.io/apiserver/pkg/server"
"k8s.io/klog/v2"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/cache"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
"github.com/cozystack/cozystack/pkg/apis/apps"
appsinstall "github.com/cozystack/cozystack/pkg/apis/apps/install"
coreinstall "github.com/cozystack/cozystack/pkg/apis/apps/install"
"github.com/cozystack/cozystack/pkg/apis/core"
coreinstall "github.com/cozystack/cozystack/pkg/apis/core/install"
"github.com/cozystack/cozystack/pkg/config"
cozyregistry "github.com/cozystack/cozystack/pkg/registry"
applicationstorage "github.com/cozystack/cozystack/pkg/registry/apps/application"
@@ -50,8 +48,7 @@ import (
var (
// Scheme defines methods for serializing and deserializing API objects.
Scheme = runtime.NewScheme()
mgrScheme = runtime.NewScheme()
Scheme = runtime.NewScheme()
// Codecs provides methods for retrieving codecs and serializers for specific
// versions and content types.
Codecs = serializer.NewCodecFactory(Scheme)
@@ -60,23 +57,18 @@ var (
)
func init() {
ctrl.SetLogger(zap.New(zap.UseFlagOptions(&zap.Options{
Development: true,
// any other zap.Options tweaks
})))
klog.SetLogger(ctrl.Log.WithName("klog"))
appsinstall.Install(Scheme)
coreinstall.Install(Scheme)
// Register HelmRelease types.
if err := helmv2.AddToScheme(mgrScheme); err != nil {
if err := helmv2.AddToScheme(Scheme); err != nil {
panic(fmt.Errorf("Failed to add HelmRelease types to scheme: %w", err))
}
if err := corev1.AddToScheme(mgrScheme); err != nil {
if err := corev1.AddToScheme(Scheme); err != nil {
panic(fmt.Errorf("Failed to add core types to scheme: %w", err))
}
if err := rbacv1.AddToScheme(mgrScheme); err != nil {
if err := rbacv1.AddToScheme(Scheme); err != nil {
panic(fmt.Errorf("Failed to add RBAC types to scheme: %w", err))
}
// Add unversioned types.
@@ -142,7 +134,7 @@ func (c completedConfig) New() (*CozyServer, error) {
}
mgr, err := ctrl.NewManager(cfg, ctrl.Options{
Scheme: mgrScheme,
Scheme: Scheme,
Cache: cache.Options{SyncPeriod: &syncPeriod},
})
if err != nil {
@@ -172,7 +164,7 @@ func (c completedConfig) New() (*CozyServer, error) {
}
cli := mgr.GetClient()
watchCli, err := client.NewWithWatch(cfg, client.Options{Scheme: mgrScheme})
watchCli, err := client.NewWithWatch(cfg, client.Options{Scheme: Scheme})
if err != nil {
return nil, fmt.Errorf("failed to build watch client: %w", err)
}


@@ -42,11 +42,11 @@ import (
genericoptions "k8s.io/apiserver/pkg/server/options"
utilfeature "k8s.io/apiserver/pkg/util/feature"
utilversionpkg "k8s.io/apiserver/pkg/util/version"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/component-base/featuregate"
baseversion "k8s.io/component-base/version"
netutils "k8s.io/utils/net"
"sigs.k8s.io/controller-runtime/pkg/client"
k8sconfig "sigs.k8s.io/controller-runtime/pkg/client/config"
)
// CozyServerOptions holds the state for the Cozy API server
@@ -150,7 +150,7 @@ func (o *CozyServerOptions) Complete() error {
return fmt.Errorf("failed to register types: %w", err)
}
cfg, err := k8sconfig.GetConfig()
cfg, err := clientcmd.BuildConfigFromFlags("", "")
if err != nil {
return fmt.Errorf("failed to get kubeconfig: %w", err)
}


@@ -142,17 +142,9 @@ func (r *REST) GetSingularName() string {
// Create handles the creation of a new Application by converting it to a HelmRelease
func (r *REST) Create(ctx context.Context, obj runtime.Object, createValidation rest.ValidateObjectFunc, options *metav1.CreateOptions) (runtime.Object, error) {
// Assert the object is of type Application
us, ok := obj.(*unstructured.Unstructured)
app, ok := obj.(*appsv1alpha1.Application)
if !ok {
return nil, fmt.Errorf("expected unstructured.Unstructured object, got %T", obj)
}
app := &appsv1alpha1.Application{}
if err := runtime.DefaultUnstructuredConverter.FromUnstructured(us.Object, app); err != nil {
errMsg := fmt.Sprintf("returned unstructured.Unstructured object was not an Application")
klog.Errorf(errMsg)
return nil, fmt.Errorf(errMsg)
return nil, fmt.Errorf("expected Application object, got %T", obj)
}
// Convert Application to HelmRelease
@@ -397,9 +389,11 @@ func (r *REST) List(ctx context.Context, options *metainternalversion.ListOption
}
// Explicitly set apiVersion and kind in unstructured object
appList := r.NewList().(*unstructured.Unstructured)
appList := &unstructured.UnstructuredList{}
appList.SetAPIVersion("apps.cozystack.io/v1alpha1")
appList.SetKind(r.kindName + "List")
appList.SetResourceVersion(hrList.GetResourceVersion())
appList.Object["items"] = items
appList.Items = items
klog.V(6).Infof("Successfully listed %d Application resources in namespace %s", len(items), namespace)
return appList, nil
@@ -447,16 +441,9 @@ func (r *REST) Update(ctx context.Context, name string, objInfo rest.UpdatedObje
}
// Assert the new object is of type Application
us, ok := newObj.(*unstructured.Unstructured)
app, ok := newObj.(*appsv1alpha1.Application)
if !ok {
errMsg := fmt.Sprintf("expected unstructured.Unstructured object, got %T", newObj)
klog.Errorf(errMsg)
return nil, false, fmt.Errorf(errMsg)
}
app := &appsv1alpha1.Application{}
if err := runtime.DefaultUnstructuredConverter.FromUnstructured(us.Object, app); err != nil {
errMsg := fmt.Sprintf("returned unstructured.Unstructured object was not an Application")
errMsg := fmt.Sprintf("expected Application object, got %T", newObj)
klog.Errorf(errMsg)
return nil, false, fmt.Errorf(errMsg)
}
@@ -529,12 +516,14 @@ func (r *REST) Update(ctx context.Context, name string, objInfo rest.UpdatedObje
klog.Errorf("Failed to convert Application to unstructured for resource %s: %v", convertedApp.GetName(), err)
return nil, false, fmt.Errorf("failed to convert Application to unstructured: %v", err)
}
obj := &unstructured.Unstructured{Object: unstructuredApp}
obj.SetGroupVersionKind(r.gvk)
// Explicitly set apiVersion and kind in unstructured object
unstructuredApp["apiVersion"] = "apps.cozystack.io/v1alpha1"
unstructuredApp["kind"] = r.kindName
klog.V(6).Infof("Returning patched Application object: %+v", unstructuredApp)
return obj, false, nil
return &unstructured.Unstructured{Object: unstructuredApp}, false, nil
}
// Delete removes an Application by deleting the corresponding HelmRelease
@@ -734,13 +723,11 @@ func (r *REST) Watch(ctx context.Context, options *metainternalversion.ListOptio
klog.Errorf("Failed to convert Application to unstructured: %v", err)
continue
}
obj := &unstructured.Unstructured{Object: unstructuredApp}
obj.SetGroupVersionKind(r.gvk)
// Create watch event with Application object
appEvent := watch.Event{
Type: event.Type,
Object: obj,
Object: &unstructured.Unstructured{Object: unstructuredApp},
}
// Send event to custom watcher
@@ -1073,34 +1060,6 @@ func (r *REST) ConvertToTable(ctx context.Context, object runtime.Object, tableO
table = r.buildTableFromApplications(apps)
table.ListMeta.ResourceVersion = obj.GetResourceVersion()
case *unstructured.Unstructured:
var apps []appsv1alpha1.Application
for {
var items interface{}
var ok bool
var objects []unstructured.Unstructured
if items, ok = obj.Object["items"]; !ok {
break
}
if objects, ok = items.([]unstructured.Unstructured); !ok {
break
}
apps = make([]appsv1alpha1.Application, 0, len(objects))
var a appsv1alpha1.Application
for i := range objects {
err := runtime.DefaultUnstructuredConverter.FromUnstructured(objects[i].Object, &a)
if err != nil {
klog.Errorf("Failed to convert Unstructured to Application: %v", err)
continue
}
apps = append(apps, a)
}
break
}
if apps != nil {
table = r.buildTableFromApplications(apps)
table.ListMeta.ResourceVersion = obj.GetResourceVersion()
break
}
var app appsv1alpha1.Application
err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), &app)
if err != nil {
@@ -1237,17 +1196,12 @@ func (r *REST) Destroy() {
// New creates a new instance of Application
func (r *REST) New() runtime.Object {
obj := &unstructured.Unstructured{}
obj.SetGroupVersionKind(r.gvk)
return obj
return &appsv1alpha1.Application{}
}
// NewList returns an empty list of Application objects
func (r *REST) NewList() runtime.Object {
obj := &unstructured.Unstructured{}
obj.SetGroupVersionKind(r.gvk.GroupVersion().WithKind(r.kindName + "List"))
obj.Object["items"] = make([]interface{}, 0)
return obj
return &appsv1alpha1.ApplicationList{}
}
// Kind returns the resource kind used for API discovery


@@ -25,24 +25,10 @@ sleep 5
cozypkg -n cozy-system -C packages/system/cozystack-resource-definition-crd apply cozystack-resource-definition-crd --plain
cozypkg -n cozy-system -C packages/system/cozystack-resource-definitions apply cozystack-resource-definitions --plain
cozypkg -n cozy-system -C packages/system/cozystack-api apply cozystack-api --plain
if kubectl get ds cozystack-api -n cozy-system >/dev/null 2>&1; then
echo "Waiting for cozystack-api daemonset"
kubectl rollout status ds/cozystack-api -n cozy-system --timeout=5m || exit 1
else
echo "Waiting for cozystack-api deployment"
kubectl rollout status deploy/cozystack-api -n cozy-system --timeout=5m || exit 1
fi
helm upgrade --install -n cozy-system cozystack-controller ./packages/system/cozystack-controller/ --take-ownership
echo "Waiting for cozystack-controller"
kubectl rollout status deploy/cozystack-controller -n cozy-system --timeout=5m || exit 1
helm upgrade --install -n cozy-system lineage-controller-webhook ./packages/system/lineage-controller-webhook/ --take-ownership
echo "Waiting for lineage-webhook"
kubectl rollout status ds/lineage-controller-webhook -n cozy-system --timeout=5m || exit 1
sleep 5
echo "Running lineage-webhook test"
kubectl delete ns cozy-lineage-webhook-test --ignore-not-found && kubectl create ns cozy-lineage-webhook-test
cleanup_test_ns() {
kubectl delete ns cozy-lineage-webhook-test --ignore-not-found
@@ -51,6 +37,9 @@ trap cleanup_test_ns ERR
timeout 60 sh -c 'until kubectl -n cozy-lineage-webhook-test create service clusterip lineage-webhook-test --clusterip="None" --dry-run=server; do sleep 1; done'
cleanup_test_ns
kubectl wait deployment/cozystack-api -n cozy-system --timeout=4m --for=condition=available || exit 1
kubectl wait deployment/cozystack-controller -n cozy-system --timeout=4m --for=condition=available || exit 1
timestamp=$(date --rfc-3339=ns || date)
kubectl get namespace -o custom-columns=NAME:.metadata.name --no-headers |
grep '^tenant-' |
@@ -61,7 +50,6 @@ kubectl get namespace -o custom-columns=NAME:.metadata.name --no-headers |
-n "$namespace" --all \
migration.cozystack.io="$timestamp" --overwrite || true)
done
# Stamp version
kubectl create configmap -n cozy-system cozystack-version \
--from-literal=version=21 --dry-run=client -o yaml | kubectl apply -f-