Mirror of https://github.com/outbackdingo/cozystack.git, synced 2026-01-28 18:18:41 +00:00

Compare commits: tests-w-re...victoria-m (33 commits)
Commits compared (33):

fa054f3ea1, 1b7a597f1c, aa84b1c054, 6e96dd0a33, 08cb7c0f28, ef30e69245, 847980f03d, 0ecb8585bc, 32aea4254b, e49918745e, 220c347cc5, a4ec46a941, 2c126786b3, 784f1454ba, 9d9226b575, 9ec5863a75, 50f3089f14, 1aadefef75, 5727110542, f2fffb03e4, ab5eae3fbc, 38cf5fd58c, cda554b58c, a73794d751, 81a412517c, 23a7281fbf, f32c6426a9, 91583a4e1a, 9af6ce25bc, e70dfdec31, 08c0eecbc5, 1db08d0b73, b2ed7525cd
.github/workflows/tags.yaml (vendored, 1 line added)

@@ -118,6 +118,7 @@ jobs:
           git config user.name "cozystack-bot"
           git config user.email "217169706+cozystack-bot@users.noreply.github.com"
           git remote set-url origin https://cozystack-bot:${GH_PAT}@github.com/${GITHUB_REPOSITORY}
+          git config --unset-all http.https://github.com/.extraheader || true
           git add .
           git commit -m "Prepare release ${GITHUB_REF#refs/tags/}" -s || echo "No changes to commit"
           git push origin HEAD || true
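For context on the hunk above: actions/checkout persists its workflow token as an `http.https://github.com/.extraheader` git config entry, and that header takes precedence over credentials embedded in the remote URL, so unsetting it lets the bot's PAT authenticate the push. A minimal sketch of the same flow outside a workflow (assumes GH_PAT and GITHUB_REPOSITORY are set in the environment; the release tag is a placeholder):

    git config user.name "cozystack-bot"
    git config user.email "217169706+cozystack-bot@users.noreply.github.com"
    git remote set-url origin "https://cozystack-bot:${GH_PAT}@github.com/${GITHUB_REPOSITORY}"
    # Drop the injected Authorization header so the PAT in the URL is used.
    git config --unset-all http.https://github.com/.extraheader || true
    git add .
    git commit -m "Prepare release v0.0.0-example" -s || echo "No changes to commit"
    git push origin HEAD || true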
@@ -12,10 +12,10 @@ repos:
         name: Run 'make generate' in all app directories
         entry: |
           /bin/bash -c '
-            for dir in ./packages/apps/*/; do
+            for dir in ./packages/apps/*/ ./packages/extra/*/ ./packages/system/cozystack-api/; do
              if [ -d "$dir" ]; then
                echo "Running make generate in $dir"
-                (cd "$dir" && make generate)
+                make generate -C "$dir"
              fi
            done
            git diff --color=always | cat
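`make generate -C "$dir"` asks make itself to change into the directory, which is equivalent to the subshell form for running the target but avoids forking a subshell per iteration. A self-contained sketch of the equivalence (the `demo` directory and `generate` target are hypothetical):

    mkdir -p demo
    printf 'generate:\n\t@echo "generated in $(CURDIR)"\n' > demo/Makefile
    (cd demo && make generate)   # subshell form: cd, run make, return
    make generate -C demo        # equivalent: make changes directory itself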
docs/changelogs/template.md (new file, 11 lines)

@@ -0,0 +1,11 @@
+## Major Features and Improvements
+
+## Security
+
+## Fixes
+
+## Dependencies
+
+## Documentation
+
+## Development, Testing, and CI/CD
docs/changelogs/v0.31.1.md (new file, 8 lines)

@@ -0,0 +1,8 @@
+## Fixes
+
+* [build] Update Talos Linux to v1.10.3 and fix assets. (@kvaps in https://github.com/cozystack/cozystack/pull/1006)
+* [ci] Fix uploading released artifacts to GitHub. (@kvaps in https://github.com/cozystack/cozystack/pull/1009)
+* [ci] Separate build and testing jobs. (@kvaps in https://github.com/cozystack/cozystack/pull/1005)
+* [docs] Write a full release post for v0.31.1. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/999)
+
+**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.31.0...v0.31.1
docs/changelogs/v0.31.2.md (new file, 12 lines)

@@ -0,0 +1,12 @@
+## Security
+
+* Resolve a security problem that allowed a tenant administrator to gain enhanced privileges outside the tenant. (@kvaps in https://github.com/cozystack/cozystack/pull/1062, backported in https://github.com/cozystack/cozystack/pull/1066)
+
+## Fixes
+
+* [platform] Fix dependencies in the `distro-full` bundle. (@klinch0 in https://github.com/cozystack/cozystack/pull/1056, backported in https://github.com/cozystack/cozystack/pull/1064)
+* [platform] Fix RBAC for annotating namespaces. (@kvaps in https://github.com/cozystack/cozystack/pull/1031, backported in https://github.com/cozystack/cozystack/pull/1037)
+* [platform] Reduce system resource consumption by using smaller resource presets for VerticalPodAutoscaler, SeaweedFS, and KubeOVN. (@klinch0 in https://github.com/cozystack/cozystack/pull/1054, backported in https://github.com/cozystack/cozystack/pull/1058)
+* [dashboard] Fix a number of issues in the Cozystack Dashboard. (@kvaps in https://github.com/cozystack/cozystack/pull/1042, backported in https://github.com/cozystack/cozystack/pull/1066)
+* [apps] Specify minimal working resource presets. (@kvaps in https://github.com/cozystack/cozystack/pull/1040, backported in https://github.com/cozystack/cozystack/pull/1041)
+* [apps] Update built-in documentation and configuration reference for the managed ClickHouse application. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1059, backported in https://github.com/cozystack/cozystack/pull/1065)
docs/changelogs/v0.32.1.md (new file, 38 lines)

@@ -0,0 +1,38 @@
+## Major Features and Improvements
+
+* [postgres] Introduce new functionality for backup and restore in PostgreSQL. (@klinch0 in https://github.com/cozystack/cozystack/pull/1086)
+* [apps] Refactor resources in managed applications. (@kvaps in https://github.com/cozystack/cozystack/pull/1106)
+* [system] Make VMAgent's `extraArgs` tunable. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1091)
+
+## Fixes
+
+* [postgres] Escape user and database names. (@kvaps in https://github.com/cozystack/cozystack/pull/1087)
+* [tenant] Fix monitoring agents' HelmReleases for tenant clusters. (@klinch0 in https://github.com/cozystack/cozystack/pull/1079)
+* [kubernetes] Wrap cert-manager CRDs in a conditional. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1076)
+* [kubernetes] Remove the `useCustomSecretForPatchContainerd` option and enable its behavior by default. (@kvaps in https://github.com/cozystack/cozystack/pull/1104)
+* [apps] Increase default resource presets for ClickHouse and Kafka from `nano` to `small`. Update OpenAPI specs and readmes. (@kvaps in https://github.com/cozystack/cozystack/pull/1103 and https://github.com/cozystack/cozystack/pull/1105)
+* [linstor] Add configurable DRBD network options for connection and timeout settings, replacing scripted logic for detecting devices that lost connection. (@kvaps in https://github.com/cozystack/cozystack/pull/1094)
+
+## Dependencies
+
+* Update cozy-proxy to v0.2.0. (@kvaps in https://github.com/cozystack/cozystack/pull/1081)
+* Update Kafka Operator to 0.45.1-rc1. (@kvaps in https://github.com/cozystack/cozystack/pull/1082 and https://github.com/cozystack/cozystack/pull/1102)
+* Update Flux Operator to 0.23.0. (@kingdonb in https://github.com/cozystack/cozystack/pull/1078)
+
+## Documentation
+
+* [docs] Release notes for v0.32.0 and two beta versions. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1043)
+
+## Development, Testing, and CI/CD
+
+* [tests] Add Kafka and Redis tests. (@gwynbleidd2106 in https://github.com/cozystack/cozystack/pull/1077)
+* [tests] Increase disk space for VMs in tests. (@kvaps in https://github.com/cozystack/cozystack/pull/1097)
+* [tests] Update to Kubernetes v1.33. (@kvaps in https://github.com/cozystack/cozystack/pull/1083)
+* [tests] Increase Postgres timeouts. (@kvaps in https://github.com/cozystack/cozystack/pull/1108)
+* [tests] Don't wait for the Postgres read-only (`ro`) service. (@kvaps in https://github.com/cozystack/cozystack/pull/1109)
+* [ci] Set up a systemd timer to tear down the sandbox. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1092)
+* [ci] Split the testing job into several jobs. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1075)
+* [ci] Run E2E tests as separate parallel jobs. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1093)
+* [ci] Refactor GitHub workflows. (@kvaps in https://github.com/cozystack/cozystack/pull/1107)
+
+**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.32.0...v0.32.1
@@ -2,18 +2,6 @@

 @test "Create DB ClickHouse" {
   name='test'
-  withResources='true'
-  if [ "$withResources" == 'true' ]; then
-    resources=$(cat <<EOF
-  resources:
-    resources:
-      cpu: 500m
-      memory: 768Mi
-EOF
-)
-  else
-    resources=' resources: {}'
-  fi
   kubectl apply -f- <<EOF
 apiVersion: apps.cozystack.io/v1alpha1
 kind: ClickHouse
@@ -39,13 +27,15 @@ spec:
   s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
   s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
   resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
-$resources
+  resources: {}
+  resourcesPreset: "nano"
 EOF
   sleep 5
-  kubectl -n tenant-test wait --timeout=40s hr clickhouse-$name --for=condition=ready
-  kubectl -n tenant-test wait --timeout=130s clickhouses $name --for=condition=ready
-  kubectl -n tenant-test wait --timeout=120s sts chi-clickhouse-$name-clickhouse-0-0 --for=jsonpath='{.status.replicas}'=1
-  timeout 210 sh -ec "until kubectl -n tenant-test wait svc chendpoint-clickhouse-$name --for=jsonpath='{.spec.ports[0].port}'=8123; do sleep 10; done"
   kubectl -n tenant-test delete clickhouse.apps.cozystack.io $name
+  kubectl -n tenant-test wait hr clickhouse-$name --timeout=20s --for=condition=ready
+  timeout 180 sh -ec "until kubectl -n tenant-test get svc chendpoint-clickhouse-$name -o jsonpath='{.spec.ports[*].port}' | grep -q '8123 9000'; do sleep 10; done"
+  kubectl -n tenant-test wait statefulset.apps/chi-clickhouse-$name-clickhouse-0-0 --timeout=120s --for=jsonpath='{.status.replicas}'=1
+  timeout 80 sh -ec "until kubectl -n tenant-test get endpoints chi-clickhouse-$name-clickhouse-0-0 -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
+  timeout 100 sh -ec "until kubectl -n tenant-test get svc chi-clickhouse-$name-clickhouse-0-0 -o jsonpath='{.spec.ports[*].port}' | grep -q '9000 8123 9009'; do sleep 10; done"
+  timeout 80 sh -ec "until kubectl -n tenant-test get sts chi-clickhouse-$name-clickhouse-0-1; do sleep 10; done"
+  kubectl -n tenant-test wait statefulset.apps/chi-clickhouse-$name-clickhouse-0-1 --timeout=140s --for=jsonpath='{.status.replicas}'=1
 }
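A note on the pattern introduced above: a plain `kubectl wait` fails immediately when the target object does not exist yet, so the rewritten tests poll inside a `timeout` guard, where a failing `kubectl get` simply causes another retry. The general shape of the idiom, with placeholder names:

    # Retry every 10s until the service reports the expected port; fail after 180s.
    timeout 180 sh -ec "until kubectl -n tenant-test get svc example-svc -o jsonpath='{.spec.ports[*].port}' | grep -q '8123'; do sleep 10; done"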
@@ -2,18 +2,6 @@

 @test "Create Kafka" {
   name='test'
-  withResources='true'
-  if [ "$withResources" == 'true' ]; then
-    resources=$(cat <<EOF
-  resources:
-    resources:
-      cpu: 500m
-      memory: 768Mi
-EOF
-)
-  else
-    resources='resources: {}'
-  fi
   kubectl apply -f- <<EOF
 apiVersion: apps.cozystack.io/v1alpha1
 kind: Kafka
@@ -26,13 +14,13 @@ spec:
     size: 10Gi
     replicas: 2
     storageClass: ""
-$resources
+    resources: {}
     resourcesPreset: "nano"
   zookeeper:
     size: 5Gi
     replicas: 2
     storageClass: ""
-$resources
+    resources:
     resourcesPreset: "nano"
   topics:
   - name: testResults
@@ -50,9 +38,14 @@ spec:
     replicas: 2
 EOF
   sleep 5
-  kubectl -n tenant-test wait --timeout=30s hr kafka-$name --for=condition=ready
-  kubectl -n tenant-test wait --timeout=1m kafkas $name --for=condition=ready
-  kubectl -n tenant-test wait --timeout=50s pvc data-kafka-$name-zookeeper-0 --for=jsonpath='{.status.phase}'=Bound
-  kubectl -n tenant-test wait --timeout=40s svc kafka-$name-zookeeper-client --for=jsonpath='{.spec.ports[0].port}'=2181
+  kubectl -n tenant-test wait hr kafka-$name --timeout=30s --for=condition=ready
+  kubectl wait kafkas -n tenant-test test --timeout=60s --for=condition=ready
+  timeout 60 sh -ec "until kubectl -n tenant-test get pvc data-kafka-$name-zookeeper-0; do sleep 10; done"
+  kubectl -n tenant-test wait pvc data-kafka-$name-zookeeper-0 --timeout=50s --for=jsonpath='{.status.phase}'=Bound
+  timeout 40 sh -ec "until kubectl -n tenant-test get svc kafka-$name-zookeeper-client -o jsonpath='{.spec.ports[0].port}' | grep -q '2181'; do sleep 10; done"
+  timeout 40 sh -ec "until kubectl -n tenant-test get svc kafka-$name-zookeeper-nodes -o jsonpath='{.spec.ports[*].port}' | grep -q '2181 2888 3888'; do sleep 10; done"
+  timeout 80 sh -ec "until kubectl -n tenant-test get endpoints kafka-$name-zookeeper-nodes -o jsonpath='{.subsets[*].addresses[0].ip}' | grep -q '[0-9]'; do sleep 10; done"
   kubectl -n tenant-test delete kafka.apps.cozystack.io $name
+  kubectl -n tenant-test delete pvc data-kafka-$name-zookeeper-0
+  kubectl -n tenant-test delete pvc data-kafka-$name-zookeeper-1
 }
@@ -1,17 +1,16 @@
 #!/usr/bin/env bats

 @test "Create a tenant Kubernetes control plane" {
-  name='test'
   kubectl apply -f - <<EOF
 apiVersion: apps.cozystack.io/v1alpha1
 kind: Kubernetes
 metadata:
-  name: $name
+  name: test
   namespace: tenant-test
 spec:
   addons:
     certManager:
-      enabled: true
+      enabled: false
       valuesOverride: {}
     cilium:
       valuesOverride: {}
@@ -25,12 +24,10 @@ spec:
       valuesOverride: {}
     ingressNginx:
       enabled: true
-      hosts:
-      - example.org
+      exposeMethod: Proxied
+      hosts: []
       valuesOverride: {}
     monitoringAgents:
-      enabled: true
+      enabled: false
       valuesOverride: {}
     verticalPodAutoscaler:
       valuesOverride: {}
@@ -64,39 +61,12 @@ spec:
   - ingress-nginx
   storageClass: replicated
 EOF
-  sleep 10
-  kubectl wait --timeout=20s namespace tenant-test --for=jsonpath='{.status.phase}'=Active
-  kubectl -n tenant-test wait --timeout=10s kamajicontrolplane kubernetes-$name --for=jsonpath='{.status.conditions[0].status}'=True
-  kubectl -n tenant-test wait --timeout=4m kamajicontrolplane kubernetes-$name --for=condition=TenantControlPlaneCreated
-  kubectl -n tenant-test wait --timeout=210s tcp kubernetes-$name --for=jsonpath='{.status.kubernetesResources.version.status}'=Ready
-  kubectl -n tenant-test wait --timeout=4m deploy kubernetes-$name kubernetes-$name-cluster-autoscaler kubernetes-$name-kccm kubernetes-$name-kcsi-controller --for=condition=available
-  kubectl -n tenant-test wait --timeout=1m machinedeployment kubernetes-$name-md0 --for=jsonpath='{.status.replicas}'=2
-  kubectl -n tenant-test wait --timeout=10m machinedeployment kubernetes-$name-md0 --for=jsonpath='{.status.v1beta2.readyReplicas}'=2
-  # ingress / load balancer
-  kubectl -n tenant-test wait --timeout=5m hr kubernetes-$name-monitoring-agents --for=condition=ready
-  kubectl -n tenant-test wait --timeout=5m hr kubernetes-$name-ingress-nginx --for=condition=ready
-  kubectl -n tenant-test get secret kubernetes-$name-admin-kubeconfig -o go-template='{{ printf "%s\n" (index .data "admin.conf" | base64decode) }}' > admin.conf
-  KUBECONFIG=admin.conf kubectl -n cozy-ingress-nginx wait --timeout=3m deploy ingress-nginx-defaultbackend --for=jsonpath='{.status.conditions[0].status}'=True
-  KUBECONFIG=admin.conf kubectl -n cozy-monitoring wait --timeout=3m deploy cozy-monitoring-agents-metrics-server --for=jsonpath='{.status.conditions[0].status}'=True
+  kubectl wait namespace tenant-test --timeout=20s --for=jsonpath='{.status.phase}'=Active
+  timeout 10 sh -ec 'until kubectl get kamajicontrolplane -n tenant-test kubernetes-test; do sleep 1; done'
+  kubectl wait --for=condition=TenantControlPlaneCreated kamajicontrolplane -n tenant-test kubernetes-test --timeout=4m
+  kubectl wait tcp -n tenant-test kubernetes-test --timeout=2m --for=jsonpath='{.status.kubernetesResources.version.status}'=Ready
+  kubectl wait deploy --timeout=4m --for=condition=available -n tenant-test kubernetes-test kubernetes-test-cluster-autoscaler kubernetes-test-kccm kubernetes-test-kcsi-controller
+  kubectl wait machinedeployment kubernetes-test-md0 -n tenant-test --timeout=1m --for=jsonpath='{.status.replicas}'=2
+  kubectl wait machinedeployment kubernetes-test-md0 -n tenant-test --timeout=10m --for=jsonpath='{.status.v1beta2.readyReplicas}'=2
+  kubectl -n tenant-test delete kuberneteses.apps.cozystack.io test
 }
-
-@test "Create a PVC in tenant Kubernetes" {
-  name='test'
-  KUBECONFIG=admin.conf kubectl apply -f - <<EOF
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
-  name: pvc-$name
-  namespace: cozy-monitoring
-spec:
-  accessModes:
-  - ReadWriteOnce
-  resources:
-    requests:
-      storage: 1Gi
-EOF
-  sleep 10
-  KUBECONFIG=admin.conf kubectl -n cozy-monitoring wait --timeout=20s pvc pvc-$name --for=jsonpath='{.status.phase}'=Bound
-  KUBECONFIG=admin.conf kubectl -n cozy-monitoring delete pvc pvc-$name
-  kubectl -n tenant-test delete kuberneteses.apps.cozystack.io $name
-}
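The removed steps above also show a generally useful idiom: decoding a single key of a Secret with a go-template to obtain a kubeconfig. The same pattern, taken from the removed lines:

    # Decode the "admin.conf" key of the kubeconfig Secret into a local file.
    kubectl -n tenant-test get secret kubernetes-test-admin-kubeconfig \
      -o go-template='{{ printf "%s\n" (index .data "admin.conf" | base64decode) }}' > admin.conf
    KUBECONFIG=admin.conf kubectl get nodes   # talk to the tenant cluster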
@@ -2,18 +2,6 @@

 @test "Create DB MySQL" {
   name='test'
-  withResources='true'
-  if [ "$withResources" == 'true' ]; then
-    resources=$(cat <<EOF
-  resources:
-    resources:
-      cpu: 3000m
-      memory: 3Gi
-EOF
-)
-  else
-    resources=' resources: {}'
-  fi
   kubectl apply -f- <<EOF
 apiVersion: apps.cozystack.io/v1alpha1
 kind: MySQL
@@ -43,15 +31,16 @@ spec:
   s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
   s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
   resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
-$resources
+  resources: {}
+  resourcesPreset: "nano"
 EOF
-  sleep 10
-  kubectl -n tenant-test wait --timeout=30s hr mysql-$name --for=condition=ready
-  kubectl -n tenant-test wait --timeout=130s mysqls $name --for=condition=ready
-  kubectl -n tenant-test wait --timeout=110s sts mysql-$name --for=jsonpath='{.status.replicas}'=2
-  sleep 60
-  kubectl -n tenant-test wait --timeout=60s deploy mysql-$name-metrics --for=jsonpath='{.status.replicas}'=1
-  kubectl -n tenant-test wait --timeout=100s svc mysql-$name --for=jsonpath='{.spec.ports[0].port}'=3306
+  sleep 5
+  kubectl -n tenant-test wait hr mysql-$name --timeout=30s --for=condition=ready
+  timeout 80 sh -ec "until kubectl -n tenant-test get svc mysql-$name -o jsonpath='{.spec.ports[0].port}' | grep -q '3306'; do sleep 10; done"
+  timeout 80 sh -ec "until kubectl -n tenant-test get endpoints mysql-$name -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
+  kubectl -n tenant-test wait statefulset.apps/mysql-$name --timeout=110s --for=jsonpath='{.status.replicas}'=2
+  timeout 80 sh -ec "until kubectl -n tenant-test get svc mysql-$name-metrics -o jsonpath='{.spec.ports[0].port}' | grep -q '9104'; do sleep 10; done"
+  timeout 40 sh -ec "until kubectl -n tenant-test get endpoints mysql-$name-metrics -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
+  kubectl -n tenant-test wait deployment.apps/mysql-$name-metrics --timeout=90s --for=jsonpath='{.status.replicas}'=1
   kubectl -n tenant-test delete mysqls.apps.cozystack.io $name
 }
@@ -2,18 +2,6 @@

 @test "Create DB PostgreSQL" {
   name='test'
-  withResources='true'
-  if [ "$withResources" == 'true' ]; then
-    resources=$(cat <<EOF
-  resources:
-    resources:
-      cpu: 500m
-      memory: 768Mi
-EOF
-)
-  else
-    resources=' resources: {}'
-  fi
   kubectl apply -f - <<EOF
 apiVersion: apps.cozystack.io/v1alpha1
 kind: Postgres
@@ -48,14 +36,19 @@ spec:
   s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
   s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
   resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
-$resources
+  resources: {}
+  resourcesPreset: "nano"
 EOF
   sleep 5
-  kubectl -n tenant-test wait --timeout=200s hr postgres-$name --for=condition=ready
-  kubectl -n tenant-test wait --timeout=130s postgreses $name --for=condition=ready
-  kubectl -n tenant-test wait --timeout=50s job.batch postgres-$name-init-job --for=condition=Complete
-  kubectl -n tenant-test wait --timeout=40s svc postgres-$name-r --for=jsonpath='{.spec.ports[0].port}'=5432
+  kubectl -n tenant-test wait hr postgres-$name --timeout=100s --for=condition=ready
+  kubectl -n tenant-test wait job.batch postgres-$name-init-job --timeout=50s --for=condition=Complete
+  timeout 40 sh -ec "until kubectl -n tenant-test get svc postgres-$name-r -o jsonpath='{.spec.ports[0].port}' | grep -q '5432'; do sleep 10; done"
+  timeout 40 sh -ec "until kubectl -n tenant-test get svc postgres-$name-ro -o jsonpath='{.spec.ports[0].port}' | grep -q '5432'; do sleep 10; done"
+  timeout 40 sh -ec "until kubectl -n tenant-test get svc postgres-$name-rw -o jsonpath='{.spec.ports[0].port}' | grep -q '5432'; do sleep 10; done"
+  timeout 120 sh -ec "until kubectl -n tenant-test get endpoints postgres-$name-r -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
+  # for some reason it takes longer for the read-only endpoint to be ready
+  #timeout 120 sh -ec "until kubectl -n tenant-test get endpoints postgres-$name-ro -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
+  timeout 120 sh -ec "until kubectl -n tenant-test get endpoints postgres-$name-rw -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
   kubectl -n tenant-test delete postgreses.apps.cozystack.io $name
+  kubectl -n tenant-test delete job.batch/postgres-$name-init-job
 }
@@ -2,18 +2,6 @@

 @test "Create Redis" {
   name='test'
-  withResources='true'
-  if [ "$withResources" == 'true' ]; then
-    resources=$(cat <<EOF
-  resources:
-    resources:
-      cpu: 500m
-      memory: 768Mi
-EOF
-)
-  else
-    resources='resources: {}'
-  fi
   kubectl apply -f- <<EOF
 apiVersion: apps.cozystack.io/v1alpha1
 kind: Redis
@@ -26,15 +14,13 @@ spec:
   replicas: 2
   storageClass: ""
   authEnabled: true
-$resources
+  resources: {}
+  resourcesPreset: "nano"
 EOF
   sleep 5
-  kubectl -n tenant-test wait --timeout=20s hr redis-$name --for=condition=ready
-  kubectl -n tenant-test wait --timeout=130s redis.apps.cozystack.io $name --for=condition=ready
-  kubectl -n tenant-test wait --timeout=50s pvc redisfailover-persistent-data-rfr-redis-$name-0 --for=jsonpath='{.status.phase}'=Bound
-  kubectl -n tenant-test wait --timeout=90s sts rfr-redis-$name --for=jsonpath='{.status.replicas}'=2
-  sleep 45
-  kubectl -n tenant-test wait --timeout=45s deploy rfs-redis-$name --for=condition=available
+  kubectl -n tenant-test wait hr redis-$name --timeout=20s --for=condition=ready
+  kubectl -n tenant-test wait pvc redisfailover-persistent-data-rfr-redis-$name-0 --timeout=50s --for=jsonpath='{.status.phase}'=Bound
+  kubectl -n tenant-test wait deploy rfs-redis-$name --timeout=90s --for=condition=available
+  kubectl -n tenant-test wait sts rfr-redis-$name --timeout=90s --for=jsonpath='{.status.replicas}'=2
   kubectl -n tenant-test delete redis.apps.cozystack.io $name
 }
@@ -2,14 +2,6 @@

 @test "Create a Virtual Machine" {
   name='test'
-  withResources='true'
-  if [ "$withResources" == 'true' ]; then
-    cores="1000m"
-    memory="1Gi"
-  else
-    cores="2000m"
-    memory="2Gi"
-  fi
   kubectl apply -f - <<EOF
 apiVersion: apps.cozystack.io/v1alpha1
 kind: VirtualMachine
@@ -17,12 +9,6 @@ metadata:
   name: $name
   namespace: tenant-test
 spec:
-  domain:
-    cpu:
-      cores: "$cores"
-    resources:
-      requests:
-        memory: "$memory"
   external: false
   externalMethod: PortList
   externalPorts:
@@ -34,6 +20,9 @@ spec:
     storage: 5Gi
     storageClass: replicated
   gpus: []
+  resources:
+    cpu: ""
+    memory: ""
   sshKeys:
   - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF test@test
@@ -48,12 +37,11 @@ spec:
   - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF test@test
   cloudInitSeed: ""
 EOF
-  sleep 10
-  kubectl -n tenant-test wait --timeout=10s hr virtual-machine-$name --for=condition=ready
-  kubectl -n tenant-test wait --timeout=130s virtualmachines $name --for=condition=ready
-  kubectl -n tenant-test wait --timeout=130s pvc virtual-machine-$name --for=jsonpath='{.status.phase}'=Bound
-  kubectl -n tenant-test wait --timeout=150s dv virtual-machine-$name --for=condition=ready
-  kubectl -n tenant-test wait --timeout=100s vm virtual-machine-$name --for=condition=ready
-  kubectl -n tenant-test wait --timeout=150s vmi virtual-machine-$name --for=jsonpath='{status.phase}'=Running
+  sleep 5
+  kubectl -n tenant-test wait hr virtual-machine-$name --timeout=10s --for=condition=ready
+  kubectl -n tenant-test wait dv virtual-machine-$name --timeout=150s --for=condition=ready
+  kubectl -n tenant-test wait pvc virtual-machine-$name --timeout=100s --for=jsonpath='{.status.phase}'=Bound
+  kubectl -n tenant-test wait vm virtual-machine-$name --timeout=100s --for=condition=ready
+  timeout 120 sh -ec "until kubectl -n tenant-test get vmi virtual-machine-$name -o jsonpath='{.status.interfaces[0].ipAddress}' | grep -q '[0-9]'; do sleep 10; done"
   kubectl -n tenant-test delete virtualmachines.apps.cozystack.io $name
 }
@@ -17,37 +17,21 @@ spec:
   storageClass: replicated
 EOF
   sleep 5
-  kubectl -n tenant-test wait --timeout=5s hr vm-disk-$name --for=condition=ready
-  kubectl -n tenant-test wait --timeout=130s vmdisks $name --for=condition=ready
-  kubectl -n tenant-test wait --timeout=130s pvc vm-disk-$name --for=jsonpath='{.status.phase}'=Bound
-  kubectl -n tenant-test wait --timeout=150s dv vm-disk-$name --for=condition=ready
+  kubectl -n tenant-test wait hr vm-disk-$name --timeout=5s --for=condition=ready
+  kubectl -n tenant-test wait dv vm-disk-$name --timeout=150s --for=condition=ready
+  kubectl -n tenant-test wait pvc vm-disk-$name --timeout=100s --for=jsonpath='{.status.phase}'=Bound
 }

 @test "Create a VM Instance" {
   diskName='test'
   name='test'
-  withResources='true'
-  if [ "$withResources" == 'true' ]; then
-    cores="1000m"
-    memory="1Gi"
-  else
-    cores="2000m"
-    memory="2Gi"
-  fi
-  kubectl -n tenant-test get vminstances.apps.cozystack.io $name ||
-  kubectl create -f - <<EOF
+  kubectl apply -f - <<EOF
 apiVersion: apps.cozystack.io/v1alpha1
 kind: VMInstance
 metadata:
   name: $name
   namespace: tenant-test
 spec:
-  domain:
-    cpu:
-      cores: "$cores"
-    resources:
-      requests:
-        memory: "$memory"
   external: false
   externalMethod: PortList
   externalPorts:
@@ -58,6 +42,9 @@ spec:
   disks:
   - name: $diskName
   gpus: []
+  resources:
+    cpu: ""
+    memory: ""
   sshKeys:
   - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF test@test
@@ -73,10 +60,9 @@ spec:
   cloudInitSeed: ""
 EOF
   sleep 5
-  kubectl -n tenant-test wait --timeout=5s hr vm-instance-$name --for=condition=ready
-  kubectl -n tenant-test wait --timeout=130s vminstances $name --for=condition=ready
-  kubectl -n tenant-test wait --timeout=20s vm vm-instance-$name --for=condition=ready
-  kubectl -n tenant-test wait --timeout=40s vmi vm-instance-$name --for=jsonpath='{status.phase}'=Running
+  timeout 20 sh -ec "until kubectl -n tenant-test get vmi vm-instance-$name -o jsonpath='{.status.interfaces[0].ipAddress}' | grep -q '[0-9]'; do sleep 5; done"
+  kubectl -n tenant-test wait hr vm-instance-$name --timeout=5s --for=condition=ready
+  kubectl -n tenant-test wait vm vm-instance-$name --timeout=20s --for=condition=ready
   kubectl -n tenant-test delete vminstances.apps.cozystack.io $name
   kubectl -n tenant-test delete vmdisks.apps.cozystack.io $diskName
 }
@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/nginx-cache:0.6.0@sha256:b7633717cd7449c0042ae92d8ca9b36e4d69566561f5c7d44e21058e7d05c6d5
+ghcr.io/cozystack/cozystack/nginx-cache:0.6.0@sha256:50ac1581e3100bd6c477a71161cb455a341ffaf9e5e2f6086802e4e25271e8af
@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/cluster-autoscaler:0.25.1@sha256:3a8170433e1632e5cc2b6d9db34d0605e8e6c63c158282c38450415e700e932e
+ghcr.io/cozystack/cozystack/cluster-autoscaler:0.25.2@sha256:3a8170433e1632e5cc2b6d9db34d0605e8e6c63c158282c38450415e700e932e
@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/kubevirt-cloud-provider:0.25.1@sha256:412ed2b3c77249bd1b973e6dc9c87976d31863717fb66ba74ccda573af737eb1
+ghcr.io/cozystack/cozystack/kubevirt-cloud-provider:0.25.2@sha256:e522960064290747a67502d4e8927c591bdb290bad1f0bae88a02758ebfd380f
@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.25.1@sha256:445c2727b04ac68595b43c988ff17b3d69a7b22b0644fde3b10c65b47a7bc036
+ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.25.2@sha256:761e7235ff9cb7f6f223f00954943e6a5af32ed6624ee592a8610122f96febb0
@@ -76,7 +76,7 @@ input:
   initramfs:
     path: /usr/install/amd64/initramfs.xz
   baseInstaller:
-    imageRef: "ghcr.io/siderolabs/installer:v1.10.3"
+    imageRef: "ghcr.io/siderolabs/installer:${TALOS_VERSION}"
   systemExtensions:
     - imageRef: ghcr.io/siderolabs/amd-ucode:${AMD_UCODE_VERSION}
     - imageRef: ghcr.io/siderolabs/bnx2-bnx2x:${BNX2_BNX2X_VERSION}
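With this change the installer reference is parameterized the same way as the system-extension versions. One plausible way such a profile template is rendered, assuming the `${...}` placeholders are substituted with envsubst (the template file name is illustrative; envsubst ships with GNU gettext):

    export TALOS_VERSION=v1.10.5
    export AMD_UCODE_VERSION=20250708
    export BNX2_BNX2X_VERSION=20250708
    # Replace ${...} placeholders to produce a concrete imager profile.
    envsubst < profile.yaml.tpl > profile.yaml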
@@ -3,22 +3,22 @@
 arch: amd64
 platform: metal
 secureboot: false
-version: v1.10.3
+version: v1.10.5
 input:
   kernel:
     path: /usr/install/amd64/vmlinuz
   initramfs:
     path: /usr/install/amd64/initramfs.xz
   baseInstaller:
-    imageRef: "ghcr.io/siderolabs/installer:v1.10.3"
+    imageRef: "ghcr.io/siderolabs/installer:v1.10.5"
   systemExtensions:
-    - imageRef: ghcr.io/siderolabs/amd-ucode:20250509
-    - imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250509
-    - imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250509
-    - imageRef: ghcr.io/siderolabs/intel-ucode:20250211
-    - imageRef: ghcr.io/siderolabs/qlogic-firmware:20250509
-    - imageRef: ghcr.io/siderolabs/drbd:9.2.13-v1.10.3
-    - imageRef: ghcr.io/siderolabs/zfs:2.3.2-v1.10.3
+    - imageRef: ghcr.io/siderolabs/amd-ucode:20250708
+    - imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250708
+    - imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250708
+    - imageRef: ghcr.io/siderolabs/intel-ucode:20250512
+    - imageRef: ghcr.io/siderolabs/qlogic-firmware:20250708
+    - imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.10.5
+    - imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.10.5
 output:
   kind: initramfs
   imageOptions: {}
@@ -3,22 +3,22 @@
 arch: amd64
 platform: metal
 secureboot: false
-version: v1.10.3
+version: v1.10.5
 input:
   kernel:
     path: /usr/install/amd64/vmlinuz
   initramfs:
     path: /usr/install/amd64/initramfs.xz
   baseInstaller:
-    imageRef: "ghcr.io/siderolabs/installer:v1.10.3"
+    imageRef: "ghcr.io/siderolabs/installer:v1.10.5"
   systemExtensions:
-    - imageRef: ghcr.io/siderolabs/amd-ucode:20250509
-    - imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250509
-    - imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250509
-    - imageRef: ghcr.io/siderolabs/intel-ucode:20250211
-    - imageRef: ghcr.io/siderolabs/qlogic-firmware:20250509
-    - imageRef: ghcr.io/siderolabs/drbd:9.2.13-v1.10.3
-    - imageRef: ghcr.io/siderolabs/zfs:2.3.2-v1.10.3
+    - imageRef: ghcr.io/siderolabs/amd-ucode:20250708
+    - imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250708
+    - imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250708
+    - imageRef: ghcr.io/siderolabs/intel-ucode:20250512
+    - imageRef: ghcr.io/siderolabs/qlogic-firmware:20250708
+    - imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.10.5
+    - imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.10.5
 output:
   kind: installer
   imageOptions: {}
@@ -3,22 +3,22 @@
 arch: amd64
 platform: metal
 secureboot: false
-version: v1.10.3
+version: v1.10.5
 input:
   kernel:
     path: /usr/install/amd64/vmlinuz
   initramfs:
     path: /usr/install/amd64/initramfs.xz
   baseInstaller:
-    imageRef: "ghcr.io/siderolabs/installer:v1.10.3"
+    imageRef: "ghcr.io/siderolabs/installer:v1.10.5"
   systemExtensions:
-    - imageRef: ghcr.io/siderolabs/amd-ucode:20250509
-    - imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250509
-    - imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250509
-    - imageRef: ghcr.io/siderolabs/intel-ucode:20250211
-    - imageRef: ghcr.io/siderolabs/qlogic-firmware:20250509
-    - imageRef: ghcr.io/siderolabs/drbd:9.2.13-v1.10.3
-    - imageRef: ghcr.io/siderolabs/zfs:2.3.2-v1.10.3
+    - imageRef: ghcr.io/siderolabs/amd-ucode:20250708
+    - imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250708
+    - imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250708
+    - imageRef: ghcr.io/siderolabs/intel-ucode:20250512
+    - imageRef: ghcr.io/siderolabs/qlogic-firmware:20250708
+    - imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.10.5
+    - imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.10.5
 output:
   kind: iso
   imageOptions: {}
@@ -3,22 +3,22 @@
 arch: amd64
 platform: metal
 secureboot: false
-version: v1.10.3
+version: v1.10.5
 input:
   kernel:
     path: /usr/install/amd64/vmlinuz
   initramfs:
     path: /usr/install/amd64/initramfs.xz
   baseInstaller:
-    imageRef: "ghcr.io/siderolabs/installer:v1.10.3"
+    imageRef: "ghcr.io/siderolabs/installer:v1.10.5"
   systemExtensions:
-    - imageRef: ghcr.io/siderolabs/amd-ucode:20250509
-    - imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250509
-    - imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250509
-    - imageRef: ghcr.io/siderolabs/intel-ucode:20250211
-    - imageRef: ghcr.io/siderolabs/qlogic-firmware:20250509
-    - imageRef: ghcr.io/siderolabs/drbd:9.2.13-v1.10.3
-    - imageRef: ghcr.io/siderolabs/zfs:2.3.2-v1.10.3
+    - imageRef: ghcr.io/siderolabs/amd-ucode:20250708
+    - imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250708
+    - imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250708
+    - imageRef: ghcr.io/siderolabs/intel-ucode:20250512
+    - imageRef: ghcr.io/siderolabs/qlogic-firmware:20250708
+    - imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.10.5
+    - imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.10.5
 output:
   kind: kernel
   imageOptions: {}
@@ -3,22 +3,22 @@
 arch: amd64
 platform: metal
 secureboot: false
-version: v1.10.3
+version: v1.10.5
 input:
   kernel:
     path: /usr/install/amd64/vmlinuz
   initramfs:
     path: /usr/install/amd64/initramfs.xz
   baseInstaller:
-    imageRef: "ghcr.io/siderolabs/installer:v1.10.3"
+    imageRef: "ghcr.io/siderolabs/installer:v1.10.5"
   systemExtensions:
-    - imageRef: ghcr.io/siderolabs/amd-ucode:20250509
-    - imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250509
-    - imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250509
-    - imageRef: ghcr.io/siderolabs/intel-ucode:20250211
-    - imageRef: ghcr.io/siderolabs/qlogic-firmware:20250509
-    - imageRef: ghcr.io/siderolabs/drbd:9.2.13-v1.10.3
-    - imageRef: ghcr.io/siderolabs/zfs:2.3.2-v1.10.3
+    - imageRef: ghcr.io/siderolabs/amd-ucode:20250708
+    - imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250708
+    - imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250708
+    - imageRef: ghcr.io/siderolabs/intel-ucode:20250512
+    - imageRef: ghcr.io/siderolabs/qlogic-firmware:20250708
+    - imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.10.5
+    - imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.10.5
 output:
   kind: image
   imageOptions: { diskSize: 1306525696, diskFormat: raw }
@@ -3,22 +3,22 @@
 arch: amd64
 platform: nocloud
 secureboot: false
-version: v1.10.3
+version: v1.10.5
 input:
   kernel:
     path: /usr/install/amd64/vmlinuz
   initramfs:
     path: /usr/install/amd64/initramfs.xz
   baseInstaller:
-    imageRef: "ghcr.io/siderolabs/installer:v1.10.3"
+    imageRef: "ghcr.io/siderolabs/installer:v1.10.5"
   systemExtensions:
-    - imageRef: ghcr.io/siderolabs/amd-ucode:20250509
-    - imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250509
-    - imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250509
-    - imageRef: ghcr.io/siderolabs/intel-ucode:20250211
-    - imageRef: ghcr.io/siderolabs/qlogic-firmware:20250509
-    - imageRef: ghcr.io/siderolabs/drbd:9.2.13-v1.10.3
-    - imageRef: ghcr.io/siderolabs/zfs:2.3.2-v1.10.3
+    - imageRef: ghcr.io/siderolabs/amd-ucode:20250708
+    - imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250708
+    - imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250708
+    - imageRef: ghcr.io/siderolabs/intel-ucode:20250512
+    - imageRef: ghcr.io/siderolabs/qlogic-firmware:20250708
+    - imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.10.5
+    - imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.10.5
 output:
   kind: image
   imageOptions: { diskSize: 1306525696, diskFormat: raw }
@@ -1,2 +1,2 @@
 cozystack:
-  image: ghcr.io/cozystack/cozystack/installer:v0.33.1@sha256:03a0002be9cf5926643c295bbf05c3e250401b0f0595b9fcd147d53534f368f5
+  image: ghcr.io/cozystack/cozystack/installer:v0.34.0-beta.1@sha256:6f29c93e52d686ae6144d64bbcd92c138cbd1b432b06a74273c5dc35b11fe048
@@ -1,2 +1,2 @@
 e2e:
-  image: ghcr.io/cozystack/cozystack/e2e-sandbox:v0.33.1@sha256:eed183a4104b1c142f6c4a358338749efe73baefddd53d7fe4c7149ecb892ce1
+  image: ghcr.io/cozystack/cozystack/e2e-sandbox:v0.34.0-beta.1@sha256:f0a7a45218122b57022e51d41c0e6b18d31621c8ec504651d2347f47e5e5f256
@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/matchbox:v0.33.1@sha256:ca3638c620215ace26ace3f7e8b27391847ab2158b5a67f070f43dcbea071532
+ghcr.io/cozystack/cozystack/matchbox:v0.34.0-beta.1@sha256:a0bd0076e0bc866858d3f08adca5944fb75004ad0ead2ace369d1c155d780383
@@ -7,7 +7,6 @@
 | Name | Description | Value |
 | ---------------- | ----------------------------------------------------------------- | ------- |
 | `replicas` | Number of ingress-nginx replicas | `2` |
-| `externalIPs` | List of externalIPs for service. | `[]` |
 | `whitelist` | List of client networks | `[]` |
 | `clouflareProxy` | Restoring original visitor IPs when Cloudflare proxied is enabled | `false` |
@@ -7,14 +7,6 @@
       "description": "Number of ingress-nginx replicas",
       "default": 2
     },
-    "externalIPs": {
-      "type": "array",
-      "description": "List of externalIPs for service.",
-      "default": "[]",
-      "items": {
-        "type": "string"
-      }
-    },
     "whitelist": {
       "type": "array",
       "description": "List of client networks",
@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/s3manager:v0.5.0@sha256:b748d9add5fc4080b143d8690ca1ad851d911948ac8eb296dd9005d53d153c05
+ghcr.io/cozystack/cozystack/s3manager:v0.5.0@sha256:45e02729edbee171519068b23cd3516009315769b36f59465c420a618320e363
@@ -79,7 +79,7 @@ annotations:
       Pod IP Pool\n description: |\n CiliumPodIPPool defines an IP pool that can
       be used for pooled IPAM (i.e. the multi-pool IPAM mode).\n"
 apiVersion: v2
-appVersion: 1.17.4
+appVersion: 1.17.5
 description: eBPF-based Networking, Security, and Observability
 home: https://cilium.io/
 icon: https://cdn.jsdelivr.net/gh/cilium/cilium@main/Documentation/images/logo-solo.svg
@@ -95,4 +95,4 @@ kubeVersion: '>= 1.21.0-0'
 name: cilium
 sources:
   - https://github.com/cilium/cilium
-version: 1.17.4
+version: 1.17.5
@@ -1,6 +1,6 @@
 # cilium

- 
+ 

 Cilium is open source software for providing and transparently securing
 network connectivity and loadbalancing between application workloads such as
@@ -85,7 +85,7 @@ contributors across the globe, there is almost always someone available to help.
 | authentication.mutual.spire.install.agent.tolerations | list | `[{"effect":"NoSchedule","key":"node.kubernetes.io/not-ready"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"},{"effect":"NoSchedule","key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true"},{"key":"CriticalAddonsOnly","operator":"Exists"}]` | SPIRE agent tolerations configuration By default it follows the same tolerations as the agent itself to allow the Cilium agent on this node to connect to SPIRE. ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
 | authentication.mutual.spire.install.enabled | bool | `true` | Enable SPIRE installation. This will only take effect only if authentication.mutual.spire.enabled is true |
 | authentication.mutual.spire.install.existingNamespace | bool | `false` | SPIRE namespace already exists. Set to true if Helm should not create, manage, and import the SPIRE namespace. |
-| authentication.mutual.spire.install.initImage | object | `{"digest":"sha256:37f7b378a29ceb4c551b1b5582e27747b855bbfaa73fa11914fe0df028dc581f","override":null,"pullPolicy":"IfNotPresent","repository":"docker.io/library/busybox","tag":"1.37.0","useDigest":true}` | init container image of SPIRE agent and server |
+| authentication.mutual.spire.install.initImage | object | `{"digest":"sha256:f85340bf132ae937d2c2a763b8335c9bab35d6e8293f70f606b9c6178d84f42b","override":null,"pullPolicy":"IfNotPresent","repository":"docker.io/library/busybox","tag":"1.37.0","useDigest":true}` | init container image of SPIRE agent and server |
 | authentication.mutual.spire.install.namespace | string | `"cilium-spire"` | SPIRE namespace to install into |
 | authentication.mutual.spire.install.server.affinity | object | `{}` | SPIRE server affinity configuration |
 | authentication.mutual.spire.install.server.annotations | object | `{}` | SPIRE server annotations |
@@ -197,7 +197,7 @@ contributors across the globe, there is almost always someone available to help.
 | clustermesh.apiserver.extraVolumeMounts | list | `[]` | Additional clustermesh-apiserver volumeMounts. |
 | clustermesh.apiserver.extraVolumes | list | `[]` | Additional clustermesh-apiserver volumes. |
 | clustermesh.apiserver.healthPort | int | `9880` | TCP port for the clustermesh-apiserver health API. |
-| clustermesh.apiserver.image | object | `{"digest":"sha256:0b72f3046cf36ff9b113d53cc61185e893edb5fe728a2c9e561c1083f806453d","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/clustermesh-apiserver","tag":"v1.17.4","useDigest":true}` | Clustermesh API server image. |
+| clustermesh.apiserver.image | object | `{"digest":"sha256:78dc40b9cb8d7b1ad21a76ff3e11541809acda2ac4ef94150cc832100edc247d","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/clustermesh-apiserver","tag":"v1.17.5","useDigest":true}` | Clustermesh API server image. |
 | clustermesh.apiserver.kvstoremesh.enabled | bool | `true` | Enable KVStoreMesh. KVStoreMesh caches the information retrieved from the remote clusters in the local etcd instance. |
 | clustermesh.apiserver.kvstoremesh.extraArgs | list | `[]` | Additional KVStoreMesh arguments. |
 | clustermesh.apiserver.kvstoremesh.extraEnv | list | `[]` | Additional KVStoreMesh environment variables. |
@@ -243,6 +243,7 @@ contributors across the globe, there is almost always someone available to help.
 | clustermesh.apiserver.service.enableSessionAffinity | string | `"HAOnly"` | Defines when to enable session affinity. Each replica in a clustermesh-apiserver deployment runs its own discrete etcd cluster. Remote clients connect to one of the replicas through a shared Kubernetes Service. A client reconnecting to a different backend will require a full resync to ensure data integrity. Session affinity can reduce the likelihood of this happening, but may not be supported by all cloud providers. Possible values: - "HAOnly" (default) Only enable session affinity for deployments with more than 1 replica. - "Always" Always enable session affinity. - "Never" Never enable session affinity. Useful in environments where session affinity is not supported, but may lead to slightly degraded performance due to more frequent reconnections. |
 | clustermesh.apiserver.service.externalTrafficPolicy | string | `"Cluster"` | The externalTrafficPolicy of service used for apiserver access. |
 | clustermesh.apiserver.service.internalTrafficPolicy | string | `"Cluster"` | The internalTrafficPolicy of service used for apiserver access. |
+| clustermesh.apiserver.service.labels | object | `{}` | Labels for the clustermesh-apiserver service. |
 | clustermesh.apiserver.service.loadBalancerClass | string | `nil` | Configure a loadBalancerClass. Allows to configure the loadBalancerClass on the clustermesh-apiserver LB service in case the Service type is set to LoadBalancer (requires Kubernetes 1.24+). |
 | clustermesh.apiserver.service.loadBalancerIP | string | `nil` | Configure a specific loadBalancerIP. Allows to configure a specific loadBalancerIP on the clustermesh-apiserver LB service in case the Service type is set to LoadBalancer. |
 | clustermesh.apiserver.service.loadBalancerSourceRanges | list | `[]` | Configure loadBalancerSourceRanges. Allows to configure the source IP ranges allowed to access the clustermesh-apiserver LB service in case the Service type is set to LoadBalancer. |
@@ -377,7 +378,7 @@ contributors across the globe, there is almost always someone available to help.
 | envoy.healthPort | int | `9878` | TCP port for the health API. |
 | envoy.httpRetryCount | int | `3` | Maximum number of retries for each HTTP request |
 | envoy.idleTimeoutDurationSeconds | int | `60` | Set Envoy upstream HTTP idle connection timeout seconds. Does not apply to connections with pending requests. Default 60s |
-| envoy.image | object | `{"digest":"sha256:a04218c6879007d60d96339a441c448565b6f86650358652da27582e0efbf182","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium-envoy","tag":"v1.32.6-1746661844-0f602c28cb2aa57b29078195049fb257d5b5246c","useDigest":true}` | Envoy container image. |
+| envoy.image | object | `{"digest":"sha256:9f69e290a7ea3d4edf9192acd81694089af048ae0d8a67fb63bd62dc1d72203e","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium-envoy","tag":"v1.32.6-1749271279-0864395884b263913eac200ee2048fd985f8e626","useDigest":true}` | Envoy container image. |
 | envoy.initialFetchTimeoutSeconds | int | `30` | Time in seconds after which the initial fetch on an xDS stream is considered timed out |
 | envoy.livenessProbe.failureThreshold | int | `10` | failure threshold of liveness probe |
 | envoy.livenessProbe.periodSeconds | int | `30` | interval between checks of the liveness probe |
@@ -518,7 +519,7 @@ contributors across the globe, there is almost always someone available to help.
 | hubble.relay.extraVolumes | list | `[]` | Additional hubble-relay volumes. |
 | hubble.relay.gops.enabled | bool | `true` | Enable gops for hubble-relay |
 | hubble.relay.gops.port | int | `9893` | Configure gops listen port for hubble-relay |
-| hubble.relay.image | object | `{"digest":"sha256:c16de12a64b8b56de62b15c1652d036253b40cd7fa643d7e1a404dc71dc66441","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-relay","tag":"v1.17.4","useDigest":true}` | Hubble-relay container image. |
+| hubble.relay.image | object | `{"digest":"sha256:fbb8a6afa8718200fca9381ad274ed695792dbadd2417b0e99c36210ae4964ff","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-relay","tag":"v1.17.5","useDigest":true}` | Hubble-relay container image. |
 | hubble.relay.listenHost | string | `""` | Host to listen to. Specify an empty string to bind to all the interfaces. |
 | hubble.relay.listenPort | string | `"4245"` | Port to listen to. |
 | hubble.relay.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
@@ -625,7 +626,7 @@ contributors across the globe, there is almost always someone available to help.
 | hubble.ui.updateStrategy | object | `{"rollingUpdate":{"maxUnavailable":1},"type":"RollingUpdate"}` | hubble-ui update strategy. |
 | identityAllocationMode | string | `"crd"` | Method to use for identity allocation (`crd`, `kvstore` or `doublewrite-readkvstore` / `doublewrite-readcrd` for migrating between identity backends). |
 | identityChangeGracePeriod | string | `"5s"` | Time to wait before using new identity on endpoint identity change. |
-| image | object | `{"digest":"sha256:24a73fe795351cf3279ac8e84918633000b52a9654ff73a6b0d7223bcff4a67a","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.17.4","useDigest":true}` | Agent container image. |
+| image | object | `{"digest":"sha256:baf8541723ee0b72d6c489c741c81a6fdc5228940d66cb76ef5ea2ce3c639ea6","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.17.5","useDigest":true}` | Agent container image. |
 | imagePullSecrets | list | `[]` | Configure image pull secrets for pulling container images |
 | ingressController.default | bool | `false` | Set cilium ingress controller to be the default ingress controller This will let cilium ingress controller route entries without ingress class set |
 | ingressController.defaultSecretName | string | `nil` | Default secret name for ingresses without .spec.tls[].secretName set. |
@@ -763,7 +764,7 @@ contributors across the globe, there is almost always someone available to help.
 | operator.hostNetwork | bool | `true` | HostNetwork setting |
 | operator.identityGCInterval | string | `"15m0s"` | Interval for identity garbage collection. |
 | operator.identityHeartbeatTimeout | string | `"30m0s"` | Timeout for identity heartbeats. |
-| operator.image | object | `{"alibabacloudDigest":"sha256:eaa7b18b7cda65af1d454d54224d175fdb69a35199fa949ae7dfda2789c18dd6","awsDigest":"sha256:3c31583e57648470fbf6646ac67122ac5896ce5f979ab824d9a38cfc7eafc753","azureDigest":"sha256:d8d95049bfeab47cb1a3f995164e1ca2cdec8e6c7036c29799647999cdae07b1","genericDigest":"sha256:a3906412f477b09904f46aac1bed28eb522bef7899ed7dd81c15f78b7aa1b9b5","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/operator","suffix":"","tag":"v1.17.4","useDigest":true}` | cilium-operator image. |
+| operator.image | object | `{"alibabacloudDigest":"sha256:654db67929f716b6178a34a15cb8f95e391465085bcf48cdba49819a56fcd259","awsDigest":"sha256:3e189ec1e286f1bf23d47c45bdeac6025ef7ec3d2dc16190ee768eb94708cbc3","azureDigest":"sha256:add78783fdaced7453a324612eeb9ebecf56002b56c14c73596b3b4923321026","genericDigest":"sha256:f954c97eeb1b47ed67d08cc8fb4108fb829f869373cbb3e698a7f8ef1085b09e","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/operator","suffix":"","tag":"v1.17.5","useDigest":true}` | cilium-operator image. |
 | operator.nodeGCInterval | string | `"5m0s"` | Interval for cilium node garbage collection. |
 | operator.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for cilium-operator pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
 | operator.podAnnotations | object | `{}` | Annotations to be added to cilium-operator pods |
@@ -813,7 +814,7 @@ contributors across the globe, there is almost always someone available to help.
 | preflight.extraEnv | list | `[]` | Additional preflight environment variables. |
 | preflight.extraVolumeMounts | list | `[]` | Additional preflight volumeMounts. |
 | preflight.extraVolumes | list | `[]` | Additional preflight volumes. |
-| preflight.image | object | `{"digest":"sha256:24a73fe795351cf3279ac8e84918633000b52a9654ff73a6b0d7223bcff4a67a","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.17.4","useDigest":true}` | Cilium pre-flight image. |
+| preflight.image | object | `{"digest":"sha256:baf8541723ee0b72d6c489c741c81a6fdc5228940d66cb76ef5ea2ce3c639ea6","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.17.5","useDigest":true}` | Cilium pre-flight image. |
 | preflight.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for preflight pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
 | preflight.podAnnotations | object | `{}` | Annotations to be added to preflight pods |
 | preflight.podDisruptionBudget.enabled | bool | `false` | enable PodDisruptionBudget ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/ |
@@ -1008,7 +1008,7 @@ spec:
         defaultMode: 0400
         sources:
         - secret:
-            name: {{ .Values.hubble.tls.server.existingSecret | default "hubble-metrics-server-certs" }}
+            name: {{ .Values.hubble.metrics.tls.server.existingSecret | default "hubble-metrics-server-certs" }}
            optional: true
            items:
              - key: tls.crt
@@ -378,7 +378,7 @@ data:
   bpf-events-default-burst-limit: {{ .Values.bpf.events.default.burstLimit | quote }}
 {{- end}}

-{{- if .Values.bpf.mapDynamicSizeRatio }}
+{{- if ne 0.0 ( .Values.bpf.mapDynamicSizeRatio | float64) }}
   # Specifies the ratio (0.0-1.0] of total system memory to use for dynamic
   # sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
   bpf-map-dynamic-size-ratio: {{ .Values.bpf.mapDynamicSizeRatio | quote }}
@@ -11,7 +11,9 @@ metadata:
   {{- with .Values.commonLabels }}
   {{- toYaml . | nindent 4 }}
   {{- end }}
+  {{- with .Values.clustermesh.apiserver.service.labels }}
+  {{- toYaml . | nindent 4 }}
+  {{- end }}
   {{- if or .Values.clustermesh.apiserver.service.annotations .Values.clustermesh.annotations }}
   annotations:
     {{- with .Values.clustermesh.annotations }}
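The added block copies `clustermesh.apiserver.service.labels` into the Service metadata using the standard Helm `with`/`toYaml`/`nindent` construct: `with` skips the block entirely when the value is empty, and `nindent 4` re-indents the rendered YAML map by four spaces on a new line. An illustrative rendering (hypothetical values):

    # Given values (hypothetical):
    #   clustermesh.apiserver.service.labels:
    #     team: platform
    # the template renders:
    #   metadata:
    #     labels:
    #       team: platform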
@@ -597,7 +597,8 @@
     "mapDynamicSizeRatio": {
       "type": [
         "null",
-        "number"
+        "number",
+        "string"
       ]
     },
     "masquerade": {
@@ -1246,6 +1247,9 @@
         "Cluster"
       ]
     },
+    "labels": {
+      "type": "object"
+    },
     "loadBalancerClass": {
       "type": [
         "null",
@@ -191,10 +191,10 @@ image:
   # @schema
   override: ~
   repository: "quay.io/cilium/cilium"
-  tag: "v1.17.4"
+  tag: "v1.17.5"
   pullPolicy: "IfNotPresent"
   # cilium-digest
-  digest: "sha256:24a73fe795351cf3279ac8e84918633000b52a9654ff73a6b0d7223bcff4a67a"
+  digest: "sha256:baf8541723ee0b72d6c489c741c81a6fdc5228940d66cb76ef5ea2ce3c639ea6"
   useDigest: true
 # -- Scheduling configurations for cilium pods
 scheduling:
@@ -561,7 +561,7 @@ bpf:
|
||||
# @schema
|
||||
policyMapMax: 16384
|
||||
# @schema
|
||||
# type: [null, number]
|
||||
# type: [null, number, string]
|
||||
# @schema
|
||||
# -- (float64) Configure auto-sizing for all BPF maps based on available memory.
|
||||
# ref: https://docs.cilium.io/en/stable/network/ebpf/maps/
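With the schema relaxed to `[null, number, string]` and the ConfigMap template now casting through `float64`, the ratio should validate in either form. A minimal sketch of a values override:

```yaml
bpf:
  # either form is accepted under the relaxed schema
  mapDynamicSizeRatio: 0.0025
  # mapDynamicSizeRatio: "0.0025"
```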
@@ -1440,9 +1440,9 @@ hubble:
# @schema
override: ~
repository: "quay.io/cilium/hubble-relay"
-tag: "v1.17.4"
+tag: "v1.17.5"
# hubble-relay-digest
-digest: "sha256:c16de12a64b8b56de62b15c1652d036253b40cd7fa643d7e1a404dc71dc66441"
+digest: "sha256:fbb8a6afa8718200fca9381ad274ed695792dbadd2417b0e99c36210ae4964ff"
useDigest: true
pullPolicy: "IfNotPresent"
# -- Specifies the resources for the hubble-relay pods

@@ -2353,9 +2353,9 @@ envoy:
# @schema
override: ~
repository: "quay.io/cilium/cilium-envoy"
-tag: "v1.32.6-1746661844-0f602c28cb2aa57b29078195049fb257d5b5246c"
+tag: "v1.32.6-1749271279-0864395884b263913eac200ee2048fd985f8e626"
pullPolicy: "IfNotPresent"
-digest: "sha256:a04218c6879007d60d96339a441c448565b6f86650358652da27582e0efbf182"
+digest: "sha256:9f69e290a7ea3d4edf9192acd81694089af048ae0d8a67fb63bd62dc1d72203e"
useDigest: true
# -- Additional containers added to the cilium Envoy DaemonSet.
extraContainers: []

@@ -2710,15 +2710,15 @@ operator:
# @schema
override: ~
repository: "quay.io/cilium/operator"
-tag: "v1.17.4"
+tag: "v1.17.5"
# operator-generic-digest
-genericDigest: "sha256:a3906412f477b09904f46aac1bed28eb522bef7899ed7dd81c15f78b7aa1b9b5"
+genericDigest: "sha256:f954c97eeb1b47ed67d08cc8fb4108fb829f869373cbb3e698a7f8ef1085b09e"
# operator-azure-digest
-azureDigest: "sha256:d8d95049bfeab47cb1a3f995164e1ca2cdec8e6c7036c29799647999cdae07b1"
+azureDigest: "sha256:add78783fdaced7453a324612eeb9ebecf56002b56c14c73596b3b4923321026"
# operator-aws-digest
-awsDigest: "sha256:3c31583e57648470fbf6646ac67122ac5896ce5f979ab824d9a38cfc7eafc753"
+awsDigest: "sha256:3e189ec1e286f1bf23d47c45bdeac6025ef7ec3d2dc16190ee768eb94708cbc3"
# operator-alibabacloud-digest
-alibabacloudDigest: "sha256:eaa7b18b7cda65af1d454d54224d175fdb69a35199fa949ae7dfda2789c18dd6"
+alibabacloudDigest: "sha256:654db67929f716b6178a34a15cb8f95e391465085bcf48cdba49819a56fcd259"
useDigest: true
pullPolicy: "IfNotPresent"
suffix: ""

@@ -2993,9 +2993,9 @@ preflight:
# @schema
override: ~
repository: "quay.io/cilium/cilium"
-tag: "v1.17.4"
+tag: "v1.17.5"
# cilium-digest
-digest: "sha256:24a73fe795351cf3279ac8e84918633000b52a9654ff73a6b0d7223bcff4a67a"
+digest: "sha256:baf8541723ee0b72d6c489c741c81a6fdc5228940d66cb76ef5ea2ce3c639ea6"
useDigest: true
pullPolicy: "IfNotPresent"
# -- The priority class to use for the preflight pod.

@@ -3142,9 +3142,9 @@ clustermesh:
# @schema
override: ~
repository: "quay.io/cilium/clustermesh-apiserver"
-tag: "v1.17.4"
+tag: "v1.17.5"
# clustermesh-apiserver-digest
-digest: "sha256:0b72f3046cf36ff9b113d53cc61185e893edb5fe728a2c9e561c1083f806453d"
+digest: "sha256:78dc40b9cb8d7b1ad21a76ff3e11541809acda2ac4ef94150cc832100edc247d"
useDigest: true
pullPolicy: "IfNotPresent"
# -- TCP port for the clustermesh-apiserver health API.

@@ -3246,6 +3246,8 @@ clustermesh:
# * EKS: service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
# * GKE: networking.gke.io/load-balancer-type: "Internal"
annotations: {}
+# -- Labels for the clustermesh-apiserver service.
+labels: {}
# @schema
# enum: [Local, Cluster]
# @schema

@@ -3651,7 +3653,7 @@ authentication:
override: ~
repository: "docker.io/library/busybox"
tag: "1.37.0"
-digest: "sha256:37f7b378a29ceb4c551b1b5582e27747b855bbfaa73fa11914fe0df028dc581f"
+digest: "sha256:f85340bf132ae937d2c2a763b8335c9bab35d6e8293f70f606b9c6178d84f42b"
useDigest: true
pullPolicy: "IfNotPresent"
# SPIRE agent configuration

@@ -566,7 +566,7 @@ bpf:
# @schema
policyMapMax: 16384
# @schema
-# type: [null, number]
+# type: [null, number, string]
# @schema
# -- (float64) Configure auto-sizing for all BPF maps based on available memory.
# ref: https://docs.cilium.io/en/stable/network/ebpf/maps/

@@ -3276,6 +3276,8 @@ clustermesh:
# * EKS: service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
# * GKE: networking.gke.io/load-balancer-type: "Internal"
annotations: {}
+# -- Labels for the clustermesh-apiserver service.
+labels: {}
# @schema
# enum: [Local, Cluster]
# @schema

@@ -1,2 +1,2 @@
-ARG VERSION=v1.17.4
+ARG VERSION=v1.17.5
FROM quay.io/cilium/cilium:${VERSION}

@@ -14,7 +14,7 @@ cilium:
mode: "kubernetes"
image:
repository: ghcr.io/cozystack/cozystack/cilium
-tag: 1.17.4
-digest: "sha256:91f628cbdc4652b4459af79c5a0501282cc0bc0a9fc11e3d8cb65e884f94e751"
+tag: 1.17.5
+digest: "sha256:2def2dccfc17870be6e1d63584c25b32e812f21c9cdcfa06deadd2787606654d"
envoy:
enabled: false
@@ -21,3 +21,8 @@ image-cozystack-api:
	IMAGE="$(REGISTRY)/cozystack-api:$(call settag,$(TAG))@$$(yq e '."containerimage.digest"' images/cozystack-api.json -o json -r)" \
	yq -i '.cozystackAPI.image = strenv(IMAGE)' values.yaml
	rm -f images/cozystack-api.json

+generate:
+	rm -rf openapi-schemas
+	mkdir -p openapi-schemas
+	find ../../apps ../../extra -maxdepth 2 -name values.schema.json -exec sh -c 'ln -s ../{} openapi-schemas/$$(basename $$(dirname {})).json' \;
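The new `generate` target rebuilds `openapi-schemas/` as a directory of symlinks, one per application chart, each pointing back at that chart's `values.schema.json`; the resulting links are listed below. A sketch of running it by hand from the repository root:

```sh
make generate -C packages/system/cozystack-api
ls -l packages/system/cozystack-api/openapi-schemas/bucket.json
# bucket.json -> ../../../apps/bucket/values.schema.json
```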
packages/system/cozystack-api/openapi-schemas/bootbox.json (new symbolic link, vendored)
@@ -0,0 +1 @@
+../../../extra/bootbox/values.schema.json

packages/system/cozystack-api/openapi-schemas/bucket.json (new symbolic link)
@@ -0,0 +1 @@
+../../../apps/bucket/values.schema.json

packages/system/cozystack-api/openapi-schemas/clickhouse.json (new symbolic link)
@@ -0,0 +1 @@
+../../../apps/clickhouse/values.schema.json

packages/system/cozystack-api/openapi-schemas/etcd.json (new symbolic link)
@@ -0,0 +1 @@
+../../../extra/etcd/values.schema.json

packages/system/cozystack-api/openapi-schemas/ferretdb.json (new symbolic link)
@@ -0,0 +1 @@
+../../../apps/ferretdb/values.schema.json

packages/system/cozystack-api/openapi-schemas/http-cache.json (new symbolic link)
@@ -0,0 +1 @@
+../../../apps/http-cache/values.schema.json

packages/system/cozystack-api/openapi-schemas/info.json (new symbolic link)
@@ -0,0 +1 @@
+../../../extra/info/values.schema.json

packages/system/cozystack-api/openapi-schemas/ingress.json (new symbolic link)
@@ -0,0 +1 @@
+../../../extra/ingress/values.schema.json

packages/system/cozystack-api/openapi-schemas/kafka.json (new symbolic link)
@@ -0,0 +1 @@
+../../../apps/kafka/values.schema.json

packages/system/cozystack-api/openapi-schemas/kubernetes.json (new symbolic link)
@@ -0,0 +1 @@
+../../../apps/kubernetes/values.schema.json

packages/system/cozystack-api/openapi-schemas/monitoring.json (new symbolic link)
@@ -0,0 +1 @@
+../../../extra/monitoring/values.schema.json

packages/system/cozystack-api/openapi-schemas/mysql.json (new symbolic link)
@@ -0,0 +1 @@
+../../../apps/mysql/values.schema.json

packages/system/cozystack-api/openapi-schemas/nats.json (new symbolic link)
@@ -0,0 +1 @@
+../../../apps/nats/values.schema.json

packages/system/cozystack-api/openapi-schemas/postgres.json (new symbolic link)
@@ -0,0 +1 @@
+../../../apps/postgres/values.schema.json

packages/system/cozystack-api/openapi-schemas/rabbitmq.json (new symbolic link)
@@ -0,0 +1 @@
+../../../apps/rabbitmq/values.schema.json

packages/system/cozystack-api/openapi-schemas/redis.json (new symbolic link)
@@ -0,0 +1 @@
+../../../apps/redis/values.schema.json

packages/system/cozystack-api/openapi-schemas/seaweedfs.json (new symbolic link)
@@ -0,0 +1 @@
+../../../extra/seaweedfs/values.schema.json

packages/system/cozystack-api/openapi-schemas/tcp-balancer.json (new symbolic link)
@@ -0,0 +1 @@
+../../../apps/tcp-balancer/values.schema.json

packages/system/cozystack-api/openapi-schemas/tenant.json (new symbolic link)
@@ -0,0 +1 @@
+../../../apps/tenant/values.schema.json
packages/system/cozystack-api/openapi-schemas/virtual-machine.json (new symbolic link)
@@ -0,0 +1 @@
+../../../apps/virtual-machine/values.schema.json

packages/system/cozystack-api/openapi-schemas/vm-disk.json (new symbolic link)
@@ -0,0 +1 @@
+../../../apps/vm-disk/values.schema.json

packages/system/cozystack-api/openapi-schemas/vm-instance.json (new symbolic link)
@@ -0,0 +1 @@
+../../../apps/vm-instance/values.schema.json

packages/system/cozystack-api/openapi-schemas/vpn.json (new symbolic link)
@@ -0,0 +1 @@
+../../../apps/vpn/values.schema.json
@@ -10,6 +10,7 @@ data:
kind: Bucket
singular: bucket
plural: buckets
+openAPISchema: {{ .Files.Get "openapi-schemas/bucket.json" | fromJson | toJson | quote }}
release:
prefix: bucket-
labels:

@@ -24,6 +25,7 @@ data:
kind: ClickHouse
singular: clickhouse
plural: clickhouses
+openAPISchema: {{ .Files.Get "openapi-schemas/clickhouse.json" | fromJson | toJson | quote }}
release:
prefix: clickhouse-
labels:

@@ -38,6 +40,7 @@ data:
kind: HTTPCache
singular: httpcache
plural: httpcaches
+openAPISchema: {{ .Files.Get "openapi-schemas/http-cache.json" | fromJson | toJson | quote }}
release:
prefix: http-cache-
labels:

@@ -52,6 +55,7 @@ data:
kind: NATS
singular: nats
plural: natses
+openAPISchema: {{ .Files.Get "openapi-schemas/nats.json" | fromJson | toJson | quote }}
release:
prefix: nats-
labels:

@@ -66,6 +70,7 @@ data:
kind: TCPBalancer
singular: tcpbalancer
plural: tcpbalancers
+openAPISchema: {{ .Files.Get "openapi-schemas/tcp-balancer.json" | fromJson | toJson | quote }}
release:
prefix: tcp-balancer-
labels:

@@ -80,6 +85,7 @@ data:
kind: VirtualMachine
singular: virtualmachine
plural: virtualmachines
+openAPISchema: {{ .Files.Get "openapi-schemas/virtual-machine.json" | fromJson | toJson | quote }}
release:
prefix: virtual-machine-
labels:

@@ -94,6 +100,7 @@ data:
kind: VPN
singular: vpn
plural: vpns
+openAPISchema: {{ .Files.Get "openapi-schemas/vpn.json" | fromJson | toJson | quote }}
release:
prefix: vpn-
labels:

@@ -108,6 +115,7 @@ data:
kind: MySQL
singular: mysql
plural: mysqls
+openAPISchema: {{ .Files.Get "openapi-schemas/mysql.json" | fromJson | toJson | quote }}
release:
prefix: mysql-
labels:

@@ -122,6 +130,7 @@ data:
kind: Tenant
singular: tenant
plural: tenants
+openAPISchema: {{ .Files.Get "openapi-schemas/tenant.json" | fromJson | toJson | quote }}
release:
prefix: tenant-
labels:

@@ -136,6 +145,7 @@ data:
kind: Kubernetes
singular: kubernetes
plural: kuberneteses
+openAPISchema: {{ .Files.Get "openapi-schemas/kubernetes.json" | fromJson | toJson | quote }}
release:
prefix: kubernetes-
labels:

@@ -150,6 +160,7 @@ data:
kind: Redis
singular: redis
plural: redises
+openAPISchema: {{ .Files.Get "openapi-schemas/redis.json" | fromJson | toJson | quote }}
release:
prefix: redis-
labels:

@@ -164,6 +175,7 @@ data:
kind: RabbitMQ
singular: rabbitmq
plural: rabbitmqs
+openAPISchema: {{ .Files.Get "openapi-schemas/rabbitmq.json" | fromJson | toJson | quote }}
release:
prefix: rabbitmq-
labels:

@@ -178,6 +190,7 @@ data:
kind: Postgres
singular: postgres
plural: postgreses
+openAPISchema: {{ .Files.Get "openapi-schemas/postgres.json" | fromJson | toJson | quote }}
release:
prefix: postgres-
labels:

@@ -192,6 +205,7 @@ data:
kind: FerretDB
singular: ferretdb
plural: ferretdb
+openAPISchema: {{ .Files.Get "openapi-schemas/ferretdb.json" | fromJson | toJson | quote }}
release:
prefix: ferretdb-
labels:

@@ -206,6 +220,7 @@ data:
kind: Kafka
singular: kafka
plural: kafkas
+openAPISchema: {{ .Files.Get "openapi-schemas/kafka.json" | fromJson | toJson | quote }}
release:
prefix: kafka-
labels:

@@ -220,6 +235,7 @@ data:
kind: VMDisk
plural: vmdisks
singular: vmdisk
+openAPISchema: {{ .Files.Get "openapi-schemas/vm-disk.json" | fromJson | toJson | quote }}
release:
prefix: vm-disk-
labels:

@@ -234,6 +250,7 @@ data:
kind: VMInstance
plural: vminstances
singular: vminstance
+openAPISchema: {{ .Files.Get "openapi-schemas/vm-instance.json" | fromJson | toJson | quote }}
release:
prefix: vm-instance-
labels:

@@ -248,6 +265,7 @@ data:
kind: Monitoring
plural: monitorings
singular: monitoring
+openAPISchema: {{ .Files.Get "openapi-schemas/monitoring.json" | fromJson | toJson | quote }}
release:
prefix: ""
labels:

@@ -262,6 +280,7 @@ data:
kind: Etcd
plural: etcds
singular: etcd
+openAPISchema: {{ .Files.Get "openapi-schemas/etcd.json" | fromJson | toJson | quote }}
release:
prefix: ""
labels:

@@ -276,6 +295,7 @@ data:
kind: Ingress
plural: ingresses
singular: ingress
+openAPISchema: {{ .Files.Get "openapi-schemas/ingress.json" | fromJson | toJson | quote }}
release:
prefix: ""
labels:

@@ -290,6 +310,7 @@ data:
kind: SeaweedFS
plural: seaweedfses
singular: seaweedfs
+openAPISchema: {{ .Files.Get "openapi-schemas/seaweedfs.json" | fromJson | toJson | quote }}
release:
prefix: ""
labels:

@@ -304,6 +325,7 @@ data:
kind: BootBox
plural: bootboxes
singular: bootbox
+openAPISchema: {{ .Files.Get "openapi-schemas/bootbox.json" | fromJson | toJson | quote }}
release:
prefix: ""
labels:

@@ -318,6 +340,7 @@ data:
kind: Info
plural: infos
singular: info
+openAPISchema: {{ .Files.Get "openapi-schemas/info.json" | fromJson | toJson | quote }}
release:
prefix: ""
labels:
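Each added `openAPISchema` line inlines the application's JSON schema into the ConfigMap value: `.Files.Get` reads the symlinked schema file, `fromJson | toJson` round-trips it so malformed JSON fails at render time and is re-emitted in compact form, and `quote` wraps the result as a single YAML string. Roughly, given a schema file containing `{"type": "object"}`, the line renders as:

```yaml
openAPISchema: "{\"type\":\"object\"}"
```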
@@ -1,2 +1,2 @@
cozystackAPI:
-image: ghcr.io/cozystack/cozystack/cozystack-api:v0.33.1@sha256:ee6b71d3ab1c1484490ff1dc57a7df82813c4f18d6393f149d32acf656aa779d
+image: ghcr.io/cozystack/cozystack/cozystack-api:v0.34.0-beta.1@sha256:724a166d2daa9cae3caeb18bffdc7146d80de310a6f97360c2beaef340076e6d

@@ -1,5 +1,5 @@
cozystackController:
-image: ghcr.io/cozystack/cozystack/cozystack-controller:v0.33.1@sha256:4777488e14f0313b153b153388c78ab89e3a39582c30266f2321704df1976922
+image: ghcr.io/cozystack/cozystack/cozystack-controller:v0.34.0-beta.1@sha256:cf0b80f2540ac8f6ddd226b3bab87e001602eb0ebc8a527a1c14a0a6f23eb427
debug: false
disableTelemetry: false
-cozystackVersion: "v0.33.1"
+cozystackVersion: "v0.34.0-beta.1"
@@ -76,7 +76,7 @@ data:
"kubeappsNamespace": {{ .Release.Namespace | quote }},
"helmGlobalNamespace": {{ include "kubeapps.helmGlobalPackagingNamespace" . | quote }},
"carvelGlobalNamespace": {{ .Values.kubeappsapis.pluginConfig.kappController.packages.v1alpha1.globalPackagingNamespace | quote }},
-"appVersion": "v0.33.1",
+"appVersion": "v0.34.0-beta.1",
"authProxyEnabled": {{ .Values.authProxy.enabled }},
"oauthLoginURI": {{ .Values.authProxy.oauthLoginURI | quote }},
"oauthLogoutURI": {{ .Values.authProxy.oauthLogoutURI | quote }},

@@ -19,8 +19,8 @@ kubeapps:
image:
registry: ghcr.io/cozystack/cozystack
repository: dashboard
-tag: v0.33.1
-digest: "sha256:5e514516bd3dc0c693bb346ddeb9740e0439a59deb2a56b87317286e3ce79ac9"
+tag: v0.34.0-beta.1
+digest: "sha256:ac2b5348d85fe37ad70a4cc159881c4eaded9175a4b586cfa09a52b0fbe5e1e5"
redis:
master:
resourcesPreset: "none"

@@ -37,8 +37,8 @@ kubeapps:
image:
registry: ghcr.io/cozystack/cozystack
repository: kubeapps-apis
-tag: v0.33.1
-digest: "sha256:ea5b21a27c97b14880042d2a642670e3461e7d946c65b5b557d2eb8df9f03a87"
+tag: v0.34.0-beta.1
+digest: "sha256:0270aea2e4b21a906db7f03214e7f6b0786be64a1b66e998e4ed8ef7da12da58"
pluginConfig:
flux:
packages:
@@ -8,7 +8,7 @@ annotations:
- name: Upstream Project
url: https://github.com/controlplaneio-fluxcd/flux-operator
apiVersion: v2
-appVersion: v0.23.0
+appVersion: v0.24.0
description: 'A Helm chart for deploying the Flux Operator. '
home: https://github.com/controlplaneio-fluxcd
icon: https://raw.githubusercontent.com/cncf/artwork/main/projects/flux/icon/color/flux-icon-color.png

@@ -25,4 +25,4 @@ sources:
- https://github.com/controlplaneio-fluxcd/flux-operator
- https://github.com/controlplaneio-fluxcd/charts
type: application
-version: 0.23.0
+version: 0.24.0

@@ -1,6 +1,6 @@
# flux-operator

-  
+  

The [Flux Operator](https://github.com/controlplaneio-fluxcd/flux-operator) provides a
declarative API for the installation and upgrade of CNCF [Flux](https://fluxcd.io) and the

@@ -38,6 +38,8 @@ see the Flux Operator [documentation](https://fluxcd.control-plane.io/operator/)
| commonLabels | object | `{}` | Common labels to add to all deployed objects including pods. |
| extraArgs | list | `[]` | Container extra arguments. |
+| extraEnvs | list | `[]` | Container extra environment variables. |
+| extraVolumeMounts | list | `[]` | Container extra volume mounts. |
| extraVolumes | list | `[]` | Pod extra volumes. |
| fullnameOverride | string | `""` | |
| hostNetwork | bool | `false` | If `true`, the container ports (`8080` and `8081`) are exposed on the host network. |
| image | object | `{"imagePullPolicy":"IfNotPresent","pullSecrets":[],"repository":"ghcr.io/controlplaneio-fluxcd/flux-operator","tag":""}` | Container image settings. The image tag defaults to the chart appVersion. |
@@ -586,6 +586,9 @@ spec:
description: ServerVersion is the version of the Kubernetes API
server.
type: string
+required:
+- platform
+- serverVersion
type: object
components:
description: ComponentsStatus is the status of the Flux controller

@@ -637,6 +640,23 @@ spec:
- entitlement
- status
type: object
+operator:
+description: Operator is the version information of the Flux Operator.
+properties:
+apiVersion:
+description: APIVersion is the API version of the Flux Operator.
+type: string
+platform:
+description: Platform is the os/arch of Flux Operator.
+type: string
+version:
+description: Version is the version number of Flux Operator.
+type: string
+required:
+- apiVersion
+- platform
+- version
+type: object
reconcilers:
description: |-
ReconcilersStatus is the list of Flux reconcilers and

@@ -858,8 +878,10 @@ spec:
- a PEM-encoded CA certificate (`ca.crt`)
- a PEM-encoded client certificate (`tls.crt`) and private key (`tls.key`)

-When connecting to a Git provider that uses self-signed certificates, the CA certificate
+When connecting to a Git or OCI provider that uses self-signed certificates, the CA certificate
must be set in the Secret under the 'ca.crt' key to establish the trust relationship.
+When connecting to an OCI provider that supports client certificates (mTLS), the client certificate
+and private key must be set in the Secret under the 'tls.crt' and 'tls.key' keys, respectively.
properties:
name:
description: Name of the referent.

@@ -884,11 +906,21 @@ spec:
ExcludeBranch specifies the regular expression to filter the branches
that the input provider should exclude.
type: string
+excludeTag:
+description: |-
+ExcludeTag specifies the regular expression to filter the tags
+that the input provider should exclude.
+type: string
includeBranch:
description: |-
IncludeBranch specifies the regular expression to filter the branches
that the input provider should include.
type: string
+includeTag:
+description: |-
+IncludeTag specifies the regular expression to filter the tags
+that the input provider should include.
+type: string
labels:
description: Labels specifies the list of labels to filter the
input provider response.

@@ -896,13 +928,17 @@ spec:
type: string
type: array
limit:
+default: 100
description: |-
Limit specifies the maximum number of input sets to return.
+When not set, the default limit is 100.
type: integer
semver:
-description: Semver specifies the semantic version range to filter
-and order the tags.
+description: |-
+Semver specifies a semantic version range to filter and sort the tags.
+If this field is not specified, the tags will be sorted in reverse
+alphabetical order.
+Supported only for tags at the moment.
type: string
type: object
schedule:

@@ -933,10 +969,12 @@ spec:
secretRef:
description: |-
SecretRef specifies the Kubernetes Secret containing the basic-auth credentials
-to access the input provider. The secret must contain the keys
-'username' and 'password'.
-When connecting to a Git provider, the password should be a personal access token
+to access the input provider.
+When connecting to a Git provider, the secret must contain the keys
+'username' and 'password', and the password should be a personal access token
+that grants read-only access to the repository.
+When connecting to an OCI provider, the secret must contain a Kubernetes
+Image Pull Secret, as if created by `kubectl create secret docker-registry`.
properties:
name:
description: Name of the referent.

@@ -944,6 +982,14 @@ spec:
required:
- name
type: object
+serviceAccountName:
+description: |-
+ServiceAccountName specifies the name of the Kubernetes ServiceAccount
+used for authentication with AWS, Azure or GCP services through
+workload identity federation features. If not specified, the
+authentication for these cloud providers will use the ServiceAccount
+of the operator (or any other environment authentication configuration).
+type: string
skip:
description: Skip defines whether we need to skip input provider response
updates.

@@ -966,12 +1012,20 @@ spec:
- GitLabBranch
- GitLabTag
- GitLabMergeRequest
+- AzureDevOpsBranch
+- AzureDevOpsTag
+- AzureDevOpsPullRequest
+- OCIArtifactTag
+- ACRArtifactTag
+- ECRArtifactTag
+- GARArtifactTag
type: string
url:
description: |-
-URL specifies the HTTP/S address of the input provider API.
+URL specifies the HTTP/S or OCI address of the input provider API.
+When connecting to a Git provider, the URL should point to the repository address.
+When connecting to an OCI provider, the URL should point to the OCI repository address.
-pattern: ^((http|https)://.*){0,1}$
+pattern: ^((http|https|oci)://.*){0,1}$
type: string
required:
- type

@@ -981,6 +1035,27 @@ spec:
rule: self.type != 'Static' || !has(self.url)
- message: spec.url must not be empty when spec.type is not 'Static'
rule: self.type == 'Static' || has(self.url)
+- message: spec.url must start with 'http://' or 'https://' when spec.type
+is a Git provider
+rule: '!self.type.startsWith(''Git'') || self.url.startsWith(''http'')'
+- message: spec.url must start with 'http://' or 'https://' when spec.type
+is a Git provider
+rule: '!self.type.startsWith(''AzureDevOps'') || self.url.startsWith(''http'')'
+- message: spec.url must start with 'oci://' when spec.type is an OCI
+provider
+rule: '!self.type.endsWith(''ArtifactTag'') || self.url.startsWith(''oci'')'
+- message: cannot specify spec.serviceAccountName when spec.type is not
+one of AzureDevOps* or *ArtifactTag
+rule: '!has(self.serviceAccountName) || self.type.startsWith(''AzureDevOps'')
+|| self.type.endsWith(''ArtifactTag'')'
+- message: cannot specify spec.certSecretRef when spec.type is one of
+Static, AzureDevOps*, ACRArtifactTag, ECRArtifactTag or GARArtifactTag
+rule: '!has(self.certSecretRef) || !(self.url == ''Static'' || self.type.startsWith(''AzureDevOps'')
+|| (self.type.endsWith(''ArtifactTag'') && self.type != ''OCIArtifactTag''))'
+- message: cannot specify spec.secretRef when spec.type is one of Static,
+ACRArtifactTag, ECRArtifactTag or GARArtifactTag
+rule: '!has(self.secretRef) || !(self.url == ''Static'' || (self.type.endsWith(''ArtifactTag'')
+&& self.type != ''OCIArtifactTag''))'
status:
description: ResourceSetInputProviderStatus defines the observed state
of ResourceSetInputProvider.
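Taken together, these additions let a `ResourceSetInputProvider` track OCI artifact tags. A hypothetical manifest consistent with the new enum, URL pattern, and semver filter (the `fluxcd.controlplane.io/v1` group/version and all names here are assumptions based on the Flux Operator docs, not taken from this diff):

```yaml
apiVersion: fluxcd.controlplane.io/v1
kind: ResourceSetInputProvider
metadata:
  name: app-releases
  namespace: flux-system
spec:
  type: OCIArtifactTag
  # spec.url must start with 'oci://' for *ArtifactTag providers
  url: oci://ghcr.io/example/app
  filter:
    semver: ">=1.0.0"
    limit: 10
```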
@@ -99,9 +99,15 @@ spec:
volumeMounts:
- name: temp
mountPath: /tmp
+{{- if .Values.extraVolumeMounts }}
+{{- toYaml .Values.extraVolumeMounts | nindent 12 }}
+{{- end }}
volumes:
- name: temp
emptyDir: {}
+{{- if .Values.extraVolumes }}
+{{- toYaml .Values.extraVolumes | nindent 8 }}
+{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}

@@ -116,12 +116,18 @@ nodeSelector: { } # @schema type: object
# -- If `true`, the container ports (`8080` and `8081`) are exposed on the host network.
hostNetwork: false # @schema default: false

# -- Pod extra volumes.
extraVolumes: [ ] # @schema item: object ; uniqueItems: true

+# -- Container extra environment variables.
+extraEnvs: [ ] # @schema item: object ; uniqueItems: true
+
# -- Container extra arguments.
extraArgs: [ ] # @schema item: string ; uniqueItems: true

+# -- Container extra volume mounts.
+extraVolumeMounts: [ ] # @schema item: object ; uniqueItems: true
+
# -- Container logging level flag.
logLevel: "info" # @schema enum:[debug,info,error]
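The new `extraEnvs` and `extraVolumeMounts` values pair with the deployment-template hooks added above. A hypothetical override (names and paths are illustrative, not defaults):

```yaml
extraEnvs:
  - name: HTTPS_PROXY
    value: "http://proxy.internal:3128"

extraVolumeMounts:
  - name: ca-certs
    mountPath: /etc/ssl/certs
    readOnly: true

extraVolumes:
  - name: ca-certs
    configMap:
      name: custom-ca-bundle
```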
@@ -8,7 +8,7 @@ annotations:
- name: Upstream Project
url: https://github.com/controlplaneio-fluxcd/flux-operator
apiVersion: v2
-appVersion: v0.23.0
+appVersion: v0.24.0
description: 'A Helm chart for deploying a Flux instance managed by Flux Operator. '
home: https://github.com/controlplaneio-fluxcd
icon: https://raw.githubusercontent.com/cncf/artwork/main/projects/flux/icon/color/flux-icon-color.png

@@ -25,4 +25,4 @@ sources:
- https://github.com/controlplaneio-fluxcd/flux-operator
- https://github.com/controlplaneio-fluxcd/charts
type: application
-version: 0.23.0
+version: 0.24.0

@@ -1,6 +1,6 @@
# flux-instance

-  
+  

This chart is a thin wrapper around the `FluxInstance` custom resource, which is
used by the [Flux Operator](https://github.com/controlplaneio-fluxcd/flux-operator)
packages/system/hetzner-ccm/Chart.yaml (new file)
@@ -0,0 +1,2 @@
name: hetzner-ccm
version: 1.26.0 # Placeholder, the actual version will be automatically set during the build process

packages/system/hetzner-ccm/Makefile (new file)
@@ -0,0 +1,10 @@
export NAME=hetzner-ccm
export NAMESPACE=kube-system

include ../../../scripts/package.mk

update:
	rm -rf charts
	helm repo add hcloud https://charts.hetzner.cloud
	helm repo update hcloud
	helm pull hcloud/hcloud-cloud-controller-manager --untar --untardir charts
@@ -0,0 +1,96 @@
---
# Source: hcloud-cloud-controller-manager/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: hcloud-cloud-controller-manager
  namespace: kube-system
---
# Source: hcloud-cloud-controller-manager/templates/clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: "system:hcloud-cloud-controller-manager"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: hcloud-cloud-controller-manager
    namespace: kube-system
---
# Source: hcloud-cloud-controller-manager/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hcloud-cloud-controller-manager
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app.kubernetes.io/instance: 'hcloud-hccm'
      app.kubernetes.io/name: 'hcloud-cloud-controller-manager'
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: 'hcloud-hccm'
        app.kubernetes.io/name: 'hcloud-cloud-controller-manager'
    spec:
      serviceAccountName: hcloud-cloud-controller-manager
      dnsPolicy: Default
      tolerations:
        # Allow HCCM itself to schedule on nodes that have not yet been initialized by HCCM.
        - key: "node.cloudprovider.kubernetes.io/uninitialized"
          value: "true"
          effect: "NoSchedule"
        - key: "CriticalAddonsOnly"
          operator: "Exists"

        # Allow HCCM to schedule on control plane nodes.
        - key: "node-role.kubernetes.io/master"
          effect: NoSchedule
          operator: Exists
        - key: "node-role.kubernetes.io/control-plane"
          effect: NoSchedule
          operator: Exists

        - key: "node.kubernetes.io/not-ready"
          effect: "NoExecute"
      containers:
        - name: hcloud-cloud-controller-manager
          args:
            - "--allow-untagged-cloud"
            - "--cloud-provider=hcloud"
            - "--route-reconciliation-period=30s"
            - "--webhook-secure-port=0"
            - "--leader-elect=false"
          env:
            - name: HCLOUD_TOKEN
              valueFrom:
                secretKeyRef:
                  key: token
                  name: hcloud
            - name: ROBOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: robot-password
                  name: hcloud
                  optional: true
            - name: ROBOT_USER
              valueFrom:
                secretKeyRef:
                  key: robot-user
                  name: hcloud
                  optional: true
          image: docker.io/hetznercloud/hcloud-cloud-controller-manager:v1.26.0 # x-releaser-pleaser-version
          ports:
            - name: metrics
              containerPort: 8233
          resources:
            requests:
              cpu: 100m
              memory: 50Mi
      priorityClassName: "system-cluster-critical"
@@ -0,0 +1,113 @@
---
# Source: hcloud-cloud-controller-manager/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: hcloud-cloud-controller-manager
  namespace: kube-system
---
# Source: hcloud-cloud-controller-manager/templates/clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: "system:hcloud-cloud-controller-manager"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: hcloud-cloud-controller-manager
    namespace: kube-system
---
# Source: hcloud-cloud-controller-manager/templates/daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: hcloud-cloud-controller-manager
  namespace: kube-system
spec:
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app.kubernetes.io/instance: 'hcloud-hccm'
      app.kubernetes.io/name: 'hcloud-cloud-controller-manager'
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: 'hcloud-hccm'
        app.kubernetes.io/name: 'hcloud-cloud-controller-manager'
        pod-label: pod-label
      annotations:
        pod-annotation: pod-annotation
    spec:
      serviceAccountName: hcloud-cloud-controller-manager
      dnsPolicy: Default
      tolerations:
        # Allow HCCM itself to schedule on nodes that have not yet been initialized by HCCM.
        - key: "node.cloudprovider.kubernetes.io/uninitialized"
          value: "true"
          effect: "NoSchedule"
        - key: "CriticalAddonsOnly"
          operator: "Exists"

        # Allow HCCM to schedule on control plane nodes.
        - key: "node-role.kubernetes.io/master"
          effect: NoSchedule
          operator: Exists
        - key: "node-role.kubernetes.io/control-plane"
          effect: NoSchedule
          operator: Exists

        - key: "node.kubernetes.io/not-ready"
          effect: "NoExecute"

        - effect: NoSchedule
          key: example-key
          operator: Exists
      nodeSelector:

        foo: bar
      containers:
        - name: hcloud-cloud-controller-manager
          command:
            - "/bin/hcloud-cloud-controller-manager"
            - "--allow-untagged-cloud"
            - "--cloud-provider=hcloud"
            - "--route-reconciliation-period=30s"
            - "--webhook-secure-port=0"
          env:
            - name: HCLOUD_TOKEN
              valueFrom:
                secretKeyRef:
                  key: token
                  name: hcloud
            - name: ROBOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: robot-password
                  name: hcloud
                  optional: true
            - name: ROBOT_USER
              valueFrom:
                secretKeyRef:
                  key: robot-user
                  name: hcloud
                  optional: true
          image: docker.io/hetznercloud/hcloud-cloud-controller-manager:v1.26.0 # x-releaser-pleaser-version
          ports:
            - name: metrics
              containerPort: 8233
          resources:
            requests:
              cpu: 100m
              memory: 50Mi
          volumeMounts:
            - mountPath: /var/run/secrets/hcloud
              name: token-volume
              readOnly: true
      priorityClassName: system-cluster-critical
      volumes:
        - name: token-volume
          secret:
            secretName: hcloud-token
@@ -0,0 +1,51 @@
kind: DaemonSet

monitoring:
  podMonitor:
    labels:
      environment: staging
    annotations:
      release: kube-prometheus-stack

additionalTolerations:
  - key: "example-key"
    operator: "Exists"
    effect: "NoSchedule"

nodeSelector:
  foo: bar

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - antarctica-east1
                - antarctica-west1
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
            - key: another-node-label-key
              operator: In
              values:
                - another-node-label-value

podLabels:
  pod-label: pod-label

podAnnotations:
  pod-annotation: pod-annotation

extraVolumeMounts:
  - name: token-volume
    readOnly: true
    mountPath: /var/run/secrets/hcloud

extraVolumes:
  - name: token-volume
    secret:
      secretName: hcloud-token
@@ -0,0 +1,4 @@
apiVersion: v2
name: hcloud-cloud-controller-manager
type: application
version: 1.26.0
@@ -0,0 +1,61 @@
# hcloud-cloud-controller-manager Helm Chart

This Helm chart is the recommended installation method for [hcloud-cloud-controller-manager](https://github.com/hetznercloud/hcloud-cloud-controller-manager).

## Quickstart

First, [install Helm 3](https://helm.sh/docs/intro/install/).

The following snippet will deploy hcloud-cloud-controller-manager to the kube-system namespace.

```sh
# Sync the Hetzner Cloud helm chart repository to your local computer.
helm repo add hcloud https://charts.hetzner.cloud
helm repo update hcloud

# Install the latest version of the hcloud-cloud-controller-manager chart.
helm install hccm hcloud/hcloud-cloud-controller-manager -n kube-system

# If you want to install hccm with private networking support (see main Deployment guide for more info).
helm install hccm hcloud/hcloud-cloud-controller-manager -n kube-system --set networking.enabled=true
```

Please note that additional configuration is necessary. See the main [Deployment](https://github.com/hetznercloud/hcloud-cloud-controller-manager#deployment) guide.

If you're unfamiliar with Helm it would behoove you to peep around the documentation. Perhaps start with the [Quickstart Guide](https://helm.sh/docs/intro/quickstart/)?

### Upgrading from static manifests

If you previously installed hcloud-cloud-controller-manager with this command:

```sh
kubectl apply -f https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm.yaml
```

You can uninstall that same deployment, by running the following command:

```sh
kubectl delete -f https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm.yaml
```

Then you can follow the Quickstart installation steps above.

## Configuration

This chart aims to be highly flexible. Please review the [values.yaml](./values.yaml) for a full list of configuration options.

If you've already deployed hccm using the `helm install` command above, you can easily change configuration values:

```sh
helm upgrade hccm hcloud/hcloud-cloud-controller-manager -n kube-system --set monitoring.podMonitor.enabled=true
```

### Multiple replicas / DaemonSet

You can choose between different deployment options. By default the chart will deploy a single replica as a Deployment.

If you want to change the replica count you can adjust the value `replicaCount` inside the helm values.
If you have more than 1 replica leader election will be turned on automatically.

If you want to deploy hccm as a DaemonSet you can set `kind` to `DaemonSet` inside the values.
To adjust on which nodes the DaemonSet should be deployed you can use the `nodeSelector` and `additionalTolerations` values.
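For illustration, a minimal values override that deploys hccm as a DaemonSet restricted to control-plane nodes could look like this (the label and toleration are placeholders, not chart defaults):

```yaml
# values-daemonset.yaml (hypothetical)
kind: DaemonSet

nodeSelector:
  node-role.kubernetes.io/control-plane: ""

additionalTolerations:
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"
```

Applied with `helm upgrade hccm hcloud/hcloud-cloud-controller-manager -n kube-system -f values-daemonset.yaml`.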
@@ -0,0 +1,5 @@
{{ if (and $.Values.monitoring.enabled $.Values.monitoring.podMonitor.enabled) }}
{{ if not ($.Capabilities.APIVersions.Has "monitoring.coreos.com/v1/PodMonitor") }}
WARNING: monitoring.podMonitoring.enabled=true but PodMonitor could not be installed: the CRD was not detected.
{{ end }}
{{ end }}
@@ -0,0 +1,7 @@
{{- define "hcloud-cloud-controller-manager.name" -}}
{{- $.Values.nameOverride | default $.Chart.Name | trunc 63 | trimSuffix "-" }}
{{- end }}

{{- define "hcloud-cloud-controller-manager.selectorLabels" -}}
{{- tpl (toYaml $.Values.selectorLabels) $ }}
{{- end }}
@@ -0,0 +1,14 @@
{{- if .Values.rbac.create }}
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: "system:{{ include "hcloud-cloud-controller-manager.name" . }}"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: {{ include "hcloud-cloud-controller-manager.name" . }}
    namespace: {{ .Release.Namespace }}
{{- end }}
@@ -0,0 +1,108 @@
{{- if eq $.Values.kind "DaemonSet" }}
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: {{ include "hcloud-cloud-controller-manager.name" . }}
  namespace: {{ .Release.Namespace }}
spec:
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      {{- include "hcloud-cloud-controller-manager.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "hcloud-cloud-controller-manager.selectorLabels" . | nindent 8 }}
        {{- if .Values.podLabels }}
        {{- toYaml .Values.podLabels | nindent 8 }}
        {{- end }}
      {{- if .Values.podAnnotations }}
      annotations:
        {{- toYaml .Values.podAnnotations | nindent 8 }}
      {{- end }}
    spec:
      {{- with .Values.image.pullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "hcloud-cloud-controller-manager.name" . }}
      dnsPolicy: Default
      tolerations:
        # Allow HCCM itself to schedule on nodes that have not yet been initialized by HCCM.
        - key: "node.cloudprovider.kubernetes.io/uninitialized"
          value: "true"
          effect: "NoSchedule"
        - key: "CriticalAddonsOnly"
          operator: "Exists"

        # Allow HCCM to schedule on control plane nodes.
        - key: "node-role.kubernetes.io/master"
          effect: NoSchedule
          operator: Exists
        - key: "node-role.kubernetes.io/control-plane"
          effect: NoSchedule
          operator: Exists

        - key: "node.kubernetes.io/not-ready"
          effect: "NoExecute"

        {{- if gt (len .Values.additionalTolerations) 0 }}
        {{ toYaml .Values.additionalTolerations | nindent 8 }}
        {{- end }}

      {{- if gt (len .Values.nodeSelector) 0 }}
      nodeSelector:
        {{ toYaml .Values.nodeSelector | nindent 8 }}
      {{- end }}

      {{- if $.Values.networking.enabled }}
      hostNetwork: true
      {{- end }}
      containers:
        - name: hcloud-cloud-controller-manager
          command:
            - "/bin/hcloud-cloud-controller-manager"
            {{- range $key, $value := $.Values.args }}
            {{- if not (eq $value nil) }}
            - "--{{ $key }}{{ if $value }}={{ $value }}{{ end }}"
            {{- end }}
            {{- end }}
            {{- if $.Values.networking.enabled }}
            - "--allocate-node-cidrs=true"
            - "--cluster-cidr={{ $.Values.networking.clusterCIDR }}"
            {{- end }}
          env:
            {{- range $key, $value := $.Values.env }}
            - name: {{ $key }}
              {{- tpl (toYaml $value) $ | nindent 14 }}
            {{- end }}
            {{- if $.Values.networking.enabled }}
            - name: HCLOUD_NETWORK
              {{- tpl (toYaml $.Values.networking.network) $ | nindent 14 }}
            {{- end }}
            {{- if not $.Values.monitoring.enabled }}
            - name: HCLOUD_METRICS_ENABLED
              value: "false"
            {{- end }}
            {{- if $.Values.robot.enabled }}
            - name: ROBOT_ENABLED
              value: "true"
            {{- end }}
          image: {{ $.Values.image.repository }}:{{ tpl $.Values.image.tag . }} # x-releaser-pleaser-version
          ports:
            {{- if $.Values.monitoring.enabled }}
            - name: metrics
              containerPort: 8233
            {{- end }}
          resources:
            {{- toYaml $.Values.resources | nindent 12 }}
          {{- with .Values.extraVolumeMounts }}
          volumeMounts:
            {{- toYaml . | nindent 12 }}
          {{- end }}
      priorityClassName: system-cluster-critical
      {{- with .Values.extraVolumes }}
      volumes:
        {{- toYaml . | nindent 8 }}
      {{- end }}
{{- end }}
@@ -0,0 +1,118 @@
{{- if eq $.Values.kind "Deployment" }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "hcloud-cloud-controller-manager.name" . }}
  namespace: {{ .Release.Namespace }}
spec:
  replicas: {{ .Values.replicaCount }}
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      {{- include "hcloud-cloud-controller-manager.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "hcloud-cloud-controller-manager.selectorLabels" . | nindent 8 }}
        {{- if .Values.podLabels }}
        {{- toYaml .Values.podLabels | nindent 8 }}
        {{- end }}
      {{- if .Values.podAnnotations }}
      annotations:
        {{- toYaml .Values.podAnnotations | nindent 8 }}
      {{- end }}
    spec:
      {{- with .Values.image.pullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "hcloud-cloud-controller-manager.name" . }}
      dnsPolicy: Default
      tolerations:
        # Allow HCCM itself to schedule on nodes that have not yet been initialized by HCCM.
        - key: "node.cloudprovider.kubernetes.io/uninitialized"
          value: "true"
          effect: "NoSchedule"
        - key: "CriticalAddonsOnly"
          operator: "Exists"

        # Allow HCCM to schedule on control plane nodes.
        - key: "node-role.kubernetes.io/master"
          effect: NoSchedule
          operator: Exists
        - key: "node-role.kubernetes.io/control-plane"
          effect: NoSchedule
          operator: Exists

        - key: "node.kubernetes.io/not-ready"
          effect: "NoExecute"

        {{- if gt (len .Values.additionalTolerations) 0 }}
        {{ toYaml .Values.additionalTolerations | nindent 8 }}
        {{- end }}

      {{- if gt (len .Values.nodeSelector) 0 }}
      nodeSelector:
        {{ toYaml .Values.nodeSelector | nindent 8 }}
      {{- end }}

      {{- if gt (len .Values.affinity) 0 }}
      affinity:
        {{ toYaml .Values.affinity | nindent 8 }}
      {{- end }}

      {{- if $.Values.networking.enabled }}
      hostNetwork: true
      {{- end }}
      containers:
        - name: hcloud-cloud-controller-manager
          args:
            {{- range $key, $value := $.Values.args }}
            {{- if not (eq $value nil) }}
            - "--{{ $key }}{{ if $value }}={{ $value }}{{ end }}"
            {{- end }}
            {{- end }}
            {{- if $.Values.networking.enabled }}
            - "--allocate-node-cidrs=true"
            - "--cluster-cidr={{ $.Values.networking.clusterCIDR }}"
            {{- end }}
            {{- if (eq (int $.Values.replicaCount) 1) }}
            - "--leader-elect=false"
            {{- end }}
          env:
            {{- range $key, $value := $.Values.env }}
            - name: {{ $key }}
              {{- tpl (toYaml $value) $ | nindent 14 }}
            {{- end }}
            {{- if $.Values.networking.enabled }}
            - name: HCLOUD_NETWORK
              {{- tpl (toYaml $.Values.networking.network) $ | nindent 14 }}
            {{- end }}
            {{- if not $.Values.monitoring.enabled }}
            - name: HCLOUD_METRICS_ENABLED
              value: "false"
            {{- end }}
            {{- if $.Values.robot.enabled }}
            - name: ROBOT_ENABLED
              value: "true"
            {{- end }}
          image: {{ $.Values.image.repository }}:{{ tpl $.Values.image.tag . }} # x-releaser-pleaser-version
          ports:
            {{- if $.Values.monitoring.enabled }}
            - name: metrics
              containerPort: 8233
            {{- end }}
          resources:
            {{- toYaml $.Values.resources | nindent 12 }}
          {{- with .Values.extraVolumeMounts }}
          volumeMounts:
            {{- toYaml . | nindent 12 }}
          {{- end }}
      {{- if .Values.priorityClassName }}
      priorityClassName: {{ .Values.priorityClassName | quote }}
      {{- end }}
      {{- with .Values.extraVolumes }}
      volumes:
        {{- toYaml . | nindent 8 }}
      {{- end }}
{{- end }}
@@ -0,0 +1,22 @@
{{ if (and $.Values.monitoring.enabled $.Values.monitoring.podMonitor.enabled) }}
{{ if $.Capabilities.APIVersions.Has "monitoring.coreos.com/v1/PodMonitor" }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: {{ include "hcloud-cloud-controller-manager.name" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- with $.Values.monitoring.podMonitor.labels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
  annotations:
    {{- range $key, $value := .Values.monitoring.podMonitor.annotations }}
    {{ $key }}: {{ $value | quote }}
    {{- end }}
spec:
  {{- tpl (toYaml $.Values.monitoring.podMonitor.spec) $ | nindent 2 }}
  selector:
    matchLabels:
      {{- include "hcloud-cloud-controller-manager.selectorLabels" . | nindent 6 }}
{{ end }}
{{ end }}
@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "hcloud-cloud-controller-manager.name" . }}
  namespace: {{ .Release.Namespace }}
@@ -0,0 +1,154 @@
# hccm program command line arguments.
# The following flags are managed by the chart and should *not* be set directly here:
# --allocate-node-cidrs
# --cluster-cidr
# --leader-elect
args:
  cloud-provider: hcloud
  allow-untagged-cloud: ""

  # Read issue #395 to understand how changes to this value affect you.
  # https://github.com/hetznercloud/hcloud-cloud-controller-manager/issues/395
  route-reconciliation-period: 30s

  # We do not use the webhooks feature and there is no need to bind a port that is unused.
  # https://github.com/kubernetes/kubernetes/issues/120043
  # https://github.com/hetznercloud/hcloud-cloud-controller-manager/issues/492
  webhook-secure-port: "0"
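  # How these render: the deployment/daemonset templates turn each entry of
  # this map into one CLI flag -- "key: value" becomes --key=value, an empty
  # string ("") becomes a bare --key, and a null value omits the flag entirely.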

# Change deployment kind from "Deployment" to "DaemonSet"
kind: Deployment

# change replicaCount (only used when kind is "Deployment")
replicaCount: 1

# hccm environment variables
env:
  # The following variables are managed by the chart and should *not* be set here:
  # HCLOUD_METRICS_ENABLED - see monitoring.enabled
  # HCLOUD_NETWORK - see networking.enabled
  # ROBOT_ENABLED - see robot.enabled

  # You can also use a file to provide secrets to the hcloud-cloud-controller-manager.
  # This is currently possible for HCLOUD_TOKEN, ROBOT_USER, and ROBOT_PASSWORD.
  # Use the env var appended with _FILE (e.g. HCLOUD_TOKEN_FILE) and set the value to the file path that should be read
  # The file must be provided externally (e.g. via secret injection).
  # Example:
  # HCLOUD_TOKEN_FILE:
  #   value: "/etc/hetzner/token"
  # to disable reading the token from the secret you have to disable the original env var:
  # HCLOUD_TOKEN: null

  HCLOUD_TOKEN:
    valueFrom:
      secretKeyRef:
        name: hcloud
        key: token

  ROBOT_USER:
    valueFrom:
      secretKeyRef:
        name: hcloud
        key: robot-user
        optional: true
  ROBOT_PASSWORD:
    valueFrom:
      secretKeyRef:
        name: hcloud
        key: robot-password
        optional: true

image:
  repository: docker.io/hetznercloud/hcloud-cloud-controller-manager
  tag: "v{{ $.Chart.Version }}"
  # Optionally specify an array of imagePullSecrets.
  # Secrets must be manually created in the namespace.
  # ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  # e.g:
  # pullSecrets:
  #   - myRegistryKeySecretName
  #
  pullSecrets: []

monitoring:
  # When enabled, the hccm Pod will serve metrics on port :8233
  enabled: true
  podMonitor:
    # When enabled (and metrics.enabled=true), a PodMonitor will be deployed to scrape metrics.
    # The PodMonitor [1] CRD must already exist in the target cluster.
    enabled: false
    # PodMonitor Labels
    labels: {}
    # release: kube-prometheus-stack
    # PodMonitor Annotations
    annotations: {}
    # PodMonitorSpec to be deployed. The "selector" field is set elsewhere and should *not* be used here.
    # https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.PodMonitorSpec
    spec:
      podMetricsEndpoints:
        - port: metrics

nameOverride: ~

networking:
  # If enabled, hcloud-ccm will be deployed with networking support.
  enabled: false
  # If networking is enabled, clusterCIDR must match the PodCIDR subnet your cluster has been configured with.
  # The default "10.244.0.0/16" assumes you're using Flannel with default configuration.
  clusterCIDR: 10.244.0.0/16
  network:
    valueFrom:
      secretKeyRef:
        name: hcloud
        key: network

# Resource requests for the deployed hccm Pod.
resources:
  requests:
    cpu: 100m
    memory: 50Mi

selectorLabels:
  app.kubernetes.io/name: '{{ include "hcloud-cloud-controller-manager.name" $ }}'
  app.kubernetes.io/instance: "{{ $.Release.Name }}"

additionalTolerations: []

# nodeSelector:
#   node-role.kubernetes.io/control-plane: ""
nodeSelector: {}

# Set the affinity for pods. (Only works with kind=Deployment)
affinity: {}

# pods priorityClassName
# ref: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption
priorityClassName: "system-cluster-critical"

robot:
  # Set to true to enable support for Robot (Dedicated) servers.
  enabled: false

rbac:
  # Create a cluster role binding with admin access for the service account.
  create: true

podLabels: {}

podAnnotations: {}

# Mounts the specified volume to the hcloud-cloud-controller-manager container.
extraVolumeMounts: []
# # Example
# extraVolumeMounts:
#   - name: token-volume
#     readOnly: true
#     mountPath: /var/run/secrets/hcloud

# Adds extra volumes to the pod.
extraVolumes: []
# # Example
# extraVolumes:
#   - name: token-volume
#     secret:
#       secretName: hcloud-token
172
packages/system/hetzner-ccm/values.yaml
Normal file
@@ -0,0 +1,172 @@
# hccm program command line arguments.
# The following flags are managed by the chart and should *not* be set directly here:
#   --allocate-node-cidrs
#   --cluster-cidr
#   --leader-elect
args:
  cloud-provider: hcloud
  allow-untagged-cloud: ""

  # Read issue #395 to understand how changes to this value affect you.
  # https://github.com/hetznercloud/hcloud-cloud-controller-manager/issues/395
  route-reconciliation-period: 30s

  # We do not use the webhooks feature, so there is no need to bind a port that would go unused.
  # https://github.com/kubernetes/kubernetes/issues/120043
  # https://github.com/hetznercloud/hcloud-cloud-controller-manager/issues/492
  webhook-secure-port: "0"
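Entries under args are rendered by the chart templates as command-line flags for the hccm binary, so tuning a flag is a values override, sketched below. Flag names follow the upstream hcloud-cloud-controller-manager binary; remember that the managed flags listed above must not be set here.

# Sketch: slow down route reconciliation via an args override.
args:
  cloud-provider: hcloud              # keep the defaults you still need
  allow-untagged-cloud: ""
  webhook-secure-port: "0"
  route-reconciliation-period: 60s    # reconcile routes less often than the 30s default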
# Change the deployment kind from "Deployment" to "DaemonSet" if desired.
kind: Deployment

# Change replicaCount (only used when kind is "Deployment").
replicaCount: 1
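To run one hccm instance per node instead of a replicated Deployment, the override is a single line, as in this sketch:

# Sketch: run hccm as a DaemonSet; replicaCount is then ignored.
kind: DaemonSet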
# hccm environment variables
env:
  # The following variables are managed by the chart and should *not* be set here:
  #   HCLOUD_METRICS_ENABLED - see monitoring.enabled
  #   HCLOUD_NETWORK         - see networking.enabled
  #   ROBOT_ENABLED          - see robot.enabled

  # You can also use a file to provide secrets to the hcloud-cloud-controller-manager.
  # This is currently possible for HCLOUD_TOKEN, ROBOT_USER, and ROBOT_PASSWORD.
  # Use the env var appended with _FILE (e.g. HCLOUD_TOKEN_FILE) and set the value to the file path that should be read.
  # The file must be provided externally (e.g. via secret injection).
  # Example:
  # HCLOUD_TOKEN_FILE:
  #   value: "/etc/hetzner/token"
  # To disable reading the token from the secret, you also have to disable the original env var:
  # HCLOUD_TOKEN: null

  HCLOUD_TOKEN:
    valueFrom:
      secretKeyRef:
        name: hcloud
        key: token

  ROBOT_USER:
    valueFrom:
      secretKeyRef:
        name: hcloud
        key: robot-user
        optional: true
  ROBOT_PASSWORD:
    valueFrom:
      secretKeyRef:
        name: hcloud
        key: robot-password
        optional: true

image:
  repository: docker.io/hetznercloud/hcloud-cloud-controller-manager
  tag: "v{{ $.Chart.Version }}"
  # Optionally specify an array of imagePullSecrets.
  # Secrets must be manually created in the namespace.
  # ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  # e.g.:
  # pullSecrets:
  #   - myRegistryKeySecretName
  #
  pullSecrets: []

monitoring:
  # When enabled, the hccm Pod will serve metrics on port :8233.
  enabled: false
  podMonitor:
    # When enabled (and monitoring.enabled=true), a PodMonitor will be deployed to scrape metrics.
    # The PodMonitor [1] CRD must already exist in the target cluster.
    enabled: false
    # PodMonitor Labels
    labels: {}
    # release: kube-prometheus-stack
    # PodMonitor Annotations
    annotations: {}
    # PodMonitorSpec to be deployed. The "selector" field is set elsewhere and should *not* be used here.
    # [1]: https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.PodMonitorSpec
    spec:
      podMetricsEndpoints:
        - port: metrics

nameOverride: "hetzner-ccm"

networking:
  # If enabled, hcloud-ccm will be deployed with networking support.
  enabled: false
  # If networking is enabled, clusterCIDR must match the PodCIDR subnet your cluster has been configured with.
  # The default "10.244.0.0/16" assumes you're using Flannel with its default configuration.
  clusterCIDR: 10.244.0.0/16
  network:
    valueFrom:
      secretKeyRef:
        name: hcloud
        key: network

# Resource requests for the deployed hccm Pod.
resources:
  cpu: ""
  memory: ""

selectorLabels:
  app.kubernetes.io/name: '{{ include "hcloud-cloud-controller-manager.name" $ }}'
  app.kubernetes.io/instance: "{{ $.Release.Name }}"

additionalTolerations: []

# nodeSelector:
#   node-role.kubernetes.io/control-plane: ""
nodeSelector: {}

# Set the affinity for pods. (Only works with kind=Deployment.)
affinity: {}

# Pod priorityClassName
# ref: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption
priorityClassName: "system-cluster-critical"

robot:
  # Set to true to enable support for Robot (Dedicated) servers.
  enabled: false

rbac:
  # Create a cluster role binding with admin access for the service account.
  create: true

podLabels: {}

podAnnotations: {}

# Mounts the specified volume to the hcloud-cloud-controller-manager container.
extraVolumeMounts: []
# # Example
# extraVolumeMounts:
#   - name: token-volume
#     readOnly: true
#     mountPath: /var/run/secrets/hcloud

# Adds extra volumes to the pod.
extraVolumes: []
# # Example
# extraVolumes:
#   - name: token-volume
#     secret:
#       secretName: hcloud-token
2
packages/system/hetzner-robotlb/Chart.yaml
Normal file
@@ -0,0 +1,2 @@
name: hetzner-robotlb
version: 0.1.3 # Placeholder; the actual version is set automatically during the build process
9
packages/system/hetzner-robotlb/Makefile
Normal file
@@ -0,0 +1,9 @@
export NAME=hetzner-robotlb
export NAMESPACE=kube-system

include ../../../scripts/package.mk

update:
	rm -rf charts
	mkdir -p charts
	helm pull oci://ghcr.io/intreecom/charts/robotlb --untar --untardir charts
23
packages/system/hetzner-robotlb/charts/robotlb/.helmignore
Normal file
@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
6
packages/system/hetzner-robotlb/charts/robotlb/Chart.yaml
Normal file
@@ -0,0 +1,6 @@
apiVersion: v2
appVersion: 0.0.5
description: A Helm chart for robotlb (loadbalancer on hetzner cloud).
name: robotlb
type: application
version: 0.1.3
4
packages/system/hetzner-robotlb/charts/robotlb/templates/NOTES.txt
Normal file
@@ -0,0 +1,4 @@
The RobotLB Operator was successfully installed.
Please follow the README to create load-balanced services.

README: https://github.com/intreecom/robotlb
62
packages/system/hetzner-robotlb/charts/robotlb/templates/_helpers.tpl
Normal file
@@ -0,0 +1,62 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "robotlb.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If the release name contains the chart name, it will be used as the full name.
*/}}
{{- define "robotlb.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "robotlb.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Common labels
*/}}
{{- define "robotlb.labels" -}}
helm.sh/chart: {{ include "robotlb.chart" . }}
{{ include "robotlb.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "robotlb.selectorLabels" -}}
app.kubernetes.io/name: {{ include "robotlb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{/*
Create the name of the service account to use
*/}}
{{- define "robotlb.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "robotlb.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
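These helpers are consumed by the chart's other templates in the usual Helm pattern. A minimal sketch of how a Deployment might wire them up follows; it is illustrative only, not the chart's actual deployment.yaml.

# Sketch: typical consumption of the helpers defined above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "robotlb.fullname" . }}
  labels:
    {{- include "robotlb.labels" . | nindent 4 }}
spec:
  selector:
    matchLabels:
      {{- include "robotlb.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "robotlb.selectorLabels" . | nindent 8 }}
    spec:
      serviceAccountName: {{ include "robotlb.serviceAccountName" . }}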
Some files were not shown because too many files have changed in this diff.