Compare commits

...

34 Commits

Author SHA1 Message Date
Timofei Larkin
8267072da2 Merge pull request #678 from aenix-io/release-v0.27.0
Prepare release v0.27.0
2025-03-06 22:17:33 +04:00
Timofei Larkin
8dd8a718a7 Prepare release v0.27.0 2025-03-06 18:54:54 +03:00
Timofei Larkin
63358e3e6c Recreate etcd pods and certificates on update (#675)
Since updating from 2.5.0 to 2.6.0 renews all certificates, including the
trusted CAs for etcd, all etcd pods need a restart. Because many people had
problems with their etcd clusters after the update, we decided to script the
procedure, so the renewal of certs and pods happens automatically and in a
predictable order. The hook fires when upgrading from <2.6.1 to >=2.6.1.
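
The gate, condensed from the hook templates further down in this diff, is a Helm `lookup` against a version-tracking ConfigMap plus a `semverCompare`; a minimal sketch:

```yaml
{{- /* sketch of the gating logic: rotate certificates only when the
       previously deployed chart version is older than 2.6.1 */}}
{{- $shouldUpdateCerts := true }}
{{- $configMap := lookup "v1" "ConfigMap" .Release.Namespace "etcd-deployed-version" }}
{{- if $configMap }}
{{- $deployedVersion := index $configMap "data" "version" }}
{{- if $deployedVersion | semverCompare ">= 2.6.1" }}
{{- $shouldUpdateCerts = false }}
{{- end }}
{{- end }}
{{- if $shouldUpdateCerts }}
# post-upgrade Job that deletes the etcd TLS secrets and pods goes here
{{- end }}
```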

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
  - Introduced pre- and post-upgrade tasks for the etcd package.
  - Added supporting resources, including roles, role bindings, service accounts, and a ConfigMap to facilitate smoother upgrade operations.
- **Chores**
  - Updated the application version from 2.6.0 to 2.6.1.
  - Revised version mapping entries to reflect the updated release.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-03-06 15:24:12 +01:00
xy2
063439ac94 Raise maxLabelsPerTimeseries for VictoriaMetrics vminsert. (#677) 2025-03-06 14:44:58 +01:00
xy2
a7425b0caf Add linstor plunger scripts as sidecars (#672)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced new management scripts for automating controller
maintenance tasks and satellite operations, including automatic cleanup
and reconnection of resources.
- Added dynamic configuration templates to streamline container image
management and refine deployment settings for enhanced security and
monitoring.
- Rolled out updated configurations for satellite deployments, improving
network performance and resource handling.
- New YAML configurations for Linstor satellites have been added,
enhancing TLS management and pod specifications.

- **Revert**
- Removed legacy configuration resources to consolidate and simplify
system management.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-03-05 18:49:05 +01:00
Timofei Larkin
9ad81f2577 Patch kamaji to hardcode ControlplaneEndpoint port (#674)
When the port number is omitted in
TenantControlPlane.spec.controlPlane.ingress.hostname, Kamaji defaults to
the port number assigned in TenantControlPlane.status. But the internal
port is usually 6443, while external clients of the k8s API expect 443. If
we explicitly set hostname:443, we run into clastix/kamaji#679, so as a
quick hotfix we are temporarily hardcoding 443.
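
For illustration, a TenantControlPlane fragment that triggers the fallback; the hostname value is made up:

```yaml
# hypothetical excerpt: no port in the ingress hostname, so Kamaji falls
# back to the port assigned in .status (usually 6443) instead of 443;
# writing "k8s.example.org:443" instead would hit clastix/kamaji#679
spec:
  controlPlane:
    ingress:
      hostname: k8s.example.org
```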

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Chores**
- Updated the control plane connection configuration to consistently use
the secure port 443, ensuring reliable and predictable endpoint
behavior.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-03-05 18:46:11 +01:00
Andrei Kvapil
485b1dffb7 Merge pull request #673 from aenix-io/KubevirtMachineTemplate
Kubernetes: fix namespace for KubevirtMachineTemplate
2025-03-05 17:55:36 +01:00
Andrei Kvapil
1877f17ca1 Kubernetes: fix namespace for KubevirtMachineTemplate
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-03-05 16:28:38 +01:00
Andrei Kvapil
43e593c72d Merge pull request #670 from aenix-io/upd-etcd-operator0.4.1
Update etcd-operator v0.4.1
2025-03-05 15:26:02 +01:00
Andrei Kvapil
159d0a2294 Update etcd-operator v0.4.1 2025-03-05 15:25:38 +01:00
Andrei Kvapil
6765f66e11 Merge pull request #669 from aenix-io/upd-cozy-proxy-0.1.3
Update cozy-proxy v0.1.3
2025-03-05 15:04:04 +01:00
Andrei Kvapil
73215dca16 Update cozy-proxy v0.1.3 2025-03-05 15:03:20 +01:00
Andrei Kvapil
85499e2bdc Merge pull request #668 from aenix-io/bump-monitoring2
bump monitoring chart version
2025-03-05 15:00:41 +01:00
Andrei Kvapil
06daf34102 bump monitoring chart version 2025-03-05 15:00:14 +01:00
Andrei Kvapil
47dfaaafe1 Update Cluster-API and providers (#667)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
  - Introduced dynamic IP address management support.
  - Enabled comprehensive lifecycle hooks that trigger during both installation and upgrades.
  - Expanded configuration options with new fields for flexible deployments and customizations.

- **Chores**
  - Upgraded the application and chart versions.
  - Improved deployment settings with enhanced health checks, diagnostic endpoints, and service account management.
  - Updated provider versions to enhance overall stability and performance.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-03-05 14:52:23 +01:00
xy2
c60b7c0730 Import Piraeus dashboard and alerts. (#658)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Expanded the monitored dashboards with a new storage dashboard entry.
- Introduced proactive alert configurations that cover key storage
components.
- Added templated alert management to streamline dynamic configuration.
- Enhanced metric collection by integrating monitoring endpoints for
storage components.
- Delivered a comprehensive dashboard offering real-time insights into
storage performance.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
Co-authored-by: Andrei Kvapil <kvapss@gmail.com>
2025-03-05 14:51:23 +01:00
Andrei Kvapil
266d097cab Fix regression for updating Kamaji (#665)
The Kamaji update was introduced in
https://github.com/aenix-io/cozystack/pull/633,
but the Helm chart was not actually updated there.

This caused an issue with creating new clusters.
Ref https://github.com/clastix/kamaji/issues/623

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Revised application and chart version information alongside updated
dependency requirements.
- **New Features**
- Added new configuration options for tenant control planes, including
enhanced network and load balancer settings.
- **Documentation**
- Updated version indicators and clarified configuration details for
default datastore behavior.
- **Bug Fixes**
- Improved deployment stability by conditionally applying the default
datastore setting to avoid potential errors.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-03-05 14:14:41 +01:00
Andrei Kvapil
d4452ea708 Merge pull request #660 from xy2/169-victoria-limits
Increase VMSelect default cpu limit
2025-03-05 14:13:32 +01:00
Andrei Kvapil
ec603bc3ef CAPI-operator: Remove the invalid caBundle (#666)
Upstream:
- https://github.com/kubernetes-sigs/cluster-api-operator/issues/590
- https://github.com/kubernetes-sigs/cluster-api-operator/pull/591

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Removed an outdated internal configuration setting for webhook
communication. This cleanup streamlines the system’s setup while keeping
public functionality unchanged.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-03-05 14:05:47 +01:00
Timofei Larkin
48af411878 Merge pull request #664 from klinch0/feature/change-severity-cert-alerts
feature/change-severity-for-kube-client-certificate-expiration
2025-03-05 13:56:23 +04:00
Timofei Larkin
57d0a236df Merge pull request #656 from klinch0/feature/add-workloads
feature/add-workloads
2025-03-05 13:46:19 +04:00
kklinch0
554d5dbbca feature/change-severity-for-kube-client-certificate-expiration 2025-03-05 12:41:26 +03:00
kklinch0
0793b1eaf6 feature/add-workload-monitors 2025-03-05 12:15:23 +03:00
Timofei Larkin
425ce77f60 Merge pull request #655 from klinch0/feature/add-multi-dc
feature/add-multi-dc-for-pg
2025-03-05 12:51:36 +04:00
kklinch0
88729e4124 rename globalAppTopologySpreadConstraints 2025-03-05 11:39:41 +03:00
Timofei Larkin
48f6a248c8 Merge pull request #663 from klinch0/bugfix/mv-ds-var
bugfix/mv-ds-var-for-goldpinger
2025-03-05 12:08:32 +04:00
kklinch0
9714b130a8 bugfix/mv-ds-var-for-goldpinger 2025-03-05 11:05:59 +03:00
kklinch0
4cce138d31 feature/add-topologyspreadconstraints-pg 2025-03-05 10:41:43 +03:00
Timofei Larkin
e7d6f2dfa3 Merge pull request #661 from klinch0/feature/add-ch-monitoring
feature/add-ch-dashboard
2025-03-04 20:22:15 +04:00
Timofei Larkin
b68a72614a Add TL as codeowner (#662)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Updated internal team management roles to include an additional
responsible contributor.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-03-04 17:19:21 +01:00
kklinch0
36b66a681d feature/add-ch-dashboard 2025-03-04 10:53:51 +03:00
Denis Seleznev
3e273c03b6 Increase the default cpu limit for vminsert. 2025-03-03 19:31:27 +01:00
Denis Seleznev
da0437a774 Make it possible to set cpu limit too. 2025-03-03 19:31:05 +01:00
Denis Seleznev
78cff8c223 Change defaults calculation logic. 2025-03-03 19:18:24 +01:00
102 changed files with 20954 additions and 1927 deletions

2
.github/CODEOWNERS vendored
View File

@@ -1 +1 @@
* @kvaps
* @kvaps @lllamnyp

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -68,7 +68,7 @@ spec:
serviceAccountName: cozystack
containers:
- name: cozystack
image: "ghcr.io/aenix-io/cozystack/cozystack:v0.26.1"
image: "ghcr.io/aenix-io/cozystack/cozystack:v0.27.0"
env:
- name: KUBERNETES_SERVICE_HOST
value: localhost
@@ -87,7 +87,7 @@ spec:
fieldRef:
fieldPath: metadata.name
- name: assets
image: "ghcr.io/aenix-io/cozystack/cozystack:v0.26.1"
image: "ghcr.io/aenix-io/cozystack/cozystack:v0.27.0"
command:
- /usr/bin/cozystack-assets-server
- "-dir=/cozystack/assets"

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.6.1
version: 0.6.2
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/clickhouse-backup:0.6.1@sha256:7a99cabdfd541f863aa5d1b2f7b49afd39838fb94c8448986634a1dc9050751c
ghcr.io/aenix-io/cozystack/clickhouse-backup:0.6.2@sha256:7a99cabdfd541f863aa5d1b2f7b49afd39838fb94c8448986634a1dc9050751c

View File

@@ -17,3 +17,10 @@ rules:
resourceNames:
- {{ .Release.Name }}-credentials
verbs: ["get", "list", "watch"]
- apiGroups:
- cozystack.io
resources:
- workloadmonitors
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]

View File

@@ -0,0 +1,13 @@
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}
spec:
replicas: {{ .Values.replicas }}
minReplicas: 1
kind: clickhouse
type: clickhouse
selector:
clickhouse.altinity.com/chi: {{ $.Release.Name }}
version: {{ $.Chart.Version }}

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.4.1
version: 0.4.2
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/postgres-backup:0.8.0@sha256:d1f7692b6761f46f24687d885ec335330280346ae4a9ff28b3179681b36106b7
ghcr.io/aenix-io/cozystack/postgres-backup:0.9.0@sha256:6cc07280c0e2432ed37b2646faf82efe9702c6d93504844744aa505b890cac6f

View File

@@ -17,3 +17,10 @@ rules:
resourceNames:
- {{ .Release.Name }}-credentials
verbs: ["get", "list", "watch"]
- apiGroups:
- cozystack.io
resources:
- workloadmonitors
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]

View File

@@ -6,7 +6,13 @@ metadata:
spec:
instances: {{ .Values.replicas }}
enableSuperuserAccess: true
{{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }}
{{- if $configMap }}
{{- $rawConstraints := get $configMap.data "globalAppTopologySpreadConstraints" }}
{{- if $rawConstraints }}
{{- $rawConstraints | fromYaml | toYaml | nindent 2 }}
{{- end }}
{{- end }}
minSyncReplicas: {{ .Values.quorum.minSyncReplicas }}
maxSyncReplicas: {{ .Values.quorum.maxSyncReplicas }}
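
For reference, the `lookup` above expects a ConfigMap shaped roughly as follows; the namespace, name, and data key come from the template, while the constraint itself is illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cozystack-scheduling
  namespace: cozy-system
data:
  # parsed with fromYaml and inlined into the Cluster spec
  globalAppTopologySpreadConstraints: |
    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            cnpg.io/podRole: instance  # illustrative selector
```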

View File

@@ -0,0 +1,13 @@
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}
spec:
replicas: {{ .Values.replicas }}
minReplicas: 1
kind: ferretdb
type: ferretdb
selector:
app: {{ $.Release.Name }}
version: {{ $.Chart.Version }}

View File

@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/nginx-cache:0.3.1@sha256:854b3908114de1876038eb9902577595cce93553ce89bf75ac956d22f1e8b8cc
ghcr.io/aenix-io/cozystack/nginx-cache:0.3.1@sha256:72ced2b1d8da2c784d6231af6cb0752170f6ea845c73effb11adb006b7a7fbb2

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.3.2
version: 0.3.3
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -17,3 +17,11 @@ rules:
resourceNames:
- {{ .Release.Name }}-clients-ca
verbs: ["get", "list", "watch"]
- apiGroups:
- cozystack.io
resources:
- workloadmonitors
resourceNames:
- {{ .Release.Name }}
- {{ $.Release.Name }}-zookeeper
verbs: ["get", "list", "watch"]

View File

@@ -0,0 +1,30 @@
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}
spec:
replicas: {{ .Values.replicas }}
minReplicas: 1
kind: kafka
type: kafka
selector:
app.kubernetes.io/instance: {{ $.Release.Name }}
app.kubernetes.io/name: kafka
version: {{ $.Chart.Version }}
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}-zookeeper
spec:
replicas: {{ .Values.replicas }}
minReplicas: 1
kind: kafka
type: zookeeper
selector:
app.kubernetes.io/instance: {{ $.Release.Name }}
app.kubernetes.io/name: zookeeper
version: {{ $.Chart.Version }}

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.15.1
version: 0.15.2
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/cluster-autoscaler:0.15.1@sha256:73701e37727eedaafdf9efe4baefcf0835f064ee8731219f0c0186c0d0781a5c
ghcr.io/aenix-io/cozystack/cluster-autoscaler:0.15.2@sha256:077023fc24d466ac18f8d43fec41b9a14c0b3d32c0013e836e7448e7a1e7d661

View File

@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/kubevirt-cloud-provider:0.15.1@sha256:02037bb7a75b35ca1e34924f13e7fa7b25bac2017ddbd7e9ed004c0ff368cce3
ghcr.io/aenix-io/cozystack/kubevirt-cloud-provider:0.15.2@sha256:5ef7198eaaa4e422caa5f3d8f906c908046f1fbaf2d7a1e72b5a98627db3bda8

View File

@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/kubevirt-csi-driver:0.15.1@sha256:a86d8a4722b81e89820ead959874524c4cc86654c22ad73c421bbf717d62c3f3
ghcr.io/aenix-io/cozystack/kubevirt-csi-driver:0.15.2@sha256:f862c233399b213e376628ffbb55304f08d171e991371d5bde067b47890cc959

View File

@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/ubuntu-container-disk:v1.30.1@sha256:6f19f3f8a68372c5b212e98a79ff132cc20641bc46fc4b8d359158945dc04043
ghcr.io/aenix-io/cozystack/ubuntu-container-disk:v1.30.1@sha256:7ce5467b8f34ef7897141b0ca96c455459c2729cae5824a2c20f32b01a841f90

View File

@@ -250,7 +250,7 @@ spec:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: KubevirtMachineTemplate
name: {{ $.Release.Name }}-{{ $groupName }}-{{ $kubevirtmachinetemplateHash }}
namespace: default
namespace: {{ $.Release.Namespace }}
version: v1.30.1
---
apiVersion: cluster.x-k8s.io/v1beta1

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.5.2
version: 0.5.3
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/mariadb-backup:0.5.2@sha256:9f0b2bc5135e10b29edb2824309059f5b4c4e8b744804b2cf55381171f335675
ghcr.io/aenix-io/cozystack/mariadb-backup:0.5.3@sha256:89641695e0c1f4ad7b82697c27a2245bb4a1bc403845235ed0df98e04aa9a71f

View File

@@ -18,3 +18,10 @@ rules:
resourceNames:
- {{ .Release.Name }}-credentials
verbs: ["get", "list", "watch"]
- apiGroups:
- cozystack.io
resources:
- workloadmonitors
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]

View File

@@ -0,0 +1,13 @@
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}
spec:
replicas: {{ .Values.replicas }}
minReplicas: 1
kind: mysql
type: mysql
selector:
app.kubernetes.io/instance: {{ $.Release.Name }}
version: {{ $.Chart.Version }}

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.4.0
version: 0.4.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -17,3 +17,10 @@ rules:
resourceNames:
- {{ .Release.Name }}-credentials
verbs: ["get", "list", "watch"]
- apiGroups:
- cozystack.io
resources:
- workloadmonitors
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]

View File

@@ -0,0 +1,13 @@
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}
spec:
replicas: {{ .Values.replicas }}
minReplicas: 1
kind: nats
type: nats
selector:
app.kubernetes.io/instance: {{ $.Release.Name }}-system
version: {{ $.Chart.Version }}

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.8.0
version: 0.9.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/postgres-backup:0.8.0@sha256:d1f7692b6761f46f24687d885ec335330280346ae4a9ff28b3179681b36106b7
ghcr.io/aenix-io/cozystack/postgres-backup:0.9.0@sha256:6cc07280c0e2432ed37b2646faf82efe9702c6d93504844744aa505b890cac6f

View File

@@ -6,7 +6,13 @@ metadata:
spec:
instances: {{ .Values.replicas }}
enableSuperuserAccess: true
{{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }}
{{- if $configMap }}
{{- $rawConstraints := get $configMap.data "globalAppTopologySpreadConstraints" }}
{{- if $rawConstraints }}
{{- $rawConstraints | fromYaml | toYaml | nindent 2 }}
{{- end }}
{{- end }}
postgresql:
parameters:
max_wal_senders: "30"

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.4.3
version: 0.4.4
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -20,3 +20,10 @@ rules:
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
- apiGroups:
- cozystack.io
resources:
- workloadmonitors
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]

View File

@@ -0,0 +1,13 @@
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}
spec:
replicas: {{ .Values.replicas }}
minReplicas: 1
kind: rabbitmq
type: rabbitmq
selector:
app.kubernetes.io/name: {{ $.Release.Name }}
version: {{ $.Chart.Version }}

View File

@@ -6,13 +6,15 @@ clickhouse 0.3.0 b00621e
clickhouse 0.4.0 320fc32
clickhouse 0.5.0 2a4768a5
clickhouse 0.6.0 18bbdb67
clickhouse 0.6.1 HEAD
clickhouse 0.6.1 b7375f73
clickhouse 0.6.2 HEAD
ferretdb 0.1.0 4ffa8615
ferretdb 0.1.1 5ca8823
ferretdb 0.2.0 adaf603
ferretdb 0.3.0 aa2f553
ferretdb 0.4.0 def2eb0f
ferretdb 0.4.1 HEAD
ferretdb 0.4.1 a9555210
ferretdb 0.4.2 HEAD
http-cache 0.1.0 a956713
http-cache 0.2.0 5ca8823
http-cache 0.3.0 fab5940
@@ -24,7 +26,8 @@ kafka 0.2.2 d0758692
kafka 0.2.3 5ca8823
kafka 0.3.0 c07c4bbd
kafka 0.3.1 b7375f73
kafka 0.3.2 HEAD
kafka 0.3.2 b75aaf17
kafka 0.3.3 HEAD
kubernetes 0.1.0 f642698
kubernetes 0.2.0 7cd7de73
kubernetes 0.3.0 7caccec1
@@ -45,19 +48,22 @@ kubernetes 0.13.0 ced8e5b9
kubernetes 0.14.0 bfbde07c
kubernetes 0.14.1 fde4bcfa
kubernetes 0.15.0 cb7b8158
kubernetes 0.15.1 HEAD
kubernetes 0.15.1 77df31e1
kubernetes 0.15.2 HEAD
mysql 0.1.0 f642698
mysql 0.2.0 8b975ff0
mysql 0.3.0 5ca8823
mysql 0.4.0 93018c4
mysql 0.5.0 4b84798
mysql 0.5.1 fab5940b
mysql 0.5.2 HEAD
mysql 0.5.2 d8a92aa3
mysql 0.5.3 HEAD
nats 0.1.0 5ca8823
nats 0.2.0 c07c4bbd
nats 0.3.0 78366f19
nats 0.3.1 b7375f73
nats 0.4.0 HEAD
nats 0.4.0 da1e705a
nats 0.4.1 HEAD
postgres 0.1.0 f642698
postgres 0.2.0 7cd7de73
postgres 0.2.1 4a97e297
@@ -69,14 +75,16 @@ postgres 0.6.0 2a4768a
postgres 0.6.2 54fd61c
postgres 0.7.0 dc9d8bb
postgres 0.7.1 175a65f
postgres 0.8.0 HEAD
postgres 0.8.0 cb7b8158
postgres 0.9.0 HEAD
rabbitmq 0.1.0 f642698
rabbitmq 0.2.0 5ca8823
rabbitmq 0.3.0 9e33dc0
rabbitmq 0.4.0 36d8855
rabbitmq 0.4.1 35536bb
rabbitmq 0.4.2 00b2834e
rabbitmq 0.4.3 HEAD
rabbitmq 0.4.3 d8a92aa3
rabbitmq 0.4.4 HEAD
redis 0.1.1 f642698
redis 0.2.0 5ca8823
redis 0.3.0 c07c4bbd

View File

@@ -1,2 +1,2 @@
cozystack:
image: ghcr.io/aenix-io/cozystack/cozystack:v0.26.1@sha256:67c6eb4da3baf2208df9b2ed24cbf758a2180bb3a071ce53141c21b8d17263cf
image: ghcr.io/aenix-io/cozystack/cozystack:v0.27.0@sha256:aac04571e99e13653f08e6ccc2b2214032455af547f9a887d01f1483e30d2915

View File

@@ -1,2 +1,2 @@
e2e:
image: ghcr.io/aenix-io/cozystack/e2e-sandbox:v0.26.1@sha256:e034c6d4232ffe6f87c24ae44100a63b1869210e484c929efac33ffcf60b18b1
image: ghcr.io/aenix-io/cozystack/e2e-sandbox:v0.27.0@sha256:1380b550c37c7316d924c9827122eb6fbb8e7da9aad8014f90b010b40f6c744d

View File

@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/matchbox:v0.26.1@sha256:f5d1e0f439f49e980888ed53a4bcc65fa97b1c4bc0df86abaa17de1a5a1f71a3
ghcr.io/aenix-io/cozystack/matchbox:v0.27.0@sha256:ef53e59943706fd9bce33b021b11ef469b44f97a184661f7ac24eb5f1b57fe9e

View File

@@ -3,4 +3,4 @@ name: etcd
description: Storage for Kubernetes clusters
icon: /logos/etcd.svg
type: application
version: 2.6.0
version: 2.6.1

View File

@@ -0,0 +1,39 @@
{{- $shouldUpdateCerts := true }}
{{- $configMap := lookup "v1" "ConfigMap" .Release.Namespace "etcd-deployed-version" }}
{{- if $configMap }}
{{- $deployedVersion := index $configMap "data" "version" }}
{{- if $deployedVersion | semverCompare ">= 2.6.1" }}
{{- $shouldUpdateCerts = false }}
{{- end }}
{{- end }}
{{- if $shouldUpdateCerts }}
---
apiVersion: batch/v1
kind: Job
metadata:
name: etcd-hook
annotations:
helm.sh/hook: post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
spec:
template:
metadata:
labels:
policy.cozystack.io/allow-to-apiserver: "true"
spec:
serviceAccountName: etcd-hook
containers:
- name: kubectl
image: bitnami/kubectl:latest
command:
- sh
args:
- -exc
- |-
kubectl --namespace={{ .Release.Namespace }} delete secrets etcd-ca-tls etcd-peer-ca-tls
sleep 10
kubectl --namespace={{ .Release.Namespace }} delete secrets etcd-client-tls etcd-peer-tls etcd-server-tls
kubectl --namespace={{ .Release.Namespace }} delete pods --selector=app.kubernetes.io/instance=etcd,app.kubernetes.io/managed-by=etcd-operator,app.kubernetes.io/name=etcd,cozystack.io/service=etcd
restartPolicy: Never
{{- end }}

View File

@@ -0,0 +1,26 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
annotations:
helm.sh/hook: post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
name: etcd-hook
rules:
- apiGroups:
- ""
resources:
- secrets
- pods
verbs:
- get
- list
- watch
- delete
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- list
- watch

View File

@@ -0,0 +1,15 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: etcd-hook
annotations:
helm.sh/hook: post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: etcd-hook
subjects:
- kind: ServiceAccount
name: etcd-hook
namespace: {{ .Release.Namespace | quote }}

View File

@@ -0,0 +1,7 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: etcd-hook
annotations:
helm.sh/hook: post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded

View File

@@ -0,0 +1,6 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: etcd-deployed-version
data:
version: {{ .Chart.Version }}

View File

@@ -3,4 +3,4 @@ name: monitoring
description: Monitoring and observability stack
icon: /logos/monitoring.svg
type: application
version: 1.8.0
version: 1.8.1

View File

@@ -36,3 +36,5 @@ flux/flux-control-plane
flux/flux-stats
kafka/strimzi-kafka
goldpinger/goldpinger
clickhouse/altinity-clickhouse-operator-dashboard
storage/linstor

View File

@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/grafana:1.8.0@sha256:0377abd3cb2c6e27b12ac297f1859aa4d550f1aa14989f824f2315d0dfd1a5b2
ghcr.io/aenix-io/cozystack/grafana:1.8.1@sha256:0377abd3cb2c6e27b12ac297f1859aa4d550f1aa14989f824f2315d0dfd1a5b2

View File

@@ -5,6 +5,13 @@ metadata:
name: alerta-db
spec:
instances: 2
{{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }}
{{- if $configMap }}
{{- $rawConstraints := get $configMap.data "globalAppTopologySpreadConstraints" }}
{{- if $rawConstraints }}
{{- $rawConstraints | fromYaml | toYaml | nindent 2 }}
{{- end }}
{{- end }}
storage:
size: {{ required ".Values.alerta.storage is required" .Values.alerta.storage }}
{{- with .Values.alerta.storageClassName }}

View File

@@ -6,7 +6,13 @@ spec:
instances: 2
storage:
size: {{ .Values.grafana.db.size }}
{{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }}
{{- if $configMap }}
{{- $rawConstraints := get $configMap.data "globalAppTopologySpreadConstraints" }}
{{- if $rawConstraints }}
{{- $rawConstraints | fromYaml | toYaml | nindent 2 }}
{{- end }}
{{- end }}
monitoring:
enablePodMonitor: true

View File

@@ -8,29 +8,32 @@ spec:
replicationFactor: 2
retentionPeriod: {{ .retentionPeriod | quote }}
vminsert:
extraArgs:
# kubevirt and other systems produce a lot of labels
# it's usually more than the default of 30
maxLabelsPerTimeseries: "60"
replicaCount: 2
resources:
{{- if and (hasKey . "vminsert") (hasKey .vminsert "resources") }}
{{- toYaml .vminsert.resources | nindent 6 }}
{{- else }}
limits:
memory: 1000Mi
{{- with . | dig "vminsert" "resources" "limits" "cpu" nil }}
cpu: {{ . | quote }}
{{- end }}
memory: {{ . | dig "vminsert" "resources" "limits" "memory" "1000Mi" }}
requests:
cpu: 100m
memory: 500Mi
{{- end }}
cpu: {{ . | dig "vminsert" "resources" "requests" "cpu" "500m" }}
memory: {{ . | dig "vminsert" "resources" "requests" "memory" "500Mi" }}
vmselect:
replicaCount: 2
resources:
{{- if and (hasKey . "vmselect") (hasKey .vmselect "resources") }}
{{- toYaml .vmselect.resources | nindent 6 }}
{{- else }}
limits:
memory: 1000Mi
# if we don't set the cpu limit, victoriametrics-operator will set 500m here, which is ridiculously small
# see internal/config/config.go in victoriametrics-operator
# 2 vCPU is the bare minimum for a **single** Grafana user
cpu: {{ . | dig "vmselect" "resources" "limits" "cpu" "2000m" }}
memory: {{ . | dig "vmselect" "resources" "limits" "memory" "1000Mi" }}
requests:
cpu: 100m
memory: 500Mi
{{- end }}
cpu: {{ . | dig "vmselect" "resources" "requests" "cpu" "500m" }}
memory: {{ . | dig "vmselect" "resources" "requests" "memory" "500Mi" }}
extraArgs:
search.maxUniqueTimeseries: "600000"
vmalert.proxyURL: http://vmalert-{{ .name }}.{{ $.Release.Namespace }}.svc:8080
@@ -48,15 +51,14 @@ spec:
vmstorage:
replicaCount: 2
resources:
{{- if and (hasKey . "vmstorage") (hasKey .vmstorage "resources") }}
{{- toYaml .vmstorage.resources | nindent 6 }}
{{- else }}
limits:
memory: 2048Mi
{{- with . | dig "vmstorage" "resources" "limits" "cpu" nil }}
cpu: {{ . | quote }}
{{- end }}
memory: {{ . | dig "vmstorage" "resources" "limits" "memory" "2048Mi" }}
requests:
cpu: 100m
memory: 500Mi
{{- end }}
cpu: {{ . | dig "vmstorage" "resources" "requests" "cpu" "100m" }}
memory: {{ . | dig "vmstorage" "resources" "requests" "memory" "500Mi" }}
storage:
volumeClaimTemplate:
spec:
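
With the `dig`-based defaults above, an override only needs to name the fields being changed; a sketch of the expected shape (where these keys sit within the chart's values file is not shown in this diff):

```yaml
# illustrative override: raise only the vmselect cpu limit; every other
# request and limit falls back to the defaults baked into the template
vmselect:
  resources:
    limits:
      cpu: "4"
```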

View File

@@ -7,7 +7,8 @@ etcd 2.2.0 5ca8823
etcd 2.3.0 b908400d
etcd 2.4.0 cb7b8158
etcd 2.5.0 861e6c46
etcd 2.6.0 HEAD
etcd 2.6.0 a7425b0
etcd 2.6.1 HEAD
info 1.0.0 HEAD
ingress 1.0.0 f642698
ingress 1.1.0 838bee5d
@@ -28,7 +29,8 @@ monitoring 1.5.4 d4634797
monitoring 1.6.0 cb7b8158
monitoring 1.6.1 3bb97596
monitoring 1.7.0 749110aa
monitoring 1.8.0 HEAD
monitoring 1.8.0 80b4c151
monitoring 1.8.1 HEAD
seaweedfs 0.1.0 5ca8823
seaweedfs 0.2.0 9e33dc0
seaweedfs 0.2.1 249bf35

View File

@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/s3manager:v0.5.0@sha256:efd4a57f1b4b74871181d676dddfcac95c3a3a1e7cc244e21647c6114a0e6438
ghcr.io/aenix-io/cozystack/s3manager:v0.5.0@sha256:3bf81b4cc5fdd5b99da40a663e15c649b2d992cd933bd56f8bb1bc9dd41a7b11

View File

@@ -1,6 +1,6 @@
apiVersion: v2
appVersion: 0.11.0
appVersion: 0.17.0
description: Cluster API Operator
name: cluster-api-operator
type: application
version: 0.11.0
version: 0.17.0

View File

@@ -26,7 +26,7 @@ apiVersion: v1
kind: Namespace
metadata:
annotations:
"helm.sh/hook": "post-install"
"helm.sh/hook": "post-install,post-upgrade"
"helm.sh/hook-weight": "1"
"argocd.argoproj.io/sync-wave": "1"
name: {{ $addonNamespace }}
@@ -37,7 +37,7 @@ metadata:
name: {{ $addonName }}
namespace: {{ $addonNamespace }}
annotations:
"helm.sh/hook": "post-install"
"helm.sh/hook": "post-install,post-upgrade"
"helm.sh/hook-weight": "2"
"argocd.argoproj.io/sync-wave": "2"
{{- if or $addonVersion $.Values.secretName }}

View File

@@ -26,7 +26,7 @@ apiVersion: v1
kind: Namespace
metadata:
annotations:
"helm.sh/hook": "post-install"
"helm.sh/hook": "post-install,post-upgrade"
"helm.sh/hook-weight": "1"
name: {{ $bootstrapNamespace }}
---
@@ -36,7 +36,7 @@ metadata:
name: {{ $bootstrapName }}
namespace: {{ $bootstrapNamespace }}
annotations:
"helm.sh/hook": "post-install"
"helm.sh/hook": "post-install,post-upgrade"
"helm.sh/hook-weight": "2"
{{- if or $bootstrapVersion $.Values.configSecret.name }}
spec:

View File

@@ -26,7 +26,7 @@ apiVersion: v1
kind: Namespace
metadata:
annotations:
"helm.sh/hook": "post-install"
"helm.sh/hook": "post-install,post-upgrade"
"helm.sh/hook-weight": "1"
name: {{ $controlPlaneNamespace }}
---
@@ -36,14 +36,27 @@ metadata:
name: {{ $controlPlaneName }}
namespace: {{ $controlPlaneNamespace }}
annotations:
"helm.sh/hook": "post-install"
"helm.sh/hook": "post-install,post-upgrade"
"helm.sh/hook-weight": "2"
{{- if or $controlPlaneVersion $.Values.configSecret.name }}
{{- if or $controlPlaneVersion $.Values.configSecret.name $.Values.manager }}
spec:
{{- end}}
{{- if $controlPlaneVersion }}
version: {{ $controlPlaneVersion }}
{{- end }}
{{- if $.Values.manager }}
{{- if hasKey $.Values.manager.featureGates $controlPlaneName }}
manager:
{{- range $key, $value := $.Values.manager.featureGates }}
{{- if eq $key $controlPlaneName }}
featureGates:
{{- range $k, $v := $value }}
{{ $k }}: {{ $v }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- if $.Values.configSecret.name }}
configSecret:
name: {{ $.Values.configSecret.name }}

View File

@@ -6,7 +6,7 @@ apiVersion: v1
kind: Namespace
metadata:
annotations:
"helm.sh/hook": "post-install"
"helm.sh/hook": "post-install,post-upgrade"
"helm.sh/hook-weight": "1"
name: capi-system
---
@@ -16,7 +16,7 @@ metadata:
name: cluster-api
namespace: capi-system
annotations:
"helm.sh/hook": "post-install"
"helm.sh/hook": "post-install,post-upgrade"
"helm.sh/hook-weight": "2"
{{- with .Values.configSecret }}
spec:

View File

@@ -25,7 +25,7 @@ apiVersion: v1
kind: Namespace
metadata:
annotations:
"helm.sh/hook": "post-install"
"helm.sh/hook": "post-install,post-upgrade"
"helm.sh/hook-weight": "1"
name: {{ $coreNamespace }}
---
@@ -35,10 +35,10 @@ metadata:
name: {{ $coreName }}
namespace: {{ $coreNamespace }}
annotations:
"helm.sh/hook": "post-install"
"helm.sh/hook": "post-install,post-upgrade"
"helm.sh/hook-weight": "2"
"argocd.argoproj.io/sync-wave": "2"
{{- if or $coreVersion $.Values.configSecret.name }}
{{- if or $coreVersion $.Values.configSecret.name $.Values.manager }}
spec:
{{- end}}
{{- if $coreVersion }}

View File

@@ -47,6 +47,8 @@ spec:
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
serviceAccountName: capi-operator-manager
automountServiceAccountToken: true
{{- with .Values.securityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
@@ -63,15 +65,15 @@ spec:
{{- if .Values.healthAddr }}
- --health-addr={{ .Values.healthAddr }}
{{- end }}
{{- if .Values.metricsBindAddr }}
- --metrics-bind-addr={{ .Values.metricsBindAddr }}
{{- end }}
{{- if .Values.diagnosticsAddress }}
- --diagnostics-address={{ .Values.diagnosticsAddress }}
{{- end }}
{{- if .Values.insecureDiagnostics }}
- --insecure-diagnostics={{ .Values.insecureDiagnostics }}
{{- end }}
{{- if .Values.watchConfigSecret }}
- --watch-configsecret
{{- end }}
{{- with .Values.leaderElection }}
- --leader-elect={{ .enabled }}
{{- if .leaseDuration }}
@@ -95,9 +97,15 @@ spec:
- containerPort: 9443
name: webhook-server
protocol: TCP
- containerPort: {{ ( split ":" $.Values.metricsBindAddr)._1 | int }}
{{- if $.Values.diagnosticsAddress }}
{{- $diagnosticsPort := $.Values.diagnosticsAddress }}
{{- if contains ":" $diagnosticsPort -}}
{{ $diagnosticsPort = ( split ":" $.Values.diagnosticsAddress)._1 | int }}
{{- end }}
- containerPort: {{ $diagnosticsPort | int }}
name: metrics
protocol: TCP
{{- end }}
{{- with .Values.resources.manager }}
resources:
{{- toYaml . | nindent 12 }}
@@ -114,6 +122,31 @@ spec:
volumeMounts:
{{- toYaml . | nindent 12 }}
{{- end }}
terminationMessagePolicy: FallbackToLogsOnError
{{- $healthAddr := $.Values.healthAddr }}
{{- if contains ":" $healthAddr -}}
{{ $healthAddr = ( split ":" $.Values.healthAddr)._1 | int }}
{{- end }}
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: {{ $healthAddr | default 9440 }}
scheme: HTTP
initialDelaySeconds: 15
periodSeconds: 20
successThreshold: 1
timeoutSeconds: 1
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: {{ $healthAddr | default 9440 }}
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
terminationGracePeriodSeconds: 10
{{- with .Values.volumes }}
volumes:

View File

@@ -7,7 +7,7 @@ apiVersion: v1
kind: Namespace
metadata:
annotations:
"helm.sh/hook": "post-install"
"helm.sh/hook": "post-install,post-upgrade"
"helm.sh/hook-weight": "1"
"argocd.argoproj.io/sync-wave": "1"
name: capi-kubeadm-bootstrap-system
@@ -18,7 +18,7 @@ metadata:
name: kubeadm
namespace: capi-kubeadm-bootstrap-system
annotations:
"helm.sh/hook": "post-install"
"helm.sh/hook": "post-install,post-upgrade"
"helm.sh/hook-weight": "2"
"argocd.argoproj.io/sync-wave": "2"
{{- with .Values.configSecret }}
@@ -37,7 +37,7 @@ apiVersion: v1
kind: Namespace
metadata:
annotations:
"helm.sh/hook": "post-install"
"helm.sh/hook": "post-install,post-upgrade"
"helm.sh/hook-weight": "1"
"argocd.argoproj.io/sync-wave": "1"
name: capi-kubeadm-control-plane-system
@@ -48,11 +48,20 @@ metadata:
name: kubeadm
namespace: capi-kubeadm-control-plane-system
annotations:
"helm.sh/hook": "post-install"
"helm.sh/hook": "post-install,post-upgrade"
"helm.sh/hook-weight": "2"
"argocd.argoproj.io/sync-wave": "2"
{{- with .Values.configSecret }}
spec:
{{- if $.Values.manager }}
manager:
{{- if and $.Values.manager.featureGates $.Values.manager.featureGates.kubeadm }}
featureGates:
{{- range $key, $value := $.Values.manager.featureGates.kubeadm }}
{{ $key }}: {{ $value }}
{{- end }}
{{- end }}
{{- end }}
configSecret:
name: {{ .name }}
{{- if .namespace }}
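
The feature-gate blocks above are driven by values keyed per provider name; an illustrative shape (the gate name is hypothetical):

```yaml
manager:
  featureGates:
    kubeadm:
      ClusterTopology: true
```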

View File

@@ -26,7 +26,7 @@ apiVersion: v1
kind: Namespace
metadata:
annotations:
"helm.sh/hook": "post-install"
"helm.sh/hook": "post-install,post-upgrade"
"helm.sh/hook-weight": "1"
"argocd.argoproj.io/sync-wave": "1"
name: {{ $infrastructureNamespace }}
@@ -37,10 +37,10 @@ metadata:
name: {{ $infrastructureName }}
namespace: {{ $infrastructureNamespace }}
annotations:
"helm.sh/hook": "post-install"
"helm.sh/hook": "post-install,post-upgrade"
"helm.sh/hook-weight": "2"
"argocd.argoproj.io/sync-wave": "2"
{{- if or $infrastructureVersion $.Values.configSecret.name $.Values.manager }}
{{- if or $infrastructureVersion $.Values.configSecret.name $.Values.manager $.Values.additionalDeployments }}
spec:
{{- end }}
{{- if $infrastructureVersion }}
@@ -59,6 +59,16 @@ spec:
{{- end }}
{{- end }}
{{- end }}
{{- if and (kindIs "map" $.Values.fetchConfig) (hasKey $.Values.fetchConfig $infrastructureName) }}
{{- range $key, $value := $.Values.fetchConfig }}
{{- if eq $key $infrastructureName }}
fetchConfig:
{{- range $k, $v := $value }}
{{ $k }}: {{ $v }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- if $.Values.configSecret.name }}
configSecret:
name: {{ $.Values.configSecret.name }}
@@ -66,5 +76,8 @@ spec:
namespace: {{ $.Values.configSecret.namespace }}
{{- end }}
{{- end }}
{{- if $.Values.additionalDeployments }}
additionalDeployments: {{ toYaml $.Values.additionalDeployments | nindent 4 }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,73 @@
# IPAM providers
{{- if .Values.ipam }}
{{- $ipams := split ";" .Values.ipam }}
{{- $ipamNamespace := "" }}
{{- $ipamName := "" }}
{{- $ipamVersion := "" }}
{{- range $ipam := $ipams }}
{{- $ipamArgs := split ":" $ipam }}
{{- $ipamArgsLen := len $ipamArgs }}
{{- if eq $ipamArgsLen 3 }}
{{- $ipamNamespace = $ipamArgs._0 }}
{{- $ipamName = $ipamArgs._1 }}
{{- $ipamVersion = $ipamArgs._2 }}
{{- else if eq $ipamArgsLen 2 }}
{{- $ipamNamespace = print $ipamArgs._0 "-ipam-system" }}
{{- $ipamName = $ipamArgs._0 }}
{{- $ipamVersion = $ipamArgs._1 }}
{{- else if eq $ipamArgsLen 1 }}
{{- $ipamNamespace = print $ipamArgs._0 "-ipam-system" }}
{{- $ipamName = $ipamArgs._0 }}
{{- else }}
{{- fail "ipam provider argument should have the following format in-cluster:v1.0.0 or mynamespace:in-cluster:v1.0.0" }}
{{- end }}
---
apiVersion: v1
kind: Namespace
metadata:
annotations:
"helm.sh/hook": "post-install,post-upgrade"
"helm.sh/hook-weight": "1"
"argocd.argoproj.io/sync-wave": "1"
name: {{ $ipamNamespace }}
---
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: IPAMProvider
metadata:
name: {{ $ipamName }}
namespace: {{ $ipamNamespace }}
annotations:
"helm.sh/hook": "post-install,post-upgrade"
"helm.sh/hook-weight": "2"
"argocd.argoproj.io/sync-wave": "2"
{{- if or $ipamVersion $.Values.configSecret.name $.Values.manager $.Values.additionalDeployments }}
spec:
{{- end }}
{{- if $ipamVersion }}
version: {{ $ipamVersion }}
{{- end }}
{{- if $.Values.manager }}
manager:
{{- if and (kindIs "map" $.Values.manager.featureGates) (hasKey $.Values.manager.featureGates $ipamName) }}
{{- range $key, $value := $.Values.manager.featureGates }}
{{- if eq $key $ipamName }}
featureGates:
{{- range $k, $v := $value }}
{{ $k }}: {{ $v }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- if $.Values.configSecret.name }}
configSecret:
name: {{ $.Values.configSecret.name }}
{{- if $.Values.configSecret.namespace }}
namespace: {{ $.Values.configSecret.namespace }}
{{- end }}
{{- end }}
{{- if $.Values.additionalDeployments }}
additionalDeployments: {{ toYaml $.Values.additionalDeployments | nindent 4 }}
{{- end }}
{{- end }}
{{- end }}
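
An illustrative values entry for the parser above, using the formats named in the fail message:

```yaml
# semicolon-separated providers; each entry is "name", "name:version",
# or "namespace:name:version"
ipam: "in-cluster:v1.0.0"
```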

View File

@@ -5,8 +5,10 @@ core: ""
bootstrap: ""
controlPlane: ""
infrastructure: ""
ipam: ""
addon: ""
manager.featureGates: {}
fetchConfig: {}
# ---
# Common configuration secret options
configSecret: {}
@@ -19,14 +21,14 @@ leaderElection:
image:
manager:
repository: registry.k8s.io/capi-operator/cluster-api-operator
tag: v0.11.0
tag: v0.17.0
pullPolicy: IfNotPresent
env:
manager: []
healthAddr: ":8081"
metricsBindAddr: "127.0.0.1:8080"
diagnosticsAddress: "8443"
diagnosticsAddress: ":8443"
healthAddr: ":9440"
insecureDiagnostics: false
watchConfigSecret: false
imagePullSecrets: {}
resources:
manager:

View File

@@ -5,7 +5,7 @@ metadata:
name: cluster-api
spec:
# https://github.com/kubernetes-sigs/cluster-api
version: v1.8.3
version: v1.9.5
---
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: ControlPlaneProvider
@@ -13,7 +13,7 @@ metadata:
name: kamaji
spec:
# https://github.com/clastix/cluster-api-control-plane-provider-kamaji
version: v0.11.0
version: v0.14.1
deployment:
containers:
- name: manager
@@ -28,7 +28,7 @@ metadata:
name: kubeadm
spec:
# https://github.com/kubernetes-sigs/cluster-api
version: v1.8.3
version: v1.9.5
---
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: InfrastructureProvider

View File

@@ -1,4 +1,6 @@
altinity-clickhouse-operator:
serviceMonitor:
enabled: true
configs:
files:
config.yaml:

View File

@@ -2,5 +2,5 @@ apiVersion: v2
name: cozy-proxy
description: A simple kube-proxy addon for 1:1 NAT services in Kubernetes using an NFT backend
type: application
version: 0.1.2
appVersion: 0.1.2
version: 0.1.3
appVersion: 0.1.3

View File

@@ -25,3 +25,5 @@ spec:
privileged: true
capabilities:
add: ["NET_ADMIN"]
tolerations:
- operator: Exists

View File

@@ -1,6 +1,6 @@
image:
repository: ghcr.io/aenix-io/cozystack/cozy-proxy
tag: v0.1.2
tag: v0.1.3
pullPolicy: IfNotPresent
daemonset:

View File

@@ -1,2 +1,2 @@
cozystackAPI:
image: ghcr.io/aenix-io/cozystack/cozystack-api:v0.26.1@sha256:d4f2ad6e8e7b7578337c2c78649e95fcf658f2d8a242bcf6629be21c431f66e7
image: ghcr.io/aenix-io/cozystack/cozystack-api:v0.27.0@sha256:054adb2c2c3b380304e77a3f91428fc1d563d7ed2c1aab5d8ee0c5857b1dde99

View File

@@ -1,5 +1,5 @@
cozystackController:
image: ghcr.io/aenix-io/cozystack/cozystack-controller:v0.26.1@sha256:186df3406dd2a75f59872ff7d11fe92b6e4ce5787f76da3bc7ad670358ea40fb
image: ghcr.io/aenix-io/cozystack/cozystack-controller:v0.27.0@sha256:c97b2517aafdc1e906012c9604c792cb744ff1d3017d7c0c3836808dc308b835
debug: false
disableTelemetry: false
cozystackVersion: "v0.26.1"
cozystackVersion: "v0.27.0"

View File

@@ -76,7 +76,7 @@ data:
"kubeappsNamespace": {{ .Release.Namespace | quote }},
"helmGlobalNamespace": {{ include "kubeapps.helmGlobalPackagingNamespace" . | quote }},
"carvelGlobalNamespace": {{ .Values.kubeappsapis.pluginConfig.kappController.packages.v1alpha1.globalPackagingNamespace | quote }},
"appVersion": "v0.26.1",
"appVersion": "v0.27.0",
"authProxyEnabled": {{ .Values.authProxy.enabled }},
"oauthLoginURI": {{ .Values.authProxy.oauthLoginURI | quote }},
"oauthLogoutURI": {{ .Values.authProxy.oauthLogoutURI | quote }},

View File

@@ -18,14 +18,14 @@ kubeapps:
image:
registry: ghcr.io/aenix-io/cozystack
repository: dashboard
tag: v0.26.1
digest: "sha256:c1baa0d3f19201069da28a443a50f0dff1df53b2cbd2e8cfcb9201d25cd6bfc0"
tag: v0.27.0
digest: "sha256:a363361571a7740c8544ecc22745e426ad051068a6bbe62d7e7d5e91df4d988e"
kubeappsapis:
image:
registry: ghcr.io/aenix-io/cozystack
repository: kubeapps-apis
tag: v0.26.1
digest: "sha256:55694bd7d7fd7948e7cac7b511635da01515dfb34f224ee9e7de7acf54cf6e81"
tag: v0.27.0
digest: "sha256:dcffdd5a02433a4caec7b5e9753847cbeb05f2004146c38ec7cee44d02179423"
pluginConfig:
flux:
packages:

View File

@@ -1,5 +1,5 @@
apiVersion: v2
appVersion: v0.4.0
appVersion: v0.4.1
name: etcd-operator
type: application
version: 0.4.0
version: 0.4.1

View File

@@ -73,6 +73,7 @@ rules:
verbs:
- get
- list
- watch
- apiGroups:
- etcd.aenix.io
resources:

View File

@@ -1,6 +1,6 @@
dependencies:
- name: kamaji-etcd
repository: https://clastix.github.io/charts
version: 0.9.1
digest: sha256:522ec6321e2e394bd89f88a59446b39d6871838c63583346fdca10db36f1bbdb
generated: "2025-02-17T09:27:31.011938073+03:00"
version: 0.8.1
digest: sha256:381d8ef9619c2daeea37e40c6a9772ae3e5cee80887148879db04e887d5364ad
generated: "2024-10-25T19:28:40.880766186+02:00"

View File

@@ -1,5 +1,5 @@
apiVersion: v2
appVersion: v1.0.0
appVersion: v0.0.0
description: Kamaji is the Hosted Control Plane Manager for Kubernetes.
home: https://github.com/clastix/kamaji
icon: https://github.com/clastix/kamaji/raw/master/assets/logo-colored.png
@@ -17,11 +17,11 @@ name: kamaji
sources:
- https://github.com/clastix/kamaji
type: application
version: 2.0.0
version: 0.0.0
dependencies:
- name: kamaji-etcd
repository: https://clastix.github.io/charts
version: ">=0.7.0"
version: ">=0.8.1"
condition: kamaji-etcd.deploy
annotations:
catalog.cattle.io/certified: partner

View File

@@ -1,6 +1,6 @@
# kamaji
![Version: 2.0.0](https://img.shields.io/badge/Version-2.0.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v1.0.0](https://img.shields.io/badge/AppVersion-v1.0.0-informational?style=flat-square)
![Version: 0.0.0](https://img.shields.io/badge/Version-0.0.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v0.0.0](https://img.shields.io/badge/AppVersion-v0.0.0-informational?style=flat-square)
Kamaji is the Hosted Control Plane Manager for Kubernetes.
@@ -22,7 +22,7 @@ Kubernetes: `>=1.21.0-0`
| Repository | Name | Version |
|------------|------|---------|
| https://clastix.github.io/charts | kamaji-etcd | >=0.7.0 |
| https://clastix.github.io/charts | kamaji-etcd | >=0.8.1 |
[Kamaji](https://github.com/clastix/kamaji) requires a [multi-tenant `etcd`](https://github.com/clastix/kamaji-internal/blob/master/deploy/getting-started-with-kamaji.md#setup-internal-multi-tenant-etcd) cluster.
This Helm Chart, starting from v0.1.1, provides the installation of an internal `etcd` in order to streamline local testing. If you'd like to use an externally managed etcd instance, you can specify the overrides by setting the value `etcd.deploy=false`.
@@ -70,7 +70,7 @@ Here the values you can override:
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| affinity | object | `{}` | Kubernetes affinity rules to apply to Kamaji controller pods |
| defaultDatastoreName | string | `"default"` | Specify the default DataStore name for the Kamaji instance. |
| defaultDatastoreName | string | `"default"` | If specified, all the Kamaji instances with an unassigned DataStore will inherit this default value. |
| extraArgs | list | `[]` | A list of extra arguments to add to the kamaji controller default ones |
| fullnameOverride | string | `""` | |
| healthProbeBindAddress | string | `":8081"` | The address the probe endpoint binds to. (default ":8081") |

View File

@@ -66,7 +66,6 @@ spec:
metadata:
type: object
spec:
description: TenantControlPlaneSpec defines the desired state of TenantControlPlane.
properties:
addons:
description: Addons contain which addons are enabled
@@ -6413,10 +6412,23 @@ spec:
type: object
dataStore:
description: |-
DataStore allows to specify a DataStore that should be used to store the Kubernetes data for the given Tenant Control Plane.
This parameter is optional and acts as an override over the default one which is used by the Kamaji Operator.
Migration from a different DataStore to another one is not yet supported and the reconciliation will be blocked.
DataStore specifies the DataStore that should be used to store the Kubernetes data for the given Tenant Control Plane.
When Kamaji runs with the default DataStore flag, all empty values will inherit the default value.
By leaving it empty and running Kamaji with no default DataStore flag, it is possible to achieve automatic assignment to a specific DataStore object.
Migration from one DataStore to another backed by the same Driver is possible. See: https://kamaji.clastix.io/guides/datastore-migration/
Migration from one DataStore to another backed by a different Driver is not supported.
type: string
dataStoreSchema:
description: |-
DataStoreSchema allows to specify the name of the database (for relational DataStores) or the key prefix (for etcd). This
value is optional and immutable. Note that Kamaji currently doesn't ensure that DataStoreSchema values are unique. It's up
to the user to avoid clashes between different TenantControlPlanes. If not set upon creation, Kamaji will default the
DataStoreSchema by concatenating the namespace and name of the TenantControlPlane.
type: string
x-kubernetes-validations:
- message: changing the dataStoreSchema is not supported
rule: self == oldSelf
kubernetes:
description: Kubernetes specification for tenant control plane
properties:
@@ -6539,15 +6551,47 @@ spec:
items:
type: string
type: array
clusterDomain:
default: cluster.local
description: The default domain name used for DNS resolution within the cluster.
pattern: .*\..*
type: string
x-kubernetes-validations:
- message: changing the cluster domain is not supported
rule: self == oldSelf
dnsServiceIPs:
default:
- 10.96.0.10
description: |-
The DNS Service for internal resolution, it must match the Service CIDR.
In case of an empty value, it is automatically computed according to the Service CIDR, e.g.:
Service CIDR 10.96.0.0/16, the resulting DNS Service IP will be 10.96.0.10 for IPv4,
for IPv6 from the CIDR 2001:db8:abcd::/64 the resulting DNS Service IP will be 2001:db8:abcd::10.
items:
type: string
type: array
loadBalancerClass:
description: |-
Specify the LoadBalancer class in case of multiple load balancer implementations.
Field supported only for Tenant Control Plane instances exposed using a LoadBalancer Service.
minLength: 1
type: string
x-kubernetes-validations:
- message: LoadBalancerClass is immutable
rule: self == oldSelf
loadBalancerSourceRanges:
description: |-
LoadBalancerSourceRanges restricts the IP ranges that can access
the LoadBalancer type Service. This field defines a list of IP
address ranges (in CIDR format) that are allowed to access the service.
If left empty, the service will allow traffic from all IP ranges (0.0.0.0/0).
This feature is useful for restricting access to API servers or services
to specific networks for security purposes.
Example: {"192.168.1.0/24", "10.0.0.0/8"}
items:
type: string
type: array
podCidr:
default: 10.244.0.0/16
description: CIDR for Kubernetes Pods
description: 'CIDR for Kubernetes Pods: if empty, defaulted to 10.244.0.0/16.'
type: string
port:
default: 6443
@@ -6556,13 +6600,24 @@ spec:
type: integer
serviceCidr:
default: 10.96.0.0/16
description: Kubernetes Service
description: 'CIDR for Kubernetes Services: if empty, defaulted to 10.96.0.0/16.'
type: string
type: object
required:
- controlPlane
- kubernetes
type: object
x-kubernetes-validations:
- message: unsetting the dataStore is not supported
rule: '!has(oldSelf.dataStore) || has(self.dataStore)'
- message: unsetting the dataStoreSchema is not supported
rule: '!has(oldSelf.dataStoreSchema) || has(self.dataStoreSchema)'
- message: LoadBalancer source ranges are supported only with LoadBalancer service type
rule: '!has(self.networkProfile.loadBalancerSourceRanges) || (size(self.networkProfile.loadBalancerSourceRanges) == 0 || self.controlPlane.service.serviceType == ''LoadBalancer'')'
- message: LoadBalancerClass is supported only with LoadBalancer service type
rule: '!has(self.networkProfile.loadBalancerClass) || self.controlPlane.service.serviceType == ''LoadBalancer'''
- message: LoadBalancerClass cannot be set or unset at runtime
rule: self.controlPlane.service.serviceType != 'LoadBalancer' || (oldSelf.controlPlane.service.serviceType != 'LoadBalancer' && self.controlPlane.service.serviceType == 'LoadBalancer') || has(self.networkProfile.loadBalancerClass) == has(oldSelf.networkProfile.loadBalancerClass)
status:
description: TenantControlPlaneStatus defines the observed state of TenantControlPlane.
properties:

View File

@@ -33,8 +33,9 @@ spec:
- --leader-elect
- --metrics-bind-address={{ .Values.metricsBindAddress }}
- --tmp-directory={{ .Values.temporaryDirectoryPath }}
{{- $datastoreName := .Values.defaultDatastoreName | required ".Values.defaultDatastoreName is required!" }}
- --datastore={{ $datastoreName }}
{{- if not (eq .Values.defaultDatastoreName "") }}
- --datastore={{ .Values.defaultDatastoreName }}
{{- end }}
{{- if .Values.telemetry.disabled }}
- --disable-telemetry
{{- end }}

View File

@@ -95,7 +95,7 @@ loggingDevel:
# -- Development Mode defaults(encoder=consoleEncoder,logLevel=Debug,stackTraceLevel=Warn). Production Mode defaults(encoder=jsonEncoder,logLevel=Info,stackTraceLevel=Error) (default false)
enable: false
# -- Specify the default DataStore name for the Kamaji instance.
# -- If specified, all the Kamaji instances with an unassigned DataStore will inherit this default value.
defaultDatastoreName: default
kamaji-etcd:

View File

@@ -0,0 +1,13 @@
diff --git a/internal/resources/kubeadm_config.go b/internal/resources/kubeadm_config.go
index ae4cfc0..ec7a7da 100644
--- a/internal/resources/kubeadm_config.go
+++ b/internal/resources/kubeadm_config.go
@@ -96,7 +96,7 @@ func (r *KubeadmConfigResource) mutate(ctx context.Context, tenantControlPlane *
TenantControlPlanePort: port,
TenantControlPlaneName: tenantControlPlane.GetName(),
TenantControlPlaneNamespace: tenantControlPlane.GetNamespace(),
- TenantControlPlaneEndpoint: r.getControlPlaneEndpoint(tenantControlPlane.Spec.ControlPlane.Ingress, address, port),
+ TenantControlPlaneEndpoint: r.getControlPlaneEndpoint(tenantControlPlane.Spec.ControlPlane.Ingress, address, 443),
TenantControlPlaneCertSANs: tenantControlPlane.Spec.NetworkProfile.CertSANs,
TenantControlPlaneClusterDomain: tenantControlPlane.Spec.NetworkProfile.ClusterDomain,
TenantControlPlanePodCIDR: tenantControlPlane.Spec.NetworkProfile.PodCIDR,

View File

@@ -3,7 +3,7 @@ kamaji:
deploy: false
image:
pullPolicy: IfNotPresent
tag: v0.26.1@sha256:a0504cdab3d36d144999d9b4a8729c53c016095d6958d3cae1acf8699f2fb0b9
tag: v0.27.0@sha256:686348fc4a496ec76aac7d6af9e59e67d5d29af95dd73427054c0019ffc045e6
repository: ghcr.io/aenix-io/cozystack/kamaji
resources:
limits:

View File

@@ -6,7 +6,13 @@ spec:
instances: 2
storage:
size: 20Gi
{{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }}
{{- if $configMap }}
{{- $rawConstraints := get $configMap.data "globalAppTopologySpreadConstraints" }}
{{- if $rawConstraints }}
{{- $rawConstraints | fromYaml | toYaml | nindent 2 }}
{{- end }}
{{- end }}
monitoring:
enablePodMonitor: true

View File

@@ -22,4 +22,4 @@ global:
images:
kubeovn:
repository: kubeovn
tag: v1.13.2@sha256:d3fa76c0cc48207aef15ff27f6332a3f8570e3db77fb97720af8505b812cdf61
tag: v1.13.2@sha256:5ce804458e9b14856300a5bbfa3ecac6cd47203759bbb8a4e62ddb5f0684ed7b

View File

@@ -0,0 +1,25 @@
#!/bin/bash
set -e
terminate() {
echo "Caught signal, terminating"
exit 0
}
trap terminate SIGINT SIGQUIT SIGTERM
echo "Running Linstor controller plunger:"
cat "${0}"
while true; do
# sleep at the start of the loop to give the linstor-controller some time to start
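# backgrounding sleep and wait-ing on it lets the trap fire as soon as a signal arrives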
sleep 30 &
pid=$!
wait $pid
# workaround for https://github.com/LINBIT/linstor-server/issues/437
# try to delete snapshots that are stuck in the DELETE state
linstor -m s l \
| jq -r '.[][] | select(.flags | contains(["DELETE"])) | "linstor snapshot delete \(.resource_name) \(.name)"' \
| sh -x
done
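The jq filter is easiest to understand against canned machine-readable output; a sketch with made-up snapshot data (not captured from a real cluster):

echo '[[{"resource_name":"pvc-123","name":"snap-1","flags":["DELETE"]},
       {"resource_name":"pvc-456","name":"snap-2","flags":["SUCCESSFUL"]}]]' \
  | jq -r '.[][] | select(.flags | contains(["DELETE"])) | "linstor snapshot delete \(.resource_name) \(.name)"'
# prints: linstor snapshot delete pvc-123 snap-1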

@@ -0,0 +1,41 @@
#!/bin/bash
set -e
terminate() {
  echo "Caught signal, terminating"
  exit 0
}
trap terminate SIGINT SIGQUIT SIGTERM
echo "Running Linstor per-satellite plunger:"
cat "${0}"
while true; do
  # Sleep at the start of the loop to give a fresh linstor-satellite instance a chance
  # to clean up after itself; sleeping in the background keeps the shell responsive to signals.
  sleep 30 &
  pid=$!
  wait $pid
  # Detect orphaned loop devices and detach them.
  # The `/` path can never be a legitimate backing file for a loop device,
  # so it is a reliable indicator of a stuck loop device.
  # TODO describe the issue in more detail
  losetup --json \
    | jq -r '.[][]
        | select(."back-file" == "/ (deleted)")
        | "echo Detaching stuck loop device \(.name);
           set -x;
           losetup --detach \(.name)"' \
    | sh
  # Detect secondary volumes that lost their connection and can simply be reconnected.
  disconnected_secondaries=$(drbdadm status | awk '/pvc-.*role:Secondary.*force-io-failures:yes/ {print $1}')
  for secondary in $disconnected_secondaries; do (
    echo "Trying to reconnect secondary volume ${secondary}"
    set -x
    drbdadm down "${secondary}"
    drbdadm up "${secondary}"
  ); done
done
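losetup --json wraps its entries in a single top-level object, which is why the filter starts with .[][]; a dry-run sketch with fabricated output:

echo '{"loopdevices":[{"name":"/dev/loop7","back-file":"/ (deleted)"},
                      {"name":"/dev/loop0","back-file":"/var/lib/images/disk.img"}]}' \
  | jq -r '.[][] | select(."back-file" == "/ (deleted)") | "losetup --detach \(.name)"'
# prints: losetup --detach /dev/loop7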

@@ -0,0 +1,24 @@
{{- define "cozy.linstor.version" -}}
{{- $piraeusConfigMap := lookup "v1" "ConfigMap" "cozy-linstor" "piraeus-operator-image-config" }}
{{- if not $piraeusConfigMap }}
{{- fail "Piraeus controller is not yet installed, ConfigMap cozy-linstor/piraeus-operator-image-config is missing" }}
{{- end }}
{{- $piraeusImagesConfig := $piraeusConfigMap | dig "data" "0_piraeus_datastore_images.yaml" nil | required "No image config" | fromYaml }}
base: {{ $piraeusImagesConfig.base | required "No image base in piraeus config" }}
controller:
image: {{ $piraeusImagesConfig | dig "components" "linstor-controller" "image" nil | required "No controller image" }}
tag: {{ $piraeusImagesConfig | dig "components" "linstor-controller" "tag" nil | required "No controller tag" }}
satellite:
image: {{ $piraeusImagesConfig | dig "components" "linstor-satellite" "image" nil | required "No satellite image" }}
tag: {{ $piraeusImagesConfig | dig "components" "linstor-satellite" "tag" nil | required "No satellite tag" }}
{{- end -}}
{{- define "cozy.linstor.version.controller" -}}
{{- $version := (include "cozy.linstor.version" .) | fromYaml }}
{{- printf "%s/%s:%s" $version.base $version.controller.image $version.controller.tag }}
{{- end -}}
{{- define "cozy.linstor.version.satellite" -}}
{{- $version := (include "cozy.linstor.version" .) | fromYaml }}
{{- printf "%s/%s:%s" $version.base $version.satellite.image $version.satellite.tag }}
{{- end -}}
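The two wrappers just join the parts with printf "%s/%s:%s". With the image coordinates that appear in the removed Deployment below (base quay.io/piraeusdatastore, image piraeus-server, tag v1.29.2, used here purely for illustration), the controller helper would resolve like this:

# Mirrors the template's printf; values taken from the old hardcoded image for illustration.
printf '%s/%s:%s\n' quay.io/piraeusdatastore piraeus-server v1.29.2
# -> quay.io/piraeusdatastore/piraeus-server:v1.29.2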

@@ -13,3 +13,33 @@ spec:
     certManager:
       name: linstor-api-ca
       kind: Issuer
+  controller:
+    enabled: true
+    podTemplate:
+      spec:
+        containers:
+        - name: plunger
+          image: {{ include "cozy.linstor.version.controller" . }}
+          command:
+          - "/scripts/plunger-controller.sh"
+          securityContext:
+            capabilities:
+              drop:
+              - ALL
+            # make some room for live debugging
+            readOnlyRootFilesystem: false
+          volumeMounts:
+          - mountPath: /etc/linstor/client
+            name: client-tls
+            readOnly: true
+          - mountPath: /etc/linstor
+            name: etc-linstor
+            readOnly: true
+          - mountPath: /scripts
+            name: script-volume
+            readOnly: true
+        volumes:
+        - name: script-volume
+          configMap:
+            name: linstor-plunger
+            defaultMode: 0755

@@ -0,0 +1,13 @@
{{- $files := .Files.Glob "hack/plunger/*.sh" -}}
{{/* TODO Add checksum of scripts to the pod selectors */}}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: linstor-plunger
  namespace: cozy-linstor
data:
{{- range $path, $file := $files }}
  {{ $path | base }}: |
{{- $file | toString | nindent 4 }}
{{- end -}}
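Each script is keyed by its basename, so the ConfigMap ends up with one entry per file under hack/plunger/; a quick way to preview the keys from a checkout (layout assumed from the glob above):

# Lists the data keys the range loop would generate.
for f in hack/plunger/*.sh; do basename "$f"; done
# -> plunger-controller.sh
# -> plunger-satellite.sh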

@@ -1,15 +0,0 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: linstor-plunger
  namespace: cozy-linstor
data:
  plunger.sh: |
    #!/bin/bash
    set -e
    while true; do
      # workaround for https://github.com/LINBIT/linstor-server/issues/437
      linstor -m s l | jq -r '.[][] | select(.flags | contains(["DELETE"])) | "linstor snapshot delete \(.resource_name) \(.name)"' | sh -x
      sleep 1m
    done

@@ -1,52 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: linstor-plunger
  namespace: cozy-linstor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: linstor-plunger
  template:
    metadata:
      labels:
        app: linstor-plunger
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/plunger/configmap.yaml") . | sha256sum }}
    spec:
      containers:
      - name: plunger
        image: quay.io/piraeusdatastore/piraeus-server:v1.29.2
        command: ["/bin/bash", "/scripts/plunger.sh"]
        volumeMounts:
        - mountPath: /etc/linstor/client
          name: client-tls
          readOnly: true
        - mountPath: /etc/linstor
          name: etc-linstor
          readOnly: true
        - mountPath: /scripts
          name: script-volume
          readOnly: true
      enableServiceLinks: false
      serviceAccountName: linstor-controller
      tolerations:
      - effect: NoSchedule
        key: drbd.linbit.com/lost-quorum
      - effect: NoSchedule
        key: drbd.linbit.com/force-io-error
      volumes:
      - name: client-tls
        projected:
          sources:
          - secret:
              name: linstor-client-tls
      - name: etc-linstor
        configMap:
          name: linstor-controller-config
      - name: script-volume
        configMap:
          name: linstor-plunger
          defaultMode: 0755

@@ -0,0 +1,44 @@
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMPodScrape
metadata:
  name: linstor-satellite
  namespace: cozy-linstor
spec:
  podMetricsEndpoints:
  - port: prometheus
    scheme: http
    relabelConfigs:
    - action: labeldrop
      regex: (endpoint|namespace|pod|container)
    - replacement: linstor-satellite
      targetLabel: job
    - sourceLabels: [__meta_kubernetes_pod_node_name]
      targetLabel: node
    - targetLabel: tier
      replacement: cluster
  selector:
    matchLabels:
      app.kubernetes.io/component: linstor-satellite
---
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMPodScrape
metadata:
  name: linstor-controller
  namespace: cozy-linstor
spec:
  podMetricsEndpoints:
  - path: /metrics
    port: api
    scheme: http
    relabelConfigs:
    - action: labeldrop
      regex: (endpoint|namespace|pod|container)
    - replacement: linstor-controller
      targetLabel: job
    - sourceLabels: [__meta_kubernetes_pod_node_name]
      targetLabel: node
    - targetLabel: tier
      replacement: cluster
  selector:
    matchLabels:
      app.kubernetes.io/component: linstor-controller

@@ -0,0 +1,18 @@
apiVersion: piraeus.io/v1
kind: LinstorSatelliteConfiguration
metadata:
  name: cozystack
spec:
  internalTLS:
    certManager:
      name: linstor-internal-ca
      kind: Issuer
  podTemplate:
    spec:
      # host networking is recommended by Piraeus, although it is not the upstream default
      hostNetwork: true
      containers:
      - name: linstor-satellite
        securityContext:
          # real-world installations need some debugging from time to time
          readOnlyRootFilesystem: false

@@ -0,0 +1,52 @@
apiVersion: piraeus.io/v1
kind: LinstorSatelliteConfiguration
metadata:
  name: cozystack-plunger
spec:
  internalTLS:
    certManager:
      name: linstor-internal-ca
      kind: Issuer
  podTemplate:
    spec:
      containers:
      - name: plunger
        image: {{ include "cozy.linstor.version.satellite" . }}
        command:
        - "/scripts/plunger-satellite.sh"
        securityContext:
          capabilities:
            drop:
            - ALL
          # make some room for live debugging
          readOnlyRootFilesystem: false
        volumeMounts:
        - mountPath: /run
          name: host-run
        - mountPath: /dev
          name: dev
        - mountPath: /var/lib/drbd
          name: var-lib-drbd
        - mountPath: /var/lib/linstor.d
          name: var-lib-linstor-d
        - mountPath: /etc/lvm
          name: container-etc-lvm
        - mountPath: /etc/lvm/archive
          name: etc-lvm-archive
        - mountPath: /etc/lvm/backup
          name: etc-lvm-backup
        - mountPath: /run/lock/lvm
          name: run-lock-lvm
        - mountPath: /run/lvm
          name: run-lvm
        - mountPath: /run/udev
          name: run-udev
          readOnly: true
        - mountPath: /scripts
          name: script-volume
          readOnly: true
      volumes:
      - name: script-volume
        configMap:
          name: linstor-plunger
          defaultMode: 0755

@@ -1,40 +1,33 @@
 apiVersion: piraeus.io/v1
 kind: LinstorSatelliteConfiguration
 metadata:
-  name: linstor-satellites
+  name: cozystack-talos
 spec:
   internalTLS:
     certManager:
       name: linstor-internal-ca
       kind: Issuer
-  #storagePools:
-  #- name: "data"
-  #  lvmPool:
-  #    volumeGroup: "data"
   patches:
-  - target:
-      kind: Pod
-      name: satellite
-    patch: |
-      apiVersion: v1
-      kind: Pod
-      metadata:
-        name: satellite
-      spec:
-        hostNetwork: true
-        initContainers:
+  - target:
+      group: apps
+      version: v1
+      kind: DaemonSet
+      name: linstor-satellite
+    patch: |
+      apiVersion: apps/v1
+      kind: DaemonSet
+      metadata:
+        name: linstor-satellite
+      spec:
+        template:
+          spec:
+            initContainers:
             - name: drbd-shutdown-guard
               $patch: delete
             - name: drbd-module-loader
               $patch: delete
-            containers:
-            - name: linstor-satellite
-              volumeMounts:
-              - mountPath: /run
-                name: host-run
-              securityContext:
-                readOnlyRootFilesystem: false
-            volumes:
+            containers:
+            - name: linstor-satellite
+              volumeMounts:
+              - mountPath: /run
+                name: host-run
+            volumes:
             - name: run-systemd-system
               $patch: delete
             - name: run-drbd-shutdown-guard

@@ -19,7 +19,7 @@ spec:
           < 604800
       for: 5m
       labels:
-        severity: warning
+        severity: informational
         exported_instance: '{{ $labels.namespace }}/{{ $labels.pod }}'
         service: kubernetes-system-apiserver
     - alert: KubeClientCertificateExpiration
@@ -34,7 +34,7 @@ spec:
           < 86400
       for: 5m
       labels:
-        severity: critical
+        severity: informational
         exported_instance: '{{ $labels.namespace }}/{{ $labels.pod }}'
         service: kubernetes-system-apiserver
     - alert: KubeAggregatedAPIErrors

Some files were not shown because too many files have changed in this diff.