Compare commits

56 Commits

Author SHA1 Message Date
Andrei Kvapil
fe859dfd46 Update Cilium v1.14.10 2024-05-09 10:37:11 +02:00
Andrei Kvapil
53f2365e79 Fix: kubernetes and etcd-operator issues (#119)
* Fix datastore creation depends on created secrets

* Add basic topologySpreadConstraints

* Fix kubernetes chart post-rendering

* Update release images
2024-05-06 13:59:43 +02:00
Marian Koreniuk
9145be14c1 Merge pull request #117 from aenix-io/release-0.1.0v2
Prepare release v0.4.0
2024-05-06 09:25:39 +02:00
Andrei Kvapil
fca349c641 Update Talos v1.7.1 2024-05-04 07:32:08 +02:00
Andrei Kvapil
0b38599394 Prepare release v0.4.0
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-05-03 23:12:35 +02:00
Andrei Kvapil
0a33950a40 Prepare release v0.4.0 (#115) 2024-05-03 23:02:41 +02:00
Andrei Kvapil
e3376a223e Fix tolerations in Kubernetes chart (#116) 2024-05-03 13:26:02 +02:00
Marian Koreniuk
dee190ad4f Merge pull request #95 from aenix-io/etcd-operator
Replace kamaji-etcd with aenix-io/etcd-operator
2024-05-02 22:42:52 +02:00
Marian Koreniuk
66f963bfd0 Merge pull request #104 from aenix-io/replicas
Introduce replicas options
2024-04-26 16:03:09 +02:00
Andrei Kvapil
7cd7de73ee Introduce replicas options
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-26 15:19:25 +02:00
Andrei Kvapil
4f2757731a Fix: dashboard colors for dark mode (#108)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-26 12:12:00 +02:00
Andrei Kvapil
372c3cbd17 Update Kamaji v0.5.0 (#99)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-26 11:00:06 +02:00
Andrei Kvapil
ff9ab5ba85 Fix older versions in dashboard (#102)
Workaround for https://github.com/vmware-tanzu/kubeapps/issues/7740

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-26 10:41:05 +02:00
Andrei Kvapil
c7568d2312 Update kubeapps-15.0.2 (#103)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-26 10:18:22 +02:00
Marian Koreniuk
f4778abb3f Merge pull request #105 from aenix-io/upd-linstor
Update LISNTOR v1.27.1
2024-04-25 20:49:14 +02:00
Andrei Kvapil
68a7cc52c3 Update LISNTOR v1.27.1
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-25 18:29:23 +02:00
Marian Koreniuk
be508fd107 Fix etcd-operator Makefile 2024-04-24 16:21:06 +03:00
Andrei Kvapil
a6d0f7cfd4 Add etcd-operator
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-24 12:29:05 +02:00
Andrei Kvapil
a95671391f fix: Flux does not tolerate kubectl edits (#101)
https://fluxcd.io/flux/faq/#why-are-kubectl-edits-rolled-back-by-flux

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-24 11:31:32 +02:00
Andrei Kvapil
20fcd25d64 Calculate tags and version automatically (#100)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-24 11:31:22 +02:00
Andrei Kvapil
ca79f725a3 Prepare release v0.3.1 (#97)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-23 12:55:45 +03:00
Marian Koreniuk
be0603f139 Merge pull request #96 from aenix-io/missing-makefile
fix: missing package-system.mk
2024-04-23 12:53:00 +03:00
Andrei Kvapil
f8b87197d0 fix: flux dependency
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-23 09:43:04 +02:00
Andrei Kvapil
5d58e5ce7d fix: missing package-system.mk 2024-04-23 09:32:32 +02:00
Andrei Kvapil
a1340c1839 fix: clickhouse-operator watch namespaces (#93) 2024-04-23 08:50:45 +02:00
Marian Koreniuk
b838ee5729 Merge pull request #91 from artarik/main
remove duplicated entry for creating sa
2024-04-18 11:54:57 +03:00
Artem Starik
2baf532e1f HOTFIX: byump chart version 2024-04-18 11:10:14 +03:00
Artem Starik
7713e7de6b HOTFIX: remove duplicated sa from template 2024-04-18 11:07:21 +03:00
Artem Starik
aef38b6dec Merge pull request #1 from artarik/artarik-patch-1
HOTFIX: remove duplicated entry for sa
2024-04-18 11:02:12 +03:00
Artem Starik
b02c608d6c HOTFIX: remove duplicated entry for sa 2024-04-18 11:00:06 +03:00
Andrei Kvapil
f7eaab0aaa Prepare release v0.3.0 (#90) 2024-04-18 09:00:22 +02:00
Marian Koreniuk
05813c06dd Fix incorrect path to include in Makefiles (#89)
fix regression introduced by https://github.com/aenix-io/cozystack/pull/86

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-17 23:58:56 +02:00
Andrei Kvapil
038b3c08f4 fix: remove plus in kamaji-etcd image tag (#87)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-17 22:59:15 +02:00
Andrei Kvapil
5dd8d41907 fix: clickhouse-operator watch namespaces (#88)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-17 22:59:05 +02:00
Andrei Kvapil
2d21ed6ac9 fix: grafana ingress class (#85)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-17 22:51:54 +02:00
Marian Koreniuk
fe5d607cad Merge pull request #86 from aenix-io/83-refactor-makefiles
Refactor Makefiles #83
2024-04-17 22:39:14 +02:00
Marian Koreniuk
12b70d8f26 Fix victoria-metrics-operator Makefile 2024-04-17 23:30:19 +03:00
Marian Koreniuk
bc414d648d Fix redis-operator Makefile 2024-04-17 23:29:40 +03:00
Marian Koreniuk
9d4aacc832 Fix metallb Makefile 2024-04-17 23:28:40 +03:00
Marian Koreniuk
23ce7480c2 Fix mariadb-operator Makefile 2024-04-17 23:27:47 +03:00
Marian Koreniuk
994b5d97bd Fix kubevirt-operator Makefile 2024-04-17 23:26:48 +03:00
Marian Koreniuk
871f053e00 Fix kamaji Makefile 2024-04-17 23:25:37 +03:00
Marian Koreniuk
d3485eb0a3 Fix ingress-nginx Makefile 2024-04-17 23:25:06 +03:00
Marian Koreniuk
f3f65e9f9c Fix dashboard Makefile 2024-04-17 23:24:02 +03:00
Marian Koreniuk
1ef7d219de Fix cilium Makefile 2024-04-17 23:22:00 +03:00
Marian Koreniuk
3d0f65ff98 Fix cert-manager Makefile 2024-04-17 23:20:41 +03:00
Marian Koreniuk
451e124c56 Update hack/package-system.mk
Co-authored-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-17 23:02:27 +03:00
Marian Koreniuk
d86c1269eb Update hack/package-system.mk
Co-authored-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-17 23:02:14 +03:00
Marian Koreniuk
f4cf1af349 Update hack/package-system.mk
Co-authored-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-17 23:01:47 +03:00
Marian Koreniuk
758079520c fix case tabs in package-system.mk 2024-04-17 22:36:13 +03:00
Marian Koreniuk
fcebfdff24 Refactor Makefiles #83 2024-04-17 22:24:59 +03:00
Andrei Kvapil
8a2ad90882 Update clickhouse app (#82)
* Add users management
* Remove logs volume

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-17 20:09:16 +02:00
Andrei Kvapil
760f86d2ce Add application for Kafka (#78)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-17 14:23:56 +02:00
Andrei Kvapil
ad7d65f471 Add application for Clickhouse (#81) 2024-04-17 11:21:51 +02:00
Andrei Kvapil
c42dbcafc3 Add NoCloud asset for Hetzner installation (#80) 2024-04-16 21:52:50 +02:00
Andrei Kvapil
238061efbc Add clickhouse-operator (#75)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-13 08:57:49 +02:00
256 changed files with 34074 additions and 6561 deletions

View File

@@ -20,9 +20,28 @@ miss_map=$(echo "$new_map" | awk 'NR==FNR { new_map[$1 " " $2] = $3; next } { if
resolved_miss_map=$(
echo "$miss_map" | while read chart version commit; do
if [ "$commit" = HEAD ]; then
line=$(git show HEAD:"./$chart/Chart.yaml" | awk '/^version:/ {print NR; exit}')
change_commit=$(git --no-pager blame -L"$line",+1 HEAD -- "$chart/Chart.yaml" | awk '{print $1}')
commit=$(git describe --always "$change_commit~1")
line=$(awk '/^version:/ {print NR; exit}' "./$chart/Chart.yaml")
change_commit=$(git --no-pager blame -L"$line",+1 -- "$chart/Chart.yaml" | awk '{print $1}')
if [ "$change_commit" = "00000000" ]; then
# Not committed yet, use previous commit
line=$(git show HEAD:"./$chart/Chart.yaml" | awk '/^version:/ {print NR; exit}')
commit=$(git --no-pager blame -L"$line",+1 HEAD -- "$chart/Chart.yaml" | awk '{print $1}')
if [ $(echo $commit | cut -c1) = "^" ]; then
# Previous commit does not exist
commit=$(echo $commit | cut -c2-)
fi
else
# Committed, but version_map wasn't updated
line=$(git show HEAD:"./$chart/Chart.yaml" | awk '/^version:/ {print NR; exit}')
change_commit=$(git --no-pager blame -L"$line",+1 HEAD -- "$chart/Chart.yaml" | awk '{print $1}')
if [ $(echo $change_commit | cut -c1) = "^" ]; then
# Previous commit does not exist
commit=$(echo $change_commit | cut -c2-)
else
commit=$(git describe --always "$change_commit~1")
fi
fi
fi
echo "$chart $version $commit"
done
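The hunk above teaches the version-map helper to fall back to git blame when a chart's version line has changed but versions_map still says HEAD. A minimal sketch of the blame-based lookup used in the committed-but-unmapped branch, run from the chart repo root and assuming a committed chart directory named mychart:

# Locate the line declaring the chart version, then the commit that last touched it.
chart=mychart
line=$(awk '/^version:/ {print NR; exit}' "./$chart/Chart.yaml")
change_commit=$(git --no-pager blame -L"$line",+1 -- "$chart/Chart.yaml" | awk '{print $1}')
# Record the short id of the parent commit, as the script does for versions already released.
commit=$(git describe --always "$change_commit~1")
echo "$chart $commit"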

View File

@@ -1,25 +0,0 @@
#!/bin/sh
set -e
if [ -e $1 ]; then
echo "Please pass version in the first argument"
echo "Example: $0 0.2.0"
exit 1
fi
version=$1
talos_version=$(awk '/^version:/ {print $2}' packages/core/installer/images/talos/profiles/installer.yaml)
set -x
sed -i "/^TAG / s|=.*|= v${version}|" \
packages/apps/http-cache/Makefile \
packages/apps/kubernetes/Makefile \
packages/core/installer/Makefile \
packages/system/dashboard/Makefile
sed -i "/^VERSION / s|=.*|= ${version}|" \
packages/core/Makefile \
packages/system/Makefile
make -C packages/core fix-chartnames
make -C packages/system fix-chartnames

View File

@@ -15,13 +15,6 @@ metadata:
namespace: cozy-system
---
# Source: cozy-installer/templates/cozystack.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: cozystack
namespace: cozy-system
---
# Source: cozy-installer/templates/cozystack.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
@@ -70,7 +63,7 @@ spec:
serviceAccountName: cozystack
containers:
- name: cozystack
image: "ghcr.io/aenix-io/cozystack/cozystack:v0.2.0"
image: "ghcr.io/aenix-io/cozystack/cozystack:v0.4.0"
env:
- name: KUBERNETES_SERVICE_HOST
value: localhost
@@ -89,7 +82,7 @@ spec:
fieldRef:
fieldPath: metadata.name
- name: darkhttpd
image: "ghcr.io/aenix-io/cozystack/cozystack:v0.2.0"
image: "ghcr.io/aenix-io/cozystack/cozystack:v0.4.0"
command:
- /usr/bin/darkhttpd
- /cozystack/assets

View File

@@ -7,7 +7,7 @@ repo:
awk '$$3 != "HEAD" {print "mkdir -p $(TMP)/" $$1 "-" $$2}' versions_map | sh -ex
awk '$$3 != "HEAD" {print "git archive " $$3 " " $$1 " | tar -xf- --strip-components=1 -C $(TMP)/" $$1 "-" $$2 }' versions_map | sh -ex
helm package -d "$(OUT)" $$(find . $(TMP) -mindepth 2 -maxdepth 2 -name Chart.yaml | awk 'sub("/Chart.yaml", "")' | sort -V)
cd "$(OUT)" && helm repo index .
cd "$(OUT)" && helm repo index . --url http://cozystack.cozy-system.svc/repos/apps
rm -rf "$(TMP)"
fix-chartnames:
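For every non-HEAD entry in versions_map, the repo target above generates one mkdir and one git archive command. Expanded by hand for a single entry taken from the versions_map hunk further down (mysql 0.2.0 8b975ff0), the awk pipeline emits roughly:

# $TMP stands for the Makefile's temporary packaging directory.
mkdir -p "$TMP/mysql-0.2.0"
git archive 8b975ff0 mysql | tar -xf- --strip-components=1 -C "$TMP/mysql-0.2.0"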

View File

@@ -0,0 +1,25 @@
apiVersion: v2
name: clickhouse
description: Managed ClickHouse service
icon: https://cdn.worldvectorlogo.com/logos/clickhouse.svg
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.2.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "24.3.0"

View File

@@ -0,0 +1,36 @@
apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
name: "{{ .Release.Name }}"
spec:
{{- with .Values.size }}
defaults:
templates:
dataVolumeClaimTemplate: data-volume-template
{{- end }}
configuration:
{{- with .Values.users }}
users:
{{- range $name, $u := . }}
{{ $name }}/password_sha256_hex: {{ sha256sum $u.password }}
{{ $name }}/profile: {{ ternary "readonly" "default" (index $u "readonly" | default false) }}
{{- end }}
{{- end }}
profiles:
readonly/readonly: "1"
clusters:
- name: "clickhouse"
layout:
shardsCount: {{ .Values.shards }}
replicasCount: {{ .Values.replicas }}
{{- with .Values.size }}
templates:
volumeClaimTemplates:
- name: data-volume-template
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ . }}
{{- end }}

View File

@@ -0,0 +1,10 @@
size: 10Gi
shards: 1
replicas: 2
users:
user1:
password: strongpassword
user2:
readonly: true
password: hackme
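The ClickHouseInstallation template above stores only a SHA-256 digest of each password (via Helm's sha256sum function), never the plaintext. The digest written into user1/password_sha256_hex can be reproduced on the command line:

# No trailing newline in the input, otherwise the digest will not match.
printf '%s' 'strongpassword' | sha256sum | awk '{print $1}'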

View File

@@ -16,10 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
version: 0.2.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"
appVersion: "1.25.3"

View File

@@ -1,22 +1,20 @@
PUSH := 1
LOAD := 0
REGISTRY := ghcr.io/aenix-io/cozystack
NGINX_CACHE_TAG = v0.1.0
TAG := v0.2.0
include ../../../scripts/common-envs.mk
image: image-nginx
image-nginx:
docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/nginx-cache \
--provenance false \
--tag $(REGISTRY)/nginx-cache:$(NGINX_CACHE_TAG) \
--tag $(REGISTRY)/nginx-cache:$(NGINX_CACHE_TAG)-$(TAG) \
--cache-from type=registry,ref=$(REGISTRY)/nginx-cache:$(NGINX_CACHE_TAG) \
--tag $(REGISTRY)/nginx-cache:$(call settag,$(NGINX_CACHE_TAG)) \
--tag $(REGISTRY)/nginx-cache:$(call settag,$(NGINX_CACHE_TAG)-$(TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/nginx-cache:latest \
--cache-to type=inline \
--metadata-file images/nginx-cache.json \
--push=$(PUSH) \
--load=$(LOAD)
echo "$(REGISTRY)/nginx-cache:$(NGINX_CACHE_TAG)" > images/nginx-cache.tag
echo "$(REGISTRY)/nginx-cache:$(call settag,$(NGINX_CACHE_TAG))" > images/nginx-cache.tag
update:
tag=$$(git ls-remote --tags --sort="v:refname" https://github.com/chrislim2888/IP2Location-C-Library | awk -F'[/^]' 'END{print $$3}') && \

View File

@@ -1,4 +1,4 @@
{
"containerimage.config.digest": "sha256:0487fc50bb5f870720b05e947185424a400fad38b682af8f1ca4b418ed3c5b4b",
"containerimage.digest": "sha256:be12f3834be0e2f129685f682fab83c871610985fc43668ce6a294c9de603798"
"containerimage.config.digest": "sha256:9eb68d2d503d7e22afc6fde2635f566fd3456bbdb3caad5dc9f887be1dc2b8ab",
"containerimage.digest": "sha256:1f44274dbc2c3be2a98e6cef83d68a041ae9ef31abb8ab069a525a2a92702bdd"
}

View File

@@ -74,7 +74,7 @@ data:
option redispatch 1
default-server observe layer7 error-limit 10 on-error mark-down
{{- range $i, $e := until (int $.Values.replicas) }}
{{- range $i, $e := until (int $.Values.nginx.replicas) }}
server cache{{ $i }} {{ $.Release.Name }}-nginx-cache-{{ $i }}:80 check
{{- end }}
{{- range $i, $e := $.Values.endpoints }}

View File

@@ -7,7 +7,7 @@ metadata:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: 2
replicas: {{ .Values.haproxy.replicas }}
selector:
matchLabels:
app: {{ .Release.Name }}-haproxy

View File

@@ -11,7 +11,7 @@ spec:
selector:
matchLabels:
app: {{ $.Release.Name }}-nginx-cache
{{- range $i := until 3 }}
{{- range $i := until (int $.Values.nginx.replicas) }}
---
apiVersion: apps/v1
kind: Deployment

View File

@@ -1,4 +1,10 @@
external: false
haproxy:
replicas: 2
nginx:
replicas: 2
size: 10Gi
endpoints:
- 10.100.3.1:80

View File

@@ -0,0 +1,25 @@
apiVersion: v2
name: kafka
description: Managed Kafka service
icon: https://upload.wikimedia.org/wikipedia/commons/0/05/Apache_kafka.svg
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "3.7.0"

View File

@@ -0,0 +1,53 @@
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
name: {{ .Release.Name }}
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
kafka:
replicas: {{ .Values.replicas }}
listeners:
- name: plain
port: 9092
type: internal
tls: false
- name: tls
port: 9093
type: internal
tls: true
- name: external
port: 9094
{{- if .Values.external }}
type: loadbalancer
{{- else }}
type: internal
{{- end }}
tls: false
config:
offsets.topic.replication.factor: 3
transaction.state.log.replication.factor: 3
transaction.state.log.min.isr: 2
default.replication.factor: 3
min.insync.replicas: 2
storage:
type: jbod
volumes:
- id: 0
type: persistent-claim
{{- with .Values.kafka.size }}
size: {{ . }}
{{- end }}
deleteClaim: true
zookeeper:
replicas: {{ .Values.replicas }}
storage:
type: persistent-claim
{{- with .Values.zookeeper.size }}
size: {{ . }}
{{- end }}
deleteClaim: false
entityOperator:
topicOperator: {}
userOperator: {}

View File

@@ -0,0 +1,17 @@
{{- range $topic := .Values.topics }}
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
name: "{{ $.Release.Name }}-{{ kebabcase $topic.name }}"
labels:
strimzi.io/cluster: "{{ $.Release.Name }}"
spec:
topicName: "{{ $topic.name }}"
partitions: 10
replicas: 3
{{- with $topic.config }}
config:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,22 @@
external: false
kafka:
size: 10Gi
replicas: 3
zookeeper:
size: 5Gi
replicas: 3
topics:
- name: Results
partitions: 1
replicas: 3
config:
min.insync.replicas: 2
- name: Orders
config:
cleanup.policy: compact
segment.ms: 3600000
max.compaction.lag.ms: 5400000
min.insync.replicas: 2
partitions: 1
replicationFactor: 3
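Topic names from these values are kebab-cased for the KafkaTopic resource name, while spec.topicName keeps the original casing (see the template two files above). A hypothetical offline check for a release named kafka (chart path assumed):

# "Results" and "Orders" become resources kafka-results and kafka-orders.
helm template kafka packages/apps/kafka | grep -E 'name: "kafka-|topicName:'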

View File

@@ -16,10 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
version: 0.2.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"
appVersion: "1.19.0"

View File

@@ -1,19 +1,17 @@
PUSH := 1
LOAD := 0
REGISTRY := ghcr.io/aenix-io/cozystack
TAG := v0.2.0
UBUNTU_CONTAINER_DISK_TAG = v1.29.1
include ../../../scripts/common-envs.mk
image: image-ubuntu-container-disk
image-ubuntu-container-disk:
docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/ubuntu-container-disk \
--provenance false \
--tag $(REGISTRY)/ubuntu-container-disk:$(UBUNTU_CONTAINER_DISK_TAG) \
--tag $(REGISTRY)/ubuntu-container-disk:$(UBUNTU_CONTAINER_DISK_TAG)-$(TAG) \
--cache-from type=registry,ref=$(REGISTRY)/ubuntu-container-disk:$(UBUNTU_CONTAINER_DISK_TAG) \
--tag $(REGISTRY)/ubuntu-container-disk:$(call settag,$(UBUNTU_CONTAINER_DISK_TAG)) \
--tag $(REGISTRY)/ubuntu-container-disk:$(call settag,$(UBUNTU_CONTAINER_DISK_TAG)-$(TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/ubuntu-container-disk:latest \
--cache-to type=inline \
--metadata-file images/ubuntu-container-disk.json \
--push=$(PUSH) \
--load=$(LOAD)
echo "$(REGISTRY)/ubuntu-container-disk:$(UBUNTU_CONTAINER_DISK_TAG)" > images/ubuntu-container-disk.tag
echo "$(REGISTRY)/ubuntu-container-disk:$(call settag,$(UBUNTU_CONTAINER_DISK_TAG))" > images/ubuntu-container-disk.tag

View File

@@ -1,4 +1,4 @@
{
"containerimage.config.digest": "sha256:43d0bfd01c5e364ba961f1e3dc2c7ccd7fd4ca65bd26bc8c4a5298d7ff2c9f4f",
"containerimage.digest": "sha256:908b3c186bee86f1c9476317eb6582d07f19776b291aa068e5642f8fd08fa9e7"
"containerimage.config.digest": "sha256:a7e8e6e35ac07bcf6253c9cfcf21fd3c315bd0653ad0427dd5f0cae95ffd3722",
"containerimage.digest": "sha256:c03bffeeb70fe7dd680d2eca3021d2405fbcd9961dd38437f5673560c31c72cc"
}

View File

@@ -15,6 +15,12 @@ spec:
labels:
app: {{ .Release.Name }}-cluster-autoscaler
spec:
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: "NoSchedule"
containers:
- image: ghcr.io/kvaps/test:cluster-autoscaller
name: cluster-autoscaler

View File

@@ -64,12 +64,13 @@ metadata:
cluster.x-k8s.io/managed-by: kamaji
name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
{{- range $groupName, $group := .Values.nodeGroups }}
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
name: {{ .Release.Name }}-md-0
namespace: {{ .Release.Namespace }}
name: {{ $.Release.Name }}-{{ $groupName }}
namespace: {{ $.Release.Namespace }}
spec:
template:
spec:
@@ -78,7 +79,7 @@ spec:
kubeletExtraArgs: {}
discovery:
bootstrapToken:
apiServerEndpoint: {{ .Release.Name }}.{{ .Release.Namespace }}.svc:6443
apiServerEndpoint: {{ $.Release.Name }}.{{ $.Release.Namespace }}.svc:6443
initConfiguration:
skipPhases:
- addon/kube-proxy
@@ -86,8 +87,8 @@ spec:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: KubevirtMachineTemplate
metadata:
name: {{ .Release.Name }}-md-0
namespace: {{ .Release.Namespace }}
name: {{ $.Release.Name }}-{{ $groupName }}
namespace: {{ $.Release.Namespace }}
spec:
template:
spec:
@@ -95,7 +96,7 @@ spec:
checkStrategy: ssh
virtualMachineTemplate:
metadata:
namespace: {{ .Release.Namespace }}
namespace: {{ $.Release.Namespace }}
spec:
runStrategy: Always
template:
@@ -103,7 +104,7 @@ spec:
domain:
cpu:
threads: 1
cores: 2
cores: {{ $group.resources.cpu }}
sockets: 1
devices:
disks:
@@ -112,7 +113,7 @@ spec:
name: containervolume
networkInterfaceMultiqueue: true
memory:
guest: 1024Mi
guest: {{ $group.resources.memory }}
evictionStrategy: External
volumes:
- containerDisk:
@@ -122,29 +123,28 @@ spec:
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
name: {{ .Release.Name }}-md-0
namespace: {{ .Release.Namespace }}
name: {{ $.Release.Name }}-{{ $groupName }}
namespace: {{ $.Release.Namespace }}
annotations:
cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "2"
cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "0"
capacity.cluster-autoscaler.kubernetes.io/memory: "1024Mi"
capacity.cluster-autoscaler.kubernetes.io/cpu: "2"
cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "{{ $group.minReplicas }}"
cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "{{ $group.maxReplicas }}"
capacity.cluster-autoscaler.kubernetes.io/memory: "{{ $group.resources.memory }}"
capacity.cluster-autoscaler.kubernetes.io/cpu: "{{ $group.resources.cpu }}"
spec:
clusterName: {{ .Release.Name }}
selector:
matchLabels: null
clusterName: {{ $.Release.Name }}
template:
spec:
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
name: {{ .Release.Name }}-md-0
name: {{ $.Release.Name }}-{{ $groupName }}
namespace: default
clusterName: {{ .Release.Name }}
clusterName: {{ $.Release.Name }}
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: KubevirtMachineTemplate
name: {{ .Release.Name }}-md-0
name: {{ $.Release.Name }}-{{ $groupName }}
namespace: default
version: v1.23.10
version: v1.29.0
{{- end }}

View File

@@ -16,12 +16,10 @@ spec:
spec:
serviceAccountName: {{ .Release.Name }}-kcsi
priorityClassName: system-cluster-critical
nodeSelector:
node-role.kubernetes.io/control-plane: ""
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: "NoSchedule"
containers:

View File

@@ -12,6 +12,12 @@ spec:
spec:
serviceAccountName: {{ .Release.Name }}-flux-teardown
restartPolicy: Never
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: "NoSchedule"
containers:
- name: kubectl
image: docker.io/clastix/kubectl:v1.29.1

View File

@@ -14,6 +14,12 @@ spec:
labels:
k8s-app: {{ .Release.Name }}-kccm
spec:
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: "NoSchedule"
containers:
- name: kubevirt-cloud-controller-manager
args:
@@ -44,6 +50,4 @@ spec:
- secret:
secretName: {{ .Release.Name }}-admin-kubeconfig
name: kubeconfig
tolerations:
- operator: Exists
serviceAccountName: {{ .Release.Name }}-kccm

View File

@@ -1,11 +0,0 @@
{
"$schema": "http://json-schema.org/schema#",
"type": "object",
"properties": {
"host": {
"type": "string",
"title": "Domain name for this kubernetes cluster",
"description": "This host will be used for all apps deployed in this tenant"
}
}
}

View File

@@ -1 +1,10 @@
host: ""
controlPlane:
replicas: 2
nodeGroups:
md0:
minReplicas: 0
maxReplicas: 10
resources:
cpu: 2
memory: 1024Mi
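Each key under nodeGroups is now rendered into its own KubeadmConfigTemplate, KubevirtMachineTemplate and MachineDeployment, with autoscaler bounds and capacity annotations taken from the group's values. A hedged offline render to inspect that output (chart path and release name are assumptions):

# Overrides merge into the md0 defaults shown above; the grep isolates the MachineDeployment.
helm template mycluster packages/apps/kubernetes \
--set nodeGroups.md0.maxReplicas=5 \
--set nodeGroups.md0.resources.cpu=4 \
--set nodeGroups.md0.resources.memory=4096Mi \
| grep -B2 -A10 'kind: MachineDeployment'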

View File

@@ -16,10 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.2.0
version: 0.3.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"
appVersion: "11.0.2"

View File

@@ -12,7 +12,7 @@ spec:
port: 3306
replicas: 2
replicas: {{ .Values.replicas }}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
@@ -28,11 +28,13 @@ spec:
- {{ .Release.Name }}
topologyKey: "kubernetes.io/hostname"
{{- if gt (int .Values.replicas) 1 }}
replication:
enabled: true
#primary:
# podIndex: 0
# automaticFailover: true
{{- end }}
metrics:
enabled: true

View File

@@ -1,6 +1,8 @@
external: false
size: 10Gi
replicas: 2
users:
root:
password: strongpassword

View File

@@ -16,10 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
version: 0.2.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"
appVersion: "16.2"

View File

@@ -4,7 +4,7 @@ kind: Cluster
metadata:
name: {{ .Release.Name }}
spec:
instances: 2
instances: {{ .Values.replicas }}
enableSuperuserAccess: true
postgresql:

View File

@@ -1,5 +1,6 @@
external: false
size: 10Gi
replicas: 2
users:
user1:

View File

@@ -16,10 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
version: 0.2.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"
appVersion: "3.12.2"

View File

@@ -6,7 +6,7 @@ metadata:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: 3
replicas: {{ .Values.replicas }}
{{- if .Values.external }}
service:
type: LoadBalancer

View File

@@ -5,6 +5,10 @@
"external": {
"type": "boolean",
"title": "Enable external Access"
},
"replicas": {
"type": "integer",
"title": "Replicas"
}
}
}

View File

@@ -1 +1,2 @@
replicas: 3
external: false

View File

@@ -16,10 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.1
version: 0.2.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"
appVersion: "6.2.6"

View File

@@ -14,7 +14,7 @@ spec:
limits:
memory: 100Mi
redis:
replicas: 3
replicas: {{ .Values.replicas }}
resources:
requests:
cpu: 150m

View File

@@ -9,6 +9,10 @@
"size": {
"type": "string",
"title": "Disk Size"
},
"replicas": {
"type": "integer",
"title": "Replicas"
}
}
}

View File

@@ -1,2 +1,3 @@
replicas: 2
external: false
size: 5Gi

View File

@@ -16,10 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
version: 0.2.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"
appVersion: "2.9.7"

View File

@@ -7,7 +7,7 @@ metadata:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: 2
replicas: {{ .Values.replicas }}
selector:
matchLabels:
app: {{ .Release.Name }}-haproxy

View File

@@ -1,4 +1,5 @@
external: false
replicas: 2
httpAndHttps:
mode: tcp
targetPorts:

View File

@@ -1,15 +1,26 @@
http-cache 0.1.0 HEAD
kubernetes 0.1.0 HEAD
clickhouse 0.1.0 ca79f72
clickhouse 0.2.0 HEAD
http-cache 0.1.0 a956713
http-cache 0.2.0 HEAD
kafka 0.1.0 HEAD
kubernetes 0.1.0 f642698
kubernetes 0.2.0 HEAD
mysql 0.1.0 f642698
mysql 0.2.0 HEAD
postgres 0.1.0 HEAD
rabbitmq 0.1.0 HEAD
redis 0.1.1 HEAD
tcp-balancer 0.1.0 HEAD
mysql 0.2.0 8b975ff0
mysql 0.3.0 HEAD
postgres 0.1.0 f642698
postgres 0.2.0 HEAD
rabbitmq 0.1.0 f642698
rabbitmq 0.2.0 HEAD
redis 0.1.1 f642698
redis 0.2.0 HEAD
tcp-balancer 0.1.0 f642698
tcp-balancer 0.2.0 HEAD
tenant 0.1.3 3d1b86c
tenant 0.1.4 d200480
tenant 0.1.5 e3ab858
tenant 1.0.0 HEAD
virtual-machine 0.1.4 f2015d6
virtual-machine 0.1.5 HEAD
vpn 0.1.0 HEAD
vpn 0.1.0 f642698
vpn 0.2.0 HEAD

View File

@@ -1,6 +1,6 @@
apiVersion: v2
name: vpn
description: Establish a connection from your computer
description: Managed VPN service
icon: https://upload.wikimedia.org/wikipedia/commons/thumb/6/60/Outline_VPN_icon.png/600px-Outline_VPN_icon.png
# A chart can be either an 'application' or a 'library' chart.
@@ -16,10 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
version: 0.2.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"
appVersion: "1.8.1"

View File

@@ -4,7 +4,7 @@ kind: Deployment
metadata:
name: {{ .Release.Name }}-vpn
spec:
replicas: 2
replicas: {{ .Values.replicas }}
selector:
matchLabels:
app: {{ .Release.Name }}-vpn

View File

@@ -1,4 +1,5 @@
external: false
replicas: 2
users:
user1:

View File

@@ -1,6 +0,0 @@
VERSION := 0.2.0
gen: fix-chartnames
fix-chartnames:
find . -name Chart.yaml -maxdepth 2 | awk -F/ '{print $$2}' | while read i; do printf "name: cozy-%s\nversion: $(VERSION)\n" "$$i" > "$$i/Chart.yaml"; done

View File

@@ -1,2 +1,3 @@
apiVersion: v2
name: cozy-fluxcd
version: 0.2.0
version: 0.0.0 # Placeholder, the actual version will be automatically set during the build process

View File

@@ -1,5 +1,5 @@
NAMESPACE=cozy-fluxcd
NAME=fluxcd
NAMESPACE=cozy-$(NAME)
API_VERSIONS_FLAGS=$(addprefix -a ,$(shell kubectl api-versions))

View File

@@ -1,2 +1,3 @@
apiVersion: v2
name: cozy-installer
version: 0.2.0
version: 0.0.0 # Placeholder, the actual version will be automatically set during the build process

View File

@@ -1,11 +1,10 @@
NAMESPACE=cozy-system
NAME=installer
PUSH := 1
LOAD := 0
REGISTRY := ghcr.io/aenix-io/cozystack
TAG := v0.2.0
NAMESPACE=cozy-system
TALOS_VERSION=$(shell awk '/^version:/ {print $$2}' images/talos/profiles/installer.yaml)
include ../../../scripts/common-envs.mk
show:
helm template -n $(NAMESPACE) $(NAME) .
@@ -24,33 +23,33 @@ image-cozystack:
make -C ../../.. repos
docker buildx build -f images/cozystack/Dockerfile ../../.. \
--provenance false \
--tag $(REGISTRY)/cozystack:$(TAG) \
--cache-from type=registry,ref=$(REGISTRY)/cozystack:$(TAG) \
--tag $(REGISTRY)/cozystack:$(call settag,$(TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/cozystack:latest \
--cache-to type=inline \
--metadata-file images/cozystack.json \
--push=$(PUSH) \
--load=$(LOAD)
echo "$(REGISTRY)/cozystack:$(TAG)" > images/cozystack.tag
echo "$(REGISTRY)/cozystack:$(call settag,$(TAG))" > images/cozystack.tag
image-talos:
test -f ../../../_out/assets/installer-amd64.tar || make talos-installer
docker load -i ../../../_out/assets/installer-amd64.tar
docker tag ghcr.io/siderolabs/installer:$(TALOS_VERSION) ghcr.io/aenix-io/cozystack/talos:$(TALOS_VERSION)
docker push ghcr.io/aenix-io/cozystack/talos:$(TALOS_VERSION)
docker tag ghcr.io/siderolabs/installer:$(TALOS_VERSION) ghcr.io/aenix-io/cozystack/talos:$(call settag,$(TALOS_VERSION))
docker push ghcr.io/aenix-io/cozystack/talos:$(call settag,$(TALOS_VERSION))
image-matchbox:
test -f ../../../_out/assets/kernel-amd64 || make talos-kernel
test -f ../../../_out/assets/initramfs-metal-amd64.xz || make talos-initramfs
docker buildx build -f images/matchbox/Dockerfile ../../.. \
--provenance false \
--tag $(REGISTRY)/matchbox:$(TAG) \
--tag $(REGISTRY)/matchbox:$(TALOS_VERSION)-$(TAG) \
--cache-from type=registry,ref=$(REGISTRY)/matchbox:$(TALOS_VERSION) \
--tag $(REGISTRY)/matchbox:$(call settag,$(TAG)) \
--tag $(REGISTRY)/matchbox:$(call settag,$(TALOS_VERSION)-$(TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/matchbox:latest \
--cache-to type=inline \
--metadata-file images/matchbox.json \
--push=$(PUSH) \
--load=$(LOAD)
echo "$(REGISTRY)/matchbox:$(TALOS_VERSION)" > images/matchbox.tag
echo "$(REGISTRY)/matchbox:$(call settag,$(TALOS_VERSION))" > images/matchbox.tag
assets: talos-iso talos-nocloud

View File

@@ -1,4 +1,4 @@
{
"containerimage.config.digest": "sha256:326a169fb5d4277a5c3b0359e0c885b31d1360b58475bbc316be1971c710cd8d",
"containerimage.digest": "sha256:a608bdb75b3e06f6365f5f0b3fea82ac93c564d11f316f17e3d46e8a497a321d"
"containerimage.config.digest": "sha256:aefc3ca9f56f69270d7ce6f56a1ce5b531332d5641481eb54c8e74b66b0f3341",
"containerimage.digest": "sha256:a2bf43cb7eb812166edfeb1a4fae6a76a4ddba93be2c0ba9040a804ccb53c261"
}

View File

@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/cozystack:v0.2.0
ghcr.io/aenix-io/cozystack/cozystack:v0.4.0

View File

@@ -1,4 +1,4 @@
{
"containerimage.config.digest": "sha256:dc584f743bb73e04dcbebca7ab4f602f2c067190fd9609c3fd84412e83c20445",
"containerimage.digest": "sha256:39ab0bf769b269a8082eeb31a9672e39caa61dd342ba2157b954c642f54a32ff"
"containerimage.config.digest": "sha256:68ea72fcc581352fabfd87fa6fd482968cc85ee520cab7a614f1244d7ae36eb0",
"containerimage.digest": "sha256:cea915e08a19eb6892f3facf3b3648368cd4a05abefc49bc2616ba3340c27e82"
}

View File

@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/matchbox:v1.6.4
ghcr.io/aenix-io/cozystack/matchbox:v1.7.1

View File

@@ -3,24 +3,25 @@
arch: amd64
platform: metal
secureboot: false
version: v1.6.4
version: v1.7.1
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: ghcr.io/siderolabs/installer:v1.6.4
imageRef: ghcr.io/siderolabs/installer:v1.7.1
systemExtensions:
- imageRef: ghcr.io/siderolabs/amd-ucode:20240115
- imageRef: ghcr.io/siderolabs/amdgpu-firmware:20240115
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20240115
- imageRef: ghcr.io/siderolabs/i915-ucode:20240115
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20240115
- imageRef: ghcr.io/siderolabs/intel-ucode:20231114
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20240115
- imageRef: ghcr.io/siderolabs/drbd:9.2.6-v1.6.4
- imageRef: ghcr.io/siderolabs/zfs:2.1.14-v1.6.4
- imageRef: ghcr.io/siderolabs/amd-ucode:20240410
- imageRef: ghcr.io/siderolabs/amdgpu-firmware:20240410
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20240410
- imageRef: ghcr.io/siderolabs/i915-ucode:20240410
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20240410
- imageRef: ghcr.io/siderolabs/intel-ucode:20240312
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20240410
- imageRef: ghcr.io/siderolabs/drbd:9.2.8-v1.7.1
- imageRef: ghcr.io/siderolabs/zfs:2.2.3-v1.7.1
output:
kind: initramfs
imageOptions: {}
outFormat: raw

View File

@@ -3,24 +3,25 @@
arch: amd64
platform: metal
secureboot: false
version: v1.6.4
version: v1.7.1
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: ghcr.io/siderolabs/installer:v1.6.4
imageRef: ghcr.io/siderolabs/installer:v1.7.1
systemExtensions:
- imageRef: ghcr.io/siderolabs/amd-ucode:20240115
- imageRef: ghcr.io/siderolabs/amdgpu-firmware:20240115
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20240115
- imageRef: ghcr.io/siderolabs/i915-ucode:20240115
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20240115
- imageRef: ghcr.io/siderolabs/intel-ucode:20231114
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20240115
- imageRef: ghcr.io/siderolabs/drbd:9.2.6-v1.6.4
- imageRef: ghcr.io/siderolabs/zfs:2.1.14-v1.6.4
- imageRef: ghcr.io/siderolabs/amd-ucode:20240410
- imageRef: ghcr.io/siderolabs/amdgpu-firmware:20240410
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20240410
- imageRef: ghcr.io/siderolabs/i915-ucode:20240410
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20240410
- imageRef: ghcr.io/siderolabs/intel-ucode:20240312
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20240410
- imageRef: ghcr.io/siderolabs/drbd:9.2.8-v1.7.1
- imageRef: ghcr.io/siderolabs/zfs:2.2.3-v1.7.1
output:
kind: installer
imageOptions: {}
outFormat: raw

View File

@@ -3,24 +3,25 @@
arch: amd64
platform: metal
secureboot: false
version: v1.6.4
version: v1.7.1
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: ghcr.io/siderolabs/installer:v1.6.4
imageRef: ghcr.io/siderolabs/installer:v1.7.1
systemExtensions:
- imageRef: ghcr.io/siderolabs/amd-ucode:20240115
- imageRef: ghcr.io/siderolabs/amdgpu-firmware:20240115
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20240115
- imageRef: ghcr.io/siderolabs/i915-ucode:20240115
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20240115
- imageRef: ghcr.io/siderolabs/intel-ucode:20231114
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20240115
- imageRef: ghcr.io/siderolabs/drbd:9.2.6-v1.6.4
- imageRef: ghcr.io/siderolabs/zfs:2.1.14-v1.6.4
- imageRef: ghcr.io/siderolabs/amd-ucode:20240410
- imageRef: ghcr.io/siderolabs/amdgpu-firmware:20240410
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20240410
- imageRef: ghcr.io/siderolabs/i915-ucode:20240410
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20240410
- imageRef: ghcr.io/siderolabs/intel-ucode:20240312
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20240410
- imageRef: ghcr.io/siderolabs/drbd:9.2.8-v1.7.1
- imageRef: ghcr.io/siderolabs/zfs:2.2.3-v1.7.1
output:
kind: iso
imageOptions: {}
outFormat: raw

View File

@@ -3,24 +3,25 @@
arch: amd64
platform: metal
secureboot: false
version: v1.6.4
version: v1.7.1
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: ghcr.io/siderolabs/installer:v1.6.4
imageRef: ghcr.io/siderolabs/installer:v1.7.1
systemExtensions:
- imageRef: ghcr.io/siderolabs/amd-ucode:20240115
- imageRef: ghcr.io/siderolabs/amdgpu-firmware:20240115
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20240115
- imageRef: ghcr.io/siderolabs/i915-ucode:20240115
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20240115
- imageRef: ghcr.io/siderolabs/intel-ucode:20231114
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20240115
- imageRef: ghcr.io/siderolabs/drbd:9.2.6-v1.6.4
- imageRef: ghcr.io/siderolabs/zfs:2.1.14-v1.6.4
- imageRef: ghcr.io/siderolabs/amd-ucode:20240410
- imageRef: ghcr.io/siderolabs/amdgpu-firmware:20240410
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20240410
- imageRef: ghcr.io/siderolabs/i915-ucode:20240410
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20240410
- imageRef: ghcr.io/siderolabs/intel-ucode:20240312
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20240410
- imageRef: ghcr.io/siderolabs/drbd:9.2.8-v1.7.1
- imageRef: ghcr.io/siderolabs/zfs:2.2.3-v1.7.1
output:
kind: kernel
imageOptions: {}
outFormat: raw

View File

@@ -3,25 +3,25 @@
arch: amd64
platform: metal
secureboot: false
version: v1.6.4
version: v1.7.1
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: ghcr.io/siderolabs/installer:v1.6.4
imageRef: ghcr.io/siderolabs/installer:v1.7.1
systemExtensions:
- imageRef: ghcr.io/siderolabs/amd-ucode:20240115
- imageRef: ghcr.io/siderolabs/amdgpu-firmware:20240115
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20240115
- imageRef: ghcr.io/siderolabs/i915-ucode:20240115
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20240115
- imageRef: ghcr.io/siderolabs/intel-ucode:20231114
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20240115
- imageRef: ghcr.io/siderolabs/drbd:9.2.6-v1.6.4
- imageRef: ghcr.io/siderolabs/zfs:2.1.14-v1.6.4
- imageRef: ghcr.io/siderolabs/amd-ucode:20240410
- imageRef: ghcr.io/siderolabs/amdgpu-firmware:20240410
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20240410
- imageRef: ghcr.io/siderolabs/i915-ucode:20240410
- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20240410
- imageRef: ghcr.io/siderolabs/intel-ucode:20240312
- imageRef: ghcr.io/siderolabs/qlogic-firmware:20240410
- imageRef: ghcr.io/siderolabs/drbd:9.2.8-v1.7.1
- imageRef: ghcr.io/siderolabs/zfs:2.2.3-v1.7.1
output:
kind: image
kind: nocloud
imageOptions: { diskSize: 1306525696, diskFormat: raw }
outFormat: .xz

View File

@@ -12,12 +12,6 @@ metadata:
name: cozystack
namespace: cozy-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: cozystack
namespace: cozy-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:

View File

@@ -1,2 +1,3 @@
apiVersion: v2
name: cozy-platform
version: 0.2.0
version: 0.0.0 # Placeholder, the actual version will be automatically set during the build process

View File

@@ -1,5 +1,5 @@
NAMESPACE=cozy-system
NAME=platform
NAMESPACE=cozy-system
API_VERSIONS_FLAGS=$(addprefix -a ,$(shell kubectl api-versions))

View File

@@ -52,6 +52,12 @@ releases:
privileged: true
dependsOn: [cilium]
- name: etcd-operator
releaseName: etcd-operator
chart: cozy-etcd-operator
namespace: cozy-etcd-operator
dependsOn: [cilium,cert-manager]
- name: grafana-operator
releaseName: grafana-operator
chart: cozy-grafana-operator
@@ -76,6 +82,12 @@ releases:
namespace: cozy-kafka-operator
dependsOn: [cilium,kubeovn]
- name: clickhouse-operator
releaseName: clickhouse-operator
chart: cozy-clickhouse-operator
namespace: cozy-clickhouse-operator
dependsOn: [cilium,kubeovn]
- name: rabbitmq-operator
releaseName: rabbitmq-operator
chart: cozy-rabbitmq-operator

View File

@@ -26,6 +26,12 @@ releases:
privileged: true
dependsOn: [victoria-metrics-operator]
- name: etcd-operator
releaseName: etcd-operator
chart: cozy-etcd-operator
namespace: cozy-etcd-operator
dependsOn: [cert-manager]
- name: grafana-operator
releaseName: grafana-operator
chart: cozy-grafana-operator
@@ -50,6 +56,12 @@ releases:
namespace: cozy-kafka-operator
dependsOn: [cilium,kubeovn]
- name: clickhouse-operator
releaseName: clickhouse-operator
chart: cozy-clickhouse-operator
namespace: cozy-clickhouse-operator
dependsOn: [cilium,kubeovn]
- name: rabbitmq-operator
releaseName: rabbitmq-operator
chart: cozy-rabbitmq-operator

View File

@@ -81,6 +81,12 @@ releases:
privileged: true
dependsOn: [cilium,kubeovn]
- name: etcd-operator
releaseName: etcd-operator
chart: cozy-etcd-operator
namespace: cozy-etcd-operator
dependsOn: [cilium,kubeovn,cert-manager]
- name: grafana-operator
releaseName: grafana-operator
chart: cozy-grafana-operator
@@ -105,6 +111,12 @@ releases:
namespace: cozy-kafka-operator
dependsOn: [cilium,kubeovn]
- name: clickhouse-operator
releaseName: clickhouse-operator
chart: cozy-clickhouse-operator
namespace: cozy-clickhouse-operator
dependsOn: [cilium,kubeovn]
- name: rabbitmq-operator
releaseName: rabbitmq-operator
chart: cozy-rabbitmq-operator

View File

@@ -26,6 +26,12 @@ releases:
privileged: true
dependsOn: [victoria-metrics-operator]
- name: etcd-operator
releaseName: etcd-operator
chart: cozy-etcd-operator
namespace: cozy-etcd-operator
dependsOn: [cert-manager]
- name: grafana-operator
releaseName: grafana-operator
chart: cozy-grafana-operator
@@ -50,6 +56,12 @@ releases:
namespace: cozy-kafka-operator
dependsOn: [cilium,kubeovn]
- name: clickhouse-operator
releaseName: clickhouse-operator
chart: cozy-clickhouse-operator
namespace: cozy-clickhouse-operator
dependsOn: [cilium,kubeovn]
- name: rabbitmq-operator
releaseName: rabbitmq-operator
chart: cozy-rabbitmq-operator

View File

@@ -23,9 +23,11 @@ spec:
interval: 1m
releaseName: {{ $x.releaseName | default $x.name }}
install:
crds: CreateReplace
remediation:
retries: -1
upgrade:
crds: CreateReplace
remediation:
retries: -1
chart:

View File

@@ -7,7 +7,7 @@ repo:
awk '$$3 != "HEAD" {print "mkdir -p $(TMP)/" $$1 "-" $$2}' versions_map | sh -ex
awk '$$3 != "HEAD" {print "git archive " $$3 " " $$1 " | tar -xf- --strip-components=1 -C $(TMP)/" $$1 "-" $$2 }' versions_map | sh -ex
helm package -d "$(OUT)" $$(find . $(TMP) -mindepth 2 -maxdepth 2 -name Chart.yaml | awk 'sub("/Chart.yaml", "")' | sort -V)
cd "$(OUT)" && helm repo index .
cd "$(OUT)" && helm repo index . --url http://cozystack.cozy-system.svc/repos/extra
rm -rf "$(TMP)"
fix-chartnames:

View File

@@ -3,4 +3,4 @@ name: etcd
description: Storage for Kubernetes clusters
icon: https://www.svgrepo.com/show/353714/etcd.svg
type: application
version: 1.0.0
version: 2.0.0

View File

@@ -0,0 +1,50 @@
---
apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
metadata:
name: {{ .Release.Namespace }}
spec:
driver: etcd
endpoints:
- etcd-0.etcd-headless.{{ .Release.Namespace }}.svc:2379
- etcd-1.etcd-headless.{{ .Release.Namespace }}.svc:2379
- etcd-2.etcd-headless.{{ .Release.Namespace }}.svc:2379
tlsConfig:
certificateAuthority:
certificate:
secretReference:
keyPath: tls.crt
name: etcd-ca-tls
namespace: {{ .Release.Namespace }}
privateKey:
secretReference:
keyPath: tls.key
name: etcd-ca-tls
namespace: {{ .Release.Namespace }}
clientCertificate:
certificate:
secretReference:
keyPath: tls.crt
name: etcd-client-tls
namespace: {{ .Release.Namespace }}
privateKey:
secretReference:
keyPath: tls.key
name: etcd-client-tls
namespace: {{ .Release.Namespace }}
---
apiVersion: v1
kind: Secret
metadata:
name: etcd-ca-tls
annotations:
helm.sh/hook: pre-install
helm.sh/resource-policy: keep
---
apiVersion: v1
kind: Secret
metadata:
name: etcd-client-tls
annotations:
helm.sh/hook: pre-install
helm.sh/resource-policy: keep

View File

@@ -0,0 +1,176 @@
---
apiVersion: etcd.aenix.io/v1alpha1
kind: EtcdCluster
metadata:
name: etcd
spec:
storage: {}
security:
tls:
peerTrustedCASecret: etcd-peer-ca-tls
peerSecret: etcd-peer-tls
serverSecret: etcd-server-tls
clientTrustedCASecret: etcd-ca-tls
clientSecret: etcd-client-tls
podTemplate:
spec:
topologySpreadConstraints:
- maxSkew: 1
topologyKey: "kubernetes.io/hostname"
whenUnsatisfiable: ScheduleAnyway
labelSelector:
matchLabels:
app.kubernetes.io/instance: etcd
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: etcd-selfsigning-issuer
spec:
selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: etcd-peer-ca
spec:
isCA: true
usages:
- "signing"
- "key encipherment"
- "cert sign"
commonName: etcd-peer-ca
subject:
organizations:
- ACME Inc.
organizationalUnits:
- Widgets
secretName: etcd-peer-ca-tls
privateKey:
algorithm: RSA
size: 4096
issuerRef:
name: etcd-selfsigning-issuer
kind: Issuer
group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: etcd-ca
spec:
isCA: true
usages:
- "signing"
- "key encipherment"
- "cert sign"
commonName: etcd-ca
subject:
organizations:
- ACME Inc.
organizationalUnits:
- Widgets
secretName: etcd-ca-tls
privateKey:
algorithm: RSA
size: 4096
issuerRef:
name: etcd-selfsigning-issuer
kind: Issuer
group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: etcd-peer-issuer
spec:
ca:
secretName: etcd-peer-ca-tls
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: etcd-issuer
spec:
ca:
secretName: etcd-ca-tls
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: etcd-server
spec:
secretName: etcd-server-tls
isCA: false
usages:
- "server auth"
- "signing"
- "key encipherment"
dnsNames:
- etcd-0
- etcd-0.etcd-headless
- etcd-0.etcd-headless.{{ .Release.Namespace }}.svc
- etcd-1
- etcd-1.etcd-headless
- etcd-1.etcd-headless.{{ .Release.Namespace }}.svc
- etcd-2
- etcd-2.etcd-headless
- etcd-2.etcd-headless.{{ .Release.Namespace }}.svc
- localhost
- "127.0.0.1"
privateKey:
rotationPolicy: Always
algorithm: RSA
size: 4096
issuerRef:
name: etcd-issuer
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: etcd-peer
spec:
secretName: etcd-peer-tls
isCA: false
usages:
- "server auth"
- "client auth"
- "signing"
- "key encipherment"
dnsNames:
- etcd-0
- etcd-0.etcd-headless
- etcd-0.etcd-headless.{{ .Release.Namespace }}.svc
- etcd-1
- etcd-1.etcd-headless
- etcd-1.etcd-headless.{{ .Release.Namespace }}.svc
- etcd-2
- etcd-2.etcd-headless
- etcd-2.etcd-headless.{{ .Release.Namespace }}.svc
- localhost
- "127.0.0.1"
privateKey:
rotationPolicy: Always
algorithm: RSA
size: 4096
issuerRef:
name: etcd-peer-issuer
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: etcd-client
spec:
commonName: root
secretName: etcd-client-tls
usages:
- "signing"
- "key encipherment"
- "client auth"
privateKey:
rotationPolicy: Always
algorithm: RSA
size: 4096
issuerRef:
name: etcd-issuer
kind: Issuer

View File

@@ -1,19 +0,0 @@
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: kamaji-etcd
spec:
chart:
spec:
chart: cozy-kamaji-etcd
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
version: '*'
interval: 1m0s
timeout: 5m0s
values:
kamaji-etcd:
fullnameOverride: etcd

View File

@@ -67,7 +67,7 @@ spec:
ingress:
metadata:
annotations:
kubernetes.io/ingress.class: "{{ $ingress }}"
acme.cert-manager.io/http01-ingress-class: "{{ $ingress }}"
cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
ingressClassName: "{{ $ingress }}"

View File

@@ -1,3 +1,4 @@
etcd 1.0.0 HEAD
etcd 1.0.0 f7eaab0
etcd 2.0.0 HEAD
ingress 1.0.0 HEAD
monitoring 1.0.0 HEAD

View File

@@ -1,13 +1,12 @@
OUT=../../_out/repos/system
VERSION := 0.2.0
gen: fix-chartnames
include ../../scripts/common-envs.mk
repo: fix-chartnames
repo:
rm -rf "$(OUT)"
mkdir -p "$(OUT)"
helm package -d "$(OUT)" $$(find . -mindepth 2 -maxdepth 2 -name Chart.yaml | awk 'sub("/Chart.yaml", "")')
helm package -d "$(OUT)" $$(find . -mindepth 2 -maxdepth 2 -name Chart.yaml | awk 'sub("/Chart.yaml", "")') --version $(VERSION)
cd "$(OUT)" && helm repo index .
fix-chartnames:
find . -name Chart.yaml -maxdepth 2 | awk -F/ '{print $$2}' | while read i; do printf "name: cozy-%s\nversion: $(VERSION)\n" "$$i" > "$$i/Chart.yaml"; done
find . -name Chart.yaml -maxdepth 2 | awk -F/ '{print $$2}' | while read i; do sed -i "s/^name: .*/name: cozy-$$i/" "$$i/Chart.yaml"; done
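fix-chartnames previously overwrote each system chart's Chart.yaml with a generated name and version; it now only rewrites the name field and leaves the placeholder version for the build to stamp via helm package --version. Expanded for a single chart directory, the generated command is simply:

# Example expansion for the cilium system chart: prefix the name, keep the rest of Chart.yaml intact.
sed -i "s/^name: .*/name: cozy-cilium/" "cilium/Chart.yaml"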

View File

@@ -1,2 +1,3 @@
apiVersion: v2
name: cozy-capi-operator
version: 0.2.0
version: 0.0.0 # Placeholder, the actual version will be automatically set during the build process

View File

@@ -1,14 +1,7 @@
NAME=capi-operator
NAMESPACE=cozy-cluster-api
show:
helm template --dry-run=server -n $(NAMESPACE) $(NAME) .
apply:
helm upgrade -i -n $(NAMESPACE) $(NAME) .
diff:
helm diff upgrade --allow-unreleased --normalize-manifests -n $(NAMESPACE) $(NAME) .
include ../../../scripts/package-system.mk
update:
rm -rf charts

View File

@@ -1,2 +1,3 @@
apiVersion: v2
name: cozy-capi-providers
version: 0.2.0
version: 0.0.0 # Placeholder, the actual version will be automatically set during the build process

View File

@@ -1,11 +1,4 @@
NAME=capi-providers
NAMESPACE=cozy-cluster-api
show:
helm template --dry-run=server -n $(NAMESPACE) $(NAME) .
apply:
helm upgrade -i -n $(NAMESPACE) $(NAME) .
diff:
helm diff upgrade --allow-unreleased --normalize-manifests -n $(NAMESPACE) $(NAME) .
include ../../../scripts/package-system.mk

View File

@@ -13,7 +13,7 @@ spec:
deployment:
containers:
- name: manager
imageUrl: ghcr.io/kvaps/test:cluster-api-control-plane-provider-kamaji-v0.6.0-fix7
imageUrl: ghcr.io/kvaps/test:cluster-api-control-plane-provider-kamaji-v0.7.1-fix
---
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: BootstrapProvider

View File

@@ -1,2 +1,3 @@
apiVersion: v2
name: cozy-cert-manager-issuers
version: 0.2.0
version: 0.0.0 # Placeholder, the actual version will be automatically set during the build process

View File

@@ -1,11 +1,4 @@
NAME=cert-manager-issuers
NAMESPACE=cozy-cert-manager
show:
helm template --dry-run=server -n $(NAMESPACE) $(NAME) .
apply:
helm upgrade -i -n $(NAMESPACE) $(NAME) .
diff:
helm diff upgrade --allow-unreleased --normalize-manifests -n $(NAMESPACE) $(NAME) .
include ../../../scripts/package-system.mk

View File

@@ -1,2 +1,3 @@
apiVersion: v2
name: cozy-cert-manager
version: 0.2.0
version: 0.0.0 # Placeholder, the actual version will be automatically set during the build process

View File

@@ -1,14 +1,7 @@
NAME=cert-manager
NAMESPACE=cozy-cert-manager
NAMESPACE=cozy-$(NAME)
show:
helm template --dry-run=server -n $(NAMESPACE) $(NAME) .
apply:
helm upgrade -i -n $(NAMESPACE) $(NAME) .
diff:
helm diff upgrade --allow-unreleased --normalize-manifests -n $(NAMESPACE) $(NAME) .
include ../../../scripts/package-system.mk
update:
rm -rf charts

View File

@@ -1,2 +1,3 @@
apiVersion: v2
name: cozy-cilium
version: 0.2.0
version: 0.0.0 # Placeholder, the actual version will be automatically set during the build process

View File

@@ -1,14 +1,7 @@
NAMESPACE=cozy-cilium
NAME=cilium
NAMESPACE=cozy-$(NAME)
show:
kubectl get hr -n cozy-cilium cilium -o jsonpath='{.spec.values}' | helm template --dry-run=server -n $(NAMESPACE) $(NAME) . -f -
apply:
kubectl get hr -n cozy-cilium cilium -o jsonpath='{.spec.values}' | helm upgrade -i -n $(NAMESPACE) $(NAME) . -f -
diff:
kubectl get hr -n cozy-cilium cilium -o jsonpath='{.spec.values}' | helm diff upgrade --allow-unreleased --normalize-manifests -n $(NAMESPACE) $(NAME) . -f -
include ../../../scripts/package-system.mk
update:
rm -rf charts

View File

@@ -122,7 +122,7 @@ annotations:
description: |
CiliumPodIPPool defines an IP pool that can be used for pooled IPAM (i.e. the multi-pool IPAM mode).
apiVersion: v2
appVersion: 1.14.9
appVersion: 1.14.10
description: eBPF-based Networking, Security, and Observability
home: https://cilium.io/
icon: https://cdn.jsdelivr.net/gh/cilium/cilium@v1.14/Documentation/images/logo-solo.svg
@@ -138,4 +138,4 @@ kubeVersion: '>= 1.16.0-0'
name: cilium
sources:
- https://github.com/cilium/cilium
version: 1.14.9
version: 1.14.10

View File

@@ -1,6 +1,6 @@
# cilium
![Version: 1.14.9](https://img.shields.io/badge/Version-1.14.9-informational?style=flat-square) ![AppVersion: 1.14.9](https://img.shields.io/badge/AppVersion-1.14.9-informational?style=flat-square)
![Version: 1.14.10](https://img.shields.io/badge/Version-1.14.10-informational?style=flat-square) ![AppVersion: 1.14.10](https://img.shields.io/badge/AppVersion-1.14.10-informational?style=flat-square)
Cilium is open source software for providing and transparently securing
network connectivity and loadbalancing between application workloads such as
@@ -131,7 +131,7 @@ contributors across the globe, there is almost always someone available to help.
| bpf.tproxy | bool | `false` | Configure the eBPF-based TPROXY to reduce reliance on iptables rules for implementing Layer 7 policy. |
| bpf.vlanBypass | list | `[]` | Configure explicitly allowed VLAN id's for bpf logic bypass. [0] will allow all VLAN id's without any filtering. |
| bpfClockProbe | bool | `false` | Enable BPF clock source probing for more efficient tick retrieval. |
| certgen | object | `{"annotations":{"cronJob":{},"job":{}},"extraVolumeMounts":[],"extraVolumes":[],"image":{"digest":"sha256:89a0847753686444daabde9474b48340993bd19c7bea66a46e45b2974b82041f","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/certgen","tag":"v0.1.9","useDigest":true},"podLabels":{},"tolerations":[],"ttlSecondsAfterFinished":1800}` | Configure certificate generation for Hubble integration. If hubble.tls.auto.method=cronJob, these values are used for the Kubernetes CronJob which will be scheduled regularly to (re)generate any certificates not provided manually. |
| certgen | object | `{"annotations":{"cronJob":{},"job":{}},"extraVolumeMounts":[],"extraVolumes":[],"image":{"digest":"sha256:5586de5019abc104637a9818a626956cd9b1e827327b958186ec412ae3d5dea6","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/certgen","tag":"v0.1.11","useDigest":true},"podLabels":{},"tolerations":[],"ttlSecondsAfterFinished":1800}` | Configure certificate generation for Hubble integration. If hubble.tls.auto.method=cronJob, these values are used for the Kubernetes CronJob which will be scheduled regularly to (re)generate any certificates not provided manually. |
| certgen.annotations | object | `{"cronJob":{},"job":{}}` | Annotations to be added to the hubble-certgen initial Job and CronJob |
| certgen.extraVolumeMounts | list | `[]` | Additional certgen volumeMounts. |
| certgen.extraVolumes | list | `[]` | Additional certgen volumes. |
@@ -155,12 +155,12 @@ contributors across the globe, there is almost always someone available to help.
| clustermesh.apiserver.extraEnv | list | `[]` | Additional clustermesh-apiserver environment variables. |
| clustermesh.apiserver.extraVolumeMounts | list | `[]` | Additional clustermesh-apiserver volumeMounts. |
| clustermesh.apiserver.extraVolumes | list | `[]` | Additional clustermesh-apiserver volumes. |
| clustermesh.apiserver.image | object | `{"digest":"sha256:5c16f8b8e22ce41e11998e70846fbcecea3a6b683a38253809ead8d871f6d8a3","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/clustermesh-apiserver","tag":"v1.14.9","useDigest":true}` | Clustermesh API server image. |
| clustermesh.apiserver.image | object | `{"digest":"sha256:609fea274caa016f15646f6e0b0f1f7c56b238c551e7b261bc1e99ce64f7b798","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/clustermesh-apiserver","tag":"v1.14.10","useDigest":true}` | Clustermesh API server image. |
| clustermesh.apiserver.kvstoremesh.enabled | bool | `false` | Enable KVStoreMesh. KVStoreMesh caches the information retrieved from the remote clusters in the local etcd instance. |
| clustermesh.apiserver.kvstoremesh.extraArgs | list | `[]` | Additional KVStoreMesh arguments. |
| clustermesh.apiserver.kvstoremesh.extraEnv | list | `[]` | Additional KVStoreMesh environment variables. |
| clustermesh.apiserver.kvstoremesh.extraVolumeMounts | list | `[]` | Additional KVStoreMesh volumeMounts. |
| clustermesh.apiserver.kvstoremesh.image | object | `{"digest":"sha256:9d9efb25806660f3663b9cd803fb8679f2b115763470002a9770e2c1eb1e5b22","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/kvstoremesh","tag":"v1.14.9","useDigest":true}` | KVStoreMesh image. |
| clustermesh.apiserver.kvstoremesh.image | object | `{"digest":"sha256:871ec4e3b07401d90b4433c7e2b7210b9b0c5f1a536caab3d0281a5faeea5070","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/kvstoremesh","tag":"v1.14.10","useDigest":true}` | KVStoreMesh image. |
| clustermesh.apiserver.kvstoremesh.resources | object | `{}` | Resource requests and limits for the KVStoreMesh container |
| clustermesh.apiserver.kvstoremesh.securityContext | object | `{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}}` | KVStoreMesh Security context |
| clustermesh.apiserver.metrics.enabled | bool | `true` | Enables exporting apiserver metrics in OpenMetrics format. |
@@ -312,7 +312,7 @@ contributors across the globe, there is almost always someone available to help.
| envoy.extraVolumes | list | `[]` | Additional envoy volumes. |
| envoy.healthPort | int | `9878` | TCP port for the health API. |
| envoy.idleTimeoutDurationSeconds | int | `60` | Set Envoy upstream HTTP idle connection timeout seconds. Does not apply to connections with pending requests. Default 60s |
| envoy.image | object | `{"digest":"sha256:39b75548447978230dedcf25da8940e4d3540c741045ef391a8e74dbb9661a86","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium-envoy","tag":"v1.26.7-bbde4095997ea57ead209f56158790d47224a0f5","useDigest":true}` | Envoy container image. |
| envoy.image | object | `{"digest":"sha256:d52f476c29a97c8b250fdbfbb8472191a268916f6a8503671d0da61e323b02cc","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium-envoy","tag":"v1.27.4-21905253931655328edaacf3cd16aeda73bbea2f","useDigest":true}` | Envoy container image. |
| envoy.livenessProbe.failureThreshold | int | `10` | failure threshold of liveness probe |
| envoy.livenessProbe.periodSeconds | int | `30` | interval between checks of the liveness probe |
| envoy.log.format | string | `"[%Y-%m-%d %T.%e][%t][%l][%n] [%g:%#] %v"` | The format string to use for laying out the log message metadata of Envoy. |
@@ -419,7 +419,7 @@ contributors across the globe, there is almost always someone available to help.
| hubble.relay.extraVolumes | list | `[]` | Additional hubble-relay volumes. |
| hubble.relay.gops.enabled | bool | `true` | Enable gops for hubble-relay |
| hubble.relay.gops.port | int | `9893` | Configure gops listen port for hubble-relay |
| hubble.relay.image | object | `{"digest":"sha256:f506f3c6e0a979437cde79eb781654fda4f10ddb5642cebc4dc81254cfb7eeaa","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-relay","tag":"v1.14.9","useDigest":true}` | Hubble-relay container image. |
| hubble.relay.image | object | `{"digest":"sha256:c156c4fc2da520d2876142ea17490440b95431a1be755d2050e72115a495cfd0","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-relay","tag":"v1.14.10","useDigest":true}` | Hubble-relay container image. |
| hubble.relay.listenHost | string | `""` | Host to listen to. Specify an empty string to bind to all the interfaces. |
| hubble.relay.listenPort | string | `"4245"` | Port to listen to. |
| hubble.relay.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
@@ -511,7 +511,7 @@ contributors across the globe, there is almost always someone available to help.
| hubble.ui.updateStrategy | object | `{"rollingUpdate":{"maxUnavailable":1},"type":"RollingUpdate"}` | hubble-ui update strategy. |
| identityAllocationMode | string | `"crd"` | Method to use for identity allocation (`crd` or `kvstore`). |
| identityChangeGracePeriod | string | `"5s"` | Time to wait before using new identity on endpoint identity change. |
| image | object | `{"digest":"sha256:4ef1eb7a3bc39d0fefe14685e6c0d4e01301c40df2a89bc93ffca9a1ab927301","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.14.9","useDigest":true}` | Agent container image. |
| image | object | `{"digest":"sha256:0a1bcd2859c6d18d60dba6650cca8c707101716a3e47b126679040cbd621c031","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.14.10","useDigest":true}` | Agent container image. |
| imagePullSecrets | string | `nil` | Configure image pull secrets for pulling container images |
| ingressController.default | bool | `false` | Set cilium ingress controller to be the default ingress controller This will let cilium ingress controller route entries without ingress class set |
| ingressController.defaultSecretName | string | `nil` | Default secret name for ingresses without .spec.tls[].secretName set. |
@@ -596,7 +596,7 @@ contributors across the globe, there is almost always someone available to help.
| nodeinit.extraEnv | list | `[]` | Additional nodeinit environment variables. |
| nodeinit.extraVolumeMounts | list | `[]` | Additional nodeinit volumeMounts. |
| nodeinit.extraVolumes | list | `[]` | Additional nodeinit volumes. |
| nodeinit.image | object | `{"override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/startup-script","tag":"62093c5c233ea914bfa26a10ba41f8780d9b737f"}` | node-init image. |
| nodeinit.image | object | `{"digest":"sha256:e1d442546e868db1a3289166c14011e0dbd32115b338b963e56f830972bc22a2","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/startup-script","tag":"62093c5c233ea914bfa26a10ba41f8780d9b737f","useDigest":true}` | node-init image. |
| nodeinit.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for nodeinit pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
| nodeinit.podAnnotations | object | `{}` | Annotations to be added to node-init pods. |
| nodeinit.podLabels | object | `{}` | Labels to be added to node-init pods. |
@@ -619,7 +619,7 @@ contributors across the globe, there is almost always someone available to help.
| operator.extraVolumes | list | `[]` | Additional cilium-operator volumes. |
| operator.identityGCInterval | string | `"15m0s"` | Interval for identity garbage collection. |
| operator.identityHeartbeatTimeout | string | `"30m0s"` | Timeout for identity heartbeats. |
| operator.image | object | `{"alibabacloudDigest":"sha256:765314779093b54750f83280f009229f20fe1f28466a633d9bb4143d2ad669c5","awsDigest":"sha256:041ad5b49ae63ba0f1974e1a1d9ebf9f52541cd2813088fa687f9d544125a1ec","azureDigest":"sha256:2d3b9d868eb03fa9256d34192a734a2abab283f527a9c97b7cefcd3401649d17","genericDigest":"sha256:1552d653870dd8ebbd16ee985a5497dd78a2097370978b0cfbd2da2072f30712","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/operator","suffix":"","tag":"v1.14.9","useDigest":true}` | cilium-operator image. |
| operator.image | object | `{"alibabacloudDigest":"sha256:2fbb53c2fc9c7203db9065c4e6cedb8e98d32d5ebc64549949636b5344cd1f14","awsDigest":"sha256:72440aa4cb8a42dddb05cfc74c6fba0a18d0902b1e434f5dcde8dca0354a8be6","azureDigest":"sha256:404a46bb0a232c7d5ab7ab97a1d1a55635cdf0e334529a18d1ddb50f4aad71b4","genericDigest":"sha256:415b7f0bb0e7339c6231d4b9ee74a6a513b2865acfccec884dbc806ecc3dd909","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/operator","suffix":"","tag":"v1.14.10","useDigest":true}` | cilium-operator image. |
| operator.nodeGCInterval | string | `"5m0s"` | Interval for cilium node garbage collection. |
| operator.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for cilium-operator pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
| operator.podAnnotations | object | `{}` | Annotations to be added to cilium-operator pods |
@@ -666,7 +666,7 @@ contributors across the globe, there is almost always someone available to help.
| preflight.extraEnv | list | `[]` | Additional preflight environment variables. |
| preflight.extraVolumeMounts | list | `[]` | Additional preflight volumeMounts. |
| preflight.extraVolumes | list | `[]` | Additional preflight volumes. |
| preflight.image | object | `{"digest":"sha256:4ef1eb7a3bc39d0fefe14685e6c0d4e01301c40df2a89bc93ffca9a1ab927301","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.14.9","useDigest":true}` | Cilium pre-flight image. |
| preflight.image | object | `{"digest":"sha256:0a1bcd2859c6d18d60dba6650cca8c707101716a3e47b126679040cbd621c031","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.14.10","useDigest":true}` | Cilium pre-flight image. |
| preflight.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for preflight pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
| preflight.podAnnotations | object | `{}` | Annotations to be added to preflight pods |
| preflight.podDisruptionBudget.enabled | bool | `false` | enable PodDisruptionBudget ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/ |

View File

@@ -61,7 +61,7 @@ spec:
image: {{ include "cilium.image" .Values.envoy.image | quote }}
imagePullPolicy: {{ .Values.envoy.image.pullPolicy }}
command:
- /usr/bin/cilium-envoy
- /usr/bin/cilium-envoy-starter
args:
- '-c /var/run/cilium/envoy/bootstrap-config.json'
- '--base-id 0'

View File

@@ -143,10 +143,10 @@ rollOutCiliumPods: false
image:
override: ~
repository: "quay.io/cilium/cilium"
tag: "v1.14.9"
tag: "v1.14.10"
pullPolicy: "IfNotPresent"
# cilium-digest
digest: "sha256:4ef1eb7a3bc39d0fefe14685e6c0d4e01301c40df2a89bc93ffca9a1ab927301"
digest: "sha256:0a1bcd2859c6d18d60dba6650cca8c707101716a3e47b126679040cbd621c031"
useDigest: true
# -- Affinity for cilium-agent.
@@ -933,8 +933,8 @@ certgen:
image:
override: ~
repository: "quay.io/cilium/certgen"
tag: "v0.1.9"
digest: "sha256:89a0847753686444daabde9474b48340993bd19c7bea66a46e45b2974b82041f"
tag: "v0.1.11"
digest: "sha256:5586de5019abc104637a9818a626956cd9b1e827327b958186ec412ae3d5dea6"
useDigest: true
pullPolicy: "IfNotPresent"
# -- Seconds after which the completed job pod will be deleted
@@ -1109,9 +1109,9 @@ hubble:
image:
override: ~
repository: "quay.io/cilium/hubble-relay"
tag: "v1.14.9"
tag: "v1.14.10"
# hubble-relay-digest
digest: "sha256:f506f3c6e0a979437cde79eb781654fda4f10ddb5642cebc4dc81254cfb7eeaa"
digest: "sha256:c156c4fc2da520d2876142ea17490440b95431a1be755d2050e72115a495cfd0"
useDigest: true
pullPolicy: "IfNotPresent"
@@ -1853,9 +1853,9 @@ envoy:
image:
override: ~
repository: "quay.io/cilium/cilium-envoy"
tag: "v1.26.7-bbde4095997ea57ead209f56158790d47224a0f5"
tag: "v1.27.4-21905253931655328edaacf3cd16aeda73bbea2f"
pullPolicy: "IfNotPresent"
digest: "sha256:39b75548447978230dedcf25da8940e4d3540c741045ef391a8e74dbb9661a86"
digest: "sha256:d52f476c29a97c8b250fdbfbb8472191a268916f6a8503671d0da61e323b02cc"
useDigest: true
# -- Additional containers added to the cilium Envoy DaemonSet.
@@ -2269,15 +2269,15 @@ operator:
image:
override: ~
repository: "quay.io/cilium/operator"
tag: "v1.14.9"
tag: "v1.14.10"
# operator-generic-digest
genericDigest: "sha256:1552d653870dd8ebbd16ee985a5497dd78a2097370978b0cfbd2da2072f30712"
genericDigest: "sha256:415b7f0bb0e7339c6231d4b9ee74a6a513b2865acfccec884dbc806ecc3dd909"
# operator-azure-digest
azureDigest: "sha256:2d3b9d868eb03fa9256d34192a734a2abab283f527a9c97b7cefcd3401649d17"
azureDigest: "sha256:404a46bb0a232c7d5ab7ab97a1d1a55635cdf0e334529a18d1ddb50f4aad71b4"
# operator-aws-digest
awsDigest: "sha256:041ad5b49ae63ba0f1974e1a1d9ebf9f52541cd2813088fa687f9d544125a1ec"
awsDigest: "sha256:72440aa4cb8a42dddb05cfc74c6fba0a18d0902b1e434f5dcde8dca0354a8be6"
# operator-alibabacloud-digest
alibabacloudDigest: "sha256:765314779093b54750f83280f009229f20fe1f28466a633d9bb4143d2ad669c5"
alibabacloudDigest: "sha256:2fbb53c2fc9c7203db9065c4e6cedb8e98d32d5ebc64549949636b5344cd1f14"
useDigest: true
pullPolicy: "IfNotPresent"
suffix: ""
@@ -2468,6 +2468,8 @@ nodeinit:
override: ~
repository: "quay.io/cilium/startup-script"
tag: "62093c5c233ea914bfa26a10ba41f8780d9b737f"
digest: "sha256:e1d442546e868db1a3289166c14011e0dbd32115b338b963e56f830972bc22a2"
useDigest: true
pullPolicy: "IfNotPresent"
# -- The priority class to use for the nodeinit pod.
@@ -2554,9 +2556,9 @@ preflight:
image:
override: ~
repository: "quay.io/cilium/cilium"
tag: "v1.14.9"
tag: "v1.14.10"
# cilium-digest
digest: "sha256:4ef1eb7a3bc39d0fefe14685e6c0d4e01301c40df2a89bc93ffca9a1ab927301"
digest: "sha256:0a1bcd2859c6d18d60dba6650cca8c707101716a3e47b126679040cbd621c031"
useDigest: true
pullPolicy: "IfNotPresent"
@@ -2704,9 +2706,9 @@ clustermesh:
image:
override: ~
repository: "quay.io/cilium/clustermesh-apiserver"
tag: "v1.14.9"
tag: "v1.14.10"
# clustermesh-apiserver-digest
digest: "sha256:5c16f8b8e22ce41e11998e70846fbcecea3a6b683a38253809ead8d871f6d8a3"
digest: "sha256:609fea274caa016f15646f6e0b0f1f7c56b238c551e7b261bc1e99ce64f7b798"
useDigest: true
pullPolicy: "IfNotPresent"
@@ -2751,9 +2753,9 @@ clustermesh:
image:
override: ~
repository: "quay.io/cilium/kvstoremesh"
tag: "v1.14.9"
tag: "v1.14.10"
# kvstoremesh-digest
digest: "sha256:9d9efb25806660f3663b9cd803fb8679f2b115763470002a9770e2c1eb1e5b22"
digest: "sha256:871ec4e3b07401d90b4433c7e2b7210b9b0c5f1a536caab3d0281a5faeea5070"
useDigest: true
pullPolicy: "IfNotPresent"
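Every image in the values file is pinned by tag and by digest (`useDigest: true`), so the bump to v1.14.10 also has to refresh each sha256. The new digests can be cross-checked against the registry with a tool such as `crane` from go-containerregistry (or `skopeo`); a small sketch:

```bash
# Illustrative check that the pinned digests match the published tags
crane digest quay.io/cilium/cilium:v1.14.10
crane digest quay.io/cilium/cilium-envoy:v1.27.4-21905253931655328edaacf3cd16aeda73bbea2f
crane digest quay.io/cilium/certgen:v0.1.11
```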

View File

@@ -1854,9 +1854,9 @@ envoy:
image:
override: ~
repository: "quay.io/cilium/cilium-envoy"
tag: "v1.26.7-bbde4095997ea57ead209f56158790d47224a0f5"
tag: "v1.27.4-21905253931655328edaacf3cd16aeda73bbea2f"
pullPolicy: "${PULL_POLICY}"
digest: "sha256:39b75548447978230dedcf25da8940e4d3540c741045ef391a8e74dbb9661a86"
digest: "sha256:d52f476c29a97c8b250fdbfbb8472191a268916f6a8503671d0da61e323b02cc"
useDigest: true
# -- Additional containers added to the cilium Envoy DaemonSet.
@@ -2469,6 +2469,8 @@ nodeinit:
override: ~
repository: "${CILIUM_NODEINIT_REPO}"
tag: "${CILIUM_NODEINIT_VERSION}"
digest: "${CILIUM_NODEINIT_DIGEST}"
useDigest: true
pullPolicy: "${PULL_POLICY}"
# -- The priority class to use for the nodeinit pod.

View File

@@ -0,0 +1,3 @@
apiVersion: v2
name: cozy-clickhouse-operator
version: 0.0.0 # Placeholder, the actual version will be automatically set during the build process

View File

@@ -0,0 +1,10 @@
NAME=clickhouse-operator
NAMESPACE=cozy-clickhouse-operator
include ../../../scripts/package-system.mk
update:
rm -rf charts
helm repo add clickhouse-operator https://docs.altinity.com/clickhouse-operator/
helm repo update clickhouse-operator
helm pull clickhouse-operator/altinity-clickhouse-operator --untar --untardir charts

View File

@@ -0,0 +1,17 @@
apiVersion: v2
appVersion: 0.23.4
description: 'Helm chart to deploy [altinity-clickhouse-operator](https://github.com/Altinity/clickhouse-operator). The
ClickHouse Operator creates, configures and manages ClickHouse clusters running
on Kubernetes. For upgrade please install CRDs separately: ```bash kubectl apply
-f https://github.com/Altinity/clickhouse-operator/raw/master/deploy/helm/crds/CustomResourceDefinition-clickhouseinstallations.clickhouse.altinity.com.yaml kubectl
apply -f https://github.com/Altinity/clickhouse-operator/raw/master/deploy/helm/crds/CustomResourceDefinition-clickhouseinstallationtemplates.clickhouse.altinity.com.yaml kubectl
apply -f https://github.com/Altinity/clickhouse-operator/raw/master/deploy/helm/crds/CustomResourceDefinition-clickhouseoperatorconfigurations.clickhouse.altinity.com.yaml
```'
home: https://github.com/Altinity/clickhouse-operator
icon: https://logosandtypes.com/wp-content/uploads/2020/12/altinity.svg
maintainers:
- email: support@altinity.com
name: altinity
name: altinity-clickhouse-operator
type: application
version: 0.23.4

View File

@@ -0,0 +1,65 @@
# altinity-clickhouse-operator
![Version: 0.23.4](https://img.shields.io/badge/Version-0.23.4-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 0.23.4](https://img.shields.io/badge/AppVersion-0.23.4-informational?style=flat-square)
Helm chart to deploy [altinity-clickhouse-operator](https://github.com/Altinity/clickhouse-operator).
The ClickHouse Operator creates, configures and manages ClickHouse clusters running on Kubernetes.
For upgrade please install CRDs separately:
```bash
kubectl apply -f https://github.com/Altinity/clickhouse-operator/raw/master/deploy/helm/crds/CustomResourceDefinition-clickhouseinstallations.clickhouse.altinity.com.yaml
kubectl apply -f https://github.com/Altinity/clickhouse-operator/raw/master/deploy/helm/crds/CustomResourceDefinition-clickhouseinstallationtemplates.clickhouse.altinity.com.yaml
kubectl apply -f https://github.com/Altinity/clickhouse-operator/raw/master/deploy/helm/crds/CustomResourceDefinition-clickhouseoperatorconfigurations.clickhouse.altinity.com.yaml
```
**Homepage:** <https://github.com/Altinity/clickhouse-operator>
## Maintainers
| Name | Email | Url |
| ---- | ------ | --- |
| altinity | <support@altinity.com> | |
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| additionalResources | list | `[]` | list of additional resources to create (are processed via `tpl` function), useful for create ClickHouse clusters together with clickhouse-operator, look `kubectl explain chi` for details |
| affinity | object | `{}` | affinity for scheduler pod assignment, look `kubectl explain pod.spec.affinity` for details |
| configs | object | check the values.yaml file for the config content, auto-generated from latest operator release | clickhouse-operator configs |
| dashboards.additionalLabels | object | `{"grafana_dashboard":""}` | labels to add to a secret with dashboards |
| dashboards.annotations | object | `{}` | annotations to add to a secret with dashboards |
| dashboards.enabled | bool | `false` | provision grafana dashboards as secrets (can be synced by grafana dashboards sidecar https://github.com/grafana/helm-charts/blob/grafana-6.33.1/charts/grafana/values.yaml#L679 ) |
| dashboards.grafana_folder | string | `"clickhouse"` | |
| fullnameOverride | string | `""` | full name of the chart. |
| imagePullSecrets | list | `[]` | image pull secret for private images in clickhouse-operator pod possible value format [{"name":"your-secret-name"}] look `kubectl explain pod.spec.imagePullSecrets` for details |
| metrics.containerSecurityContext | object | `{}` | |
| metrics.enabled | bool | `true` | |
| metrics.env | list | `[]` | additional environment variables for the deployment of metrics-exporter containers possible format value [{"name": "SAMPLE", "value": "text"}] |
| metrics.image.pullPolicy | string | `"IfNotPresent"` | image pull policy |
| metrics.image.repository | string | `"altinity/metrics-exporter"` | image repository |
| metrics.image.tag | string | `""` | image tag (chart's appVersion value will be used if not set) |
| metrics.resources | object | `{}` | custom resource configuration |
| nameOverride | string | `""` | override name of the chart |
| nodeSelector | object | `{}` | node for scheduler pod assignment, look `kubectl explain pod.spec.nodeSelector` for details |
| operator.containerSecurityContext | object | `{}` | |
| operator.env | list | `[]` | additional environment variables for the clickhouse-operator container in deployment possible format value [{"name": "SAMPLE", "value": "text"}] |
| operator.image.pullPolicy | string | `"IfNotPresent"` | image pull policy |
| operator.image.repository | string | `"altinity/clickhouse-operator"` | image repository |
| operator.image.tag | string | `""` | image tag (chart's appVersion value will be used if not set) |
| operator.resources | object | `{}` | custom resource configuration, look `kubectl explain pod.spec.containers.resources` for details |
| podAnnotations | object | `{"clickhouse-operator-metrics/port":"9999","clickhouse-operator-metrics/scrape":"true","prometheus.io/port":"8888","prometheus.io/scrape":"true"}` | annotations to add to the clickhouse-operator pod, look `kubectl explain pod.spec.annotations` for details |
| podLabels | object | `{}` | labels to add to the clickhouse-operator pod |
| podSecurityContext | object | `{}` | |
| rbac.create | bool | `true` | specifies whether cluster roles and cluster role bindings should be created |
| secret.create | bool | `true` | create a secret with operator credentials |
| secret.password | string | `"clickhouse_operator_password"` | operator credentials password |
| secret.username | string | `"clickhouse_operator"` | operator credentials username |
| serviceAccount.annotations | object | `{}` | annotations to add to the service account |
| serviceAccount.create | bool | `true` | specifies whether a service account should be created |
| serviceAccount.name | string | `nil` | the name of the service account to use; if not set and create is true, a name is generated using the fullname template |
| serviceMonitor.additionalLabels | object | `{}` | additional labels for service monitor |
| serviceMonitor.enabled | bool | `false` | ServiceMonitor Custom resource is created for a (prometheus-operator)[https://github.com/prometheus-operator/prometheus-operator] |
| tolerations | list | `[]` | tolerations for scheduler pod assignment, look `kubectl explain pod.spec.tolerations` for details |
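These values belong to the upstream chart vendored under `charts/` by the package Makefile, and any of them can be overridden at install time. A minimal, illustrative override using keys from the table above and the namespace used elsewhere in this package:

```bash
# Example only: install the vendored chart directly with ServiceMonitor and dashboards enabled
helm upgrade -i altinity-clickhouse-operator charts/altinity-clickhouse-operator \
  -n cozy-clickhouse-operator \
  --set serviceMonitor.enabled=true \
  --set dashboards.enabled=true
```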

Some files were not shown because too many files have changed in this diff.