Compare commits

26 Commits

Author SHA1 Message Date
Ahmad Murzahmatov
fa054f3ea1 [feat] Hetzner
Add Hetzner CCM
Add Hetzner RobotLB

Signed-off-by: Ahmad Murzahmatov <gwynbleidd2106@yandex.com>
2025-07-11 11:20:25 +06:00
Andrei Kvapil
1b7a597f1c [talos] Update Talos Linux v1.10.5 (#1186)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

## What this PR does


### Release note

```release-note
[talos] Update Talos Linux v1.10.5
```

## Summary by CodeRabbit

* **Chores**
* Updated system firmware, microcode, and storage extension versions to
the latest releases across all installer profiles.
* Increased profile version from v1.10.3 to v1.10.5 for improved
component compatibility and reliability.

2025-07-10 14:29:15 +02:00
Andrei Kvapil
aa84b1c054 [talos] Update Talos Linux v1.10.5
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-10 14:28:49 +02:00
Timofei Larkin
6e96dd0a33 [docs] Changelogs for v0.32.1 and changelog template (#1111)
## Summary by CodeRabbit

* **Documentation**
* Added a new changelog template with predefined sections for consistent
release documentation.
* Published a detailed changelog for version 0.32.1, outlining major
features, fixes, dependency updates, documentation changes, testing
improvements, and CI/CD enhancements.
2025-07-10 13:13:17 +04:00
Andrei Kvapil
08cb7c0f28 Release v0.34.0-beta.1 (#1187)
This PR prepares the release `v0.34.0-beta.1`.

## Summary by CodeRabbit

* **Chores**
* Updated multiple container image versions and tags across various
components to newer releases, including several beta versions.
  * Refreshed image digests to ensure the latest builds are used.
  * Updated dashboard configuration to reflect the new app version.
  * No changes to functionality or user interface.

2025-07-10 11:03:03 +02:00
Nick Volynkin
ef30e69245 [docs] Changelog for v0.32.1 and changelog template
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-07-10 11:50:39 +03:00
Timofei Larkin
847980f03d [release-v0.31] [docs] Release notes for v0.31.1 and v0.31.2 (#1068)

## Summary by CodeRabbit

- **Documentation**
- Added detailed changelog entries for versions 0.31.1 and 0.31.2,
highlighting recent fixes, improvements, and security updates.
- Included a summary of key changes, security fixes, and platform,
dashboard, and application enhancements.
  - Provided links and references for further details on each release.

2025-07-10 12:48:06 +04:00
cozystack-bot
0ecb8585bc Prepare release v0.34.0-beta.1
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2025-07-10 08:09:47 +00:00
Andrei Kvapil
32aea4254b [cilium] Update Cilium v1.17.5 (#1181)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

## What this PR does


### Release note

```release-note
[cilium] Update Cilium v1.17.5
```
2025-07-10 10:05:47 +02:00
Andrei Kvapil
e49918745e [kube-ovn] Update Kube-OVN v1.13.14 (#1182)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

## What this PR does


### Release note

```release-note
[kube-ovn] Update Kube-OVN v1.13.14
```
2025-07-10 09:31:06 +02:00
Andrei Kvapil
220c347cc5 [kamaji] Update Kamaji edge-25.7.1 (#1184)

## What this PR does


### Release note

```release-note
[kamaji] Update Kamaji edge-25.7.1 #
```

## Summary by CodeRabbit

* **Chores**
* Removed all Helm chart files, templates, configuration, documentation,
and related scripts for the Kamaji Etcd component.
* Deleted Kubernetes resource definitions, backup/defrag jobs,
monitoring, RBAC, and ServiceAccount templates associated with Kamaji
Etcd.
* Removed supporting patches and Makefiles for managing the Kamaji Etcd
Helm chart.
* All user-facing configuration and deployment options for Kamaji Etcd
via Helm are no longer available.

2025-07-10 01:32:23 +02:00
Andrei Kvapil
a4ec46a941 [cozystack-api] Specify OpenAPI schema for apps (#1174)
Depends on https://github.com/cozystack/cozystack/pull/1173

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

## What this PR does


### Release note

```release-note
[cozystack-api] Specify OpenAPI schema for apps
```
2025-07-10 01:23:19 +02:00
Andrei Kvapil
2c126786b3 Update Flux Operator (0.24.0) (#1167)
This PR updates Flux Operator to 0.24.0. It includes changes that make
upgrading Flux more reliable on any version of the flux-operator; these
relate to `spec.distribution.artifact`, which you may have already seen:

https://fluxcd.control-plane.io/operator/fluxinstance/#distribution-artifact

This may be relevant to air-gapped environments.
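For reference, a minimal FluxInstance sketch using the `spec.distribution.artifact` field described in the linked page (values are illustrative, not taken from this repository):

```yaml
apiVersion: fluxcd.controlplane.io/v1
kind: FluxInstance
metadata:
  name: flux
  namespace: flux-system
spec:
  distribution:
    version: "2.x"
    registry: "ghcr.io/fluxcd"
    # Pull the Flux distribution manifests from an OCI artifact instead of
    # the manifests bundled into the operator image; useful when mirroring
    # artifacts into air-gapped registries.
    artifact: "oci://ghcr.io/controlplaneio-fluxcd/flux-operator-manifests:latest"
```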

## Summary by CodeRabbit

* **New Features**
* Added support for specifying extra pod volumes and container volume
mounts via new configuration options in the Helm chart.
* Extended CRD schemas to support additional provider types, new
filtering options, and enhanced validation and authentication fields.
* Introduced new fields for improved authentication and workload
identity federation in CRDs.

* **Documentation**
* Updated README files to document new configuration options and reflect
the latest chart versions.

* **Chores**
* Bumped Helm chart and app versions to 0.24.0 for both operator and
instance charts.

2025-07-10 00:19:35 +02:00
Andrei Kvapil
784f1454ba [kubevirt][cdi] Update KubeVirt v1.5.2 and CDI v1.62.0 (#1183)
- [kubevirt] Update KubeVirt v1.5.2
- [cdi] Update CDI v1.62.0

## What this PR does


### Release note

```release-note
[kubevirt] Update KubeVirt v1.5.2
[cdi] Update CDI v1.62.0
```
2025-07-10 00:16:45 +02:00
Andrei Kvapil
9d9226b575 [linstor] Update LINSTOR v1.31.2 (#1180)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

## What this PR does


### Release note

```release-note
[linstor] Update LINSTOR v1.31.2
```
2025-07-10 00:16:28 +02:00
Andrei Kvapil
5727110542 [kamaji] Update Kamaji edge-25.7.1
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-09 19:03:07 +02:00
Andrei Kvapil
f2fffb03e4 [cdi] Update CDI v1.62.0
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-09 18:58:43 +02:00
Andrei Kvapil
ab5eae3fbc [kubevirt] Update KubeVirt v1.5.2
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-09 18:58:39 +02:00
Andrei Kvapil
38cf5fd58c [cilium] Update Cilium v1.17.5
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-09 18:54:42 +02:00
Andrei Kvapil
cda554b58c [kube-ovn] Update Kube-OVN v1.13.14
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-09 18:54:01 +02:00
Andrei Kvapil
a73794d751 [linstor] Update LINSTOR v1.31.2
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-09 18:45:12 +02:00
Andrei Kvapil
9af6ce25bc [cozystack-api] Specify OpenAPI schema for apps
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-09 08:43:48 +02:00
Kingdon B
e70dfdec31 Update Flux Operator - 0.24.0
Signed-off-by: Kingdon B <kingdon@urmanac.com>
2025-07-08 10:39:45 -04:00
Kingdon B
08c0eecbc5 Update flux-instance chart
Signed-off-by: Kingdon B <kingdon@urmanac.com>
2025-07-08 10:38:38 -04:00
Nick Volynkin
1db08d0b73 [docs] Add release notes for v0.31.2
Resolves #1060

Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-06-25 01:07:01 +02:00
Nick Volynkin
b2ed7525cd [docs] Add release notes for v0.31.1
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-06-25 01:07:00 +02:00
133 changed files with 1850 additions and 1783 deletions

View File

@@ -12,10 +12,10 @@ repos:
name: Run 'make generate' in all app directories
entry: |
/bin/bash -c '
-for dir in ./packages/apps/*/; do
+for dir in ./packages/apps/*/ ./packages/extra/*/ ./packages/system/cozystack-api/; do
if [ -d "$dir" ]; then
echo "Running make generate in $dir"
(cd "$dir" && make generate)
make generate -C "$dir"
fi
done
git diff --color=always | cat
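The updated hook runs `make generate -C "$dir"` instead of a subshell `cd`, and now also covers `packages/extra/*` and the cozystack-api chart; the equivalent one-off invocation for a single directory is:

```sh
# Regenerate one component in place, without changing the working directory
make generate -C packages/system/cozystack-api
```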

View File

@@ -0,0 +1,11 @@
## Major Features and Improvements

## Security

## Fixes

## Dependencies

## Documentation

## Development, Testing, and CI/CD

View File

@@ -0,0 +1,8 @@
## Fixes
* [build] Update Talos Linux v1.10.3 and fix assets. (@kvaps in https://github.com/cozystack/cozystack/pull/1006)
* [ci] Fix uploading released artifacts to GitHub. (@kvaps in https://github.com/cozystack/cozystack/pull/1009)
* [ci] Separate build and testing jobs. (@kvaps in https://github.com/cozystack/cozystack/pull/1005)
* [docs] Write a full release post for v0.31.1. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/999)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.31.0...v0.31.1

View File

@@ -0,0 +1,12 @@
## Security
* Resolve a security problem that allowed a tenant administrator to gain enhanced privileges outside the tenant. (@kvaps in https://github.com/cozystack/cozystack/pull/1062, backported in https://github.com/cozystack/cozystack/pull/1066)
## Fixes
* [platform] Fix dependencies in `distro-full` bundle. (@klinch0 in https://github.com/cozystack/cozystack/pull/1056, backported in https://github.com/cozystack/cozystack/pull/1064)
* [platform] Fix RBAC for annotating namespaces. (@kvaps in https://github.com/cozystack/cozystack/pull/1031, backported in https://github.com/cozystack/cozystack/pull/1037)
* [platform] Reduce system resource consumption by using smaller resource presets for VerticalPodAutoscaler, SeaweedFS, and KubeOVN. (@klinch0 in https://github.com/cozystack/cozystack/pull/1054, backported in https://github.com/cozystack/cozystack/pull/1058)
* [dashboard] Fix a number of issues in the Cozystack Dashboard (@kvaps in https://github.com/cozystack/cozystack/pull/1042, backported in https://github.com/cozystack/cozystack/pull/1066)
* [apps] Specify minimal working resource presets. (@kvaps in https://github.com/cozystack/cozystack/pull/1040, backported in https://github.com/cozystack/cozystack/pull/1041)
* [apps] Update built-in documentation and configuration reference for managed Clickhouse application. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1059, backported in https://github.com/cozystack/cozystack/pull/1065)

View File

@@ -0,0 +1,38 @@
## Major Features and Improvements
* [postgres] Introduce new functionality for backup and restore in PostgreSQL. (@klinch0 in https://github.com/cozystack/cozystack/pull/1086)
* [apps] Refactor resources in managed applications. (@kvaps in https://github.com/cozystack/cozystack/pull/1106)
* [system] Make VMAgent's `extraArgs` tunable. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1091)
## Fixes
* [postgres] Escape users and database names. (@kvaps in https://github.com/cozystack/cozystack/pull/1087)
* [tenant] Fix monitoring agents HelmReleases for tenant clusters. (@klinch0 in https://github.com/cozystack/cozystack/pull/1079)
* [kubernetes] Wrap cert-manager CRDs in a conditional. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1076)
* [kubernetes] Remove `useCustomSecretForPatchContainerd` option and enable it by default. (@kvaps in https://github.com/cozystack/cozystack/pull/1104)
* [apps] Increase default resource presets for Clickhouse and Kafka from `nano` to `small`. Update OpenAPI specs and readme's. (@kvaps in https://github.com/cozystack/cozystack/pull/1103 and https://github.com/cozystack/cozystack/pull/1105)
* [linstor] Add configurable DRBD network options for connection and timeout settings, replacing scripted logic for detecting devices that lost connection. (@kvaps in https://github.com/cozystack/cozystack/pull/1094)
## Dependencies
* Update cozy-proxy to v0.2.0 (@kvaps in https://github.com/cozystack/cozystack/pull/1081)
* Update Kafka Operator to 0.45.1-rc1 (@kvaps in https://github.com/cozystack/cozystack/pull/1082 and https://github.com/cozystack/cozystack/pull/1102)
* Update Flux Operator to 0.23.0 (@kingdonb in https://github.com/cozystack/cozystack/pull/1078)
## Documentation
* [docs] Release notes for v0.32.0 and two beta-versions. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1043)
## Development, Testing, and CI/CD
* [tests] Add Kafka, Redis. (@gwynbleidd2106 in https://github.com/cozystack/cozystack/pull/1077)
* [tests] Increase disk space for VMs in tests. (@kvaps in https://github.com/cozystack/cozystack/pull/1097)
* [tests] Upd Kubernetes v1.33. (@kvaps in https://github.com/cozystack/cozystack/pull/1083)
* [tests] increase postgres timeouts. (@kvaps in https://github.com/cozystack/cozystack/pull/1108)
* [tests] don't wait for postgres ro service. (@kvaps in https://github.com/cozystack/cozystack/pull/1109)
* [ci] Setup systemd timer to tear down sandbox. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1092)
* [ci] Split testing job into several. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1075)
* [ci] Run E2E tests as separate parallel jobs. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1093)
* [ci] Refactor GitHub workflows. (@kvaps in https://github.com/cozystack/cozystack/pull/1107)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.32.0...v0.32.1

View File

@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/nginx-cache:0.6.0@sha256:b7633717cd7449c0042ae92d8ca9b36e4d69566561f5c7d44e21058e7d05c6d5
+ghcr.io/cozystack/cozystack/nginx-cache:0.6.0@sha256:50ac1581e3100bd6c477a71161cb455a341ffaf9e5e2f6086802e4e25271e8af

View File

@@ -76,7 +76,7 @@ input:
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: "ghcr.io/siderolabs/installer:v1.10.3"
imageRef: "ghcr.io/siderolabs/installer:${TALOS_VERSION}"
systemExtensions:
- imageRef: ghcr.io/siderolabs/amd-ucode:${AMD_UCODE_VERSION}
- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:${BNX2_BNX2X_VERSION}
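The matchbox profile now takes the installer tag from `${TALOS_VERSION}` rather than a hard-coded version. Assuming an envsubst-style rendering step (the mechanism is not shown in this diff), usage would look roughly like:

```sh
# Hypothetical rendering step; the variable names come from the profile above
TALOS_VERSION=v1.10.5 AMD_UCODE_VERSION=20250708 envsubst < profiles/matchbox.yaml > rendered.yaml
```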

View File

@@ -3,22 +3,22 @@
arch: amd64
platform: metal
secureboot: false
-version: v1.10.3
+version: v1.10.5
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: "ghcr.io/siderolabs/installer:v1.10.3"
imageRef: "ghcr.io/siderolabs/installer:v1.10.5"
systemExtensions:
-- imageRef: ghcr.io/siderolabs/amd-ucode:20250509
-- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250509
-- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250509
-- imageRef: ghcr.io/siderolabs/intel-ucode:20250211
-- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250509
-- imageRef: ghcr.io/siderolabs/drbd:9.2.13-v1.10.3
-- imageRef: ghcr.io/siderolabs/zfs:2.3.2-v1.10.3
+- imageRef: ghcr.io/siderolabs/amd-ucode:20250708
+- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250708
+- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250708
+- imageRef: ghcr.io/siderolabs/intel-ucode:20250512
+- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250708
+- imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.10.5
+- imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.10.5
output:
kind: initramfs
imageOptions: {}

View File

@@ -3,22 +3,22 @@
arch: amd64
platform: metal
secureboot: false
-version: v1.10.3
+version: v1.10.5
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: "ghcr.io/siderolabs/installer:v1.10.3"
imageRef: "ghcr.io/siderolabs/installer:v1.10.5"
systemExtensions:
-- imageRef: ghcr.io/siderolabs/amd-ucode:20250509
-- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250509
-- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250509
-- imageRef: ghcr.io/siderolabs/intel-ucode:20250211
-- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250509
-- imageRef: ghcr.io/siderolabs/drbd:9.2.13-v1.10.3
-- imageRef: ghcr.io/siderolabs/zfs:2.3.2-v1.10.3
+- imageRef: ghcr.io/siderolabs/amd-ucode:20250708
+- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250708
+- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250708
+- imageRef: ghcr.io/siderolabs/intel-ucode:20250512
+- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250708
+- imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.10.5
+- imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.10.5
output:
kind: installer
imageOptions: {}

View File

@@ -3,22 +3,22 @@
arch: amd64
platform: metal
secureboot: false
-version: v1.10.3
+version: v1.10.5
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: "ghcr.io/siderolabs/installer:v1.10.3"
imageRef: "ghcr.io/siderolabs/installer:v1.10.5"
systemExtensions:
-- imageRef: ghcr.io/siderolabs/amd-ucode:20250509
-- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250509
-- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250509
-- imageRef: ghcr.io/siderolabs/intel-ucode:20250211
-- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250509
-- imageRef: ghcr.io/siderolabs/drbd:9.2.13-v1.10.3
-- imageRef: ghcr.io/siderolabs/zfs:2.3.2-v1.10.3
+- imageRef: ghcr.io/siderolabs/amd-ucode:20250708
+- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250708
+- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250708
+- imageRef: ghcr.io/siderolabs/intel-ucode:20250512
+- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250708
+- imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.10.5
+- imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.10.5
output:
kind: iso
imageOptions: {}

View File

@@ -3,22 +3,22 @@
arch: amd64
platform: metal
secureboot: false
-version: v1.10.3
+version: v1.10.5
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: "ghcr.io/siderolabs/installer:v1.10.3"
imageRef: "ghcr.io/siderolabs/installer:v1.10.5"
systemExtensions:
-- imageRef: ghcr.io/siderolabs/amd-ucode:20250509
-- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250509
-- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250509
-- imageRef: ghcr.io/siderolabs/intel-ucode:20250211
-- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250509
-- imageRef: ghcr.io/siderolabs/drbd:9.2.13-v1.10.3
-- imageRef: ghcr.io/siderolabs/zfs:2.3.2-v1.10.3
+- imageRef: ghcr.io/siderolabs/amd-ucode:20250708
+- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250708
+- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250708
+- imageRef: ghcr.io/siderolabs/intel-ucode:20250512
+- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250708
+- imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.10.5
+- imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.10.5
output:
kind: kernel
imageOptions: {}

View File

@@ -3,22 +3,22 @@
arch: amd64
platform: metal
secureboot: false
-version: v1.10.3
+version: v1.10.5
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: "ghcr.io/siderolabs/installer:v1.10.3"
imageRef: "ghcr.io/siderolabs/installer:v1.10.5"
systemExtensions:
-- imageRef: ghcr.io/siderolabs/amd-ucode:20250509
-- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250509
-- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250509
-- imageRef: ghcr.io/siderolabs/intel-ucode:20250211
-- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250509
-- imageRef: ghcr.io/siderolabs/drbd:9.2.13-v1.10.3
-- imageRef: ghcr.io/siderolabs/zfs:2.3.2-v1.10.3
+- imageRef: ghcr.io/siderolabs/amd-ucode:20250708
+- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250708
+- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250708
+- imageRef: ghcr.io/siderolabs/intel-ucode:20250512
+- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250708
+- imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.10.5
+- imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.10.5
output:
kind: image
imageOptions: { diskSize: 1306525696, diskFormat: raw }

View File

@@ -3,22 +3,22 @@
arch: amd64
platform: nocloud
secureboot: false
-version: v1.10.3
+version: v1.10.5
input:
kernel:
path: /usr/install/amd64/vmlinuz
initramfs:
path: /usr/install/amd64/initramfs.xz
baseInstaller:
imageRef: "ghcr.io/siderolabs/installer:v1.10.3"
imageRef: "ghcr.io/siderolabs/installer:v1.10.5"
systemExtensions:
-- imageRef: ghcr.io/siderolabs/amd-ucode:20250509
-- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250509
-- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250509
-- imageRef: ghcr.io/siderolabs/intel-ucode:20250211
-- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250509
-- imageRef: ghcr.io/siderolabs/drbd:9.2.13-v1.10.3
-- imageRef: ghcr.io/siderolabs/zfs:2.3.2-v1.10.3
+- imageRef: ghcr.io/siderolabs/amd-ucode:20250708
+- imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20250708
+- imageRef: ghcr.io/siderolabs/intel-ice-firmware:20250708
+- imageRef: ghcr.io/siderolabs/intel-ucode:20250512
+- imageRef: ghcr.io/siderolabs/qlogic-firmware:20250708
+- imageRef: ghcr.io/siderolabs/drbd:9.2.14-v1.10.5
+- imageRef: ghcr.io/siderolabs/zfs:2.3.3-v1.10.5
output:
kind: image
imageOptions: { diskSize: 1306525696, diskFormat: raw }

View File

@@ -1,2 +1,2 @@
cozystack:
-image: ghcr.io/cozystack/cozystack/installer:v0.33.2@sha256:9d96e8b0398c4847783ea8b866d37a5a7de01fc6b7522764f8b3901cd6709018
+image: ghcr.io/cozystack/cozystack/installer:v0.34.0-beta.1@sha256:6f29c93e52d686ae6144d64bbcd92c138cbd1b432b06a74273c5dc35b11fe048

View File

@@ -1,2 +1,2 @@
e2e:
-image: ghcr.io/cozystack/cozystack/e2e-sandbox:v0.33.2@sha256:4af1266f9e055306deb6054be88230e864bfe420b8fa887ab675e6d2efb0f4fa
+image: ghcr.io/cozystack/cozystack/e2e-sandbox:v0.34.0-beta.1@sha256:f0a7a45218122b57022e51d41c0e6b18d31621c8ec504651d2347f47e5e5f256

View File

@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/matchbox:v0.33.2@sha256:04e42e125127c0e696cfcb516821e72ce178caa8a92e4c06d3f65c3ef6b3ef1e
+ghcr.io/cozystack/cozystack/matchbox:v0.34.0-beta.1@sha256:a0bd0076e0bc866858d3f08adca5944fb75004ad0ead2ace369d1c155d780383

View File

@@ -7,7 +7,6 @@
| Name | Description | Value |
| ---------------- | ----------------------------------------------------------------- | ------- |
| `replicas` | Number of ingress-nginx replicas | `2` |
-| `externalIPs` | List of externalIPs for service. | `[]` |
| `whitelist` | List of client networks | `[]` |
| `clouflareProxy` | Restoring original visitor IPs when Cloudflare proxied is enabled | `false` |

View File

@@ -7,14 +7,6 @@
"description": "Number of ingress-nginx replicas",
"default": 2
},
"externalIPs": {
"type": "array",
"description": "List of externalIPs for service.",
"default": "[]",
"items": {
"type": "string"
}
},
"whitelist": {
"type": "array",
"description": "List of client networks",

View File

@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/s3manager:v0.5.0@sha256:1ba30c6c3443e826006b4f6d1c6c251e74a4ffde6bd940f73aef1058a9d10751
+ghcr.io/cozystack/cozystack/s3manager:v0.5.0@sha256:45e02729edbee171519068b23cd3516009315769b36f59465c420a618320e363

View File

@@ -79,7 +79,7 @@ annotations:
Pod IP Pool\n description: |\n CiliumPodIPPool defines an IP pool that can
be used for pooled IPAM (i.e. the multi-pool IPAM mode).\n"
apiVersion: v2
-appVersion: 1.17.4
+appVersion: 1.17.5
description: eBPF-based Networking, Security, and Observability
home: https://cilium.io/
icon: https://cdn.jsdelivr.net/gh/cilium/cilium@main/Documentation/images/logo-solo.svg
@@ -95,4 +95,4 @@ kubeVersion: '>= 1.21.0-0'
name: cilium
sources:
- https://github.com/cilium/cilium
-version: 1.17.4
+version: 1.17.5

View File

@@ -1,6 +1,6 @@
# cilium
-![Version: 1.17.4](https://img.shields.io/badge/Version-1.17.4-informational?style=flat-square) ![AppVersion: 1.17.4](https://img.shields.io/badge/AppVersion-1.17.4-informational?style=flat-square)
+![Version: 1.17.5](https://img.shields.io/badge/Version-1.17.5-informational?style=flat-square) ![AppVersion: 1.17.5](https://img.shields.io/badge/AppVersion-1.17.5-informational?style=flat-square)
Cilium is open source software for providing and transparently securing
network connectivity and loadbalancing between application workloads such as
@@ -85,7 +85,7 @@ contributors across the globe, there is almost always someone available to help.
| authentication.mutual.spire.install.agent.tolerations | list | `[{"effect":"NoSchedule","key":"node.kubernetes.io/not-ready"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"},{"effect":"NoSchedule","key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true"},{"key":"CriticalAddonsOnly","operator":"Exists"}]` | SPIRE agent tolerations configuration By default it follows the same tolerations as the agent itself to allow the Cilium agent on this node to connect to SPIRE. ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
| authentication.mutual.spire.install.enabled | bool | `true` | Enable SPIRE installation. This will only take effect only if authentication.mutual.spire.enabled is true |
| authentication.mutual.spire.install.existingNamespace | bool | `false` | SPIRE namespace already exists. Set to true if Helm should not create, manage, and import the SPIRE namespace. |
-| authentication.mutual.spire.install.initImage | object | `{"digest":"sha256:37f7b378a29ceb4c551b1b5582e27747b855bbfaa73fa11914fe0df028dc581f","override":null,"pullPolicy":"IfNotPresent","repository":"docker.io/library/busybox","tag":"1.37.0","useDigest":true}` | init container image of SPIRE agent and server |
+| authentication.mutual.spire.install.initImage | object | `{"digest":"sha256:f85340bf132ae937d2c2a763b8335c9bab35d6e8293f70f606b9c6178d84f42b","override":null,"pullPolicy":"IfNotPresent","repository":"docker.io/library/busybox","tag":"1.37.0","useDigest":true}` | init container image of SPIRE agent and server |
| authentication.mutual.spire.install.namespace | string | `"cilium-spire"` | SPIRE namespace to install into |
| authentication.mutual.spire.install.server.affinity | object | `{}` | SPIRE server affinity configuration |
| authentication.mutual.spire.install.server.annotations | object | `{}` | SPIRE server annotations |
@@ -197,7 +197,7 @@ contributors across the globe, there is almost always someone available to help.
| clustermesh.apiserver.extraVolumeMounts | list | `[]` | Additional clustermesh-apiserver volumeMounts. |
| clustermesh.apiserver.extraVolumes | list | `[]` | Additional clustermesh-apiserver volumes. |
| clustermesh.apiserver.healthPort | int | `9880` | TCP port for the clustermesh-apiserver health API. |
-| clustermesh.apiserver.image | object | `{"digest":"sha256:0b72f3046cf36ff9b113d53cc61185e893edb5fe728a2c9e561c1083f806453d","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/clustermesh-apiserver","tag":"v1.17.4","useDigest":true}` | Clustermesh API server image. |
+| clustermesh.apiserver.image | object | `{"digest":"sha256:78dc40b9cb8d7b1ad21a76ff3e11541809acda2ac4ef94150cc832100edc247d","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/clustermesh-apiserver","tag":"v1.17.5","useDigest":true}` | Clustermesh API server image. |
| clustermesh.apiserver.kvstoremesh.enabled | bool | `true` | Enable KVStoreMesh. KVStoreMesh caches the information retrieved from the remote clusters in the local etcd instance. |
| clustermesh.apiserver.kvstoremesh.extraArgs | list | `[]` | Additional KVStoreMesh arguments. |
| clustermesh.apiserver.kvstoremesh.extraEnv | list | `[]` | Additional KVStoreMesh environment variables. |
@@ -243,6 +243,7 @@ contributors across the globe, there is almost always someone available to help.
| clustermesh.apiserver.service.enableSessionAffinity | string | `"HAOnly"` | Defines when to enable session affinity. Each replica in a clustermesh-apiserver deployment runs its own discrete etcd cluster. Remote clients connect to one of the replicas through a shared Kubernetes Service. A client reconnecting to a different backend will require a full resync to ensure data integrity. Session affinity can reduce the likelihood of this happening, but may not be supported by all cloud providers. Possible values: - "HAOnly" (default) Only enable session affinity for deployments with more than 1 replica. - "Always" Always enable session affinity. - "Never" Never enable session affinity. Useful in environments where session affinity is not supported, but may lead to slightly degraded performance due to more frequent reconnections. |
| clustermesh.apiserver.service.externalTrafficPolicy | string | `"Cluster"` | The externalTrafficPolicy of service used for apiserver access. |
| clustermesh.apiserver.service.internalTrafficPolicy | string | `"Cluster"` | The internalTrafficPolicy of service used for apiserver access. |
+| clustermesh.apiserver.service.labels | object | `{}` | Labels for the clustermesh-apiserver service. |
| clustermesh.apiserver.service.loadBalancerClass | string | `nil` | Configure a loadBalancerClass. Allows to configure the loadBalancerClass on the clustermesh-apiserver LB service in case the Service type is set to LoadBalancer (requires Kubernetes 1.24+). |
| clustermesh.apiserver.service.loadBalancerIP | string | `nil` | Configure a specific loadBalancerIP. Allows to configure a specific loadBalancerIP on the clustermesh-apiserver LB service in case the Service type is set to LoadBalancer. |
| clustermesh.apiserver.service.loadBalancerSourceRanges | list | `[]` | Configure loadBalancerSourceRanges. Allows to configure the source IP ranges allowed to access the clustermesh-apiserver LB service in case the Service type is set to LoadBalancer. |
@@ -377,7 +378,7 @@ contributors across the globe, there is almost always someone available to help.
| envoy.healthPort | int | `9878` | TCP port for the health API. |
| envoy.httpRetryCount | int | `3` | Maximum number of retries for each HTTP request |
| envoy.idleTimeoutDurationSeconds | int | `60` | Set Envoy upstream HTTP idle connection timeout seconds. Does not apply to connections with pending requests. Default 60s |
-| envoy.image | object | `{"digest":"sha256:a04218c6879007d60d96339a441c448565b6f86650358652da27582e0efbf182","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium-envoy","tag":"v1.32.6-1746661844-0f602c28cb2aa57b29078195049fb257d5b5246c","useDigest":true}` | Envoy container image. |
+| envoy.image | object | `{"digest":"sha256:9f69e290a7ea3d4edf9192acd81694089af048ae0d8a67fb63bd62dc1d72203e","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium-envoy","tag":"v1.32.6-1749271279-0864395884b263913eac200ee2048fd985f8e626","useDigest":true}` | Envoy container image. |
| envoy.initialFetchTimeoutSeconds | int | `30` | Time in seconds after which the initial fetch on an xDS stream is considered timed out |
| envoy.livenessProbe.failureThreshold | int | `10` | failure threshold of liveness probe |
| envoy.livenessProbe.periodSeconds | int | `30` | interval between checks of the liveness probe |
@@ -518,7 +519,7 @@ contributors across the globe, there is almost always someone available to help.
| hubble.relay.extraVolumes | list | `[]` | Additional hubble-relay volumes. |
| hubble.relay.gops.enabled | bool | `true` | Enable gops for hubble-relay |
| hubble.relay.gops.port | int | `9893` | Configure gops listen port for hubble-relay |
-| hubble.relay.image | object | `{"digest":"sha256:c16de12a64b8b56de62b15c1652d036253b40cd7fa643d7e1a404dc71dc66441","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-relay","tag":"v1.17.4","useDigest":true}` | Hubble-relay container image. |
+| hubble.relay.image | object | `{"digest":"sha256:fbb8a6afa8718200fca9381ad274ed695792dbadd2417b0e99c36210ae4964ff","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-relay","tag":"v1.17.5","useDigest":true}` | Hubble-relay container image. |
| hubble.relay.listenHost | string | `""` | Host to listen to. Specify an empty string to bind to all the interfaces. |
| hubble.relay.listenPort | string | `"4245"` | Port to listen to. |
| hubble.relay.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
@@ -625,7 +626,7 @@ contributors across the globe, there is almost always someone available to help.
| hubble.ui.updateStrategy | object | `{"rollingUpdate":{"maxUnavailable":1},"type":"RollingUpdate"}` | hubble-ui update strategy. |
| identityAllocationMode | string | `"crd"` | Method to use for identity allocation (`crd`, `kvstore` or `doublewrite-readkvstore` / `doublewrite-readcrd` for migrating between identity backends). |
| identityChangeGracePeriod | string | `"5s"` | Time to wait before using new identity on endpoint identity change. |
-| image | object | `{"digest":"sha256:24a73fe795351cf3279ac8e84918633000b52a9654ff73a6b0d7223bcff4a67a","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.17.4","useDigest":true}` | Agent container image. |
+| image | object | `{"digest":"sha256:baf8541723ee0b72d6c489c741c81a6fdc5228940d66cb76ef5ea2ce3c639ea6","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.17.5","useDigest":true}` | Agent container image. |
| imagePullSecrets | list | `[]` | Configure image pull secrets for pulling container images |
| ingressController.default | bool | `false` | Set cilium ingress controller to be the default ingress controller This will let cilium ingress controller route entries without ingress class set |
| ingressController.defaultSecretName | string | `nil` | Default secret name for ingresses without .spec.tls[].secretName set. |
@@ -763,7 +764,7 @@ contributors across the globe, there is almost always someone available to help.
| operator.hostNetwork | bool | `true` | HostNetwork setting |
| operator.identityGCInterval | string | `"15m0s"` | Interval for identity garbage collection. |
| operator.identityHeartbeatTimeout | string | `"30m0s"` | Timeout for identity heartbeats. |
-| operator.image | object | `{"alibabacloudDigest":"sha256:eaa7b18b7cda65af1d454d54224d175fdb69a35199fa949ae7dfda2789c18dd6","awsDigest":"sha256:3c31583e57648470fbf6646ac67122ac5896ce5f979ab824d9a38cfc7eafc753","azureDigest":"sha256:d8d95049bfeab47cb1a3f995164e1ca2cdec8e6c7036c29799647999cdae07b1","genericDigest":"sha256:a3906412f477b09904f46aac1bed28eb522bef7899ed7dd81c15f78b7aa1b9b5","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/operator","suffix":"","tag":"v1.17.4","useDigest":true}` | cilium-operator image. |
+| operator.image | object | `{"alibabacloudDigest":"sha256:654db67929f716b6178a34a15cb8f95e391465085bcf48cdba49819a56fcd259","awsDigest":"sha256:3e189ec1e286f1bf23d47c45bdeac6025ef7ec3d2dc16190ee768eb94708cbc3","azureDigest":"sha256:add78783fdaced7453a324612eeb9ebecf56002b56c14c73596b3b4923321026","genericDigest":"sha256:f954c97eeb1b47ed67d08cc8fb4108fb829f869373cbb3e698a7f8ef1085b09e","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/operator","suffix":"","tag":"v1.17.5","useDigest":true}` | cilium-operator image. |
| operator.nodeGCInterval | string | `"5m0s"` | Interval for cilium node garbage collection. |
| operator.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for cilium-operator pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
| operator.podAnnotations | object | `{}` | Annotations to be added to cilium-operator pods |
@@ -813,7 +814,7 @@ contributors across the globe, there is almost always someone available to help.
| preflight.extraEnv | list | `[]` | Additional preflight environment variables. |
| preflight.extraVolumeMounts | list | `[]` | Additional preflight volumeMounts. |
| preflight.extraVolumes | list | `[]` | Additional preflight volumes. |
-| preflight.image | object | `{"digest":"sha256:24a73fe795351cf3279ac8e84918633000b52a9654ff73a6b0d7223bcff4a67a","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.17.4","useDigest":true}` | Cilium pre-flight image. |
+| preflight.image | object | `{"digest":"sha256:baf8541723ee0b72d6c489c741c81a6fdc5228940d66cb76ef5ea2ce3c639ea6","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.17.5","useDigest":true}` | Cilium pre-flight image. |
| preflight.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for preflight pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
| preflight.podAnnotations | object | `{}` | Annotations to be added to preflight pods |
| preflight.podDisruptionBudget.enabled | bool | `false` | enable PodDisruptionBudget ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/ |

View File

@@ -1008,7 +1008,7 @@ spec:
defaultMode: 0400
sources:
- secret:
-name: {{ .Values.hubble.tls.server.existingSecret | default "hubble-metrics-server-certs" }}
+name: {{ .Values.hubble.metrics.tls.server.existingSecret | default "hubble-metrics-server-certs" }}
optional: true
items:
- key: tls.crt

View File

@@ -378,7 +378,7 @@ data:
bpf-events-default-burst-limit: {{ .Values.bpf.events.default.burstLimit | quote }}
{{- end}}
{{- if .Values.bpf.mapDynamicSizeRatio }}
{{- if ne 0.0 ( .Values.bpf.mapDynamicSizeRatio | float64) }}
# Specifies the ratio (0.0-1.0] of total system memory to use for dynamic
# sizing of the TCP CT, non-TCP CT, NAT and policy BPF maps.
bpf-map-dynamic-size-ratio: {{ .Values.bpf.mapDynamicSizeRatio | quote }}

View File

@@ -11,7 +11,9 @@ metadata:
{{- with .Values.commonLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
+{{- with .Values.clustermesh.apiserver.service.labels }}
+{{- toYaml . | nindent 4 }}
+{{- end }}
{{- if or .Values.clustermesh.apiserver.service.annotations .Values.clustermesh.annotations }}
annotations:
{{- with .Values.clustermesh.annotations }}

View File

@@ -597,7 +597,8 @@
"mapDynamicSizeRatio": {
"type": [
"null",
"number"
"number",
"string"
]
},
"masquerade": {
@@ -1246,6 +1247,9 @@
"Cluster"
]
},
"labels": {
"type": "object"
},
"loadBalancerClass": {
"type": [
"null",

View File

@@ -191,10 +191,10 @@ image:
# @schema
override: ~
repository: "quay.io/cilium/cilium"
tag: "v1.17.4"
tag: "v1.17.5"
pullPolicy: "IfNotPresent"
# cilium-digest
digest: "sha256:24a73fe795351cf3279ac8e84918633000b52a9654ff73a6b0d7223bcff4a67a"
digest: "sha256:baf8541723ee0b72d6c489c741c81a6fdc5228940d66cb76ef5ea2ce3c639ea6"
useDigest: true
# -- Scheduling configurations for cilium pods
scheduling:
@@ -561,7 +561,7 @@ bpf:
# @schema
policyMapMax: 16384
# @schema
-# type: [null, number]
+# type: [null, number, string]
# @schema
# -- (float64) Configure auto-sizing for all BPF maps based on available memory.
# ref: https://docs.cilium.io/en/stable/network/ebpf/maps/
@@ -1440,9 +1440,9 @@ hubble:
# @schema
override: ~
repository: "quay.io/cilium/hubble-relay"
tag: "v1.17.4"
tag: "v1.17.5"
# hubble-relay-digest
digest: "sha256:c16de12a64b8b56de62b15c1652d036253b40cd7fa643d7e1a404dc71dc66441"
digest: "sha256:fbb8a6afa8718200fca9381ad274ed695792dbadd2417b0e99c36210ae4964ff"
useDigest: true
pullPolicy: "IfNotPresent"
# -- Specifies the resources for the hubble-relay pods
@@ -2353,9 +2353,9 @@ envoy:
# @schema
override: ~
repository: "quay.io/cilium/cilium-envoy"
tag: "v1.32.6-1746661844-0f602c28cb2aa57b29078195049fb257d5b5246c"
tag: "v1.32.6-1749271279-0864395884b263913eac200ee2048fd985f8e626"
pullPolicy: "IfNotPresent"
digest: "sha256:a04218c6879007d60d96339a441c448565b6f86650358652da27582e0efbf182"
digest: "sha256:9f69e290a7ea3d4edf9192acd81694089af048ae0d8a67fb63bd62dc1d72203e"
useDigest: true
# -- Additional containers added to the cilium Envoy DaemonSet.
extraContainers: []
@@ -2710,15 +2710,15 @@ operator:
# @schema
override: ~
repository: "quay.io/cilium/operator"
tag: "v1.17.4"
tag: "v1.17.5"
# operator-generic-digest
genericDigest: "sha256:a3906412f477b09904f46aac1bed28eb522bef7899ed7dd81c15f78b7aa1b9b5"
genericDigest: "sha256:f954c97eeb1b47ed67d08cc8fb4108fb829f869373cbb3e698a7f8ef1085b09e"
# operator-azure-digest
azureDigest: "sha256:d8d95049bfeab47cb1a3f995164e1ca2cdec8e6c7036c29799647999cdae07b1"
azureDigest: "sha256:add78783fdaced7453a324612eeb9ebecf56002b56c14c73596b3b4923321026"
# operator-aws-digest
awsDigest: "sha256:3c31583e57648470fbf6646ac67122ac5896ce5f979ab824d9a38cfc7eafc753"
awsDigest: "sha256:3e189ec1e286f1bf23d47c45bdeac6025ef7ec3d2dc16190ee768eb94708cbc3"
# operator-alibabacloud-digest
alibabacloudDigest: "sha256:eaa7b18b7cda65af1d454d54224d175fdb69a35199fa949ae7dfda2789c18dd6"
alibabacloudDigest: "sha256:654db67929f716b6178a34a15cb8f95e391465085bcf48cdba49819a56fcd259"
useDigest: true
pullPolicy: "IfNotPresent"
suffix: ""
@@ -2993,9 +2993,9 @@ preflight:
# @schema
override: ~
repository: "quay.io/cilium/cilium"
tag: "v1.17.4"
tag: "v1.17.5"
# cilium-digest
digest: "sha256:24a73fe795351cf3279ac8e84918633000b52a9654ff73a6b0d7223bcff4a67a"
digest: "sha256:baf8541723ee0b72d6c489c741c81a6fdc5228940d66cb76ef5ea2ce3c639ea6"
useDigest: true
pullPolicy: "IfNotPresent"
# -- The priority class to use for the preflight pod.
@@ -3142,9 +3142,9 @@ clustermesh:
# @schema
override: ~
repository: "quay.io/cilium/clustermesh-apiserver"
tag: "v1.17.4"
tag: "v1.17.5"
# clustermesh-apiserver-digest
digest: "sha256:0b72f3046cf36ff9b113d53cc61185e893edb5fe728a2c9e561c1083f806453d"
digest: "sha256:78dc40b9cb8d7b1ad21a76ff3e11541809acda2ac4ef94150cc832100edc247d"
useDigest: true
pullPolicy: "IfNotPresent"
# -- TCP port for the clustermesh-apiserver health API.
@@ -3246,6 +3246,8 @@ clustermesh:
# * EKS: service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
# * GKE: networking.gke.io/load-balancer-type: "Internal"
annotations: {}
+# -- Labels for the clustermesh-apiserver service.
+labels: {}
# @schema
# enum: [Local, Cluster]
# @schema
@@ -3651,7 +3653,7 @@ authentication:
override: ~
repository: "docker.io/library/busybox"
tag: "1.37.0"
digest: "sha256:37f7b378a29ceb4c551b1b5582e27747b855bbfaa73fa11914fe0df028dc581f"
digest: "sha256:f85340bf132ae937d2c2a763b8335c9bab35d6e8293f70f606b9c6178d84f42b"
useDigest: true
pullPolicy: "IfNotPresent"
# SPIRE agent configuration

View File

@@ -566,7 +566,7 @@ bpf:
# @schema
policyMapMax: 16384
# @schema
-# type: [null, number]
+# type: [null, number, string]
# @schema
# -- (float64) Configure auto-sizing for all BPF maps based on available memory.
# ref: https://docs.cilium.io/en/stable/network/ebpf/maps/
@@ -3276,6 +3276,8 @@ clustermesh:
# * EKS: service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
# * GKE: networking.gke.io/load-balancer-type: "Internal"
annotations: {}
+# -- Labels for the clustermesh-apiserver service.
+labels: {}
# @schema
# enum: [Local, Cluster]
# @schema

View File

@@ -1,2 +1,2 @@
-ARG VERSION=v1.17.4
+ARG VERSION=v1.17.5
FROM quay.io/cilium/cilium:${VERSION}

View File

@@ -14,7 +14,7 @@ cilium:
mode: "kubernetes"
image:
repository: ghcr.io/cozystack/cozystack/cilium
-tag: 1.17.4
-digest: "sha256:91f628cbdc4652b4459af79c5a0501282cc0bc0a9fc11e3d8cb65e884f94e751"
+tag: 1.17.5
+digest: "sha256:2def2dccfc17870be6e1d63584c25b32e812f21c9cdcfa06deadd2787606654d"
envoy:
enabled: false

View File

@@ -21,3 +21,8 @@ image-cozystack-api:
IMAGE="$(REGISTRY)/cozystack-api:$(call settag,$(TAG))@$$(yq e '."containerimage.digest"' images/cozystack-api.json -o json -r)" \
yq -i '.cozystackAPI.image = strenv(IMAGE)' values.yaml
rm -f images/cozystack-api.json
+generate:
+rm -rf openapi-schemas
+mkdir -p openapi-schemas
+find ../../apps ../../extra -maxdepth 2 -name values.schema.json -exec sh -c 'ln -s ../{} openapi-schemas/$$(basename $$(dirname {})).json' \;
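Invoked as `make generate -C <chart dir>` (as the updated pre-commit hook does), this target rebuilds the `openapi-schemas/` symlink farm; the effect for a single app is exactly the symlink files added below, e.g.:

```sh
# What the find/ln -s line produces for the bucket app (paths relative to the chart dir)
ln -s ../../../apps/bucket/values.schema.json openapi-schemas/bucket.json
```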

View File

@@ -0,0 +1 @@
../../../extra/bootbox/values.schema.json

View File

@@ -0,0 +1 @@
../../../apps/bucket/values.schema.json

View File

@@ -0,0 +1 @@
../../../apps/clickhouse/values.schema.json

View File

@@ -0,0 +1 @@
../../../extra/etcd/values.schema.json

View File

@@ -0,0 +1 @@
../../../apps/ferretdb/values.schema.json

View File

@@ -0,0 +1 @@
../../../apps/http-cache/values.schema.json

View File

@@ -0,0 +1 @@
../../../extra/info/values.schema.json

View File

@@ -0,0 +1 @@
../../../extra/ingress/values.schema.json

View File

@@ -0,0 +1 @@
../../../apps/kafka/values.schema.json

View File

@@ -0,0 +1 @@
../../../apps/kubernetes/values.schema.json

View File

@@ -0,0 +1 @@
../../../extra/monitoring/values.schema.json

View File

@@ -0,0 +1 @@
../../../apps/mysql/values.schema.json

View File

@@ -0,0 +1 @@
../../../apps/nats/values.schema.json

View File

@@ -0,0 +1 @@
../../../apps/postgres/values.schema.json

View File

@@ -0,0 +1 @@
../../../apps/rabbitmq/values.schema.json

View File

@@ -0,0 +1 @@
../../../apps/redis/values.schema.json

View File

@@ -0,0 +1 @@
../../../extra/seaweedfs/values.schema.json

View File

@@ -0,0 +1 @@
../../../apps/tcp-balancer/values.schema.json

View File

@@ -0,0 +1 @@
../../../apps/tenant/values.schema.json

View File

@@ -0,0 +1 @@
../../../apps/virtual-machine/values.schema.json

View File

@@ -0,0 +1 @@
../../../apps/vm-disk/values.schema.json

View File

@@ -0,0 +1 @@
../../../apps/vm-instance/values.schema.json

View File

@@ -0,0 +1 @@
../../../apps/vpn/values.schema.json

View File

@@ -10,6 +10,7 @@ data:
kind: Bucket
singular: bucket
plural: buckets
openAPISchema: {{ .Files.Get "openapi-schemas/bucket.json" | fromJson | toJson | quote }}
release:
prefix: bucket-
labels:
@@ -24,6 +25,7 @@ data:
kind: ClickHouse
singular: clickhouse
plural: clickhouses
openAPISchema: {{ .Files.Get "openapi-schemas/clickhouse.json" | fromJson | toJson | quote }}
release:
prefix: clickhouse-
labels:
@@ -38,6 +40,7 @@ data:
kind: HTTPCache
singular: httpcache
plural: httpcaches
openAPISchema: {{ .Files.Get "openapi-schemas/http-cache.json" | fromJson | toJson | quote }}
release:
prefix: http-cache-
labels:
@@ -52,6 +55,7 @@ data:
kind: NATS
singular: nats
plural: natses
openAPISchema: {{ .Files.Get "openapi-schemas/nats.json" | fromJson | toJson | quote }}
release:
prefix: nats-
labels:
@@ -66,6 +70,7 @@ data:
kind: TCPBalancer
singular: tcpbalancer
plural: tcpbalancers
openAPISchema: {{ .Files.Get "openapi-schemas/tcp-balancer.json" | fromJson | toJson | quote }}
release:
prefix: tcp-balancer-
labels:
@@ -80,6 +85,7 @@ data:
kind: VirtualMachine
singular: virtualmachine
plural: virtualmachines
openAPISchema: {{ .Files.Get "openapi-schemas/virtual-machine.json" | fromJson | toJson | quote }}
release:
prefix: virtual-machine-
labels:
@@ -94,6 +100,7 @@ data:
kind: VPN
singular: vpn
plural: vpns
openAPISchema: {{ .Files.Get "openapi-schemas/vpn.json" | fromJson | toJson | quote }}
release:
prefix: vpn-
labels:
@@ -108,6 +115,7 @@ data:
kind: MySQL
singular: mysql
plural: mysqls
openAPISchema: {{ .Files.Get "openapi-schemas/mysql.json" | fromJson | toJson | quote }}
release:
prefix: mysql-
labels:
@@ -122,6 +130,7 @@ data:
kind: Tenant
singular: tenant
plural: tenants
openAPISchema: {{ .Files.Get "openapi-schemas/tenant.json" | fromJson | toJson | quote }}
release:
prefix: tenant-
labels:
@@ -136,6 +145,7 @@ data:
kind: Kubernetes
singular: kubernetes
plural: kuberneteses
openAPISchema: {{ .Files.Get "openapi-schemas/kubernetes.json" | fromJson | toJson | quote }}
release:
prefix: kubernetes-
labels:
@@ -150,6 +160,7 @@ data:
kind: Redis
singular: redis
plural: redises
openAPISchema: {{ .Files.Get "openapi-schemas/redis.json" | fromJson | toJson | quote }}
release:
prefix: redis-
labels:
@@ -164,6 +175,7 @@ data:
kind: RabbitMQ
singular: rabbitmq
plural: rabbitmqs
openAPISchema: {{ .Files.Get "openapi-schemas/rabbitmq.json" | fromJson | toJson | quote }}
release:
prefix: rabbitmq-
labels:
@@ -178,6 +190,7 @@ data:
kind: Postgres
singular: postgres
plural: postgreses
openAPISchema: {{ .Files.Get "openapi-schemas/postgres.json" | fromJson | toJson | quote }}
release:
prefix: postgres-
labels:
@@ -192,6 +205,7 @@ data:
kind: FerretDB
singular: ferretdb
plural: ferretdb
openAPISchema: {{ .Files.Get "openapi-schemas/ferretdb.json" | fromJson | toJson | quote }}
release:
prefix: ferretdb-
labels:
@@ -206,6 +220,7 @@ data:
kind: Kafka
singular: kafka
plural: kafkas
openAPISchema: {{ .Files.Get "openapi-schemas/kafka.json" | fromJson | toJson | quote }}
release:
prefix: kafka-
labels:
@@ -220,6 +235,7 @@ data:
kind: VMDisk
plural: vmdisks
singular: vmdisk
openAPISchema: {{ .Files.Get "openapi-schemas/vm-disk.json" | fromJson | toJson | quote }}
release:
prefix: vm-disk-
labels:
@@ -234,6 +250,7 @@ data:
kind: VMInstance
plural: vminstances
singular: vminstance
openAPISchema: {{ .Files.Get "openapi-schemas/vm-instance.json" | fromJson | toJson | quote }}
release:
prefix: vm-instance-
labels:
@@ -248,6 +265,7 @@ data:
kind: Monitoring
plural: monitorings
singular: monitoring
openAPISchema: {{ .Files.Get "openapi-schemas/monitoring.json" | fromJson | toJson | quote }}
release:
prefix: ""
labels:
@@ -262,6 +280,7 @@ data:
kind: Etcd
plural: etcds
singular: etcd
openAPISchema: {{ .Files.Get "openapi-schemas/etcd.json" | fromJson | toJson | quote }}
release:
prefix: ""
labels:
@@ -276,6 +295,7 @@ data:
kind: Ingress
plural: ingresses
singular: ingress
openAPISchema: {{ .Files.Get "openapi-schemas/ingress.json" | fromJson | toJson | quote }}
release:
prefix: ""
labels:
@@ -290,6 +310,7 @@ data:
kind: SeaweedFS
plural: seaweedfses
singular: seaweedfs
openAPISchema: {{ .Files.Get "openapi-schemas/seaweedfs.json" | fromJson | toJson | quote }}
release:
prefix: ""
labels:
@@ -304,6 +325,7 @@ data:
kind: BootBox
plural: bootboxes
singular: bootbox
openAPISchema: {{ .Files.Get "openapi-schemas/bootbox.json" | fromJson | toJson | quote }}
release:
prefix: ""
labels:
@@ -318,6 +340,7 @@ data:
kind: Info
plural: infos
singular: info
openAPISchema: {{ .Files.Get "openapi-schemas/info.json" | fromJson | toJson | quote }}
release:
prefix: ""
labels:
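In each entry above, `.Files.Get` reads the symlinked schema, `fromJson | toJson` re-serializes it as compact single-line JSON, and `quote` embeds it as one YAML string. For a hypothetical minimal schema `{"type": "object"}`, the rendered fragment would be:

```yaml
# Illustrative rendered output; real schemas are far larger
openAPISchema: "{\"type\":\"object\"}"
```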

View File

@@ -1,2 +1,2 @@
cozystackAPI:
-image: ghcr.io/cozystack/cozystack/cozystack-api:v0.33.2@sha256:724a166d2daa9cae3caeb18bffdc7146d80de310a6f97360c2beaef340076e6d
+image: ghcr.io/cozystack/cozystack/cozystack-api:v0.34.0-beta.1@sha256:724a166d2daa9cae3caeb18bffdc7146d80de310a6f97360c2beaef340076e6d

View File

@@ -1,5 +1,5 @@
cozystackController:
-image: ghcr.io/cozystack/cozystack/cozystack-controller:v0.33.2@sha256:34e641b1bda248c254bbf259450d6ccad6ef632b92d28f3a6da4bbfde7983335
+image: ghcr.io/cozystack/cozystack/cozystack-controller:v0.34.0-beta.1@sha256:cf0b80f2540ac8f6ddd226b3bab87e001602eb0ebc8a527a1c14a0a6f23eb427
debug: false
disableTelemetry: false
cozystackVersion: "v0.33.2"
cozystackVersion: "v0.34.0-beta.1"

View File

@@ -76,7 +76,7 @@ data:
"kubeappsNamespace": {{ .Release.Namespace | quote }},
"helmGlobalNamespace": {{ include "kubeapps.helmGlobalPackagingNamespace" . | quote }},
"carvelGlobalNamespace": {{ .Values.kubeappsapis.pluginConfig.kappController.packages.v1alpha1.globalPackagingNamespace | quote }},
"appVersion": "v0.33.2",
"appVersion": "v0.34.0-beta.1",
"authProxyEnabled": {{ .Values.authProxy.enabled }},
"oauthLoginURI": {{ .Values.authProxy.oauthLoginURI | quote }},
"oauthLogoutURI": {{ .Values.authProxy.oauthLogoutURI | quote }},

View File

@@ -19,7 +19,7 @@ kubeapps:
image:
registry: ghcr.io/cozystack/cozystack
repository: dashboard
tag: v0.33.2
tag: v0.34.0-beta.1
digest: "sha256:ac2b5348d85fe37ad70a4cc159881c4eaded9175a4b586cfa09a52b0fbe5e1e5"
redis:
master:
@@ -37,8 +37,8 @@ kubeapps:
image:
registry: ghcr.io/cozystack/cozystack
repository: kubeapps-apis
tag: v0.33.2
digest: "sha256:65325a916974e63e813fca1a89dc40ae58b5bfc2a8ffc4581916106136d19563"
tag: v0.34.0-beta.1
digest: "sha256:0270aea2e4b21a906db7f03214e7f6b0786be64a1b66e998e4ed8ef7da12da58"
pluginConfig:
flux:
packages:

View File

@@ -8,7 +8,7 @@ annotations:
- name: Upstream Project
url: https://github.com/controlplaneio-fluxcd/flux-operator
apiVersion: v2
appVersion: v0.23.0
appVersion: v0.24.0
description: 'A Helm chart for deploying the Flux Operator. '
home: https://github.com/controlplaneio-fluxcd
icon: https://raw.githubusercontent.com/cncf/artwork/main/projects/flux/icon/color/flux-icon-color.png
@@ -25,4 +25,4 @@ sources:
- https://github.com/controlplaneio-fluxcd/flux-operator
- https://github.com/controlplaneio-fluxcd/charts
type: application
version: 0.23.0
version: 0.24.0

View File

@@ -1,6 +1,6 @@
# flux-operator
![Version: 0.23.0](https://img.shields.io/badge/Version-0.23.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v0.23.0](https://img.shields.io/badge/AppVersion-v0.23.0-informational?style=flat-square)
![Version: 0.24.0](https://img.shields.io/badge/Version-0.24.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v0.24.0](https://img.shields.io/badge/AppVersion-v0.24.0-informational?style=flat-square)
The [Flux Operator](https://github.com/controlplaneio-fluxcd/flux-operator) provides a
declarative API for the installation and upgrade of CNCF [Flux](https://fluxcd.io) and the
@@ -38,6 +38,8 @@ see the Flux Operator [documentation](https://fluxcd.control-plane.io/operator/)
| commonLabels | object | `{}` | Common labels to add to all deployed objects including pods. |
| extraArgs | list | `[]` | Container extra arguments. |
| extraEnvs | list | `[]` | Container extra environment variables. |
| extraVolumeMounts | list | `[]` | Container extra volume mounts. |
| extraVolumes | list | `[]` | Pod extra volumes. |
| fullnameOverride | string | `""` | |
| hostNetwork | bool | `false` | If `true`, the container ports (`8080` and `8081`) are exposed on the host network. |
| image | object | `{"imagePullPolicy":"IfNotPresent","pullSecrets":[],"repository":"ghcr.io/controlplaneio-fluxcd/flux-operator","tag":""}` | Container image settings. The image tag defaults to the chart appVersion. |

View File

@@ -586,6 +586,9 @@ spec:
description: ServerVersion is the version of the Kubernetes API
server.
type: string
required:
- platform
- serverVersion
type: object
components:
description: ComponentsStatus is the status of the Flux controller
@@ -637,6 +640,23 @@ spec:
- entitlement
- status
type: object
operator:
description: Operator is the version information of the Flux Operator.
properties:
apiVersion:
description: APIVersion is the API version of the Flux Operator.
type: string
platform:
description: Platform is the os/arch of Flux Operator.
type: string
version:
description: Version is the version number of Flux Operator.
type: string
required:
- apiVersion
- platform
- version
type: object
reconcilers:
description: |-
ReconcilersStatus is the list of Flux reconcilers and
@@ -858,8 +878,10 @@ spec:
- a PEM-encoded CA certificate (`ca.crt`)
- a PEM-encoded client certificate (`tls.crt`) and private key (`tls.key`)
When connecting to a Git provider that uses self-signed certificates, the CA certificate
When connecting to a Git or OCI provider that uses self-signed certificates, the CA certificate
must be set in the Secret under the 'ca.crt' key to establish the trust relationship.
When connecting to an OCI provider that supports client certificates (mTLS), the client certificate
and private key must be set in the Secret under the 'tls.crt' and 'tls.key' keys, respectively.
properties:
name:
description: Name of the referent.
@@ -884,11 +906,21 @@ spec:
ExcludeBranch specifies the regular expression to filter the branches
that the input provider should exclude.
type: string
excludeTag:
description: |-
ExcludeTag specifies the regular expression to filter the tags
that the input provider should exclude.
type: string
includeBranch:
description: |-
IncludeBranch specifies the regular expression to filter the branches
that the input provider should include.
type: string
includeTag:
description: |-
IncludeTag specifies the regular expression to filter the tags
that the input provider should include.
type: string
labels:
description: Labels specifies the list of labels to filter the
input provider response.
@@ -896,13 +928,17 @@ spec:
type: string
type: array
limit:
default: 100
description: |-
Limit specifies the maximum number of input sets to return.
When not set, the default limit is 100.
type: integer
semver:
description: Semver specifies the semantic version range to filter
and order the tags.
description: |-
Semver specifies a semantic version range to filter and sort the tags.
If this field is not specified, the tags will be sorted in reverse
alphabetical order.
Supported only for tags at the moment.
type: string
type: object
schedule:
@@ -933,10 +969,12 @@ spec:
secretRef:
description: |-
SecretRef specifies the Kubernetes Secret containing the basic-auth credentials
to access the input provider. The secret must contain the keys
'username' and 'password'.
When connecting to a Git provider, the password should be a personal access token
to access the input provider.
When connecting to a Git provider, the secret must contain the keys
'username' and 'password', and the password should be a personal access token
that grants read-only access to the repository.
When connecting to an OCI provider, the secret must contain a Kubernetes
Image Pull Secret, as if created by `kubectl create secret docker-registry`.
properties:
name:
description: Name of the referent.
@@ -944,6 +982,14 @@ spec:
required:
- name
type: object
serviceAccountName:
description: |-
ServiceAccountName specifies the name of the Kubernetes ServiceAccount
used for authentication with AWS, Azure or GCP services through
workload identity federation features. If not specified, the
authentication for these cloud providers will use the ServiceAccount
of the operator (or any other environment authentication configuration).
type: string
skip:
description: Skip defines whether we need to skip input provider response
updates.
@@ -966,12 +1012,20 @@ spec:
- GitLabBranch
- GitLabTag
- GitLabMergeRequest
- AzureDevOpsBranch
- AzureDevOpsTag
- AzureDevOpsPullRequest
- OCIArtifactTag
- ACRArtifactTag
- ECRArtifactTag
- GARArtifactTag
type: string
url:
description: |-
URL specifies the HTTP/S address of the input provider API.
URL specifies the HTTP/S or OCI address of the input provider API.
When connecting to a Git provider, the URL should point to the repository address.
pattern: ^((http|https)://.*){0,1}$
When connecting to an OCI provider, the URL should point to the OCI repository address.
pattern: ^((http|https|oci)://.*){0,1}$
type: string
required:
- type
@@ -981,6 +1035,27 @@ spec:
rule: self.type != 'Static' || !has(self.url)
- message: spec.url must not be empty when spec.type is not 'Static'
rule: self.type == 'Static' || has(self.url)
- message: spec.url must start with 'http://' or 'https://' when spec.type
is a Git provider
rule: '!self.type.startsWith(''Git'') || self.url.startsWith(''http'')'
- message: spec.url must start with 'http://' or 'https://' when spec.type
is an AzureDevOps provider
rule: '!self.type.startsWith(''AzureDevOps'') || self.url.startsWith(''http'')'
- message: spec.url must start with 'oci://' when spec.type is an OCI
provider
rule: '!self.type.endsWith(''ArtifactTag'') || self.url.startsWith(''oci'')'
- message: cannot specify spec.serviceAccountName when spec.type is not
one of AzureDevOps* or *ArtifactTag
rule: '!has(self.serviceAccountName) || self.type.startsWith(''AzureDevOps'')
|| self.type.endsWith(''ArtifactTag'')'
- message: cannot specify spec.certSecretRef when spec.type is one of
Static, AzureDevOps*, ACRArtifactTag, ECRArtifactTag or GARArtifactTag
rule: '!has(self.certSecretRef) || !(self.type == ''Static'' || self.type.startsWith(''AzureDevOps'')
|| (self.type.endsWith(''ArtifactTag'') && self.type != ''OCIArtifactTag''))'
- message: cannot specify spec.secretRef when spec.type is one of Static,
ACRArtifactTag, ECRArtifactTag or GARArtifactTag
rule: '!has(self.secretRef) || !(self.type == ''Static'' || (self.type.endsWith(''ArtifactTag'')
&& self.type != ''OCIArtifactTag''))'
status:
description: ResourceSetInputProviderStatus defines the observed state
of ResourceSetInputProvider.
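
To make the new fields concrete, here is a minimal hedged sketch of a ResourceSetInputProvider that tracks OCI artifact tags. It assumes the `fluxcd.controlplane.io/v1` API group and that the tag filters nest under `spec.filter`; the registry URL and names are illustrative:

```yaml
apiVersion: fluxcd.controlplane.io/v1
kind: ResourceSetInputProvider
metadata:
  name: app-artifacts
  namespace: flux-system
spec:
  type: OCIArtifactTag             # new provider type; the URL must use oci://
  url: oci://ghcr.io/example/app-manifests
  filter:
    includeTag: "^v.*"             # new include/exclude regex filters
    semver: ">=1.0.0"              # filters and sorts the matching tags
    limit: 10                      # defaults to 100 when unset
  secretRef:
    name: ghcr-pull-secret         # image pull Secret, as the updated docs describe for OCI
```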

View File

@@ -99,9 +99,15 @@ spec:
volumeMounts:
- name: temp
mountPath: /tmp
{{- if .Values.extraVolumeMounts }}
{{- toYaml .Values.extraVolumeMounts | nindent 12 }}
{{- end }}
volumes:
- name: temp
emptyDir: {}
{{- if .Values.extraVolumes }}
{{- toYaml .Values.extraVolumes | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}

View File

@@ -116,12 +116,18 @@ nodeSelector: { } # @schema type: object
# -- If `true`, the container ports (`8080` and `8081`) are exposed on the host network.
hostNetwork: false # @schema default: false
# -- Pod extra volumes.
extraVolumes: [ ] # @schema item: object ; uniqueItems: true
# -- Container extra environment variables.
extraEnvs: [ ] # @schema item: object ; uniqueItems: true
# -- Container extra arguments.
extraArgs: [ ] # @schema item: string ; uniqueItems: true
# -- Container extra volume mounts.
extraVolumeMounts: [ ] # @schema item: object ; uniqueItems: true
# -- Container logging level flag.
logLevel: "info" # @schema enum:[debug,info,error]

View File

@@ -8,7 +8,7 @@ annotations:
- name: Upstream Project
url: https://github.com/controlplaneio-fluxcd/flux-operator
apiVersion: v2
appVersion: v0.23.0
appVersion: v0.24.0
description: 'A Helm chart for deploying a Flux instance managed by Flux Operator. '
home: https://github.com/controlplaneio-fluxcd
icon: https://raw.githubusercontent.com/cncf/artwork/main/projects/flux/icon/color/flux-icon-color.png
@@ -25,4 +25,4 @@ sources:
- https://github.com/controlplaneio-fluxcd/flux-operator
- https://github.com/controlplaneio-fluxcd/charts
type: application
version: 0.23.0
version: 0.24.0

View File

@@ -1,6 +1,6 @@
# flux-instance
![Version: 0.23.0](https://img.shields.io/badge/Version-0.23.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v0.23.0](https://img.shields.io/badge/AppVersion-v0.23.0-informational?style=flat-square)
![Version: 0.24.0](https://img.shields.io/badge/Version-0.24.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v0.24.0](https://img.shields.io/badge/AppVersion-v0.24.0-informational?style=flat-square)
This chart is a thin wrapper around the `FluxInstance` custom resource, which is
used by the [Flux Operator](https://github.com/controlplaneio-fluxcd/flux-operator)

View File

@@ -0,0 +1,2 @@
name: hetzner-ccm
version: 1.26.0 # Placeholder, the actual version will be automatically set during the build process

View File

@@ -0,0 +1,10 @@
export NAME=hetzner-ccm
export NAMESPACE=kube-system
include ../../../scripts/package.mk
update:
rm -rf charts
helm repo add hcloud https://charts.hetzner.cloud
helm repo update hcloud
helm pull hcloud/hcloud-cloud-controller-manager --untar --untardir charts

View File

@@ -0,0 +1,96 @@
---
# Source: hcloud-cloud-controller-manager/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: hcloud-cloud-controller-manager
namespace: kube-system
---
# Source: hcloud-cloud-controller-manager/templates/clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: "system:hcloud-cloud-controller-manager"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: hcloud-cloud-controller-manager
namespace: kube-system
---
# Source: hcloud-cloud-controller-manager/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: hcloud-cloud-controller-manager
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 2
selector:
matchLabels:
app.kubernetes.io/instance: 'hcloud-hccm'
app.kubernetes.io/name: 'hcloud-cloud-controller-manager'
template:
metadata:
labels:
app.kubernetes.io/instance: 'hcloud-hccm'
app.kubernetes.io/name: 'hcloud-cloud-controller-manager'
spec:
serviceAccountName: hcloud-cloud-controller-manager
dnsPolicy: Default
tolerations:
# Allow HCCM itself to schedule on nodes that have not yet been initialized by HCCM.
- key: "node.cloudprovider.kubernetes.io/uninitialized"
value: "true"
effect: "NoSchedule"
- key: "CriticalAddonsOnly"
operator: "Exists"
# Allow HCCM to schedule on control plane nodes.
- key: "node-role.kubernetes.io/master"
effect: NoSchedule
operator: Exists
- key: "node-role.kubernetes.io/control-plane"
effect: NoSchedule
operator: Exists
- key: "node.kubernetes.io/not-ready"
effect: "NoExecute"
containers:
- name: hcloud-cloud-controller-manager
args:
- "--allow-untagged-cloud"
- "--cloud-provider=hcloud"
- "--route-reconciliation-period=30s"
- "--webhook-secure-port=0"
- "--leader-elect=false"
env:
- name: HCLOUD_TOKEN
valueFrom:
secretKeyRef:
key: token
name: hcloud
- name: ROBOT_PASSWORD
valueFrom:
secretKeyRef:
key: robot-password
name: hcloud
optional: true
- name: ROBOT_USER
valueFrom:
secretKeyRef:
key: robot-user
name: hcloud
optional: true
image: docker.io/hetznercloud/hcloud-cloud-controller-manager:v1.26.0 # x-releaser-pleaser-version
ports:
- name: metrics
containerPort: 8233
resources:
requests:
cpu: 100m
memory: 50Mi
priorityClassName: "system-cluster-critical"

View File

@@ -0,0 +1,113 @@
---
# Source: hcloud-cloud-controller-manager/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: hcloud-cloud-controller-manager
namespace: kube-system
---
# Source: hcloud-cloud-controller-manager/templates/clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: "system:hcloud-cloud-controller-manager"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: hcloud-cloud-controller-manager
namespace: kube-system
---
# Source: hcloud-cloud-controller-manager/templates/daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: hcloud-cloud-controller-manager
namespace: kube-system
spec:
revisionHistoryLimit: 2
selector:
matchLabels:
app.kubernetes.io/instance: 'hcloud-hccm'
app.kubernetes.io/name: 'hcloud-cloud-controller-manager'
template:
metadata:
labels:
app.kubernetes.io/instance: 'hcloud-hccm'
app.kubernetes.io/name: 'hcloud-cloud-controller-manager'
pod-label: pod-label
annotations:
pod-annotation: pod-annotation
spec:
serviceAccountName: hcloud-cloud-controller-manager
dnsPolicy: Default
tolerations:
# Allow HCCM itself to schedule on nodes that have not yet been initialized by HCCM.
- key: "node.cloudprovider.kubernetes.io/uninitialized"
value: "true"
effect: "NoSchedule"
- key: "CriticalAddonsOnly"
operator: "Exists"
# Allow HCCM to schedule on control plane nodes.
- key: "node-role.kubernetes.io/master"
effect: NoSchedule
operator: Exists
- key: "node-role.kubernetes.io/control-plane"
effect: NoSchedule
operator: Exists
- key: "node.kubernetes.io/not-ready"
effect: "NoExecute"
- effect: NoSchedule
key: example-key
operator: Exists
nodeSelector:
foo: bar
containers:
- name: hcloud-cloud-controller-manager
command:
- "/bin/hcloud-cloud-controller-manager"
- "--allow-untagged-cloud"
- "--cloud-provider=hcloud"
- "--route-reconciliation-period=30s"
- "--webhook-secure-port=0"
env:
- name: HCLOUD_TOKEN
valueFrom:
secretKeyRef:
key: token
name: hcloud
- name: ROBOT_PASSWORD
valueFrom:
secretKeyRef:
key: robot-password
name: hcloud
optional: true
- name: ROBOT_USER
valueFrom:
secretKeyRef:
key: robot-user
name: hcloud
optional: true
image: docker.io/hetznercloud/hcloud-cloud-controller-manager:v1.26.0 # x-releaser-pleaser-version
ports:
- name: metrics
containerPort: 8233
resources:
requests:
cpu: 100m
memory: 50Mi
volumeMounts:
- mountPath: /var/run/secrets/hcloud
name: token-volume
readOnly: true
priorityClassName: system-cluster-critical
volumes:
- name: token-volume
secret:
secretName: hcloud-token

View File

@@ -0,0 +1,51 @@
kind: DaemonSet
monitoring:
podMonitor:
labels:
environment: staging
annotations:
release: kube-prometheus-stack
additionalTolerations:
- key: "example-key"
operator: "Exists"
effect: "NoSchedule"
nodeSelector:
foo: bar
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: topology.kubernetes.io/zone
operator: In
values:
- antarctica-east1
- antarctica-west1
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: another-node-label-key
operator: In
values:
- another-node-label-value
podLabels:
pod-label: pod-label
podAnnotations:
pod-annotation: pod-annotation
extraVolumeMounts:
- name: token-volume
readOnly: true
mountPath: /var/run/secrets/hcloud
extraVolumes:
- name: token-volume
secret:
secretName: hcloud-token

View File

@@ -0,0 +1,4 @@
apiVersion: v2
name: hcloud-cloud-controller-manager
type: application
version: 1.26.0

View File

@@ -0,0 +1,61 @@
# hcloud-cloud-controller-manager Helm Chart
This Helm chart is the recommended installation method for [hcloud-cloud-controller-manager](https://github.com/hetznercloud/hcloud-cloud-controller-manager).
## Quickstart
First, [install Helm 3](https://helm.sh/docs/intro/install/).
The following snippet will deploy hcloud-cloud-controller-manager to the kube-system namespace.
```sh
# Sync the Hetzner Cloud helm chart repository to your local computer.
helm repo add hcloud https://charts.hetzner.cloud
helm repo update hcloud
# Install the latest version of the hcloud-cloud-controller-manager chart.
helm install hccm hcloud/hcloud-cloud-controller-manager -n kube-system
# If you want to install hccm with private networking support (see main Deployment guide for more info).
helm install hccm hcloud/hcloud-cloud-controller-manager -n kube-system --set networking.enabled=true
```
Please note that additional configuration is necessary. See the main [Deployment](https://github.com/hetznercloud/hcloud-cloud-controller-manager#deployment) guide.
If you're unfamiliar with Helm, it's worth taking some time to read the documentation. Perhaps start with the [Quickstart Guide](https://helm.sh/docs/intro/quickstart/)?
### Upgrading from static manifests
If you previously installed hcloud-cloud-controller-manager with this command:
```sh
kubectl apply -f https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm.yaml
```
You can uninstall that same deployment by running the following command:
```sh
kubectl delete -f https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm.yaml
```
Then you can follow the Quickstart installation steps above.
## Configuration
This chart aims to be highly flexible. Please review the [values.yaml](./values.yaml) for a full list of configuration options.
If you've already deployed hccm using the `helm install` command above, you can easily change configuration values:
```sh
helm upgrade hccm hcloud/hcloud-cloud-controller-manager -n kube-system --set monitoring.podMonitor.enabled=true
```
### Multiple replicas / DaemonSet
You can choose between different deployment options. By default, the chart deploys a single replica as a Deployment.
If you want to change the replica count, adjust the `replicaCount` value in the Helm values.
If you run more than one replica, leader election is turned on automatically.
If you want to deploy hccm as a DaemonSet, set `kind` to `DaemonSet` in the values.
To control which nodes the DaemonSet runs on, use the `nodeSelector` and `additionalTolerations` values, as in the example below.
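A hedged values override like the following (the node-selector label follows the commented hint in `values.yaml`; the toleration key is illustrative) runs hccm as a DaemonSet on control-plane nodes:
```yaml
kind: DaemonSet
nodeSelector:
  node-role.kubernetes.io/control-plane: ""
additionalTolerations:
  - key: example-key
    operator: Exists
    effect: NoSchedule
```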

View File

@@ -0,0 +1,5 @@
{{ if (and $.Values.monitoring.enabled $.Values.monitoring.podMonitor.enabled) }}
{{ if not ($.Capabilities.APIVersions.Has "monitoring.coreos.com/v1/PodMonitor") }}
WARNING: monitoring.podMonitor.enabled=true but PodMonitor could not be installed: the CRD was not detected.
{{ end }}
{{ end }}

View File

@@ -0,0 +1,7 @@
{{- define "hcloud-cloud-controller-manager.name" -}}
{{- $.Values.nameOverride | default $.Chart.Name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- define "hcloud-cloud-controller-manager.selectorLabels" -}}
{{- tpl (toYaml $.Values.selectorLabels) $ }}
{{- end }}

View File

@@ -0,0 +1,14 @@
{{- if .Values.rbac.create }}
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: "system:{{ include "hcloud-cloud-controller-manager.name" . }}"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: {{ include "hcloud-cloud-controller-manager.name" . }}
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@@ -0,0 +1,108 @@
{{- if eq $.Values.kind "DaemonSet" }}
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ include "hcloud-cloud-controller-manager.name" . }}
namespace: {{ .Release.Namespace }}
spec:
revisionHistoryLimit: 2
selector:
matchLabels:
{{- include "hcloud-cloud-controller-manager.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "hcloud-cloud-controller-manager.selectorLabels" . | nindent 8 }}
{{- if .Values.podLabels }}
{{- toYaml .Values.podLabels | nindent 8 }}
{{- end }}
{{- if .Values.podAnnotations }}
annotations:
{{- toYaml .Values.podAnnotations | nindent 8 }}
{{- end }}
spec:
{{- with .Values.image.pullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "hcloud-cloud-controller-manager.name" . }}
dnsPolicy: Default
tolerations:
# Allow HCCM itself to schedule on nodes that have not yet been initialized by HCCM.
- key: "node.cloudprovider.kubernetes.io/uninitialized"
value: "true"
effect: "NoSchedule"
- key: "CriticalAddonsOnly"
operator: "Exists"
# Allow HCCM to schedule on control plane nodes.
- key: "node-role.kubernetes.io/master"
effect: NoSchedule
operator: Exists
- key: "node-role.kubernetes.io/control-plane"
effect: NoSchedule
operator: Exists
- key: "node.kubernetes.io/not-ready"
effect: "NoExecute"
{{- if gt (len .Values.additionalTolerations) 0 }}
{{ toYaml .Values.additionalTolerations | nindent 8 }}
{{- end }}
{{- if gt (len .Values.nodeSelector) 0 }}
nodeSelector:
{{ toYaml .Values.nodeSelector | nindent 8 }}
{{- end }}
{{- if $.Values.networking.enabled }}
hostNetwork: true
{{- end }}
containers:
- name: hcloud-cloud-controller-manager
command:
- "/bin/hcloud-cloud-controller-manager"
{{- range $key, $value := $.Values.args }}
{{- if not (eq $value nil) }}
- "--{{ $key }}{{ if $value }}={{ $value }}{{ end }}"
{{- end }}
{{- end }}
{{- if $.Values.networking.enabled }}
- "--allocate-node-cidrs=true"
- "--cluster-cidr={{ $.Values.networking.clusterCIDR }}"
{{- end }}
env:
{{- range $key, $value := $.Values.env }}
- name: {{ $key }}
{{- tpl (toYaml $value) $ | nindent 14 }}
{{- end }}
{{- if $.Values.networking.enabled }}
- name: HCLOUD_NETWORK
{{- tpl (toYaml $.Values.networking.network) $ | nindent 14 }}
{{- end }}
{{- if not $.Values.monitoring.enabled }}
- name: HCLOUD_METRICS_ENABLED
value: "false"
{{- end }}
{{- if $.Values.robot.enabled }}
- name: ROBOT_ENABLED
value: "true"
{{- end }}
image: {{ $.Values.image.repository }}:{{ tpl $.Values.image.tag . }} # x-releaser-pleaser-version
ports:
{{- if $.Values.monitoring.enabled }}
- name: metrics
containerPort: 8233
{{- end }}
resources:
{{- toYaml $.Values.resources | nindent 12 }}
{{- with .Values.extraVolumeMounts }}
volumeMounts:
{{- toYaml . | nindent 12 }}
{{- end }}
priorityClassName: system-cluster-critical
{{- with .Values.extraVolumes }}
volumes:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,118 @@
{{- if eq $.Values.kind "Deployment" }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "hcloud-cloud-controller-manager.name" . }}
namespace: {{ .Release.Namespace }}
spec:
replicas: {{ .Values.replicaCount }}
revisionHistoryLimit: 2
selector:
matchLabels:
{{- include "hcloud-cloud-controller-manager.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "hcloud-cloud-controller-manager.selectorLabels" . | nindent 8 }}
{{- if .Values.podLabels }}
{{- toYaml .Values.podLabels | nindent 8 }}
{{- end }}
{{- if .Values.podAnnotations }}
annotations:
{{- toYaml .Values.podAnnotations | nindent 8 }}
{{- end }}
spec:
{{- with .Values.image.pullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "hcloud-cloud-controller-manager.name" . }}
dnsPolicy: Default
tolerations:
# Allow HCCM itself to schedule on nodes that have not yet been initialized by HCCM.
- key: "node.cloudprovider.kubernetes.io/uninitialized"
value: "true"
effect: "NoSchedule"
- key: "CriticalAddonsOnly"
operator: "Exists"
# Allow HCCM to schedule on control plane nodes.
- key: "node-role.kubernetes.io/master"
effect: NoSchedule
operator: Exists
- key: "node-role.kubernetes.io/control-plane"
effect: NoSchedule
operator: Exists
- key: "node.kubernetes.io/not-ready"
effect: "NoExecute"
{{- if gt (len .Values.additionalTolerations) 0 }}
{{ toYaml .Values.additionalTolerations | nindent 8 }}
{{- end }}
{{- if gt (len .Values.nodeSelector) 0 }}
nodeSelector:
{{ toYaml .Values.nodeSelector | nindent 8 }}
{{- end }}
{{- if gt (len .Values.affinity) 0 }}
affinity:
{{ toYaml .Values.affinity | nindent 8 }}
{{- end }}
{{- if $.Values.networking.enabled }}
hostNetwork: true
{{- end }}
containers:
- name: hcloud-cloud-controller-manager
args:
{{- range $key, $value := $.Values.args }}
{{- if not (eq $value nil) }}
- "--{{ $key }}{{ if $value }}={{ $value }}{{ end }}"
{{- end }}
{{- end }}
{{- if $.Values.networking.enabled }}
- "--allocate-node-cidrs=true"
- "--cluster-cidr={{ $.Values.networking.clusterCIDR }}"
{{- end }}
{{- if (eq (int $.Values.replicaCount) 1) }}
- "--leader-elect=false"
{{- end }}
env:
{{- range $key, $value := $.Values.env }}
- name: {{ $key }}
{{- tpl (toYaml $value) $ | nindent 14 }}
{{- end }}
{{- if $.Values.networking.enabled }}
- name: HCLOUD_NETWORK
{{- tpl (toYaml $.Values.networking.network) $ | nindent 14 }}
{{- end }}
{{- if not $.Values.monitoring.enabled }}
- name: HCLOUD_METRICS_ENABLED
value: "false"
{{- end }}
{{- if $.Values.robot.enabled }}
- name: ROBOT_ENABLED
value: "true"
{{- end }}
image: {{ $.Values.image.repository }}:{{ tpl $.Values.image.tag . }} # x-releaser-pleaser-version
ports:
{{- if $.Values.monitoring.enabled }}
- name: metrics
containerPort: 8233
{{- end }}
resources:
{{- toYaml $.Values.resources | nindent 12 }}
{{- with .Values.extraVolumeMounts }}
volumeMounts:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName | quote }}
{{- end }}
{{- with .Values.extraVolumes }}
volumes:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,22 @@
{{ if (and $.Values.monitoring.enabled $.Values.monitoring.podMonitor.enabled) }}
{{ if $.Capabilities.APIVersions.Has "monitoring.coreos.com/v1/PodMonitor" }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: {{ include "hcloud-cloud-controller-manager.name" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- with $.Values.monitoring.podMonitor.labels }}
{{- toYaml . | nindent 4 }}
{{- end }}
annotations:
{{- range $key, $value := .Values.monitoring.podMonitor.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
{{- tpl (toYaml $.Values.monitoring.podMonitor.spec) $ | nindent 2 }}
selector:
matchLabels:
{{- include "hcloud-cloud-controller-manager.selectorLabels" . | nindent 6 }}
{{ end }}
{{ end }}

View File

@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "hcloud-cloud-controller-manager.name" . }}
namespace: {{ .Release.Namespace }}

View File

@@ -0,0 +1,154 @@
# hccm program command line arguments.
# The following flags are managed by the chart and should *not* be set directly here:
# --allocate-node-cidrs
# --cluster-cidr
# --leader-elect
args:
cloud-provider: hcloud
allow-untagged-cloud: ""
# Read issue #395 to understand how changes to this value affect you.
# https://github.com/hetznercloud/hcloud-cloud-controller-manager/issues/395
route-reconciliation-period: 30s
# We do not use the webhooks feature and there is no need to bind a port that is unused.
# https://github.com/kubernetes/kubernetes/issues/120043
# https://github.com/hetznercloud/hcloud-cloud-controller-manager/issues/492
webhook-secure-port: "0"
# Change deployment kind from "Deployment" to "DaemonSet"
kind: Deployment
# change replicaCount (only used when kind is "Deployment")
replicaCount: 1
# hccm environment variables
env:
# The following variables are managed by the chart and should *not* be set here:
# HCLOUD_METRICS_ENABLED - see monitoring.enabled
# HCLOUD_NETWORK - see networking.enabled
# ROBOT_ENABLED - see robot.enabled
# You can also use a file to provide secrets to the hcloud-cloud-controller-manager.
# This is currently possible for HCLOUD_TOKEN, ROBOT_USER, and ROBOT_PASSWORD.
# Use the env var appended with _FILE (e.g. HCLOUD_TOKEN_FILE) and set the value to the file path that should be read
# The file must be provided externally (e.g. via secret injection).
# Example:
# HCLOUD_TOKEN_FILE:
# value: "/etc/hetzner/token"
# to disable reading the token from the secret you have to disable the original env var:
# HCLOUD_TOKEN: null
HCLOUD_TOKEN:
valueFrom:
secretKeyRef:
name: hcloud
key: token
ROBOT_USER:
valueFrom:
secretKeyRef:
name: hcloud
key: robot-user
optional: true
ROBOT_PASSWORD:
valueFrom:
secretKeyRef:
name: hcloud
key: robot-password
optional: true
image:
repository: docker.io/hetznercloud/hcloud-cloud-controller-manager
tag: "v{{ $.Chart.Version }}"
# Optionally specify an array of imagePullSecrets.
# Secrets must be manually created in the namespace.
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
# e.g:
# pullSecrets:
# - myRegistryKeySecretName
#
pullSecrets: []
monitoring:
# When enabled, the hccm Pod will serve metrics on port :8233
enabled: true
podMonitor:
# When enabled (and monitoring.enabled=true), a PodMonitor will be deployed to scrape metrics.
# The PodMonitor [1] CRD must already exist in the target cluster.
enabled: false
# PodMonitor Labels
labels: {}
# release: kube-prometheus-stack
# PodMonitor Annotations
annotations: {}
# PodMonitorSpec to be deployed. The "selector" field is set elsewhere and should *not* be used here.
# https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.PodMonitorSpec
spec:
podMetricsEndpoints:
- port: metrics
nameOverride: ~
networking:
# If enabled, hcloud-ccm will be deployed with networking support.
enabled: false
# If networking is enabled, clusterCIDR must match the PodCIDR subnet your cluster has been configured with.
# The default "10.244.0.0/16" assumes you're using Flannel with default configuration.
clusterCIDR: 10.244.0.0/16
network:
valueFrom:
secretKeyRef:
name: hcloud
key: network
# Resource requests for the deployed hccm Pod.
resources:
requests:
cpu: 100m
memory: 50Mi
selectorLabels:
app.kubernetes.io/name: '{{ include "hcloud-cloud-controller-manager.name" $ }}'
app.kubernetes.io/instance: "{{ $.Release.Name }}"
additionalTolerations: []
# nodeSelector:
# node-role.kubernetes.io/control-plane: ""
nodeSelector: {}
# Set the affinity for pods. (Only works with kind=Deployment)
affinity: {}
# pods priorityClassName
# ref: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption
priorityClassName: "system-cluster-critical"
robot:
# Set to true to enable support for Robot (Dedicated) servers.
enabled: false
rbac:
# Create a cluster role binding with admin access for the service account.
create: true
podLabels: {}
podAnnotations: {}
# Mounts the specified volume to the hcloud-cloud-controller-manager container.
extraVolumeMounts: []
# # Example
# extraVolumeMounts:
# - name: token-volume
# readOnly: true
# mountPath: /var/run/secrets/hcloud
# Adds extra volumes to the pod.
extraVolumes: []
# # Example
# extraVolumes:
# - name: token-volume
# secret:
# secretName: hcloud-token

View File

@@ -0,0 +1,172 @@
# hccm program command line arguments.
# The following flags are managed by the chart and should *not* be set directly here:
# --allocate-node-cidrs
# --cluster-cidr
# --leader-elect
args:
cloud-provider: hcloud
allow-untagged-cloud: ""
# Read issue #395 to understand how changes to this value affect you.
# https://github.com/hetznercloud/hcloud-cloud-controller-manager/issues/395
route-reconciliation-period: 30s
# We do not use the webhooks feature and there is no need to bind a port that is unused.
# https://github.com/kubernetes/kubernetes/issues/120043
# https://github.com/hetznercloud/hcloud-cloud-controller-manager/issues/492
webhook-secure-port: "0"
# Change deployment kind from "Deployment" to "DaemonSet"
kind: Deployment
# change replicaCount (only used when kind is "Deployment")
replicaCount: 1
# hccm environment variables
env:
# The following variables are managed by the chart and should *not* be set here:
# HCLOUD_METRICS_ENABLED - see monitoring.enabled
# HCLOUD_NETWORK - see networking.enabled
# ROBOT_ENABLED - see robot.enabled
# You can also use a file to provide secrets to the hcloud-cloud-controller-manager.
# This is currently possible for HCLOUD_TOKEN, ROBOT_USER, and ROBOT_PASSWORD.
# Use the env var appended with _FILE (e.g. HCLOUD_TOKEN_FILE) and set the value to the file path that should be read
# The file must be provided externally (e.g. via secret injection).
# Example:
# HCLOUD_TOKEN_FILE:
# value: "/etc/hetzner/token"
# to disable reading the token from the secret you have to disable the original env var:
# HCLOUD_TOKEN: null
HCLOUD_TOKEN:
valueFrom:
secretKeyRef:
name: hcloud
key: token
ROBOT_USER:
valueFrom:
secretKeyRef:
name: hcloud
key: robot-user
optional: true
ROBOT_PASSWORD:
valueFrom:
secretKeyRef:
name: hcloud
key: robot-password
optional: true
image:
repository: docker.io/hetznercloud/hcloud-cloud-controller-manager
tag: "v{{ $.Chart.Version }}"
# Optionally specify an array of imagePullSecrets.
# Secrets must be manually created in the namespace.
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
# e.g:
# pullSecrets:
# - myRegistryKeySecretName
#
pullSecrets: []
monitoring:
# When enabled, the hccm Pod will serve metrics on port :8233
enabled: false
podMonitor:
# When enabled (and monitoring.enabled=true), a PodMonitor will be deployed to scrape metrics.
# The PodMonitor [1] CRD must already exist in the target cluster.
enabled: false
# PodMonitor Labels
labels: {}
# release: kube-prometheus-stack
# PodMonitor Annotations
annotations: {}
# PodMonitorSpec to be deployed. The "selector" field is set elsewhere and should *not* be used here.
# https://prometheus-operator.dev/docs/operator/api/#monitoring.coreos.com/v1.PodMonitorSpec
spec:
podMetricsEndpoints:
- port: metrics
nameOverride: "hetzner-ccm"
networking:
# If enabled, hcloud-ccm will be deployed with networking support.
enabled: false
# If networking is enabled, clusterCIDR must match the PodCIDR subnet your cluster has been configured with.
# The default "10.244.0.0/16" assumes you're using Flannel with default configuration.
clusterCIDR: 10.244.0.0/16
network:
valueFrom:
secretKeyRef:
name: hcloud
key: network
# Resource requests for the deployed hccm Pod.
resources:
cpu: ""
memory: ""
selectorLabels:
app.kubernetes.io/name: '{{ include "hcloud-cloud-controller-manager.name" $ }}'
app.kubernetes.io/instance: "{{ $.Release.Name }}"
additionalTolerations: []
# nodeSelector:
# node-role.kubernetes.io/control-plane: ""
nodeSelector: {}
# Set the affinity for pods. (Only works with kind=Deployment)
affinity: {}
# pods priorityClassName
# ref: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption
priorityClassName: "system-cluster-critical"
robot:
# Set to true to enable support for Robot (Dedicated) servers.
enabled: false
rbac:
# Create a cluster role binding with admin access for the service account.
create: true
podLabels: {}
podAnnotations: {}
# Mounts the specified volume to the hcloud-cloud-controller-manager container.
extraVolumeMounts: []
# # Example
# extraVolumeMounts:
# - name: token-volume
# readOnly: true
# mountPath: /var/run/secrets/hcloud
# Adds extra volumes to the pod.
extraVolumes: []
# # Example
# extraVolumes:
# - name: token-volume
# secret:
# secretName: hcloud-token

View File

@@ -0,0 +1,2 @@
name: hetzner-robotlb
version: 0.1.3 # Placeholder, the actual version will be automatically set during the build process

View File

@@ -0,0 +1,9 @@
export NAME=hetzner-robotlb
export NAMESPACE=kube-system
include ../../../scripts/package.mk
update:
rm -rf charts
mkdir -p charts
helm pull oci://ghcr.io/intreecom/charts/robotlb --untar --untardir charts

View File

@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -0,0 +1,6 @@
apiVersion: v2
appVersion: 0.0.5
description: A Helm chart for robotlb (load balancer on Hetzner Cloud).
name: robotlb
type: application
version: 0.1.3

View File

@@ -0,0 +1,4 @@
The RobotLB Operator was successfully installed.
Please follow the README to create load-balanced services.
README: https://github.com/intreecom/robotlb

View File

@@ -0,0 +1,62 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "robotlb.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "robotlb.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "robotlb.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "robotlb.labels" -}}
helm.sh/chart: {{ include "robotlb.chart" . }}
{{ include "robotlb.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "robotlb.selectorLabels" -}}
app.kubernetes.io/name: {{ include "robotlb.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "robotlb.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "robotlb.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,66 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "robotlb.fullname" . }}
labels:
{{- include "robotlb.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.replicas }}
selector:
matchLabels:
{{- include "robotlb.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "robotlb.labels" . | nindent 8 }}
{{- with .Values.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "robotlb.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- /usr/local/bin/robotlb
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.envs }}
env:
{{- range $key, $val := . }}
- name: {{ $key | quote }}
value: {{ $val | quote }}
{{ end -}}
{{- end }}
{{- with .Values.existingSecrets }}
envFrom:
{{- range $val := . }}
- secretRef:
name: {{ $val | quote }}
{{ end -}}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}

View File

@@ -0,0 +1,21 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "robotlb.fullname" . }}-cr
rules:
{{- toYaml .Values.serviceAccount.permissions | nindent 2 }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "robotlb.fullname" . }}-crb
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "robotlb.fullname" . }}-cr
subjects:
- kind: ServiceAccount
name: {{ include "robotlb.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@@ -0,0 +1,13 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "robotlb.serviceAccountName" . }}
labels:
{{- include "robotlb.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
automountServiceAccountToken: {{ .Values.serviceAccount.automount }}
{{- end }}

View File

@@ -0,0 +1,73 @@
# Default values for robotlb.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
image:
repository: ghcr.io/intreecom/robotlb
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
envs:
ROBOTLB_LOG_LEVEL: "INFO"
existingSecrets: []
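# Each Secret named in existingSecrets is wired into the container via
# envFrom/secretRef, so every key in that Secret becomes an environment
# variable. Illustrative entry (the Secret name is hypothetical):
# existingSecrets:
#   - robotlb-credentials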
serviceAccount:
# Specifies whether a service account should be created
create: true
# Automatically mount a ServiceAccount's API credentials?
automount: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
# This is a list of cluster permissions to apply to the service account.
# By default it grants the permissions robotlb needs.
permissions:
- apiGroups: [""]
resources: [services, services/status]
verbs: [get, list, patch, update, watch]
- apiGroups: [""]
resources: [nodes, pods]
verbs: [get, list, watch]
podAnnotations: {}
podLabels: {}
podSecurityContext:
{}
# fsGroup: 2000
securityContext:
{}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
resources:
{}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}

View File

@@ -0,0 +1,81 @@
image:
repository: ghcr.io/intreecom/robotlb
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: "hetzner-robotlb"
envs:
ROBOTLB_LOG_LEVEL: "INFO"
existingSecrets: []
serviceAccount:
# Specifies whether a service account should be created
create: true
# Automatically mount a ServiceAccount's API credentials?
automount: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
# This is a list of cluster permissions to apply to the service account.
# By default it grants the permissions robotlb needs.
permissions:
- apiGroups: [""]
resources: [services, services/status]
verbs: [get, list, patch, update, watch]
- apiGroups: [""]
resources: [nodes, pods]
verbs: [get, list, watch]
podAnnotations: {}
podLabels: {}
# fsGroup: 2000
podSecurityContext:
{}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
securityContext:
{}
## Number of robotlb replicas
replicas: 1
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# resources:
# cpu: 100m
# memory: 128Mi
resources:
cpu: ""
memory: ""
nodeSelector: {}
tolerations: []
affinity: {}

View File

@@ -1,2 +0,0 @@
images
hack

View File

@@ -1,3 +0,0 @@
apiVersion: v2
name: cozy-kamaji-etcd
version: 0.0.0 # Placeholder, the actual version will be automatically set during the build process

View File

@@ -1,9 +0,0 @@
update:
rm -rf charts
helm repo add clastix https://clastix.github.io/charts
helm repo update clastix
helm pull clastix/kamaji-etcd --untar --untardir charts
sed -i 's/hook-failed/before-hook-creation,hook-failed/' `grep -rl hook-failed charts`
patch --no-backup-if-mismatch -p4 < patches/fix-svc.diff
patch --no-backup-if-mismatch -p4 < patches/fullnameOverride.diff
patch --no-backup-if-mismatch -p4 < patches/remove-plus.patch

View File

@@ -1,15 +0,0 @@
apiVersion: v2
appVersion: 3.5.6
description: Helm chart for deploying a multi-tenant `etcd` cluster.
home: https://github.com/clastix/kamaji-etcd
kubeVersion: '>=1.22.0-0'
maintainers:
- email: me@bsctl.io
name: Adriano Pezzuto
- email: dario@tranchitella.eu
name: Dario Tranchitella
name: kamaji-etcd
sources:
- https://github.com/clastix/kamaji-etcd
type: application
version: 0.5.1

View File

@@ -1,9 +0,0 @@
docs: HELMDOCS_VERSION := v1.8.1
docs: docker
@docker run --rm -v "$$(pwd):/helm-docs" -u $$(id -u) jnorwood/helm-docs:$(HELMDOCS_VERSION)
docker:
@hash docker 2>/dev/null || {\
echo "You need docker" &&\
exit 1;\
}

View File

@@ -1,133 +0,0 @@
# kamaji-etcd
![Version: 0.5.1](https://img.shields.io/badge/Version-0.5.1-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 3.5.6](https://img.shields.io/badge/AppVersion-3.5.6-informational?style=flat-square)
Helm chart for deploying a multi-tenant `etcd` cluster.
[Kamaji](https://github.com/clastix/kamaji) turns any Kubernetes cluster into an _admin cluster_ to orchestrate other Kubernetes clusters called _tenant clusters_.
The Control Plane of a _tenant cluster_ is made of regular pods running in a namespace of the _admin cluster_ instead of a dedicated set of Virtual Machines.
This solution makes running control planes at scale cheaper and easier to deploy and operate.
As with any Kubernetes cluster, a _tenant cluster_ needs a datastore where it can save and retrieve its state.
This chart provides a multi-tenant `etcd` datastore for Kamaji, and it can also be used as a standalone multi-tenant `etcd` cluster.
## Install kamaji-etcd
To install the Chart with the release name `kamaji-etcd`:
helm repo add clastix https://clastix.github.io/charts
helm repo update
helm install kamaji-etcd clastix/kamaji-etcd -n kamaji-etcd --create-namespace
Show the status:
helm status kamaji-etcd -n kamaji-etcd
Upgrade the Chart
helm upgrade kamaji-etcd -n kamaji-etcd clastix/kamaji-etcd
Uninstall the Chart
helm uninstall kamaji-etcd -n kamaji-etcd
## Customize the installation
There are two methods for specifying overrides of values during Chart installation: `--values` and `--set`.
The `--values` option is the preferred method because it allows you to keep your overrides in a YAML file, rather than specifying them all on the command line.
Create a copy of the YAML file `values.yaml` and add your overrides to it.
Specify your overrides file when you install the Chart:
helm upgrade kamaji-etcd --install --namespace kamaji-etcd --create-namespace kamaji-etcd --values myvalues.yaml
The values in your overrides file `myvalues.yaml` will override their counterparts in the Chart's values.yaml file.
Any values in `values.yaml` that weren't overridden will keep their defaults.
If you only need to make minor customizations, you can specify them on the command line by using the `--set` option. For example:
helm upgrade kamaji-etcd --install --namespace kamaji-etcd --create-namespace kamaji-etcd --set replicas=5
Here the values you can override:
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| affinity | object | `{}` | Kubernetes affinity rules to apply to etcd controller pods |
| alerts.annotations | object | `{}` | Assign additional Annotations |
| alerts.enabled | bool | `false` | Enable alerts for Alertmanager |
| alerts.labels | object | `{}` | Assign additional labels according to Prometheus' Alerts matching labels |
| alerts.namespace | string | `""` | Install the Alerts into a different Namespace, such as the monitoring stack's one (default: the release Namespace) |
| alerts.rules | list | `[]` | The rules for alerts |
| autoCompactionMode | string | `"periodic"` | Interpret 'auto-compaction-retention' as one of: periodic|revision. Use 'periodic' for duration based retention, 'revision' for revision number based retention. |
| autoCompactionRetention | string | `"5m"` | Auto compaction retention length. 0 means disable auto compaction. |
| backup | object | `{"all":false,"enabled":false,"s3":{"accessKey":{"value":"","valueFrom":{}},"bucket":"mybucket","image":{"pullPolicy":"IfNotPresent","repository":"minio/mc","tag":"RELEASE.2022-11-07T23-47-39Z"},"retention":"","secretKey":{"value":"","valueFrom":{}},"url":"http://mys3storage:9000"},"schedule":"20 3 * * *","snapshotDateFormat":"$(date +%Y%m%d)","snapshotNamePrefix":"mysnapshot"}` | Enable storage backup |
| backup.all | bool | `false` | Enable backup for all endpoints. When disabled, only the leader endpoint is backed up |
| backup.enabled | bool | `false` | Enable scheduling backup job |
| backup.s3 | object | `{"accessKey":{"value":"","valueFrom":{}},"bucket":"mybucket","image":{"pullPolicy":"IfNotPresent","repository":"minio/mc","tag":"RELEASE.2022-11-07T23-47-39Z"},"retention":"","secretKey":{"value":"","valueFrom":{}},"url":"http://mys3storage:9000"}` | The S3 storage config section |
| backup.s3.accessKey | object | `{"value":"","valueFrom":{}}` | The S3 storage ACCESS KEY credential. The plain value has precedence over the valueFrom that can be used to retrieve the value from a Secret. |
| backup.s3.bucket | string | `"mybucket"` | The S3 storage bucket |
| backup.s3.image | object | `{"pullPolicy":"IfNotPresent","repository":"minio/mc","tag":"RELEASE.2022-11-07T23-47-39Z"}` | The S3 client image config section |
| backup.s3.image.pullPolicy | string | `"IfNotPresent"` | Pull policy to use |
| backup.s3.image.repository | string | `"minio/mc"` | Install image from specific repo |
| backup.s3.image.tag | string | `"RELEASE.2022-11-07T23-47-39Z"` | Install image with specific tag |
| backup.s3.retention | string | `""` | The S3 storage object lifecycle management rules; N.B. enabling this option will delete previously set lifecycle rules |
| backup.s3.secretKey | object | `{"value":"","valueFrom":{}}` | The S3 storage SECRET KEY credential. The plain value has precedence over the valueFrom that can be used to retrieve the value from a Secret. |
| backup.s3.url | string | `"http://mys3storage:9000"` | The S3 storage url |
| backup.schedule | string | `"20 3 * * *"` | The job scheduled maintenance time for backup |
| backup.snapshotDateFormat | string | `"$(date +%Y%m%d)"` | The backup file date format (bash) |
| backup.snapshotNamePrefix | string | `"mysnapshot"` | The backup file name prefix |
| clientPort | int | `2379` | The client request port. |
| datastore.enabled | bool | `false` | Create a datastore custom resource for Kamaji |
| defragmentation | object | `{"schedule":"*/15 * * * *"}` | Enable storage defragmentation |
| defragmentation.schedule | string | `"*/15 * * * *"` | The job scheduled maintenance time for defrag (empty to disable) |
| extraArgs | list | `[]` | A list of extra arguments to add to the etcd default ones |
| image.pullPolicy | string | `"IfNotPresent"` | Pull policy to use |
| image.repository | string | `"quay.io/coreos/etcd"` | Install image from specific repo |
| image.tag | string | `""` | Install image with specific tag, overwrite the tag in the chart |
| livenessProbe | object | `{}` | The livenessProbe for the etcd container |
| metricsPort | int | `2381` | The port where etcd exposes metrics. |
| nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Kubernetes node selector rules to schedule etcd |
| peerApiPort | int | `2380` | The peer API port which servers are listening to. |
| persistentVolumeClaim.accessModes | list | `["ReadWriteOnce"]` | The Access Mode to storage |
| persistentVolumeClaim.customAnnotations | object | `{}` | The custom annotations to add to the PVC |
| persistentVolumeClaim.size | string | `"10Gi"` | The size of persistent storage for etcd data |
| persistentVolumeClaim.storageClassName | string | `""` | A specific storage class |
| podAnnotations | object | `{}` | Annotations to add to all etcd pods |
| podLabels | object | `{"application":"kamaji-etcd"}` | Labels to add to all etcd pods |
| priorityClassName | string | `"system-cluster-critical"` | The priorityClassName to apply to etcd |
| quotaBackendBytes | string | `"8589934592"` | Raise alarms when backend size exceeds the given quota. It will put the cluster into a maintenance mode which only accepts key reads and deletes. |
| replicas | int | `3` | Size of the etcd cluster |
| resources | object | `{"limits":{},"requests":{}}` | Resources assigned to the etcd containers |
| securityContext | object | `{"allowPrivilegeEscalation":false}` | The securityContext to apply to etcd |
| serviceAccount | object | `{"create":true,"name":""}` | Install an etcd with enabled multi-tenancy |
| serviceAccount.create | bool | `true` | Create a ServiceAccount, required to install and provision the etcd backing storage (default: true) |
| serviceAccount.name | string | `""` | Define the ServiceAccount name to use during the setup and provision of the etcd backing storage (default: "") |
| serviceMonitor.annotations | object | `{}` | Assign additional Annotations |
| serviceMonitor.enabled | bool | `false` | Enable ServiceMonitor for Prometheus |
| serviceMonitor.endpoint.interval | string | `"15s"` | Set the scrape interval for the endpoint of the serviceMonitor |
| serviceMonitor.endpoint.metricRelabelings | list | `[]` | Set metricRelabelings for the endpoint of the serviceMonitor |
| serviceMonitor.endpoint.relabelings | list | `[]` | Set relabelings for the endpoint of the serviceMonitor |
| serviceMonitor.endpoint.scrapeTimeout | string | `""` | Set the scrape timeout for the endpoint of the serviceMonitor |
| serviceMonitor.labels | object | `{}` | Assign additional labels according to Prometheus' serviceMonitorSelector matching labels |
| serviceMonitor.matchLabels | object | `{}` | Change matching labels |
| serviceMonitor.namespace | string | `""` | Install the ServiceMonitor into a different namespace, such as the one of the monitoring stack (default: the release namespace) |
| serviceMonitor.serviceAccount.name | string | `"etcd"` | ServiceAccount for Metrics RBAC |
| serviceMonitor.serviceAccount.namespace | string | `"etcd-system"` | ServiceAccount Namespace for Metrics RBAC |
| serviceMonitor.targetLabels | list | `[]` | Set targetLabels for the serviceMonitor |
| snapshotCount | string | `"10000"` | Number of committed transactions to trigger a snapshot to disk. |
| tolerations | list | `[]` | Kubernetes node taints that the etcd pods would tolerate |
| topologySpreadConstraints | list | `[]` | Kubernetes topology spread constraints to apply to etcd controller pods |
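As a worked example, the backup and defragmentation settings above can be combined in a single overrides file. The following is a minimal sketch: it assumes the chart's `backup.enabled` and `backup.s3.bucket` keys (listed earlier in the full table), the endpoint, bucket, and Secret names are placeholders, and the `valueFrom` shape is shown as a typical Kubernetes `secretKeyRef`-style reference rather than a confirmed schema:

```yaml
# myvalues.yaml -- backup and defragmentation sketch; names are placeholders
backup:
  enabled: true                       # assumed toggle from the full values table
  schedule: "20 3 * * *"              # cron: daily at 03:20
  snapshotNamePrefix: "mysnapshot"
  snapshotDateFormat: "$(date +%Y%m%d)"
  s3:
    url: "http://mys3storage:9000"    # placeholder S3 endpoint
    bucket: "etcd-backups"            # placeholder bucket name
    accessKey:
      valueFrom:                      # assumed secretKeyRef-style reference
        secretKeyRef:
          name: s3-credentials
          key: accessKey
    secretKey:
      valueFrom:
        secretKeyRef:
          name: s3-credentials
          key: secretKey

defragmentation:
  schedule: "*/15 * * * *"            # every 15 minutes; "" disables defrag
```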
## Maintainers
| Name | Email | Url |
| ---- | ------ | --- |
| Adriano Pezzuto | <me@bsctl.io> | |
| Dario Tranchitella | <dario@tranchitella.eu> | |
## Source Code
* <https://github.com/clastix/kamaji-etcd>

@@ -1,59 +0,0 @@
{{ template "chart.header" . }}
{{ template "chart.deprecationWarning" . }}
{{ template "chart.badgesSection" . }}
{{ template "chart.description" . }}
[Kamaji](https://github.com/clastix/kamaji) turns any Kubernetes cluster into an _admin cluster_ to orchestrate other Kubernetes clusters called _tenant clusters_.
The Control Plane of a _tenant cluster_ is made of regular pods running in a namespace of the _admin cluster_ instead of a dedicated set of Virtual Machines.
This solution makes running control planes at scale cheaper and easier to deploy and operate.
As with any Kubernetes cluster, a _tenant cluster_ needs a datastore where it can save its state and retrieve data.
This chart provides a multi-tenant `etcd` as datastore for Kamaji as well as a standalone multi-tenant `etcd` cluster.
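When `datastore.enabled` is set, the chart can register itself with Kamaji by creating a `DataStore` custom resource. A hand-written equivalent would look roughly like the sketch below; the endpoints, Secret names, and namespace are illustrative, and the exact schema is defined by Kamaji's `kamaji.clastix.io/v1alpha1` API:

```yaml
# Sketch of a Kamaji DataStore pointing at this etcd; names are placeholders
apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
metadata:
  name: kamaji-etcd
spec:
  driver: etcd
  endpoints:
    - kamaji-etcd-0.kamaji-etcd.kamaji-etcd.svc.cluster.local:2379
    - kamaji-etcd-1.kamaji-etcd.kamaji-etcd.svc.cluster.local:2379
    - kamaji-etcd-2.kamaji-etcd.kamaji-etcd.svc.cluster.local:2379
  tlsConfig:
    certificateAuthority:
      certificate:
        secretReference:
          name: kamaji-etcd-certs            # illustrative Secret name
          namespace: kamaji-etcd
          keyPath: ca.crt
      privateKey:
        secretReference:
          name: kamaji-etcd-certs
          namespace: kamaji-etcd
          keyPath: ca.key
    clientCertificate:
      certificate:
        secretReference:
          name: kamaji-etcd-root-client-certs  # illustrative Secret name
          namespace: kamaji-etcd
          keyPath: tls.crt
      privateKey:
        secretReference:
          name: kamaji-etcd-root-client-certs
          namespace: kamaji-etcd
          keyPath: tls.key
```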
## Install kamaji-etcd
To install the Chart with the release name `kamaji-etcd`:
```bash
helm repo add clastix https://clastix.github.io/charts
helm repo update
helm install kamaji-etcd clastix/kamaji-etcd -n kamaji-etcd --create-namespace
```

Show the status:

```bash
helm status kamaji-etcd -n kamaji-etcd
```

Upgrade the Chart:

```bash
helm upgrade kamaji-etcd -n kamaji-etcd clastix/kamaji-etcd
```

Uninstall the Chart:

```bash
helm uninstall kamaji-etcd -n kamaji-etcd
```
## Customize the installation
There are two methods for specifying overrides of values during Chart installation: `--values` and `--set`.
The `--values` option is the preferred method because it allows you to keep your overrides in a YAML file, rather than specifying them all on the command line.
Create a copy of the YAML file `values.yaml` and add your overrides to it.
Specify your overrides file when you install the Chart:
```bash
helm upgrade kamaji-etcd --install --namespace kamaji-etcd --create-namespace kamaji-etcd --values myvalues.yaml
```
The values in your overrides file `myvalues.yaml` will override their counterparts in the Chart's values.yaml file.
Any values in `values.yaml` that weren't overridden will keep their defaults.
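For example, a minimal `myvalues.yaml` might look like the sketch below; the size and storage class are illustrative choices, not chart defaults:

```yaml
# myvalues.yaml -- illustrative overrides
replicas: 5

persistentVolumeClaim:
  size: "20Gi"
  storageClassName: "fast-ssd"   # placeholder storage class name

serviceMonitor:
  enabled: true                  # scrape etcd metrics via Prometheus Operator
```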
If you only need to make minor customizations, you can specify them on the command line by using the `--set` option. For example:

```bash
helm upgrade kamaji-etcd --install --namespace kamaji-etcd --create-namespace kamaji-etcd --set replicas=5
```

Here are the values you can override:
{{ template "chart.valuesSection" . }}
{{ template "chart.maintainersSection" . }}
{{ template "chart.sourcesSection" . }}

Some files were not shown because too many files have changed in this diff.