Compare commits

...

42 Commits

Author SHA1 Message Date
Andrei Kvapil
a9c8133fd4 fix dependencies for kafka-operator and clickhouse-operator (#748)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-03 00:58:16 +02:00
Andrei Kvapil
065abdd95a [linstor] fix reloader patch (#747)
This PR fixes the following problem:

```
* admission webhook "vlinstorcluster.kb.io" denied the request: LinstorCluster.piraeus.io "linstorcluster" is invalid: spec.patches.0.patch: Invalid value: "apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  annotations:\n    secret.reloader.stakater.com/auto: \"true\"\n": Failed to parse patch as either Strategic Merge Patch (missing metadata.name in object {{apps/v1 Deployment} {{ } map[] map[secret.reloader.stakater.com/auto:true]}}) or JSON Patch (json: cannot unmarshal object into Go value of type jsonpatch.Patch)
```

Used the solution from:
- https://github.com/piraeusdatastore/piraeus-operator/issues/701#issuecomment-2377702085
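
Per that workaround, the webhook accepts the patch once the embedded object carries `metadata.name`, so it parses as a Strategic Merge Patch. A minimal sketch; the kustomize-style `target` block and the Deployment name `linstor-controller` are assumptions for illustration, not the exact change in this PR:

```
apiVersion: piraeus.io/v1
kind: LinstorCluster
metadata:
  name: linstorcluster
spec:
  patches:
    - target:
        kind: Deployment
        name: linstor-controller       # assumed target name
      patch: |
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: linstor-controller     # metadata.name makes the patch parseable
          annotations:
            secret.reloader.stakater.com/auto: "true"
```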

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-03 00:45:27 +02:00
Andrei Kvapil
cd8c6a8b9a Fix dependency for clickhouse-operator (#746)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-03 00:36:25 +02:00
Andrei Kvapil
459673f764 Fix CiliumNetworkPolicy depends on cilium (#745) 2025-04-03 00:21:13 +02:00
Nick Volynkin
c795e4fb68 Prepare release v0.29.0 (#740)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Chores**
  - Streamlined the asset release process to automatically replace existing files during uploads.

- **Container Image Updates**
  - Upgraded versions across multiple components—including backup, caching, autoscaling, API, dashboard, monitoring, and more—to align with the latest release (e.g., updating from v0.28.0 to v0.29.0 and other minor version increments).
  - Updated specific images for Grafana, PostgreSQL, MariaDB, ClickHouse, and others to their latest versions.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-02 23:45:25 +02:00
Andrei Kvapil
7c98248e45 Update Talos Linux to v1.9.5 (#744)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-02 22:56:41 +02:00
Andrei Kvapil
16c771aa77 [vm-disk] disable immediate bind for non-upload disks (#742)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-02 22:42:51 +02:00
Andrei Kvapil
d971f2ff29 Enhance versions_map generator logic (#741)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Chores**
  - Enhanced version mapping with improved error reporting and clearer version resolution, ensuring more accurate and reliable version displays.
  - Updated version references for multiple packages to maintain consistency and stability across the system.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-02 21:43:17 +02:00
Nick Volynkin
ead0cca5af Fix bootbox version commit (#739)
Fixup to 116196b4d4

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
  - Upgraded the Bootbox package to the latest release, ensuring continued stability and a solid foundation for future improvements.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-04-02 15:35:41 +02:00
Nick Volynkin
47ff861f00 Upload build assets with make upload_assets (#737)
# Upload build assets with make upload_assets; keep image digests

* Keep image digests in cozystack-installer.yaml
* Remove manifests/cozystack-installer.yaml, which is a build asset, not a source file.
* Requires `gh`, the GitHub CLI app.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
  - Introduced an automated process that uploads release assets during each update cycle, ensuring smoother and more reliable deployments.
  - Added a new target for uploading assets in the build process.

- **Refactor**
  - Streamlined the organization and generation of deployment artifacts to improve efficiency and consistency in our release workflow.
  - Adjusted output file management for better structure.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
Co-authored-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-02 14:46:02 +02:00
Andrei Kvapil
dbc1fb8a09 Enable Cilium host firewall (#738)
This commit enables Cilium's host firewall feature and makes use of it
to deny external connections to two exporters running as daemonset pods
in the host network namespace.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
Co-authored-by: Timofei Larkin <lllamnyp@gmail.com>
2025-04-02 14:42:19 +02:00
Timofei Larkin
d9c6fb7625 Enable Cilium host firewall (#736)
This commit enables Cilium's host firewall feature and makes use of it
to deny external connections to two exporters running as daemonset pods
in the host network namespace.
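
For illustration, a host-level policy of roughly this shape achieves that; the policy name, port, and selectors below are assumptions, not the exact manifests from this PR (a real policy must also allow other required host traffic, such as kubelet and SSH):

```
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: host-fw-exporters      # hypothetical name
spec:
  nodeSelector: {}             # empty selector = all host endpoints
  ingress:
    - fromEntities:
        - cluster              # only in-cluster peers may reach these ports
      toPorts:
        - ports:
            - port: "9100"     # assumed exporter port
              protocol: TCP
```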

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
  - Host firewall is now enabled by default, adding an extra layer of security.
  - Enhanced network traffic management with new policies:
    - One policy tightens access to critical service ports.
    - Another secures monitoring endpoints by restricting unauthorized external access.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-04-02 13:16:15 +02:00
Andrei Kvapil
cd23a30e76 [kubernetes] Fix patches for kubevirt-cloud-provider (#735)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
  - Updated the cloud provider’s container image to a new stable version.
- **New Features**
  - Enhanced multi-cluster support to ensure services are correctly scoped to their respective clusters.
- **Refactor**
  - Removed legacy service management logic for streamlined processing.
- **Tests**
  - Adjusted test coverage to validate the improved multi-cluster and service filtering behaviors.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-02 11:02:29 +02:00
Timur Tukaev
f4fba7924b Create GOVERNANCE.md (#733)
This is a first draft of Cozystack project governance

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Documentation**
  - Introduced a new governance document outlining the project's community roles and responsibilities.
  - Clarified roles for various community members including users, contributors, and leadership.
  - Detailed the decision-making process and guidelines to promote transparent engagement and active participation.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
Co-authored-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-04-01 18:48:14 +02:00
Timofei Larkin
01b3a82ee2 [linstor] Introduce Reloader to automatically reload certificates (#715)
* Add stakater/Reloader to the storage-enabled bundles.
* Add annotations to Linstor components to reload when secrets change.

Closes #456 

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
  - Introduced a new reloader component that triggers automatic rolling updates when configuration or secret changes are detected.
  - Delivered a fully customizable Helm chart and configuration schema, including a reload strategy based on annotations for enhanced deployment control.

- **Tests**
  - Added test cases to validate container security settings and environment variable propagation, ensuring robust high-availability configurations.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-04-01 18:47:18 +02:00
Andrei Kvapil
0403f30cd6 [kubernetes] Fix race condition between two KubeVirt CCMs working with services with identical names (#722)
This PR fixes a race condition: when you have two clusters with two
services with similar names, the two eps-controllers might continuously
conflict with each other.

Upstream issue
https://github.com/kubevirt/cloud-provider-kubevirt/pull/341

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
  - Introduced multi-cluster support with a new configuration option to ensure that services and deployments are correctly scoped across different environments.

- **Bug Fixes**
  - Refined endpoint management by improving service port handling and eliminating duplicate entries, resulting in more consistent and reliable service routing.

- **Chores**
  - Updated the image versioning strategy to a dynamic tag, streamlining deployments and simplifying future upgrades.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-01 18:47:01 +02:00
Andrei Kvapil
116196b4d4 [bootbox] Automatically lowercase mac addresses (#732)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Bug Fixes**
  - MAC addresses are now consistently displayed in lowercase for uniform formatting.
- **Chores**
  - Updated the package version to 0.1.1 and refined version tracking for improved package management.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
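
A minimal Helm-template sketch of this kind of normalization; the `macAddress` value key is an assumption for illustration, not necessarily the chart's real key:

```
# hypothetical sketch: normalize a user-supplied MAC at render time
macAddress: {{ .Values.macAddress | lower | quote }}
```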

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-01 18:45:26 +02:00
xy2
bdc3525c9b [linstor] Make satellite plunger script continue on errors. (#728)
Do not restart the container if something goes wrong while the plunger
tries to fix the LINSTOR state.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Bug Fixes**
  - Improved error notifications during system operations to ensure any issues are promptly reported.
- **Documentation**
  - Updated descriptive information on secondary volume management to help clarify operational conditions.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Denis Seleznev <kto.3decb@gmail.com>
2025-03-31 19:47:26 +02:00
Andrei Kvapil
9a0f459655 [linstor] Plunger: detach also / devices (#731)
I just noticed that many devices are still attached to `/`; they are not
deleted and still block DRBD operations.


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Bug Fixes**
  - Improved detection of inactive loop devices by broadening the filtering criteria, ensuring more accurate identification in various scenarios.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-03-31 15:51:10 +02:00
Timofei Larkin
59aac50ebd Merge pull request #711 from cozystack/fix-regression
Fix dependency for piraeus-operator
2025-03-28 20:06:12 +04:00
Timofei Larkin
5f1b14ab53 Merge pull request #720 from cozystack/709-backport-ingress-nginx
Use backported ingress controller
2025-03-28 20:00:22 +04:00
Timofei Larkin
5c900a7467 Revert ingress nginx chart to v4.11.2
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-03-28 17:26:24 +03:00
Timofei Larkin
cc9abbc505 Use backported ingress controller
Due to upstream compatibility issues, we backport the security patches to
v1.11.2 of the ingress controller and do not rebuild the existing
protobuf exporter.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-03-28 16:49:31 +03:00
Timofei Larkin
c66eb9f94c Merge pull request #710 from cozystack/709-update-ingress-nginx
Update ingress-nginx to mitigate CVE-2025-1974
2025-03-26 14:11:44 +04:00
Timofei Larkin
0045ddc757 Update ingress-nginx to mitigate CVE-2025-1974
Closes #709

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-03-26 12:12:06 +03:00
Andrei Kvapil
209a3ef181 Fix dependency for piraeus-operator
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-03-25 12:58:21 +01:00
Nick Volynkin
92e2173fa5 Fix typo in VirtualPodAutoscaler Makefile
The Makefile was copied from the VictoriaMetrics Operator, and some lines
were not updated.

Follow-up to #676
Resolves #705

Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-03-25 12:51:01 +02:00
xy2
2d997a4f8d Remove workaround for ZFS tools. (#704)
The mounting of /run parts was fixed in a consistent way upstream.

fixes #699

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
  - Streamlined satellite configurations by removing an unnecessary volume setting from a container.
  - Eliminated an unneeded container entry along with its related configuration, reducing overall complexity.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Denis Seleznev <kto.3decb@gmail.com>
2025-03-23 21:22:26 +01:00
Kingdon Barrett
b1baaa7d98 Update Flux Operator to 0.18.0 (#703)
Released early this week:
https://github.com/controlplaneio-fluxcd/flux-operator/releases/tag/v0.18.0

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
  - Upgraded the charts to version 0.18.0.
  - Added options for custom pod scheduling using node selectors.
  - Introduced a reporting configuration with a customizable interval through an environment variable.

- **Documentation**
  - Updated release information and configuration details to reflect the new options and version update.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Kingdon B <kingdon@urmanac.com>
2025-03-23 21:22:03 +01:00
Timofei Larkin
869bd4f12c Merge pull request #698 from klinch0/bugfix/fix-vpa
bugfix/fix-vpa
2025-03-20 16:12:52 +04:00
kklinch0
f6d4541db3 fix cpu
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-03-20 14:59:47 +03:00
kklinch0
5729666e72 fix cpu
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-03-20 14:59:47 +03:00
kklinch0
aa3a36831c fix mi
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-03-20 14:59:47 +03:00
kklinch0
1e03ba4a02 revert image
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-03-20 14:59:47 +03:00
kklinch0
d12dd0e117 fix vm and k8s resources
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-03-20 14:59:47 +03:00
kklinch0
077045b094 fix apps resources
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-03-20 14:59:47 +03:00
kklinch0
85ec09b8de bugfix/add-limits-for-extra-packages
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-03-20 14:59:47 +03:00
kklinch0
e0a63c32b0 bugfix/fix-monitoring-resources
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-03-20 14:59:47 +03:00
klinch0
a2af07d1dc bugfix/fix-longterm (#697)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
  - Updated the remote write configuration to support multiple endpoints, allowing data ingestion from both short-term and long-term services for improved flexibility.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-03-14 10:58:16 +01:00
Timofei Larkin
0cb9e72f99 Merge pull request #695 from klinch0/feature/add-presets
feature/add-presets
2025-03-13 19:25:55 +04:00
kklinch0
b4584b4d17 fix rabbit template 2025-03-13 18:22:40 +03:00
kklinch0
ea3b092128 feature/add-presets 2025-03-13 17:03:00 +03:00
204 changed files with 4228 additions and 839 deletions

GOVERNANCE.md (new file, +91)

@@ -0,0 +1,91 @@
# Cozystack Governance
This document defines the governance structure of the Cozystack community, outlining how members collaborate to achieve shared goals.
## Overview
**Cozystack**, a Cloud Native Computing Foundation (CNCF) project, is committed
to building an open, inclusive, productive, and self-governing open source
community focused on building a high-quality open source PaaS and framework for building clouds.
## Code Repositories
The following code repositories are governed by the Cozystack community and
maintained under the `cozystack` namespace:
* **[Cozystack](https://github.com/cozystack/cozystack):** Main Cozystack codebase
* **[website](https://github.com/cozystack/website):** Cozystack website and documentation sources
* **[Talm](https://github.com/cozystack/talm):** Tool for managing Talos Linux the GitOps way
* **[cozy-proxy](https://github.com/cozystack/cozy-proxy):** A simple kube-proxy addon for 1:1 NAT services in Kubernetes with NFT backend
* **[cozystack-telemetry-server](https://github.com/cozystack/cozystack-telemetry-server):** Cozystack telemetry
* **[talos-bootstrap](https://github.com/cozystack/talos-bootstrap):** An interactive Talos Linux installer
* **[talos-meta-tool](https://github.com/cozystack/talos-meta-tool):** Tool for writing network metadata into META partition
## Community Roles
* **Users:** Members that engage with the Cozystack community via any medium, including Slack, Telegram, GitHub, and mailing lists.
* **Contributors:** Members contributing to the projects by contributing and reviewing code, writing documentation,
responding to issues, participating in proposal discussions, and so on.
* **Directors:** Non-technical project leaders.
* **Maintainers**: Technical project leaders.
## Contributors
Cozystack is for everyone. Anyone can become a Cozystack contributor simply by
contributing to the project, whether through code, documentation, blog posts,
community management, or other means.
As with all Cozystack community members, contributors are expected to follow the
[Cozystack Code of Conduct](https://github.com/cozystack/cozystack/blob/main/CODE_OF_CONDUCT.md).
All contributions to Cozystack code, documentation, or other components in the
Cozystack GitHub organisation must follow the
[contributing guidelines](https://github.com/cozystack/cozystack/blob/main/CONTRIBUTING.md).
Whether these contributions are merged into the project is the prerogative of the maintainers.
## Directors
Directors are responsible for non-technical leadership functions within the project.
This includes representing Cozystack and its maintainers to the community, to the press,
and to the outside world; interfacing with CNCF and other governance entities;
and participating in project decision-making processes when appropriate.
Directors are elected by a majority vote of the maintainers.
## Maintainers
Maintainers have the right to merge code into the project.
Anyone can become a Cozystack maintainer (see "Becoming a maintainer" below).
### Expectations
Cozystack maintainers are expected to:
* Review pull requests, triage issues, and fix bugs in their areas of
expertise, ensuring that all changes go through the project's code review
and integration processes.
* Monitor cncf-cozystack-* emails, the Cozystack Slack channels in Kubernetes
and CNCF Slack workspaces, Telegram groups, and help out when possible.
* Rapidly respond to any time-sensitive security release processes.
* Attend Cozystack community meetings.
If a maintainer is no longer interested in or cannot perform the duties
listed above, they should move themselves to emeritus status.
If necessary, this can also occur through the decision-making process outlined below.
### Becoming a Maintainer
Anyone can become a Cozystack maintainer. Maintainers should be extremely
proficient in cloud native technologies and/or Go; have relevant domain expertise;
have the time and ability to meet the maintainer's expectations above;
and demonstrate the ability to work with the existing maintainers and project processes.
To become a maintainer, start by expressing interest to existing maintainers.
Existing maintainers will then ask you to demonstrate the qualifications above
by contributing PRs, doing code reviews, and other such tasks under their guidance.
After several months of working together, maintainers will decide whether to grant maintainer status.
## Project Decision-making Process
Ideally, all project decisions are resolved by consensus of maintainers and directors.
If this is not possible, a vote will be called.
The voting process is a simple majority in which each maintainer and director receives one vote.


@@ -19,10 +19,6 @@ build:
	make -C packages/core/installer image
	make manifests
manifests:
	(cd packages/core/installer/; helm template -n cozy-installer installer .) > manifests/cozystack-installer.yaml
	sed -i 's|@sha256:[^"]\+||' manifests/cozystack-installer.yaml
repos:
	rm -rf _out
	make -C packages/apps check-version-map
@@ -33,6 +29,11 @@ repos:
	mkdir -p _out/logos
	cp ./packages/apps/*/logos/*.svg ./packages/extra/*/logos/*.svg _out/logos/
manifests:
	mkdir -p _out/assets
	(cd packages/core/installer/; helm template -n cozy-installer installer .) > _out/assets/cozystack-installer.yaml
assets:
	make -C packages/core/installer/ assets
@@ -44,3 +45,6 @@ test:
generate:
	hack/update-codegen.sh
upload_assets: manifests
	hack/upload-assets.sh


@@ -1,12 +1,13 @@
#!/bin/sh
set -e
file=versions_map
charts=$(find . -mindepth 2 -maxdepth 2 -name Chart.yaml | awk 'sub("/Chart.yaml", "")')
# <chart> <version> <commit>
new_map=$(
for chart in $charts; do
awk '/^name:/ {chart=$2} /^version:/ {version=$2} END{printf "%s %s %s\n", chart, version, "HEAD"}' $chart/Chart.yaml
awk '/^name:/ {chart=$2} /^version:/ {version=$2} END{printf "%s %s %s\n", chart, version, "HEAD"}' "$chart/Chart.yaml"
done
)
@@ -15,47 +16,48 @@ if [ ! -f "$file" ] || [ ! -s "$file" ]; then
exit 0
fi
miss_map=$(echo "$new_map" | awk 'NR==FNR { new_map[$1 " " $2] = $3; next } { if (!($1 " " $2 in new_map)) print $1, $2, $3}' - $file)
miss_map=$(echo "$new_map" | awk 'NR==FNR { nm[$1 " " $2] = $3; next } { if (!($1 " " $2 in nm)) print $1, $2, $3}' - "$file")
# search accross all tags sorted by version
search_commits=$(git ls-remote --tags origin | grep 'refs/tags/v' | sort -k2,2 -rV | awk '{print $1}')
# add latest main commit to search
search_commits="${search_commits} $(git rev-parse "origin/main")"
resolved_miss_map=$(
echo "$miss_map" | while read chart version commit; do
if [ "$commit" = HEAD ]; then
line=$(awk '/^version:/ {print NR; exit}' "./$chart/Chart.yaml")
change_commit=$(git --no-pager blame -L"$line",+1 -- "$chart/Chart.yaml" | awk '{print $1}')
if [ "$change_commit" = "00000000" ]; then
# Not committed yet, use previous commit
line=$(git show HEAD:"./$chart/Chart.yaml" | awk '/^version:/ {print NR; exit}')
commit=$(git --no-pager blame -L"$line",+1 HEAD -- "$chart/Chart.yaml" | awk '{print $1}')
if [ $(echo $commit | cut -c1) = "^" ]; then
# Previous commit not exists
commit=$(echo $commit | cut -c2-)
fi
else
# Committed, but version_map wasn't updated
line=$(git show HEAD:"./$chart/Chart.yaml" | awk '/^version:/ {print NR; exit}')
change_commit=$(git --no-pager blame -L"$line",+1 HEAD -- "$chart/Chart.yaml" | awk '{print $1}')
if [ $(echo $change_commit | cut -c1) = "^" ]; then
# Previous commit not exists
commit=$(echo $change_commit | cut -c2-)
else
commit=$(git describe --always "$change_commit~1")
fi
echo "$miss_map" | while read -r chart version commit; do
# if version is found in HEAD, it's HEAD
if grep -q "^version: $version$" ./${chart}/Chart.yaml; then
echo "$chart $version HEAD"
continue
fi
# if commit is not HEAD, check if it's valid
if [ $commit != "HEAD" ]; then
if ! git show "${commit}:./${chart}/Chart.yaml" 2>/dev/null | grep -q "^version: $version$"; then
echo "Commit $commit for $chart $version is not valid" >&2
exit 1
fi
# Check if the commit belongs to the main branch
if ! git merge-base --is-ancestor "$commit" main; then
# Find the closest parent commit that belongs to main
commit_in_main=$(git log --pretty=format:"%h" main -- "$chart" | head -n 1)
if [ -n "$commit_in_main" ]; then
commit="$commit_in_main"
else
# No valid commit found in main branch for $chart, skipping..."
continue
fi
fi
commit=$(git rev-parse --short "$commit")
echo "$chart $version $commit"
continue
fi
echo "$chart $version $commit"
# if commit is HEAD, but version is not found in HEAD, check all tags
found_tag=""
for tag in $search_commits; do
if git show "${tag}:./${chart}/Chart.yaml" 2>/dev/null | grep -q "^version: $version$"; then
found_tag=$(git rev-parse --short "${tag}")
break
fi
done
if [ -z "$found_tag" ]; then
echo "Can't find $chart $version in any version tag or in the latest main commit" >&2
exit 1
fi
echo "$chart $version $found_tag"
done
)

hack/upload-assets.sh (new executable file, +8)

@@ -0,0 +1,8 @@
#!/bin/bash
set -xe
version=$(git describe --tags)
gh release upload --clobber $version _out/assets/cozystack-installer.yaml
gh release upload --clobber $version _out/assets/metal-amd64.iso
gh release upload --clobber $version _out/assets/metal-amd64.raw.xz
gh release upload --clobber $version _out/assets/nocloud-amd64.raw.xz


@@ -1,105 +0,0 @@
---
# Source: cozy-installer/templates/cozystack.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: cozy-system
  labels:
    cozystack.io/system: "true"
    pod-security.kubernetes.io/enforce: privileged
---
# Source: cozy-installer/templates/cozystack.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cozystack
  namespace: cozy-system
---
# Source: cozy-installer/templates/cozystack.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cozystack
subjects:
  - kind: ServiceAccount
    name: cozystack
    namespace: cozy-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
# Source: cozy-installer/templates/cozystack.yaml
apiVersion: v1
kind: Service
metadata:
  name: cozystack
  namespace: cozy-system
spec:
  ports:
    - name: http
      port: 80
      targetPort: 8123
  selector:
    app: cozystack
  type: ClusterIP
---
# Source: cozy-installer/templates/cozystack.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cozystack
  namespace: cozy-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cozystack
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: cozystack
    spec:
      hostNetwork: true
      serviceAccountName: cozystack
      containers:
        - name: cozystack
          image: "ghcr.io/cozystack/cozystack/installer:v0.28.0"
          env:
            - name: KUBERNETES_SERVICE_HOST
              value: localhost
            - name: KUBERNETES_SERVICE_PORT
              value: "7445"
            - name: K8S_AWAIT_ELECTION_ENABLED
              value: "1"
            - name: K8S_AWAIT_ELECTION_NAME
              value: cozystack
            - name: K8S_AWAIT_ELECTION_LOCK_NAME
              value: cozystack
            - name: K8S_AWAIT_ELECTION_LOCK_NAMESPACE
              value: cozy-system
            - name: K8S_AWAIT_ELECTION_IDENTITY
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
        - name: assets
          image: "ghcr.io/cozystack/cozystack/installer:v0.28.0"
          command:
            - /usr/bin/cozystack-assets-server
            - "-dir=/cozystack/assets"
            - "-address=:8123"
          ports:
            - name: http
              containerPort: 8123
      tolerations:
        - key: "node.kubernetes.io/not-ready"
          operator: "Exists"
          effect: "NoSchedule"
        - key: "node.cilium.io/agent-not-ready"
          operator: "Exists"
          effect: "NoSchedule"


@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.6.2
version: 0.7.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to


@@ -36,13 +36,15 @@ more details:
### Backup parameters
| Name | Description | Value |
| ------------------------ | ---------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled` | Enable pereiodic backups | `false` |
| `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/clickhouse-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
| Name | Description | Value |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled` | Enable pereiodic backups | `false` |
| `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/clickhouse-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
| `resources` | Resources | `{}` |
| `resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
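
As the table notes, `resourcesPreset` applies only while `resources` is empty; a minimal values sketch of overriding the preset (the figures mirror the commented defaults in values.yaml):

```
resourcesPreset: nano   # ignored once resources is non-empty
resources:
  requests:
    cpu: 100m
    memory: 512Mi
  limits:
    cpu: 4000m
    memory: 4Gi
```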


@@ -1 +1 @@
ghcr.io/cozystack/cozystack/clickhouse-backup:0.6.2@sha256:67dd53efa86b704fc5cb876aca055fef294b31ab67899b683a4821ea12582ea7
ghcr.io/cozystack/cozystack/clickhouse-backup:0.7.0@sha256:67dd53efa86b704fc5cb876aca055fef294b31ab67899b683a4821ea12582ea7


@@ -0,0 +1,50 @@
{{/*
Copyright Broadcom, Inc. All Rights Reserved.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}
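
For reference, a sketch of what this helper emits for the default `nano` preset, taken directly from the preset table above (`toYaml` sorts map keys alphabetically):

```
limits:
  cpu: 150m
  ephemeral-storage: 2Gi
  memory: 192Mi
requests:
  cpu: 100m
  ephemeral-storage: 50Mi
  memory: 128Mi
```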


@@ -121,6 +121,11 @@ spec:
containers:
- name: clickhouse
image: clickhouse/clickhouse-server:24.9.2.42
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 16 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 16 }}
{{- end }}
volumeMounts:
- name: data-volume-template
mountPath: /var/lib/clickhouse


@@ -76,6 +76,16 @@
"default": "ChaXoveekoh6eigh4siesheeda2quai0"
}
}
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
}
}
}


@@ -46,3 +46,16 @@ backup:
  s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
  s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
  resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
## @param resources Resources
resources: {}
# resources:
#   limits:
#     cpu: 4000m
#     memory: 4Gi
#   requests:
#     cpu: 100m
#     memory: 512Mi
## @param resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "nano"


@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.4.2
version: 0.5.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to


@@ -21,15 +21,17 @@
### Backup parameters
| Name | Description | Value |
| ------------------------ | ---------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled` | Enable pereiodic backups | `false` |
| `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/postgres-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
| Name | Description | Value |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled` | Enable pereiodic backups | `false` |
| `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/postgres-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
| `resources` | Resources | `{}` |
| `resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |


@@ -1 +1 @@
ghcr.io/cozystack/cozystack/postgres-backup:0.9.0@sha256:2b6ba87f5688a439bd2ac12835a5ab9e601feb15c0c44ed0d9ca48cec7c52521
ghcr.io/cozystack/cozystack/postgres-backup:0.10.0@sha256:2b6ba87f5688a439bd2ac12835a5ab9e601feb15c0c44ed0d9ca48cec7c52521


@@ -0,0 +1,50 @@
{{/*
Copyright Broadcom, Inc. All Rights Reserved.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}


@@ -15,7 +15,11 @@ spec:
{{- end }}
minSyncReplicas: {{ .Values.quorum.minSyncReplicas }}
maxSyncReplicas: {{ .Values.quorum.maxSyncReplicas }}
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 4 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 4 }}
{{- end }}
monitoring:
enablePodMonitor: true


@@ -81,6 +81,16 @@
"default": "ChaXoveekoh6eigh4siesheeda2quai0"
}
}
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
}
}
}


@@ -48,3 +48,16 @@ backup:
  s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
  s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
  resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
## @param resources Resources
resources: {}
# resources:
#   limits:
#     cpu: 4000m
#     memory: 4Gi
#   requests:
#     cpu: 100m
#     memory: 512Mi
## @param resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "nano"


@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.3.1
version: 0.4.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to


@@ -60,13 +60,17 @@ VTS module shows wrong upstream resonse time
### Common parameters
| Name | Description | Value |
| ------------------ | ----------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `size` | Persistent Volume size | `10Gi` |
| `storageClass` | StorageClass used to store the data | `""` |
| `haproxy.replicas` | Number of HAProxy replicas | `2` |
| `nginx.replicas` | Number of Nginx replicas | `2` |
| Name | Description | Value |
| ------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `size` | Persistent Volume size | `10Gi` |
| `storageClass` | StorageClass used to store the data | `""` |
| `haproxy.replicas` | Number of HAProxy replicas | `2` |
| `nginx.replicas` | Number of Nginx replicas | `2` |
| `haproxy.resources` | Resources | `{}` |
| `haproxy.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
| `nginx.resources` | Resources | `{}` |
| `nginx.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
### Configuration parameters


@@ -1 +1 @@
ghcr.io/cozystack/cozystack/nginx-cache:0.3.1@sha256:2b82eae28239ca0f9968602c69bbb752cd2a5818e64934ccd06cb91d95d019c7
ghcr.io/cozystack/cozystack/nginx-cache:0.4.0@sha256:859f9c1f500300c49cfe162a848364df1ba0d7e72f4d2bdb4728f03e9614f3b4


@@ -0,0 +1,50 @@
{{/*
Copyright Broadcom, Inc. All Rights Reserved.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}


@@ -33,6 +33,11 @@ spec:
containers:
- image: haproxy:latest
name: haproxy
{{- if .Values.haproxy.resources }}
resources: {{- toYaml .Values.haproxy.resources | nindent 10 }}
{{- else if ne .Values.haproxy.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.haproxy.resourcesPreset "Release" .Release) | nindent 10 }}
{{- end }}
ports:
- containerPort: 8080
name: http


@@ -52,6 +52,11 @@ spec:
shareProcessNamespace: true
containers:
- name: nginx
{{- if $.Values.nginx.resources }}
resources: {{- toYaml $.Values.nginx.resources | nindent 10 }}
{{- else if ne $.Values.nginx.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" $.Values.nginx.resourcesPreset "Release" $.Release) | nindent 10 }}
{{- end }}
image: "{{ $.Files.Get "images/nginx-cache.tag" | trim }}"
readinessProbe:
httpGet:
@@ -83,6 +88,13 @@ spec:
- name: reloader
image: "{{ $.Files.Get "images/nginx-cache.tag" | trim }}"
command: ["/usr/bin/nginx-reloader.sh"]
resources:
limits:
cpu: 50m
memory: 50Mi
requests:
cpu: 50m
memory: 50Mi
#command: ["sleep", "infinity"]
volumeMounts:
- mountPath: /etc/nginx/nginx.conf


@@ -24,6 +24,16 @@
"type": "number",
"description": "Number of HAProxy replicas",
"default": 2
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
}
}
},
@@ -34,6 +44,16 @@
"type": "number",
"description": "Number of Nginx replicas",
"default": 2
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
}
}
},


@@ -12,8 +12,32 @@ size: 10Gi
storageClass: ""
haproxy:
  replicas: 2
  ## @param haproxy.resources Resources
  resources: {}
  # resources:
  #   limits:
  #     cpu: 4000m
  #     memory: 4Gi
  #   requests:
  #     cpu: 100m
  #     memory: 512Mi
  ## @param haproxy.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
  resourcesPreset: "nano"
nginx:
  replicas: 2
  ## @param nginx.resources Resources
  resources: {}
  # resources:
  #   limits:
  #     cpu: 4000m
  #     memory: 4Gi
  #   requests:
  #     cpu: 100m
  #     memory: 512Mi
  ## @param nginx.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
  resourcesPreset: "nano"
## @section Configuration parameters


@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.3.3
version: 0.5.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to


@@ -4,15 +4,19 @@
### Common parameters
| Name | Description | Value |
| ------------------------ | ----------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `kafka.size` | Persistent Volume size for Kafka | `10Gi` |
| `kafka.replicas` | Number of Kafka replicas | `3` |
| `kafka.storageClass` | StorageClass used to store the Kafka data | `""` |
| `zookeeper.size` | Persistent Volume size for ZooKeeper | `5Gi` |
| `zookeeper.replicas` | Number of ZooKeeper replicas | `3` |
| `zookeeper.storageClass` | StorageClass used to store the ZooKeeper data | `""` |
| Name | Description | Value |
| --------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `kafka.size` | Persistent Volume size for Kafka | `10Gi` |
| `kafka.replicas` | Number of Kafka replicas | `3` |
| `kafka.storageClass` | StorageClass used to store the Kafka data | `""` |
| `zookeeper.size` | Persistent Volume size for ZooKeeper | `5Gi` |
| `zookeeper.replicas` | Number of ZooKeeper replicas | `3` |
| `zookeeper.storageClass` | StorageClass used to store the ZooKeeper data | `""` |
| `kafka.resources` | Resources | `{}` |
| `kafka.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
| `zookeeper.resources` | Resources | `{}` |
| `zookeeper.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
### Configuration parameters


@@ -0,0 +1,50 @@
{{/*
Copyright Broadcom, Inc. All Rights Reserved.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}


@@ -8,6 +8,11 @@ metadata:
spec:
kafka:
replicas: {{ .Values.kafka.replicas }}
{{- if .Values.kafka.resources }}
resources: {{- toYaml .Values.kafka.resources | nindent 6 }}
{{- else if ne .Values.kafka.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.kafka.resourcesPreset "Release" .Release) | nindent 6 }}
{{- end }}
listeners:
- name: plain
port: 9092
@@ -65,6 +70,11 @@ spec:
key: kafka-metrics-config.yml
zookeeper:
replicas: {{ .Values.zookeeper.replicas }}
{{- if .Values.zookeeper.resources }}
resources: {{- toYaml .Values.zookeeper.resources | nindent 6 }}
{{- else if ne .Values.zookeeper.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.zookeeper.resourcesPreset "Release" .Release) | nindent 6 }}
{{- end }}
storage:
type: persistent-claim
{{- with .Values.zookeeper.size }}


@@ -24,6 +24,16 @@
"type": "string",
"description": "StorageClass used to store the Kafka data",
"default": ""
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
}
}
},
@@ -44,6 +54,16 @@
"type": "string",
"description": "StorageClass used to store the ZooKeeper data",
"default": ""
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
}
}
},


@@ -14,10 +14,35 @@ kafka:
  size: 10Gi
  replicas: 3
  storageClass: ""
  ## @param kafka.resources Resources
  resources: {}
  # resources:
  #   limits:
  #     cpu: 4000m
  #     memory: 4Gi
  #   requests:
  #     cpu: 100m
  #     memory: 512Mi
  ## @param kafka.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
  resourcesPreset: "nano"
zookeeper:
  size: 5Gi
  replicas: 3
  storageClass: ""
  ## @param zookeeper.resources Resources
  resources: {}
  # resources:
  #   limits:
  #     cpu: 4000m
  #     memory: 4Gi
  #   requests:
  #     cpu: 100m
  #     memory: 512Mi
  ## @param zookeeper.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
  resourcesPreset: "nano"
## @section Configuration parameters


@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.15.2
version: 0.17.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to


@@ -1 +1 @@
ghcr.io/cozystack/cozystack/cluster-autoscaler:0.15.2@sha256:967e51702102d0dbd97f9847de4159d62681b31eb606322d2c29755393c2236e
ghcr.io/cozystack/cozystack/cluster-autoscaler:0.17.0@sha256:6b89c7543a25cca612160f9a140d8e90fc360cc4e6ebee6df8d7ded05d83ca8a


@@ -1 +1 @@
ghcr.io/cozystack/cozystack/kubevirt-cloud-provider:0.15.2@sha256:5e054eae6274963b6e84f87bf3330c94325103c6407b08bfb1189da721333b5c
ghcr.io/cozystack/cozystack/kubevirt-cloud-provider:0.17.0@sha256:47e4d676bad3bdd056d617a9c652376bfe6031a7a3254e058f372ffe3cefae79


@@ -3,12 +3,11 @@ FROM --platform=linux/amd64 golang:1.20.6 AS builder
RUN git clone https://github.com/kubevirt/cloud-provider-kubevirt /go/src/kubevirt.io/cloud-provider-kubevirt \
&& cd /go/src/kubevirt.io/cloud-provider-kubevirt \
&& git checkout da9e0cf
&& git checkout 443a1fe
WORKDIR /go/src/kubevirt.io/cloud-provider-kubevirt
# see: https://github.com/kubevirt/cloud-provider-kubevirt/pull/335
# see: https://github.com/kubevirt/cloud-provider-kubevirt/pull/336
ADD patches /patches
RUN git apply /patches/*.diff
RUN go get 'k8s.io/endpointslice/util@v0.28' 'k8s.io/apiserver@v0.28'

View File

@@ -1,20 +0,0 @@
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller.go b/pkg/controller/kubevirteps/kubevirteps_controller.go
index a3c1aa33..95c31438 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller.go
@@ -412,11 +412,11 @@ func (c *Controller) reconcileByAddressType(service *v1.Service, tenantSlices []
// Create the desired port configuration
var desiredPorts []discovery.EndpointPort
- for _, port := range service.Spec.Ports {
+ for i := range service.Spec.Ports {
desiredPorts = append(desiredPorts, discovery.EndpointPort{
- Port: &port.TargetPort.IntVal,
- Protocol: &port.Protocol,
- Name: &port.Name,
+ Port: &service.Spec.Ports[i].TargetPort.IntVal,
+ Protocol: &service.Spec.Ports[i].Protocol,
+ Name: &service.Spec.Ports[i].Name,
})
}

View File

@@ -1,129 +0,0 @@
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller.go b/pkg/controller/kubevirteps/kubevirteps_controller.go
index a3c1aa33..6f6e3d32 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller.go
@@ -108,32 +108,24 @@ func newRequest(reqType ReqType, obj interface{}, oldObj interface{}) *Request {
}
func (c *Controller) Init() error {
-
- // Act on events from Services on the infra cluster. These are created by the EnsureLoadBalancer function.
- // We need to watch for these events so that we can update the EndpointSlices in the infra cluster accordingly.
+ // Existing Service event handlers...
_, err := c.infraFactory.Core().V1().Services().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
- // cast obj to Service
svc := obj.(*v1.Service)
- // Only act on Services of type LoadBalancer
if svc.Spec.Type == v1.ServiceTypeLoadBalancer {
klog.Infof("Service added: %v/%v", svc.Namespace, svc.Name)
c.queue.Add(newRequest(AddReq, obj, nil))
}
},
UpdateFunc: func(oldObj, newObj interface{}) {
- // cast obj to Service
newSvc := newObj.(*v1.Service)
- // Only act on Services of type LoadBalancer
if newSvc.Spec.Type == v1.ServiceTypeLoadBalancer {
klog.Infof("Service updated: %v/%v", newSvc.Namespace, newSvc.Name)
c.queue.Add(newRequest(UpdateReq, newObj, oldObj))
}
},
DeleteFunc: func(obj interface{}) {
- // cast obj to Service
svc := obj.(*v1.Service)
- // Only act on Services of type LoadBalancer
if svc.Spec.Type == v1.ServiceTypeLoadBalancer {
klog.Infof("Service deleted: %v/%v", svc.Namespace, svc.Name)
c.queue.Add(newRequest(DeleteReq, obj, nil))
@@ -144,7 +136,7 @@ func (c *Controller) Init() error {
return err
}
- // Monitor endpoint slices that we are interested in based on known services in the infra cluster
+ // Existing EndpointSlice event handlers in tenant cluster...
_, err = c.tenantFactory.Discovery().V1().EndpointSlices().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
eps := obj.(*discovery.EndpointSlice)
@@ -194,10 +186,80 @@ func (c *Controller) Init() error {
return err
}
- //TODO: Add informer for EndpointSlices in the infra cluster to watch for (unwanted) changes
+ // Add an informer for EndpointSlices in the infra cluster
+ _, err = c.infraFactory.Discovery().V1().EndpointSlices().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
+ AddFunc: func(obj interface{}) {
+ eps := obj.(*discovery.EndpointSlice)
+ if c.managedByController(eps) {
+ svc, svcErr := c.getInfraServiceForEPS(context.TODO(), eps)
+ if svcErr != nil {
+ klog.Errorf("Failed to get infra Service for EndpointSlice %s/%s: %v", eps.Namespace, eps.Name, svcErr)
+ return
+ }
+ if svc != nil {
+ klog.Infof("Infra EndpointSlice added: %v/%v, requeuing Service: %v/%v", eps.Namespace, eps.Name, svc.Namespace, svc.Name)
+ c.queue.Add(newRequest(AddReq, svc, nil))
+ }
+ }
+ },
+ UpdateFunc: func(oldObj, newObj interface{}) {
+ eps := newObj.(*discovery.EndpointSlice)
+ if c.managedByController(eps) {
+ svc, svcErr := c.getInfraServiceForEPS(context.TODO(), eps)
+ if svcErr != nil {
+ klog.Errorf("Failed to get infra Service for EndpointSlice %s/%s: %v", eps.Namespace, eps.Name, svcErr)
+ return
+ }
+ if svc != nil {
+ klog.Infof("Infra EndpointSlice updated: %v/%v, requeuing Service: %v/%v", eps.Namespace, eps.Name, svc.Namespace, svc.Name)
+ c.queue.Add(newRequest(UpdateReq, svc, nil))
+ }
+ }
+ },
+ DeleteFunc: func(obj interface{}) {
+ eps := obj.(*discovery.EndpointSlice)
+ if c.managedByController(eps) {
+ svc, svcErr := c.getInfraServiceForEPS(context.TODO(), eps)
+ if svcErr != nil {
+ klog.Errorf("Failed to get infra Service for EndpointSlice %s/%s on delete: %v", eps.Namespace, eps.Name, svcErr)
+ return
+ }
+ if svc != nil {
+ klog.Infof("Infra EndpointSlice deleted: %v/%v, requeuing Service: %v/%v", eps.Namespace, eps.Name, svc.Namespace, svc.Name)
+ c.queue.Add(newRequest(DeleteReq, svc, nil))
+ }
+ }
+ },
+ })
+ if err != nil {
+ return err
+ }
+
return nil
}
+// getInfraServiceForEPS returns the Service in the infra cluster associated with the given EndpointSlice.
+// It does this by reading the "kubernetes.io/service-name" label from the EndpointSlice, which should correspond
+// to the Service name. If not found or if the Service doesn't exist, it returns nil.
+func (c *Controller) getInfraServiceForEPS(ctx context.Context, eps *discovery.EndpointSlice) (*v1.Service, error) {
+ svcName := eps.Labels[discovery.LabelServiceName]
+ if svcName == "" {
+ // No service name label found, can't determine infra service.
+ return nil, nil
+ }
+
+ svc, err := c.infraClient.CoreV1().Services(c.infraNamespace).Get(ctx, svcName, metav1.GetOptions{})
+ if err != nil {
+ if k8serrors.IsNotFound(err) {
+ // Service doesn't exist
+ return nil, nil
+ }
+ return nil, err
+ }
+
+ return svc, nil
+}
+
// Run starts an asynchronous loop that monitors and updates GKENetworkParamSet in the cluster.
func (c *Controller) Run(numWorkers int, stopCh <-chan struct{}, controllerManagerMetrics *controllersmetrics.ControllerManagerMetrics) {
defer utilruntime.HandleCrash()

View File

@@ -0,0 +1,689 @@
diff --git a/.golangci.yml b/.golangci.yml
index cf72a41a2..1c9237e83 100644
--- a/.golangci.yml
+++ b/.golangci.yml
@@ -122,3 +122,9 @@ linters:
# - testpackage
# - revive
# - wsl
+issues:
+ exclude-rules:
+ - filename: "kubevirteps_controller_test.go"
+ linters:
+ - govet
+ text: "declaration of \"err\" shadows"
diff --git a/cmd/kubevirt-cloud-controller-manager/kubevirteps.go b/cmd/kubevirt-cloud-controller-manager/kubevirteps.go
index 74166b5d9..4e744f8de 100644
--- a/cmd/kubevirt-cloud-controller-manager/kubevirteps.go
+++ b/cmd/kubevirt-cloud-controller-manager/kubevirteps.go
@@ -101,7 +101,18 @@ func startKubevirtCloudController(
klog.Infof("Setting up kubevirtEPSController")
- kubevirtEPSController := kubevirteps.NewKubevirtEPSController(tenantClient, infraClient, infraDynamic, kubevirtCloud.Namespace())
+ clusterName := ccmConfig.ComponentConfig.KubeCloudShared.ClusterName
+ if clusterName == "" {
+ klog.Fatalf("Required flag --cluster-name is missing")
+ }
+
+ kubevirtEPSController := kubevirteps.NewKubevirtEPSController(
+ tenantClient,
+ infraClient,
+ infraDynamic,
+ kubevirtCloud.Namespace(),
+ clusterName,
+ )
klog.Infof("Initializing kubevirtEPSController")
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller.go b/pkg/controller/kubevirteps/kubevirteps_controller.go
index 6f6e3d322..b56882c12 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller.go
@@ -54,10 +54,10 @@ type Controller struct {
infraDynamic dynamic.Interface
infraFactory informers.SharedInformerFactory
- infraNamespace string
- queue workqueue.RateLimitingInterface
- maxRetries int
-
+ infraNamespace string
+ clusterName string
+ queue workqueue.RateLimitingInterface
+ maxRetries int
maxEndPointsPerSlice int
}
@@ -65,8 +65,9 @@ func NewKubevirtEPSController(
tenantClient kubernetes.Interface,
infraClient kubernetes.Interface,
infraDynamic dynamic.Interface,
- infraNamespace string) *Controller {
-
+ infraNamespace string,
+ clusterName string,
+) *Controller {
tenantFactory := informers.NewSharedInformerFactory(tenantClient, 0)
infraFactory := informers.NewSharedInformerFactoryWithOptions(infraClient, 0, informers.WithNamespace(infraNamespace))
queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
@@ -79,6 +80,7 @@ func NewKubevirtEPSController(
infraDynamic: infraDynamic,
infraFactory: infraFactory,
infraNamespace: infraNamespace,
+ clusterName: clusterName,
queue: queue,
maxRetries: 25,
maxEndPointsPerSlice: 100,
@@ -320,22 +322,30 @@ func (c *Controller) processNextItem(ctx context.Context) bool {
// getInfraServiceFromTenantEPS returns the Service in the infra cluster that is associated with the given tenant endpoint slice.
func (c *Controller) getInfraServiceFromTenantEPS(ctx context.Context, slice *discovery.EndpointSlice) (*v1.Service, error) {
- infraServices, err := c.infraClient.CoreV1().Services(c.infraNamespace).List(ctx,
- metav1.ListOptions{LabelSelector: fmt.Sprintf("%s=%s,%s=%s", kubevirt.TenantServiceNameLabelKey, slice.Labels["kubernetes.io/service-name"],
- kubevirt.TenantServiceNamespaceLabelKey, slice.Namespace)})
+ tenantServiceName := slice.Labels[discovery.LabelServiceName]
+ tenantServiceNamespace := slice.Namespace
+
+ labelSelector := fmt.Sprintf(
+ "%s=%s,%s=%s,%s=%s",
+ kubevirt.TenantServiceNameLabelKey, tenantServiceName,
+ kubevirt.TenantServiceNamespaceLabelKey, tenantServiceNamespace,
+ kubevirt.TenantClusterNameLabelKey, c.clusterName,
+ )
+
+ svcList, err := c.infraClient.CoreV1().Services(c.infraNamespace).List(ctx, metav1.ListOptions{
+ LabelSelector: labelSelector,
+ })
if err != nil {
- klog.Errorf("Failed to get Service in Infra for EndpointSlice %s in namespace %s: %v", slice.Name, slice.Namespace, err)
+ klog.Errorf("Failed to get Service in Infra for EndpointSlice %s in namespace %s: %v", slice.Name, tenantServiceNamespace, err)
return nil, err
}
- if len(infraServices.Items) > 1 {
- // This should never be possible, only one service should exist for a given tenant endpoint slice
- klog.Errorf("Multiple services found for tenant endpoint slice %s in namespace %s", slice.Name, slice.Namespace)
+ if len(svcList.Items) > 1 {
+ klog.Errorf("Multiple services found for tenant endpoint slice %s in namespace %s", slice.Name, tenantServiceNamespace)
return nil, errors.New("multiple services found for tenant endpoint slice")
}
- if len(infraServices.Items) == 1 {
- return &infraServices.Items[0], nil
+ if len(svcList.Items) == 1 {
+ return &svcList.Items[0], nil
}
- // No service found, possible if service is deleted.
return nil, nil
}
@@ -363,16 +373,27 @@ func (c *Controller) getTenantEPSFromInfraService(ctx context.Context, svc *v1.S
// getInfraEPSFromInfraService returns the EndpointSlices in the infra cluster that are associated with the given infra service.
func (c *Controller) getInfraEPSFromInfraService(ctx context.Context, svc *v1.Service) ([]*discovery.EndpointSlice, error) {
var infraEPSSlices []*discovery.EndpointSlice
- klog.Infof("Searching for endpoints on infra cluster for service %s in namespace %s.", svc.Name, svc.Namespace)
- result, err := c.infraClient.DiscoveryV1().EndpointSlices(svc.Namespace).List(ctx,
- metav1.ListOptions{LabelSelector: fmt.Sprintf("%s=%s", discovery.LabelServiceName, svc.Name)})
+
+ klog.Infof("Searching for EndpointSlices in infra cluster for service %s/%s", svc.Namespace, svc.Name)
+
+ labelSelector := fmt.Sprintf(
+ "%s=%s,%s=%s",
+ discovery.LabelServiceName, svc.Name,
+ kubevirt.TenantClusterNameLabelKey, c.clusterName,
+ )
+
+ result, err := c.infraClient.DiscoveryV1().EndpointSlices(svc.Namespace).List(ctx, metav1.ListOptions{
+ LabelSelector: labelSelector,
+ })
if err != nil {
klog.Errorf("Failed to get EndpointSlices for Service %s in namespace %s: %v", svc.Name, svc.Namespace, err)
return nil, err
}
+
for _, eps := range result.Items {
infraEPSSlices = append(infraEPSSlices, &eps)
}
+
return infraEPSSlices, nil
}
@@ -382,74 +403,117 @@ func (c *Controller) reconcile(ctx context.Context, r *Request) error {
return errors.New("could not cast object to service")
}
+ // Skip services not managed by this controller (missing required labels)
if service.Labels[kubevirt.TenantServiceNameLabelKey] == "" ||
service.Labels[kubevirt.TenantServiceNamespaceLabelKey] == "" ||
service.Labels[kubevirt.TenantClusterNameLabelKey] == "" {
- klog.Infof("This LoadBalancer Service: %s is not managed by the %s. Skipping.", service.Name, ControllerName)
+ klog.Infof("Service %s is not managed by this controller. Skipping.", service.Name)
+ return nil
+ }
+
+ // Skip services for other clusters
+ if service.Labels[kubevirt.TenantClusterNameLabelKey] != c.clusterName {
+ klog.Infof("Skipping Service %s: cluster label %q doesn't match our clusterName %q", service.Name, service.Labels[kubevirt.TenantClusterNameLabelKey], c.clusterName)
return nil
}
+
klog.Infof("Reconciling: %v", service.Name)
+ /*
+ 1) Check if Service in the infra cluster is actually present.
+ If it's not found, mark it as 'deleted' so that we don't create new slices.
+ */
serviceDeleted := false
- svc, err := c.infraFactory.Core().V1().Services().Lister().Services(c.infraNamespace).Get(service.Name)
+ infraSvc, err := c.infraFactory.Core().V1().Services().Lister().Services(c.infraNamespace).Get(service.Name)
if err != nil {
- klog.Infof("Service %s in namespace %s is deleted.", service.Name, service.Namespace)
+ // The Service is not present in the infra lister => treat as deleted
+ klog.Infof("Service %s in namespace %s is deleted (or not found).", service.Name, service.Namespace)
serviceDeleted = true
} else {
- service = svc
+ // Use the actual object from the lister, so we have the latest state
+ service = infraSvc
}
+ /*
+ 2) Get all existing EndpointSlices in the infra cluster that belong to this LB Service.
+ We'll decide which of them should be updated or deleted.
+ */
infraExistingEpSlices, err := c.getInfraEPSFromInfraService(ctx, service)
if err != nil {
return err
}
- // At this point we have the current state of the 3 main objects we are interested in:
- // 1. The Service in the infra cluster, the one created by the KubevirtCloudController.
- // 2. The EndpointSlices in the tenant cluster, created for the tenant cluster's Service.
- // 3. The EndpointSlices in the infra cluster, managed by this controller.
-
slicesToDelete := []*discovery.EndpointSlice{}
slicesByAddressType := make(map[discovery.AddressType][]*discovery.EndpointSlice)
+ // For example, if the service is single-stack IPv4 => only AddressTypeIPv4
+ // or if dual-stack => IPv4 and IPv6, etc.
serviceSupportedAddressesTypes := getAddressTypesForService(service)
- // If the services switched to a different address type, we need to delete the old ones, because it's immutable.
- // If the services switched to a different externalTrafficPolicy, we need to delete the old ones.
+
+ /*
+ 3) Determine which slices to delete, and which to pass on to the normal
+ "reconcileByAddressType" logic.
+
+ - If 'serviceDeleted' is true OR service.Spec.Selector != nil, we remove them.
+ - Also, if the slice's address type is unsupported by the Service, we remove it.
+ */
for _, eps := range infraExistingEpSlices {
- if service.Spec.Selector != nil || serviceDeleted {
- klog.Infof("Added for deletion EndpointSlice %s in namespace %s because it has a selector", eps.Name, eps.Namespace)
- // to be sure we don't delete any slice that is not managed by us
+ // If service is deleted or has a non-nil selector => remove slices
+ if serviceDeleted || service.Spec.Selector != nil {
+ /*
+ Only remove if it is clearly labeled as managed by us:
+ we do not want to accidentally remove slices that are not
+ created by this controller.
+ */
if c.managedByController(eps) {
+ klog.Infof("Added for deletion EndpointSlice %s in namespace %s because service is deleted or has a selector",
+ eps.Name, eps.Namespace)
slicesToDelete = append(slicesToDelete, eps)
}
continue
}
+
+ // If the Service does not support this slice's AddressType => remove
if !serviceSupportedAddressesTypes.Has(eps.AddressType) {
- klog.Infof("Added for deletion EndpointSlice %s in namespace %s because it has an unsupported address type: %v", eps.Name, eps.Namespace, eps.AddressType)
+ klog.Infof("Added for deletion EndpointSlice %s in namespace %s because it has an unsupported address type: %v",
+ eps.Name, eps.Namespace, eps.AddressType)
slicesToDelete = append(slicesToDelete, eps)
continue
}
+
+ /*
+ Otherwise, this slice is potentially still valid for the given AddressType,
+ we'll send it to reconcileByAddressType for final merging and updates.
+ */
slicesByAddressType[eps.AddressType] = append(slicesByAddressType[eps.AddressType], eps)
}
- if !serviceDeleted {
- // Get tenant's endpoint slices for this service
+ /*
+ 4) If the Service was NOT deleted and has NO selector (i.e., it's a "no-selector" LB Service),
+ we proceed to handle creation and updates. That means:
+ - Gather Tenant's EndpointSlices
+ - Reconcile them by each AddressType
+ */
+ if !serviceDeleted && service.Spec.Selector == nil {
tenantEpSlices, err := c.getTenantEPSFromInfraService(ctx, service)
if err != nil {
return err
}
- // Reconcile the EndpointSlices for each address type e.g. ipv4, ipv6
+ // For each addressType (ipv4, ipv6, etc.) reconcile the infra slices
for addressType := range serviceSupportedAddressesTypes {
existingSlices := slicesByAddressType[addressType]
- err := c.reconcileByAddressType(service, tenantEpSlices, existingSlices, addressType)
- if err != nil {
+ if err := c.reconcileByAddressType(service, tenantEpSlices, existingSlices, addressType); err != nil {
return err
}
}
}
- // Delete the EndpointSlices that are no longer needed
+ /*
+ 5) Perform the actual deletion of all slices we flagged.
+ In many cases (serviceDeleted or .Spec.Selector != nil),
+ we end up with only "delete" actions and no new slice creation.
+ */
for _, eps := range slicesToDelete {
err := c.infraClient.DiscoveryV1().EndpointSlices(eps.Namespace).Delete(context.TODO(), eps.Name, metav1.DeleteOptions{})
if err != nil {
@@ -474,11 +538,11 @@ func (c *Controller) reconcileByAddressType(service *v1.Service, tenantSlices []
// Create the desired port configuration
var desiredPorts []discovery.EndpointPort
- for _, port := range service.Spec.Ports {
+ for i := range service.Spec.Ports {
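+ // Indexing avoids taking the address of the range variable, which (pre-Go 1.22) would make every EndpointPort point at the same loop variable.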
desiredPorts = append(desiredPorts, discovery.EndpointPort{
- Port: &port.TargetPort.IntVal,
- Protocol: &port.Protocol,
- Name: &port.Name,
+ Port: &service.Spec.Ports[i].TargetPort.IntVal,
+ Protocol: &service.Spec.Ports[i].Protocol,
+ Name: &service.Spec.Ports[i].Name,
})
}
@@ -588,55 +652,114 @@ func ownedBy(endpointSlice *discovery.EndpointSlice, svc *v1.Service) bool {
return false
}
-func (c *Controller) finalize(service *v1.Service, slicesToCreate []*discovery.EndpointSlice, slicesToUpdate []*discovery.EndpointSlice, slicesToDelete []*discovery.EndpointSlice) error {
- // If there are slices to delete and slices to create, make them as update
- for i := 0; i < len(slicesToDelete); {
+func (c *Controller) finalize(
+ service *v1.Service,
+ slicesToCreate []*discovery.EndpointSlice,
+ slicesToUpdate []*discovery.EndpointSlice,
+ slicesToDelete []*discovery.EndpointSlice,
+) error {
+ /*
+ We try to turn a "delete + create" pair into a single "update" operation
+ if the original slice (slicesToDelete[i]) has the same address type as
+ the first slice in slicesToCreate, and is owned by the same Service.
+
+ However, we must re-check the lengths of slicesToDelete and slicesToCreate
+ within the loop to avoid an out-of-bounds index in slicesToCreate.
+ */
+
+ i := 0
+ for i < len(slicesToDelete) {
+ // If there is nothing to create, break early
if len(slicesToCreate) == 0 {
break
}
- if slicesToDelete[i].AddressType == slicesToCreate[0].AddressType && ownedBy(slicesToDelete[i], service) {
- slicesToCreate[0].Name = slicesToDelete[i].Name
+
+ sd := slicesToDelete[i]
+ sc := slicesToCreate[0] // We can safely do this now, because len(slicesToCreate) > 0
+
+ // If the address type matches, and the slice is owned by the same Service,
+ // then instead of deleting sd and creating sc, we'll transform it into an update:
+ // we rename sc with sd's name, remove sd from the delete list, remove sc from the create list,
+ // and add sc to the update list.
+ if sd.AddressType == sc.AddressType && ownedBy(sd, service) {
+ sliceToUpdate := sc
+ sliceToUpdate.Name = sd.Name
+
+ // Remove the first element from slicesToCreate
slicesToCreate = slicesToCreate[1:]
- slicesToUpdate = append(slicesToUpdate, slicesToCreate[0])
+
+ // Remove the slice from slicesToDelete
slicesToDelete = append(slicesToDelete[:i], slicesToDelete[i+1:]...)
+
+ // Now add the renamed slice to the list of slices we want to update
+ slicesToUpdate = append(slicesToUpdate, sliceToUpdate)
+
+ /*
+ Do not increment i here, because we've just removed an element from
+ slicesToDelete. The next slice to examine is now at the same index i.
+ */
} else {
+ // If they don't match, move on to the next slice in slicesToDelete.
i++
}
}
- // Create the new slices if service is not marked for deletion
+ /*
+ If the Service is not being deleted, create all remaining slices in slicesToCreate.
+ (If the Service has a DeletionTimestamp, it means it is going away, so we do not
+ want to create new EndpointSlices.)
+ */
if service.DeletionTimestamp == nil {
for _, slice := range slicesToCreate {
- createdSlice, err := c.infraClient.DiscoveryV1().EndpointSlices(slice.Namespace).Create(context.TODO(), slice, metav1.CreateOptions{})
+ createdSlice, err := c.infraClient.DiscoveryV1().EndpointSlices(slice.Namespace).Create(
+ context.TODO(),
+ slice,
+ metav1.CreateOptions{},
+ )
if err != nil {
- klog.Errorf("Failed to create EndpointSlice %s in namespace %s: %v", slice.Name, slice.Namespace, err)
+ klog.Errorf("Failed to create EndpointSlice %s in namespace %s: %v",
+ slice.Name, slice.Namespace, err)
+ // If the namespace is terminating, it's safe to ignore the error.
if k8serrors.HasStatusCause(err, v1.NamespaceTerminatingCause) {
- return nil
+ continue
}
return err
}
- klog.Infof("Created EndpointSlice %s in namespace %s", createdSlice.Name, createdSlice.Namespace)
+ klog.Infof("Created EndpointSlice %s in namespace %s",
+ createdSlice.Name, createdSlice.Namespace)
}
}
- // Update slices
+ // Update slices that are in the slicesToUpdate list.
for _, slice := range slicesToUpdate {
- _, err := c.infraClient.DiscoveryV1().EndpointSlices(slice.Namespace).Update(context.TODO(), slice, metav1.UpdateOptions{})
+ _, err := c.infraClient.DiscoveryV1().EndpointSlices(slice.Namespace).Update(
+ context.TODO(),
+ slice,
+ metav1.UpdateOptions{},
+ )
if err != nil {
- klog.Errorf("Failed to update EndpointSlice %s in namespace %s: %v", slice.Name, slice.Namespace, err)
+ klog.Errorf("Failed to update EndpointSlice %s in namespace %s: %v",
+ slice.Name, slice.Namespace, err)
return err
}
- klog.Infof("Updated EndpointSlice %s in namespace %s", slice.Name, slice.Namespace)
+ klog.Infof("Updated EndpointSlice %s in namespace %s",
+ slice.Name, slice.Namespace)
}
- // Delete slices
+ // Finally, delete slices that are in slicesToDelete and are no longer needed.
for _, slice := range slicesToDelete {
- err := c.infraClient.DiscoveryV1().EndpointSlices(slice.Namespace).Delete(context.TODO(), slice.Name, metav1.DeleteOptions{})
+ err := c.infraClient.DiscoveryV1().EndpointSlices(slice.Namespace).Delete(
+ context.TODO(),
+ slice.Name,
+ metav1.DeleteOptions{},
+ )
if err != nil {
- klog.Errorf("Failed to delete EndpointSlice %s in namespace %s: %v", slice.Name, slice.Namespace, err)
+ klog.Errorf("Failed to delete EndpointSlice %s in namespace %s: %v",
+ slice.Name, slice.Namespace, err)
return err
}
- klog.Infof("Deleted EndpointSlice %s in namespace %s", slice.Name, slice.Namespace)
+ klog.Infof("Deleted EndpointSlice %s in namespace %s",
+ slice.Name, slice.Namespace)
}
return nil
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller_test.go b/pkg/controller/kubevirteps/kubevirteps_controller_test.go
index 1fb86e25f..14d92d340 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller_test.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller_test.go
@@ -13,6 +13,7 @@ import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/intstr"
+ "k8s.io/apimachinery/pkg/util/sets"
dfake "k8s.io/client-go/dynamic/fake"
"k8s.io/client-go/kubernetes/fake"
"k8s.io/client-go/testing"
@@ -189,7 +190,7 @@ func setupTestKubevirtEPSController() *testKubevirtEPSController {
}: "VirtualMachineInstanceList",
})
- controller := NewKubevirtEPSController(tenantClient, infraClient, infraDynamic, "test")
+ controller := NewKubevirtEPSController(tenantClient, infraClient, infraDynamic, "test", "test-cluster")
err := controller.Init()
if err != nil {
@@ -686,5 +687,229 @@ var _ = g.Describe("KubevirtEPSController", g.Ordered, func() {
return false, err
}).Should(BeTrue(), "EndpointSlice in infra cluster should be recreated by the controller after deletion")
})
+
+ g.It("Should correctly handle multiple unique ports in EndpointSlice", func() {
+ // Create a VMI in the infra cluster
+ createAndAssertVMI("worker-0-test", "ip-10-32-5-13", "123.45.67.89")
+
+ // Create an EndpointSlice in the tenant cluster
+ createAndAssertTenantSlice("test-epslice", "tenant-service-name", discoveryv1.AddressTypeIPv4,
+ *createPort("http", 80, v1.ProtocolTCP),
+ []discoveryv1.Endpoint{*createEndpoint("123.45.67.89", "worker-0-test", true, true, false)})
+
+ // Define multiple ports for the Service
+ servicePorts := []v1.ServicePort{
+ {
+ Name: "client",
+ Protocol: v1.ProtocolTCP,
+ Port: 10001,
+ TargetPort: intstr.FromInt(30396),
+ NodePort: 30396,
+ },
+ {
+ Name: "dashboard",
+ Protocol: v1.ProtocolTCP,
+ Port: 8265,
+ TargetPort: intstr.FromInt(31003),
+ NodePort: 31003,
+ },
+ {
+ Name: "metrics",
+ Protocol: v1.ProtocolTCP,
+ Port: 8080,
+ TargetPort: intstr.FromInt(30452),
+ NodePort: 30452,
+ },
+ }
+
+ createAndAssertInfraServiceLB("infra-multiport-service", "tenant-service-name", "test-cluster",
+ servicePorts[0], v1.ServiceExternalTrafficPolicyLocal)
+
+ svc, err := testVals.infraClient.CoreV1().Services(infraNamespace).Get(context.TODO(), "infra-multiport-service", metav1.GetOptions{})
+ Expect(err).To(BeNil())
+
+ svc.Spec.Ports = servicePorts
+ _, err = testVals.infraClient.CoreV1().Services(infraNamespace).Update(context.TODO(), svc, metav1.UpdateOptions{})
+ Expect(err).To(BeNil())
+
+ var epsListMultiPort *discoveryv1.EndpointSliceList
+
+ Eventually(func() (bool, error) {
+ epsListMultiPort, err = testVals.infraClient.DiscoveryV1().EndpointSlices(infraNamespace).List(context.TODO(), metav1.ListOptions{})
+ if len(epsListMultiPort.Items) != 1 {
+ return false, err
+ }
+
+ createdSlice := epsListMultiPort.Items[0]
+ expectedPortNames := []string{"client", "dashboard", "metrics"}
+ foundPortNames := []string{}
+
+ for _, port := range createdSlice.Ports {
+ if port.Name != nil {
+ foundPortNames = append(foundPortNames, *port.Name)
+ }
+ }
+
+ if len(foundPortNames) != len(expectedPortNames) {
+ return false, err
+ }
+
+ portSet := sets.NewString(foundPortNames...)
+ expectedPortSet := sets.NewString(expectedPortNames...)
+ return portSet.Equal(expectedPortSet), err
+ }).Should(BeTrue(), "EndpointSlice should contain all unique ports from the Service without duplicates")
+ })
+
+ g.It("Should not panic when Service changes to have a non-nil selector, causing EndpointSlice deletion with no new slices to create", func() {
+ createAndAssertVMI("worker-0-test", "ip-10-32-5-13", "123.45.67.89")
+ createAndAssertTenantSlice("test-epslice", "tenant-service-name", discoveryv1.AddressTypeIPv4,
+ *createPort("http", 80, v1.ProtocolTCP),
+ []discoveryv1.Endpoint{*createEndpoint("123.45.67.89", "worker-0-test", true, true, false)})
+ createAndAssertInfraServiceLB("infra-service-no-selector", "tenant-service-name", "test-cluster",
+ v1.ServicePort{
+ Name: "web",
+ Port: 80,
+ NodePort: 31900,
+ Protocol: v1.ProtocolTCP,
+ TargetPort: intstr.IntOrString{IntVal: 30390},
+ },
+ v1.ServiceExternalTrafficPolicyLocal,
+ )
+
+ // Wait for the controller to create an EndpointSlice in the infra cluster.
+ var epsList *discoveryv1.EndpointSliceList
+ var err error
+ Eventually(func() (bool, error) {
+ epsList, err = testVals.infraClient.DiscoveryV1().EndpointSlices(infraNamespace).
+ List(context.TODO(), metav1.ListOptions{})
+ if err != nil {
+ return false, err
+ }
+ // Wait until exactly one slice exists

+ if len(epsList.Items) == 1 {
+ return true, nil
+ }
+ return false, nil
+ }).Should(BeTrue(), "Controller should create an EndpointSlice in infra cluster for the LB service")
+
+ svcWithSelector, err := testVals.infraClient.CoreV1().Services(infraNamespace).
+ Get(context.TODO(), "infra-service-no-selector", metav1.GetOptions{})
+ Expect(err).To(BeNil())
+
+ // Set a selector so the slice-deletion logic runs
+ svcWithSelector.Spec.Selector = map[string]string{"test": "selector-added"}
+ _, err = testVals.infraClient.CoreV1().Services(infraNamespace).
+ Update(context.TODO(), svcWithSelector, metav1.UpdateOptions{})
+ Expect(err).To(BeNil())
+
+ Eventually(func() (bool, error) {
+ epsList, err = testVals.infraClient.DiscoveryV1().EndpointSlices(infraNamespace).
+ List(context.TODO(), metav1.ListOptions{})
+ if err != nil {
+ return false, err
+ }
+ // After the update, the Service should have zero EndpointSlices
+ if len(epsList.Items) == 0 {
+ return true, nil
+ }
+ return false, nil
+ }).Should(BeTrue(), "Existing EndpointSlice should be removed because Service now has a selector")
+ })
+
+ g.It("Should remove EndpointSlices and not recreate them when a previously no-selector Service obtains a selector", func() {
+ testVals.infraClient.Fake.PrependReactor("create", "endpointslices", func(action testing.Action) (bool, runtime.Object, error) {
+ createAction := action.(testing.CreateAction)
+ slice := createAction.GetObject().(*discoveryv1.EndpointSlice)
+ if slice.Name == "" && slice.GenerateName != "" {
+ slice.Name = slice.GenerateName + "-fake001"
+ }
+ return false, slice, nil
+ })
+
+ createAndAssertVMI("worker-0-test", "ip-10-32-5-13", "123.45.67.89")
+
+ createAndAssertTenantSlice("test-epslice", "tenant-service-name", discoveryv1.AddressTypeIPv4,
+ *createPort("http", 80, v1.ProtocolTCP),
+ []discoveryv1.Endpoint{
+ *createEndpoint("123.45.67.89", "worker-0-test", true, true, false),
+ },
+ )
+
+ noSelectorSvcName := "svc-without-selector"
+ svc := &v1.Service{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: noSelectorSvcName,
+ Namespace: infraNamespace,
+ Labels: map[string]string{
+ kubevirt.TenantServiceNameLabelKey: "tenant-service-name",
+ kubevirt.TenantServiceNamespaceLabelKey: tenantNamespace,
+ kubevirt.TenantClusterNameLabelKey: "test-cluster",
+ },
+ },
+ Spec: v1.ServiceSpec{
+ Ports: []v1.ServicePort{
+ {
+ Name: "web",
+ Port: 80,
+ NodePort: 31900,
+ Protocol: v1.ProtocolTCP,
+ TargetPort: intstr.IntOrString{IntVal: 30390},
+ },
+ },
+ Type: v1.ServiceTypeLoadBalancer,
+ ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyLocal,
+ },
+ }
+
+ _, err := testVals.infraClient.CoreV1().Services(infraNamespace).Create(context.TODO(), svc, metav1.CreateOptions{})
+ Expect(err).To(BeNil())
+
+ Eventually(func() (bool, error) {
+ epsList, err := testVals.infraClient.DiscoveryV1().EndpointSlices(infraNamespace).
+ List(context.TODO(), metav1.ListOptions{})
+ if err != nil {
+ return false, err
+ }
+ return len(epsList.Items) == 1, nil
+ }).Should(BeTrue(), "Controller should create an EndpointSlice in infra cluster for the no-selector LB service")
+
+ svcWithSelector, err := testVals.infraClient.CoreV1().Services(infraNamespace).Get(
+ context.TODO(), noSelectorSvcName, metav1.GetOptions{})
+ Expect(err).To(BeNil())
+
+ svcWithSelector.Spec.Selector = map[string]string{"app": "test-value"}
+ _, err = testVals.infraClient.CoreV1().Services(infraNamespace).
+ Update(context.TODO(), svcWithSelector, metav1.UpdateOptions{})
+ Expect(err).To(BeNil())
+
+ Eventually(func() (bool, error) {
+ epsList, err := testVals.infraClient.DiscoveryV1().EndpointSlices(infraNamespace).
+ List(context.TODO(), metav1.ListOptions{})
+ if err != nil {
+ return false, err
+ }
+ return len(epsList.Items) == 0, nil
+ }).Should(BeTrue(), "All EndpointSlices should be removed after Service acquires a selector (no new slices created)")
+ })
+
+ g.It("Should ignore Services from a different cluster", func() {
+ // Create a Service with cluster label "other-cluster"
+ svc := createInfraServiceLB("infra-service-conflict", "tenant-service-name", "other-cluster",
+ v1.ServicePort{Name: "web", Port: 80, NodePort: 31900, Protocol: v1.ProtocolTCP, TargetPort: intstr.IntOrString{IntVal: 30390}},
+ v1.ServiceExternalTrafficPolicyLocal)
+ _, err := testVals.infraClient.CoreV1().Services(infraNamespace).Create(context.TODO(), svc, metav1.CreateOptions{})
+ Expect(err).To(BeNil())
+
+ // The controller should ignore this Service, so no EndpointSlice should be created.
+ Eventually(func() (bool, error) {
+ epsList, err := testVals.infraClient.DiscoveryV1().EndpointSlices(infraNamespace).List(context.TODO(), metav1.ListOptions{})
+ if err != nil {
+ return false, err
+ }
+ // Expect zero slices since cluster label does not match "test-cluster"
+ return len(epsList.Items) == 0, nil
+ }).Should(BeTrue(), "Services with a different cluster label should be ignored")
+ })
+
})
})

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.15.2@sha256:cb4ab74099662f73e058f7c7495fb403488622c3425c06ad23b687bfa8bc805b
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.17.0@sha256:aacabc0e9e2d40ba620fb616df21cbac13a675dd0c8ede8bed93ba3c4c1daf37

View File

@@ -0,0 +1,50 @@
{{/*
Copyright Broadcom, Inc. All Rights Reserved.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}
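As a sketch of the helper's output: `{{ include "resources.preset" (dict "type" "nano") }}` renders the `nano` entry of the dict above as YAML (`toYaml` sorts map keys alphabetically), roughly:

```yaml
limits:
  cpu: 150m
  ephemeral-storage: 2Gi
  memory: 192Mi
requests:
  cpu: 100m
  ephemeral-storage: 50Mi
  memory: 128Mi
```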

View File

@@ -26,6 +26,13 @@ spec:
containers:
- image: "{{ $.Files.Get "images/cluster-autoscaler.tag" | trim }}"
name: cluster-autoscaler
resources:
limits:
cpu: 512m
memory: 512Mi
requests:
cpu: 125m
memory: 128Mi
command:
- /cluster-autoscaler
args:

View File

@@ -102,12 +102,37 @@ metadata:
annotations:
kamaji.clastix.io/kubeconfig-secret-key: "super-admin.svc"
spec:
apiServer:
{{- if .Values.kamajiControlPlane.apiServer.resources }}
resources: {{- toYaml .Values.kamajiControlPlane.apiServer.resources | nindent 6 }}
{{- else if ne .Values.kamajiControlPlane.apiServer.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.kamajiControlPlane.apiServer.resourcesPreset "Release" .Release) | nindent 6 }}
{{- end }}
controllerManager:
{{- if .Values.kamajiControlPlane.controllerManager.resources }}
resources: {{- toYaml .Values.kamajiControlPlane.controllerManager.resources | nindent 6 }}
{{- else if ne .Values.kamajiControlPlane.controllerManager.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.kamajiControlPlane.controllerManager.resourcesPreset "Release" .Release) | nindent 6 }}
{{- end }}
scheduler:
{{- if .Values.kamajiControlPlane.scheduler.resources }}
resources: {{- toYaml .Values.kamajiControlPlane.scheduler.resources | nindent 6 }}
{{- else if ne .Values.kamajiControlPlane.scheduler.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.kamajiControlPlane.scheduler.resourcesPreset "Release" .Release) | nindent 6 }}
{{- end }}
dataStoreName: "{{ $etcd }}"
addons:
coreDNS:
dnsServiceIPs:
- 10.95.0.10
konnectivity: {}
konnectivity:
server:
port: 8132
{{- if .Values.kamajiControlPlane.addons.konnectivity.server.resources }}
resources: {{- toYaml .Values.kamajiControlPlane.addons.konnectivity.server.resources | nindent 10 }}
{{- else if ne .Values.kamajiControlPlane.addons.konnectivity.server.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.kamajiControlPlane.addons.konnectivity.server.resourcesPreset "Release" .Release) | nindent 10 }}
{{- end }}
kubelet:
cgroupfs: systemd
preferredAddressTypes:
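With the chart defaults introduced below (`resources: {}`, `resourcesPreset: "micro"`), the apiServer branch above would render roughly this block; the figures come straight from the `micro` preset:

```yaml
apiServer:
  resources:
    limits:
      cpu: 375m
      ephemeral-storage: 2Gi
      memory: 384Mi
    requests:
      cpu: 250m
      ephemeral-storage: 50Mi
      memory: 256Mi
```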

View File

@@ -63,11 +63,21 @@ spec:
mountPath: /etc/kubernetes/kubeconfig
readOnly: true
resources:
limits:
cpu: 512m
memory: 512Mi
requests:
memory: 50Mi
cpu: 10m
cpu: 125m
memory: 128Mi
- name: csi-provisioner
image: quay.io/openshift/origin-csi-external-provisioner:latest
resources:
limits:
cpu: 512m
memory: 512Mi
requests:
cpu: 125m
memory: 128Mi
args:
- "--csi-address=$(ADDRESS)"
- "--default-fstype=ext4"
@@ -102,9 +112,12 @@ spec:
mountPath: /etc/kubernetes/kubeconfig
readOnly: true
resources:
limits:
cpu: 512m
memory: 512Mi
requests:
memory: 50Mi
cpu: 10m
cpu: 125m
memory: 128Mi
- name: csi-liveness-probe
image: quay.io/openshift/origin-csi-livenessprobe:latest
args:
@@ -115,9 +128,12 @@ spec:
- name: socket-dir
mountPath: /csi
resources:
limits:
cpu: 512m
memory: 512Mi
requests:
memory: 50Mi
cpu: 10m
cpu: 125m
memory: 128Mi
volumes:
- name: socket-dir
emptyDir: {}

View File

@@ -18,7 +18,8 @@ spec:
namespace: cozy-system
kubeConfig:
secretRef:
name: {{ .Release.Name }}-kubeconfig
name: {{ .Release.Name }}-admin-kubeconfig
key: super-admin.svc
targetNamespace: cozy-cert-manager-crds
storageNamespace: cozy-cert-manager-crds
install:

View File

@@ -19,7 +19,8 @@ spec:
namespace: cozy-system
kubeConfig:
secretRef:
name: {{ .Release.Name }}-kubeconfig
name: {{ .Release.Name }}-admin-kubeconfig
key: super-admin.svc
targetNamespace: cozy-cert-manager
storageNamespace: cozy-cert-manager
install:

View File

@@ -18,7 +18,8 @@ spec:
namespace: cozy-system
kubeConfig:
secretRef:
name: {{ .Release.Name }}-kubeconfig
name: {{ .Release.Name }}-admin-kubeconfig
key: super-admin.svc
targetNamespace: cozy-cilium
storageNamespace: cozy-cilium
install:

View File

@@ -18,7 +18,8 @@ spec:
namespace: cozy-system
kubeConfig:
secretRef:
name: {{ .Release.Name }}-kubeconfig
name: {{ .Release.Name }}-admin-kubeconfig
key: super-admin.svc
targetNamespace: cozy-csi
storageNamespace: cozy-csi
install:

View File

@@ -19,7 +19,8 @@ spec:
namespace: cozy-system
kubeConfig:
secretRef:
name: {{ .Release.Name }}-kubeconfig
name: {{ .Release.Name }}-admin-kubeconfig
key: super-admin.svc
targetNamespace: cozy-fluxcd
storageNamespace: cozy-fluxcd
install:

View File

@@ -19,7 +19,8 @@ spec:
namespace: cozy-system
kubeConfig:
secretRef:
name: {{ .Release.Name }}-kubeconfig
name: {{ .Release.Name }}-admin-kubeconfig
key: super-admin.svc
targetNamespace: cozy-ingress-nginx
storageNamespace: cozy-ingress-nginx
install:

View File

@@ -21,7 +21,8 @@ spec:
namespace: cozy-system
kubeConfig:
secretRef:
name: {{ .Release.Name }}-kubeconfig
name: {{ .Release.Name }}-admin-kubeconfig
key: super-admin.svc
targetNamespace: cozy-monitoring-agents
storageNamespace: cozy-monitoring-agents
install:

View File

@@ -19,7 +19,8 @@ spec:
namespace: cozy-system
kubeConfig:
secretRef:
name: {{ .Release.Name }}-kubeconfig
name: {{ .Release.Name }}-admin-kubeconfig
key: super-admin.svc
targetNamespace: cozy-victoria-metrics-operator
storageNamespace: cozy-victoria-metrics-operator
install:

View File

@@ -36,8 +36,12 @@ spec:
#securityContext:
# privileged: true
resources:
limits:
cpu: 512m
memory: 512Mi
requests:
cpu: 100m
cpu: 125m
memory: 128Mi
volumeMounts:
- mountPath: /etc/kubernetes/kubeconfig
name: kubeconfig

View File

@@ -69,3 +69,63 @@ addons:
##
enabled: false
valuesOverride: {}
## @section Kamaji control plane
##
kamajiControlPlane:
apiServer:
## @param kamajiControlPlane.apiServer.resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param kamajiControlPlane.apiServer.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "micro"
controllerManager:
## @param kamajiControlPlane.controllerManager.resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param kamajiControlPlane.controllerManager.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "micro"
scheduler:
## @param kamajiControlPlane.scheduler.resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param kamajiControlPlane.scheduler.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "micro"
addons:
konnectivity:
server:
## @param kamajiControlPlane.addons.konnectivity.server.resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param kamajiControlPlane.addons.konnectivity.server.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "micro"

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.5.3
version: 0.6.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -83,14 +83,16 @@ more details:
### Backup parameters
| Name | Description | Value |
| ------------------------ | ---------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled` | Enable periodic backups | `false` |
| `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/postgres-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
| Name | Description | Value |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled` | Enable periodic backups | `false` |
| `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/postgres-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
| `resources` | Resources | `{}` |
| `resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/mariadb-backup:0.5.3@sha256:8ca1fb01e880d351ee7d984a0b437c1142836963cd079986156ed28750067138
ghcr.io/cozystack/cozystack/mariadb-backup:0.6.0@sha256:8ca1fb01e880d351ee7d984a0b437c1142836963cd079986156ed28750067138

View File

@@ -0,0 +1,50 @@
{{/*
Copyright Broadcom, Inc. All Rights Reserved.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}

View File

@@ -72,3 +72,9 @@ spec:
#secondaryService:
# type: LoadBalancer
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 4 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 4 }}
{{- end }}
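The template resolves resources in a fixed order; a sketch of the three possible values.yaml states and their effect (figures arbitrary):

```yaml
# 1) Explicit resources win; the preset is ignored:
resources:
  requests: {cpu: 100m, memory: 512Mi}
  limits: {cpu: "4", memory: 4Gi}
resourcesPreset: "nano"

# 2) Empty resources fall back to the named preset:
resources: {}
resourcesPreset: "small"

# 3) "none" disables the fallback, so no resources block is rendered:
resources: {}
resourcesPreset: "none"
```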

View File

@@ -66,6 +66,16 @@
"default": "ChaXoveekoh6eigh4siesheeda2quai0"
}
}
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
}
}
}

View File

@@ -54,3 +54,16 @@ backup:
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
## @param resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "nano"

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.4.1
version: 0.5.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -4,13 +4,15 @@
### Common parameters
| Name | Description | Value |
| ------------------- | -------------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `replicas` | Number of NATS replicas | `2` |
| `storageClass` | StorageClass used to store the data | `""` |
| `users` | Users configuration | `{}` |
| `jetstream.size` | Jetstream persistent storage size | `10Gi` |
| `jetstream.enabled` | Enable or disable Jetstream | `true` |
| `config.merge` | Additional configuration to merge into NATS config | `{}` |
| `config.resolver` | Resolver configuration to merge into NATS config | `{}` |
| Name | Description | Value |
| ------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `replicas` | Number of NATS replicas | `2` |
| `storageClass` | StorageClass used to store the data | `""` |
| `users` | Users configuration | `{}` |
| `jetstream.size` | Jetstream persistent storage size | `10Gi` |
| `jetstream.enabled` | Enable or disable Jetstream | `true` |
| `config.merge` | Additional configuration to merge into NATS config | `{}` |
| `config.resolver` | Resolver configuration to merge into NATS config | `{}` |
| `resources` | Resources | `{}` |
| `resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |

View File

@@ -0,0 +1,50 @@
{{/*
Copyright Broadcom, Inc. All Rights Reserved.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}

View File

@@ -38,6 +38,17 @@ spec:
timeout: 5m0s
values:
nats:
podTemplate:
merge:
spec:
containers:
- name: nats
image: nats:2.10.17-alpine
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 22 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 22 }}
{{- end }}
fullnameOverride: {{ .Release.Name }}
config:
{{- if or (gt (len $passwords) 0) (gt (len .Values.config.merge) 0) }}
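Under the default `nano` preset this merge yields roughly the following pod template for the nats container (image tag as pinned above):

```yaml
nats:
  podTemplate:
    merge:
      spec:
        containers:
          - name: nats
            image: nats:2.10.17-alpine
            resources:
              limits:
                cpu: 150m
                ephemeral-storage: 2Gi
                memory: 192Mi
              requests:
                cpu: 100m
                ephemeral-storage: 50Mi
                memory: 128Mi
```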

View File

@@ -46,6 +46,16 @@
"default": {}
}
}
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
}
}
}

View File

@@ -61,3 +61,16 @@ config:
## Default: {}
## Example see: https://github.com/nats-io/k8s/blob/main/helm/charts/nats/values.yaml#L247
resolver: {}
## @param resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "nano"

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.9.0
version: 0.10.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -58,13 +58,15 @@ more details:
### Backup parameters
| Name | Description | Value |
| ------------------------ | ---------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled` | Enable periodic backups | `false` |
| `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/postgres-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
| Name | Description | Value |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled`        | Enable periodic backups                                                                                                                                                                                              | `false`                                                 |
| `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/postgres-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
| `resources` | Resources | `{}` |
| `resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/postgres-backup:0.9.0@sha256:2b6ba87f5688a439bd2ac12835a5ab9e601feb15c0c44ed0d9ca48cec7c52521
ghcr.io/cozystack/cozystack/postgres-backup:0.10.0@sha256:2b6ba87f5688a439bd2ac12835a5ab9e601feb15c0c44ed0d9ca48cec7c52521

View File

@@ -0,0 +1,50 @@
{{/*
Copyright Broadcom, Inc. All Rights Reserved.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}

View File

@@ -5,6 +5,12 @@ metadata:
name: {{ .Release.Name }}
spec:
instances: {{ .Values.replicas }}
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 4 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 4 }}
{{- end }}
enableSuperuserAccess: true
{{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }}
{{- if $configMap }}
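
Under the chart defaults (`resources: {}` with `resourcesPreset: "nano"`), a sketch of the relevant rendered Cluster fragment, with the `instances` value assumed for illustration:

```yaml
spec:
  instances: 2                # .Values.replicas; example value for this sketch
  resources:                  # emitted by the "nano" preset via nindent 4
    limits:
      cpu: 150m
      ephemeral-storage: 2Gi
      memory: 192Mi
    requests:
      cpu: 100m
      ephemeral-storage: 50Mi
      memory: 128Mi
  enableSuperuserAccess: true
```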

View File

@@ -101,6 +101,16 @@
"default": "ChaXoveekoh6eigh4siesheeda2quai0"
}
}
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
}
}
}

View File

@@ -76,3 +76,16 @@ backup:
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
## @param resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "nano"

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.4.4
version: 0.5.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -22,7 +22,9 @@ The service utilizes official RabbitMQ operator. This ensures the reliability an
### Configuration parameters
| Name | Description | Value |
| -------- | --------------------------- | ----- |
| `users` | Users configuration | `{}` |
| `vhosts` | Virtual Hosts configuration | `{}` |
| Name | Description | Value |
| ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ |
| `users` | Users configuration | `{}` |
| `vhosts` | Virtual Hosts configuration | `{}` |
| `resources` | Resources | `{}` |
| `resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |

View File

@@ -0,0 +1,50 @@
{{/*
Copyright Broadcom, Inc. All Rights Reserved.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}

View File

@@ -11,7 +11,11 @@ spec:
service:
type: LoadBalancer
{{- end }}
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 4 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 4 }}
{{- end }}
override:
statefulSet:
spec:

View File

@@ -26,6 +26,16 @@
"type": "object",
"description": "Virtual Hosts configuration",
"default": {}
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
}
}
}

View File

@@ -39,3 +39,16 @@ users: {}
## admin:
## - user3
vhosts: {}
## @param resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "nano"

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.5.0
version: 0.6.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -13,12 +13,14 @@ Service utilizes the Spotahome Redis Operator for efficient management and orche
### Common parameters
| Name | Description | Value |
| -------------- | ----------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `size` | Persistent Volume size | `1Gi` |
| `replicas` | Number of Redis replicas | `2` |
| `storageClass` | StorageClass used to store the data | `""` |
| `authEnabled` | Enable password generation | `true` |
| Name | Description | Value |
| ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `size` | Persistent Volume size | `1Gi` |
| `replicas` | Number of Redis replicas | `2` |
| `storageClass` | StorageClass used to store the data | `""` |
| `authEnabled` | Enable password generation | `true` |
| `resources` | Resources | `{}` |
| `resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |

View File

@@ -0,0 +1,50 @@
{{/*
Copyright Broadcom, Inc. All Rights Reserved.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}

View File

@@ -25,19 +25,18 @@ metadata:
spec:
sentinel:
replicas: 3
resources:
requests:
cpu: 100m
limits:
memory: 100Mi
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 6 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 6 }}
{{- end }}
redis:
replicas: {{ .Values.replicas }}
resources:
requests:
cpu: 150m
memory: 400Mi
limits:
memory: 1000Mi
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 6 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 6 }}
{{- end }}
{{- with .Values.size }}
storage:
persistentVolumeClaim:
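
This drops the previously hardcoded sentinel (`cpu: 100m` request, `memory: 100Mi` limit) and redis (`cpu: 150m`/`memory: 400Mi` requests, `memory: 1000Mi` limit) values in favor of the shared preset logic. Since explicit `resources` wins over the preset, the old redis figures could be pinned back with an override like the sketch below; note that sentinel and redis now both read the same `.Values.resources`, so the old per-component split is no longer expressible through these fields:

```yaml
# illustrative values.yaml: reproduces the old hardcoded redis figures
resources:
  requests:
    cpu: 150m
    memory: 400Mi
  limits:
    memory: 1000Mi
```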

View File

@@ -26,6 +26,16 @@
"type": "boolean",
"description": "Enable password generation",
"default": true
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
}
}
}

View File

@@ -11,3 +11,16 @@ size: 1Gi
replicas: 2
storageClass: ""
authEnabled: true
## @param resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "nano"

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.2.0
version: 0.3.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -19,11 +19,13 @@ Managed TCP Load Balancer Service efficiently utilizes HAProxy for load balancin
### Configuration parameters
| Name | Description | Value |
| -------------------------------- | ------------------------------------------------------------- | ------- |
| `httpAndHttps.mode` | Mode for balancer. Allowed values: `tcp` and `tcp-with-proxy` | `tcp` |
| `httpAndHttps.targetPorts.http` | HTTP port number. | `80` |
| `httpAndHttps.targetPorts.https` | HTTPS port number. | `443` |
| `httpAndHttps.endpoints` | Endpoint addresses list | `[]` |
| `whitelistHTTP` | Secure HTTP by enabling client networks whitelisting | `false` |
| `whitelist` | List of client networks | `[]` |
| Name | Description | Value |
| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `httpAndHttps.mode` | Mode for balancer. Allowed values: `tcp` and `tcp-with-proxy` | `tcp` |
| `httpAndHttps.targetPorts.http` | HTTP port number. | `80` |
| `httpAndHttps.targetPorts.https` | HTTPS port number. | `443` |
| `httpAndHttps.endpoints` | Endpoint addresses list | `[]` |
| `whitelistHTTP` | Secure HTTP by enabling client networks whitelisting | `false` |
| `whitelist` | List of client networks | `[]` |
| `resources` | Resources | `{}` |
| `resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |

View File

@@ -0,0 +1,50 @@
{{/*
Copyright Broadcom, Inc. All Rights Reserved.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}

View File

@@ -33,6 +33,11 @@ spec:
containers:
- image: haproxy:latest
name: haproxy
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 10 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 10 }}
{{- end }}
ports:
{{- with .Values.httpAndHttps }}
- containerPort: 8080
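
Both knobs can also be switched off, in which case the template emits no `resources` block at all; with haproxy as the pod's only container, the pod would typically land in the BestEffort QoS class:

```yaml
# illustrative values.yaml: opt out of resource management entirely
resources: {}
resourcesPreset: "none"
```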

View File

@@ -57,6 +57,16 @@
"description": "List of client networks",
"default": [],
"items": {}
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
}
}
}

View File

@@ -43,3 +43,16 @@ httpAndHttps:
##
whitelistHTTP: false
whitelist: []
## @param resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "nano"

View File

@@ -4,4 +4,4 @@ description: Separated tenant namespace
icon: /logos/tenant.svg
type: application
version: 1.9.0
version: 1.9.1

View File

@@ -41,6 +41,7 @@ metadata:
{{- end }}
{{- end }}
{{- include "cozystack.namespace-anotations" (list $ $existingNS) | nindent 4 }}
alpha.kubevirt.io/auto-memory-limits-ratio: "1.0"
ownerReferences:
- apiVersion: v1
blockOwnerDeletion: true

View File

@@ -1,142 +1,156 @@
bucket 0.1.0 HEAD
clickhouse 0.1.0 ca79f72
clickhouse 0.2.0 7cd7de73
clickhouse 0.2.1 5ca8823
clickhouse 0.3.0 b00621e
clickhouse 0.4.0 320fc32
clickhouse 0.5.0 2a4768a5
clickhouse 0.6.0 18bbdb67
clickhouse 0.6.1 b7375f73
clickhouse 0.6.2 HEAD
ferretdb 0.1.0 4ffa8615
ferretdb 0.1.1 5ca8823
ferretdb 0.2.0 adaf603
ferretdb 0.3.0 aa2f553
ferretdb 0.4.0 def2eb0f
ferretdb 0.4.1 a9555210
ferretdb 0.4.2 HEAD
http-cache 0.1.0 a956713
http-cache 0.2.0 5ca8823
http-cache 0.3.0 fab5940
http-cache 0.3.1 HEAD
kafka 0.1.0 760f86d2
kafka 0.2.0 a2cc83d
kafka 0.2.1 3ac17018
kafka 0.2.2 d0758692
kafka 0.2.3 5ca8823
kafka 0.3.0 c07c4bbd
kafka 0.3.1 b7375f73
kafka 0.3.2 b75aaf17
kafka 0.3.3 HEAD
kubernetes 0.1.0 f642698
kubernetes 0.2.0 7cd7de73
kubernetes 0.3.0 7caccec1
kubernetes 0.4.0 6cae6ce8
kubernetes 0.5.0 6bd2d455
kubernetes 0.6.0 4cbc8a2c
kubernetes 0.7.0 ceefae03
clickhouse 0.1.0 f7eaab0a
clickhouse 0.2.0 53f2365e
clickhouse 0.2.1 dfbc210b
clickhouse 0.3.0 6c5cf5bf
clickhouse 0.4.0 b40e1b09
clickhouse 0.5.0 0f312d5c
clickhouse 0.6.0 1ec10165
clickhouse 0.6.1 c62a83a7
clickhouse 0.6.2 8267072d
clickhouse 0.7.0 HEAD
ferretdb 0.1.0 e9716091
ferretdb 0.1.1 91b0499a
ferretdb 0.2.0 6c5cf5bf
ferretdb 0.3.0 b8e33d19
ferretdb 0.4.0 b40e1b09
ferretdb 0.4.1 1ec10165
ferretdb 0.4.2 8267072d
ferretdb 0.5.0 HEAD
http-cache 0.1.0 263e47be
http-cache 0.2.0 53f2365e
http-cache 0.3.0 6c5cf5bf
http-cache 0.3.1 0f312d5c
http-cache 0.4.0 HEAD
kafka 0.1.0 f7eaab0a
kafka 0.2.0 c0685f43
kafka 0.2.1 dfbc210b
kafka 0.2.2 e9716091
kafka 0.2.3 91b0499a
kafka 0.3.0 6c5cf5bf
kafka 0.3.1 c62a83a7
kafka 0.3.2 93c46161
kafka 0.3.3 8267072d
kafka 0.4.0 85ec09b8
kafka 0.5.0 HEAD
kubernetes 0.1.0 263e47be
kubernetes 0.2.0 53f2365e
kubernetes 0.3.0 007d414f
kubernetes 0.4.0 d7cfa53c
kubernetes 0.5.0 dfbc210b
kubernetes 0.6.0 5bbc488e
kubernetes 0.7.0 e9716091
kubernetes 0.8.0 ac11056e
kubernetes 0.8.1 e54608d8
kubernetes 0.8.2 5ca8823
kubernetes 0.9.0 9b6dd19
kubernetes 0.10.0 ac5c38b
kubernetes 0.11.0 4eaca42
kubernetes 0.11.1 4f430a90
kubernetes 0.12.0 74649f8
kubernetes 0.12.1 28fca4e
kubernetes 0.13.0 ced8e5b9
kubernetes 0.8.1 366bcafc
kubernetes 0.8.2 f81be075
kubernetes 0.9.0 6c5cf5bf
kubernetes 0.10.0 b8e33d19
kubernetes 0.11.0 4b90bf5a
kubernetes 0.11.1 5fb9cfe3
kubernetes 0.12.0 bb985806
kubernetes 0.12.1 28fca4ef
kubernetes 0.13.0 1ec10165
kubernetes 0.14.0 bfbde07c
kubernetes 0.14.1 fde4bcfa
kubernetes 0.15.0 cb7b8158
kubernetes 0.15.1 43e593c7
kubernetes 0.15.2 HEAD
mysql 0.1.0 f642698
mysql 0.2.0 8b975ff0
mysql 0.3.0 5ca8823
mysql 0.4.0 93018c4
mysql 0.5.0 4b84798
mysql 0.5.1 fab5940b
mysql 0.5.2 d8a92aa3
mysql 0.5.3 HEAD
nats 0.1.0 5ca8823
nats 0.2.0 c07c4bbd
kubernetes 0.14.1 898374b5
kubernetes 0.15.0 4e68e65c
kubernetes 0.15.1 160e4e2a
kubernetes 0.15.2 8267072d
kubernetes 0.16.0 077045b0
kubernetes 0.17.0 HEAD
mysql 0.1.0 263e47be
mysql 0.2.0 c24a103f
mysql 0.3.0 53f2365e
mysql 0.4.0 6c5cf5bf
mysql 0.5.0 b40e1b09
mysql 0.5.1 0f312d5c
mysql 0.5.2 1ec10165
mysql 0.5.3 8267072d
mysql 0.6.0 HEAD
nats 0.1.0 e9716091
nats 0.2.0 6c5cf5bf
nats 0.3.0 78366f19
nats 0.3.1 b7375f73
nats 0.4.0 da1e705a
nats 0.4.1 HEAD
postgres 0.1.0 f642698
postgres 0.2.0 7cd7de73
postgres 0.2.1 4a97e297
postgres 0.3.0 995dea6f
postgres 0.4.0 ec283c33
postgres 0.4.1 5ca8823
postgres 0.5.0 c07c4bbd
postgres 0.6.0 2a4768a
postgres 0.6.2 54fd61c
postgres 0.7.0 dc9d8bb
postgres 0.7.1 175a65f
postgres 0.8.0 cb7b8158
postgres 0.9.0 HEAD
rabbitmq 0.1.0 f642698
rabbitmq 0.2.0 5ca8823
rabbitmq 0.3.0 9e33dc0
rabbitmq 0.4.0 36d8855
rabbitmq 0.4.1 35536bb
rabbitmq 0.4.2 00b2834e
rabbitmq 0.4.3 d8a92aa3
rabbitmq 0.4.4 HEAD
redis 0.1.1 f642698
redis 0.2.0 5ca8823
redis 0.3.0 c07c4bbd
redis 0.3.1 b7375f73
redis 0.4.0 abc8f082
redis 0.5.0 HEAD
tcp-balancer 0.1.0 f642698
tcp-balancer 0.2.0 HEAD
tenant 0.1.3 3d1b86c
tenant 0.1.4 d200480
tenant 0.1.5 e3ab858
tenant 1.0.0 7cd7de7
tenant 1.1.0 4da8ac3b
tenant 1.2.0 15478a88
tenant 1.3.0 ceefae03
tenant 1.3.1 c56e5769
tenant 1.4.0 94c688f7
tenant 1.5.0 48128743
nats 0.3.1 c62a83a7
nats 0.4.0 898374b5
nats 0.4.1 8267072d
nats 0.5.0 HEAD
postgres 0.1.0 263e47be
postgres 0.2.0 53f2365e
postgres 0.2.1 d7cfa53c
postgres 0.3.0 dfbc210b
postgres 0.4.0 e9716091
postgres 0.4.1 91b0499a
postgres 0.5.0 6c5cf5bf
postgres 0.6.0 b40e1b09
postgres 0.6.2 0f312d5c
postgres 0.7.0 4b90bf5a
postgres 0.7.1 1ec10165
postgres 0.8.0 4e68e65c
postgres 0.9.0 8267072d
postgres 0.10.0 HEAD
rabbitmq 0.1.0 263e47be
rabbitmq 0.2.0 53f2365e
rabbitmq 0.3.0 6c5cf5bf
rabbitmq 0.4.0 b40e1b09
rabbitmq 0.4.1 1128d0cb
rabbitmq 0.4.2 4b90bf5a
rabbitmq 0.4.3 1ec10165
rabbitmq 0.4.4 8267072d
rabbitmq 0.5.0 HEAD
redis 0.1.1 263e47be
redis 0.2.0 53f2365e
redis 0.3.0 6c5cf5bf
redis 0.3.1 c62a83a7
redis 0.4.0 84f3ccc0
redis 0.5.0 4e68e65c
redis 0.6.0 HEAD
tcp-balancer 0.1.0 263e47be
tcp-balancer 0.2.0 53f2365e
tcp-balancer 0.3.0 HEAD
tenant 0.1.4 afc997ef
tenant 0.1.5 e3ab858a
tenant 1.0.0 263e47be
tenant 1.1.0 c0685f43
tenant 1.2.0 dfbc210b
tenant 1.3.0 e9716091
tenant 1.3.1 91b0499a
tenant 1.4.0 71514249
tenant 1.5.0 1ec10165
tenant 1.6.0 df448b99
tenant 1.6.1 edbbb9be
tenant 1.6.2 ccedc5fe
tenant 1.6.1 c62a83a7
tenant 1.6.2 898374b5
tenant 1.6.3 2057bb96
tenant 1.6.4 3c9e50a4
tenant 1.6.5 f1e11451
tenant 1.6.6 d4634797
tenant 1.6.7 06afcf27
tenant 1.6.8 4cc48e6f
tenant 1.7.0 6c73e3f3
tenant 1.8.0 e2369ba
tenant 1.9.0 HEAD
virtual-machine 0.1.4 f2015d6
virtual-machine 0.1.5 7cd7de7
virtual-machine 0.2.0 5ca8823
virtual-machine 0.3.0 b908400
virtual-machine 0.4.0 4746d51
virtual-machine 0.5.0 cad9cde
virtual-machine 0.6.0 0e728870
virtual-machine 0.6.1 af58018a
virtual-machine 0.7.0 af58018a
virtual-machine 0.7.1 05857b95
virtual-machine 0.8.0 3fa4dd3
virtual-machine 0.8.1 3fa4dd3a
tenant 1.6.4 84f3ccc0
tenant 1.6.5 fde4bcfa
tenant 1.6.6 4e68e65c
tenant 1.6.7 0ab39f20
tenant 1.6.8 bc95159a
tenant 1.7.0 24fa7222
tenant 1.8.0 160e4e2a
tenant 1.9.0 728743db
tenant 1.9.1 HEAD
virtual-machine 0.1.4 f2015d65
virtual-machine 0.1.5 263e47be
virtual-machine 0.2.0 c0685f43
virtual-machine 0.3.0 6c5cf5bf
virtual-machine 0.4.0 b8e33d19
virtual-machine 0.5.0 1ec10165
virtual-machine 0.6.0 4e68e65c
virtual-machine 0.7.0 e23286a3
virtual-machine 0.7.1 0ab39f20
virtual-machine 0.8.0 3fa4dd3a
virtual-machine 0.8.1 93c46161
virtual-machine 0.8.2 HEAD
vm-disk 0.1.0 HEAD
vm-instance 0.1.0 ced8e5b9
vm-instance 0.2.0 4f767ee3
vm-instance 0.3.0 0e728870
vm-instance 0.4.0 af58018a
vm-instance 0.4.1 05857b95
vm-instance 0.5.0 3fa4dd3
vm-disk 0.1.0 d971f2ff
vm-disk 0.1.1 HEAD
vm-instance 0.1.0 1ec10165
vm-instance 0.2.0 84f3ccc0
vm-instance 0.3.0 4e68e65c
vm-instance 0.4.0 e23286a3
vm-instance 0.4.1 0ab39f20
vm-instance 0.5.0 3fa4dd3a
vm-instance 0.5.1 HEAD
vpn 0.1.0 f642698
vpn 0.2.0 7151424
vpn 0.3.0 a2bcf100
vpn 0.3.1 HEAD
vpn 0.1.0 263e47be
vpn 0.2.0 53f2365e
vpn 0.3.0 6c5cf5bf
vpn 0.3.1 1ec10165
vpn 0.4.0 HEAD

View File

@@ -16,10 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
version: 0.1.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: 0.1.0
appVersion: 0.1.1

View File

@@ -3,7 +3,9 @@ apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
annotations:
{{- if hasKey .Values.source "upload" }}
cdi.kubevirt.io/storage.bind.immediate.requested: ""
{{- end }}
vm-disk.cozystack.io/optical: "{{ .Values.optical }}"
name: {{ .Release.Name }}
spec:
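
With this change, only upload-sourced disks request immediate binding; a disk from any other source renders without that annotation. A sketch of the metadata for a non-upload disk (release name and `optical` value are illustrative):

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  annotations:
    vm-disk.cozystack.io/optical: "false"   # no immediate-bind annotation here
  name: example-disk
```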

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.3.1
version: 0.4.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -22,8 +22,10 @@ The VPN Service is powered by the Outline Server, an advanced and user-friendly
### Configuration parameters
| Name | Description | Value |
| ------------- | ------------------------------------------- | ----- |
| `host` | Host used to substitute into generated URLs | `""` |
| `users` | Users configuration | `{}` |
| `externalIPs` | List of externalIPs for service. | `[]` |
| Name | Description | Value |
| ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ |
| `host` | Host used to substitute into generated URLs | `""` |
| `users` | Users configuration | `{}` |
| `externalIPs` | List of externalIPs for service. | `[]` |
| `resources` | Resources | `{}` |
| `resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |

Some files were not shown because too many files have changed in this diff.