Compare commits


3 Commits

* dcacb2b79a (Timofei Larkin, 2025-03-26 15:52:09 +03:00)
  Prepare release v0.28.1
  Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>

* 82cf60ac26 (Timofei Larkin, 2025-03-26 14:52:31 +03:00)
  Merge pull request #710 from cozystack/709-update-ingress-nginx
  Update ingress-nginx to mitigate CVE-2025-1974
  (cherry picked from commit c66eb9f94c)
  Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>

* 934874aabf (Nick Volynkin, 2025-03-26 14:51:54 +03:00)
  Fix typo in VirtualPodAutoscaler Makefile
  Makefile was copied from VictoriaMetrics Operator, some lines were not changed.
  Follow-up to #676
  Resolves #705
  Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
  (cherry picked from commit 92e2173fa5)
248 changed files with 1358 additions and 4663 deletions

View File

@@ -1,91 +0,0 @@
# Cozystack Governance
This document defines the governance structure of the Cozystack community, outlining how members collaborate to achieve shared goals.
## Overview
**Cozystack**, a Cloud Native Computing Foundation (CNCF) project, is committed
to building an open, inclusive, productive, and self-governing open source
community focused on building a high-quality open source PaaS and framework for building clouds.
## Code Repositories
The following code repositories are governed by the Cozystack community and
maintained under the `cozystack` namespace:
* **[Cozystack](https://github.com/cozystack/cozystack):** Main Cozystack codebase
* **[website](https://github.com/cozystack/website):** Cozystack website and documentation sources
* **[Talm](https://github.com/cozystack/talm):** Tool for managing Talos Linux the GitOps way
* **[cozy-proxy](https://github.com/cozystack/cozy-proxy):** A simple kube-proxy addon for 1:1 NAT services in Kubernetes with NFT backend
* **[cozystack-telemetry-server](https://github.com/cozystack/cozystack-telemetry-server):** Cozystack telemetry
* **[talos-bootstrap](https://github.com/cozystack/talos-bootstrap):** An interactive Talos Linux installer
* **[talos-meta-tool](https://github.com/cozystack/talos-meta-tool):** Tool for writing network metadata into META partition
## Community Roles
* **Users:** Members that engage with the Cozystack community via any medium, including Slack, Telegram, GitHub, and mailing lists.
* **Contributors:** Members contributing to the projects by contributing and reviewing code, writing documentation,
responding to issues, participating in proposal discussions, and so on.
* **Directors:** Non-technical project leaders.
* **Maintainers**: Technical project leaders.
## Contributors
Cozystack is for everyone. Anyone can become a Cozystack contributor simply by
contributing to the project, whether through code, documentation, blog posts,
community management, or other means.
As with all Cozystack community members, contributors are expected to follow the
[Cozystack Code of Conduct](https://github.com/cozystack/cozystack/blob/main/CODE_OF_CONDUCT.md).
All contributions to Cozystack code, documentation, or other components in the
Cozystack GitHub organisation must follow the
[contributing guidelines](https://github.com/cozystack/cozystack/blob/main/CONTRIBUTING.md).
Whether these contributions are merged into the project is the prerogative of the maintainers.
## Directors
Directors are responsible for non-technical leadership functions within the project.
This includes representing Cozystack and its maintainers to the community, to the press,
and to the outside world; interfacing with CNCF and other governance entities;
and participating in project decision-making processes when appropriate.
Directors are elected by a majority vote of the maintainers.
## Maintainers
Maintainers have the right to merge code into the project.
Anyone can become a Cozystack maintainer (see "Becoming a maintainer" below).
### Expectations
Cozystack maintainers are expected to:
* Review pull requests, triage issues, and fix bugs in their areas of
expertise, ensuring that all changes go through the project's code review
and integration processes.
* Monitor cncf-cozystack-* emails, the Cozystack Slack channels in Kubernetes
and CNCF Slack workspaces, Telegram groups, and help out when possible.
* Rapidly respond to any time-sensitive security release processes.
* Attend Cozystack community meetings.
If a maintainer is no longer interested in or cannot perform the duties
listed above, they should move themselves to emeritus status.
If necessary, this can also occur through the decision-making process outlined below.
### Becoming a Maintainer
Anyone can become a Cozystack maintainer. Maintainers should be extremely
proficient in cloud native technologies and/or Go; have relevant domain expertise;
have the time and ability to meet the maintainer's expectations above;
and demonstrate the ability to work with the existing maintainers and project processes.
To become a maintainer, start by expressing interest to existing maintainers.
Existing maintainers will then ask you to demonstrate the qualifications above
by contributing PRs, doing code reviews, and other such tasks under their guidance.
After several months of working together, maintainers will decide whether to grant maintainer status.
## Project Decision-making Process
Ideally, all project decisions are resolved by consensus of maintainers and directors.
If this is not possible, a vote will be called.
The voting process is a simple majority in which each maintainer and director receives one vote.

View File

@@ -20,8 +20,8 @@ build:
 	make manifests
 manifests:
-	mkdir -p _out/assets
-	(cd packages/core/installer/; helm template -n cozy-installer installer .) > _out/assets/cozystack-installer.yaml
+	(cd packages/core/installer/; helm template -n cozy-installer installer .) > manifests/cozystack-installer.yaml
+	sed -i 's|@sha256:[^"]\+||' manifests/cozystack-installer.yaml
 repos:
 	rm -rf _out
@@ -44,6 +44,3 @@ test:
 generate:
 	hack/update-codegen.sh
-upload_assets: assets
-	hack/upload-assets.sh
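The `manifests` target on the release branch renders the installer chart straight into `manifests/cozystack-installer.yaml` and then strips pinned image digests. Roughly, the `sed` expression behaves as follows (the image string below is a made-up example; GNU sed is assumed for `\+`):

```sh
# Remove an @sha256 digest pinned inside a quoted image reference,
# using the same expression as the Makefile recipe above.
echo 'image: "ghcr.io/cozystack/cozystack/installer:v0.28.1@sha256:0123456789abcdef"' \
  | sed 's|@sha256:[^"]\+||'
# -> image: "ghcr.io/cozystack/cozystack/installer:v0.28.1"
```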

View File

@@ -1,13 +1,12 @@
 #!/bin/sh
 set -e
 file=versions_map
 charts=$(find . -mindepth 2 -maxdepth 2 -name Chart.yaml | awk 'sub("/Chart.yaml", "")')
-# <chart> <version> <commit>
 new_map=$(
   for chart in $charts; do
-    awk '/^name:/ {chart=$2} /^version:/ {version=$2} END{printf "%s %s %s\n", chart, version, "HEAD"}' "$chart/Chart.yaml"
+    awk '/^name:/ {chart=$2} /^version:/ {version=$2} END{printf "%s %s %s\n", chart, version, "HEAD"}' $chart/Chart.yaml
   done
 )
@@ -16,48 +15,47 @@ if [ ! -f "$file" ] || [ ! -s "$file" ]; then
   exit 0
 fi
-miss_map=$(echo "$new_map" | awk 'NR==FNR { nm[$1 " " $2] = $3; next } { if (!($1 " " $2 in nm)) print $1, $2, $3}' - "$file")
-
-# search accross all tags sorted by version
-search_commits=$(git ls-remote --tags origin | grep 'refs/tags/v' | sort -k2,2 -rV | awk '{print $1}')
-# add latest main commit to search
-search_commits="${search_commits} $(git rev-parse "origin/main")"
-
-resolved_miss_map=$(
-  echo "$miss_map" | while read -r chart version commit; do
-    # if version is found in HEAD, it's HEAD
-    if grep -q "^version: $version$" ./${chart}/Chart.yaml; then
-      echo "$chart $version HEAD"
-      continue
-    fi
-
-    # if commit is not HEAD, check if it's valid
-    if [ $commit != "HEAD" ]; then
-      if ! git show "${commit}:./${chart}/Chart.yaml" 2>/dev/null | grep -q "^version: $version$"; then
-        echo "Commit $commit for $chart $version is not valid" >&2
-        exit 1
-      fi
-      commit=$(git rev-parse --short "$commit")
-      echo "$chart $version $commit"
-      continue
-    fi
-
-    # if commit is HEAD, but version is not found in HEAD, check all tags
-    found_tag=""
-    for tag in $search_commits; do
-      if git show "${tag}:./${chart}/Chart.yaml" 2>/dev/null | grep -q "^version: $version$"; then
-        found_tag=$(git rev-parse --short "${tag}")
-        break
-      fi
-    done
-    if [ -z "$found_tag" ]; then
-      echo "Can't find $chart $version in any version tag or in the latest main commit" >&2
-      exit 1
-    fi
-    echo "$chart $version $found_tag"
+miss_map=$(echo "$new_map" | awk 'NR==FNR { new_map[$1 " " $2] = $3; next } { if (!($1 " " $2 in new_map)) print $1, $2, $3}' - $file)
+
+resolved_miss_map=$(
+  echo "$miss_map" | while read chart version commit; do
+    if [ "$commit" = HEAD ]; then
+      line=$(awk '/^version:/ {print NR; exit}' "./$chart/Chart.yaml")
+      change_commit=$(git --no-pager blame -L"$line",+1 -- "$chart/Chart.yaml" | awk '{print $1}')
+
+      if [ "$change_commit" = "00000000" ]; then
+        # Not committed yet, use previous commit
+        line=$(git show HEAD:"./$chart/Chart.yaml" | awk '/^version:/ {print NR; exit}')
+        commit=$(git --no-pager blame -L"$line",+1 HEAD -- "$chart/Chart.yaml" | awk '{print $1}')
+        if [ $(echo $commit | cut -c1) = "^" ]; then
+          # Previous commit not exists
+          commit=$(echo $commit | cut -c2-)
+        fi
+      else
+        # Committed, but version_map wasn't updated
+        line=$(git show HEAD:"./$chart/Chart.yaml" | awk '/^version:/ {print NR; exit}')
+        change_commit=$(git --no-pager blame -L"$line",+1 HEAD -- "$chart/Chart.yaml" | awk '{print $1}')
+        if [ $(echo $change_commit | cut -c1) = "^" ]; then
+          # Previous commit not exists
+          commit=$(echo $change_commit | cut -c2-)
+        else
+          commit=$(git describe --always "$change_commit~1")
+        fi
+      fi
+      # Check if the commit belongs to the main branch
+      if ! git merge-base --is-ancestor "$commit" main; then
+        # Find the closest parent commit that belongs to main
+        commit_in_main=$(git log --pretty=format:"%h" main -- "$chart" | head -n 1)
+        if [ -n "$commit_in_main" ]; then
+          commit="$commit_in_main"
+        else
+          # No valid commit found in main branch for $chart, skipping..."
+          continue
+        fi
+      fi
+    fi
+    echo "$chart $version $commit"
   done
 )
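The script on the release branch resolves each chart version to a commit by blaming the `version:` line of its `Chart.yaml`, rather than scanning release tags. A rough sketch of that lookup for a single chart, using the same git and awk invocations as the script (the chart directory name is a hypothetical example):

```sh
# versions_map entries have the form: <chart> <version> <commit>
chart=clickhouse   # hypothetical chart directory ./clickhouse

# Find the Chart.yaml line that declares the version, then blame that single
# line to learn the hash of the commit that last changed it.
line=$(awk '/^version:/ {print NR; exit}' "./$chart/Chart.yaml")
git --no-pager blame -L"$line",+1 -- "./$chart/Chart.yaml" | awk '{print $1}'
```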

View File

@@ -1,8 +0,0 @@
#!/bin/bash
set -xe
version=$(git describe --tags)
gh release upload $version _out/assets/cozystack-installer.yaml
gh release upload $version _out/assets/metal-amd64.iso
gh release upload $version _out/assets/metal-amd64.raw.xz
gh release upload $version _out/assets/nocloud-amd64.raw.xz

View File

@@ -0,0 +1,105 @@
---
# Source: cozy-installer/templates/cozystack.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: cozy-system
  labels:
    cozystack.io/system: "true"
    pod-security.kubernetes.io/enforce: privileged
---
# Source: cozy-installer/templates/cozystack.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cozystack
  namespace: cozy-system
---
# Source: cozy-installer/templates/cozystack.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cozystack
subjects:
- kind: ServiceAccount
  name: cozystack
  namespace: cozy-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
# Source: cozy-installer/templates/cozystack.yaml
apiVersion: v1
kind: Service
metadata:
  name: cozystack
  namespace: cozy-system
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8123
  selector:
    app: cozystack
  type: ClusterIP
---
# Source: cozy-installer/templates/cozystack.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cozystack
  namespace: cozy-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cozystack
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: cozystack
    spec:
      hostNetwork: true
      serviceAccountName: cozystack
      containers:
      - name: cozystack
        image: "ghcr.io/cozystack/cozystack/installer:v0.28.1"
        env:
        - name: KUBERNETES_SERVICE_HOST
          value: localhost
        - name: KUBERNETES_SERVICE_PORT
          value: "7445"
        - name: K8S_AWAIT_ELECTION_ENABLED
          value: "1"
        - name: K8S_AWAIT_ELECTION_NAME
          value: cozystack
        - name: K8S_AWAIT_ELECTION_LOCK_NAME
          value: cozystack
        - name: K8S_AWAIT_ELECTION_LOCK_NAMESPACE
          value: cozy-system
        - name: K8S_AWAIT_ELECTION_IDENTITY
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      - name: assets
        image: "ghcr.io/cozystack/cozystack/installer:v0.28.1"
        command:
        - /usr/bin/cozystack-assets-server
        - "-dir=/cozystack/assets"
        - "-address=:8123"
        ports:
        - name: http
          containerPort: 8123
      tolerations:
      - key: "node.kubernetes.io/not-ready"
        operator: "Exists"
        effect: "NoSchedule"
      - key: "node.cilium.io/agent-not-ready"
        operator: "Exists"
        effect: "NoSchedule"
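For reference, a manifest like the one above is normally applied with kubectl; a minimal sketch, assuming cluster access and that the file has been saved locally (namespace, labels, and resource names are taken from the manifest itself):

```sh
# Apply the generated installer manifest, then wait for the installer Deployment.
kubectl apply -f cozystack-installer.yaml
kubectl -n cozy-system wait deployment/cozystack --for=condition=Available --timeout=5m
kubectl -n cozy-system get pods -l app=cozystack
```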

View File

@@ -16,7 +16,7 @@ type: application
 # This is the chart version. This version number should be incremented each time you make changes
 # to the chart and its templates, including the app version.
 # Versions are expected to follow Semantic Versioning (https://semver.org/)
-version: 0.7.0
+version: 0.6.2
 # This is the version number of the application being deployed. This version number should be
 # incremented each time you make changes to the application. Versions are not expected to

View File

@@ -36,15 +36,13 @@ more details:
 ### Backup parameters
 
 | Name                     | Description                                     | Value                                                  |
 | ------------------------ | ----------------------------------------------- | ------------------------------------------------------ |
 | `backup.enabled`         | Enable pereiodic backups                        | `false`                                                |
 | `backup.s3Region`        | The AWS S3 region where backups are stored      | `us-east-1`                                            |
 | `backup.s3Bucket`        | The S3 bucket used for storing backups          | `s3.example.org/clickhouse-backups`                    |
 | `backup.schedule`        | Cron schedule for automated backups             | `0 2 * * *`                                            |
 | `backup.cleanupStrategy` | The strategy for cleaning up old backups        | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
 | `backup.s3AccessKey`     | The access key for S3, used for authentication  | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu`                     |
 | `backup.s3SecretKey`     | The secret key for S3, used for authentication  | `ju3eum4dekeich9ahM1te8waeGai0oog`                     |
 | `backup.resticPassword`  | The password for Restic backup encryption       | `ChaXoveekoh6eigh4siesheeda2quai0`                     |
-| `resources`              | Resources                                       | `{}`                                                   |
-| `resourcesPreset`        | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
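If the chart is rendered directly with Helm, the documented `backup.*` parameters can be overridden on the command line; a hypothetical sketch (release name, chart path, and all credentials are placeholders):

```sh
helm template clickhouse ./clickhouse \
  --set backup.enabled=true \
  --set backup.s3Region=us-east-1 \
  --set backup.s3Bucket=s3.example.org/clickhouse-backups \
  --set backup.schedule='0 2 * * *' \
  --set backup.s3AccessKey=EXAMPLE_ACCESS_KEY \
  --set backup.s3SecretKey=EXAMPLE_SECRET_KEY \
  --set backup.resticPassword=EXAMPLE_PASSWORD
```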

View File

@@ -1,50 +0,0 @@
{{/*
Copyright Broadcom, Inc. All Rights Reserved.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}

View File

@@ -121,11 +121,6 @@ spec:
       containers:
         - name: clickhouse
           image: clickhouse/clickhouse-server:24.9.2.42
-          {{- if .Values.resources }}
-          resources: {{- toYaml .Values.resources | nindent 16 }}
-          {{- else if ne .Values.resourcesPreset "none" }}
-          resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 16 }}
-          {{- end }}
           volumeMounts:
             - name: data-volume-template
               mountPath: /var/lib/clickhouse

View File

@@ -76,16 +76,6 @@
           "default": "ChaXoveekoh6eigh4siesheeda2quai0"
         }
       }
-    },
-    "resources": {
-      "type": "object",
-      "description": "Resources",
-      "default": {}
-    },
-    "resourcesPreset": {
-      "type": "string",
-      "description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
-      "default": "nano"
     }
   }
 }

View File

@@ -46,16 +46,3 @@ backup:
   s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
   s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
   resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
-
-## @param resources Resources
-resources: {}
-# resources:
-#   limits:
-#     cpu: 4000m
-#     memory: 4Gi
-#   requests:
-#     cpu: 100m
-#     memory: 512Mi
-
-## @param resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
-resourcesPreset: "nano"

View File

@@ -16,7 +16,7 @@ type: application
 # This is the chart version. This version number should be incremented each time you make changes
 # to the chart and its templates, including the app version.
 # Versions are expected to follow Semantic Versioning (https://semver.org/)
-version: 0.5.0
+version: 0.4.2
 # This is the version number of the application being deployed. This version number should be
 # incremented each time you make changes to the application. Versions are not expected to

View File

@@ -21,17 +21,15 @@
 ### Backup parameters
 
 | Name                     | Description                                     | Value                                                  |
 | ------------------------ | ----------------------------------------------- | ------------------------------------------------------ |
 | `backup.enabled`         | Enable pereiodic backups                        | `false`                                                |
 | `backup.s3Region`        | The AWS S3 region where backups are stored      | `us-east-1`                                            |
 | `backup.s3Bucket`        | The S3 bucket used for storing backups          | `s3.example.org/postgres-backups`                      |
 | `backup.schedule`        | Cron schedule for automated backups             | `0 2 * * *`                                            |
 | `backup.cleanupStrategy` | The strategy for cleaning up old backups        | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
 | `backup.s3AccessKey`     | The access key for S3, used for authentication  | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu`                     |
 | `backup.s3SecretKey`     | The secret key for S3, used for authentication  | `ju3eum4dekeich9ahM1te8waeGai0oog`                     |
 | `backup.resticPassword`  | The password for Restic backup encryption       | `ChaXoveekoh6eigh4siesheeda2quai0`                     |
-| `resources`              | Resources                                       | `{}`                                                   |
-| `resourcesPreset`        | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |

View File

@@ -1,50 +0,0 @@
{{/*
Copyright Broadcom, Inc. All Rights Reserved.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}

View File

@@ -15,11 +15,7 @@ spec:
   {{- end }}
   minSyncReplicas: {{ .Values.quorum.minSyncReplicas }}
   maxSyncReplicas: {{ .Values.quorum.maxSyncReplicas }}
-  {{- if .Values.resources }}
-  resources: {{- toYaml .Values.resources | nindent 4 }}
-  {{- else if ne .Values.resourcesPreset "none" }}
-  resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 4 }}
-  {{- end }}
   monitoring:
     enablePodMonitor: true

View File

@@ -81,16 +81,6 @@
           "default": "ChaXoveekoh6eigh4siesheeda2quai0"
         }
       }
-    },
-    "resources": {
-      "type": "object",
-      "description": "Resources",
-      "default": {}
-    },
-    "resourcesPreset": {
-      "type": "string",
-      "description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
-      "default": "nano"
     }
   }
 }

View File

@@ -48,16 +48,3 @@ backup:
   s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
   s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
   resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
-
-## @param resources Resources
-resources: {}
-# resources:
-#   limits:
-#     cpu: 4000m
-#     memory: 4Gi
-#   requests:
-#     cpu: 100m
-#     memory: 512Mi
-
-## @param resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
-resourcesPreset: "nano"

View File

@@ -16,7 +16,7 @@ type: application
 # This is the chart version. This version number should be incremented each time you make changes
 # to the chart and its templates, including the app version.
 # Versions are expected to follow Semantic Versioning (https://semver.org/)
-version: 0.4.0
+version: 0.3.1
 # This is the version number of the application being deployed. This version number should be
 # incremented each time you make changes to the application. Versions are not expected to

View File

@@ -60,17 +60,13 @@ VTS module shows wrong upstream resonse time
 ### Common parameters
 
 | Name               | Description                                      | Value   |
 | ------------------ | ------------------------------------------------ | ------- |
 | `external`         | Enable external access from outside the cluster  | `false` |
 | `size`             | Persistent Volume size                           | `10Gi`  |
 | `storageClass`     | StorageClass used to store the data              | `""`    |
 | `haproxy.replicas` | Number of HAProxy replicas                       | `2`     |
 | `nginx.replicas`   | Number of Nginx replicas                         | `2`     |
-| `haproxy.resources`       | Resources | `{}`   |
-| `haproxy.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
-| `nginx.resources`         | Resources | `{}`   |
-| `nginx.resourcesPreset`   | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
 
 ### Configuration parameters

View File

@@ -1,50 +0,0 @@
{{/*
Copyright Broadcom, Inc. All Rights Reserved.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}

View File

@@ -33,11 +33,6 @@ spec:
       containers:
         - image: haproxy:latest
           name: haproxy
-          {{- if .Values.haproxy.resources }}
-          resources: {{- toYaml .Values.haproxy.resources | nindent 10 }}
-          {{- else if ne .Values.haproxy.resourcesPreset "none" }}
-          resources: {{- include "resources.preset" (dict "type" .Values.haproxy.resourcesPreset "Release" .Release) | nindent 10 }}
-          {{- end }}
          ports:
            - containerPort: 8080
              name: http

View File

@@ -52,11 +52,6 @@ spec:
       shareProcessNamespace: true
       containers:
         - name: nginx
-          {{- if $.Values.nginx.resources }}
-          resources: {{- toYaml $.Values.nginx.resources | nindent 10 }}
-          {{- else if ne $.Values.nginx.resourcesPreset "none" }}
-          resources: {{- include "resources.preset" (dict "type" $.Values.nginx.resourcesPreset "Release" $.Release) | nindent 10 }}
-          {{- end }}
          image: "{{ $.Files.Get "images/nginx-cache.tag" | trim }}"
          readinessProbe:
            httpGet:
@@ -88,13 +83,6 @@
         - name: reloader
           image: "{{ $.Files.Get "images/nginx-cache.tag" | trim }}"
           command: ["/usr/bin/nginx-reloader.sh"]
-          resources:
-            limits:
-              cpu: 50m
-              memory: 50Mi
-            requests:
-              cpu: 50m
-              memory: 50Mi
          #command: ["sleep", "infinity"]
          volumeMounts:
            - mountPath: /etc/nginx/nginx.conf

View File

@@ -24,16 +24,6 @@
         "type": "number",
         "description": "Number of HAProxy replicas",
         "default": 2
-      },
-      "resources": {
-        "type": "object",
-        "description": "Resources",
-        "default": {}
-      },
-      "resourcesPreset": {
-        "type": "string",
-        "description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
-        "default": "nano"
       }
     }
   },
@@ -44,16 +34,6 @@
         "type": "number",
         "description": "Number of Nginx replicas",
         "default": 2
-      },
-      "resources": {
-        "type": "object",
-        "description": "Resources",
-        "default": {}
-      },
-      "resourcesPreset": {
-        "type": "string",
-        "description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
-        "default": "nano"
       }
     }
   },

View File

@@ -12,32 +12,8 @@ size: 10Gi
 storageClass: ""
 haproxy:
   replicas: 2
-  ## @param haproxy.resources Resources
-  resources: {}
-  # resources:
-  #   limits:
-  #     cpu: 4000m
-  #     memory: 4Gi
-  #   requests:
-  #     cpu: 100m
-  #     memory: 512Mi
-
-  ## @param haproxy.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
-  resourcesPreset: "nano"
 
 nginx:
   replicas: 2
-  ## @param nginx.resources Resources
-  resources: {}
-  # resources:
-  #   limits:
-  #     cpu: 4000m
-  #     memory: 4Gi
-  #   requests:
-  #     cpu: 100m
-  #     memory: 512Mi
-
-  ## @param nginx.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
-  resourcesPreset: "nano"
 
 ## @section Configuration parameters

View File

@@ -16,7 +16,7 @@ type: application
 # This is the chart version. This version number should be incremented each time you make changes
 # to the chart and its templates, including the app version.
 # Versions are expected to follow Semantic Versioning (https://semver.org/)
-version: 0.5.0
+version: 0.3.3
 # This is the version number of the application being deployed. This version number should be
 # incremented each time you make changes to the application. Versions are not expected to

View File

@@ -4,19 +4,15 @@
 ### Common parameters
 
 | Name                     | Description                                      | Value   |
 | ------------------------ | ------------------------------------------------ | ------- |
 | `external`               | Enable external access from outside the cluster  | `false` |
 | `kafka.size`             | Persistent Volume size for Kafka                 | `10Gi`  |
 | `kafka.replicas`         | Number of Kafka replicas                         | `3`     |
 | `kafka.storageClass`     | StorageClass used to store the Kafka data        | `""`    |
 | `zookeeper.size`         | Persistent Volume size for ZooKeeper             | `5Gi`   |
 | `zookeeper.replicas`     | Number of ZooKeeper replicas                     | `3`     |
 | `zookeeper.storageClass` | StorageClass used to store the ZooKeeper data    | `""`    |
-| `kafka.resources`           | Resources | `{}`   |
-| `kafka.resourcesPreset`     | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
-| `zookeeper.resources`       | Resources | `{}`   |
-| `zookeeper.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
 
 ### Configuration parameters

View File

@@ -1,50 +0,0 @@
{{/*
Copyright Broadcom, Inc. All Rights Reserved.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}

View File

@@ -8,11 +8,6 @@ metadata:
 spec:
   kafka:
     replicas: {{ .Values.kafka.replicas }}
-    {{- if .Values.kafka.resources }}
-    resources: {{- toYaml .Values.kafka.resources | nindent 6 }}
-    {{- else if ne .Values.kafka.resourcesPreset "none" }}
-    resources: {{- include "resources.preset" (dict "type" .Values.kafka.resourcesPreset "Release" .Release) | nindent 6 }}
-    {{- end }}
     listeners:
       - name: plain
         port: 9092
@@ -70,11 +65,6 @@
           key: kafka-metrics-config.yml
   zookeeper:
     replicas: {{ .Values.zookeeper.replicas }}
-    {{- if .Values.zookeeper.resources }}
-    resources: {{- toYaml .Values.zookeeper.resources | nindent 6 }}
-    {{- else if ne .Values.zookeeper.resourcesPreset "none" }}
-    resources: {{- include "resources.preset" (dict "type" .Values.zookeeper.resourcesPreset "Release" .Release) | nindent 6 }}
-    {{- end }}
     storage:
       type: persistent-claim
       {{- with .Values.zookeeper.size }}

View File

@@ -24,16 +24,6 @@
         "type": "string",
         "description": "StorageClass used to store the Kafka data",
         "default": ""
-      },
-      "resources": {
-        "type": "object",
-        "description": "Resources",
-        "default": {}
-      },
-      "resourcesPreset": {
-        "type": "string",
-        "description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
-        "default": "nano"
       }
     }
   },
@@ -54,16 +44,6 @@
         "type": "string",
         "description": "StorageClass used to store the ZooKeeper data",
         "default": ""
-      },
-      "resources": {
-        "type": "object",
-        "description": "Resources",
-        "default": {}
-      },
-      "resourcesPreset": {
-        "type": "string",
-        "description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
-        "default": "nano"
       }
     }
   },

View File

@@ -14,35 +14,10 @@ kafka:
   size: 10Gi
   replicas: 3
   storageClass: ""
-  ## @param kafka.resources Resources
-  resources: {}
-  # resources:
-  #   limits:
-  #     cpu: 4000m
-  #     memory: 4Gi
-  #   requests:
-  #     cpu: 100m
-  #     memory: 512Mi
-
-  ## @param kafka.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
-  resourcesPreset: "nano"
 
 zookeeper:
   size: 5Gi
   replicas: 3
   storageClass: ""
-  ## @param zookeeper.resources Resources
-  resources: {}
-  # resources:
-  #   limits:
-  #     cpu: 4000m
-  #     memory: 4Gi
-  #   requests:
-  #     cpu: 100m
-  #     memory: 512Mi
-
-  ## @param zookeeper.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
-  resourcesPreset: "nano"
 
 ## @section Configuration parameters

View File

@@ -16,7 +16,7 @@ type: application
 # This is the chart version. This version number should be incremented each time you make changes
 # to the chart and its templates, including the app version.
 # Versions are expected to follow Semantic Versioning (https://semver.org/)
-version: 0.17.0
+version: 0.15.2
 # This is the version number of the application being deployed. This version number should be
 # incremented each time you make changes to the application. Versions are not expected to

View File

@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/cluster-autoscaler:0.15.2@sha256:967e51702102d0dbd97f9847de4159d62681b31eb606322d2c29755393c2236e
+ghcr.io/cozystack/cozystack/cluster-autoscaler:0.15.2@sha256:ea5cd225dbd1233afe2bfd727b9f90847f198f5d231871141d494d491fdee795

View File

@@ -1 +1 @@
-ghcr.io/cozystack/cozystack/kubevirt-cloud-provider:latest@sha256:47ad85a2bb2b11818df85e80cbc6e07021e97e429d5bb020ce8db002b37a77f1
+ghcr.io/cozystack/cozystack/kubevirt-cloud-provider:0.15.2@sha256:de98b18691cbd1e0d7d886c57873c2ecdae7a5ab2e3c4c59f9a24bdc321622a9
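To check which digest a tag currently resolves to (for comparison with the pinned reference above), something like the following can be used, assuming Docker with buildx is available:

```sh
# Print the manifest digest behind the tag; it should match the @sha256 pin
# above if the pin is up to date.
docker buildx imagetools inspect ghcr.io/cozystack/cozystack/kubevirt-cloud-provider:0.15.2
```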

View File

@@ -3,11 +3,12 @@ FROM --platform=linux/amd64 golang:1.20.6 AS builder
 RUN git clone https://github.com/kubevirt/cloud-provider-kubevirt /go/src/kubevirt.io/cloud-provider-kubevirt \
  && cd /go/src/kubevirt.io/cloud-provider-kubevirt \
- && git checkout 443a1fe
+ && git checkout da9e0cf
 
 WORKDIR /go/src/kubevirt.io/cloud-provider-kubevirt
 
 # see: https://github.com/kubevirt/cloud-provider-kubevirt/pull/335
+# see: https://github.com/kubevirt/cloud-provider-kubevirt/pull/336
 ADD patches /patches
 RUN git apply /patches/*.diff
 
 RUN go get 'k8s.io/endpointslice/util@v0.28' 'k8s.io/apiserver@v0.28'
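For completeness, a minimal sketch of building the patched image locally from the directory that contains this Dockerfile and its patches/ folder (the tag name is an arbitrary placeholder):

```sh
docker build --platform linux/amd64 -t kubevirt-cloud-provider:local .
```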

View File

@@ -0,0 +1,20 @@
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller.go b/pkg/controller/kubevirteps/kubevirteps_controller.go
index a3c1aa33..95c31438 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller.go
@@ -412,11 +412,11 @@ func (c *Controller) reconcileByAddressType(service *v1.Service, tenantSlices []
// Create the desired port configuration
var desiredPorts []discovery.EndpointPort
- for _, port := range service.Spec.Ports {
+ for i := range service.Spec.Ports {
desiredPorts = append(desiredPorts, discovery.EndpointPort{
- Port: &port.TargetPort.IntVal,
- Protocol: &port.Protocol,
- Name: &port.Name,
+ Port: &service.Spec.Ports[i].TargetPort.IntVal,
+ Protocol: &service.Spec.Ports[i].Protocol,
+ Name: &service.Spec.Ports[i].Name,
})
}

View File

@@ -0,0 +1,129 @@
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller.go b/pkg/controller/kubevirteps/kubevirteps_controller.go
index a3c1aa33..6f6e3d32 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller.go
@@ -108,32 +108,24 @@ func newRequest(reqType ReqType, obj interface{}, oldObj interface{}) *Request {
}
func (c *Controller) Init() error {
-
- // Act on events from Services on the infra cluster. These are created by the EnsureLoadBalancer function.
- // We need to watch for these events so that we can update the EndpointSlices in the infra cluster accordingly.
+ // Existing Service event handlers...
_, err := c.infraFactory.Core().V1().Services().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
- // cast obj to Service
svc := obj.(*v1.Service)
- // Only act on Services of type LoadBalancer
if svc.Spec.Type == v1.ServiceTypeLoadBalancer {
klog.Infof("Service added: %v/%v", svc.Namespace, svc.Name)
c.queue.Add(newRequest(AddReq, obj, nil))
}
},
UpdateFunc: func(oldObj, newObj interface{}) {
- // cast obj to Service
newSvc := newObj.(*v1.Service)
- // Only act on Services of type LoadBalancer
if newSvc.Spec.Type == v1.ServiceTypeLoadBalancer {
klog.Infof("Service updated: %v/%v", newSvc.Namespace, newSvc.Name)
c.queue.Add(newRequest(UpdateReq, newObj, oldObj))
}
},
DeleteFunc: func(obj interface{}) {
- // cast obj to Service
svc := obj.(*v1.Service)
- // Only act on Services of type LoadBalancer
if svc.Spec.Type == v1.ServiceTypeLoadBalancer {
klog.Infof("Service deleted: %v/%v", svc.Namespace, svc.Name)
c.queue.Add(newRequest(DeleteReq, obj, nil))
@@ -144,7 +136,7 @@ func (c *Controller) Init() error {
return err
}
- // Monitor endpoint slices that we are interested in based on known services in the infra cluster
+ // Existing EndpointSlice event handlers in tenant cluster...
_, err = c.tenantFactory.Discovery().V1().EndpointSlices().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
eps := obj.(*discovery.EndpointSlice)
@@ -194,10 +186,80 @@ func (c *Controller) Init() error {
return err
}
- //TODO: Add informer for EndpointSlices in the infra cluster to watch for (unwanted) changes
+ // Add an informer for EndpointSlices in the infra cluster
+ _, err = c.infraFactory.Discovery().V1().EndpointSlices().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
+ AddFunc: func(obj interface{}) {
+ eps := obj.(*discovery.EndpointSlice)
+ if c.managedByController(eps) {
+ svc, svcErr := c.getInfraServiceForEPS(context.TODO(), eps)
+ if svcErr != nil {
+ klog.Errorf("Failed to get infra Service for EndpointSlice %s/%s: %v", eps.Namespace, eps.Name, svcErr)
+ return
+ }
+ if svc != nil {
+ klog.Infof("Infra EndpointSlice added: %v/%v, requeuing Service: %v/%v", eps.Namespace, eps.Name, svc.Namespace, svc.Name)
+ c.queue.Add(newRequest(AddReq, svc, nil))
+ }
+ }
+ },
+ UpdateFunc: func(oldObj, newObj interface{}) {
+ eps := newObj.(*discovery.EndpointSlice)
+ if c.managedByController(eps) {
+ svc, svcErr := c.getInfraServiceForEPS(context.TODO(), eps)
+ if svcErr != nil {
+ klog.Errorf("Failed to get infra Service for EndpointSlice %s/%s: %v", eps.Namespace, eps.Name, svcErr)
+ return
+ }
+ if svc != nil {
+ klog.Infof("Infra EndpointSlice updated: %v/%v, requeuing Service: %v/%v", eps.Namespace, eps.Name, svc.Namespace, svc.Name)
+ c.queue.Add(newRequest(UpdateReq, svc, nil))
+ }
+ }
+ },
+ DeleteFunc: func(obj interface{}) {
+ eps := obj.(*discovery.EndpointSlice)
+ if c.managedByController(eps) {
+ svc, svcErr := c.getInfraServiceForEPS(context.TODO(), eps)
+ if svcErr != nil {
+ klog.Errorf("Failed to get infra Service for EndpointSlice %s/%s on delete: %v", eps.Namespace, eps.Name, svcErr)
+ return
+ }
+ if svc != nil {
+ klog.Infof("Infra EndpointSlice deleted: %v/%v, requeuing Service: %v/%v", eps.Namespace, eps.Name, svc.Namespace, svc.Name)
+ c.queue.Add(newRequest(DeleteReq, svc, nil))
+ }
+ }
+ },
+ })
+ if err != nil {
+ return err
+ }
+
return nil
}
+// getInfraServiceForEPS returns the Service in the infra cluster associated with the given EndpointSlice.
+// It does this by reading the "kubernetes.io/service-name" label from the EndpointSlice, which should correspond
+// to the Service name. If not found or if the Service doesn't exist, it returns nil.
+func (c *Controller) getInfraServiceForEPS(ctx context.Context, eps *discovery.EndpointSlice) (*v1.Service, error) {
+ svcName := eps.Labels[discovery.LabelServiceName]
+ if svcName == "" {
+ // No service name label found, can't determine infra service.
+ return nil, nil
+ }
+
+ svc, err := c.infraClient.CoreV1().Services(c.infraNamespace).Get(ctx, svcName, metav1.GetOptions{})
+ if err != nil {
+ if k8serrors.IsNotFound(err) {
+ // Service doesn't exist
+ return nil, nil
+ }
+ return nil, err
+ }
+
+ return svc, nil
+}
+
// Run starts an asynchronous loop that monitors and updates GKENetworkParamSet in the cluster.
func (c *Controller) Run(numWorkers int, stopCh <-chan struct{}, controllerManagerMetrics *controllersmetrics.ControllerManagerMetrics) {
defer utilruntime.HandleCrash()

View File

@@ -1,689 +0,0 @@
diff --git a/.golangci.yml b/.golangci.yml
index cf72a41a2..1c9237e83 100644
--- a/.golangci.yml
+++ b/.golangci.yml
@@ -122,3 +122,9 @@ linters:
# - testpackage
# - revive
# - wsl
+issues:
+ exclude-rules:
+ - filename: "kubevirteps_controller_test.go"
+ linters:
+ - govet
+ text: "declaration of \"err\" shadows"
diff --git a/cmd/kubevirt-cloud-controller-manager/kubevirteps.go b/cmd/kubevirt-cloud-controller-manager/kubevirteps.go
index 74166b5d9..4e744f8de 100644
--- a/cmd/kubevirt-cloud-controller-manager/kubevirteps.go
+++ b/cmd/kubevirt-cloud-controller-manager/kubevirteps.go
@@ -101,7 +101,18 @@ func startKubevirtCloudController(
klog.Infof("Setting up kubevirtEPSController")
- kubevirtEPSController := kubevirteps.NewKubevirtEPSController(tenantClient, infraClient, infraDynamic, kubevirtCloud.Namespace())
+ clusterName := ccmConfig.ComponentConfig.KubeCloudShared.ClusterName
+ if clusterName == "" {
+ klog.Fatalf("Required flag --cluster-name is missing")
+ }
+
+ kubevirtEPSController := kubevirteps.NewKubevirtEPSController(
+ tenantClient,
+ infraClient,
+ infraDynamic,
+ kubevirtCloud.Namespace(),
+ clusterName,
+ )
klog.Infof("Initializing kubevirtEPSController")
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller.go b/pkg/controller/kubevirteps/kubevirteps_controller.go
index 6f6e3d322..b56882c12 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller.go
@@ -54,10 +54,10 @@ type Controller struct {
infraDynamic dynamic.Interface
infraFactory informers.SharedInformerFactory
- infraNamespace string
- queue workqueue.RateLimitingInterface
- maxRetries int
-
+ infraNamespace string
+ clusterName string
+ queue workqueue.RateLimitingInterface
+ maxRetries int
maxEndPointsPerSlice int
}
@@ -65,8 +65,9 @@ func NewKubevirtEPSController(
tenantClient kubernetes.Interface,
infraClient kubernetes.Interface,
infraDynamic dynamic.Interface,
- infraNamespace string) *Controller {
-
+ infraNamespace string,
+ clusterName string,
+) *Controller {
tenantFactory := informers.NewSharedInformerFactory(tenantClient, 0)
infraFactory := informers.NewSharedInformerFactoryWithOptions(infraClient, 0, informers.WithNamespace(infraNamespace))
queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
@@ -79,6 +80,7 @@ func NewKubevirtEPSController(
infraDynamic: infraDynamic,
infraFactory: infraFactory,
infraNamespace: infraNamespace,
+ clusterName: clusterName,
queue: queue,
maxRetries: 25,
maxEndPointsPerSlice: 100,
@@ -320,22 +322,30 @@ func (c *Controller) processNextItem(ctx context.Context) bool {
// getInfraServiceFromTenantEPS returns the Service in the infra cluster that is associated with the given tenant endpoint slice.
func (c *Controller) getInfraServiceFromTenantEPS(ctx context.Context, slice *discovery.EndpointSlice) (*v1.Service, error) {
- infraServices, err := c.infraClient.CoreV1().Services(c.infraNamespace).List(ctx,
- metav1.ListOptions{LabelSelector: fmt.Sprintf("%s=%s,%s=%s", kubevirt.TenantServiceNameLabelKey, slice.Labels["kubernetes.io/service-name"],
- kubevirt.TenantServiceNamespaceLabelKey, slice.Namespace)})
+ tenantServiceName := slice.Labels[discovery.LabelServiceName]
+ tenantServiceNamespace := slice.Namespace
+
+ labelSelector := fmt.Sprintf(
+ "%s=%s,%s=%s,%s=%s",
+ kubevirt.TenantServiceNameLabelKey, tenantServiceName,
+ kubevirt.TenantServiceNamespaceLabelKey, tenantServiceNamespace,
+ kubevirt.TenantClusterNameLabelKey, c.clusterName,
+ )
+
+ svcList, err := c.infraClient.CoreV1().Services(c.infraNamespace).List(ctx, metav1.ListOptions{
+ LabelSelector: labelSelector,
+ })
if err != nil {
- klog.Errorf("Failed to get Service in Infra for EndpointSlice %s in namespace %s: %v", slice.Name, slice.Namespace, err)
+ klog.Errorf("Failed to get Service in Infra for EndpointSlice %s in namespace %s: %v", slice.Name, tenantServiceNamespace, err)
return nil, err
}
- if len(infraServices.Items) > 1 {
- // This should never be possible, only one service should exist for a given tenant endpoint slice
- klog.Errorf("Multiple services found for tenant endpoint slice %s in namespace %s", slice.Name, slice.Namespace)
+ if len(svcList.Items) > 1 {
+ klog.Errorf("Multiple services found for tenant endpoint slice %s in namespace %s", slice.Name, tenantServiceNamespace)
return nil, errors.New("multiple services found for tenant endpoint slice")
}
- if len(infraServices.Items) == 1 {
- return &infraServices.Items[0], nil
+ if len(svcList.Items) == 1 {
+ return &svcList.Items[0], nil
}
- // No service found, possible if service is deleted.
return nil, nil
}
@@ -363,16 +373,27 @@ func (c *Controller) getTenantEPSFromInfraService(ctx context.Context, svc *v1.S
// getInfraEPSFromInfraService returns the EndpointSlices in the infra cluster that are associated with the given infra service.
func (c *Controller) getInfraEPSFromInfraService(ctx context.Context, svc *v1.Service) ([]*discovery.EndpointSlice, error) {
var infraEPSSlices []*discovery.EndpointSlice
- klog.Infof("Searching for endpoints on infra cluster for service %s in namespace %s.", svc.Name, svc.Namespace)
- result, err := c.infraClient.DiscoveryV1().EndpointSlices(svc.Namespace).List(ctx,
- metav1.ListOptions{LabelSelector: fmt.Sprintf("%s=%s", discovery.LabelServiceName, svc.Name)})
+
+ klog.Infof("Searching for EndpointSlices in infra cluster for service %s/%s", svc.Namespace, svc.Name)
+
+ labelSelector := fmt.Sprintf(
+ "%s=%s,%s=%s",
+ discovery.LabelServiceName, svc.Name,
+ kubevirt.TenantClusterNameLabelKey, c.clusterName,
+ )
+
+ result, err := c.infraClient.DiscoveryV1().EndpointSlices(svc.Namespace).List(ctx, metav1.ListOptions{
+ LabelSelector: labelSelector,
+ })
if err != nil {
klog.Errorf("Failed to get EndpointSlices for Service %s in namespace %s: %v", svc.Name, svc.Namespace, err)
return nil, err
}
+
for _, eps := range result.Items {
infraEPSSlices = append(infraEPSSlices, &eps)
}
+
return infraEPSSlices, nil
}
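One detail in the unchanged loop above: with a Go toolchain older than 1.22, &eps takes the address of the single per-loop variable, so every pointer appended to infraEPSSlices would refer to the last item. A self-contained sketch of the pitfall and the version-independent, index-based form (an illustration only; whether it applies depends on the module's Go version):

package main

import "fmt"

type endpointSlice struct{ Name string }

func main() {
	items := []endpointSlice{{"a"}, {"b"}, {"c"}}

	// On Go < 1.22 every &e points at the same loop variable, so all three
	// pointers end up printing "c"; on Go >= 1.22 each iteration gets a fresh variable.
	var byVar []*endpointSlice
	for _, e := range items {
		byVar = append(byVar, &e)
	}

	// Index-based form: behaves identically on every Go version.
	var byIdx []*endpointSlice
	for i := range items {
		byIdx = append(byIdx, &items[i])
	}

	fmt.Println(byVar[0].Name, byIdx[0].Name)
}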
@@ -382,74 +403,117 @@ func (c *Controller) reconcile(ctx context.Context, r *Request) error {
return errors.New("could not cast object to service")
}
+ // Skip services not managed by this controller (missing required labels)
if service.Labels[kubevirt.TenantServiceNameLabelKey] == "" ||
service.Labels[kubevirt.TenantServiceNamespaceLabelKey] == "" ||
service.Labels[kubevirt.TenantClusterNameLabelKey] == "" {
- klog.Infof("This LoadBalancer Service: %s is not managed by the %s. Skipping.", service.Name, ControllerName)
+ klog.Infof("Service %s is not managed by this controller. Skipping.", service.Name)
+ return nil
+ }
+
+ // Skip services for other clusters
+ if service.Labels[kubevirt.TenantClusterNameLabelKey] != c.clusterName {
+ klog.Infof("Skipping Service %s: cluster label %q doesn't match our clusterName %q", service.Name, service.Labels[kubevirt.TenantClusterNameLabelKey], c.clusterName)
return nil
}
+
klog.Infof("Reconciling: %v", service.Name)
+ /*
+ 1) Check if Service in the infra cluster is actually present.
+ If it's not found, mark it as 'deleted' so that we don't create new slices.
+ */
serviceDeleted := false
- svc, err := c.infraFactory.Core().V1().Services().Lister().Services(c.infraNamespace).Get(service.Name)
+ infraSvc, err := c.infraFactory.Core().V1().Services().Lister().Services(c.infraNamespace).Get(service.Name)
if err != nil {
- klog.Infof("Service %s in namespace %s is deleted.", service.Name, service.Namespace)
+ // The Service is not present in the infra lister => treat as deleted
+ klog.Infof("Service %s in namespace %s is deleted (or not found).", service.Name, service.Namespace)
serviceDeleted = true
} else {
- service = svc
+ // Use the actual object from the lister, so we have the latest state
+ service = infraSvc
}
+ /*
+ 2) Get all existing EndpointSlices in the infra cluster that belong to this LB Service.
+ We'll decide which of them should be updated or deleted.
+ */
infraExistingEpSlices, err := c.getInfraEPSFromInfraService(ctx, service)
if err != nil {
return err
}
- // At this point we have the current state of the 3 main objects we are interested in:
- // 1. The Service in the infra cluster, the one created by the KubevirtCloudController.
- // 2. The EndpointSlices in the tenant cluster, created for the tenant cluster's Service.
- // 3. The EndpointSlices in the infra cluster, managed by this controller.
-
slicesToDelete := []*discovery.EndpointSlice{}
slicesByAddressType := make(map[discovery.AddressType][]*discovery.EndpointSlice)
+ // For example, if the service is single-stack IPv4 => only AddressTypeIPv4
+ // or if dual-stack => IPv4 and IPv6, etc.
serviceSupportedAddressesTypes := getAddressTypesForService(service)
- // If the services switched to a different address type, we need to delete the old ones, because it's immutable.
- // If the services switched to a different externalTrafficPolicy, we need to delete the old ones.
+
+ /*
+ 3) Determine which slices to delete, and which to pass on to the normal
+ "reconcileByAddressType" logic.
+
+ - If 'serviceDeleted' is true OR service.Spec.Selector != nil, we remove them.
+ - Also, if the slice's address type is unsupported by the Service, we remove it.
+ */
for _, eps := range infraExistingEpSlices {
- if service.Spec.Selector != nil || serviceDeleted {
- klog.Infof("Added for deletion EndpointSlice %s in namespace %s because it has a selector", eps.Name, eps.Namespace)
- // to be sure we don't delete any slice that is not managed by us
+ // If service is deleted or has a non-nil selector => remove slices
+ if serviceDeleted || service.Spec.Selector != nil {
+ /*
+ Only remove if it is clearly labeled as managed by us:
+ we do not want to accidentally remove slices that are not
+ created by this controller.
+ */
if c.managedByController(eps) {
+ klog.Infof("Added for deletion EndpointSlice %s in namespace %s because service is deleted or has a selector",
+ eps.Name, eps.Namespace)
slicesToDelete = append(slicesToDelete, eps)
}
continue
}
+
+ // If the Service does not support this slice's AddressType => remove
if !serviceSupportedAddressesTypes.Has(eps.AddressType) {
- klog.Infof("Added for deletion EndpointSlice %s in namespace %s because it has an unsupported address type: %v", eps.Name, eps.Namespace, eps.AddressType)
+ klog.Infof("Added for deletion EndpointSlice %s in namespace %s because it has an unsupported address type: %v",
+ eps.Name, eps.Namespace, eps.AddressType)
slicesToDelete = append(slicesToDelete, eps)
continue
}
+
+ /*
+ Otherwise, this slice is potentially still valid for the given AddressType;
+ we'll pass it to reconcileByAddressType for final merging and updates.
+ */
slicesByAddressType[eps.AddressType] = append(slicesByAddressType[eps.AddressType], eps)
}
- if !serviceDeleted {
- // Get tenant's endpoint slices for this service
+ /*
+ 4) If the Service was NOT deleted and has NO selector (i.e., it's a "no-selector" LB Service),
+ we proceed to handle creation and updates. That means:
+ - Gather Tenant's EndpointSlices
+ - Reconcile them by each AddressType
+ */
+ if !serviceDeleted && service.Spec.Selector == nil {
tenantEpSlices, err := c.getTenantEPSFromInfraService(ctx, service)
if err != nil {
return err
}
- // Reconcile the EndpointSlices for each address type e.g. ipv4, ipv6
+ // For each addressType (ipv4, ipv6, etc.) reconcile the infra slices
for addressType := range serviceSupportedAddressesTypes {
existingSlices := slicesByAddressType[addressType]
- err := c.reconcileByAddressType(service, tenantEpSlices, existingSlices, addressType)
- if err != nil {
+ if err := c.reconcileByAddressType(service, tenantEpSlices, existingSlices, addressType); err != nil {
return err
}
}
}
- // Delete the EndpointSlices that are no longer needed
+ /*
+ 5) Perform the actual deletion of all slices we flagged.
+ In many cases (serviceDeleted or .Spec.Selector != nil),
+ we end up with only "delete" actions and no new slice creation.
+ */
for _, eps := range slicesToDelete {
err := c.infraClient.DiscoveryV1().EndpointSlices(eps.Namespace).Delete(context.TODO(), eps.Name, metav1.DeleteOptions{})
if err != nil {
@@ -474,11 +538,11 @@ func (c *Controller) reconcileByAddressType(service *v1.Service, tenantSlices []
// Create the desired port configuration
var desiredPorts []discovery.EndpointPort
- for _, port := range service.Spec.Ports {
+ for i := range service.Spec.Ports {
desiredPorts = append(desiredPorts, discovery.EndpointPort{
- Port: &port.TargetPort.IntVal,
- Protocol: &port.Protocol,
- Name: &port.Name,
+ Port: &service.Spec.Ports[i].TargetPort.IntVal,
+ Protocol: &service.Spec.Ports[i].Protocol,
+ Name: &service.Spec.Ports[i].Name,
})
}
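The change above appears to address the same pointer-to-loop-variable pattern: previously each EndpointPort stored pointers into the single `port` copy, so a Service with several ports could yield identical entries, which is what the new multi-port test further down guards against. Indexing service.Spec.Ports[i] binds every pointer to its own element; a minimal standalone sketch:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	discovery "k8s.io/api/discovery/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	ports := []v1.ServicePort{
		{Name: "client", Protocol: v1.ProtocolTCP, TargetPort: intstr.FromInt(30396)},
		{Name: "dashboard", Protocol: v1.ProtocolTCP, TargetPort: intstr.FromInt(31003)},
	}

	// Index into the slice so each EndpointPort points at its own ServicePort fields.
	var desired []discovery.EndpointPort
	for i := range ports {
		desired = append(desired, discovery.EndpointPort{
			Name:     &ports[i].Name,
			Protocol: &ports[i].Protocol,
			Port:     &ports[i].TargetPort.IntVal,
		})
	}

	for _, p := range desired {
		fmt.Println(*p.Name, *p.Port) // client 30396, dashboard 31003
	}
}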
@@ -588,55 +652,114 @@ func ownedBy(endpointSlice *discovery.EndpointSlice, svc *v1.Service) bool {
return false
}
-func (c *Controller) finalize(service *v1.Service, slicesToCreate []*discovery.EndpointSlice, slicesToUpdate []*discovery.EndpointSlice, slicesToDelete []*discovery.EndpointSlice) error {
- // If there are slices to delete and slices to create, make them as update
- for i := 0; i < len(slicesToDelete); {
+func (c *Controller) finalize(
+ service *v1.Service,
+ slicesToCreate []*discovery.EndpointSlice,
+ slicesToUpdate []*discovery.EndpointSlice,
+ slicesToDelete []*discovery.EndpointSlice,
+) error {
+ /*
+ We try to turn a "delete + create" pair into a single "update" operation
+ if the original slice (slicesToDelete[i]) has the same address type as
+ the first slice in slicesToCreate, and is owned by the same Service.
+
+ However, we must re-check the lengths of slicesToDelete and slicesToCreate
+ within the loop to avoid an out-of-bounds index in slicesToCreate.
+ */
+
+ i := 0
+ for i < len(slicesToDelete) {
+ // If there is nothing to create, break early
if len(slicesToCreate) == 0 {
break
}
- if slicesToDelete[i].AddressType == slicesToCreate[0].AddressType && ownedBy(slicesToDelete[i], service) {
- slicesToCreate[0].Name = slicesToDelete[i].Name
+
+ sd := slicesToDelete[i]
+ sc := slicesToCreate[0] // We can safely do this now, because len(slicesToCreate) > 0
+
+ // If the address type matches, and the slice is owned by the same Service,
+ // then instead of deleting sd and creating sc, we'll transform it into an update:
+ // we rename sc with sd's name, remove sd from the delete list, remove sc from the create list,
+ // and add sc to the update list.
+ if sd.AddressType == sc.AddressType && ownedBy(sd, service) {
+ sliceToUpdate := sc
+ sliceToUpdate.Name = sd.Name
+
+ // Remove the first element from slicesToCreate
slicesToCreate = slicesToCreate[1:]
- slicesToUpdate = append(slicesToUpdate, slicesToCreate[0])
+
+ // Remove the slice from slicesToDelete
slicesToDelete = append(slicesToDelete[:i], slicesToDelete[i+1:]...)
+
+ // Now add the renamed slice to the list of slices we want to update
+ slicesToUpdate = append(slicesToUpdate, sliceToUpdate)
+
+ /*
+ Do not increment i here, because we've just removed an element from
+ slicesToDelete. The next slice to examine is now at the same index i.
+ */
} else {
+ // If they don't match, move on to the next slice in slicesToDelete.
i++
}
}
- // Create the new slices if service is not marked for deletion
+ /*
+ If the Service is not being deleted, create all remaining slices in slicesToCreate.
+ (If the Service has a DeletionTimestamp, it means it is going away, so we do not
+ want to create new EndpointSlices.)
+ */
if service.DeletionTimestamp == nil {
for _, slice := range slicesToCreate {
- createdSlice, err := c.infraClient.DiscoveryV1().EndpointSlices(slice.Namespace).Create(context.TODO(), slice, metav1.CreateOptions{})
+ createdSlice, err := c.infraClient.DiscoveryV1().EndpointSlices(slice.Namespace).Create(
+ context.TODO(),
+ slice,
+ metav1.CreateOptions{},
+ )
if err != nil {
- klog.Errorf("Failed to create EndpointSlice %s in namespace %s: %v", slice.Name, slice.Namespace, err)
+ klog.Errorf("Failed to create EndpointSlice %s in namespace %s: %v",
+ slice.Name, slice.Namespace, err)
+ // If the namespace is terminating, it's safe to ignore the error.
if k8serrors.HasStatusCause(err, v1.NamespaceTerminatingCause) {
- return nil
+ continue
}
return err
}
- klog.Infof("Created EndpointSlice %s in namespace %s", createdSlice.Name, createdSlice.Namespace)
+ klog.Infof("Created EndpointSlice %s in namespace %s",
+ createdSlice.Name, createdSlice.Namespace)
}
}
- // Update slices
+ // Update slices that are in the slicesToUpdate list.
for _, slice := range slicesToUpdate {
- _, err := c.infraClient.DiscoveryV1().EndpointSlices(slice.Namespace).Update(context.TODO(), slice, metav1.UpdateOptions{})
+ _, err := c.infraClient.DiscoveryV1().EndpointSlices(slice.Namespace).Update(
+ context.TODO(),
+ slice,
+ metav1.UpdateOptions{},
+ )
if err != nil {
- klog.Errorf("Failed to update EndpointSlice %s in namespace %s: %v", slice.Name, slice.Namespace, err)
+ klog.Errorf("Failed to update EndpointSlice %s in namespace %s: %v",
+ slice.Name, slice.Namespace, err)
return err
}
- klog.Infof("Updated EndpointSlice %s in namespace %s", slice.Name, slice.Namespace)
+ klog.Infof("Updated EndpointSlice %s in namespace %s",
+ slice.Name, slice.Namespace)
}
- // Delete slices
+ // Finally, delete slices that are in slicesToDelete and are no longer needed.
for _, slice := range slicesToDelete {
- err := c.infraClient.DiscoveryV1().EndpointSlices(slice.Namespace).Delete(context.TODO(), slice.Name, metav1.DeleteOptions{})
+ err := c.infraClient.DiscoveryV1().EndpointSlices(slice.Namespace).Delete(
+ context.TODO(),
+ slice.Name,
+ metav1.DeleteOptions{},
+ )
if err != nil {
- klog.Errorf("Failed to delete EndpointSlice %s in namespace %s: %v", slice.Name, slice.Namespace, err)
+ klog.Errorf("Failed to delete EndpointSlice %s in namespace %s: %v",
+ slice.Name, slice.Namespace, err)
return err
}
- klog.Infof("Deleted EndpointSlice %s in namespace %s", slice.Name, slice.Namespace)
+ klog.Infof("Deleted EndpointSlice %s in namespace %s",
+ slice.Name, slice.Namespace)
}
return nil
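The subtle part of the rewritten loop in finalize is that i is not advanced after an element is spliced out of slicesToDelete, because the next candidate now sits at the same index. A toy sketch of the same collapse-delete-and-create-into-update pattern, with plain strings standing in for EndpointSlices:

package main

import "fmt"

func main() {
	toDelete := []string{"v4-old", "v6-old", "v4-stale"}
	toCreate := []string{"v4-new", "v6-new"}
	var toUpdate []string

	// Stand-in for "same address type and owned by the same Service".
	match := func(d, c string) bool { return d[:2] == c[:2] }

	i := 0
	for i < len(toDelete) {
		if len(toCreate) == 0 {
			break
		}
		if match(toDelete[i], toCreate[0]) {
			// Collapse a delete+create pair into one update, reusing the old name.
			toUpdate = append(toUpdate, toCreate[0]+" (as "+toDelete[i]+")")
			toCreate = toCreate[1:]
			toDelete = append(toDelete[:i], toDelete[i+1:]...)
			// Do not advance i: the element after the removed one now occupies index i.
		} else {
			i++
		}
	}

	fmt.Println("update:", toUpdate) // [v4-new (as v4-old) v6-new (as v6-old)]
	fmt.Println("create:", toCreate) // []
	fmt.Println("delete:", toDelete) // [v4-stale]
}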
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller_test.go b/pkg/controller/kubevirteps/kubevirteps_controller_test.go
index 1fb86e25f..14d92d340 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller_test.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller_test.go
@@ -13,6 +13,7 @@ import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/intstr"
+ "k8s.io/apimachinery/pkg/util/sets"
dfake "k8s.io/client-go/dynamic/fake"
"k8s.io/client-go/kubernetes/fake"
"k8s.io/client-go/testing"
@@ -189,7 +190,7 @@ func setupTestKubevirtEPSController() *testKubevirtEPSController {
}: "VirtualMachineInstanceList",
})
- controller := NewKubevirtEPSController(tenantClient, infraClient, infraDynamic, "test")
+ controller := NewKubevirtEPSController(tenantClient, infraClient, infraDynamic, "test", "test-cluster")
err := controller.Init()
if err != nil {
@@ -686,5 +687,229 @@ var _ = g.Describe("KubevirtEPSController", g.Ordered, func() {
return false, err
}).Should(BeTrue(), "EndpointSlice in infra cluster should be recreated by the controller after deletion")
})
+
+ g.It("Should correctly handle multiple unique ports in EndpointSlice", func() {
+ // Create a VMI in the infra cluster
+ createAndAssertVMI("worker-0-test", "ip-10-32-5-13", "123.45.67.89")
+
+ // Create an EndpointSlice in the tenant cluster
+ createAndAssertTenantSlice("test-epslice", "tenant-service-name", discoveryv1.AddressTypeIPv4,
+ *createPort("http", 80, v1.ProtocolTCP),
+ []discoveryv1.Endpoint{*createEndpoint("123.45.67.89", "worker-0-test", true, true, false)})
+
+ // Define multiple ports for the Service
+ servicePorts := []v1.ServicePort{
+ {
+ Name: "client",
+ Protocol: v1.ProtocolTCP,
+ Port: 10001,
+ TargetPort: intstr.FromInt(30396),
+ NodePort: 30396,
+ },
+ {
+ Name: "dashboard",
+ Protocol: v1.ProtocolTCP,
+ Port: 8265,
+ TargetPort: intstr.FromInt(31003),
+ NodePort: 31003,
+ },
+ {
+ Name: "metrics",
+ Protocol: v1.ProtocolTCP,
+ Port: 8080,
+ TargetPort: intstr.FromInt(30452),
+ NodePort: 30452,
+ },
+ }
+
+ createAndAssertInfraServiceLB("infra-multiport-service", "tenant-service-name", "test-cluster",
+ servicePorts[0], v1.ServiceExternalTrafficPolicyLocal)
+
+ svc, err := testVals.infraClient.CoreV1().Services(infraNamespace).Get(context.TODO(), "infra-multiport-service", metav1.GetOptions{})
+ Expect(err).To(BeNil())
+
+ svc.Spec.Ports = servicePorts
+ _, err = testVals.infraClient.CoreV1().Services(infraNamespace).Update(context.TODO(), svc, metav1.UpdateOptions{})
+ Expect(err).To(BeNil())
+
+ var epsListMultiPort *discoveryv1.EndpointSliceList
+
+ Eventually(func() (bool, error) {
+ epsListMultiPort, err = testVals.infraClient.DiscoveryV1().EndpointSlices(infraNamespace).List(context.TODO(), metav1.ListOptions{})
+ if len(epsListMultiPort.Items) != 1 {
+ return false, err
+ }
+
+ createdSlice := epsListMultiPort.Items[0]
+ expectedPortNames := []string{"client", "dashboard", "metrics"}
+ foundPortNames := []string{}
+
+ for _, port := range createdSlice.Ports {
+ if port.Name != nil {
+ foundPortNames = append(foundPortNames, *port.Name)
+ }
+ }
+
+ if len(foundPortNames) != len(expectedPortNames) {
+ return false, err
+ }
+
+ portSet := sets.NewString(foundPortNames...)
+ expectedPortSet := sets.NewString(expectedPortNames...)
+ return portSet.Equal(expectedPortSet), err
+ }).Should(BeTrue(), "EndpointSlice should contain all unique ports from the Service without duplicates")
+ })
+
+ g.It("Should not panic when Service changes to have a non-nil selector, causing EndpointSlice deletion with no new slices to create", func() {
+ createAndAssertVMI("worker-0-test", "ip-10-32-5-13", "123.45.67.89")
+ createAndAssertTenantSlice("test-epslice", "tenant-service-name", discoveryv1.AddressTypeIPv4,
+ *createPort("http", 80, v1.ProtocolTCP),
+ []discoveryv1.Endpoint{*createEndpoint("123.45.67.89", "worker-0-test", true, true, false)})
+ createAndAssertInfraServiceLB("infra-service-no-selector", "tenant-service-name", "test-cluster",
+ v1.ServicePort{
+ Name: "web",
+ Port: 80,
+ NodePort: 31900,
+ Protocol: v1.ProtocolTCP,
+ TargetPort: intstr.IntOrString{IntVal: 30390},
+ },
+ v1.ServiceExternalTrafficPolicyLocal,
+ )
+
+ // Wait for the controller to create an EndpointSlice in the infra cluster.
+ var epsList *discoveryv1.EndpointSliceList
+ var err error
+ Eventually(func() (bool, error) {
+ epsList, err = testVals.infraClient.DiscoveryV1().EndpointSlices(infraNamespace).
+ List(context.TODO(), metav1.ListOptions{})
+ if err != nil {
+ return false, err
+ }
+ // Wait until exactly one slice exists
+ if len(epsList.Items) == 1 {
+ return true, nil
+ }
+ return false, nil
+ }).Should(BeTrue(), "Controller should create an EndpointSlice in infra cluster for the LB service")
+
+ svcWithSelector, err := testVals.infraClient.CoreV1().Services(infraNamespace).
+ Get(context.TODO(), "infra-service-no-selector", metav1.GetOptions{})
+ Expect(err).To(BeNil())
+
+ // Set an arbitrary selector to trigger the slice-deletion logic
+ svcWithSelector.Spec.Selector = map[string]string{"test": "selector-added"}
+ _, err = testVals.infraClient.CoreV1().Services(infraNamespace).
+ Update(context.TODO(), svcWithSelector, metav1.UpdateOptions{})
+ Expect(err).To(BeNil())
+
+ Eventually(func() (bool, error) {
+ epsList, err = testVals.infraClient.DiscoveryV1().EndpointSlices(infraNamespace).
+ List(context.TODO(), metav1.ListOptions{})
+ if err != nil {
+ return false, err
+ }
+ // After the update, the Service should have zero EndpointSlices
+ if len(epsList.Items) == 0 {
+ return true, nil
+ }
+ return false, nil
+ }).Should(BeTrue(), "Existing EndpointSlice should be removed because Service now has a selector")
+ })
+
+ g.It("Should remove EndpointSlices and not recreate them when a previously no-selector Service obtains a selector", func() {
+ testVals.infraClient.Fake.PrependReactor("create", "endpointslices", func(action testing.Action) (bool, runtime.Object, error) {
+ createAction := action.(testing.CreateAction)
+ slice := createAction.GetObject().(*discoveryv1.EndpointSlice)
+ if slice.Name == "" && slice.GenerateName != "" {
+ slice.Name = slice.GenerateName + "-fake001"
+ }
+ return false, slice, nil
+ })
+
+ createAndAssertVMI("worker-0-test", "ip-10-32-5-13", "123.45.67.89")
+
+ createAndAssertTenantSlice("test-epslice", "tenant-service-name", discoveryv1.AddressTypeIPv4,
+ *createPort("http", 80, v1.ProtocolTCP),
+ []discoveryv1.Endpoint{
+ *createEndpoint("123.45.67.89", "worker-0-test", true, true, false),
+ },
+ )
+
+ noSelectorSvcName := "svc-without-selector"
+ svc := &v1.Service{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: noSelectorSvcName,
+ Namespace: infraNamespace,
+ Labels: map[string]string{
+ kubevirt.TenantServiceNameLabelKey: "tenant-service-name",
+ kubevirt.TenantServiceNamespaceLabelKey: tenantNamespace,
+ kubevirt.TenantClusterNameLabelKey: "test-cluster",
+ },
+ },
+ Spec: v1.ServiceSpec{
+ Ports: []v1.ServicePort{
+ {
+ Name: "web",
+ Port: 80,
+ NodePort: 31900,
+ Protocol: v1.ProtocolTCP,
+ TargetPort: intstr.IntOrString{IntVal: 30390},
+ },
+ },
+ Type: v1.ServiceTypeLoadBalancer,
+ ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyLocal,
+ },
+ }
+
+ _, err := testVals.infraClient.CoreV1().Services(infraNamespace).Create(context.TODO(), svc, metav1.CreateOptions{})
+ Expect(err).To(BeNil())
+
+ Eventually(func() (bool, error) {
+ epsList, err := testVals.infraClient.DiscoveryV1().EndpointSlices(infraNamespace).
+ List(context.TODO(), metav1.ListOptions{})
+ if err != nil {
+ return false, err
+ }
+ return len(epsList.Items) == 1, nil
+ }).Should(BeTrue(), "Controller should create an EndpointSlice in infra cluster for the no-selector LB service")
+
+ svcWithSelector, err := testVals.infraClient.CoreV1().Services(infraNamespace).Get(
+ context.TODO(), noSelectorSvcName, metav1.GetOptions{})
+ Expect(err).To(BeNil())
+
+ svcWithSelector.Spec.Selector = map[string]string{"app": "test-value"}
+ _, err = testVals.infraClient.CoreV1().Services(infraNamespace).
+ Update(context.TODO(), svcWithSelector, metav1.UpdateOptions{})
+ Expect(err).To(BeNil())
+
+ Eventually(func() (bool, error) {
+ epsList, err := testVals.infraClient.DiscoveryV1().EndpointSlices(infraNamespace).
+ List(context.TODO(), metav1.ListOptions{})
+ if err != nil {
+ return false, err
+ }
+ return len(epsList.Items) == 0, nil
+ }).Should(BeTrue(), "All EndpointSlices should be removed after Service acquires a selector (no new slices created)")
+ })
+
+ g.It("Should ignore Services from a different cluster", func() {
+ // Create a Service with cluster label "other-cluster"
+ svc := createInfraServiceLB("infra-service-conflict", "tenant-service-name", "other-cluster",
+ v1.ServicePort{Name: "web", Port: 80, NodePort: 31900, Protocol: v1.ProtocolTCP, TargetPort: intstr.IntOrString{IntVal: 30390}},
+ v1.ServiceExternalTrafficPolicyLocal)
+ _, err := testVals.infraClient.CoreV1().Services(infraNamespace).Create(context.TODO(), svc, metav1.CreateOptions{})
+ Expect(err).To(BeNil())
+
+ // The controller should ignore this Service, so no EndpointSlice should be created.
+ Eventually(func() (bool, error) {
+ epsList, err := testVals.infraClient.DiscoveryV1().EndpointSlices(infraNamespace).List(context.TODO(), metav1.ListOptions{})
+ if err != nil {
+ return false, err
+ }
+ // Expect zero slices since cluster label does not match "test-cluster"
+ return len(epsList.Items) == 0, nil
+ }).Should(BeTrue(), "Services with a different cluster label should be ignored")
+ })
+
})
})

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.15.2@sha256:cb4ab74099662f73e058f7c7495fb403488622c3425c06ad23b687bfa8bc805b ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.15.2@sha256:fdfa71edcb8a9f537926963fa11ad959fa2a20c08ba757c253b9587e8625b700

View File

@@ -1,50 +0,0 @@
{{/*
Copyright Broadcom, Inc. All Rights Reserved.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}

View File

@@ -26,13 +26,6 @@ spec:
containers: containers:
- image: "{{ $.Files.Get "images/cluster-autoscaler.tag" | trim }}" - image: "{{ $.Files.Get "images/cluster-autoscaler.tag" | trim }}"
name: cluster-autoscaler name: cluster-autoscaler
resources:
limits:
cpu: 512m
memory: 512Mi
requests:
cpu: 125m
memory: 128Mi
command: command:
- /cluster-autoscaler - /cluster-autoscaler
args: args:

View File

@@ -102,37 +102,12 @@ metadata:
annotations: annotations:
kamaji.clastix.io/kubeconfig-secret-key: "super-admin.svc" kamaji.clastix.io/kubeconfig-secret-key: "super-admin.svc"
spec: spec:
apiServer:
{{- if .Values.kamajiControlPlane.apiServer.resources }}
resources: {{- toYaml .Values.kamajiControlPlane.apiServer.resources | nindent 6 }}
{{- else if ne .Values.kamajiControlPlane.apiServer.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.kamajiControlPlane.apiServer.resourcesPreset "Release" .Release) | nindent 6 }}
{{- end }}
controllerManager:
{{- if .Values.kamajiControlPlane.controllerManager.resources }}
resources: {{- toYaml .Values.kamajiControlPlane.controllerManager.resources | nindent 6 }}
{{- else if ne .Values.kamajiControlPlane.controllerManager.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.kamajiControlPlane.controllerManager.resourcesPreset "Release" .Release) | nindent 6 }}
{{- end }}
scheduler:
{{- if .Values.kamajiControlPlane.scheduler.resources }}
resources: {{- toYaml .Values.kamajiControlPlane.scheduler.resources | nindent 6 }}
{{- else if ne .Values.kamajiControlPlane.scheduler.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.kamajiControlPlane.scheduler.resourcesPreset "Release" .Release) | nindent 6 }}
{{- end }}
dataStoreName: "{{ $etcd }}" dataStoreName: "{{ $etcd }}"
addons: addons:
coreDNS: coreDNS:
dnsServiceIPs: dnsServiceIPs:
- 10.95.0.10 - 10.95.0.10
konnectivity: konnectivity: {}
server:
port: 8132
{{- if .Values.kamajiControlPlane.addons.konnectivity.server.resources }}
resources: {{- toYaml .Values.kamajiControlPlane.addons.konnectivity.server.resources | nindent 10 }}
{{- else if ne .Values.kamajiControlPlane.addons.konnectivity.server.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.kamajiControlPlane.addons.konnectivity.server.resourcesPreset "Release" .Release) | nindent 10 }}
{{- end }}
kubelet: kubelet:
cgroupfs: systemd cgroupfs: systemd
preferredAddressTypes: preferredAddressTypes:

View File

@@ -63,21 +63,11 @@ spec:
mountPath: /etc/kubernetes/kubeconfig mountPath: /etc/kubernetes/kubeconfig
readOnly: true readOnly: true
resources: resources:
limits:
cpu: 512m
memory: 512Mi
requests: requests:
cpu: 125m memory: 50Mi
memory: 128Mi cpu: 10m
- name: csi-provisioner - name: csi-provisioner
image: quay.io/openshift/origin-csi-external-provisioner:latest image: quay.io/openshift/origin-csi-external-provisioner:latest
resources:
limits:
cpu: 512m
memory: 512Mi
requests:
cpu: 125m
memory: 128Mi
args: args:
- "--csi-address=$(ADDRESS)" - "--csi-address=$(ADDRESS)"
- "--default-fstype=ext4" - "--default-fstype=ext4"
@@ -112,12 +102,9 @@ spec:
mountPath: /etc/kubernetes/kubeconfig mountPath: /etc/kubernetes/kubeconfig
readOnly: true readOnly: true
resources: resources:
limits:
cpu: 512m
memory: 512Mi
requests: requests:
cpu: 125m memory: 50Mi
memory: 128Mi cpu: 10m
- name: csi-liveness-probe - name: csi-liveness-probe
image: quay.io/openshift/origin-csi-livenessprobe:latest image: quay.io/openshift/origin-csi-livenessprobe:latest
args: args:
@@ -128,12 +115,9 @@ spec:
- name: socket-dir - name: socket-dir
mountPath: /csi mountPath: /csi
resources: resources:
limits:
cpu: 512m
memory: 512Mi
requests: requests:
cpu: 125m memory: 50Mi
memory: 128Mi cpu: 10m
volumes: volumes:
- name: socket-dir - name: socket-dir
emptyDir: {} emptyDir: {}

View File

@@ -18,8 +18,7 @@ spec:
namespace: cozy-system namespace: cozy-system
kubeConfig: kubeConfig:
secretRef: secretRef:
name: {{ .Release.Name }}-admin-kubeconfig name: {{ .Release.Name }}-kubeconfig
key: super-admin.svc
targetNamespace: cozy-cert-manager-crds targetNamespace: cozy-cert-manager-crds
storageNamespace: cozy-cert-manager-crds storageNamespace: cozy-cert-manager-crds
install: install:

View File

@@ -19,8 +19,7 @@ spec:
namespace: cozy-system namespace: cozy-system
kubeConfig: kubeConfig:
secretRef: secretRef:
name: {{ .Release.Name }}-admin-kubeconfig name: {{ .Release.Name }}-kubeconfig
key: super-admin.svc
targetNamespace: cozy-cert-manager targetNamespace: cozy-cert-manager
storageNamespace: cozy-cert-manager storageNamespace: cozy-cert-manager
install: install:

View File

@@ -18,8 +18,7 @@ spec:
namespace: cozy-system namespace: cozy-system
kubeConfig: kubeConfig:
secretRef: secretRef:
name: {{ .Release.Name }}-admin-kubeconfig name: {{ .Release.Name }}-kubeconfig
key: super-admin.svc
targetNamespace: cozy-cilium targetNamespace: cozy-cilium
storageNamespace: cozy-cilium storageNamespace: cozy-cilium
install: install:

View File

@@ -18,8 +18,7 @@ spec:
namespace: cozy-system namespace: cozy-system
kubeConfig: kubeConfig:
secretRef: secretRef:
name: {{ .Release.Name }}-admin-kubeconfig name: {{ .Release.Name }}-kubeconfig
key: super-admin.svc
targetNamespace: cozy-csi targetNamespace: cozy-csi
storageNamespace: cozy-csi storageNamespace: cozy-csi
install: install:

View File

@@ -19,8 +19,7 @@ spec:
namespace: cozy-system namespace: cozy-system
kubeConfig: kubeConfig:
secretRef: secretRef:
name: {{ .Release.Name }}-admin-kubeconfig name: {{ .Release.Name }}-kubeconfig
key: super-admin.svc
targetNamespace: cozy-fluxcd targetNamespace: cozy-fluxcd
storageNamespace: cozy-fluxcd storageNamespace: cozy-fluxcd
install: install:

View File

@@ -19,8 +19,7 @@ spec:
namespace: cozy-system namespace: cozy-system
kubeConfig: kubeConfig:
secretRef: secretRef:
name: {{ .Release.Name }}-admin-kubeconfig name: {{ .Release.Name }}-kubeconfig
key: super-admin.svc
targetNamespace: cozy-ingress-nginx targetNamespace: cozy-ingress-nginx
storageNamespace: cozy-ingress-nginx storageNamespace: cozy-ingress-nginx
install: install:

View File

@@ -21,8 +21,7 @@ spec:
namespace: cozy-system namespace: cozy-system
kubeConfig: kubeConfig:
secretRef: secretRef:
name: {{ .Release.Name }}-admin-kubeconfig name: {{ .Release.Name }}-kubeconfig
key: super-admin.svc
targetNamespace: cozy-monitoring-agents targetNamespace: cozy-monitoring-agents
storageNamespace: cozy-monitoring-agents storageNamespace: cozy-monitoring-agents
install: install:

View File

@@ -19,8 +19,7 @@ spec:
namespace: cozy-system namespace: cozy-system
kubeConfig: kubeConfig:
secretRef: secretRef:
name: {{ .Release.Name }}-admin-kubeconfig name: {{ .Release.Name }}-kubeconfig
key: super-admin.svc
targetNamespace: cozy-victoria-metrics-operator targetNamespace: cozy-victoria-metrics-operator
storageNamespace: cozy-victoria-metrics-operator storageNamespace: cozy-victoria-metrics-operator
install: install:

View File

@@ -36,12 +36,8 @@ spec:
#securityContext: #securityContext:
# privileged: true # privileged: true
resources: resources:
limits:
cpu: 512m
memory: 512Mi
requests: requests:
cpu: 125m cpu: 100m
memory: 128Mi
volumeMounts: volumeMounts:
- mountPath: /etc/kubernetes/kubeconfig - mountPath: /etc/kubernetes/kubeconfig
name: kubeconfig name: kubeconfig

View File

@@ -69,63 +69,3 @@ addons:
## ##
enabled: false enabled: false
valuesOverride: {} valuesOverride: {}
## @section Kamaji control plane
##
kamajiControlPlane:
apiServer:
## @param kamajiControlPlane.apiServer.resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param kamajiControlPlane.apiServer.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "micro"
controllerManager:
## @param kamajiControlPlane.controllerManager.resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param kamajiControlPlane.controllerManager.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "micro"
scheduler:
## @param kamajiControlPlane.scheduler.resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param kamajiControlPlane.scheduler.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "micro"
addons:
konnectivity:
server:
## @param kamajiControlPlane.addons.konnectivity.server.resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param kamajiControlPlane.addons.konnectivity.server.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "micro"

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes # This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version. # to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/) # Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.6.0 version: 0.5.3
# This is the version number of the application being deployed. This version number should be # This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to # incremented each time you make changes to the application. Versions are not expected to

View File

@@ -83,16 +83,14 @@ more details:
### Backup parameters ### Backup parameters
| Name | Description | Value | | Name | Description | Value |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------ | | ------------------------ | ---------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled` | Enable periodic backups | `false` | | `backup.enabled` | Enable periodic backups | `false` |
| `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` | | `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/postgres-backups` | | `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/postgres-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` | | `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` | | `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` | | `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` | | `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` | | `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
| `resources` | Resources | `{}` |
| `resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |

View File

@@ -1,50 +0,0 @@
{{/*
Copyright Broadcom, Inc. All Rights Reserved.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}

View File

@@ -72,9 +72,3 @@ spec:
#secondaryService: #secondaryService:
# type: LoadBalancer # type: LoadBalancer
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 4 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 4 }}
{{- end }}

View File

@@ -66,16 +66,6 @@
"default": "ChaXoveekoh6eigh4siesheeda2quai0" "default": "ChaXoveekoh6eigh4siesheeda2quai0"
} }
} }
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
} }
} }
} }

View File

@@ -54,16 +54,3 @@ backup:
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0 resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
## @param resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "nano"

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes # This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version. # to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/) # Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.5.0 version: 0.4.1
# This is the version number of the application being deployed. This version number should be # This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to # incremented each time you make changes to the application. Versions are not expected to

View File

@@ -4,15 +4,13 @@
### Common parameters ### Common parameters
| Name | Description | Value | | Name | Description | Value |
| ------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | | ------------------- | -------------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` | | `external` | Enable external access from outside the cluster | `false` |
| `replicas` | Persistent Volume size for NATS | `2` | | `replicas` | Persistent Volume size for NATS | `2` |
| `storageClass` | StorageClass used to store the data | `""` | | `storageClass` | StorageClass used to store the data | `""` |
| `users` | Users configuration | `{}` | | `users` | Users configuration | `{}` |
| `jetstream.size` | Jetstream persistent storage size | `10Gi` | | `jetstream.size` | Jetstream persistent storage size | `10Gi` |
| `jetstream.enabled` | Enable or disable Jetstream | `true` | | `jetstream.enabled` | Enable or disable Jetstream | `true` |
| `config.merge` | Additional configuration to merge into NATS config | `{}` | | `config.merge` | Additional configuration to merge into NATS config | `{}` |
| `config.resolver` | Additional configuration to merge into NATS config | `{}` | | `config.resolver` | Additional configuration to merge into NATS config | `{}` |
| `resources` | Resources | `{}` |
| `resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |

View File

@@ -1,50 +0,0 @@
{{/*
Copyright Broadcom, Inc. All Rights Reserved.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}

View File

@@ -38,17 +38,6 @@ spec:
timeout: 5m0s timeout: 5m0s
values: values:
nats: nats:
podTemplate:
merge:
spec:
containers:
- name: nats
image: nats:2.10.17-alpine
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 22 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 22 }}
{{- end }}
fullnameOverride: {{ .Release.Name }} fullnameOverride: {{ .Release.Name }}
config: config:
{{- if or (gt (len $passwords) 0) (gt (len .Values.config.merge) 0) }} {{- if or (gt (len $passwords) 0) (gt (len .Values.config.merge) 0) }}

View File

@@ -46,16 +46,6 @@
"default": {} "default": {}
} }
} }
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
} }
} }
} }

View File

@@ -61,16 +61,3 @@ config:
## Default: {} ## Default: {}
## Example see: https://github.com/nats-io/k8s/blob/main/helm/charts/nats/values.yaml#L247 ## Example see: https://github.com/nats-io/k8s/blob/main/helm/charts/nats/values.yaml#L247
resolver: {} resolver: {}
## @param resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "nano"

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes # This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version. # to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/) # Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.10.0 version: 0.9.0
# This is the version number of the application being deployed. This version number should be # This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to # incremented each time you make changes to the application. Versions are not expected to

View File

@@ -58,15 +58,13 @@ more details:
### Backup parameters ### Backup parameters
| Name | Description | Value | | Name | Description | Value |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------ | | ------------------------ | ---------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled` | Enable periodic backups | `false` | | `backup.enabled` | Enable periodic backups | `false` |
| `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` | | `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/postgres-backups` | | `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/postgres-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` | | `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` | | `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` | | `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` | | `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` | | `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
| `resources` | Resources | `{}` |
| `resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |

View File

@@ -1,50 +0,0 @@
{{/*
Copyright Broadcom, Inc. All Rights Reserved.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}

View File

@@ -5,12 +5,6 @@ metadata:
name: {{ .Release.Name }} name: {{ .Release.Name }}
spec: spec:
instances: {{ .Values.replicas }} instances: {{ .Values.replicas }}
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 4 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 4 }}
{{- end }}
enableSuperuserAccess: true enableSuperuserAccess: true
{{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }} {{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }}
{{- if $configMap }} {{- if $configMap }}

View File

@@ -101,16 +101,6 @@
"default": "ChaXoveekoh6eigh4siesheeda2quai0" "default": "ChaXoveekoh6eigh4siesheeda2quai0"
} }
} }
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
} }
} }
} }

View File

@@ -76,16 +76,3 @@ backup:
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0 resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
## @param resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "nano"

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes # This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version. # to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/) # Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.5.0 version: 0.4.4
# This is the version number of the application being deployed. This version number should be # This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to # incremented each time you make changes to the application. Versions are not expected to

View File

@@ -22,9 +22,7 @@ The service utilizes official RabbitMQ operator. This ensures the reliability an
### Configuration parameters ### Configuration parameters
| Name | Description | Value | | Name | Description | Value |
| ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | | -------- | --------------------------- | ----- |
| `users` | Users configuration | `{}` | | `users` | Users configuration | `{}` |
| `vhosts` | Virtual Hosts configuration | `{}` | | `vhosts` | Virtual Hosts configuration | `{}` |
| `resources` | Resources | `{}` |
| `resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |

View File

@@ -1,50 +0,0 @@
{{/*
Copyright Broadcom, Inc. All Rights Reserved.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}
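For a concrete picture of what this deleted helper produced: rendering it with the default `nano` preset yields roughly the following block (Helm's `toYaml` emits map keys alphabetically):

```yaml
# {{ include "resources.preset" (dict "type" "nano") }}
limits:
  cpu: 150m
  ephemeral-storage: 2Gi
  memory: 192Mi
requests:
  cpu: 100m
  ephemeral-storage: 50Mi
  memory: 128Mi
```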


@@ -11,11 +11,7 @@ spec:
  service:
    type: LoadBalancer
  {{- end }}
-  {{- if .Values.resources }}
-  resources: {{- toYaml .Values.resources | nindent 4 }}
-  {{- else if ne .Values.resourcesPreset "none" }}
-  resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 4 }}
-  {{- end }}
  override:
    statefulSet:
      spec:


@@ -26,16 +26,6 @@
"type": "object", "type": "object",
"description": "Virtual Hosts configuration", "description": "Virtual Hosts configuration",
"default": {} "default": {}
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
} }
} }
} }


@@ -39,16 +39,3 @@ users: {}
## admin:
## - user3
vhosts: {}
-## @param resources Resources
-resources: {}
-# resources:
-# limits:
-# cpu: 4000m
-# memory: 4Gi
-# requests:
-# cpu: 100m
-# memory: 512Mi
-## @param resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
-resourcesPreset: "nano"


@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
-version: 0.6.0
+version: 0.5.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to


@@ -13,14 +13,12 @@ Service utilizes the Spotahome Redis Operator for efficient management and orche
### Common parameters

| Name              | Description                                      | Value   |
| ----------------- | ------------------------------------------------ | ------- |
| `external`        | Enable external access from outside the cluster  | `false` |
| `size`            | Persistent Volume size                           | `1Gi`   |
| `replicas`        | Number of Redis replicas                         | `2`     |
| `storageClass`    | StorageClass used to store the data              | `""`    |
| `authEnabled`     | Enable password generation                       | `true`  |
-| `resources`       | Resources                                        | `{}`    |
-| `resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
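As a usage sketch for the parameters kept in this table, a tenant enabling external access and a larger volume might set something like the following (the storage class name is a placeholder):

```yaml
external: true
size: 10Gi
replicas: 2
storageClass: "replicated"  # placeholder; use any StorageClass available in the cluster
authEnabled: true
```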


@@ -1,50 +0,0 @@
{{/*
Copyright Broadcom, Inc. All Rights Reserved.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}


@@ -25,18 +25,19 @@ metadata:
spec:
  sentinel:
    replicas: 3
-    {{- if .Values.resources }}
-    resources: {{- toYaml .Values.resources | nindent 6 }}
-    {{- else if ne .Values.resourcesPreset "none" }}
-    resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 6 }}
-    {{- end }}
+    resources:
+      requests:
+        cpu: 100m
+      limits:
+        memory: 100Mi
  redis:
    replicas: {{ .Values.replicas }}
-    {{- if .Values.resources }}
-    resources: {{- toYaml .Values.resources | nindent 6 }}
-    {{- else if ne .Values.resourcesPreset "none" }}
-    resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 6 }}
-    {{- end }}
+    resources:
+      requests:
+        cpu: 150m
+        memory: 400Mi
+      limits:
+        memory: 1000Mi
  {{- with .Values.size }}
  storage:
    persistentVolumeClaim:


@@ -26,16 +26,6 @@
"type": "boolean", "type": "boolean",
"description": "Enable password generation", "description": "Enable password generation",
"default": true "default": true
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
} }
} }
} }


@@ -11,16 +11,3 @@ size: 1Gi
replicas: 2
storageClass: ""
authEnabled: true
-## @param resources Resources
-resources: {}
-# resources:
-# limits:
-# cpu: 4000m
-# memory: 4Gi
-# requests:
-# cpu: 100m
-# memory: 512Mi
-## @param resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
-resourcesPreset: "nano"


@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
-version: 0.3.0
+version: 0.2.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to


@@ -19,13 +19,11 @@ Managed TCP Load Balancer Service efficiently utilizes HAProxy for load balancin
### Configuration parameters

| Name                             | Description                                                    | Value   |
| -------------------------------- | -------------------------------------------------------------- | ------- |
| `httpAndHttps.mode`              | Mode for balancer. Allowed values: `tcp` and `tcp-with-proxy`  | `tcp`   |
| `httpAndHttps.targetPorts.http`  | HTTP port number.                                              | `80`    |
| `httpAndHttps.targetPorts.https` | HTTPS port number.                                             | `443`   |
| `httpAndHttps.endpoints`         | Endpoint addresses list                                        | `[]`    |
| `whitelistHTTP`                  | Secure HTTP by enabling client networks whitelisting           | `false` |
| `whitelist`                      | List of client networks                                        | `[]`    |
-| `resources`                      | Resources                                                      | `{}`    |
-| `resourcesPreset`                | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
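A minimal sketch of these balancer values, assuming plain IP addresses for the endpoints (the addresses and client network below are placeholders):

```yaml
httpAndHttps:
  mode: tcp
  targetPorts:
    http: 80
    https: 443
  endpoints:
    - 10.0.0.10
    - 10.0.0.11
whitelistHTTP: true
whitelist:
  - 192.168.100.0/24
```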


@@ -1,50 +0,0 @@
{{/*
Copyright Broadcom, Inc. All Rights Reserved.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}


@@ -33,11 +33,6 @@ spec:
containers:
  - image: haproxy:latest
    name: haproxy
-    {{- if .Values.resources }}
-    resources: {{- toYaml .Values.resources | nindent 10 }}
-    {{- else if ne .Values.resourcesPreset "none" }}
-    resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 10 }}
-    {{- end }}
    ports:
    {{- with .Values.httpAndHttps }}
      - containerPort: 8080


@@ -57,16 +57,6 @@
"description": "List of client networks", "description": "List of client networks",
"default": [], "default": [],
"items": {} "items": {}
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
} }
} }
} }


@@ -43,16 +43,3 @@ httpAndHttps:
##
whitelistHTTP: false
whitelist: []
-## @param resources Resources
-resources: {}
-# resources:
-# limits:
-# cpu: 4000m
-# memory: 4Gi
-# requests:
-# cpu: 100m
-# memory: 512Mi
-## @param resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
-resourcesPreset: "nano"


@@ -4,4 +4,4 @@ description: Separated tenant namespace
icon: /logos/tenant.svg
type: application
-version: 1.9.1
+version: 1.9.0


@@ -41,7 +41,6 @@ metadata:
    {{- end }}
    {{- end }}
    {{- include "cozystack.namespace-anotations" (list $ $existingNS) | nindent 4 }}
-    alpha.kubevirt.io/auto-memory-limits-ratio: "1.0"
  ownerReferences:
    - apiVersion: v1
      blockOwnerDeletion: true


@@ -1,155 +1,142 @@
bucket 0.1.0 HEAD bucket 0.1.0 HEAD
clickhouse 0.1.0 f7eaab0a clickhouse 0.1.0 ca79f72
clickhouse 0.2.0 53f2365e clickhouse 0.2.0 7cd7de73
clickhouse 0.2.1 dfbc210b clickhouse 0.2.1 5ca8823
clickhouse 0.3.0 6c5cf5bf clickhouse 0.3.0 b00621e
clickhouse 0.4.0 b40e1b09 clickhouse 0.4.0 320fc32
clickhouse 0.5.0 0f312d5c clickhouse 0.5.0 2a4768a5
clickhouse 0.6.0 1ec10165 clickhouse 0.6.0 18bbdb67
clickhouse 0.6.1 c62a83a7 clickhouse 0.6.1 b7375f73
clickhouse 0.6.2 8267072d clickhouse 0.6.2 HEAD
clickhouse 0.7.0 HEAD ferretdb 0.1.0 4ffa8615
ferretdb 0.1.0 e9716091 ferretdb 0.1.1 5ca8823
ferretdb 0.1.1 91b0499a ferretdb 0.2.0 adaf603
ferretdb 0.2.0 6c5cf5bf ferretdb 0.3.0 aa2f553
ferretdb 0.3.0 b8e33d19 ferretdb 0.4.0 def2eb0f
ferretdb 0.4.0 b40e1b09 ferretdb 0.4.1 a9555210
ferretdb 0.4.1 1ec10165 ferretdb 0.4.2 HEAD
ferretdb 0.4.2 8267072d http-cache 0.1.0 a956713
ferretdb 0.5.0 HEAD http-cache 0.2.0 5ca8823
http-cache 0.1.0 263e47be http-cache 0.3.0 fab5940
http-cache 0.2.0 53f2365e http-cache 0.3.1 HEAD
http-cache 0.3.0 6c5cf5bf kafka 0.1.0 760f86d2
http-cache 0.3.1 0f312d5c kafka 0.2.0 a2cc83d
http-cache 0.4.0 HEAD kafka 0.2.1 3ac17018
kafka 0.1.0 f7eaab0a kafka 0.2.2 d0758692
kafka 0.2.0 c0685f43 kafka 0.2.3 5ca8823
kafka 0.2.1 dfbc210b kafka 0.3.0 c07c4bbd
kafka 0.2.2 e9716091 kafka 0.3.1 b7375f73
kafka 0.2.3 91b0499a kafka 0.3.2 b75aaf17
kafka 0.3.0 6c5cf5bf kafka 0.3.3 HEAD
kafka 0.3.1 c62a83a7 kubernetes 0.1.0 f642698
kafka 0.3.2 93c46161 kubernetes 0.2.0 7cd7de73
kafka 0.3.3 8267072d kubernetes 0.3.0 7caccec1
kafka 0.4.0 85ec09b8 kubernetes 0.4.0 6cae6ce8
kafka 0.5.0 HEAD kubernetes 0.5.0 6bd2d455
kubernetes 0.1.0 263e47be kubernetes 0.6.0 4cbc8a2c
kubernetes 0.2.0 53f2365e kubernetes 0.7.0 ceefae03
kubernetes 0.3.0 007d414f
kubernetes 0.4.0 d7cfa53c
kubernetes 0.5.0 dfbc210b
kubernetes 0.6.0 5bbc488e
kubernetes 0.7.0 e9716091
kubernetes 0.8.0 ac11056e kubernetes 0.8.0 ac11056e
kubernetes 0.8.1 366bcafc kubernetes 0.8.1 e54608d8
kubernetes 0.8.2 f81be075 kubernetes 0.8.2 5ca8823
kubernetes 0.9.0 6c5cf5bf kubernetes 0.9.0 9b6dd19
kubernetes 0.10.0 b8e33d19 kubernetes 0.10.0 ac5c38b
kubernetes 0.11.0 4b90bf5a kubernetes 0.11.0 4eaca42
kubernetes 0.11.1 5fb9cfe3 kubernetes 0.11.1 4f430a90
kubernetes 0.12.0 bb985806 kubernetes 0.12.0 74649f8
kubernetes 0.12.1 28fca4ef kubernetes 0.12.1 28fca4e
kubernetes 0.13.0 1ec10165 kubernetes 0.13.0 ced8e5b9
kubernetes 0.14.0 bfbde07c kubernetes 0.14.0 bfbde07c
kubernetes 0.14.1 898374b5 kubernetes 0.14.1 fde4bcfa
kubernetes 0.15.0 4e68e65c kubernetes 0.15.0 cb7b8158
kubernetes 0.15.1 160e4e2a kubernetes 0.15.1 43e593c7
kubernetes 0.15.2 8267072d kubernetes 0.15.2 HEAD
kubernetes 0.16.0 077045b0 mysql 0.1.0 f642698
kubernetes 0.17.0 HEAD mysql 0.2.0 8b975ff0
mysql 0.1.0 263e47be mysql 0.3.0 5ca8823
mysql 0.2.0 c24a103f mysql 0.4.0 93018c4
mysql 0.3.0 53f2365e mysql 0.5.0 4b84798
mysql 0.4.0 6c5cf5bf mysql 0.5.1 fab5940b
mysql 0.5.0 b40e1b09 mysql 0.5.2 d8a92aa3
mysql 0.5.1 0f312d5c mysql 0.5.3 HEAD
mysql 0.5.2 1ec10165 nats 0.1.0 5ca8823
mysql 0.5.3 8267072d nats 0.2.0 c07c4bbd
mysql 0.6.0 HEAD
nats 0.1.0 e9716091
nats 0.2.0 6c5cf5bf
nats 0.3.0 78366f19 nats 0.3.0 78366f19
nats 0.3.1 c62a83a7 nats 0.3.1 b7375f73
nats 0.4.0 898374b5 nats 0.4.0 da1e705a
nats 0.4.1 8267072d nats 0.4.1 HEAD
nats 0.5.0 HEAD postgres 0.1.0 f642698
postgres 0.1.0 263e47be postgres 0.2.0 7cd7de73
postgres 0.2.0 53f2365e postgres 0.2.1 4a97e297
postgres 0.2.1 d7cfa53c postgres 0.3.0 995dea6f
postgres 0.3.0 dfbc210b postgres 0.4.0 ec283c33
postgres 0.4.0 e9716091 postgres 0.4.1 5ca8823
postgres 0.4.1 91b0499a postgres 0.5.0 c07c4bbd
postgres 0.5.0 6c5cf5bf postgres 0.6.0 2a4768a
postgres 0.6.0 b40e1b09 postgres 0.6.2 54fd61c
postgres 0.6.2 0f312d5c postgres 0.7.0 dc9d8bb
postgres 0.7.0 4b90bf5a postgres 0.7.1 175a65f
postgres 0.7.1 1ec10165 postgres 0.8.0 cb7b8158
postgres 0.8.0 4e68e65c postgres 0.9.0 HEAD
postgres 0.9.0 8267072d rabbitmq 0.1.0 f642698
postgres 0.10.0 HEAD rabbitmq 0.2.0 5ca8823
rabbitmq 0.1.0 263e47be rabbitmq 0.3.0 9e33dc0
rabbitmq 0.2.0 53f2365e rabbitmq 0.4.0 36d8855
rabbitmq 0.3.0 6c5cf5bf rabbitmq 0.4.1 35536bb
rabbitmq 0.4.0 b40e1b09 rabbitmq 0.4.2 00b2834e
rabbitmq 0.4.1 1128d0cb rabbitmq 0.4.3 d8a92aa3
rabbitmq 0.4.2 4b90bf5a rabbitmq 0.4.4 HEAD
rabbitmq 0.4.3 1ec10165 redis 0.1.1 f642698
rabbitmq 0.4.4 8267072d redis 0.2.0 5ca8823
rabbitmq 0.5.0 HEAD redis 0.3.0 c07c4bbd
redis 0.1.1 263e47be redis 0.3.1 b7375f73
redis 0.2.0 53f2365e redis 0.4.0 abc8f082
redis 0.3.0 6c5cf5bf redis 0.5.0 HEAD
redis 0.3.1 c62a83a7 tcp-balancer 0.1.0 f642698
redis 0.4.0 84f3ccc0 tcp-balancer 0.2.0 HEAD
redis 0.5.0 4e68e65c tenant 0.1.3 3d1b86c
redis 0.6.0 HEAD tenant 0.1.4 d200480
tcp-balancer 0.1.0 263e47be tenant 0.1.5 e3ab858
tcp-balancer 0.2.0 53f2365e tenant 1.0.0 7cd7de7
tcp-balancer 0.3.0 HEAD tenant 1.1.0 4da8ac3b
tenant 0.1.4 afc997ef tenant 1.2.0 15478a88
tenant 0.1.5 e3ab858a tenant 1.3.0 ceefae03
tenant 1.0.0 263e47be tenant 1.3.1 c56e5769
tenant 1.1.0 c0685f43 tenant 1.4.0 94c688f7
tenant 1.2.0 dfbc210b tenant 1.5.0 48128743
tenant 1.3.0 e9716091
tenant 1.3.1 91b0499a
tenant 1.4.0 71514249
tenant 1.5.0 1ec10165
tenant 1.6.0 df448b99 tenant 1.6.0 df448b99
tenant 1.6.1 c62a83a7 tenant 1.6.1 edbbb9be
tenant 1.6.2 898374b5 tenant 1.6.2 ccedc5fe
tenant 1.6.3 2057bb96 tenant 1.6.3 2057bb96
tenant 1.6.4 84f3ccc0 tenant 1.6.4 3c9e50a4
tenant 1.6.5 fde4bcfa tenant 1.6.5 f1e11451
tenant 1.6.6 4e68e65c tenant 1.6.6 d4634797
tenant 1.6.7 0ab39f20 tenant 1.6.7 06afcf27
tenant 1.6.8 bc95159a tenant 1.6.8 4cc48e6f
tenant 1.7.0 24fa7222 tenant 1.7.0 6c73e3f3
tenant 1.8.0 160e4e2a tenant 1.8.0 e2369ba
tenant 1.9.0 728743db tenant 1.9.0 HEAD
tenant 1.9.1 HEAD virtual-machine 0.1.4 f2015d6
virtual-machine 0.1.4 f2015d65 virtual-machine 0.1.5 7cd7de7
virtual-machine 0.1.5 263e47be virtual-machine 0.2.0 5ca8823
virtual-machine 0.2.0 c0685f43 virtual-machine 0.3.0 b908400
virtual-machine 0.3.0 6c5cf5bf virtual-machine 0.4.0 4746d51
virtual-machine 0.4.0 b8e33d19 virtual-machine 0.5.0 cad9cde
virtual-machine 0.5.0 1ec10165 virtual-machine 0.6.0 0e728870
virtual-machine 0.6.0 4e68e65c virtual-machine 0.6.1 af58018a
virtual-machine 0.7.0 e23286a3 virtual-machine 0.7.0 af58018a
virtual-machine 0.7.1 0ab39f20 virtual-machine 0.7.1 05857b95
virtual-machine 0.8.0 3fa4dd3a virtual-machine 0.8.0 3fa4dd3
virtual-machine 0.8.1 93c46161 virtual-machine 0.8.1 3fa4dd3a
virtual-machine 0.8.2 HEAD virtual-machine 0.8.2 HEAD
vm-disk 0.1.0 HEAD vm-disk 0.1.0 HEAD
vm-instance 0.1.0 1ec10165 vm-instance 0.1.0 ced8e5b9
vm-instance 0.2.0 84f3ccc0 vm-instance 0.2.0 4f767ee3
vm-instance 0.3.0 4e68e65c vm-instance 0.3.0 0e728870
vm-instance 0.4.0 e23286a3 vm-instance 0.4.0 af58018a
vm-instance 0.4.1 0ab39f20 vm-instance 0.4.1 05857b95
vm-instance 0.5.0 3fa4dd3a vm-instance 0.5.0 3fa4dd3
vm-instance 0.5.1 HEAD vm-instance 0.5.1 HEAD
vpn 0.1.0 263e47be vpn 0.1.0 f642698
vpn 0.2.0 53f2365e vpn 0.2.0 7151424
vpn 0.3.0 6c5cf5bf vpn 0.3.0 a2bcf100
vpn 0.3.1 1ec10165 vpn 0.3.1 HEAD
vpn 0.4.0 HEAD


@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
-version: 0.4.0
+version: 0.3.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to


@@ -22,10 +22,8 @@ The VPN Service is powered by the Outline Server, an advanced and user-friendly
### Configuration parameters

| Name              | Description                                 | Value |
| ----------------- | ------------------------------------------- | ----- |
| `host`            | Host used to substitute into generated URLs | `""`  |
| `users`           | Users configuration                         | `{}`  |
| `externalIPs`     | List of externalIPs for service.            | `[]`  |
-| `resources`       | Resources                                   | `{}`  |
-| `resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
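A minimal sketch of the remaining VPN values (the hostname and address are placeholders; see the chart's values.yaml for the exact `users` entry format):

```yaml
host: vpn.example.org
users: {}
externalIPs:
  - 11.22.33.44
```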


@@ -1,50 +0,0 @@
{{/*
Copyright Broadcom, Inc. All Rights Reserved.
SPDX-License-Identifier: APACHE-2.0
*/}}
{{/* vim: set filetype=mustache: */}}
{{/*
Return a resource request/limit object based on a given preset.
These presets are for basic testing and not meant to be used in production
{{ include "resources.preset" (dict "type" "nano") -}}
*/}}
{{- define "resources.preset" -}}
{{/* The limits are the requests increased by 50% (except ephemeral-storage and xlarge/2xlarge sizes)*/}}
{{- $presets := dict
"nano" (dict
"requests" (dict "cpu" "100m" "memory" "128Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "150m" "memory" "192Mi" "ephemeral-storage" "2Gi")
)
"micro" (dict
"requests" (dict "cpu" "250m" "memory" "256Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "375m" "memory" "384Mi" "ephemeral-storage" "2Gi")
)
"small" (dict
"requests" (dict "cpu" "500m" "memory" "512Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "768Mi" "ephemeral-storage" "2Gi")
)
"medium" (dict
"requests" (dict "cpu" "500m" "memory" "1024Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "750m" "memory" "1536Mi" "ephemeral-storage" "2Gi")
)
"large" (dict
"requests" (dict "cpu" "1.0" "memory" "2048Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "1.5" "memory" "3072Mi" "ephemeral-storage" "2Gi")
)
"xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "3.0" "memory" "6144Mi" "ephemeral-storage" "2Gi")
)
"2xlarge" (dict
"requests" (dict "cpu" "1.0" "memory" "3072Mi" "ephemeral-storage" "50Mi")
"limits" (dict "cpu" "6.0" "memory" "12288Mi" "ephemeral-storage" "2Gi")
)
}}
{{- if hasKey $presets .type -}}
{{- index $presets .type | toYaml -}}
{{- else -}}
{{- printf "ERROR: Preset key '%s' invalid. Allowed values are %s" .type (join "," (keys $presets)) | fail -}}
{{- end -}}
{{- end -}}


@@ -42,11 +42,6 @@ spec:
containers:
  - name: outline-vpn
    image: quay.io/outline/shadowbox:stable
-    {{- if .Values.resources }}
-    resources: {{- toYaml .Values.resources | nindent 10 }}
-    {{- else if ne .Values.resourcesPreset "none" }}
-    resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 10 }}
-    {{- end }}
    ports:
      - containerPort: 40000
        protocol: TCP


@@ -24,16 +24,6 @@
"items": { "items": {
"type": "string" "type": "string"
} }
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
} }
} }
} }


@@ -29,16 +29,3 @@ users: {}
## - "11.22.33.46" ## - "11.22.33.46"
## ##
externalIPs: [] externalIPs: []
## @param resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "nano"


@@ -1,2 +1,2 @@
cozystack:
-  image: ghcr.io/cozystack/cozystack/installer:v0.28.0@sha256:71ae2037ca44d49bbcf8be56c127ee92f2486089a8ea1cdd6508af49705956ac
+  image: ghcr.io/cozystack/cozystack/installer:v0.28.1@sha256:2e7c6cb288500e59768fccfe76d7750a7a3a44187ae388dd2cea0a0193b791b7


@@ -218,8 +218,3 @@ releases:
    privileged: true
    optional: true
    dependsOn: [cilium]
-  - name: reloader
-    releaseName: reloader
-    chart: cozy-reloader
-    namespace: cozy-reloader


@@ -205,7 +205,7 @@ releases:
    releaseName: piraeus-operator
    chart: cozy-piraeus-operator
    namespace: cozy-linstor
-    dependsOn: [cilium,kubeovn,cert-manager,victoria-metrics-operator]
+    dependsOn: [cilium,kubeovn,cert-manager]
  - name: linstor
    releaseName: linstor
@@ -380,8 +380,3 @@ releases:
    namespace: cozy-vertical-pod-autoscaler
    privileged: true
    dependsOn: [monitoring-agents]
-  - name: reloader
-    releaseName: reloader
-    chart: cozy-reloader
-    namespace: cozy-reloader

Some files were not shown because too many files have changed in this diff.