Compare commits


16 Commits

Author SHA1 Message Date
Andrei Kvapil
e98fa9cd72 Release v0.31.2 (#1071)
This PR prepares the release `v0.31.2`.
2025-06-17 02:27:18 +02:00
Andrei Kvapil
01d90bb736 [Backport release-0.31] Refactor roles and permissions for tenants
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-16 21:31:41 +02:00
github-actions
e04cfaaa58 Prepare release v0.31.2
Signed-off-by: github-actions <github-actions@github.com>
2025-06-16 19:26:34 +00:00
Andrei Kvapil
8c86905b22 [Backport release-0.31] [cozystack-controller] Fix RBAC for annotating namespaces (#1037)
# Description
Backport of #1031 to `release-0.31`.
2025-06-16 18:21:55 +02:00
Andrei Kvapil
84955d13ac [Backport release-0.31] [kafka] specify minimal working resource presets (#1041)
# Description
Backport of #1040 to `release-0.31`.
2025-06-16 18:21:42 +02:00
Andrei Kvapil
46e5044851 [Backport release-0.31] Dashboard update and fixes (#1066)
- [dashboard] Cumulative update (#1042)
- [dashboard] Remove dependency on listing secrets (#1066)
2025-06-16 18:21:17 +02:00
Andrei Kvapil
3a3f44a427 [dashboard] Remove dependency on listing secrets
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-16 18:20:46 +02:00
Andrei Kvapil
0cc35a212c [dashboard] Cumulative update
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-16 18:20:39 +02:00
Andrei Kvapil
0bb79adec0 [Backport release-0.31] [docs] Review the Clickhouse app docs (#1065)
# Description
Backport of #1059 to `release-0.31`.
2025-06-16 18:19:29 +02:00
Andrei Kvapil
9e89a9d3ad [Backport release-0.31] [bugfix] fix distro full bundle (#1064)
# Description
Backport of #1056 to `release-0.31`.
2025-06-16 18:19:19 +02:00
Andrei Kvapil
ddfb1d65e3 [Backport release-0.31] [platform] decrease resources for system applications (#1058)
# Description
Backport of #1054 to `release-0.31`.
2025-06-16 18:18:52 +02:00
Nick Volynkin
efafe16d3b [docs] Review the Clickhouse app docs
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
(cherry picked from commit 980185ca2b)
2025-06-16 16:14:48 +00:00
kklinch0
e1b4861c8a [bugfix] fix distro full bundle
Signed-off-by: kklinch0 <kklinch0@gmail.com>
(cherry picked from commit 6a713e5eb4)
2025-06-16 16:14:37 +00:00
kklinch0
4d0bf14fc3 [platform] cut resources
Signed-off-by: kklinch0 <kklinch0@gmail.com>
(cherry picked from commit 0fa70d9d38)
2025-06-14 06:33:03 +00:00
Andrei Kvapil
35069ff3e9 [kafka] specify minimal working resource presets
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
(cherry picked from commit ba97a4593c)
2025-06-09 22:34:37 +00:00
Andrei Kvapil
b9afd69df0 [cozystack-controller] Fix RBAC for annotating namespaces
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
(cherry picked from commit ac5145be87)
2025-06-09 08:16:37 +00:00
303 changed files with 1889 additions and 52206 deletions

View File

@@ -1,24 +0,0 @@
<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of the Cozystack component:
- For system components: [platform], [system], [linstor], [cilium], [kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres], [virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesitate to ask for opinions and reviews in the community chats, even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported to a previous version.
-->
## What this PR does
### Release note
<!-- Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->
```release-note
[]
```
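For illustration, a filled-in release note might read as follows (a hypothetical example, not taken from a real PR):
```release-note
[dashboard] Fix listing of tenant secrets in the application details view.
```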

View File

@@ -30,13 +30,12 @@ jobs:
run: |
sudo apt update
sudo apt install curl -y
curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash -
sudo apt install nodejs -y
sudo apt install npm -y
git clone --branch 2.7.0 --depth 1 https://github.com/bitnami/readme-generator-for-helm.git
git clone https://github.com/bitnami/readme-generator-for-helm
cd ./readme-generator-for-helm
npm install
npm install -g @yao-pkg/pkg
npm install -g pkg
pkg . -o /usr/local/bin/readme-generator
- name: Run pre-commit hooks

View File

@@ -1,15 +1,100 @@
name: "Releasing PR"
name: Releasing PR
on:
pull_request:
types: [closed]
types: [labeled, opened, synchronize, reopened, closed]
# Cancel inflight runs for the same PR when a new push arrives.
concurrency:
group: pr-${{ github.workflow }}-${{ github.event.pull_request.number }}
group: pull-requests-release-${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
jobs:
verify:
name: Test Release
runs-on: [self-hosted]
permissions:
contents: read
packages: write
if: |
contains(github.event.pull_request.labels.*.name, 'release') &&
github.event.action != 'closed'
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
fetch-tags: true
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
registry: ghcr.io
- name: Extract tag from PR branch
id: get_tag
uses: actions/github-script@v7
with:
script: |
const branch = context.payload.pull_request.head.ref;
const m = branch.match(/^release-(\d+\.\d+\.\d+(?:[-\w\.]+)?)$/);
if (!m) {
core.setFailed(`❌ Branch '${branch}' does not match 'release-X.Y.Z[-suffix]'`);
return;
}
const tag = `v${m[1]}`;
core.setOutput('tag', tag);
- name: Find draft release and get asset IDs
id: fetch_assets
uses: actions/github-script@v7
with:
github-token: ${{ secrets.GH_PAT }}
script: |
const tag = '${{ steps.get_tag.outputs.tag }}';
const releases = await github.rest.repos.listReleases({
owner: context.repo.owner,
repo: context.repo.repo,
per_page: 100
});
const draft = releases.data.find(r => r.tag_name === tag && r.draft);
if (!draft) {
core.setFailed(`Draft release '${tag}' not found`);
return;
}
const findAssetId = (name) =>
draft.assets.find(a => a.name === name)?.id;
const installerId = findAssetId("cozystack-installer.yaml");
const diskId = findAssetId("nocloud-amd64.raw.xz");
if (!installerId || !diskId) {
core.setFailed("Missing required assets");
return;
}
core.setOutput("installer_id", installerId);
core.setOutput("disk_id", diskId);
- name: Download assets from GitHub API
run: |
mkdir -p _out/assets
curl -sSL \
-H "Authorization: token ${GH_PAT}" \
-H "Accept: application/octet-stream" \
-o _out/assets/cozystack-installer.yaml \
"https://api.github.com/repos/${GITHUB_REPOSITORY}/releases/assets/${{ steps.fetch_assets.outputs.installer_id }}"
curl -sSL \
-H "Authorization: token ${GH_PAT}" \
-H "Accept: application/octet-stream" \
-o _out/assets/nocloud-amd64.raw.xz \
"https://api.github.com/repos/${GITHUB_REPOSITORY}/releases/assets/${{ steps.fetch_assets.outputs.disk_id }}"
env:
GH_PAT: ${{ secrets.GH_PAT }}
- name: Run tests
run: make test
finalize:
name: Finalize Release
runs-on: [self-hosted]
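The "Extract tag from PR branch" step above accepts only branches named `release-X.Y.Z` with an optional suffix. A minimal bash stand-in for that github-script check (branch names are examples):
```bash
# Approximate bash mirror of the workflow's branch regex.
# Matches: release-0.31.2, release-1.2.3-rc.1; rejects: release-0.31, main.
branch="release-0.31.2"
if [[ "$branch" =~ ^release-([0-9]+\.[0-9]+\.[0-9]+[-A-Za-z0-9_.]*)$ ]]; then
  echo "tag=v${BASH_REMATCH[1]}"   # -> tag=v0.31.2
else
  echo "Branch '$branch' does not match 'release-X.Y.Z[-suffix]'" >&2
  exit 1
fi
```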

View File

@@ -4,9 +4,8 @@ on:
pull_request:
types: [labeled, opened, synchronize, reopened]
# Cancel inflight runs for the same PR when a new push arrives.
concurrency:
group: pr-${{ github.workflow }}-${{ github.event.pull_request.number }}
group: pull-requests-${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
jobs:
@@ -34,215 +33,29 @@ jobs:
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
registry: ghcr.io
env:
DOCKER_CONFIG: ${{ runner.temp }}/.docker
- name: Build
run: make build
env:
DOCKER_CONFIG: ${{ runner.temp }}/.docker
- name: Build Talos image
run: make -C packages/core/installer talos-nocloud
- name: Upload installer
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: cozystack-installer
path: _out/assets/cozystack-installer.yaml
name: cozystack-artefacts
path: |
_out/assets/nocloud-amd64.raw.xz
_out/assets/cozystack-installer.yaml
- name: Upload Talos image
uses: actions/upload-artifact@v4
with:
name: talos-image
path: _out/assets/nocloud-amd64.raw.xz
resolve_assets:
name: "Resolve assets"
runs-on: ubuntu-latest
if: contains(github.event.pull_request.labels.*.name, 'release')
outputs:
installer_id: ${{ steps.fetch_assets.outputs.installer_id }}
disk_id: ${{ steps.fetch_assets.outputs.disk_id }}
steps:
- name: Checkout code
if: contains(github.event.pull_request.labels.*.name, 'release')
uses: actions/checkout@v4
with:
fetch-depth: 0
fetch-tags: true
- name: Extract tag from PR branch (release PR)
if: contains(github.event.pull_request.labels.*.name, 'release')
id: get_tag
uses: actions/github-script@v7
with:
script: |
const branch = context.payload.pull_request.head.ref;
const m = branch.match(/^release-(\d+\.\d+\.\d+(?:[-\w\.]+)?)$/);
if (!m) {
core.setFailed(`❌ Branch '${branch}' does not match 'release-X.Y.Z[-suffix]'`);
return;
}
core.setOutput('tag', `v${m[1]}`);
- name: Find draft release & asset IDs (release PR)
if: contains(github.event.pull_request.labels.*.name, 'release')
id: fetch_assets
uses: actions/github-script@v7
with:
github-token: ${{ secrets.GH_PAT }}
script: |
const tag = '${{ steps.get_tag.outputs.tag }}';
const releases = await github.rest.repos.listReleases({
owner: context.repo.owner,
repo: context.repo.repo,
per_page: 100
});
const draft = releases.data.find(r => r.tag_name === tag && r.draft);
if (!draft) {
core.setFailed(`Draft release '${tag}' not found`);
return;
}
const find = (n) => draft.assets.find(a => a.name === n)?.id;
const installerId = find('cozystack-installer.yaml');
const diskId = find('nocloud-amd64.raw.xz');
if (!installerId || !diskId) {
core.setFailed('Required assets missing in draft release');
return;
}
core.setOutput('installer_id', installerId);
core.setOutput('disk_id', diskId);
prepare_env:
name: "Prepare environment"
test:
name: Test
runs-on: [self-hosted]
permissions:
contents: read
packages: read
needs: ["build", "resolve_assets"]
if: ${{ always() && (needs.build.result == 'success' || needs.resolve_assets.result == 'success') }}
needs: build
steps:
# ▸ Regular PR path: download artefacts produced by the *build* job
- name: "Download Talos image (regular PR)"
if: "!contains(github.event.pull_request.labels.*.name, 'release')"
uses: actions/download-artifact@v4
with:
name: talos-image
path: _out/assets
# ▸ Release PR path: fetch artefacts from the corresponding draft release
- name: Download assets from draft release (release PR)
if: contains(github.event.pull_request.labels.*.name, 'release')
run: |
curl -sSL -H "Authorization: token ${GH_PAT}" -H "Accept: application/octet-stream" \
-o _out/assets/nocloud-amd64.raw.xz \
"https://api.github.com/repos/${GITHUB_REPOSITORY}/releases/assets/${{ needs.resolve_assets.outputs.disk_id }}"
env:
GH_PAT: ${{ secrets.GH_PAT }}
# ▸ Start actual job steps
- name: Set sandbox ID
run: echo "SANDBOX_NAME=cozy-e2e-sandbox-$(echo "${GITHUB_REPOSITORY}:${GITHUB_WORKFLOW}:${GITHUB_REF}" | sha256sum | cut -c1-10)" >> $GITHUB_ENV
- name: Prepare workspace
run: |
cd ..
rm -rf /tmp/$SANDBOX_NAME
cp -r cozystack /tmp/$SANDBOX_NAME
sudo systemctl stop "rm-workspace-$SANDBOX_NAME.timer" "rm-workspace-$SANDBOX_NAME.service" 2>/dev/null || true
sudo systemctl reset-failed "rm-workspace-$SANDBOX_NAME.timer" "rm-workspace-$SANDBOX_NAME.service" 2>/dev/null || true
sudo systemctl daemon-reexec
sudo systemd-run \
--on-calendar="$(date -d 'now + 24 hours' '+%Y-%m-%d %H:%M:%S')" \
--unit=rm-workspace-$SANDBOX_NAME \
rm -rf /tmp/$SANDBOX_NAME
- name: Prepare environment
run: |
cd /tmp/$SANDBOX_NAME
make SANDBOX_NAME=$SANDBOX_NAME prepare-env
install_cozystack:
name: "Install Cozystack"
runs-on: [self-hosted]
permissions:
contents: read
packages: read
needs: ["prepare_env", "resolve_assets"]
if: ${{ always() && needs.prepare_env.result == 'success' }}
steps:
- name: Prepare _out/assets directory
run: mkdir -p _out/assets
# ▸ Regular PR path: download artefacts produced by the *build* job
- name: "Download installer (regular PR)"
if: "!contains(github.event.pull_request.labels.*.name, 'release')"
uses: actions/download-artifact@v4
with:
name: cozystack-installer
path: _out/assets
# ▸ Release PR path: fetch artefacts from the corresponding draft release
- name: Download assets from draft release (release PR)
if: contains(github.event.pull_request.labels.*.name, 'release')
run: |
curl -sSL -H "Authorization: token ${GH_PAT}" -H "Accept: application/octet-stream" \
-o _out/assets/cozystack-installer.yaml \
"https://api.github.com/repos/${GITHUB_REPOSITORY}/releases/assets/${{ needs.resolve_assets.outputs.installer_id }}"
env:
GH_PAT: ${{ secrets.GH_PAT }}
# ▸ Start actual job steps
- name: Set sandbox ID
run: echo "SANDBOX_NAME=cozy-e2e-sandbox-$(echo "${GITHUB_REPOSITORY}:${GITHUB_WORKFLOW}:${GITHUB_REF}" | sha256sum | cut -c1-10)" >> $GITHUB_ENV
- name: Install Cozystack into sandbox
run: |
cd /tmp/$SANDBOX_NAME
make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME install-cozystack
detect_test_matrix:
name: "Detect e2e test matrix"
runs-on: ubuntu-latest
outputs:
matrix: ${{ steps.set.outputs.matrix }}
steps:
- uses: actions/checkout@v4
- id: set
run: |
apps=$(find hack/e2e-apps -maxdepth 1 -mindepth 1 -name '*.bats' | \
awk -F/ '{sub(/\..+/, "", $NF); print $NF}' | jq -R . | jq -cs .)
echo "matrix={\"app\":$apps}" >> "$GITHUB_OUTPUT"
test_apps:
strategy:
matrix: ${{ fromJson(needs.detect_test_matrix.outputs.matrix) }}
name: Test ${{ matrix.app }}
runs-on: [self-hosted]
needs: [install_cozystack,detect_test_matrix]
if: ${{ always() && (needs.install_cozystack.result == 'success' && needs.detect_test_matrix.result == 'success') }}
steps:
- name: Set sandbox ID
run: echo "SANDBOX_NAME=cozy-e2e-sandbox-$(echo "${GITHUB_REPOSITORY}:${GITHUB_WORKFLOW}:${GITHUB_REF}" | sha256sum | cut -c1-10)" >> $GITHUB_ENV
- name: E2E Apps
run: |
cd /tmp/$SANDBOX_NAME
make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME test-apps-${{ matrix.app }}
cleanup:
name: Tear down environment
runs-on: [self-hosted]
needs: test_apps
if: ${{ always() && needs.test_apps.result == 'success' }}
# Never run when the PR carries the "release" label.
if: |
!contains(github.event.pull_request.labels.*.name, 'release')
steps:
- name: Checkout code
@@ -251,19 +64,11 @@ jobs:
fetch-depth: 0
fetch-tags: true
- name: Set sandbox ID
run: echo "SANDBOX_NAME=cozy-e2e-sandbox-$(echo "${GITHUB_REPOSITORY}:${GITHUB_WORKFLOW}:${GITHUB_REF}" | sha256sum | cut -c1-10)" >> $GITHUB_ENV
- name: Download artifacts
uses: actions/download-artifact@v4
with:
name: cozystack-artefacts
path: _out/assets/
- name: Tear down sandbox
run: make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME delete
- name: Remove workspace
run: rm -rf /tmp/$SANDBOX_NAME
- name: Tear down timers
run: |
sudo systemctl stop "rm-workspace-$SANDBOX_NAME.timer" "rm-workspace-$SANDBOX_NAME.service" 2>/dev/null || true
sudo systemctl reset-failed "rm-workspace-$SANDBOX_NAME.timer" "rm-workspace-$SANDBOX_NAME.service" 2>/dev/null || true
sudo systemctl stop "teardown-$SANDBOX_NAME.timer" "teardown-$SANDBOX_NAME.service" 2>/dev/null || true
sudo systemctl reset-failed "teardown-$SANDBOX_NAME.timer" "teardown-$SANDBOX_NAME.service" 2>/dev/null || true
sudo systemctl daemon-reexec
- name: Test
run: make test
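Several of the self-hosted jobs above recompute the same `SANDBOX_NAME` so they can share one sandbox without passing state between jobs. A standalone sketch of that derivation, with example values for the GitHub-provided variables:
```bash
# Deterministic sandbox name: the same repo/workflow/ref always hashes to the
# same 10-character suffix, so every job in a run resolves the same sandbox.
GITHUB_REPOSITORY="cozystack/cozystack"   # example value
GITHUB_WORKFLOW="Pull Requests"           # example value; the actual workflow name may differ
GITHUB_REF="refs/pull/1071/merge"         # example value
SANDBOX_NAME="cozy-e2e-sandbox-$(echo "${GITHUB_REPOSITORY}:${GITHUB_WORKFLOW}:${GITHUB_REF}" | sha256sum | cut -c1-10)"
echo "$SANDBOX_NAME"   # cozy-e2e-sandbox- followed by 10 hex characters
```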

View File

@@ -99,15 +99,11 @@ jobs:
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
registry: ghcr.io
env:
DOCKER_CONFIG: ${{ runner.temp }}/.docker
# Build project artifacts
- name: Build
if: steps.check_release.outputs.skip == 'false'
run: make build
env:
DOCKER_CONFIG: ${{ runner.temp }}/.docker
# Commit built artifacts
- name: Commit release artifacts

View File

@@ -9,6 +9,7 @@ build-deps:
build: build-deps
make -C packages/apps/http-cache image
make -C packages/apps/postgres image
make -C packages/apps/mysql image
make -C packages/apps/clickhouse image
make -C packages/apps/kubernetes image
@@ -48,10 +49,6 @@ test:
make -C packages/core/testing apply
make -C packages/core/testing test
prepare-env:
make -C packages/core/testing apply
make -C packages/core/testing prepare-cluster
generate:
hack/update-codegen.sh

View File

@@ -12,15 +12,11 @@
**Cozystack** is a free PaaS platform and framework for building clouds.
Cozystack is a [CNCF Sandbox Level Project](https://www.cncf.io/sandbox-projects/) that was originally built and sponsored by [Ænix](https://aenix.io/).
With Cozystack, you can transform a bunch of servers into an intelligent system with a simple REST API for spawning Kubernetes clusters,
Database-as-a-Service, virtual machines, load balancers, HTTP caching services, and other services with ease.
Use Cozystack to build your own cloud or provide a cost-effective development environment.
![Cozystack user interface](https://cozystack.io/img/screenshot.png)
## Use-Cases
* [**Using Cozystack to build a public cloud**](https://cozystack.io/docs/guides/use-cases/public-cloud/)
@@ -32,6 +28,9 @@ You can use Cozystack as a platform to build a private cloud powered by Infrastr
* [**Using Cozystack as a Kubernetes distribution**](https://cozystack.io/docs/guides/use-cases/kubernetes-distribution/)
You can use Cozystack as a Kubernetes distribution for Bare Metal
## Screenshot
![Cozystack screenshot](https://cozystack.io/img/screenshot.png)
## Documentation
@@ -60,10 +59,7 @@ Commits are used to generate the changelog, and their author will be referenced
If you have **Feature Requests** please use the [Discussion's Feature Request section](https://github.com/cozystack/cozystack/discussions/categories/feature-requests).
## Community
You are welcome to join our [Telegram group](https://t.me/cozystack) and come to our weekly community meetings.
Add them to your [Google Calendar](https://calendar.google.com/calendar?cid=ZTQzZDIxZTVjOWI0NWE5NWYyOGM1ZDY0OWMyY2IxZTFmNDMzZTJlNjUzYjU2ZGJiZGE3NGNhMzA2ZjBkMGY2OEBncm91cC5jYWxlbmRhci5nb29nbGUuY29t) or [iCal](https://calendar.google.com/calendar/ical/e43d21e5c9b45a95f28c5d649c2cb1e1f433e2e653b56dbbda74ca306f0d0f68%40group.calendar.google.com/public/basic.ics) for convenience.
You are welcome to join our weekly community meetings (just add these events to your [Google Calendar](https://calendar.google.com/calendar?cid=ZTQzZDIxZTVjOWI0NWE5NWYyOGM1ZDY0OWMyY2IxZTFmNDMzZTJlNjUzYjU2ZGJiZGE3NGNhMzA2ZjBkMGY2OEBncm91cC5jYWxlbmRhci5nb29nbGUuY29t) or [iCal](https://calendar.google.com/calendar/ical/e43d21e5c9b45a95f28c5d649c2cb1e1f433e2e653b56dbbda74ca306f0d0f68%40group.calendar.google.com/public/basic.ics)) or our [Telegram group](https://t.me/cozystack).
## License

View File

@@ -194,15 +194,7 @@ func main() {
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "TenantHelmReconciler")
os.Exit(1)
}
if err = (&controller.CozystackConfigReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "CozystackConfigReconciler")
setupLog.Error(err, "unable to create controller", "controller", "Workload")
os.Exit(1)
}

View File

@@ -1,71 +0,0 @@
Cozystack v0.32.0 is a significant release that brings new features, key fixes, and updates to underlying components.
## Major Features and Improvements
* [platform] Use `cozypkg` instead of Helm (@kvaps in https://github.com/cozystack/cozystack/pull/1057)
* [platform] Introduce the HelmRelease reconciler for system components. (@kvaps in https://github.com/cozystack/cozystack/pull/1033)
* [kubernetes] Enable tenant Kubernetes clusters to use container registry mirrors. Configure containerd for tenant Kubernetes clusters. (@klinch0 in https://github.com/cozystack/cozystack/pull/979, patched by @lllamnyp in https://github.com/cozystack/cozystack/pull/1032)
* [platform] Allow users to specify CPU requests in VCPUs. Use a library chart for resource management. (@lllamnyp in https://github.com/cozystack/cozystack/pull/972 and https://github.com/cozystack/cozystack/pull/1025)
* [platform] Annotate all child objects of apps with uniform labels for tracking by WorkloadMonitors. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1018 and https://github.com/cozystack/cozystack/pull/1024)
* [platform] Introduce `cluster-domain` option and un-hardcode `cozy.local`. (@kvaps in https://github.com/cozystack/cozystack/pull/1039)
* [platform] Get instance type when reconciling WorkloadMonitor (https://github.com/cozystack/cozystack/pull/1030)
* [virtual-machine] Add RBAC rules to allow port forwarding in KubeVirt for SSH via `virtctl`. (@mattia-eleuteri in https://github.com/cozystack/cozystack/pull/1027, patched by @klinch0 in https://github.com/cozystack/cozystack/pull/1028)
* [monitoring] Add events and audit inputs (@kevin880202 in https://github.com/cozystack/cozystack/pull/948)
## Security
* Resolve a security problem that allowed a tenant administrator to gain elevated privileges outside the tenant. (@kvaps in https://github.com/cozystack/cozystack/pull/1062)
## Fixes
* [dashboard] Fix a number of issues in the Cozystack Dashboard (@kvaps in https://github.com/cozystack/cozystack/pull/1042)
* [kafka] Specify minimal working resource presets. (@kvaps in https://github.com/cozystack/cozystack/pull/1040)
* [cilium] Fix Gateway API manifest. (@zdenekjanda in https://github.com/cozystack/cozystack/pull/1016)
* [platform] Fix RBAC for annotating namespaces. (@kvaps in https://github.com/cozystack/cozystack/pull/1031)
* [platform] Fix dependencies for paas-hosted bundle. (@kvaps in https://github.com/cozystack/cozystack/pull/1034)
* [platform] Reduce system resource consumption by using lesser resource presets for VerticalPodAutoscaler, SeaweedFS, and KubeOVN. (@klinch0 in https://github.com/cozystack/cozystack/pull/1054)
* [virtual-machine] Fix handling of cloudinit and ssh-key input for `virtual-machine` and `vm-instance` applications. (@gwynbleidd2106 in https://github.com/cozystack/cozystack/pull/1019 and https://github.com/cozystack/cozystack/pull/1020)
* [apps] Fix Clickhouse version parsing. (@kvaps in https://github.com/cozystack/cozystack/commit/28302e776e9d2bb8f424cf467619fa61d71ac49a)
* [apps] Add resource quotas for PostgreSQL jobs and fix application readme generation check in CI. (@klinch0 in https://github.com/cozystack/cozystack/pull/1051)
* [kube-ovn] Enable database health check. (@kvaps in https://github.com/cozystack/cozystack/pull/1047)
* [kubernetes] Fix upstream issue by updating Kubevirt-CCM. (@kvaps in https://github.com/cozystack/cozystack/pull/1052)
* [kubernetes] Fix resources and introduce a migration when upgrading tenant Kubernetes to v0.32.4. (@kvaps in https://github.com/cozystack/cozystack/pull/1073)
* [cluster-api] Add a missing migration for `capi-providers`. (@kvaps in https://github.com/cozystack/cozystack/pull/1072)
## Dependencies
* Introduce cozypkg and update it to v1.1.0. (@kvaps in https://github.com/cozystack/cozystack/pull/1057 and https://github.com/cozystack/cozystack/pull/1063)
* Update flux-operator to 0.22.0, Flux to 2.6.x. (@kingdonb in https://github.com/cozystack/cozystack/pull/1035)
* Update Talos Linux to v1.10.3. (@kvaps in https://github.com/cozystack/cozystack/pull/1006)
* Update Cilium to v1.17.4. (@kvaps in https://github.com/cozystack/cozystack/pull/1046)
* Update MetalLB to v0.15.2. (@kvaps in https://github.com/cozystack/cozystack/pull/1045)
* Update Kube-OVN to v1.13.13. (@kvaps in https://github.com/cozystack/cozystack/pull/1047)
## Documentation
* [Oracle Cloud Infrastructure installation guide](https://cozystack.io/docs/operations/talos/installation/oracle-cloud/). (@kvaps, @lllamnyp, and @NickVolynkin in https://github.com/cozystack/website/pull/168)
* [Cluster configuration with `talosctl`](https://cozystack.io/docs/operations/talos/configuration/talosctl/). (@NickVolynkin in https://github.com/cozystack/website/pull/211)
* [Configuring container registry mirrors for tenant Kubernetes clusters](https://cozystack.io/docs/operations/talos/configuration/air-gapped/#5-configure-container-registry-mirrors-for-tenant-kubernetes). (@klinch0 in https://github.com/cozystack/website/pull/210)
* [Explain application management strategies and available versions for managed applications](https://cozystack.io/docs/guides/applications/). (@NickVolynkin in https://github.com/cozystack/website/pull/219)
* [How to clean up etcd state](https://cozystack.io/docs/operations/faq/#how-to-clean-up-etcd-state). (@gwynbleidd2106 in https://github.com/cozystack/website/pull/214)
* [State that Cozystack is a CNCF Sandbox project](https://github.com/cozystack/cozystack?tab=readme-ov-file#cozystack). (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1055)
## Development, Testing, and CI/CD
* [tests] Add tests for applications `virtual-machine`, `vm-disk`, `vm-instance`, `postgresql`, `mysql`, and `clickhouse`. (@gwynbleidd2106 in https://github.com/cozystack/cozystack/pull/1048, patched by @kvaps in https://github.com/cozystack/cozystack/pull/1074)
* [tests] Fix concurrency for the `docker login` action. (@kvaps in https://github.com/cozystack/cozystack/pull/1014)
* [tests] Increase QEMU system disk size in tests. (@kvaps in https://github.com/cozystack/cozystack/pull/1011)
* [tests] Increase the waiting timeout for VMs in tests. (@kvaps in https://github.com/cozystack/cozystack/pull/1038)
* [ci] Separate build and testing jobs in CI. (@kvaps in https://github.com/cozystack/cozystack/pull/1005 and https://github.com/cozystack/cozystack/pull/1010)
* [ci] Fix the release assets. (@kvaps in https://github.com/cozystack/cozystack/pull/1006 and https://github.com/cozystack/cozystack/pull/1009)
## New Contributors
* @kevin880202 made their first contribution in https://github.com/cozystack/cozystack/pull/948
* @mattia-eleuteri made their first contribution in https://github.com/cozystack/cozystack/pull/1027
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.31.0...v0.32.0
<!--
HEAD https://github.com/cozystack/cozystack/commit/3ce6dbe8
-->

go.mod
View File

@@ -37,7 +37,6 @@ require (
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
github.com/evanphx/json-patch v4.12.0+incompatible // indirect
github.com/evanphx/json-patch/v5 v5.9.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fluxcd/pkg/apis/kustomize v1.6.1 // indirect
@@ -92,14 +91,14 @@ require (
go.opentelemetry.io/proto/otlp v1.3.1 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.0 // indirect
golang.org/x/crypto v0.31.0 // indirect
golang.org/x/crypto v0.28.0 // indirect
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 // indirect
golang.org/x/net v0.33.0 // indirect
golang.org/x/net v0.30.0 // indirect
golang.org/x/oauth2 v0.23.0 // indirect
golang.org/x/sync v0.10.0 // indirect
golang.org/x/sys v0.28.0 // indirect
golang.org/x/term v0.27.0 // indirect
golang.org/x/text v0.21.0 // indirect
golang.org/x/sync v0.8.0 // indirect
golang.org/x/sys v0.26.0 // indirect
golang.org/x/term v0.25.0 // indirect
golang.org/x/text v0.19.0 // indirect
golang.org/x/time v0.7.0 // indirect
golang.org/x/tools v0.26.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect

go.sum
View File

@@ -26,8 +26,8 @@ github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkp
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g=
github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/evanphx/json-patch v4.12.0+incompatible h1:4onqiflcdA9EOZ4RxV643DvftH5pOlLGNtQ5lPWQu84=
github.com/evanphx/json-patch v4.12.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch v0.5.2 h1:xVCHIVMUu1wtM/VkR9jVZ45N3FhZfYMMYGorLCR8P3k=
github.com/evanphx/json-patch v0.5.2/go.mod h1:ZWS5hhDbVDyob71nXKNL0+PWn6ToqBHMikGIFbs31qQ=
github.com/evanphx/json-patch/v5 v5.9.0 h1:kcBlZQbplgElYIlo/n1hJbls2z/1awpXxpRi0/FOJfg=
github.com/evanphx/json-patch/v5 v5.9.0/go.mod h1:VNkHZ/282BpEyt/tObQO8s5CMPmYYq14uClGH4abBuQ=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
@@ -212,8 +212,8 @@ go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.31.0 h1:ihbySMvVjLAeSH1IbfcRTkD/iNscyz8rGzjF/E5hV6U=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/crypto v0.28.0 h1:GBDwsMXVQi34v5CCYUm2jkJvu4cbtru2U4TN2PSyQnw=
golang.org/x/crypto v0.28.0/go.mod h1:rmgy+3RHxRZMyY0jjAJShp2zgEdOqj2AO7U0pYmeQ7U=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 h1:2dVuKD2vS7b0QIHQbpyTISPd0LeHDbnYEryqj5Q1ug8=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56/go.mod h1:M4RDyNAINzryxdtnbRXRL/OHtkFuWGRjvuhBJpk2IlY=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
@@ -222,26 +222,26 @@ golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.33.0 h1:74SYHlV8BIgHIFC/LrYkOGIwL19eTYXQ5wc6TBuO36I=
golang.org/x/net v0.33.0/go.mod h1:HXLR5J+9DxmrqMwG9qjGCxZ+zKXxBru04zlTvWlWuN4=
golang.org/x/net v0.30.0 h1:AcW1SDZMkb8IpzCdQUaIq2sP4sZ4zw+55h6ynffypl4=
golang.org/x/net v0.30.0/go.mod h1:2wGyMJ5iFasEhkwi13ChkO/t1ECNC4X4eBKkVFyYFlU=
golang.org/x/oauth2 v0.23.0 h1:PbgcYx2W7i4LvjJWEbf0ngHV6qJYr86PkAV3bXdLEbs=
golang.org/x/oauth2 v0.23.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.8.0 h1:3NFvSEYkUoMifnESzZl15y791HH1qU2xm6eCJU5ZPXQ=
golang.org/x/sync v0.8.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.27.0 h1:WP60Sv1nlK1T6SupCHbXzSaN0b9wUmsPoRS9b61A23Q=
golang.org/x/term v0.27.0/go.mod h1:iMsnZpn0cago0GOrHO2+Y7u7JPn5AylBrcoWkElMTSM=
golang.org/x/sys v0.26.0 h1:KHjCJyddX0LoSTb3J+vWpupP9p0oznkqVk/IfjymZbo=
golang.org/x/sys v0.26.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.25.0 h1:WtHI/ltw4NvSUig5KARz9h521QvRC8RmF/cuYqifU24=
golang.org/x/term v0.25.0/go.mod h1:RPyXicDX+6vLxogjjRxjgD2TKtmAO6NZBsBRfrOLu7M=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/text v0.19.0 h1:kTxAhCbGbxhK0IwgSKiMO5awPoDQ0RpfiVYBfK860YM=
golang.org/x/text v0.19.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY=
golang.org/x/time v0.7.0 h1:ntUhktv3OPE6TgYxXWv9vKvUSJyIFJlyohwbkEwPrKQ=
golang.org/x/time v0.7.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=

hack/e2e-apps/kubernetes.bats → hack/e2e-apps.bats Normal file → Executable file
View File

@@ -1,7 +1,29 @@
#!/usr/bin/env bats
# -----------------------------------------------------------------------------
# Cozystack end-to-end provisioning test (Bats)
# -----------------------------------------------------------------------------
@test "Create tenant with isolated mode enabled" {
kubectl create -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Tenant
metadata:
name: test
namespace: tenant-root
spec:
etcd: false
host: ""
ingress: false
isolated: true
monitoring: false
resourceQuotas: {}
seaweedfs: false
EOF
kubectl wait hr/tenant-test -n tenant-root --timeout=1m --for=condition=ready
kubectl wait namespace tenant-test --timeout=20s --for=jsonpath='{.status.phase}'=Active
}
@test "Create a tenant Kubernetes control plane" {
kubectl -n tenant-test get kuberneteses.apps.cozystack.io test ||
kubectl create -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Kubernetes
@@ -68,6 +90,5 @@ EOF
kubectl wait tcp -n tenant-test kubernetes-test --timeout=2m --for=jsonpath='{.status.kubernetesResources.version.status}'=Ready
kubectl wait deploy --timeout=4m --for=condition=available -n tenant-test kubernetes-test kubernetes-test-cluster-autoscaler kubernetes-test-kccm kubernetes-test-kcsi-controller
kubectl wait machinedeployment kubernetes-test-md0 -n tenant-test --timeout=1m --for=jsonpath='{.status.replicas}'=2
kubectl wait machinedeployment kubernetes-test-md0 -n tenant-test --timeout=10m --for=jsonpath='{.status.v1beta2.readyReplicas}'=2
kubectl -n tenant-test delete kuberneteses.apps.cozystack.io test
kubectl wait machinedeployment kubernetes-test-md0 -n tenant-test --timeout=5m --for=jsonpath='{.status.v1beta2.readyReplicas}'=2
}

View File

@@ -1,42 +0,0 @@
#!/usr/bin/env bats
@test "Create DB ClickHouse" {
name='test'
kubectl -n tenant-test get clickhouses.apps.cozystack.io $name ||
kubectl create -f- <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: ClickHouse
metadata:
name: $name
namespace: tenant-test
spec:
size: 10Gi
logStorageSize: 2Gi
shards: 1
replicas: 2
storageClass: ""
logTTL: 15
users:
testuser:
password: xai7Wepo
backup:
enabled: false
s3Region: us-east-1
s3Bucket: s3.example.org/clickhouse-backups
schedule: "0 2 * * *"
cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
resources: {}
resourcesPreset: "nano"
EOF
sleep 5
kubectl -n tenant-test wait hr clickhouse-$name --timeout=20s --for=condition=ready
timeout 180 sh -ec "until kubectl -n tenant-test get svc chendpoint-clickhouse-$name -o jsonpath='{.spec.ports[*].port}' | grep -q '8123 9000'; do sleep 10; done"
kubectl -n tenant-test wait statefulset.apps/chi-clickhouse-$name-clickhouse-0-0 --timeout=120s --for=jsonpath='{.status.replicas}'=1
timeout 80 sh -ec "until kubectl -n tenant-test get endpoints chi-clickhouse-$name-clickhouse-0-0 -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
timeout 100 sh -ec "until kubectl -n tenant-test get svc chi-clickhouse-$name-clickhouse-0-0 -o jsonpath='{.spec.ports[*].port}' | grep -q '9000 8123 9009'; do sleep 10; done"
timeout 80 sh -ec "until kubectl -n tenant-test get sts chi-clickhouse-$name-clickhouse-0-1 ; do sleep 10; done"
kubectl -n tenant-test wait statefulset.apps/chi-clickhouse-$name-clickhouse-0-1 --timeout=140s --for=jsonpath='{.status.replicas}'=1
}
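The polling idiom used throughout these tests, `timeout N sh -ec "until <probe>; do sleep 10; done"`, retries a kubectl probe until it succeeds or the deadline expires. A sketch of the same pattern factored into a helper (the function name is hypothetical):
```bash
# wait_for: re-run a shell probe every 10s until it succeeds or the deadline passes.
# Usage: wait_for 180 "kubectl -n tenant-test get svc chendpoint-clickhouse-test -o jsonpath='{.spec.ports[*].port}' | grep -q '8123 9000'"
wait_for() {
  local deadline="$1"; shift
  timeout "$deadline" sh -ec "until $*; do sleep 10; done"
}
```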

View File

@@ -1,51 +0,0 @@
#!/usr/bin/env bats
@test "Create Kafka" {
name='test'
kubectl create -f- <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Kafka
metadata:
name: $name
namespace: tenant-test
spec:
external: false
kafka:
size: 10Gi
replicas: 2
storageClass: ""
resources: {}
resourcesPreset: "nano"
zookeeper:
size: 5Gi
replicas: 2
storageClass: ""
resources:
resourcesPreset: "nano"
topics:
- name: testResults
partitions: 1
replicas: 2
config:
min.insync.replicas: 2
- name: testOrders
config:
cleanup.policy: compact
segment.ms: 3600000
max.compaction.lag.ms: 5400000
min.insync.replicas: 2
partitions: 1
replicas: 2
EOF
sleep 5
kubectl -n tenant-test wait hr kafka-$name --timeout=30s --for=condition=ready
kubectl wait kafkas -n tenant-test test --timeout=60s --for=condition=ready
timeout 60 sh -ec "until kubectl -n tenant-test get pvc data-kafka-$name-zookeeper-0; do sleep 10; done"
kubectl -n tenant-test wait pvc data-kafka-$name-zookeeper-0 --timeout=50s --for=jsonpath='{.status.phase}'=Bound
timeout 40 sh -ec "until kubectl -n tenant-test get svc kafka-$name-zookeeper-client -o jsonpath='{.spec.ports[0].port}' | grep -q '2181'; do sleep 10; done"
timeout 40 sh -ec "until kubectl -n tenant-test get svc kafka-$name-zookeeper-nodes -o jsonpath='{.spec.ports[*].port}' | grep -q '2181 2888 3888'; do sleep 10; done"
timeout 80 sh -ec "until kubectl -n tenant-test get endpoints kafka-$name-zookeeper-nodes -o jsonpath='{.subsets[*].addresses[0].ip}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test delete kafka.apps.cozystack.io $name
kubectl -n tenant-test delete pvc data-kafka-$name-zookeeper-0
kubectl -n tenant-test delete pvc data-kafka-$name-zookeeper-1
}

View File

@@ -1,47 +0,0 @@
#!/usr/bin/env bats
@test "Create DB MySQL" {
name='test'
kubectl -n tenant-test get mysqls.apps.cozystack.io $name ||
kubectl create -f- <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: MySQL
metadata:
name: $name
namespace: tenant-test
spec:
external: false
size: 10Gi
replicas: 2
storageClass: ""
users:
testuser:
maxUserConnections: 1000
password: xai7Wepo
databases:
testdb:
roles:
admin:
- testuser
backup:
enabled: false
s3Region: us-east-1
s3Bucket: s3.example.org/postgres-backups
schedule: "0 2 * * *"
cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
resources: {}
resourcesPreset: "nano"
EOF
sleep 5
kubectl -n tenant-test wait hr mysql-$name --timeout=30s --for=condition=ready
timeout 80 sh -ec "until kubectl -n tenant-test get svc mysql-$name -o jsonpath='{.spec.ports[0].port}' | grep -q '3306'; do sleep 10; done"
timeout 80 sh -ec "until kubectl -n tenant-test get endpoints mysql-$name -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test wait statefulset.apps/mysql-$name --timeout=110s --for=jsonpath='{.status.replicas}'=2
timeout 80 sh -ec "until kubectl -n tenant-test get svc mysql-$name-metrics -o jsonpath='{.spec.ports[0].port}' | grep -q '9104'; do sleep 10; done"
timeout 40 sh -ec "until kubectl -n tenant-test get endpoints mysql-$name-metrics -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test wait deployment.apps/mysql-$name-metrics --timeout=90s --for=jsonpath='{.status.replicas}'=1
kubectl -n tenant-test delete mysqls.apps.cozystack.io $name
}

View File

@@ -1,55 +0,0 @@
#!/usr/bin/env bats
@test "Create DB PostgreSQL" {
name='test'
kubectl -n tenant-test get postgreses.apps.cozystack.io $name ||
kubectl create -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Postgres
metadata:
name: $name
namespace: tenant-test
spec:
external: false
size: 10Gi
replicas: 2
storageClass: ""
postgresql:
parameters:
max_connections: 100
quorum:
minSyncReplicas: 0
maxSyncReplicas: 0
users:
testuser:
password: xai7Wepo
databases:
testdb:
roles:
admin:
- testuser
backup:
enabled: false
s3Region: us-east-1
s3Bucket: s3.example.org/postgres-backups
schedule: "0 2 * * *"
cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
resources: {}
resourcesPreset: "nano"
EOF
sleep 5
kubectl -n tenant-test wait hr postgres-$name --timeout=100s --for=condition=ready
kubectl -n tenant-test wait job.batch postgres-$name-init-job --timeout=50s --for=condition=Complete
timeout 40 sh -ec "until kubectl -n tenant-test get svc postgres-$name-r -o jsonpath='{.spec.ports[0].port}' | grep -q '5432'; do sleep 10; done"
timeout 40 sh -ec "until kubectl -n tenant-test get svc postgres-$name-ro -o jsonpath='{.spec.ports[0].port}' | grep -q '5432'; do sleep 10; done"
timeout 40 sh -ec "until kubectl -n tenant-test get svc postgres-$name-rw -o jsonpath='{.spec.ports[0].port}' | grep -q '5432'; do sleep 10; done"
timeout 120 sh -ec "until kubectl -n tenant-test get endpoints postgres-$name-r -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
# for some reason it takes longer for the read-only endpoint to be ready
#timeout 120 sh -ec "until kubectl -n tenant-test get endpoints postgres-$name-ro -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
timeout 120 sh -ec "until kubectl -n tenant-test get endpoints postgres-$name-rw -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test delete postgreses.apps.cozystack.io $name
kubectl -n tenant-test delete job.batch/postgres-$name-init-job
}

View File

@@ -1,26 +0,0 @@
#!/usr/bin/env bats
@test "Create Redis" {
name='test'
kubectl create -f- <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Redis
metadata:
name: $name
namespace: tenant-test
spec:
external: false
size: 1Gi
replicas: 2
storageClass: ""
authEnabled: true
resources: {}
resourcesPreset: "nano"
EOF
sleep 5
kubectl -n tenant-test wait hr redis-$name --timeout=20s --for=condition=ready
kubectl -n tenant-test wait pvc redisfailover-persistent-data-rfr-redis-$name-0 --timeout=50s --for=jsonpath='{.status.phase}'=Bound
kubectl -n tenant-test wait deploy rfs-redis-$name --timeout=90s --for=condition=available
kubectl -n tenant-test wait sts rfr-redis-$name --timeout=90s --for=jsonpath='{.status.replicas}'=2
kubectl -n tenant-test delete redis.apps.cozystack.io $name
}

View File

@@ -1,48 +0,0 @@
#!/usr/bin/env bats
@test "Create a Virtual Machine" {
name='test'
kubectl -n tenant-test get virtualmachines.apps.cozystack.io $name ||
kubectl create -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: VirtualMachine
metadata:
name: $name
namespace: tenant-test
spec:
external: false
externalMethod: PortList
externalPorts:
- 22
instanceType: "u1.medium"
instanceProfile: ubuntu
systemDisk:
image: ubuntu
storage: 5Gi
storageClass: replicated
gpus: []
resources:
cpu: ""
memory: ""
sshKeys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF test@test
cloudInit: |
#cloud-config
users:
- name: test
shell: /bin/bash
sudo: ['ALL=(ALL) NOPASSWD: ALL']
groups: sudo
ssh_authorized_keys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF test@test
cloudInitSeed: ""
EOF
sleep 5
kubectl -n tenant-test wait hr virtual-machine-$name --timeout=10s --for=condition=ready
kubectl -n tenant-test wait dv virtual-machine-$name --timeout=150s --for=condition=ready
kubectl -n tenant-test wait pvc virtual-machine-$name --timeout=100s --for=jsonpath='{.status.phase}'=Bound
kubectl -n tenant-test wait vm virtual-machine-$name --timeout=100s --for=condition=ready
timeout 120 sh -ec "until kubectl -n tenant-test get vmi virtual-machine-$name -o jsonpath='{.status.interfaces[0].ipAddress}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test delete virtualmachines.apps.cozystack.io $name
}

View File

@@ -1,70 +0,0 @@
#!/usr/bin/env bats
@test "Create a VM Disk" {
name='test'
kubectl -n tenant-test get vmdisks.apps.cozystack.io $name ||
kubectl create -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: VMDisk
metadata:
name: $name
namespace: tenant-test
spec:
source:
http:
url: https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
optical: false
storage: 5Gi
storageClass: replicated
EOF
sleep 5
kubectl -n tenant-test wait hr vm-disk-$name --timeout=5s --for=condition=ready
kubectl -n tenant-test wait dv vm-disk-$name --timeout=150s --for=condition=ready
kubectl -n tenant-test wait pvc vm-disk-$name --timeout=100s --for=jsonpath='{.status.phase}'=Bound
}
@test "Create a VM Instance" {
diskName='test'
name='test'
kubectl -n tenant-test get vminstances.apps.cozystack.io $name ||
kubectl create -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: VMInstance
metadata:
name: $name
namespace: tenant-test
spec:
external: false
externalMethod: PortList
externalPorts:
- 22
running: true
instanceType: "u1.medium"
instanceProfile: ubuntu
disks:
- name: $diskName
gpus: []
resources:
cpu: ""
memory: ""
sshKeys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF test@test
cloudInit: |
#cloud-config
users:
- name: test
shell: /bin/bash
sudo: ['ALL=(ALL) NOPASSWD: ALL']
groups: sudo
ssh_authorized_keys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF test@test
cloudInitSeed: ""
EOF
sleep 5
timeout 20 sh -ec "until kubectl -n tenant-test get vmi vm-instance-$name -o jsonpath='{.status.interfaces[0].ipAddress}' | grep -q '[0-9]'; do sleep 5; done"
kubectl -n tenant-test wait hr vm-instance-$name --timeout=5s --for=condition=ready
kubectl -n tenant-test wait vm vm-instance-$name --timeout=20s --for=condition=ready
kubectl -n tenant-test delete vminstances.apps.cozystack.io $name
kubectl -n tenant-test delete vmdisks.apps.cozystack.io $diskName
}

hack/e2e-cluster.bats Executable file
View File

@@ -0,0 +1,391 @@
#!/usr/bin/env bats
# -----------------------------------------------------------------------------
# Cozystack end-to-end provisioning test (Bats)
# -----------------------------------------------------------------------------
@test "Required installer assets exist" {
if [ ! -f _out/assets/cozystack-installer.yaml ]; then
echo "Missing: _out/assets/cozystack-installer.yaml" >&2
exit 1
fi
if [ ! -f _out/assets/nocloud-amd64.raw.xz ]; then
echo "Missing: _out/assets/nocloud-amd64.raw.xz" >&2
exit 1
fi
}
@test "IPv4 forwarding is enabled" {
if [ "$(cat /proc/sys/net/ipv4/ip_forward)" != 1 ]; then
echo "IPv4 forwarding is disabled!" >&2
echo >&2
echo "Enable it with:" >&2
echo " echo 1 > /proc/sys/net/ipv4/ip_forward" >&2
exit 1
fi
}
@test "Clean previous VMs" {
kill $(cat srv1/qemu.pid srv2/qemu.pid srv3/qemu.pid 2>/dev/null) 2>/dev/null || true
rm -rf srv1 srv2 srv3
}
@test "Prepare networking and masquerading" {
ip link del cozy-br0 2>/dev/null || true
ip link add cozy-br0 type bridge
ip link set cozy-br0 up
ip address add 192.168.123.1/24 dev cozy-br0
# Masquerading rule: idempotent (delete first, then add)
iptables -t nat -D POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE 2>/dev/null || true
iptables -t nat -A POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE
}
@test "Prepare cloudinit drive for VMs" {
mkdir -p srv1 srv2 srv3
# Generate cloudinit ISOs
for i in 1 2 3; do
echo "hostname: srv${i}" > "srv${i}/meta-data"
cat > "srv${i}/user-data" <<'EOF'
#cloud-config
EOF
cat > "srv${i}/network-config" <<EOF
version: 2
ethernets:
eth0:
dhcp4: false
addresses:
- "192.168.123.1${i}/26"
gateway4: "192.168.123.1"
nameservers:
search: [cluster.local]
addresses: [8.8.8.8]
EOF
( cd "srv${i}" && genisoimage \
-output seed.img \
-volid cidata -rational-rock -joliet \
user-data meta-data network-config )
done
}
@test "Use Talos NoCloud image from assets" {
if [ ! -f _out/assets/nocloud-amd64.raw.xz ]; then
echo "Missing _out/assets/nocloud-amd64.raw.xz" 2>&1
exit 1
fi
rm -f nocloud-amd64.raw
cp _out/assets/nocloud-amd64.raw.xz .
xz --decompress nocloud-amd64.raw.xz
}
@test "Prepare VM disks" {
for i in 1 2 3; do
cp nocloud-amd64.raw srv${i}/system.img
qemu-img resize srv${i}/system.img 20G
qemu-img create srv${i}/data.img 100G
done
}
@test "Create tap devices" {
for i in 1 2 3; do
ip link del cozy-srv${i} 2>/dev/null || true
ip tuntap add dev cozy-srv${i} mode tap
ip link set cozy-srv${i} up
ip link set cozy-srv${i} master cozy-br0
done
}
@test "Boot QEMU VMs" {
for i in 1 2 3; do
qemu-system-x86_64 -machine type=pc,accel=kvm -cpu host -smp 8 -m 16384 \
-device virtio-net,netdev=net0,mac=52:54:00:12:34:5${i} \
-netdev tap,id=net0,ifname=cozy-srv${i},script=no,downscript=no \
-drive file=srv${i}/system.img,if=virtio,format=raw \
-drive file=srv${i}/seed.img,if=virtio,format=raw \
-drive file=srv${i}/data.img,if=virtio,format=raw \
-display none -daemonize -pidfile srv${i}/qemu.pid
done
# Give qemu a few seconds to start up networking
sleep 5
}
@test "Wait until Talos API port 50000 is reachable on all machines" {
timeout 60 sh -ec 'until nc -nz 192.168.123.11 50000 && nc -nz 192.168.123.12 50000 && nc -nz 192.168.123.13 50000; do sleep 1; done'
}
@test "Generate Talos cluster configuration" {
# Cluster-wide patches
cat > patch.yaml <<'EOF'
machine:
kubelet:
nodeIP:
validSubnets:
- 192.168.123.0/24
extraConfig:
maxPods: 512
kernel:
modules:
- name: openvswitch
- name: drbd
parameters:
- usermode_helper=disabled
- name: zfs
- name: spl
registries:
mirrors:
docker.io:
endpoints:
- https://mirror.gcr.io
files:
- content: |
[plugins]
[plugins."io.containerd.cri.v1.runtime"]
device_ownership_from_security_context = true
path: /etc/cri/conf.d/20-customization.part
op: create
cluster:
apiServer:
extraArgs:
oidc-issuer-url: "https://keycloak.example.org/realms/cozy"
oidc-client-id: "kubernetes"
oidc-username-claim: "preferred_username"
oidc-groups-claim: "groups"
network:
cni:
name: none
dnsDomain: cozy.local
podSubnets:
- 10.244.0.0/16
serviceSubnets:
- 10.96.0.0/16
EOF
# Control-plane-only patches
cat > patch-controlplane.yaml <<'EOF'
machine:
nodeLabels:
node.kubernetes.io/exclude-from-external-load-balancers:
$patch: delete
network:
interfaces:
- interface: eth0
vip:
ip: 192.168.123.10
cluster:
allowSchedulingOnControlPlanes: true
controllerManager:
extraArgs:
bind-address: 0.0.0.0
scheduler:
extraArgs:
bind-address: 0.0.0.0
apiServer:
certSANs:
- 127.0.0.1
proxy:
disabled: true
discovery:
enabled: false
etcd:
advertisedSubnets:
- 192.168.123.0/24
EOF
# Generate secrets once
if [ ! -f secrets.yaml ]; then
talosctl gen secrets
fi
rm -f controlplane.yaml worker.yaml talosconfig kubeconfig
talosctl gen config --with-secrets secrets.yaml cozystack https://192.168.123.10:6443 \
--config-patch=@patch.yaml --config-patch-control-plane @patch-controlplane.yaml
}
@test "Apply Talos configuration to the node" {
# Apply the configuration to all three nodes
for node in 11 12 13; do
talosctl apply -f controlplane.yaml -n 192.168.123.${node} -e 192.168.123.${node} -i
done
# Wait for Talos services to come up again
timeout 60 sh -ec 'until nc -nz 192.168.123.11 50000 && nc -nz 192.168.123.12 50000 && nc -nz 192.168.123.13 50000; do sleep 1; done'
}
@test "Bootstrap Talos cluster" {
# Bootstrap etcd on the first node
timeout 10 sh -ec 'until talosctl bootstrap -n 192.168.123.11 -e 192.168.123.11; do sleep 1; done'
# Wait until etcd is healthy
timeout 180 sh -ec 'until talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 >/dev/null 2>&1; do sleep 1; done'
timeout 60 sh -ec 'while talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 2>&1 | grep -q "rpc error"; do sleep 1; done'
# Retrieve kubeconfig
rm -f kubeconfig
talosctl kubeconfig kubeconfig -e 192.168.123.10 -n 192.168.123.10
# Wait until all three nodes register in Kubernetes
timeout 60 sh -ec 'until [ $(kubectl get node --no-headers | wc -l) -eq 3 ]; do sleep 1; done'
}
@test "Install Cozystack" {
# Create namespace & configmap required by installer
kubectl create namespace cozy-system --dry-run=client -o yaml | kubectl apply -f -
kubectl create configmap cozystack -n cozy-system \
--from-literal=bundle-name=paas-full \
--from-literal=ipv4-pod-cidr=10.244.0.0/16 \
--from-literal=ipv4-pod-gateway=10.244.0.1 \
--from-literal=ipv4-svc-cidr=10.96.0.0/16 \
--from-literal=ipv4-join-cidr=100.64.0.0/16 \
--from-literal=root-host=example.org \
--from-literal=api-server-endpoint=https://192.168.123.10:6443 \
--dry-run=client -o yaml | kubectl apply -f -
# Apply installer manifests from file
kubectl apply -f _out/assets/cozystack-installer.yaml
# Wait for the installer deployment to become available
kubectl wait deployment/cozystack -n cozy-system --timeout=1m --for=condition=Available
# Wait until HelmReleases appear & reconcile them
timeout 60 sh -ec 'until kubectl get hr -A | grep -q cozys; do sleep 1; done'
sleep 5
kubectl get hr -A | awk 'NR>1 {print "kubectl wait --timeout=15m --for=condition=ready -n "$1" hr/"$2" &"} END {print "wait"}' | sh -ex
# Fail the test if any HelmRelease is not Ready
if kubectl get hr -A | grep -v " True " | grep -v NAME; then
kubectl get hr -A
fail "Some HelmReleases failed to reconcile"
fi
}
@test "Wait for ClusterAPI provider deployments" {
# Wait for ClusterAPI provider deployments
timeout 60 sh -ec 'until kubectl get deploy -n cozy-cluster-api capi-controller-manager capi-kamaji-controller-manager capi-kubeadm-bootstrap-controller-manager capi-operator-cluster-api-operator capk-controller-manager >/dev/null 2>&1; do sleep 1; done'
kubectl wait deployment/capi-controller-manager deployment/capi-kamaji-controller-manager deployment/capi-kubeadm-bootstrap-controller-manager deployment/capi-operator-cluster-api-operator deployment/capk-controller-manager -n cozy-cluster-api --timeout=1m --for=condition=available
}
@test "Wait for LINSTOR and configure storage" {
# Linstor controller and nodes
kubectl wait deployment/linstor-controller -n cozy-linstor --timeout=5m --for=condition=available
timeout 60 sh -ec 'until [ $(kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor node list | grep -c Online) -eq 3 ]; do sleep 1; done'
for node in srv1 srv2 srv3; do
kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor ps cdp zfs ${node} /dev/vdc --pool-name data --storage-pool data
done
# Storage classes
kubectl apply -f - <<'EOF'
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: linstor.csi.linbit.com
parameters:
linstor.csi.linbit.com/storagePool: "data"
linstor.csi.linbit.com/layerList: "storage"
linstor.csi.linbit.com/allowRemoteVolumeAccess: "false"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: replicated
provisioner: linstor.csi.linbit.com
parameters:
linstor.csi.linbit.com/storagePool: "data"
linstor.csi.linbit.com/autoPlace: "3"
linstor.csi.linbit.com/layerList: "drbd storage"
linstor.csi.linbit.com/allowRemoteVolumeAccess: "true"
property.linstor.csi.linbit.com/DrbdOptions/auto-quorum: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-no-data-accessible: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-suspended-primary-outdated: force-secondary
property.linstor.csi.linbit.com/DrbdOptions/Net/rr-conflict: retry-connect
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF
}
@test "Wait for MetalLB and configure address pool" {
# MetalLB address pool
kubectl apply -f - <<'EOF'
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: cozystack
namespace: cozy-metallb
spec:
ipAddressPools: [cozystack]
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: cozystack
namespace: cozy-metallb
spec:
addresses: [192.168.123.200-192.168.123.250]
autoAssign: true
avoidBuggyIPs: false
EOF
}
@test "Check Cozystack API service" {
kubectl wait --for=condition=Available apiservices/v1alpha1.apps.cozystack.io --timeout=2m
}
@test "Configure Tenant and wait for applications" {
# Patch root tenant and wait for its releases
kubectl patch tenants/root -n tenant-root --type merge -p '{"spec":{"host":"example.org","ingress":true,"monitoring":true,"etcd":true,"isolated":true}}'
timeout 60 sh -ec 'until kubectl get hr -n tenant-root etcd ingress monitoring tenant-root >/dev/null 2>&1; do sleep 1; done'
kubectl wait hr/etcd hr/ingress hr/tenant-root -n tenant-root --timeout=2m --for=condition=ready
if ! kubectl wait hr/monitoring -n tenant-root --timeout=2m --for=condition=ready; then
flux reconcile hr monitoring -n tenant-root --force
kubectl wait hr/monitoring -n tenant-root --timeout=2m --for=condition=ready
fi
# Expose Cozystack services through ingress
kubectl patch configmap/cozystack -n cozy-system --type merge -p '{"data":{"expose-services":"api,dashboard,cdi-uploadproxy,vm-exportproxy,keycloak"}}'
# NGINX ingress controller
timeout 60 sh -ec 'until kubectl get deploy root-ingress-controller -n tenant-root >/dev/null 2>&1; do sleep 1; done'
kubectl wait deploy/root-ingress-controller -n tenant-root --timeout=5m --for=condition=available
# etcd statefulset
kubectl wait sts/etcd -n tenant-root --for=jsonpath='{.status.readyReplicas}'=3 --timeout=5m
# VictoriaMetrics components
kubectl wait vmalert/vmalert-shortterm vmalertmanager/alertmanager -n tenant-root --for=jsonpath='{.status.updateStatus}'=operational --timeout=5m
kubectl wait vlogs/generic -n tenant-root --for=jsonpath='{.status.updateStatus}'=operational --timeout=5m
kubectl wait vmcluster/shortterm vmcluster/longterm -n tenant-root --for=jsonpath='{.status.clusterStatus}'=operational --timeout=5m
# Grafana
kubectl wait clusters.postgresql.cnpg.io/grafana-db -n tenant-root --for=condition=ready --timeout=5m
kubectl wait deploy/grafana-deployment -n tenant-root --for=condition=available --timeout=5m
# Verify Grafana via ingress
ingress_ip=$(kubectl get svc root-ingress-controller -n tenant-root -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
if ! curl -sS -k "https://${ingress_ip}" -H 'Host: grafana.example.org' --max-time 30 | grep -q Found; then
echo "Failed to access Grafana via ingress at ${ingress_ip}" >&2
exit 1
fi
}
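
The single `curl` at the end of this test can race an ingress controller that is still warming up; a retry loop — sketched here, self-contained — is more forgiving:

```bash
# Sketch: retry the Grafana check instead of failing on the first attempt.
ingress_ip=$(kubectl get svc root-ingress-controller -n tenant-root \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
for i in $(seq 1 10); do
  curl -sS -k "https://${ingress_ip}" -H 'Host: grafana.example.org' --max-time 10 \
    | grep -q Found && break
  [ "$i" -eq 10 ] && { echo "Grafana not reachable via ingress" >&2; exit 1; }
  sleep 5
done
```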
@test "Keycloak OIDC stack is healthy" {
kubectl patch configmap/cozystack -n cozy-system --type merge -p '{"data":{"oidc-enabled":"true"}}'
timeout 120 sh -ec 'until kubectl get hr -n cozy-keycloak keycloak keycloak-configure keycloak-operator >/dev/null 2>&1; do sleep 1; done'
kubectl wait hr/keycloak hr/keycloak-configure hr/keycloak-operator -n cozy-keycloak --timeout=10m --for=condition=ready
}
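
A natural follow-up assertion — a sketch only — is that the realm publishes its OIDC discovery document. The realm name `cozy` and the `keycloak.example.org` host match the `oidc-issuer-url` used in the Talos patch elsewhere in this changeset; the path assumes a modern Keycloak without the legacy `/auth` prefix:

```bash
# Sketch: fetch the OIDC discovery document through the ingress.
ingress_ip=$(kubectl get svc root-ingress-controller -n tenant-root \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -sS -k "https://${ingress_ip}/realms/cozy/.well-known/openid-configuration" \
  -H 'Host: keycloak.example.org' --max-time 30 | grep -q '"issuer"'
```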

View File

@@ -1,182 +0,0 @@
#!/usr/bin/env bats
@test "Install Cozystack" {
# Create namespace & configmap required by installer
kubectl create namespace cozy-system --dry-run=client -o yaml | kubectl apply -f -
kubectl create configmap cozystack -n cozy-system \
--from-literal=bundle-name=paas-full \
--from-literal=ipv4-pod-cidr=10.244.0.0/16 \
--from-literal=ipv4-pod-gateway=10.244.0.1 \
--from-literal=ipv4-svc-cidr=10.96.0.0/16 \
--from-literal=ipv4-join-cidr=100.64.0.0/16 \
--from-literal=root-host=example.org \
--from-literal=api-server-endpoint=https://192.168.123.10:6443 \
--dry-run=client -o yaml | kubectl apply -f -
# Apply installer manifests from file
kubectl apply -f _out/assets/cozystack-installer.yaml
# Wait for the installer deployment to become available
kubectl wait deployment/cozystack -n cozy-system --timeout=1m --for=condition=Available
# Wait until HelmReleases appear & reconcile them
timeout 60 sh -ec 'until kubectl get hr -A -l cozystack.io/system-app=true | grep -q cozys; do sleep 1; done'
sleep 5
kubectl get hr -A -l cozystack.io/system-app=true | awk 'NR>1 {print "kubectl wait --timeout=15m --for=condition=ready -n "$1" hr/"$2" &"} END {print "wait"}' | sh -ex
# Fail the test if any HelmRelease is not Ready
if kubectl get hr -A | grep -v " True " | grep -v NAME; then
kubectl get hr -A
fail "Some HelmReleases failed to reconcile"
fi
}
@test "Wait for ClusterAPI provider deployments" {
# Wait for ClusterAPI provider deployments
timeout 60 sh -ec 'until kubectl get deploy -n cozy-cluster-api capi-controller-manager capi-kamaji-controller-manager capi-kubeadm-bootstrap-controller-manager capi-operator-cluster-api-operator capk-controller-manager >/dev/null 2>&1; do sleep 1; done'
kubectl wait deployment/capi-controller-manager deployment/capi-kamaji-controller-manager deployment/capi-kubeadm-bootstrap-controller-manager deployment/capi-operator-cluster-api-operator deployment/capk-controller-manager -n cozy-cluster-api --timeout=1m --for=condition=available
}
@test "Wait for LINSTOR and configure storage" {
# Linstor controller and nodes
kubectl wait deployment/linstor-controller -n cozy-linstor --timeout=5m --for=condition=available
timeout 60 sh -ec 'until [ $(kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor node list | grep -c Online) -eq 3 ]; do sleep 1; done'
created_pools=$(kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor sp l -s data --pastable | awk '$2 == "data" {printf " " $4} END{printf " "}')
for node in srv1 srv2 srv3; do
case $created_pools in
*" $node "*) echo "Storage pool 'data' already exists on node $node"; continue;;
esac
kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor ps cdp zfs ${node} /dev/vdc --pool-name data --storage-pool data
done
# Storage classes
kubectl apply -f - <<'EOF'
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: linstor.csi.linbit.com
parameters:
linstor.csi.linbit.com/storagePool: "data"
linstor.csi.linbit.com/layerList: "storage"
linstor.csi.linbit.com/allowRemoteVolumeAccess: "false"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: replicated
provisioner: linstor.csi.linbit.com
parameters:
linstor.csi.linbit.com/storagePool: "data"
linstor.csi.linbit.com/autoPlace: "3"
linstor.csi.linbit.com/layerList: "drbd storage"
linstor.csi.linbit.com/allowRemoteVolumeAccess: "true"
property.linstor.csi.linbit.com/DrbdOptions/auto-quorum: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-no-data-accessible: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-suspended-primary-outdated: force-secondary
property.linstor.csi.linbit.com/DrbdOptions/Net/rr-conflict: retry-connect
volumeBindingMode: Immediate
allowVolumeExpansion: true
EOF
}
@test "Wait for MetalLB and configure address pool" {
# MetalLB address pool
kubectl apply -f - <<'EOF'
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: cozystack
namespace: cozy-metallb
spec:
ipAddressPools: [cozystack]
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: cozystack
namespace: cozy-metallb
spec:
addresses: [192.168.123.200-192.168.123.250]
autoAssign: true
avoidBuggyIPs: false
EOF
}
@test "Check Cozystack API service" {
kubectl wait --for=condition=Available apiservices/v1alpha1.apps.cozystack.io --timeout=2m
}
@test "Configure Tenant and wait for applications" {
# Patch root tenant and wait for its releases
kubectl patch tenants/root -n tenant-root --type merge -p '{"spec":{"host":"example.org","ingress":true,"monitoring":true,"etcd":true,"isolated":true}}'
timeout 60 sh -ec 'until kubectl get hr -n tenant-root etcd ingress monitoring tenant-root >/dev/null 2>&1; do sleep 1; done'
kubectl wait hr/etcd hr/ingress hr/tenant-root -n tenant-root --timeout=2m --for=condition=ready
if ! kubectl wait hr/monitoring -n tenant-root --timeout=2m --for=condition=ready; then
flux reconcile hr monitoring -n tenant-root --force
kubectl wait hr/monitoring -n tenant-root --timeout=2m --for=condition=ready
fi
# Expose Cozystack services through ingress
kubectl patch configmap/cozystack -n cozy-system --type merge -p '{"data":{"expose-services":"api,dashboard,cdi-uploadproxy,vm-exportproxy,keycloak"}}'
# NGINX ingress controller
timeout 60 sh -ec 'until kubectl get deploy root-ingress-controller -n tenant-root >/dev/null 2>&1; do sleep 1; done'
kubectl wait deploy/root-ingress-controller -n tenant-root --timeout=5m --for=condition=available
# etcd statefulset
kubectl wait sts/etcd -n tenant-root --for=jsonpath='{.status.readyReplicas}'=3 --timeout=5m
# VictoriaMetrics components
kubectl wait vmalert/vmalert-shortterm vmalertmanager/alertmanager -n tenant-root --for=jsonpath='{.status.updateStatus}'=operational --timeout=5m
kubectl wait vlogs/generic -n tenant-root --for=jsonpath='{.status.updateStatus}'=operational --timeout=5m
kubectl wait vmcluster/shortterm vmcluster/longterm -n tenant-root --for=jsonpath='{.status.clusterStatus}'=operational --timeout=5m
# Grafana
kubectl wait clusters.postgresql.cnpg.io/grafana-db -n tenant-root --for=condition=ready --timeout=5m
kubectl wait deploy/grafana-deployment -n tenant-root --for=condition=available --timeout=5m
# Verify Grafana via ingress
ingress_ip=$(kubectl get svc root-ingress-controller -n tenant-root -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
if ! curl -sS -k "https://${ingress_ip}" -H 'Host: grafana.example.org' --max-time 30 | grep -q Found; then
echo "Failed to access Grafana via ingress at ${ingress_ip}" >&2
exit 1
fi
}
@test "Keycloak OIDC stack is healthy" {
kubectl patch configmap/cozystack -n cozy-system --type merge -p '{"data":{"oidc-enabled":"true"}}'
timeout 120 sh -ec 'until kubectl get hr -n cozy-keycloak keycloak keycloak-configure keycloak-operator >/dev/null 2>&1; do sleep 1; done'
kubectl wait hr/keycloak hr/keycloak-configure hr/keycloak-operator -n cozy-keycloak --timeout=10m --for=condition=ready
}
@test "Create tenant with isolated mode enabled" {
kubectl -n tenant-root get tenants.apps.cozystack.io test ||
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Tenant
metadata:
name: test
namespace: tenant-root
spec:
etcd: false
host: ""
ingress: false
isolated: true
monitoring: false
resourceQuotas: {}
seaweedfs: false
EOF
kubectl wait hr/tenant-test -n tenant-root --timeout=1m --for=condition=ready
kubectl wait namespace tenant-test --timeout=20s --for=jsonpath='{.status.phase}'=Active
}
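
When this test fails, the quickest places to look are the values Flux rendered for the release and the namespace it produced. A sketch:

```bash
# Sketch: inspect the rendered tenant values and the resulting namespace.
kubectl get hr tenant-test -n tenant-root -o jsonpath='{.spec.values}'; echo
kubectl get namespace tenant-test -o jsonpath='{.metadata.labels}'; echo
```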

View File

@@ -1,253 +0,0 @@
#!/usr/bin/env bats
# -----------------------------------------------------------------------------
# Cozystack end-to-end provisioning test (Bats)
# -----------------------------------------------------------------------------
@test "Required installer assets exist" {
if [ ! -f _out/assets/cozystack-installer.yaml ]; then
echo "Missing: _out/assets/cozystack-installer.yaml" >&2
exit 1
fi
if [ ! -f _out/assets/nocloud-amd64.raw.xz ]; then
echo "Missing: _out/assets/nocloud-amd64.raw.xz" >&2
exit 1
fi
}
@test "IPv4 forwarding is enabled" {
if [ "$(cat /proc/sys/net/ipv4/ip_forward)" != 1 ]; then
echo "IPv4 forwarding is disabled!" >&2
echo >&2
echo "Enable it with:" >&2
echo " echo 1 > /proc/sys/net/ipv4/ip_forward" >&2
exit 1
fi
}
@test "Clean previous VMs" {
kill $(cat srv1/qemu.pid srv2/qemu.pid srv3/qemu.pid 2>/dev/null) 2>/dev/null || true
rm -rf srv1 srv2 srv3
}
@test "Prepare networking and masquerading" {
ip link del cozy-br0 2>/dev/null || true
ip link add cozy-br0 type bridge
ip link set cozy-br0 up
ip address add 192.168.123.1/24 dev cozy-br0
# Make the masquerading rule idempotent (delete first, then add)
iptables -t nat -D POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE 2>/dev/null || true
iptables -t nat -A POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE
}
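
The delete-then-add dance above achieves idempotency; an equivalent form, sketched below, probes for the rule with `iptables -C` and only adds it when missing:

```bash
# Sketch: check-then-add keeps the rule idempotent without deleting it.
iptables -t nat -C POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE 2>/dev/null \
  || iptables -t nat -A POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE
```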
@test "Prepare cloudinit drive for VMs" {
mkdir -p srv1 srv2 srv3
# Generate cloudinit ISOs
for i in 1 2 3; do
echo "hostname: srv${i}" > "srv${i}/meta-data"
cat > "srv${i}/user-data" <<'EOF'
#cloud-config
EOF
cat > "srv${i}/network-config" <<EOF
version: 2
ethernets:
eth0:
dhcp4: false
addresses:
- "192.168.123.1${i}/26"
gateway4: "192.168.123.1"
nameservers:
search: [cluster.local]
addresses: [8.8.8.8]
EOF
( cd "srv${i}" && genisoimage \
-output seed.img \
-volid cidata -rational-rock -joliet \
user-data meta-data network-config )
done
}
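
A sanity check for the generated seeds, sketched here: `isoinfo` ships alongside `genisoimage` in cdrkit and can confirm the `cidata` volume label that cloud-init looks for:

```bash
# Sketch: each seed image should be an ISO with volume id "cidata".
for i in 1 2 3; do
  isoinfo -d -i "srv${i}/seed.img" | grep -q 'Volume id: cidata'
done
```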
@test "Use Talos NoCloud image from assets" {
if [ ! -f _out/assets/nocloud-amd64.raw.xz ]; then
echo "Missing: _out/assets/nocloud-amd64.raw.xz" >&2
exit 1
fi
rm -f nocloud-amd64.raw
cp _out/assets/nocloud-amd64.raw.xz .
xz --decompress nocloud-amd64.raw.xz
}
@test "Prepare VM disks" {
for i in 1 2 3; do
cp nocloud-amd64.raw srv${i}/system.img
qemu-img resize srv${i}/system.img 50G
qemu-img create srv${i}/data.img 100G
done
}
@test "Create tap devices" {
for i in 1 2 3; do
ip link del cozy-srv${i} 2>/dev/null || true
ip tuntap add dev cozy-srv${i} mode tap
ip link set cozy-srv${i} up
ip link set cozy-srv${i} master cozy-br0
done
}
@test "Boot QEMU VMs" {
for i in 1 2 3; do
qemu-system-x86_64 -machine type=pc,accel=kvm -cpu host -smp 8 -m 24576 \
-device virtio-net,netdev=net0,mac=52:54:00:12:34:5${i} \
-netdev tap,id=net0,ifname=cozy-srv${i},script=no,downscript=no \
-drive file=srv${i}/system.img,if=virtio,format=raw \
-drive file=srv${i}/seed.img,if=virtio,format=raw \
-drive file=srv${i}/data.img,if=virtio,format=raw \
-display none -daemonize -pidfile srv${i}/qemu.pid
done
# Give qemu a few seconds to start up networking
sleep 5
}
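
Since `-daemonize` returns before the guest is useful, the fixed sleep can be complemented by checking that each QEMU process actually survived startup. A sketch:

```bash
# Sketch: kill -0 probes each process without signalling it.
for i in 1 2 3; do
  kill -0 "$(cat srv${i}/qemu.pid)"
done
```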
@test "Wait until Talos API port 50000 is reachable on all machines" {
timeout 60 sh -ec 'until nc -nz 192.168.123.11 50000 && nc -nz 192.168.123.12 50000 && nc -nz 192.168.123.13 50000; do sleep 1; done'
}
@test "Generate Talos cluster configuration" {
# Cluster-wide patches
cat > patch.yaml <<'EOF'
machine:
kubelet:
nodeIP:
validSubnets:
- 192.168.123.0/24
extraConfig:
maxPods: 512
kernel:
modules:
- name: openvswitch
- name: drbd
parameters:
- usermode_helper=disabled
- name: zfs
- name: spl
registries:
mirrors:
docker.io:
endpoints:
- https://dockerio.nexus.lllamnyp.su
cr.fluentbit.io:
endpoints:
- https://fluentbit.nexus.lllamnyp.su
docker-registry3.mariadb.com:
endpoints:
- https://mariadb.nexus.lllamnyp.su
gcr.io:
endpoints:
- https://gcr.nexus.lllamnyp.su
ghcr.io:
endpoints:
- https://ghcr.nexus.lllamnyp.su
quay.io:
endpoints:
- https://quay.nexus.lllamnyp.su
registry.k8s.io:
endpoints:
- https://k8s.nexus.lllamnyp.su
files:
- content: |
[plugins]
[plugins."io.containerd.cri.v1.runtime"]
device_ownership_from_security_context = true
path: /etc/cri/conf.d/20-customization.part
op: create
cluster:
apiServer:
extraArgs:
oidc-issuer-url: "https://keycloak.example.org/realms/cozy"
oidc-client-id: "kubernetes"
oidc-username-claim: "preferred_username"
oidc-groups-claim: "groups"
network:
cni:
name: none
dnsDomain: cozy.local
podSubnets:
- 10.244.0.0/16
serviceSubnets:
- 10.96.0.0/16
EOF
# Control-plane-only patches
cat > patch-controlplane.yaml <<'EOF'
machine:
nodeLabels:
node.kubernetes.io/exclude-from-external-load-balancers:
$patch: delete
network:
interfaces:
- interface: eth0
vip:
ip: 192.168.123.10
cluster:
allowSchedulingOnControlPlanes: true
controllerManager:
extraArgs:
bind-address: 0.0.0.0
scheduler:
extraArgs:
bind-address: 0.0.0.0
apiServer:
certSANs:
- 127.0.0.1
proxy:
disabled: true
discovery:
enabled: false
etcd:
advertisedSubnets:
- 192.168.123.0/24
EOF
# Generate secrets once
if [ ! -f secrets.yaml ]; then
talosctl gen secrets
fi
rm -f controlplane.yaml worker.yaml talosconfig kubeconfig
talosctl gen config --with-secrets secrets.yaml cozystack https://192.168.123.10:6443 \
--config-patch=@patch.yaml --config-patch-control-plane @patch-controlplane.yaml
}
@test "Apply Talos configuration to the node" {
# Apply the configuration to all three nodes
for node in 11 12 13; do
talosctl apply -f controlplane.yaml -n 192.168.123.${node} -e 192.168.123.${node} -i
done
# Wait for Talos services to come up again
timeout 60 sh -ec 'until nc -nz 192.168.123.11 50000 && nc -nz 192.168.123.12 50000 && nc -nz 192.168.123.13 50000; do sleep 1; done'
}
@test "Bootstrap Talos cluster" {
# Bootstrap etcd on the first node
timeout 10 sh -ec 'until talosctl bootstrap -n 192.168.123.11 -e 192.168.123.11; do sleep 1; done'
# Wait until etcd is healthy
timeout 180 sh -ec 'until talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 >/dev/null 2>&1; do sleep 1; done'
timeout 60 sh -ec 'while talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 2>&1 | grep -q "rpc error"; do sleep 1; done'
# Retrieve kubeconfig
rm -f kubeconfig
talosctl kubeconfig kubeconfig -e 192.168.123.10 -n 192.168.123.10
# Wait until all three nodes register in Kubernetes
timeout 60 sh -ec 'until [ $(kubectl get node --no-headers | wc -l) -eq 3 ]; do sleep 1; done'
}
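
At this point the nodes are expected to register as NotReady, because the cluster config above sets `cni: name: none`; they only become Ready once Cozystack installs the CNI. A sketch of what can safely be asserted here:

```bash
# Sketch: nodes are registered (count checked above) but NotReady is normal
# until a CNI is installed; avoid waiting on condition=Ready at this stage.
kubectl get nodes -o wide
[ "$(kubectl get node --no-headers | wc -l)" -eq 3 ]
```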

View File

@@ -1,139 +0,0 @@
package controller
import (
"context"
"crypto/sha256"
"encoding/hex"
"fmt"
"sort"
"time"
helmv2 "github.com/fluxcd/helm-controller/api/v2"
corev1 "k8s.io/api/core/v1"
kerrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/predicate"
)
type CozystackConfigReconciler struct {
client.Client
Scheme *runtime.Scheme
}
var configMapNames = []string{"cozystack", "cozystack-branding", "cozystack-scheduling"}
const configMapNamespace = "cozy-system"
const digestAnnotation = "cozystack.io/cozy-config-digest"
const forceReconcileKey = "reconcile.fluxcd.io/forceAt"
const requestedAt = "reconcile.fluxcd.io/requestedAt"
func (r *CozystackConfigReconciler) Reconcile(ctx context.Context, _ ctrl.Request) (ctrl.Result, error) {
log := log.FromContext(ctx)
digest, err := r.computeDigest(ctx)
if err != nil {
log.Error(err, "failed to compute config digest")
return ctrl.Result{}, nil
}
var helmList helmv2.HelmReleaseList
if err := r.List(ctx, &helmList); err != nil {
return ctrl.Result{}, fmt.Errorf("failed to list HelmReleases: %w", err)
}
now := time.Now().Format(time.RFC3339Nano)
updated := 0
for _, hr := range helmList.Items {
isSystemApp := hr.Labels["cozystack.io/system-app"] == "true"
isTenantRoot := hr.Namespace == "tenant-root" && hr.Name == "tenant-root"
if !isSystemApp && !isTenantRoot {
continue
}
if hr.Annotations == nil {
hr.Annotations = map[string]string{}
}
if hr.Annotations[digestAnnotation] == digest {
continue
}
patch := client.MergeFrom(hr.DeepCopy())
hr.Annotations[digestAnnotation] = digest
hr.Annotations[forceReconcileKey] = now
hr.Annotations[requestedAt] = now
if err := r.Patch(ctx, &hr, patch); err != nil {
log.Error(err, "failed to patch HelmRelease", "name", hr.Name, "namespace", hr.Namespace)
continue
}
updated++
log.Info("patched HelmRelease with new config digest", "name", hr.Name, "namespace", hr.Namespace)
}
log.Info("finished reconciliation", "updatedHelmReleases", updated)
return ctrl.Result{}, nil
}
func (r *CozystackConfigReconciler) computeDigest(ctx context.Context) (string, error) {
hash := sha256.New()
for _, name := range configMapNames {
var cm corev1.ConfigMap
err := r.Get(ctx, client.ObjectKey{Namespace: configMapNamespace, Name: name}, &cm)
if err != nil {
if kerrors.IsNotFound(err) {
continue // ignore missing
}
return "", err
}
// Sort keys for consistent hashing
var keys []string
for k := range cm.Data {
keys = append(keys, k)
}
sort.Strings(keys)
for _, k := range keys {
v := cm.Data[k]
fmt.Fprintf(hash, "%s:%s=%s\n", name, k, v)
}
}
return hex.EncodeToString(hash.Sum(nil)), nil
}
func (r *CozystackConfigReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
WithEventFilter(predicate.Funcs{
UpdateFunc: func(e event.UpdateEvent) bool {
cm, ok := e.ObjectNew.(*corev1.ConfigMap)
return ok && cm.Namespace == configMapNamespace && contains(configMapNames, cm.Name)
},
CreateFunc: func(e event.CreateEvent) bool {
cm, ok := e.Object.(*corev1.ConfigMap)
return ok && cm.Namespace == configMapNamespace && contains(configMapNames, cm.Name)
},
DeleteFunc: func(e event.DeleteEvent) bool {
cm, ok := e.Object.(*corev1.ConfigMap)
return ok && cm.Namespace == configMapNamespace && contains(configMapNames, cm.Name)
},
}).
For(&corev1.ConfigMap{}).
Complete(r)
}
func contains(slice []string, val string) bool {
for _, s := range slice {
if s == val {
return true
}
}
return false
}
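
The reconciler's effect is observable from outside: it stamps the `cozystack.io/cozy-config-digest` annotation (named in the constants above) on system HelmReleases and on `tenant-root`. A sketch for inspecting it:

```bash
# Sketch: read the digest annotation the controller writes; the backslashes
# escape the dots inside the annotation key for kubectl's column parser.
kubectl get hr -A -l cozystack.io/system-app=true \
  -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,DIGEST:.metadata.annotations.cozystack\.io/cozy-config-digest'
```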

View File

@@ -248,24 +248,15 @@ func (r *WorkloadMonitorReconciler) reconcilePodForMonitor(
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("pod-%s", pod.Name),
Namespace: pod.Namespace,
Labels: map[string]string{},
},
}
metaLabels := r.getWorkloadMetadata(&pod)
_, err := ctrl.CreateOrUpdate(ctx, r.Client, workload, func() error {
// Update owner references with the new monitor
updateOwnerReferences(workload.GetObjectMeta(), monitor)
// Copy labels from the Pod if needed
for k, v := range pod.Labels {
workload.Labels[k] = v
}
// Add workload meta to labels
for k, v := range metaLabels {
workload.Labels[k] = v
}
workload.Labels = pod.Labels
// Fill Workload status fields:
workload.Status.Kind = monitor.Spec.Kind
@@ -442,12 +433,3 @@ func mapObjectToMonitor[T client.Object](_ T, c client.Client) func(ctx context.
return requests
}
}
func (r *WorkloadMonitorReconciler) getWorkloadMetadata(obj client.Object) map[string]string {
labels := make(map[string]string)
annotations := obj.GetAnnotations()
if instanceType, ok := annotations["kubevirt.io/cluster-instancetype-name"]; ok {
labels["workloads.cozystack.io/kubevirt-vmi-instance-type"] = instanceType
}
return labels
}

View File

@@ -16,10 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.2.0
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "0.2.0"
appVersion: "0.1.0"

View File

@@ -1 +0,0 @@
../../../library/cozy-lib

View File

@@ -18,14 +18,3 @@ rules:
resourceNames:
- {{ .Release.Name }}-ui
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.10.1
version: 0.9.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -1,11 +1,10 @@
CLICKHOUSE_BACKUP_TAG = $(shell awk '$$0 ~ /^version:/ {print $$2}' Chart.yaml)
CLICKHOUSE_BACKUP_TAG = $(shell awk '$$1 == "version:" {print $$2}' Chart.yaml)
include ../../../scripts/common-envs.mk
include ../../../scripts/package.mk
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md
yq -i -o json --indent 4 '.properties.resourcesPreset.enum = ["none", "nano", "micro", "small", "medium", "large", "xlarge", "2xlarge"]' values.schema.json
image:
docker buildx build images/clickhouse-backup \

View File

@@ -1,19 +1,18 @@
# Managed ClickHouse Service
# Managed Clickhouse Service
ClickHouse is an open source high-performance and column-oriented SQL database management system (DBMS).
It is used for online analytical processing (OLAP).
Cozystack platform uses Altinity operator to provide ClickHouse.
### How to restore backup from S3
### How to restore backup:
1. Find the snapshot:
```bash
1. Find a snapshot:
```
restic -r s3:s3.example.org/clickhouse-backups/table_name snapshots
```
2. Restore it:
```bash
```
restic -r s3:s3.example.org/clickhouse-backups/table_name restore latest --target /tmp/
```
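
Restic takes the repository password and S3 credentials from the environment rather than from flags. A sketch using the placeholder values from the parameter table below (these are restic's standard variable names, not Cozystack-specific):

```bash
export AWS_ACCESS_KEY_ID=oobaiRus9pah8PhohL1ThaeTa4UVa7gu
export AWS_SECRET_ACCESS_KEY=ju3eum4dekeich9ahM1te8waeGai0oog
export RESTIC_PASSWORD=ChaXoveekoh6eigh4siesheeda2quai0
restic -r s3:s3.example.org/clickhouse-backups/table_name snapshots
```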
@@ -40,41 +39,32 @@ For more details, read [Restic: Effective Backup from Stdin](https://blog.aenix.
### Backup parameters
| Name | Description | Value |
| ------------------------ | --------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled` | Enable periodic backups | `false` |
| `backup.s3Region` | AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | S3 bucket used for storing backups | `s3.example.org/clickhouse-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | Retention strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | Access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | Secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | Password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
| `resources` | Explicit CPU and memory configuration for each ClickHouse replica. When left empty, the preset defined in `resourcesPreset` is applied. | `{}` |
| `resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge. | `small` |
| Name | Description | Value |
| ------------------------ | --------------------------------------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled` | Enable periodic backups | `false` |
| `backup.s3Region` | AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | S3 bucket used for storing backups | `s3.example.org/clickhouse-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | Retention strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | Access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | Secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | Password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
| `resources` | Explicit CPU/memory resource requests and limits for the Clickhouse service | `{}` |
| `resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `nano` |
## Parameter examples and reference
### resources and resourcesPreset
`resources` sets explicit CPU and memory configurations for each replica.
When left empty, the preset defined in `resourcesPreset` is applied.
In production environments, it's recommended to set `resources` explicitly.
Example of `resources`:
```yaml
resources:
cpu: 4000m
memory: 4Gi
limits:
cpu: 4000m
memory: 4Gi
requests:
cpu: 100m
memory: 512Mi
```
`resourcePreset` sets named CPU and memory configurations for each replica.
This setting is ignored if the corresponding `resources` value is set.
| Preset name | CPU | memory |
|-------------|--------|---------|
| `nano` | `100m` | `128Mi` |
| `micro` | `250m` | `256Mi` |
| `small` | `500m` | `512Mi` |
| `medium` | `500m` | `1Gi` |
| `large` | `1` | `2Gi` |
| `xlarge` | `2` | `4Gi` |
| `2xlarge` | `4` | `8Gi` |
Allowed values for `resourcesPreset` are `none`, `nano`, `micro`, `small`, `medium`, `large`, `xlarge`, `2xlarge`.
This value is ignored if `resources` value is set.

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/clickhouse-backup:0.10.1@sha256:3faf7a4cebf390b9053763107482de175aa0fdb88c1e77424fd81100b1c3a205
ghcr.io/cozystack/cozystack/clickhouse-backup:0.9.0@sha256:3faf7a4cebf390b9053763107482de175aa0fdb88c1e77424fd81100b1c3a205

View File

@@ -1,5 +1,3 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $clusterDomain := (index $cozyConfig.data "cluster-domain") | default "cozy.local" }}
{{- $existingSecret := lookup "v1" "Secret" .Release.Namespace (printf "%s-credentials" .Release.Name) }}
{{- $passwords := dict }}
{{- $users := .Values.users }}
@@ -34,7 +32,7 @@ kind: "ClickHouseInstallation"
metadata:
name: "{{ .Release.Name }}"
spec:
namespaceDomainPattern: "%s.svc.{{ $clusterDomain }}"
namespaceDomainPattern: "%s.svc.cozy.local"
defaults:
templates:
dataVolumeClaimTemplate: data-volume-template
@@ -94,9 +92,6 @@ spec:
templates:
volumeClaimTemplates:
- name: data-volume-template
metadata:
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
accessModes:
- ReadWriteOnce
@@ -104,9 +99,6 @@ spec:
requests:
storage: {{ .Values.size }}
- name: log-volume-template
metadata:
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
accessModes:
- ReadWriteOnce
@@ -115,9 +107,6 @@ spec:
storage: {{ .Values.logStorageSize }}
podTemplates:
- name: clickhouse-per-host
metadata:
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
affinity:
podAntiAffinity:
@@ -133,9 +122,9 @@ spec:
- name: clickhouse
image: clickhouse/clickhouse-server:24.9.2.42
{{- if .Values.resources }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.resources $) | nindent 16 }}
resources: {{- include "cozy-lib.resources.sanitize" .Values.resources | nindent 16 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.resourcesPreset $) | nindent 16 }}
resources: {{- include "cozy-lib.resources.preset" .Values.resourcesPreset | nindent 16 }}
{{- end }}
volumeMounts:
- name: data-volume-template
@@ -144,9 +133,6 @@ spec:
mountPath: /var/log/clickhouse-server
serviceTemplates:
- name: svc-template
metadata:
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
generateName: chendpoint-{chi}
spec:
ports:

View File

@@ -24,14 +24,3 @@ rules:
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -9,5 +9,5 @@ spec:
kind: clickhouse
type: clickhouse
selector:
app.kubernetes.io/instance: {{ $.Release.Name }}
clickhouse.altinity.com/chi: {{ $.Release.Name }}
version: {{ $.Chart.Version }}

View File

@@ -79,23 +79,13 @@
},
"resources": {
"type": "object",
"description": "Explicit CPU and memory configuration for each ClickHouse replica. When left empty, the preset defined in `resourcesPreset` is applied.",
"description": "Explicit CPU/memory resource requests and limits for the Clickhouse service",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.",
"default": "small",
"enum": [
"none",
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
"description": "Use a common resources preset when `resources` is not set explicitly.",
"default": "nano"
}
}
}
}

View File

@@ -47,11 +47,15 @@ backup:
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
## @param resources Explicit CPU and memory configuration for each ClickHouse replica. When left empty, the preset defined in `resourcesPreset` is applied.
## @param resources Explicit CPU/memory resource requests and limits for the Clickhouse service
resources: {}
# resources:
# cpu: 4000m
# memory: 4Gi
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param resourcesPreset Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.
resourcesPreset: "small"
## @param resourcesPreset Use a common resources preset when `resources` is not set explicitly.
resourcesPreset: "nano"

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.7.1
version: 0.6.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -2,4 +2,3 @@ include ../../../scripts/package.mk
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md
yq -i -o json --indent 4 '.properties.resourcesPreset.enum = ["none", "nano", "micro", "small", "medium", "large", "xlarge", "2xlarge"]' values.schema.json

View File

@@ -1,21 +1,17 @@
# Managed FerretDB Service
FerretDB is an open source MongoDB alternative.
It translates MongoDB wire protocol queries to SQL and can be used as a direct replacement for MongoDB 5.0+.
Internally, FerretDB service is backed by Postgres.
## Parameters
### Common parameters
| Name | Description | Value |
| ------------------------ | --------------------------------------------------------------------------------------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `size` | Persistent Volume size | `10Gi` |
| `replicas` | Number of replicas | `2` |
| `storageClass` | StorageClass used to store the data | `""` |
| `quorum.minSyncReplicas` | Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed | `0` |
| `quorum.maxSyncReplicas` | Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the total number of replicas) | `0` |
| Name | Description | Value |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `size` | Persistent Volume size | `10Gi` |
| `replicas` | Number of Postgres replicas | `2` |
| `storageClass` | StorageClass used to store the data | `""` |
| `quorum.minSyncReplicas` | Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed. | `0` |
| `quorum.maxSyncReplicas` | Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the number of instances). | `0` |
### Configuration parameters
@@ -25,43 +21,17 @@ Internally, FerretDB service is backed by Postgres.
### Backup parameters
| Name | Description | Value |
| ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled` | Enable periodic backups | `false` |
| `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/postgres-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
| `resources` | Explicit CPU and memory configuration for each FerretDB replica. When left empty, the preset defined in `resourcesPreset` is applied. | `{}` |
| `resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge. | `nano` |
| Name | Description | Value |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled`         | Enable periodic backups                                                                                                                                                                                               | `false`                                                 |
| `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/postgres-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
| `resources` | Resources | `{}` |
| `resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
## Parameter examples and reference
### resources and resourcesPreset
`resources` sets explicit CPU and memory configurations for each replica.
When left empty, the preset defined in `resourcesPreset` is applied.
```yaml
resources:
cpu: 4000m
memory: 4Gi
```
`resourcePreset` sets named CPU and memory configurations for each replica.
This setting is ignored if the corresponding `resources` value is set.
| Preset name | CPU | memory |
|-------------|--------|---------|
| `nano` | `100m` | `128Mi` |
| `micro` | `250m` | `256Mi` |
| `small` | `500m` | `512Mi` |
| `medium` | `500m` | `1Gi` |
| `large` | `1` | `2Gi` |
| `xlarge` | `2` | `4Gi` |
| `2xlarge` | `4` | `8Gi` |

View File

@@ -1 +0,0 @@
../../../library/cozy-lib

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/postgres-backup:0.14.0@sha256:10179ed56457460d95cd5708db2a00130901255fa30c4dd76c65d2ef5622b61f
ghcr.io/cozystack/cozystack/postgres-backup:0.12.0@sha256:10179ed56457460d95cd5708db2a00130901255fa30c4dd76c65d2ef5622b61f

View File

@@ -24,14 +24,3 @@ rules:
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -2,8 +2,6 @@ apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
type: {{ ternary "LoadBalancer" "ClusterIP" .Values.external }}
{{- if .Values.external }}

View File

@@ -12,7 +12,6 @@ spec:
metadata:
labels:
app: {{ .Release.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
containers:
- name: ferretdb

View File

@@ -19,9 +19,9 @@ spec:
minSyncReplicas: {{ .Values.quorum.minSyncReplicas }}
maxSyncReplicas: {{ .Values.quorum.maxSyncReplicas }}
{{- if .Values.resources }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.resources $) | nindent 4 }}
resources: {{- toYaml .Values.resources | nindent 4 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.resourcesPreset $) | nindent 4 }}
resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 4 }}
{{- end }}
monitoring:
enablePodMonitor: true
@@ -35,7 +35,6 @@ spec:
inheritedMetadata:
labels:
policy.cozystack.io/allow-to-apiserver: "true"
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.users }}
managed:

View File

@@ -9,5 +9,5 @@ spec:
kind: ferretdb
type: ferretdb
selector:
app.kubernetes.io/instance: {{ $.Release.Name }}
app: {{ $.Release.Name }}
version: {{ $.Chart.Version }}

View File

@@ -14,7 +14,7 @@
},
"replicas": {
"type": "number",
"description": "Number of replicas",
"description": "Number of Postgres replicas",
"default": 2
},
"storageClass": {
@@ -27,12 +27,12 @@
"properties": {
"minSyncReplicas": {
"type": "number",
"description": "Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed",
"description": "Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed.",
"default": 0
},
"maxSyncReplicas": {
"type": "number",
"description": "Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the total number of replicas)",
"description": "Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the number of instances).",
"default": 0
}
}
@@ -42,7 +42,7 @@
"properties": {
"enabled": {
"type": "boolean",
"description": "Enable periodic backups",
"description": "Enable periodic backups",
"default": false
},
"s3Region": {
@@ -84,23 +84,13 @@
},
"resources": {
"type": "object",
"description": "Explicit CPU and memory configuration for each FerretDB replica. When left empty, the preset defined in `resourcesPreset` is applied.",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.",
"default": "nano",
"enum": [
"none",
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
}
}
}
}

View File

@@ -2,7 +2,7 @@
## @param external Enable external access from outside the cluster
## @param size Persistent Volume size
## @param replicas Number of replicas
## @param replicas Number of Postgres replicas
## @param storageClass StorageClass used to store the data
##
external: false
@@ -11,8 +11,8 @@ replicas: 2
storageClass: ""
## Configuration for the quorum-based synchronous replication
## @param quorum.minSyncReplicas Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed
## @param quorum.maxSyncReplicas Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the total number of replicas)
## @param quorum.minSyncReplicas Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed.
## @param quorum.maxSyncReplicas Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the number of instances).
quorum:
minSyncReplicas: 0
maxSyncReplicas: 0
@@ -31,7 +31,7 @@ users: {}
## @section Backup parameters
## @param backup.enabled Enable periodic backups
## @param backup.enabled Enable periodic backups
## @param backup.s3Region The AWS S3 region where backups are stored
## @param backup.s3Bucket The S3 bucket used for storing backups
## @param backup.schedule Cron schedule for automated backups
@@ -49,11 +49,15 @@ backup:
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
## @param resources Explicit CPU and memory configuration for each FerretDB replica. When left empty, the preset defined in `resourcesPreset` is applied.
## @param resources Resources
resources: {}
# resources:
# cpu: 4000m
# memory: 4Gi
## @param resourcesPreset Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "nano"

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.5.2
version: 0.5.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -23,8 +23,6 @@ image-nginx:
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md
yq -i -o json --indent 4 '.properties.haproxy.properties.resourcesPreset.enum = ["none", "nano", "micro", "small", "medium", "large", "xlarge", "2xlarge"]' values.schema.json
yq -i -o json --indent 4 '.properties.nginx.properties.resourcesPreset.enum = ["none", "nano", "micro", "small", "medium", "large", "xlarge", "2xlarge"]' values.schema.json
update:
tag=$$(git ls-remote --tags --sort="v:refname" https://github.com/chrislim2888/IP2Location-C-Library | awk -F'[/^]' 'END{print $$3}') && \

View File

@@ -1,9 +1,8 @@
# Managed Nginx-based HTTP Cache Service
# Managed Nginx Caching Service
The Nginx-based HTTP caching service is designed to optimize web traffic and enhance web application performance.
This service combines custom-built Nginx instances with HAProxy for efficient caching and load balancing.
The Nginx Caching Service is designed to optimize web traffic and enhance web application performance. This service combines custom-built Nginx instances with HAproxy for efficient caching and load balancing.
## Deployment information
## Deployment information
The Nginx instances include the following modules and features:
@@ -54,67 +53,27 @@ The deployment architecture is illustrated in the diagram below:
## Known issues
- VTS module shows wrong upstream response time, [github.com/vozlt/nginx-module-vts#198](https://github.com/vozlt/nginx-module-vts/issues/198)
VTS module shows wrong upstream response time
- https://github.com/vozlt/nginx-module-vts/issues/198
## Parameters
### Common parameters
| Name | Description | Value |
| ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `size` | Persistent Volume size | `10Gi` |
| `storageClass` | StorageClass used to store the data | `""` |
| `haproxy.replicas` | Number of HAProxy replicas | `2` |
| `nginx.replicas` | Number of Nginx replicas | `2` |
| `haproxy.resources` | Explicit CPU and memory configuration for each HAProxy replica. When left empty, the preset defined in `resourcesPreset` is applied. | `{}` |
| `haproxy.resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge. | `nano` |
| `nginx.resources` | Explicit CPU and memory configuration for each nginx replica. When left empty, the preset defined in `resourcesPreset` is applied. | `{}` |
| `nginx.resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge. | `nano` |
| Name | Description | Value |
| ------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `size` | Persistent Volume size | `10Gi` |
| `storageClass` | StorageClass used to store the data | `""` |
| `haproxy.replicas` | Number of HAProxy replicas | `2` |
| `nginx.replicas` | Number of Nginx replicas | `2` |
| `haproxy.resources` | Resources | `{}` |
| `haproxy.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
| `nginx.resources` | Resources | `{}` |
| `nginx.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
### Configuration parameters
| Name | Description | Value |
| ----------- | ----------------------- | ----- |
| `endpoints` | Endpoints configuration | `[]` |
## Parameter examples and reference
### resources and resourcesPreset
`resources` sets explicit CPU and memory configurations for each replica.
When left empty, the preset defined in `resourcesPreset` is applied.
```yaml
resources:
cpu: 4000m
memory: 4Gi
```
`resourcePreset` sets named CPU and memory configurations for each replica.
This setting is ignored if the corresponding `resources` value is set.
| Preset name | CPU | memory |
|-------------|--------|---------|
| `nano` | `100m` | `128Mi` |
| `micro` | `250m` | `256Mi` |
| `small` | `500m` | `512Mi` |
| `medium` | `500m` | `1Gi` |
| `large` | `1` | `2Gi` |
| `xlarge` | `2` | `4Gi` |
| `2xlarge` | `4` | `8Gi` |
### endpoints
`endpoints` is a flat list of IP addresses:
```yaml
endpoints:
- 10.100.3.1:80
- 10.100.3.11:80
- 10.100.3.2:80
- 10.100.3.12:80
- 10.100.3.3:80
- 10.100.3.13:80
```

View File

@@ -1 +0,0 @@
../../../library/cozy-lib

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/nginx-cache:0.5.2@sha256:e0a07082bb6fc6aeaae2315f335386f1705a646c72f9e0af512aebbca5cb2b15
ghcr.io/cozystack/cozystack/nginx-cache:0.5.0@sha256:c1944c60a449e36e29153a38db6feee41139d38b02fe3670efb673feb3bc0ee6

View File

@@ -34,9 +34,9 @@ spec:
- image: haproxy:latest
name: haproxy
{{- if .Values.haproxy.resources }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.haproxy.resources $) | nindent 10 }}
resources: {{- toYaml .Values.haproxy.resources | nindent 10 }}
{{- else if ne .Values.haproxy.resourcesPreset "none" }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.haproxy.resourcesPreset $) | nindent 10 }}
resources: {{- include "resources.preset" (dict "type" .Values.haproxy.resourcesPreset "Release" .Release) | nindent 10 }}
{{- end }}
ports:
- containerPort: 8080

View File

@@ -53,9 +53,9 @@ spec:
containers:
- name: nginx
{{- if $.Values.nginx.resources }}
resources: {{- include "cozy-lib.resources.sanitize" (list $.Values.nginx.resources $) | nindent 10 }}
resources: {{- toYaml $.Values.nginx.resources | nindent 10 }}
{{- else if ne $.Values.nginx.resourcesPreset "none" }}
resources: {{- include "cozy-lib.resources.preset" (list $.Values.nginx.resourcesPreset $) | nindent 10 }}
resources: {{- include "resources.preset" (dict "type" $.Values.nginx.resourcesPreset "Release" $.Release) | nindent 10 }}
{{- end }}
image: "{{ $.Files.Get "images/nginx-cache.tag" | trim }}"
readinessProbe:

View File

@@ -1,39 +0,0 @@
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}-haproxy
spec:
replicas: {{ .Values.haproxy.replicas }}
minReplicas: 1
kind: http-cache
type: http-cache
selector:
app: {{ $.Release.Name }}-haproxy
version: {{ $.Chart.Version }}
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}-nginx
spec:
replicas: {{ .Values.nginx.replicas }}
minReplicas: 1
kind: http-cache
type: http-cache
selector:
app: {{ $.Release.Name }}-nginx-cache
version: {{ $.Chart.Version }}
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}
spec:
replicas: {{ .Values.replicas }}
minReplicas: 1
kind: http-cache
type: http-cache
selector:
app.kubernetes.io/instance: {{ $.Release.Name }}
version: {{ $.Chart.Version }}

View File

@@ -27,23 +27,13 @@
},
"resources": {
"type": "object",
"description": "Explicit CPU and memory configuration for each HAProxy replica. When left empty, the preset defined in `resourcesPreset` is applied.",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.",
"default": "nano",
"enum": [
"none",
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
}
}
},
@@ -57,23 +47,13 @@
},
"resources": {
"type": "object",
"description": "Explicit CPU and memory configuration for each nginx replica. When left empty, the preset defined in `resourcesPreset` is applied.",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.",
"default": "nano",
"enum": [
"none",
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
}
}
},
@@ -84,4 +64,4 @@
"items": {}
}
}
}
}

View File

@@ -12,23 +12,31 @@ size: 10Gi
storageClass: ""
haproxy:
replicas: 2
## @param haproxy.resources Explicit CPU and memory configuration for each HAProxy replica. When left empty, the preset defined in `resourcesPreset` is applied.
## @param haproxy.resources Resources
resources: {}
# resources:
# cpu: 4000m
# memory: 4Gi
## @param haproxy.resourcesPreset Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param haproxy.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "nano"
nginx:
replicas: 2
## @param nginx.resources Explicit CPU and memory configuration for each nginx replica. When left empty, the preset defined in `resourcesPreset` is applied.
## @param nginx.resources Resources
resources: {}
# resources:
# cpu: 4000m
# memory: 4Gi
## @param nginx.resourcesPreset Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param nginx.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "nano"
## @section Configuration parameters

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.7.1
version: 0.6.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -2,5 +2,3 @@ include ../../../scripts/package.mk
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md
yq -i -o json --indent 4 '.properties.kafka.properties.resourcesPreset.enum = ["none", "nano", "micro", "small", "medium", "large", "xlarge", "2xlarge"]' values.schema.json
yq -i -o json --indent 4 '.properties.zookeeper.properties.resourcesPreset.enum = ["none", "nano", "micro", "small", "medium", "large", "xlarge", "2xlarge"]' values.schema.json

View File

@@ -4,68 +4,22 @@
### Common parameters
| Name | Description | Value |
| --------------------------- | -------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `kafka.size` | Persistent Volume size for Kafka | `10Gi` |
| `kafka.replicas` | Number of Kafka replicas | `3` |
| `kafka.storageClass` | StorageClass used to store the Kafka data | `""` |
| `zookeeper.size` | Persistent Volume size for ZooKeeper | `5Gi` |
| `zookeeper.replicas` | Number of ZooKeeper replicas | `3` |
| `zookeeper.storageClass` | StorageClass used to store the ZooKeeper data | `""` |
| `kafka.resources` | Explicit CPU and memory configuration for each Kafka replica. When left empty, the preset defined in `resourcesPreset` is applied. | `{}` |
| `kafka.resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge. | `small` |
| `zookeeper.resources` | Explicit CPU and memory configuration for each Zookeeper replica. When left empty, the preset defined in `resourcesPreset` is applied. | `{}` |
| `zookeeper.resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge. | `small` |
| Name | Description | Value |
| --------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `kafka.size` | Persistent Volume size for Kafka | `10Gi` |
| `kafka.replicas` | Number of Kafka replicas | `3` |
| `kafka.storageClass` | StorageClass used to store the Kafka data | `""` |
| `zookeeper.size` | Persistent Volume size for ZooKeeper | `5Gi` |
| `zookeeper.replicas` | Number of ZooKeeper replicas | `3` |
| `zookeeper.storageClass` | StorageClass used to store the ZooKeeper data | `""` |
| `kafka.resources` | Resources | `{}` |
| `kafka.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `small` |
| `zookeeper.resources` | Resources | `{}` |
| `zookeeper.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `micro` |
### Configuration parameters
| Name | Description | Value |
| -------- | -------------------- | ----- |
| `topics` | Topics configuration | `[]` |
## Parameter examples and reference
### resources and resourcesPreset
`resources` sets explicit CPU and memory configurations for each replica.
When left empty, the preset defined in `resourcesPreset` is applied.
```yaml
resources:
cpu: 4000m
memory: 4Gi
```
`resourcesPreset` sets named CPU and memory configurations for each replica.
This setting is ignored if the corresponding `resources` value is set.
| Preset name | CPU | memory |
|-------------|--------|---------|
| `nano` | `100m` | `128Mi` |
| `micro` | `250m` | `256Mi` |
| `small` | `500m` | `512Mi` |
| `medium` | `500m` | `1Gi` |
| `large` | `1` | `2Gi` |
| `xlarge` | `2` | `4Gi` |
| `2xlarge` | `4` | `8Gi` |
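As a usage sketch (keys match the chart's `values.yaml`; how a preset splits into requests and limits is up to the chart):
```yaml
kafka:
  resourcesPreset: "small"   # applied only while kafka.resources is empty
  resources: {}
zookeeper:
  resourcesPreset: "micro"   # ignored here, because resources is set
  resources:
    cpu: 500m
    memory: 1Gi
```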
### topics
```yaml
topics:
- name: Results
partitions: 1
replicas: 3
config:
min.insync.replicas: 2
- name: Orders
config:
cleanup.policy: compact
segment.ms: 3600000
max.compaction.lag.ms: 5400000
min.insync.replicas: 2
partitions: 1
replicas: 3
```

View File

@@ -1 +0,0 @@
../../../library/cozy-lib

View File

@@ -25,14 +25,3 @@ rules:
- {{ .Release.Name }}
- {{ $.Release.Name }}-zookeeper
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -9,9 +9,9 @@ spec:
kafka:
replicas: {{ .Values.kafka.replicas }}
{{- if .Values.kafka.resources }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.kafka.resources $) | nindent 6 }}
resources: {{- toYaml .Values.kafka.resources | nindent 6 }}
{{- else if ne .Values.kafka.resourcesPreset "none" }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.kafka.resourcesPreset $) | nindent 6 }}
resources: {{- include "resources.preset" (dict "type" .Values.kafka.resourcesPreset "Release" .Release) | nindent 6 }}
{{- end }}
listeners:
- name: plain
@@ -71,9 +71,9 @@ spec:
zookeeper:
replicas: {{ .Values.zookeeper.replicas }}
{{- if .Values.zookeeper.resources }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.zookeeper.resources $) | nindent 6 }}
resources: {{- toYaml .Values.zookeeper.resources | nindent 6 }}
{{- else if ne .Values.zookeeper.resourcesPreset "none" }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.zookeeper.resourcesPreset $) | nindent 6 }}
resources: {{- include "resources.preset" (dict "type" .Values.zookeeper.resourcesPreset "Release" .Release) | nindent 6 }}
{{- end }}
storage:
type: persistent-claim
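
The template above fills the `resources` fields of a Strimzi-style `Kafka` resource. A minimal sketch of the rendered spec, assuming `kafka.resources` is empty so the preset branch fires (the API version and the requests/limits split are assumptions, not taken from this diff):
```yaml
apiVersion: kafka.strimzi.io/v1beta2  # assumed Strimzi API version
kind: Kafka
spec:
  kafka:
    replicas: 3
    resources:                 # expanded from resourcesPreset "small"
      requests:
        cpu: 500m
        memory: 512Mi
      limits:
        cpu: 500m
        memory: 512Mi
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
```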

View File

@@ -27,23 +27,13 @@
},
"resources": {
"type": "object",
"description": "Explicit CPU and memory configuration for each Kafka replica. When left empty, the preset defined in `resourcesPreset` is applied.",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.",
"default": "small",
"enum": [
"none",
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "small"
}
}
},
@@ -67,23 +57,13 @@
},
"resources": {
"type": "object",
"description": "Explicit CPU and memory configuration for each Zookeeper replica. When left empty, the preset defined in `resourcesPreset` is applied.",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.",
"default": "small",
"enum": [
"none",
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "micro"
}
}
},
@@ -94,4 +74,4 @@
"items": {}
}
}
}
}

View File

@@ -14,25 +14,35 @@ kafka:
size: 10Gi
replicas: 3
storageClass: ""
## @param kafka.resources Explicit CPU and memory configuration for each Kafka replica. When left empty, the preset defined in `resourcesPreset` is applied.
## @param kafka.resources Resources
resources: {}
# resources:
# cpu: 4000m
# memory: 4Gi
## @param kafka.resourcesPreset Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param kafka.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "small"
zookeeper:
size: 5Gi
replicas: 3
storageClass: ""
## @param zookeeper.resources Explicit CPU and memory configuration for each Zookeeper replica. When left empty, the preset defined in `resourcesPreset` is applied.
## @param zookeeper.resources Resources
resources: {}
# resources:
# cpu: 4000m
# memory: 4Gi
## @param zookeeper.resourcesPreset Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.
resourcesPreset: "small"
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param zookeeper.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "micro"
## @section Configuration parameters

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.24.2
version: 0.21.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -109,44 +109,34 @@ See the reference for components utilized in this service:
### Kubernetes Control Plane Configuration
| Name | Description | Value |
| -------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------- | -------- |
| `controlPlane.apiServer.resources` | Explicit CPU and memory configuration for the API Server. When left empty, the preset defined in `resourcesPreset` is applied. | `{}` |
| `controlPlane.apiServer.resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge. | `medium` |
| `controlPlane.controllerManager.resources` | Explicit CPU and memory configuration for the Controller Manager. When left empty, the preset defined in `resourcesPreset` is applied. | `{}` |
| `controlPlane.controllerManager.resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge. | `micro` |
| `controlPlane.scheduler.resources` | Explicit CPU and memory configuration for the Scheduler. When left empty, the preset defined in `resourcesPreset` is applied. | `{}` |
| `controlPlane.scheduler.resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge. | `micro` |
| `controlPlane.konnectivity.server.resources` | Explicit CPU and memory configuration for Konnectivity. When left empty, the preset defined in `resourcesPreset` is applied. | `{}` |
| `controlPlane.konnectivity.server.resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge. | `micro` |
| Name | Description | Value |
| -------------------------------------------------- | ---------------------------------------------------------------------------- | ------- |
| `controlPlane.apiServer.resources` | Explicit CPU/memory resource requests and limits for the API server. | `{}` |
| `controlPlane.apiServer.resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `small` |
| `controlPlane.controllerManager.resources` | Explicit CPU/memory resource requests and limits for the controller manager. | `{}` |
| `controlPlane.controllerManager.resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `micro` |
| `controlPlane.scheduler.resources` | Explicit CPU/memory resource requests and limits for the scheduler. | `{}` |
| `controlPlane.scheduler.resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `micro` |
| `controlPlane.konnectivity.server.resources` | Explicit CPU/memory resource requests and limits for the Konnectivity. | `{}` |
| `controlPlane.konnectivity.server.resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `micro` |
## Parameter examples and reference
### resources and resourcesPreset
`resources` sets explicit CPU and memory configurations for each replica.
When left empty, the preset defined in `resourcesPreset` is applied.
In production environments, it's recommended to set `resources` explicitly.
Example of `controlPlane.*.resources`:
```yaml
resources:
cpu: 4000m
memory: 4Gi
limits:
cpu: 4000m
memory: 4Gi
requests:
cpu: 100m
memory: 512Mi
```
`resourcesPreset` sets named CPU and memory configurations for each replica.
This setting is ignored if the corresponding `resources` value is set.
| Preset name | CPU | memory |
|-------------|--------|---------|
| `nano` | `100m` | `128Mi` |
| `micro` | `250m` | `256Mi` |
| `small` | `500m` | `512Mi` |
| `medium` | `500m` | `1Gi` |
| `large` | `1` | `2Gi` |
| `xlarge` | `2` | `4Gi` |
| `2xlarge` | `4` | `8Gi` |
Allowed values for `controlPlane.*.resourcesPreset` are `none`, `nano`, `micro`, `small`, `medium`, `large`, `xlarge`, `2xlarge`.
This value is ignored if the corresponding `resources` value is set.
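A usage sketch combining both knobs per control-plane component (values are illustrative):
```yaml
controlPlane:
  apiServer:
    resourcesPreset: "medium"  # used because resources is empty
    resources: {}
  controllerManager:
    resourcesPreset: "micro"   # ignored: explicit resources take precedence
    resources:
      limits:
        cpu: 500m
        memory: 1Gi
      requests:
        cpu: 100m
        memory: 256Mi
```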
## Resources Reference
### instanceType Resources

View File

@@ -1 +0,0 @@
../../../library/cozy-lib

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/cluster-autoscaler:0.24.2@sha256:3a8170433e1632e5cc2b6d9db34d0605e8e6c63c158282c38450415e700e932e
ghcr.io/cozystack/cozystack/cluster-autoscaler:0.21.0@sha256:3a8170433e1632e5cc2b6d9db34d0605e8e6c63c158282c38450415e700e932e

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/kubevirt-cloud-provider:0.24.2@sha256:b478952fab735f85c3ba15835012b1de8af5578b33a8a2670eaf532ffc17681e
ghcr.io/cozystack/cozystack/kubevirt-cloud-provider:0.21.0@sha256:c53cff22980c754eb45f552cb1ccd3d9ad0b4ce4c12b024012e0ae256fd114f0

View File

@@ -8,7 +8,7 @@ ENV GOARCH=$TARGETARCH
RUN git clone https://github.com/kubevirt/cloud-provider-kubevirt /go/src/kubevirt.io/cloud-provider-kubevirt \
&& cd /go/src/kubevirt.io/cloud-provider-kubevirt \
&& git checkout a0acf33
&& git checkout 443a1fe
WORKDIR /go/src/kubevirt.io/cloud-provider-kubevirt

View File

@@ -37,7 +37,7 @@ index 74166b5d9..4e744f8de 100644
klog.Infof("Initializing kubevirtEPSController")
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller.go b/pkg/controller/kubevirteps/kubevirteps_controller.go
index 53388eb8e..b56882c12 100644
index 6f6e3d322..b56882c12 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller.go
@@ -54,10 +54,10 @@ type Controller struct {
@@ -286,6 +286,22 @@ index 53388eb8e..b56882c12 100644
for _, eps := range slicesToDelete {
err := c.infraClient.DiscoveryV1().EndpointSlices(eps.Namespace).Delete(context.TODO(), eps.Name, metav1.DeleteOptions{})
if err != nil {
@@ -474,11 +538,11 @@ func (c *Controller) reconcileByAddressType(service *v1.Service, tenantSlices []
// Create the desired port configuration
var desiredPorts []discovery.EndpointPort
- for _, port := range service.Spec.Ports {
+ for i := range service.Spec.Ports {
desiredPorts = append(desiredPorts, discovery.EndpointPort{
- Port: &port.TargetPort.IntVal,
- Protocol: &port.Protocol,
- Name: &port.Name,
+ Port: &service.Spec.Ports[i].TargetPort.IntVal,
+ Protocol: &service.Spec.Ports[i].Protocol,
+ Name: &service.Spec.Ports[i].Name,
})
}
@@ -588,55 +652,114 @@ func ownedBy(endpointSlice *discovery.EndpointSlice, svc *v1.Service) bool {
return false
}
@@ -421,10 +437,18 @@ index 53388eb8e..b56882c12 100644
return nil
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller_test.go b/pkg/controller/kubevirteps/kubevirteps_controller_test.go
index 1c97035b4..14d92d340 100644
index 1fb86e25f..14d92d340 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller_test.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller_test.go
@@ -190,7 +190,7 @@ func setupTestKubevirtEPSController() *testKubevirtEPSController {
@@ -13,6 +13,7 @@ import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/intstr"
+ "k8s.io/apimachinery/pkg/util/sets"
dfake "k8s.io/client-go/dynamic/fake"
"k8s.io/client-go/kubernetes/fake"
"k8s.io/client-go/testing"
@@ -189,7 +190,7 @@ func setupTestKubevirtEPSController() *testKubevirtEPSController {
}: "VirtualMachineInstanceList",
})
@@ -433,87 +457,83 @@ index 1c97035b4..14d92d340 100644
err := controller.Init()
if err != nil {
@@ -697,51 +697,43 @@ var _ = g.Describe("KubevirtEPSController", g.Ordered, func() {
*createPort("http", 80, v1.ProtocolTCP),
[]discoveryv1.Endpoint{*createEndpoint("123.45.67.89", "worker-0-test", true, true, false)})
- // Define several unique ports for the Service
@@ -686,5 +687,229 @@ var _ = g.Describe("KubevirtEPSController", g.Ordered, func() {
return false, err
}).Should(BeTrue(), "EndpointSlice in infra cluster should be recreated by the controller after deletion")
})
+
+ g.It("Should correctly handle multiple unique ports in EndpointSlice", func() {
+ // Create a VMI in the infra cluster
+ createAndAssertVMI("worker-0-test", "ip-10-32-5-13", "123.45.67.89")
+
+ // Create an EndpointSlice in the tenant cluster
+ createAndAssertTenantSlice("test-epslice", "tenant-service-name", discoveryv1.AddressTypeIPv4,
+ *createPort("http", 80, v1.ProtocolTCP),
+ []discoveryv1.Endpoint{*createEndpoint("123.45.67.89", "worker-0-test", true, true, false)})
+
+ // Define multiple ports for the Service
servicePorts := []v1.ServicePort{
{
- Name: "client",
- Protocol: v1.ProtocolTCP,
- Port: 10001,
- TargetPort: intstr.FromInt(30396),
- NodePort: 30396,
- AppProtocol: nil,
+ servicePorts := []v1.ServicePort{
+ {
+ Name: "client",
+ Protocol: v1.ProtocolTCP,
+ Port: 10001,
+ TargetPort: intstr.FromInt(30396),
+ NodePort: 30396,
},
{
- Name: "dashboard",
- Protocol: v1.ProtocolTCP,
- Port: 8265,
- TargetPort: intstr.FromInt(31003),
- NodePort: 31003,
- AppProtocol: nil,
+ },
+ {
+ Name: "dashboard",
+ Protocol: v1.ProtocolTCP,
+ Port: 8265,
+ TargetPort: intstr.FromInt(31003),
+ NodePort: 31003,
},
{
- Name: "metrics",
- Protocol: v1.ProtocolTCP,
- Port: 8080,
- TargetPort: intstr.FromInt(30452),
- NodePort: 30452,
- AppProtocol: nil,
+ },
+ {
+ Name: "metrics",
+ Protocol: v1.ProtocolTCP,
+ Port: 8080,
+ TargetPort: intstr.FromInt(30452),
+ NodePort: 30452,
},
}
- // Create a Service with the first port
createAndAssertInfraServiceLB("infra-multiport-service", "tenant-service-name", "test-cluster",
- servicePorts[0],
- v1.ServiceExternalTrafficPolicyLocal)
+ },
+ }
+
+ createAndAssertInfraServiceLB("infra-multiport-service", "tenant-service-name", "test-cluster",
+ servicePorts[0], v1.ServiceExternalTrafficPolicyLocal)
- // Update the Service by adding the remaining ports
svc, err := testVals.infraClient.CoreV1().Services(infraNamespace).Get(context.TODO(), "infra-multiport-service", metav1.GetOptions{})
Expect(err).To(BeNil())
svc.Spec.Ports = servicePorts
-
_, err = testVals.infraClient.CoreV1().Services(infraNamespace).Update(context.TODO(), svc, metav1.UpdateOptions{})
Expect(err).To(BeNil())
var epsListMultiPort *discoveryv1.EndpointSliceList
- // Verify that the EndpointSlice is created with correct unique ports
Eventually(func() (bool, error) {
epsListMultiPort, err = testVals.infraClient.DiscoveryV1().EndpointSlices(infraNamespace).List(context.TODO(), metav1.ListOptions{})
if len(epsListMultiPort.Items) != 1 {
@@ -758,7 +750,6 @@ var _ = g.Describe("KubevirtEPSController", g.Ordered, func() {
}
}
- // Verify that all expected ports are present and without duplicates
if len(foundPortNames) != len(expectedPortNames) {
return false, err
}
@@ -769,5 +760,156 @@ var _ = g.Describe("KubevirtEPSController", g.Ordered, func() {
}).Should(BeTrue(), "EndpointSlice should contain all unique ports from the Service without duplicates")
})
+
+ svc, err := testVals.infraClient.CoreV1().Services(infraNamespace).Get(context.TODO(), "infra-multiport-service", metav1.GetOptions{})
+ Expect(err).To(BeNil())
+
+ svc.Spec.Ports = servicePorts
+ _, err = testVals.infraClient.CoreV1().Services(infraNamespace).Update(context.TODO(), svc, metav1.UpdateOptions{})
+ Expect(err).To(BeNil())
+
+ var epsListMultiPort *discoveryv1.EndpointSliceList
+
+ Eventually(func() (bool, error) {
+ epsListMultiPort, err = testVals.infraClient.DiscoveryV1().EndpointSlices(infraNamespace).List(context.TODO(), metav1.ListOptions{})
+ if len(epsListMultiPort.Items) != 1 {
+ return false, err
+ }
+
+ createdSlice := epsListMultiPort.Items[0]
+ expectedPortNames := []string{"client", "dashboard", "metrics"}
+ foundPortNames := []string{}
+
+ for _, port := range createdSlice.Ports {
+ if port.Name != nil {
+ foundPortNames = append(foundPortNames, *port.Name)
+ }
+ }
+
+ if len(foundPortNames) != len(expectedPortNames) {
+ return false, err
+ }
+
+ portSet := sets.NewString(foundPortNames...)
+ expectedPortSet := sets.NewString(expectedPortNames...)
+ return portSet.Equal(expectedPortSet), err
+ }).Should(BeTrue(), "EndpointSlice should contain all unique ports from the Service without duplicates")
+ })
+
+ g.It("Should not panic when Service changes to have a non-nil selector, causing EndpointSlice deletion with no new slices to create", func() {
+ createAndAssertVMI("worker-0-test", "ip-10-32-5-13", "123.45.67.89")
+ createAndAssertTenantSlice("test-epslice", "tenant-service-name", discoveryv1.AddressTypeIPv4,

View File

@@ -1,142 +0,0 @@
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller.go b/pkg/controller/kubevirteps/kubevirteps_controller.go
index 53388eb8e..28644236f 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller.go
@@ -12,7 +12,6 @@ import (
apiequality "k8s.io/apimachinery/pkg/api/equality"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
- "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
@@ -669,35 +668,50 @@ func (c *Controller) getDesiredEndpoints(service *v1.Service, tenantSlices []*di
for _, slice := range tenantSlices {
for _, endpoint := range slice.Endpoints {
// find all unique nodes that correspond to an endpoint in a tenant slice
+ if endpoint.NodeName == nil {
+ klog.Warningf("Skipping endpoint without NodeName in slice %s/%s", slice.Namespace, slice.Name)
+ continue
+ }
nodeSet.Insert(*endpoint.NodeName)
}
}
- klog.Infof("Desired nodes for service %s in namespace %s: %v", service.Name, service.Namespace, sets.List(nodeSet))
+ klog.Infof("Desired nodes for service %s/%s: %v", service.Namespace, service.Name, sets.List(nodeSet))
for _, node := range sets.List(nodeSet) {
// find vmi for node name
- obj := &unstructured.Unstructured{}
- vmi := &kubevirtv1.VirtualMachineInstance{}
-
- obj, err := c.infraDynamic.Resource(kubevirtv1.VirtualMachineInstanceGroupVersionKind.GroupVersion().WithResource("virtualmachineinstances")).Namespace(c.infraNamespace).Get(context.TODO(), node, metav1.GetOptions{})
+ obj, err := c.infraDynamic.
+ Resource(kubevirtv1.VirtualMachineInstanceGroupVersionKind.GroupVersion().WithResource("virtualmachineinstances")).
+ Namespace(c.infraNamespace).
+ Get(context.TODO(), node, metav1.GetOptions{})
if err != nil {
- klog.Errorf("Failed to get VirtualMachineInstance %s in namespace %s:%v", node, c.infraNamespace, err)
+ klog.Errorf("Failed to get VMI %q in namespace %q: %v", node, c.infraNamespace, err)
continue
}
+ vmi := &kubevirtv1.VirtualMachineInstance{}
err = runtime.DefaultUnstructuredConverter.FromUnstructured(obj.Object, vmi)
if err != nil {
klog.Errorf("Failed to convert Unstructured to VirtualMachineInstance: %v", err)
- klog.Fatal(err)
+ continue
}
+ if vmi.Status.NodeName == "" {
+ klog.Warningf("Skipping VMI %s/%s: NodeName is empty", vmi.Namespace, vmi.Name)
+ continue
+ }
+ nodeNamePtr := &vmi.Status.NodeName
+
ready := vmi.Status.Phase == kubevirtv1.Running
serving := vmi.Status.Phase == kubevirtv1.Running
terminating := vmi.Status.Phase == kubevirtv1.Failed || vmi.Status.Phase == kubevirtv1.Succeeded
for _, i := range vmi.Status.Interfaces {
if i.Name == "default" {
+ if i.IP == "" {
+ klog.Warningf("VMI %s/%s interface %q has no IP, skipping", vmi.Namespace, vmi.Name, i.Name)
+ continue
+ }
desiredEndpoints = append(desiredEndpoints, &discovery.Endpoint{
Addresses: []string{i.IP},
Conditions: discovery.EndpointConditions{
@@ -705,9 +719,9 @@ func (c *Controller) getDesiredEndpoints(service *v1.Service, tenantSlices []*di
Serving: &serving,
Terminating: &terminating,
},
- NodeName: &vmi.Status.NodeName,
+ NodeName: nodeNamePtr,
})
- continue
+ break
}
}
}
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller_test.go b/pkg/controller/kubevirteps/kubevirteps_controller_test.go
index 1c97035b4..d205d0bed 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller_test.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller_test.go
@@ -771,3 +771,55 @@ var _ = g.Describe("KubevirtEPSController", g.Ordered, func() {
})
})
+
+var _ = g.Describe("getDesiredEndpoints", func() {
+ g.It("should skip endpoints without NodeName and VMIs without NodeName or IP", func() {
+ // Setup controller
+ ctrl := setupTestKubevirtEPSController().controller
+
+ // Manually inject dynamic client content (1 VMI with missing NodeName)
+ vmi := createUnstructuredVMINode("vmi-without-node", "", "10.0.0.1") // empty NodeName
+ _, err := ctrl.infraDynamic.
+ Resource(kubevirtv1.VirtualMachineInstanceGroupVersionKind.GroupVersion().WithResource("virtualmachineinstances")).
+ Namespace(infraNamespace).
+ Create(context.TODO(), vmi, metav1.CreateOptions{})
+ Expect(err).To(BeNil())
+
+ // Create service
+ svc := createInfraServiceLB("test-svc", "test-svc", "test-cluster",
+ v1.ServicePort{
+ Name: "http",
+ Port: 80,
+ TargetPort: intstr.FromInt(8080),
+ Protocol: v1.ProtocolTCP,
+ },
+ v1.ServiceExternalTrafficPolicyLocal,
+ )
+
+ // One endpoint has nil NodeName, another is valid
+ nodeName := "vmi-without-node"
+ tenantSlice := &discoveryv1.EndpointSlice{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "slice",
+ Namespace: tenantNamespace,
+ Labels: map[string]string{
+ discoveryv1.LabelServiceName: "test-svc",
+ },
+ },
+ AddressType: discoveryv1.AddressTypeIPv4,
+ Endpoints: []discoveryv1.Endpoint{
+ { // should be skipped due to nil NodeName
+ Addresses: []string{"10.1.1.1"},
+ NodeName: nil,
+ },
+ { // will hit VMI without NodeName and also be skipped
+ Addresses: []string{"10.1.1.2"},
+ NodeName: &nodeName,
+ },
+ },
+ }
+
+ endpoints := ctrl.getDesiredEndpoints(svc, []*discoveryv1.EndpointSlice{tenantSlice})
+ Expect(endpoints).To(HaveLen(0), "Expected no endpoints due to missing NodeName or IP")
+ })
+})

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.24.2@sha256:598ab20550dbf495717e8e123e6b626bb36298f88dde851664301d393ac06ca3
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.21.0@sha256:510e4c8db50126391b94668fccce9f6ed82d298a02882d2585596b5c6213ddc3

View File

@@ -121,21 +121,21 @@ metadata:
spec:
apiServer:
{{- if .Values.controlPlane.apiServer.resources }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.controlPlane.apiServer.resources $) | nindent 6 }}
resources: {{- toYaml .Values.controlPlane.apiServer.resources | nindent 6 }}
{{- else if ne .Values.controlPlane.apiServer.resourcesPreset "none" }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.controlPlane.apiServer.resourcesPreset $) | nindent 6 }}
resources: {{- include "resources.preset" (dict "type" .Values.controlPlane.apiServer.resourcesPreset "Release" .Release) | nindent 6 }}
{{- end }}
controllerManager:
{{- if .Values.controlPlane.controllerManager.resources }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.controlPlane.controllerManager.resources $) | nindent 6 }}
resources: {{- toYaml .Values.controlPlane.controllerManager.resources | nindent 6 }}
{{- else if ne .Values.controlPlane.controllerManager.resourcesPreset "none" }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.controlPlane.controllerManager.resourcesPreset $) | nindent 6 }}
resources: {{- include "resources.preset" (dict "type" .Values.controlPlane.controllerManager.resourcesPreset "Release" .Release) | nindent 6 }}
{{- end }}
scheduler:
{{- if .Values.controlPlane.scheduler.resources }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.controlPlane.scheduler.resources $) | nindent 6 }}
resources: {{- toYaml .Values.controlPlane.scheduler.resources | nindent 6 }}
{{- else if ne .Values.controlPlane.scheduler.resourcesPreset "none" }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.controlPlane.scheduler.resourcesPreset $) | nindent 6 }}
resources: {{- include "resources.preset" (dict "type" .Values.controlPlane.scheduler.resourcesPreset "Release" .Release) | nindent 6 }}
{{- end }}
dataStoreName: "{{ $etcd }}"
addons:
@@ -146,9 +146,9 @@ spec:
server:
port: 8132
{{- if .Values.controlPlane.konnectivity.server.resources }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.controlPlane.konnectivity.server.resources $) | nindent 10 }}
resources: {{- toYaml .Values.controlPlane.konnectivity.server.resources | nindent 10 }}
{{- else if ne .Values.controlPlane.konnectivity.server.resourcesPreset "none" }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.controlPlane.konnectivity.server.resourcesPreset $) | nindent 10 }}
resources: {{- include "resources.preset" (dict "type" .Values.controlPlane.konnectivity.server.resourcesPreset "Release" .Release) | nindent 10 }}
{{- end }}
kubelet:
cgroupfs: systemd
@@ -211,25 +211,12 @@ spec:
- ["LABEL=ephemeral", "/ephemeral"]
- ["/ephemeral/kubelet", "/var/lib/kubelet", "none", "bind,nofail"]
- ["/ephemeral/containerd", "/var/lib/containerd", "none", "bind,nofail"]
{{- $sec := lookup "v1" "Secret" $.Release.Namespace (printf "%s-patch-containerd" $.Release.Name) }}
{{- if $sec }}
files:
{{- range $key, $_ := $sec.data }}
- path: /etc/containerd/certs.d/{{ trimSuffix ".toml" $key }}/hosts.toml
contentFrom:
secret:
name: {{ $.Release.Name }}-patch-containerd
key: {{ $key }}
permissions: "0400"
{{- end }}
{{- end }}
preKubeadmCommands:
- sed -i 's|root:x:|root::|' /etc/passwd
- systemctl stop containerd.service
- mkdir -p /ephemeral/kubelet /ephemeral/containerd
- mount -o bind /ephemeral/kubelet /var/lib/kubelet
- mount -o bind /ephemeral/containerd /var/lib/containerd
- sudo sed -i '/\[plugins."io.containerd.grpc.v1.cri".registry\]/,/^\[/ s|^\(\s*config_path\s*=\s*\).*|\1"/etc/containerd/certs.d"|' /etc/containerd/config.toml
- systemctl start containerd.service
joinConfiguration:
nodeRegistration:

View File

@@ -1,13 +0,0 @@
{{- $sourceSecret := lookup "v1" "Secret" "cozy-system" "patch-containerd" }}
{{- if $sourceSecret }}
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-patch-containerd
namespace: {{ .Release.Namespace }}
type: {{ $sourceSecret.type }}
data:
{{- range $key, $value := $sourceSecret.data }}
{{ printf "%s: %s" $key ($value | quote) | indent 2 }}
{{- end }}
{{- end }}
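
For context, this template mirrors a cluster-wide secret into the release namespace, and the cluster template earlier in the diff mounts each key as a containerd `hosts.toml`. A hypothetical source secret (name and namespace are from the lookup; the key and TOML content are illustrative):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: patch-containerd
  namespace: cozy-system
stringData:
  registry.example.org.toml: |  # lands at /etc/containerd/certs.d/registry.example.org/hosts.toml
    server = "https://registry.example.org"
```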

View File

@@ -34,14 +34,3 @@ rules:
- {{ $.Release.Name }}-{{ $groupName }}
{{- end }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -1,4 +1,3 @@
{{- if .Values.addons.certManager.enabled }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
@@ -55,4 +54,3 @@ stringData:
values: |
{{- toYaml .Values.addons.certManager.valuesOverride | nindent 4 }}
{{- end }}
{{- end }}

View File

@@ -17,7 +17,6 @@ spec:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
version: '>= 0.0.0-0'
kubeConfig:
secretRef:
name: {{ .Release.Name }}-admin-kubeconfig

View File

@@ -24,8 +24,8 @@ spec:
secretRef:
name: {{ .Release.Name }}-admin-kubeconfig
key: super-admin.svc
targetNamespace: cozy-monitoring
storageNamespace: cozy-monitoring
targetNamespace: cozy-monitoring-agents
storageNamespace: cozy-monitoring-agents
install:
createNamespace: true
timeout: "300s"

View File

@@ -1,6 +1,4 @@
{{- define "cozystack.defaultVPAValues" -}}
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $clusterDomain := (index $cozyConfig.data "cluster-domain") | default "cozy.local" }}
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $targetTenant := index $myNS.metadata.annotations "namespace.cozystack.io/monitoring" }}
vertical-pod-autoscaler:
@@ -15,7 +13,7 @@ vertical-pod-autoscaler:
metric-for-pod-labels: kube_pod_labels{job="kube-state-metrics", tenant="{{ .Release.Namespace }}", cluster="{{ .Release.Name }}"}[8d]
pod-name-label: pod
pod-namespace-label: namespace
prometheus-address: http://vmselect-shortterm.{{ $targetTenant }}.svc.{{ $clusterDomain }}:8481/select/0/prometheus/
prometheus-address: http://vmselect-shortterm.{{ $targetTenant }}.svc.cozy.local:8481/select/0/prometheus/
prometheus-cadvisor-job-name: cadvisor
resources:
limits:

View File

@@ -20,13 +20,13 @@
"properties": {
"resources": {
"type": "object",
"description": "Explicit CPU and memory configuration for the API Server. When left empty, the preset defined in `resourcesPreset` is applied.",
"description": "Explicit CPU/memory resource requests and limits for the API server.",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.",
"default": "medium",
"description": "Use a common resources preset when `resources` is not set explicitly.",
"default": "small",
"enum": [
"none",
"nano",
@@ -45,12 +45,12 @@
"properties": {
"resources": {
"type": "object",
"description": "Explicit CPU and memory configuration for the Controller Manager. When left empty, the preset defined in `resourcesPreset` is applied.",
"description": "Explicit CPU/memory resource requests and limits for the controller manager.",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.",
"description": "Use a common resources preset when `resources` is not set explicitly.",
"default": "micro",
"enum": [
"none",
@@ -70,12 +70,12 @@
"properties": {
"resources": {
"type": "object",
"description": "Explicit CPU and memory configuration for the Scheduler. When left empty, the preset defined in `resourcesPreset` is applied.",
"description": "Explicit CPU/memory resource requests and limits for the scheduler.",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.",
"description": "Use a common resources preset when `resources` is not set explicitly.",
"default": "micro",
"enum": [
"none",
@@ -98,12 +98,12 @@
"properties": {
"resources": {
"type": "object",
"description": "Explicit CPU and memory configuration for Konnectivity. When left empty, the preset defined in `resourcesPreset` is applied.",
"description": "Explicit CPU/memory resource requests and limits for the Konnectivity.",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.",
"description": "Use a common resources preset when `resources` is not set explicitly.",
"default": "micro",
"enum": [
"none",

View File

@@ -110,31 +110,35 @@ controlPlane:
replicas: 2
apiServer:
## @param controlPlane.apiServer.resources Explicit CPU and memory configuration for the API Server. When left empty, the preset defined in `resourcesPreset` is applied.
## @param controlPlane.apiServer.resourcesPreset Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.
## @param controlPlane.apiServer.resources Explicit CPU/memory resource requests and limits for the API server.
## @param controlPlane.apiServer.resourcesPreset Use a common resources preset when `resources` is not set explicitly.
## e.g:
## resources:
## cpu: 4000m
## memory: 4Gi
## limits:
## cpu: 4000m
## memory: 4Gi
## requests:
## cpu: 100m
## memory: 512Mi
##
resourcesPreset: "medium"
resourcesPreset: "small"
resources: {}
controllerManager:
## @param controlPlane.controllerManager.resources Explicit CPU and memory configuration for the Controller Manager. When left empty, the preset defined in `resourcesPreset` is applied.
## @param controlPlane.controllerManager.resourcesPreset Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.
## @param controlPlane.controllerManager.resources Explicit CPU/memory resource requests and limits for the controller manager.
## @param controlPlane.controllerManager.resourcesPreset Use a common resources preset when `resources` is not set explicitly.
resourcesPreset: "micro"
resources: {}
scheduler:
## @param controlPlane.scheduler.resources Explicit CPU and memory configuration for the Scheduler. When left empty, the preset defined in `resourcesPreset` is applied.
## @param controlPlane.scheduler.resourcesPreset Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.
## @param controlPlane.scheduler.resources Explicit CPU/memory resource requests and limits for the scheduler.
## @param controlPlane.scheduler.resourcesPreset Use a common resources preset when `resources` is not set explicitly.
resourcesPreset: "micro"
resources: {}
konnectivity:
server:
## @param controlPlane.konnectivity.server.resources Explicit CPU and memory configuration for Konnectivity. When left empty, the preset defined in `resourcesPreset` is applied.
## @param controlPlane.konnectivity.server.resourcesPreset Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.
## @param controlPlane.konnectivity.server.resources Explicit CPU/memory resource requests and limits for the Konnectivity.
## @param controlPlane.konnectivity.server.resourcesPreset Use a common resources preset when `resources` is not set explicitly.
resourcesPreset: "micro"
resources: {}

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.8.2
version: 0.7.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -5,7 +5,6 @@ include ../../../scripts/package.mk
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md
yq -i -o json --indent 4 '.properties.resourcesPreset.enum = ["none", "nano", "micro", "small", "medium", "large", "xlarge", "2xlarge"]' values.schema.json
image:
docker buildx build images/mariadb-backup \

View File

@@ -1,7 +1,6 @@
## Managed MariaDB Service
The Managed MariaDB Service offers a powerful and widely used relational database solution.
This service allows you to create and manage a replicated MariaDB cluster seamlessly.
The Managed MariaDB Service offers a powerful and widely used relational database solution. This service allows you to create and manage a replicated MariaDB cluster seamlessly.
## Deployment Details
@@ -47,7 +46,7 @@ restic -r s3:s3.example.org/mariadb-backups/database_name restore latest --targe
```
more details:
- https://blog.aenix.io/restic-effective-backup-from-stdin-4bc1e8f083c1
- https://itnext.io/restic-effective-backup-from-stdin-4bc1e8f083c1
### Known issues
@@ -84,67 +83,16 @@ more details:
### Backup parameters
| Name | Description | Value |
| ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------ |
| `backup.enabled` | Enable periodic backups | `false` |
| `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/postgres-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
| `resources` | Explicit CPU and memory configuration for each MariaDB replica. When left empty, the preset defined in `resourcesPreset` is applied. | `{}` |
| `resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge. | `nano` |
| Name | Description | Value |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled` | Enable pereiodic backups | `false` |
| `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/postgres-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
| `resources` | Resources | `{}` |
| `resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
## Parameter examples and reference
### resources and resourcesPreset
`resources` sets explicit CPU and memory configurations for each replica.
When left empty, the preset defined in `resourcesPreset` is applied.
```yaml
resources:
cpu: 4000m
memory: 4Gi
```
`resourcesPreset` sets named CPU and memory configurations for each replica.
This setting is ignored if the corresponding `resources` value is set.
| Preset name | CPU | memory |
|-------------|--------|---------|
| `nano` | `100m` | `128Mi` |
| `micro` | `250m` | `256Mi` |
| `small` | `500m` | `512Mi` |
| `medium` | `500m` | `1Gi` |
| `large` | `1` | `2Gi` |
| `xlarge` | `2` | `4Gi` |
| `2xlarge` | `4` | `8Gi` |
### users
```yaml
users:
user1:
maxUserConnections: 1000
password: hackme
user2:
maxUserConnections: 1000
password: hackme
```
### databases
```yaml
databases:
myapp1:
roles:
admin:
- user1
readonly:
- user2
```

View File

@@ -1 +0,0 @@
../../../library/cozy-lib

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/mariadb-backup:0.8.1@sha256:cfd1c37d8ad24e10681d82d6e6ce8a641b4602c1b0ffa8516ae15b4958bb12d4
ghcr.io/cozystack/cozystack/mariadb-backup:0.7.0@sha256:cfd1c37d8ad24e10681d82d6e6ce8a641b4602c1b0ffa8516ae15b4958bb12d4

View File

@@ -25,14 +25,3 @@ rules:
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -57,13 +57,6 @@ spec:
name: {{ .Release.Name }}-my-cnf
key: config
service:
metadata:
labels:
app.kubernetes.io/instance: {{ $.Release.Name }}
{{- if and .Values.external (eq (int .Values.replicas) 1) }}
type: LoadBalancer
{{- end }}
storage:
size: {{ .Values.size }}
resizeInUseVolumes: true
@@ -72,7 +65,7 @@ spec:
storageClassName: {{ . }}
{{- end }}
{{- if and .Values.external (gt (int .Values.replicas) 1) }}
{{- if .Values.external }}
primaryService:
type: LoadBalancer
{{- end }}
@@ -81,7 +74,7 @@ spec:
# type: LoadBalancer
{{- if .Values.resources }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.resources $) | nindent 4 }}
resources: {{- toYaml .Values.resources | nindent 4 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.resourcesPreset $) | nindent 4 }}
resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 4 }}
{{- end }}

View File

@@ -27,7 +27,7 @@
"properties": {
"enabled": {
"type": "boolean",
"description": "Enable periodic backups",
"description": "Enable pereiodic backups",
"default": false
},
"s3Region": {
@@ -69,23 +69,13 @@
},
"resources": {
"type": "object",
"description": "Explicit CPU and memory configuration for each MariaDB replica. When left empty, the preset defined in `resourcesPreset` is applied.",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.",
"default": "nano",
"enum": [
"none",
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
}
}
}
}

View File

@@ -37,7 +37,7 @@ databases: {}
## @section Backup parameters
## @param backup.enabled Enable periodic backups
## @param backup.enabled Enable pereiodic backups
## @param backup.s3Region The AWS S3 region where backups are stored
## @param backup.s3Bucket The S3 bucket used for storing backups
## @param backup.schedule Cron schedule for automated backups
@@ -55,11 +55,15 @@ backup:
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
## @param resources Explicit CPU and memory configuration for each MariaDB replica. When left empty, the preset defined in `resourcesPreset` is applied.
## @param resources Resources
resources: {}
# resources:
# cpu: 4000m
# memory: 4Gi
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param resourcesPreset Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.
## @param resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "nano"

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.7.1
version: 0.6.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -2,4 +2,3 @@ include ../../../scripts/package.mk
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md
yq -i -o json --indent 4 '.properties.resourcesPreset.enum = ["none", "nano", "micro", "small", "medium", "large", "xlarge", "2xlarge"]' values.schema.json

View File

@@ -1,48 +1,18 @@
# Managed NATS Service
NATS is an open-source, simple, secure, and high performance messaging system.
It provides a data layer for cloud native applications, IoT messaging, and microservices architectures.
## Parameters
### Common parameters
| Name | Description | Value |
| ------------------- | --------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `replicas`          | Number of NATS replicas                                                                                                             | `2`     |
| `storageClass` | StorageClass used to store the data | `""` |
| `users` | Users configuration | `{}` |
| `jetstream.size` | Jetstream persistent storage size | `10Gi` |
| `jetstream.enabled` | Enable or disable Jetstream | `true` |
| `config.merge` | Additional configuration to merge into NATS config | `{}` |
| `config.resolver` | Additional configuration to merge into NATS config | `{}` |
| `resources` | Explicit CPU and memory configuration for each NATS replica. When left empty, the preset defined in `resourcesPreset` is applied. | `{}` |
| `resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge. | `nano` |
## Parameter examples and reference
### resources and resourcesPreset
`resources` sets explicit CPU and memory configurations for each replica.
When left empty, the preset defined in `resourcesPreset` is applied.
```yaml
resources:
cpu: 4000m
memory: 4Gi
```
`resourcesPreset` sets named CPU and memory configurations for each replica.
This setting is ignored if the corresponding `resources` value is set.
| Preset name | CPU | memory |
|-------------|--------|---------|
| `nano` | `100m` | `128Mi` |
| `micro` | `250m` | `256Mi` |
| `small` | `500m` | `512Mi` |
| `medium` | `500m` | `1Gi` |
| `large` | `1` | `2Gi` |
| `xlarge` | `2` | `4Gi` |
| `2xlarge` | `4` | `8Gi` |
| Name | Description | Value |
| ------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `replicas`          | Number of NATS replicas                                                                                                                                                                                              | `2`     |
| `storageClass` | StorageClass used to store the data | `""` |
| `users` | Users configuration | `{}` |
| `jetstream.size` | Jetstream persistent storage size | `10Gi` |
| `jetstream.enabled` | Enable or disable Jetstream | `true` |
| `config.merge` | Additional configuration to merge into NATS config | `{}` |
| `config.resolver` | Additional configuration to merge into NATS config | `{}` |
| `resources` | Resources | `{}` |
| `resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
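
Judging from the chart template (which iterates over `.Values.users` and reads each entry's `password`), the `users` map is keyed by user name; a minimal sketch, values illustrative:
```yaml
users:
  user1:
    password: ahz1eiRohdoh1Ieb
  user2:
    password: ohBee9wohfoo7Eey
```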

View File

@@ -1 +0,0 @@
../../../library/cozy-lib

View File

@@ -1,5 +1,3 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $clusterDomain := (index $cozyConfig.data "cluster-domain") | default "cozy.local" }}
{{- $passwords := dict }}
{{- range $user, $u := .Values.users }}
{{- if $u.password }}
@@ -47,15 +45,12 @@ spec:
- name: nats
image: nats:2.10.17-alpine
{{- if .Values.resources }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.resources $) | nindent 22 }}
resources: {{- toYaml .Values.resources | nindent 22 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.resourcesPreset $) | nindent 22 }}
resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 22 }}
{{- end }}
fullnameOverride: {{ .Release.Name }}
config:
cluster:
routeURLs:
k8sClusterDomain: {{ $clusterDomain }}
{{- if or (gt (len $passwords) 0) (gt (len .Values.config.merge) 0) }}
merge:
{{- if gt (len $passwords) 0 }}

View File

@@ -24,14 +24,3 @@ rules:
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -49,23 +49,13 @@
},
"resources": {
"type": "object",
"description": "Explicit CPU and memory configuration for each NATS replica. When left empty, the preset defined in `resourcesPreset` is applied.",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.",
"default": "nano",
"enum": [
"none",
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
}
}
}
}

Some files were not shown because too many files have changed in this diff.