Compare commits

16 Commits

commit e98fa9cd72 (Andrei Kvapil, 2025-06-17 02:27:18 +02:00)
    Release v0.31.2 (#1071)
    This PR prepares the release `v0.31.2`.

commit 01d90bb736 (Andrei Kvapil, 2025-06-16 21:31:41 +02:00)
    [Backport release-0.31] Refactor roles and permissions for tenants
    Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

commit e04cfaaa58 (github-actions, 2025-06-16 19:26:34 +00:00)
    Prepare release v0.31.2
    Signed-off-by: github-actions <github-actions@github.com>

commit 8c86905b22 (Andrei Kvapil, 2025-06-16 18:21:55 +02:00)
    [Backport release-0.31] [cozystack-controller] Fix RBAC for annotating namespaces (#1037)
    # Description
    Backport of #1031 to `release-0.31`.

commit 84955d13ac (Andrei Kvapil, 2025-06-16 18:21:42 +02:00)
    [Backport release-0.31] [kafka] specify mimimal working resource presets (#1041)
    # Description
    Backport of #1040 to `release-0.31`.

commit 46e5044851 (Andrei Kvapil, 2025-06-16 18:21:17 +02:00)
    [Backport release-0.31] Dashboard update and fixes (#1066)
    - [dashboard] Cumulative update (#1042)
    - [dashboard] Remove dependency on linsting secrets (#1066)

commit 3a3f44a427 (Andrei Kvapil, 2025-06-16 18:20:46 +02:00)
    [dashboard] Remove dependency on linsting secrets
    Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

commit 0cc35a212c (Andrei Kvapil, 2025-06-16 18:20:39 +02:00)
    [dashboard] Cumulative update
    Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

commit 0bb79adec0 (Andrei Kvapil, 2025-06-16 18:19:29 +02:00)
    [Backport release-0.31] [docs] Review the Clickhouse app docs (#1065)
    # Description
    Backport of #1059 to `release-0.31`.

commit 9e89a9d3ad (Andrei Kvapil, 2025-06-16 18:19:19 +02:00)
    [Backport release-0.31] [bugfix] fix distro full bundle (#1064)
    # Description
    Backport of #1056 to `release-0.31`.

commit ddfb1d65e3 (Andrei Kvapil, 2025-06-16 18:18:52 +02:00)
    [Backport release-0.31] [platform] decrease resources for system applications (#1058)
    # Description
    Backport of #1054 to `release-0.31`.

commit efafe16d3b (Nick Volynkin, 2025-06-16 16:14:48 +00:00)
    [docs] Review the Clickhouse app docs
    Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
    (cherry picked from commit 980185ca2b)

commit e1b4861c8a (kklinch0, 2025-06-16 16:14:37 +00:00)
    [bugfix] fix distro full bundle
    Signed-off-by: kklinch0 <kklinch0@gmail.com>
    (cherry picked from commit 6a713e5eb4)

commit 4d0bf14fc3 (kklinch0, 2025-06-14 06:33:03 +00:00)
    [platform] cut resources
    Signed-off-by: kklinch0 <kklinch0@gmail.com>
    (cherry picked from commit 0fa70d9d38)

commit 35069ff3e9 (Andrei Kvapil, 2025-06-09 22:34:37 +00:00)
    [kafka] specify mimimal working resource presets
    Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
    (cherry picked from commit ba97a4593c)

commit b9afd69df0 (Andrei Kvapil, 2025-06-09 08:16:37 +00:00)
    [cozystack-controller] Fix RBAC for annotating namespaces
    Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
    (cherry picked from commit ac5145be87)
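Most of these commits are backports: a change already merged elsewhere is cherry-picked onto the `release-0.31` branch, which is what the `(cherry picked from commit ...)` trailers record. A minimal sketch of how such a backport commit is produced, whether by hand or by a backport bot (the branch name below is illustrative; the commit hash is taken from the list above):

```shell
# Start a backport branch from the target release branch.
git checkout -b backport-to-release-0.31 origin/release-0.31

# Cherry-pick the original commit:
#   -x appends a "(cherry picked from commit <sha>)" trailer,
#   -s adds a Signed-off-by trailer.
git cherry-pick -x -s ba97a4593c

# Push the branch and open a pull request against release-0.31.
git push origin backport-to-release-0.31
```

The resulting pull request then appears in the list above with the `[Backport release-0.31]` prefix.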
1242 changed files with 86345 additions and 140009 deletions

.github/CODEOWNERS (vendored), 2 changed lines

@@ -1 +1 @@
* @kvaps @lllamnyp @nbykov0
* @kvaps @lllamnyp @klinch0


@@ -1,50 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
labels: 'bug'
assignees: ''
---
<!--
Thank you for submitting a bug report!
Please fill in the fields below to help us investigate the problem.
-->
**Describe the bug**
A clear and concise description of what the bug is.
**Environment**
- Cozystack version
- Provider: on-prem, Hetzner, and so on
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behaviour**
When taking the steps to reproduce, what should have happened differently?
**Actual behaviour**
A clear and concise description of what happens when the bug occurs. Explain how the system currently behaves, including error messages, unexpected results, or incorrect functionality observed during execution.
**Logs**
```
Paste any relevant logs here. Please redact tokens, passwords, private keys.
```
**Screenshots**
If applicable, add screenshots to help explain the problem.
**Additional context**
Add any other context about the problem here.
**Checklist**
- [ ] I have checked the documentation
- [ ] I have searched for similar issues
- [ ] I have included all required information
- [ ] I have provided clear steps to reproduce
- [ ] I have included relevant logs


@@ -1,24 +0,0 @@
<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium], [kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres], [virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesistate to ask for opinion and review in the community chats, even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported to a previous version.
-->
## What this PR does
### Release note
<!-- Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->
```release-note
[]
```


@@ -2,7 +2,7 @@ name: Pre-Commit Checks
on:
pull_request:
types: [opened, synchronize, reopened]
types: [labeled, opened, synchronize, reopened]
concurrency:
group: pre-commit-${{ github.workflow }}-${{ github.event.pull_request.number }}
@@ -28,7 +28,15 @@ jobs:
- name: Install generate
run: |
curl -sSL https://github.com/cozystack/cozyvalues-gen/releases/download/v1.0.5/cozyvalues-gen-linux-amd64.tar.gz | tar -xzvf- -C /usr/local/bin/ cozyvalues-gen
sudo apt update
sudo apt install curl -y
curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash -
sudo apt install nodejs -y
git clone https://github.com/bitnami/readme-generator-for-helm
cd ./readme-generator-for-helm
npm install
npm install -g pkg
pkg . -o /usr/local/bin/readme-generator
- name: Run pre-commit hooks
run: |


@@ -1,17 +1,100 @@
name: "Releasing PR"
name: Releasing PR
on:
pull_request:
types: [closed]
paths-ignore:
- 'docs/**/*'
types: [labeled, opened, synchronize, reopened, closed]
# Cancel inflight runs for the same PR when a new push arrives.
concurrency:
group: pr-${{ github.workflow }}-${{ github.event.pull_request.number }}
group: pull-requests-release-${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
jobs:
verify:
name: Test Release
runs-on: [self-hosted]
permissions:
contents: read
packages: write
if: |
contains(github.event.pull_request.labels.*.name, 'release') &&
github.event.action != 'closed'
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
fetch-tags: true
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
registry: ghcr.io
- name: Extract tag from PR branch
id: get_tag
uses: actions/github-script@v7
with:
script: |
const branch = context.payload.pull_request.head.ref;
const m = branch.match(/^release-(\d+\.\d+\.\d+(?:[-\w\.]+)?)$/);
if (!m) {
core.setFailed(`❌ Branch '${branch}' does not match 'release-X.Y.Z[-suffix]'`);
return;
}
const tag = `v${m[1]}`;
core.setOutput('tag', tag);
- name: Find draft release and get asset IDs
id: fetch_assets
uses: actions/github-script@v7
with:
github-token: ${{ secrets.GH_PAT }}
script: |
const tag = '${{ steps.get_tag.outputs.tag }}';
const releases = await github.rest.repos.listReleases({
owner: context.repo.owner,
repo: context.repo.repo,
per_page: 100
});
const draft = releases.data.find(r => r.tag_name === tag && r.draft);
if (!draft) {
core.setFailed(`Draft release '${tag}' not found`);
return;
}
const findAssetId = (name) =>
draft.assets.find(a => a.name === name)?.id;
const installerId = findAssetId("cozystack-installer.yaml");
const diskId = findAssetId("nocloud-amd64.raw.xz");
if (!installerId || !diskId) {
core.setFailed("Missing required assets");
return;
}
core.setOutput("installer_id", installerId);
core.setOutput("disk_id", diskId);
- name: Download assets from GitHub API
run: |
mkdir -p _out/assets
curl -sSL \
-H "Authorization: token ${GH_PAT}" \
-H "Accept: application/octet-stream" \
-o _out/assets/cozystack-installer.yaml \
"https://api.github.com/repos/${GITHUB_REPOSITORY}/releases/assets/${{ steps.fetch_assets.outputs.installer_id }}"
curl -sSL \
-H "Authorization: token ${GH_PAT}" \
-H "Accept: application/octet-stream" \
-o _out/assets/nocloud-amd64.raw.xz \
"https://api.github.com/repos/${GITHUB_REPOSITORY}/releases/assets/${{ steps.fetch_assets.outputs.disk_id }}"
env:
GH_PAT: ${{ secrets.GH_PAT }}
- name: Run tests
run: make test
finalize:
name: Finalize Release
runs-on: [self-hosted]


@@ -1,17 +1,11 @@
name: Pull Request
env:
# TODO: unhardcode this
REGISTRY: iad.ocir.io/idyksih5sir9/cozystack
on:
pull_request:
types: [opened, synchronize, reopened]
paths-ignore:
- 'docs/**/*'
types: [labeled, opened, synchronize, reopened]
# Cancel inflight runs for the same PR when a new push arrives.
concurrency:
group: pr-${{ github.workflow }}-${{ github.event.pull_request.number }}
group: pull-requests-${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
jobs:
@@ -33,322 +27,35 @@ jobs:
fetch-depth: 0
fetch-tags: true
- name: Run unit tests
run: make unit-tests
- name: Set up Docker config
run: |
if [ -d ~/.docker ]; then
cp -r ~/.docker "${{ runner.temp }}/.docker"
fi
- name: Login to GitHub Container Registry
if: ${{ !github.event.pull_request.head.repo.fork }}
uses: docker/login-action@v3
with:
username: ${{ secrets.OCIR_USER}}
password: ${{ secrets.OCIR_TOKEN }}
registry: iad.ocir.io
env:
DOCKER_CONFIG: ${{ runner.temp }}/.docker
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
registry: ghcr.io
- name: Build
run: make build
env:
DOCKER_CONFIG: ${{ runner.temp }}/.docker
- name: Build Talos image
run: make -C packages/core/installer talos-nocloud
- name: Save git diff as patch
if: "!contains(github.event.pull_request.labels.*.name, 'release')"
run: git diff HEAD > _out/assets/pr.patch
- name: Upload git diff patch
if: "!contains(github.event.pull_request.labels.*.name, 'release')"
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: pr-patch
path: _out/assets/pr.patch
- name: Upload installer
uses: actions/upload-artifact@v4
with:
name: cozystack-installer
path: _out/assets/cozystack-installer.yaml
name: cozystack-artefacts
path: |
_out/assets/nocloud-amd64.raw.xz
_out/assets/cozystack-installer.yaml
- name: Upload Talos image
uses: actions/upload-artifact@v4
with:
name: talos-image
path: _out/assets/nocloud-amd64.raw.xz
resolve_assets:
name: "Resolve assets"
runs-on: ubuntu-latest
if: contains(github.event.pull_request.labels.*.name, 'release')
outputs:
installer_id: ${{ steps.fetch_assets.outputs.installer_id }}
disk_id: ${{ steps.fetch_assets.outputs.disk_id }}
steps:
- name: Checkout code
if: contains(github.event.pull_request.labels.*.name, 'release')
uses: actions/checkout@v4
with:
fetch-depth: 0
fetch-tags: true
- name: Extract tag from PR branch (release PR)
if: contains(github.event.pull_request.labels.*.name, 'release')
id: get_tag
uses: actions/github-script@v7
with:
script: |
const branch = context.payload.pull_request.head.ref;
const m = branch.match(/^release-(\d+\.\d+\.\d+(?:[-\w\.]+)?)$/);
if (!m) {
core.setFailed(`❌ Branch '${branch}' does not match 'release-X.Y.Z[-suffix]'`);
return;
}
core.setOutput('tag', `v${m[1]}`);
- name: Find draft release & asset IDs (release PR)
if: contains(github.event.pull_request.labels.*.name, 'release')
id: fetch_assets
uses: actions/github-script@v7
with:
github-token: ${{ secrets.GH_PAT }}
script: |
const tag = '${{ steps.get_tag.outputs.tag }}';
const releases = await github.rest.repos.listReleases({
owner: context.repo.owner,
repo: context.repo.repo,
per_page: 100
});
const draft = releases.data.find(r => r.tag_name === tag && r.draft);
if (!draft) {
core.setFailed(`Draft release '${tag}' not found`);
return;
}
const find = (n) => draft.assets.find(a => a.name === n)?.id;
const installerId = find('cozystack-installer.yaml');
const diskId = find('nocloud-amd64.raw.xz');
if (!installerId || !diskId) {
core.setFailed('Required assets missing in draft release');
return;
}
core.setOutput('installer_id', installerId);
core.setOutput('disk_id', diskId);
prepare_env:
name: "Prepare environment"
test:
name: Test
runs-on: [self-hosted]
permissions:
contents: read
packages: read
needs: ["build", "resolve_assets"]
if: ${{ always() && (needs.build.result == 'success' || needs.resolve_assets.result == 'success') }}
needs: build
steps:
# ▸ Checkout and prepare the codebase
- name: Checkout code
uses: actions/checkout@v4
# ▸ Regular PR path download artefacts produced by the *build* job
- name: "Download Talos image (regular PR)"
if: "!contains(github.event.pull_request.labels.*.name, 'release')"
uses: actions/download-artifact@v4
with:
name: talos-image
path: _out/assets
- name: Download PR patch
if: "!contains(github.event.pull_request.labels.*.name, 'release')"
uses: actions/download-artifact@v4
with:
name: pr-patch
path: _out/assets
- name: Apply patch
if: "!contains(github.event.pull_request.labels.*.name, 'release')"
run: |
git apply _out/assets/pr.patch
# ▸ Release PR path fetch artefacts from the corresponding draft release
- name: Download assets from draft release (release PR)
if: contains(github.event.pull_request.labels.*.name, 'release')
run: |
mkdir -p _out/assets
curl -sSL -H "Authorization: token ${GH_PAT}" -H "Accept: application/octet-stream" \
-o _out/assets/nocloud-amd64.raw.xz \
"https://api.github.com/repos/${GITHUB_REPOSITORY}/releases/assets/${{ needs.resolve_assets.outputs.disk_id }}"
env:
GH_PAT: ${{ secrets.GH_PAT }}
- name: Set sandbox ID
run: echo "SANDBOX_NAME=cozy-e2e-sandbox-$(echo "${GITHUB_REPOSITORY}:${GITHUB_WORKFLOW}:${GITHUB_REF}" | sha256sum | cut -c1-10)" >> $GITHUB_ENV
# ▸ Start actual job steps
- name: Prepare workspace
run: |
rm -rf /tmp/$SANDBOX_NAME
cp -r ${{ github.workspace }} /tmp/$SANDBOX_NAME
- name: Prepare environment
run: |
cd /tmp/$SANDBOX_NAME
attempt=0
until make SANDBOX_NAME=$SANDBOX_NAME prepare-env; do
attempt=$((attempt + 1))
if [ $attempt -ge 3 ]; then
echo "❌ Attempt $attempt failed, exiting..."
exit 1
fi
echo "❌ Attempt $attempt failed, retrying..."
done
echo "✅ The task completed successfully after $attempt attempts"
install_cozystack:
name: "Install Cozystack"
runs-on: [self-hosted]
permissions:
contents: read
packages: read
needs: ["prepare_env", "resolve_assets"]
if: ${{ always() && needs.prepare_env.result == 'success' }}
steps:
- name: Prepare _out/assets directory
run: mkdir -p _out/assets
# ▸ Regular PR path download artefacts produced by the *build* job
- name: "Download installer (regular PR)"
if: "!contains(github.event.pull_request.labels.*.name, 'release')"
uses: actions/download-artifact@v4
with:
name: cozystack-installer
path: _out/assets
# ▸ Release PR path fetch artefacts from the corresponding draft release
- name: Download assets from draft release (release PR)
if: contains(github.event.pull_request.labels.*.name, 'release')
run: |
mkdir -p _out/assets
curl -sSL -H "Authorization: token ${GH_PAT}" -H "Accept: application/octet-stream" \
-o _out/assets/cozystack-installer.yaml \
"https://api.github.com/repos/${GITHUB_REPOSITORY}/releases/assets/${{ needs.resolve_assets.outputs.installer_id }}"
env:
GH_PAT: ${{ secrets.GH_PAT }}
# ▸ Start actual job steps
- name: Set sandbox ID
run: echo "SANDBOX_NAME=cozy-e2e-sandbox-$(echo "${GITHUB_REPOSITORY}:${GITHUB_WORKFLOW}:${GITHUB_REF}" | sha256sum | cut -c1-10)" >> $GITHUB_ENV
- name: Sync _out/assets directory
run: |
mkdir -p /tmp/$SANDBOX_NAME/_out/assets
mv _out/assets/* /tmp/$SANDBOX_NAME/_out/assets/
- name: Install Cozystack into sandbox
run: |
cd /tmp/$SANDBOX_NAME
attempt=0
until make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME install-cozystack; do
attempt=$((attempt + 1))
if [ $attempt -ge 3 ]; then
echo "❌ Attempt $attempt failed, exiting..."
exit 1
fi
echo "❌ Attempt $attempt failed, retrying..."
done
echo "✅ The task completed successfully after $attempt attempts."
- name: Run OpenAPI tests
run: |
cd /tmp/$SANDBOX_NAME
make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME test-openapi
detect_test_matrix:
name: "Detect e2e test matrix"
runs-on: ubuntu-latest
outputs:
matrix: ${{ steps.set.outputs.matrix }}
steps:
- uses: actions/checkout@v4
- id: set
run: |
apps=$(ls hack/e2e-apps/*.bats | cut -f3 -d/ | cut -f1 -d. | jq -R | jq -cs)
echo "matrix={\"app\":$apps}" >> "$GITHUB_OUTPUT"
test_apps:
strategy:
fail-fast: false
matrix: ${{ fromJson(needs.detect_test_matrix.outputs.matrix) }}
name: Test ${{ matrix.app }}
runs-on: [self-hosted]
needs: [install_cozystack,detect_test_matrix]
if: ${{ always() && (needs.install_cozystack.result == 'success' && needs.detect_test_matrix.result == 'success') }}
steps:
- name: Set sandbox ID
run: echo "SANDBOX_NAME=cozy-e2e-sandbox-$(echo "${GITHUB_REPOSITORY}:${GITHUB_WORKFLOW}:${GITHUB_REF}" | sha256sum | cut -c1-10)" >> $GITHUB_ENV
- name: E2E Apps
run: |
cd /tmp/$SANDBOX_NAME
attempt=0
until make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME test-apps-${{ matrix.app }}; do
attempt=$((attempt + 1))
if [ $attempt -ge 3 ]; then
echo "❌ Attempt $attempt failed, exiting..."
exit 1
fi
echo "❌ Attempt $attempt failed, retrying..."
done
echo "✅ The task completed successfully after $attempt attempts"
collect_debug_information:
name: Collect debug information
runs-on: [self-hosted]
needs: [test_apps]
if: ${{ always() }}
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set sandbox ID
run: echo "SANDBOX_NAME=cozy-e2e-sandbox-$(echo "${GITHUB_REPOSITORY}:${GITHUB_WORKFLOW}:${GITHUB_REF}" | sha256sum | cut -c1-10)" >> $GITHUB_ENV
- name: Collect report
run: |
cd /tmp/$SANDBOX_NAME
make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME collect-report
- name: Upload cozyreport.tgz
uses: actions/upload-artifact@v4
with:
name: cozyreport
path: /tmp/${{ env.SANDBOX_NAME }}/_out/cozyreport.tgz
- name: Collect images list
run: |
cd /tmp/$SANDBOX_NAME
make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME collect-images
- name: Upload image list
uses: actions/upload-artifact@v4
with:
name: image-list
path: /tmp/${{ env.SANDBOX_NAME }}/_out/images.txt
cleanup:
name: Tear down environment
runs-on: [self-hosted]
needs: [collect_debug_information]
if: ${{ always() && needs.test_apps.result == 'success' }}
# Never run when the PR carries the "release" label.
if: |
!contains(github.event.pull_request.labels.*.name, 'release')
steps:
- name: Checkout code
@@ -357,13 +64,11 @@ jobs:
fetch-depth: 0
fetch-tags: true
- name: Set sandbox ID
run: echo "SANDBOX_NAME=cozy-e2e-sandbox-$(echo "${GITHUB_REPOSITORY}:${GITHUB_WORKFLOW}:${GITHUB_REF}" | sha256sum | cut -c1-10)" >> $GITHUB_ENV
- name: Tear down sandbox
run: make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME delete
- name: Remove workspace
run: rm -rf /tmp/$SANDBOX_NAME
- name: Download artifacts
uses: actions/download-artifact@v4
with:
name: cozystack-artefacts
path: _out/assets/
- name: Test
run: make test


@@ -99,26 +99,18 @@ jobs:
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
registry: ghcr.io
env:
DOCKER_CONFIG: ${{ runner.temp }}/.docker
# Build project artifacts
- name: Build
if: steps.check_release.outputs.skip == 'false'
run: make build
env:
DOCKER_CONFIG: ${{ runner.temp }}/.docker
# Commit built artifacts
- name: Commit release artifacts
if: steps.check_release.outputs.skip == 'false'
env:
GH_PAT: ${{ secrets.GH_PAT }}
run: |
git config user.name "cozystack-bot"
git config user.email "217169706+cozystack-bot@users.noreply.github.com"
git remote set-url origin https://cozystack-bot:${GH_PAT}@github.com/${GITHUB_REPOSITORY}
git config --unset-all http.https://github.com/.extraheader || true
git config user.name "github-actions"
git config user.email "github-actions@github.com"
git add .
git commit -m "Prepare release ${GITHUB_REF#refs/tags/}" -s || echo "No changes to commit"
git push origin HEAD || true
@@ -149,35 +141,36 @@ jobs:
version: ${{ steps.tag.outputs.tag }} # A
compare-to: ${{ steps.latest_release.outputs.tag }} # B
# Create or reuse draft release
# Create or reuse DRAFT GitHub Release
- name: Create / reuse draft release
if: steps.check_release.outputs.skip == 'false'
id: release
uses: actions/github-script@v7
with:
script: |
const tag = '${{ steps.tag.outputs.tag }}';
const isRc = ${{ steps.tag.outputs.is_rc }};
const releases = await github.rest.repos.listReleases({
const tag = '${{ steps.tag.outputs.tag }}';
const isRc = ${{ steps.tag.outputs.is_rc }};
const outdated = '${{ steps.semver.outputs.comparison-result }}' === '<';
const makeLatest = outdated ? false : 'legacy';
const releases = await github.rest.repos.listReleases({
owner: context.repo.owner,
repo: context.repo.repo
});
let rel = releases.data.find(r => r.tag_name === tag);
let rel = releases.data.find(r => r.tag_name === tag);
if (!rel) {
rel = await github.rest.repos.createRelease({
owner: context.repo.owner,
repo: context.repo.repo,
tag_name: tag,
name: tag,
draft: true,
prerelease: isRc // no make_latest for drafts
tag_name: tag,
name: tag,
draft: true,
prerelease: isRc,
make_latest: makeLatest
});
console.log(`Draft release created for ${tag}`);
} else {
console.log(`Re-using existing release ${tag}`);
}
core.setOutput('upload_url', rel.upload_url);
# Build + upload assets (optional)
@@ -192,12 +185,7 @@ jobs:
# Create release-X.Y.Z branch and push (force-update)
- name: Create release branch
if: steps.check_release.outputs.skip == 'false'
env:
GH_PAT: ${{ secrets.GH_PAT }}
run: |
git config user.name "cozystack-bot"
git config user.email "217169706+cozystack-bot@users.noreply.github.com"
git remote set-url origin https://cozystack-bot:${GH_PAT}@github.com/${GITHUB_REPOSITORY}
BRANCH="release-${GITHUB_REF#refs/tags/v}"
git branch -f "$BRANCH"
git push -f origin "$BRANCH"
@@ -207,7 +195,6 @@ jobs:
if: steps.check_release.outputs.skip == 'false'
uses: actions/github-script@v7
with:
github-token: ${{ secrets.GH_PAT }}
script: |
const version = context.ref.replace('refs/tags/v', '');
const base = '${{ steps.get_base.outputs.branch }}';

.gitignore (vendored), 3 changed lines

@@ -1,5 +1,4 @@
_out
_repos
.git
.idea
.vscode
@@ -78,5 +77,3 @@ fabric.properties
.DS_Store
**/.DS_Store
tmp/


@@ -1,17 +1,24 @@
repos:
- repo: local
hooks:
- id: gen-versions-map
name: Generate versions map and check for changes
entry: sh -c 'make -C packages/apps check-version-map && make -C packages/extra check-version-map'
language: system
types: [file]
pass_filenames: false
description: Run the script and fail if it generates changes
- id: run-make-generate
name: Run 'make generate' in all app directories
entry: |
flock -x .git/pre-commit.lock sh -c '
for dir in ./packages/apps/*/ ./packages/extra/*/; do
/bin/bash -c '
for dir in ./packages/apps/*/; do
if [ -d "$dir" ]; then
echo "Running make generate in $dir"
make generate -C "$dir" || exit $?
(cd "$dir" && make generate)
fi
done
git diff --color=always | cat
'
language: system
language: script
files: ^.*$


@@ -30,6 +30,3 @@ This list is sorted in chronological order, based on the submission date.
| [Bootstack](https://bootstack.app/) | @mrkhachaturov | 2024-08-01| At Bootstack, we utilize a Kubernetes operator specifically designed to simplify and streamline cloud infrastructure creation.|
| [gohost](https://gohost.kz/) | @karabass_off | 2024-02-01 | Our company has been working in the market of Kazakhstan for more than 15 years, providing clients with a standard set of services: VPS/VDC, IaaS, shared hosting, etc. Now we are expanding the lineup by introducing Bare Metal Kubenetes cluster under Cozystack management. |
| [Urmanac](https://urmanac.com) | @kingdonb | 2024-12-04 | Urmanac is the future home of a hosting platform for the knowledge base of a community of personal server enthusiasts. We use Cozystack to provide support services for web sites hosted using both conventional deployments and on SpinKube, with WASM. |
| [Hidora](https://hikube.cloud) | @matthieu-robin | 2025-09-17 | Hidora is a Swiss cloud provider delivering managed services and infrastructure solutions through datacenters located in Switzerland, ensuring data sovereignty and reliability. Its sovereign cloud platform, Hikube, is designed to run workloads with high availability across multiple datacenters, providing enterprises with a secure and scalable foundation for their applications based on Cozystack. |
| [QOSI](https://qosi.kz) | @tabu-a | 2025-10-04 | QOSI is a non-profit organization driving open-source adoption and digital sovereignty across Kazakhstan and Central Asia. We use Cozystack as a platform for deploying sovereign, GPU-enabled clouds and educational environments under the National AI Program. Our goal is to accelerate the regions transition toward open, self-hosted cloud-native technologies |


@@ -1,38 +0,0 @@
# AI Agents Overview
This file provides structured guidance for AI coding assistants and agents
working with the **Cozystack** project.
## Agent Documentation
| Agent | Purpose |
|-------|---------|
| [overview.md](./docs/agents/overview.md) | Project structure and conventions |
| [contributing.md](./docs/agents/contributing.md) | Commits, pull requests, and git workflow |
| [changelog.md](./docs/agents/changelog.md) | Changelog generation instructions |
| [releasing.md](./docs/agents/releasing.md) | Release process and workflow |
## Project Overview
**Cozystack** is a Kubernetes-based platform for building cloud infrastructure with managed services (databases, VMs, K8s clusters), multi-tenancy, and GitOps delivery.
## Quick Reference
### Code Structure
- `packages/core/` - Core platform charts (installer, platform)
- `packages/system/` - System components (CSI, CNI, operators)
- `packages/apps/` - User-facing applications in catalog
- `packages/extra/` - Tenant-specific modules
- `cmd/`, `internal/`, `pkg/` - Go code
- `api/` - Kubernetes CRDs
### Conventions
- **Helm Charts**: Umbrella pattern, vendored upstream charts in `charts/`
- **Go Code**: Controller-runtime patterns, kubebuilder style
- **Git Commits**: `[component] Description` format with `--signoff`
### What NOT to Do
- ❌ Edit `/vendor/`, `zz_generated.*.go`, upstream charts directly
- ❌ Modify `go.mod`/`go.sum` manually (use `go get`)
- ❌ Force push to main/master
- ❌ Commit built artifacts from `_out`


@@ -1,22 +1,3 @@
# Code of Conduct
Cozystack follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
# Cozystack Vendor Neutrality Manifesto
Cozystack exists for the cloud-native community. We are committed to a project culture where no single company, product, or commercial agenda directs our roadmap, governance, brand, or releases. Our North Star is user value, technical excellence, and open collaboration under the CNCF umbrella.
## Our Commitments
- **Community-first:** Decisions prioritize the broader community over any vendor interest.
- **Open collaboration:** Ideas, discussions, and outcomes happen in public spaces; contributions are welcomed from all.
- **Merit over affiliation:** Proposals are evaluated on technical merit and user impact, not on who submits them.
- **Inclusive stewardship:** Leadership and maintenance are open to contributors who demonstrate sustained, constructive impact.
- **Technology choice:** We prefer open, pluggable designs that interoperate with multiple ecosystems and providers.
- **Neutral brand & voice:** Our name, logo, website, and documentation do not imply endorsement or preference for any vendor.
- **Transparent practices:** Funding acknowledgments, partnerships, and potential conflicts are communicated openly.
- **User trust:** Security handling, releases, and communications aim to be timely, transparent, and fair to all users.
By contributing to Cozystack, we affirm these principles and work together to keep the project open, welcoming, and vendor-neutral.
*— The Cozystack community*


@@ -1,151 +0,0 @@
# Contributor Ladder
* [Contributor Ladder](#contributor-ladder)
* [Community Participant](#community-participant)
* [Contributor](#contributor)
* [Reviewer](#reviewer)
* [Maintainer](#maintainer)
* [Inactivity](#inactivity)
* [Involuntary Removal](#involuntary-removal-or-demotion)
* [Stepping Down/Emeritus Process](#stepping-downemeritus-process)
* [Contact](#contact)
## Contributor Ladder
Hello! We are excited that you want to learn more about our project contributor ladder! This contributor ladder outlines the different contributor roles within the project, along with the responsibilities and privileges that come with them. Community members generally start at the first levels of the "ladder" and advance up it as their involvement in the project grows. Our project members are happy to help you advance along the contributor ladder.
Each of the contributor roles below is organized into lists of three types of things. "Responsibilities" are things that a contributor is expected to do. "Requirements" are qualifications a person needs to meet to be in that role, and "Privileges" are things contributors on that level are entitled to.
### Community Participant
Description: A Community Participant engages with the project and its community, contributing their time, thoughts, etc. Community participants are usually users who have stopped being anonymous and started being active in project discussions.
* Responsibilities:
* Must follow the [CNCF CoC](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)
* How users can get involved with the community:
* Participating in community discussions
* Helping other users
* Submitting bug reports
* Commenting on issues
* Trying out new releases
* Attending community events
### Contributor
Description: A Contributor contributes directly to the project and adds value to it. Contributions need not be code. People at the Contributor level may be new contributors, or they may only contribute occasionally.
* Responsibilities include:
* Follow the [CNCF CoC](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)
* Follow the project [contributing guide] (https://github.com/cozystack/cozystack/blob/main/CONTRIBUTING.md)
* Requirements (one or several of the below):
* Report and sometimes resolve issues
* Occasionally submit PRs
* Contribute to the documentation
* Show up at meetings, takes notes
* Answer questions from other community members
* Submit feedback on issues and PRs
* Test releases and patches and submit reviews
* Run or helps run events
* Promote the project in public
* Help run the project infrastructure
* Privileges:
* Invitations to contributor events
* Eligible to become a Maintainer
### Reviewer
Description: A Reviewer has responsibility for specific code, documentation, test, or other project areas. They are collectively responsible, with other Reviewers, for reviewing all changes to those areas and indicating whether those changes are ready to merge. They have a track record of contribution and review in the project.
Reviewers are responsible for a "specific area." This can be a specific code directory, driver, chapter of the docs, test job, event, or other clearly-defined project component that is smaller than an entire repository or subproject. Most often it is one or a set of directories in one or more Git repositories. The "specific area" below refers to this area of responsibility.
Reviewers have all the rights and responsibilities of a Contributor, plus:
* Responsibilities include:
* Continues to contribute regularly, as demonstrated by having at least 15 PRs a year, as demonstrated by [Cozystack devstats](https://cozystack.devstats.cncf.io).
* Following the reviewing guide
* Reviewing most Pull Requests against their specific areas of responsibility
* Reviewing at least 40 PRs per year
* Helping other contributors become reviewers
* Requirements:
* Must have successful contributions to the project, including at least one of the following:
* 10 accepted PRs,
* Reviewed 20 PRs,
* Resolved and closed 20 Issues,
* Become responsible for a key project management area,
* Or some equivalent combination or contribution
* Must have been contributing for at least 6 months
* Must be actively contributing to at least one project area
* Must have two sponsors who are also Reviewers or Maintainers, at least one of whom does not work for the same employer
* Has reviewed, or helped review, at least 20 Pull Requests
* Has analyzed and resolved test failures in their specific area
* Has demonstrated an in-depth knowledge of the specific area
* Commits to being responsible for that specific area
* Is supportive of new and occasional contributors and helps get useful PRs in shape to commit
* Additional privileges:
* Has GitHub or CI/CD rights to approve pull requests in specific directories
* Can recommend and review other contributors to become Reviewers
* May be assigned Issues and Reviews
* May give commands to CI/CD automation
* Can recommend other contributors to become Reviewers
The process of becoming a Reviewer is:
1. The contributor is nominated by opening a PR against the appropriate repository, which adds their GitHub username to the OWNERS file for one or more directories.
2. At least two members of the team that owns that repository or main directory, who are already Approvers, approve the PR.
### Maintainer
Description: Maintainers are very established contributors who are responsible for the entire project. As such, they have the ability to approve PRs against any area of the project, and are expected to participate in making decisions about the strategy and priorities of the project.
A Maintainer must meet the responsibilities and requirements of a Reviewer, plus:
* Responsibilities include:
* Reviewing at least 40 PRs per year, especially PRs that involve multiple parts of the project
* Mentoring new Reviewers
* Writing refactoring PRs
* Participating in CNCF maintainer activities
* Determining strategy and policy for the project
* Participating in, and leading, community meetings
* Requirements
* Experience as a Reviewer for at least 6 months
* Demonstrates a broad knowledge of the project across multiple areas
* Is able to exercise judgment for the good of the project, independent of their employer, friends, or team
* Mentors other contributors
* Can commit to spending at least 10 hours per month working on the project
* Additional privileges:
* Approve PRs to any area of the project
* Represent the project in public as a Maintainer
* Communicate with the CNCF on behalf of the project
* Have a vote in Maintainer decision-making meetings
Process of becoming a maintainer:
1. Any current Maintainer may nominate a current Reviewer to become a new Maintainer, by opening a PR against the root of the cozystack repository adding the nominee as an Approver in the [MAINTAINERS](https://github.com/cozystack/cozystack/blob/main/MAINTAINERS.md) file.
2. The nominee will add a comment to the PR testifying that they agree to all requirements of becoming a Maintainer.
3. A majority of the current Maintainers must then approve the PR.
## Inactivity
It is important for contributors to be and stay active to set an example and show commitment to the project. Inactivity is harmful to the project as it may lead to unexpected delays, contributor attrition, and a lost of trust in the project.
* Inactivity is measured by:
* Periods of no contributions for longer than 6 months
* Periods of no communication for longer than 3 months
* Consequences of being inactive include:
* Involuntary removal or demotion
* Being asked to move to Emeritus status
## Involuntary Removal or Demotion
Involuntary removal/demotion of a contributor happens when responsibilities and requirements aren't being met. This may include repeated patterns of inactivity, extended period of inactivity, a period of failing to meet the requirements of your role, and/or a violation of the Code of Conduct. This process is important because it protects the community and its deliverables while also opens up opportunities for new contributors to step in.
Involuntary removal or demotion is handled through a vote by a majority of the current Maintainers.
## Stepping Down/Emeritus Process
If and when contributors' commitment levels change, contributors can consider stepping down (moving down the contributor ladder) vs moving to emeritus status (completely stepping away from the project).
Contact the Maintainers about changing to Emeritus status, or reducing your contributor level.
## Contact
* For inquiries, please reach out to: @kvaps, @tym83


@@ -7,6 +7,6 @@
| Kingdon Barrett | [@kingdonb](https://github.com/kingdonb) | Urmanac | FluxCD and flux-operator |
| Timofei Larkin | [@lllamnyp](https://github.com/lllamnyp) | 3commas | Etcd-operator Lead |
| Artem Bortnikov | [@aobort](https://github.com/aobort) | Timescale | Etcd-operator Lead |
| Andrei Gumilev | [@chumkaska](https://github.com/chumkaska) | Ænix | Platform Documentation |
| Timur Tukaev | [@tym83](https://github.com/tym83) | Ænix | Cozystack Website, Marketing, Community Management |
| Kirill Klinchenkov | [@klinch0](https://github.com/klinch0) | Ænix | Core Maintainer |
| Nikita Bykov | [@nbykov0](https://github.com/nbykov0) | Ænix | Maintainer of ARM and stuff |


@@ -1,4 +1,4 @@
.PHONY: manifests repos assets unit-tests helm-unit-tests
.PHONY: manifests repos assets
build-deps:
@command -V find docker skopeo jq gh helm > /dev/null
@@ -9,31 +9,34 @@ build-deps:
build: build-deps
make -C packages/apps/http-cache image
make -C packages/apps/postgres image
make -C packages/apps/mysql image
make -C packages/apps/clickhouse image
make -C packages/apps/kubernetes image
make -C packages/extra/monitoring image
make -C packages/system/cozystack-api image
make -C packages/system/cozystack-controller image
make -C packages/system/lineage-controller-webhook image
make -C packages/system/cilium image
make -C packages/system/kubeovn image
make -C packages/system/kubeovn-webhook image
make -C packages/system/kubeovn-plunger image
make -C packages/system/dashboard image
make -C packages/system/metallb image
make -C packages/system/kamaji image
make -C packages/system/bucket image
make -C packages/system/objectstorage-controller image
make -C packages/core/testing image
make -C packages/core/installer image
make manifests
repos:
rm -rf _out
make -C packages/apps check-version-map
make -C packages/extra check-version-map
make -C packages/system repo
make -C packages/apps repo
make -C packages/extra repo
mkdir -p _out/logos
cp ./packages/apps/*/logos/*.svg ./packages/extra/*/logos/*.svg _out/logos/
manifests:
mkdir -p _out/assets
@@ -46,15 +49,6 @@ test:
make -C packages/core/testing apply
make -C packages/core/testing test
unit-tests: helm-unit-tests
helm-unit-tests:
hack/helm-unit-tests.sh
prepare-env:
make -C packages/core/testing apply
make -C packages/core/testing prepare-cluster
generate:
hack/update-codegen.sh


@@ -12,15 +12,11 @@
**Cozystack** is a free PaaS platform and framework for building clouds.
Cozystack is a [CNCF Sandbox Level Project](https://www.cncf.io/sandbox-projects/) that was originally built and sponsored by [Ænix](https://aenix.io/).
With Cozystack, you can transform a bunch of servers into an intelligent system with a simple REST API for spawning Kubernetes clusters,
Database-as-a-Service, virtual machines, load balancers, HTTP caching services, and other services with ease.
Use Cozystack to build your own cloud or provide a cost-effective development environment.
![Cozystack user interface](https://cozystack.io/img/screenshot-dark.png)
## Use-Cases
* [**Using Cozystack to build a public cloud**](https://cozystack.io/docs/guides/use-cases/public-cloud/)
@@ -32,6 +28,9 @@ You can use Cozystack as a platform to build a private cloud powered by Infrastr
* [**Using Cozystack as a Kubernetes distribution**](https://cozystack.io/docs/guides/use-cases/kubernetes-distribution/)
You can use Cozystack as a Kubernetes distribution for Bare Metal
## Screenshot
![Cozystack screenshot](https://cozystack.io/img/screenshot.png)
## Documentation
@@ -60,10 +59,7 @@ Commits are used to generate the changelog, and their author will be referenced
If you have **Feature Requests** please use the [Discussion's Feature Request section](https://github.com/cozystack/cozystack/discussions/categories/feature-requests).
## Community
You are welcome to join our [Telegram group](https://t.me/cozystack) and come to our weekly community meetings.
Add them to your [Google Calendar](https://calendar.google.com/calendar?cid=ZTQzZDIxZTVjOWI0NWE5NWYyOGM1ZDY0OWMyY2IxZTFmNDMzZTJlNjUzYjU2ZGJiZGE3NGNhMzA2ZjBkMGY2OEBncm91cC5jYWxlbmRhci5nb29nbGUuY29t) or [iCal](https://calendar.google.com/calendar/ical/e43d21e5c9b45a95f28c5d649c2cb1e1f433e2e653b56dbbda74ca306f0d0f68%40group.calendar.google.com/public/basic.ics) for convenience.
You are welcome to join our weekly community meetings (just add this events to your [Google Calendar](https://calendar.google.com/calendar?cid=ZTQzZDIxZTVjOWI0NWE5NWYyOGM1ZDY0OWMyY2IxZTFmNDMzZTJlNjUzYjU2ZGJiZGE3NGNhMzA2ZjBkMGY2OEBncm91cC5jYWxlbmRhci5nb29nbGUuY29t) or [iCal](https://calendar.google.com/calendar/ical/e43d21e5c9b45a95f28c5d649c2cb1e1f433e2e653b56dbbda74ca306f0d0f68%40group.calendar.google.com/public/basic.ics)) or [Telegram group](https://t.me/cozystack).
## License


@@ -1,5 +1,4 @@
API rule violation: list_type_missing,github.com/cozystack/cozystack/pkg/apis/apps/v1alpha1,ApplicationStatus,Conditions
API rule violation: list_type_missing,github.com/cozystack/cozystack/pkg/apis/core/v1alpha1,TenantModuleStatus,Conditions
API rule violation: names_match,k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1,JSONSchemaProps,Ref
API rule violation: names_match,k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1,JSONSchemaProps,Schema
API rule violation: names_match,k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1,JSONSchemaProps,XEmbeddedResource


@@ -1,255 +0,0 @@
// SPDX-License-Identifier: Apache-2.0
// Package v1alpha1 defines front.in-cloud.io API types.
//
// Group: dashboard.cozystack.io
// Version: v1alpha1
package v1alpha1
import (
v1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// -----------------------------------------------------------------------------
// Shared shapes
// -----------------------------------------------------------------------------
// CommonStatus is a generic Status block with Kubernetes conditions.
type CommonStatus struct {
// ObservedGeneration reflects the most recent generation observed by the controller.
// +optional
ObservedGeneration int64 `json:"observedGeneration,omitempty"`
// Conditions represent the latest available observations of an object's state.
// +optional
Conditions []metav1.Condition `json:"conditions,omitempty"`
}
// ArbitrarySpec holds schemaless user data and preserves unknown fields.
// We map the entire .spec to a single JSON payload to mirror the CRDs you provided.
// NOTE: Using apiextensionsv1.JSON avoids losing arbitrary structure during round-trips.
type ArbitrarySpec struct {
// +kubebuilder:validation:XPreserveUnknownFields
// +kubebuilder:pruning:PreserveUnknownFields
v1.JSON `json:",inline"`
}
// -----------------------------------------------------------------------------
// Sidebar
// -----------------------------------------------------------------------------
// +kubebuilder:object:root=true
// +kubebuilder:resource:path=sidebars,scope=Cluster
// +kubebuilder:subresource:status
type Sidebar struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ArbitrarySpec `json:"spec"`
Status CommonStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
type SidebarList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []Sidebar `json:"items"`
}
// -----------------------------------------------------------------------------
// CustomFormsPrefill (shortName: cfp)
// -----------------------------------------------------------------------------
// +kubebuilder:object:root=true
// +kubebuilder:resource:path=customformsprefills,scope=Cluster,shortName=cfp
// +kubebuilder:subresource:status
type CustomFormsPrefill struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ArbitrarySpec `json:"spec"`
Status CommonStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
type CustomFormsPrefillList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []CustomFormsPrefill `json:"items"`
}
// -----------------------------------------------------------------------------
// BreadcrumbInside
// -----------------------------------------------------------------------------
// +kubebuilder:object:root=true
// +kubebuilder:resource:path=breadcrumbsinside,scope=Cluster
// +kubebuilder:subresource:status
type BreadcrumbInside struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ArbitrarySpec `json:"spec"`
Status CommonStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
type BreadcrumbInsideList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []BreadcrumbInside `json:"items"`
}
// -----------------------------------------------------------------------------
// CustomFormsOverride (shortName: cfo)
// -----------------------------------------------------------------------------
// +kubebuilder:object:root=true
// +kubebuilder:resource:path=customformsoverrides,scope=Cluster,shortName=cfo
// +kubebuilder:subresource:status
type CustomFormsOverride struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ArbitrarySpec `json:"spec"`
Status CommonStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
type CustomFormsOverrideList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []CustomFormsOverride `json:"items"`
}
// -----------------------------------------------------------------------------
// TableUriMapping
// -----------------------------------------------------------------------------
// +kubebuilder:object:root=true
// +kubebuilder:resource:path=tableurimappings,scope=Cluster
// +kubebuilder:subresource:status
type TableUriMapping struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ArbitrarySpec `json:"spec"`
Status CommonStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
type TableUriMappingList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []TableUriMapping `json:"items"`
}
// -----------------------------------------------------------------------------
// Breadcrumb
// -----------------------------------------------------------------------------
// +kubebuilder:object:root=true
// +kubebuilder:resource:path=breadcrumbs,scope=Cluster
// +kubebuilder:subresource:status
type Breadcrumb struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ArbitrarySpec `json:"spec"`
Status CommonStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
type BreadcrumbList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []Breadcrumb `json:"items"`
}
// -----------------------------------------------------------------------------
// MarketplacePanel
// -----------------------------------------------------------------------------
// +kubebuilder:object:root=true
// +kubebuilder:resource:path=marketplacepanels,scope=Cluster
// +kubebuilder:subresource:status
type MarketplacePanel struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ArbitrarySpec `json:"spec"`
Status CommonStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
type MarketplacePanelList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []MarketplacePanel `json:"items"`
}
// -----------------------------------------------------------------------------
// Navigation
// -----------------------------------------------------------------------------
// +kubebuilder:object:root=true
// +kubebuilder:resource:path=navigations,scope=Cluster
// +kubebuilder:subresource:status
type Navigation struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ArbitrarySpec `json:"spec"`
Status CommonStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
type NavigationList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []Navigation `json:"items"`
}
// -----------------------------------------------------------------------------
// CustomColumnsOverride
// -----------------------------------------------------------------------------
// +kubebuilder:object:root=true
// +kubebuilder:resource:path=customcolumnsoverrides,scope=Cluster
// +kubebuilder:subresource:status
type CustomColumnsOverride struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ArbitrarySpec `json:"spec"`
Status CommonStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
type CustomColumnsOverrideList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []CustomColumnsOverride `json:"items"`
}
// -----------------------------------------------------------------------------
// Factory
// -----------------------------------------------------------------------------
// +kubebuilder:object:root=true
// +kubebuilder:resource:path=factories,scope=Cluster
// +kubebuilder:subresource:status
type Factory struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ArbitrarySpec `json:"spec"`
Status CommonStatus `json:"status,omitempty"`
}
// +kubebuilder:object:root=true
type FactoryList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []Factory `json:"items"`
}


@@ -1,75 +0,0 @@
/*
Copyright 2025.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Package v1alpha1 contains API Schema definitions for the v1alpha1 API group.
// +kubebuilder:object:generate=true
// +groupName=dashboard.cozystack.io
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
)
var (
// GroupVersion is group version used to register these objects.
GroupVersion = schema.GroupVersion{Group: "dashboard.cozystack.io", Version: "v1alpha1"}
// SchemeBuilder is used to add go types to the GroupVersionKind scheme.
SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)
// AddToScheme adds the types in this group-version to the given scheme.
AddToScheme = SchemeBuilder.AddToScheme
)
func addKnownTypes(scheme *runtime.Scheme) error {
scheme.AddKnownTypes(
GroupVersion,
&Sidebar{},
&SidebarList{},
&CustomFormsPrefill{},
&CustomFormsPrefillList{},
&BreadcrumbInside{},
&BreadcrumbInsideList{},
&CustomFormsOverride{},
&CustomFormsOverrideList{},
&TableUriMapping{},
&TableUriMappingList{},
&Breadcrumb{},
&BreadcrumbList{},
&MarketplacePanel{},
&MarketplacePanelList{},
&Navigation{},
&NavigationList{},
&CustomColumnsOverride{},
&CustomColumnsOverrideList{},
&Factory{},
&FactoryList{},
)
metav1.AddToGroupVersion(scheme, GroupVersion)
return nil
}


@@ -1,654 +0,0 @@
//go:build !ignore_autogenerated
/*
Copyright 2025 The Cozystack Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by controller-gen. DO NOT EDIT.
package v1alpha1
import (
"k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ArbitrarySpec) DeepCopyInto(out *ArbitrarySpec) {
*out = *in
in.JSON.DeepCopyInto(&out.JSON)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ArbitrarySpec.
func (in *ArbitrarySpec) DeepCopy() *ArbitrarySpec {
if in == nil {
return nil
}
out := new(ArbitrarySpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Breadcrumb) DeepCopyInto(out *Breadcrumb) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Breadcrumb.
func (in *Breadcrumb) DeepCopy() *Breadcrumb {
if in == nil {
return nil
}
out := new(Breadcrumb)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *Breadcrumb) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BreadcrumbInside) DeepCopyInto(out *BreadcrumbInside) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BreadcrumbInside.
func (in *BreadcrumbInside) DeepCopy() *BreadcrumbInside {
if in == nil {
return nil
}
out := new(BreadcrumbInside)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *BreadcrumbInside) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BreadcrumbInsideList) DeepCopyInto(out *BreadcrumbInsideList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]BreadcrumbInside, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BreadcrumbInsideList.
func (in *BreadcrumbInsideList) DeepCopy() *BreadcrumbInsideList {
if in == nil {
return nil
}
out := new(BreadcrumbInsideList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *BreadcrumbInsideList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *BreadcrumbList) DeepCopyInto(out *BreadcrumbList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]Breadcrumb, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new BreadcrumbList.
func (in *BreadcrumbList) DeepCopy() *BreadcrumbList {
if in == nil {
return nil
}
out := new(BreadcrumbList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *BreadcrumbList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CommonStatus) DeepCopyInto(out *CommonStatus) {
*out = *in
if in.Conditions != nil {
in, out := &in.Conditions, &out.Conditions
*out = make([]v1.Condition, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CommonStatus.
func (in *CommonStatus) DeepCopy() *CommonStatus {
if in == nil {
return nil
}
out := new(CommonStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CustomColumnsOverride) DeepCopyInto(out *CustomColumnsOverride) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CustomColumnsOverride.
func (in *CustomColumnsOverride) DeepCopy() *CustomColumnsOverride {
if in == nil {
return nil
}
out := new(CustomColumnsOverride)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *CustomColumnsOverride) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CustomColumnsOverrideList) DeepCopyInto(out *CustomColumnsOverrideList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]CustomColumnsOverride, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CustomColumnsOverrideList.
func (in *CustomColumnsOverrideList) DeepCopy() *CustomColumnsOverrideList {
if in == nil {
return nil
}
out := new(CustomColumnsOverrideList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *CustomColumnsOverrideList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CustomFormsOverride) DeepCopyInto(out *CustomFormsOverride) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CustomFormsOverride.
func (in *CustomFormsOverride) DeepCopy() *CustomFormsOverride {
if in == nil {
return nil
}
out := new(CustomFormsOverride)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *CustomFormsOverride) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CustomFormsOverrideList) DeepCopyInto(out *CustomFormsOverrideList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]CustomFormsOverride, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CustomFormsOverrideList.
func (in *CustomFormsOverrideList) DeepCopy() *CustomFormsOverrideList {
if in == nil {
return nil
}
out := new(CustomFormsOverrideList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *CustomFormsOverrideList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CustomFormsPrefill) DeepCopyInto(out *CustomFormsPrefill) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CustomFormsPrefill.
func (in *CustomFormsPrefill) DeepCopy() *CustomFormsPrefill {
if in == nil {
return nil
}
out := new(CustomFormsPrefill)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *CustomFormsPrefill) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CustomFormsPrefillList) DeepCopyInto(out *CustomFormsPrefillList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]CustomFormsPrefill, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CustomFormsPrefillList.
func (in *CustomFormsPrefillList) DeepCopy() *CustomFormsPrefillList {
if in == nil {
return nil
}
out := new(CustomFormsPrefillList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *CustomFormsPrefillList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Factory) DeepCopyInto(out *Factory) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Factory.
func (in *Factory) DeepCopy() *Factory {
if in == nil {
return nil
}
out := new(Factory)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *Factory) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *FactoryList) DeepCopyInto(out *FactoryList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]Factory, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new FactoryList.
func (in *FactoryList) DeepCopy() *FactoryList {
if in == nil {
return nil
}
out := new(FactoryList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *FactoryList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *MarketplacePanel) DeepCopyInto(out *MarketplacePanel) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new MarketplacePanel.
func (in *MarketplacePanel) DeepCopy() *MarketplacePanel {
if in == nil {
return nil
}
out := new(MarketplacePanel)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *MarketplacePanel) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *MarketplacePanelList) DeepCopyInto(out *MarketplacePanelList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]MarketplacePanel, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new MarketplacePanelList.
func (in *MarketplacePanelList) DeepCopy() *MarketplacePanelList {
if in == nil {
return nil
}
out := new(MarketplacePanelList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *MarketplacePanelList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Navigation) DeepCopyInto(out *Navigation) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Navigation.
func (in *Navigation) DeepCopy() *Navigation {
if in == nil {
return nil
}
out := new(Navigation)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *Navigation) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *NavigationList) DeepCopyInto(out *NavigationList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]Navigation, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NavigationList.
func (in *NavigationList) DeepCopy() *NavigationList {
if in == nil {
return nil
}
out := new(NavigationList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *NavigationList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Sidebar) DeepCopyInto(out *Sidebar) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Sidebar.
func (in *Sidebar) DeepCopy() *Sidebar {
if in == nil {
return nil
}
out := new(Sidebar)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *Sidebar) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SidebarList) DeepCopyInto(out *SidebarList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]Sidebar, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SidebarList.
func (in *SidebarList) DeepCopy() *SidebarList {
if in == nil {
return nil
}
out := new(SidebarList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *SidebarList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TableUriMapping) DeepCopyInto(out *TableUriMapping) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TableUriMapping.
func (in *TableUriMapping) DeepCopy() *TableUriMapping {
if in == nil {
return nil
}
out := new(TableUriMapping)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *TableUriMapping) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TableUriMappingList) DeepCopyInto(out *TableUriMappingList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]TableUriMapping, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TableUriMappingList.
func (in *TableUriMappingList) DeepCopy() *TableUriMappingList {
if in == nil {
return nil
}
out := new(TableUriMappingList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *TableUriMappingList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}

View File

@@ -1,193 +0,0 @@
/*
Copyright 2025.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// +kubebuilder:object:root=true
// +kubebuilder:resource:scope=Cluster
// CozystackResourceDefinition is the Schema for the cozystackresourcedefinitions API
type CozystackResourceDefinition struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec CozystackResourceDefinitionSpec `json:"spec,omitempty"`
}
// +kubebuilder:object:root=true
// CozystackResourceDefinitionList contains a list of CozystackResourceDefinitions
type CozystackResourceDefinitionList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []CozystackResourceDefinition `json:"items"`
}
func init() {
SchemeBuilder.Register(&CozystackResourceDefinition{}, &CozystackResourceDefinitionList{})
}
type CozystackResourceDefinitionSpec struct {
// Application configuration
Application CozystackResourceDefinitionApplication `json:"application"`
// Release configuration
Release CozystackResourceDefinitionRelease `json:"release"`
// Secret selectors
Secrets CozystackResourceDefinitionResources `json:"secrets,omitempty"`
// Service selectors
Services CozystackResourceDefinitionResources `json:"services,omitempty"`
// Ingress selectors
Ingresses CozystackResourceDefinitionResources `json:"ingresses,omitempty"`
// Dashboard configuration for this resource
Dashboard *CozystackResourceDefinitionDashboard `json:"dashboard,omitempty"`
}
type CozystackResourceDefinitionChart struct {
// Name of the Helm chart
Name string `json:"name"`
// Source reference for the Helm chart
SourceRef SourceRef `json:"sourceRef"`
}
type SourceRef struct {
// Kind of the source reference
// +kubebuilder:default:="HelmRepository"
Kind string `json:"kind"`
// Name of the source reference
Name string `json:"name"`
// Namespace of the source reference
// +kubebuilder:default:="cozy-public"
Namespace string `json:"namespace"`
}
type CozystackResourceDefinitionApplication struct {
// Kind of the application, used for UI and API
Kind string `json:"kind"`
// OpenAPI schema for the application, used for API validation
OpenAPISchema string `json:"openAPISchema"`
// Plural name of the application, used for UI and API
Plural string `json:"plural"`
// Singular name of the application, used for UI and API
Singular string `json:"singular"`
}
type CozystackResourceDefinitionRelease struct {
// Helm chart configuration
Chart CozystackResourceDefinitionChart `json:"chart"`
// Labels for the release
Labels map[string]string `json:"labels,omitempty"`
// Prefix for the release name
Prefix string `json:"prefix"`
}
// CozystackResourceDefinitionResourceSelector extends metav1.LabelSelector with resourceNames support.
// A resource matches this selector only if it satisfies ALL criteria:
// - Label selector conditions (matchExpressions and matchLabels)
// - AND has a name that matches one of the names in resourceNames (if specified)
//
// The resourceNames field supports Go templates with the following variables available:
// - {{ .name }}: The name of the managing application (from apps.cozystack.io/application.name)
// - {{ .kind }}: The lowercased kind of the managing application (from apps.cozystack.io/application.kind)
// - {{ .namespace }}: The namespace of the resource being processed
//
// Example YAML:
// secrets:
// include:
// - matchExpressions:
// - key: badlabel
// operator: DoesNotExist
// matchLabels:
// goodlabel: goodvalue
// resourceNames:
// - "{{ .name }}-secret"
// - "{{ .kind }}-{{ .name }}-tls"
// - "specificname"
type CozystackResourceDefinitionResourceSelector struct {
metav1.LabelSelector `json:",inline"`
// ResourceNames is a list of resource names to match
// If specified, the resource must have one of these exact names to match the selector
// +optional
ResourceNames []string `json:"resourceNames,omitempty"`
}
type CozystackResourceDefinitionResources struct {
// Exclude contains an array of resource selectors that target resources.
// If a resource matches the selector in any of the elements in the array, it is
// hidden from the user, regardless of the matches in the include array.
Exclude []*CozystackResourceDefinitionResourceSelector `json:"exclude,omitempty"`
// Include contains an array of resource selectors that target resources.
// If a resource matches the selector in any of the elements in the array, and
// matches none of the selectors in the exclude array that resource is marked
// as a tenant resource and is visible to users.
Include []*CozystackResourceDefinitionResourceSelector `json:"include,omitempty"`
}
// ---- Dashboard types ----
// DashboardTab enumerates allowed UI tabs.
// +kubebuilder:validation:Enum=workloads;ingresses;services;secrets;yaml
type DashboardTab string
const (
DashboardTabWorkloads DashboardTab = "workloads"
DashboardTabIngresses DashboardTab = "ingresses"
DashboardTabServices DashboardTab = "services"
DashboardTabSecrets DashboardTab = "secrets"
DashboardTabYAML DashboardTab = "yaml"
)
// CozystackResourceDefinitionDashboard describes how this resource appears in the UI.
type CozystackResourceDefinitionDashboard struct {
// Human-readable name shown in the UI (e.g., "Bucket")
Singular string `json:"singular"`
// Plural human-readable name (e.g., "Buckets")
Plural string `json:"plural"`
// Hard-coded name used in the UI (e.g., "bucket")
// +optional
Name string `json:"name,omitempty"`
// Whether this resource is singular (not a collection) in the UI
// +optional
SingularResource bool `json:"singularResource,omitempty"`
// Order weight for sorting resources in the UI (lower first)
// +optional
Weight int `json:"weight,omitempty"`
// Short description shown in catalogs or headers (e.g., "S3 compatible storage")
// +optional
Description string `json:"description,omitempty"`
// Icon encoded as a string (e.g., inline SVG, base64, or data URI)
// +optional
Icon string `json:"icon,omitempty"`
// Category used to group resources in the UI (e.g., "Storage", "Networking")
Category string `json:"category"`
// Free-form tags for search and filtering
// +optional
Tags []string `json:"tags,omitempty"`
// Which tabs to show for this resource
// +optional
Tabs []DashboardTab `json:"tabs,omitempty"`
// Order of keys in the YAML view
// +optional
KeysOrder [][]string `json:"keysOrder,omitempty"`
// Whether this resource is a module (tenant module)
// +optional
Module bool `json:"module,omitempty"`
}

View File

@@ -25,237 +25,6 @@ import (
runtime "k8s.io/apimachinery/pkg/runtime"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CozystackResourceDefinition) DeepCopyInto(out *CozystackResourceDefinition) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CozystackResourceDefinition.
func (in *CozystackResourceDefinition) DeepCopy() *CozystackResourceDefinition {
if in == nil {
return nil
}
out := new(CozystackResourceDefinition)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *CozystackResourceDefinition) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CozystackResourceDefinitionApplication) DeepCopyInto(out *CozystackResourceDefinitionApplication) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CozystackResourceDefinitionApplication.
func (in *CozystackResourceDefinitionApplication) DeepCopy() *CozystackResourceDefinitionApplication {
if in == nil {
return nil
}
out := new(CozystackResourceDefinitionApplication)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CozystackResourceDefinitionChart) DeepCopyInto(out *CozystackResourceDefinitionChart) {
*out = *in
out.SourceRef = in.SourceRef
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CozystackResourceDefinitionChart.
func (in *CozystackResourceDefinitionChart) DeepCopy() *CozystackResourceDefinitionChart {
if in == nil {
return nil
}
out := new(CozystackResourceDefinitionChart)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CozystackResourceDefinitionDashboard) DeepCopyInto(out *CozystackResourceDefinitionDashboard) {
*out = *in
if in.Tags != nil {
in, out := &in.Tags, &out.Tags
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.Tabs != nil {
in, out := &in.Tabs, &out.Tabs
*out = make([]DashboardTab, len(*in))
copy(*out, *in)
}
if in.KeysOrder != nil {
in, out := &in.KeysOrder, &out.KeysOrder
*out = make([][]string, len(*in))
for i := range *in {
if (*in)[i] != nil {
in, out := &(*in)[i], &(*out)[i]
*out = make([]string, len(*in))
copy(*out, *in)
}
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CozystackResourceDefinitionDashboard.
func (in *CozystackResourceDefinitionDashboard) DeepCopy() *CozystackResourceDefinitionDashboard {
if in == nil {
return nil
}
out := new(CozystackResourceDefinitionDashboard)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CozystackResourceDefinitionList) DeepCopyInto(out *CozystackResourceDefinitionList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]CozystackResourceDefinition, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CozystackResourceDefinitionList.
func (in *CozystackResourceDefinitionList) DeepCopy() *CozystackResourceDefinitionList {
if in == nil {
return nil
}
out := new(CozystackResourceDefinitionList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *CozystackResourceDefinitionList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CozystackResourceDefinitionRelease) DeepCopyInto(out *CozystackResourceDefinitionRelease) {
*out = *in
out.Chart = in.Chart
if in.Labels != nil {
in, out := &in.Labels, &out.Labels
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CozystackResourceDefinitionRelease.
func (in *CozystackResourceDefinitionRelease) DeepCopy() *CozystackResourceDefinitionRelease {
if in == nil {
return nil
}
out := new(CozystackResourceDefinitionRelease)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CozystackResourceDefinitionResourceSelector) DeepCopyInto(out *CozystackResourceDefinitionResourceSelector) {
*out = *in
in.LabelSelector.DeepCopyInto(&out.LabelSelector)
if in.ResourceNames != nil {
in, out := &in.ResourceNames, &out.ResourceNames
*out = make([]string, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CozystackResourceDefinitionResourceSelector.
func (in *CozystackResourceDefinitionResourceSelector) DeepCopy() *CozystackResourceDefinitionResourceSelector {
if in == nil {
return nil
}
out := new(CozystackResourceDefinitionResourceSelector)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CozystackResourceDefinitionResources) DeepCopyInto(out *CozystackResourceDefinitionResources) {
*out = *in
if in.Exclude != nil {
in, out := &in.Exclude, &out.Exclude
*out = make([]*CozystackResourceDefinitionResourceSelector, len(*in))
for i := range *in {
if (*in)[i] != nil {
in, out := &(*in)[i], &(*out)[i]
*out = new(CozystackResourceDefinitionResourceSelector)
(*in).DeepCopyInto(*out)
}
}
}
if in.Include != nil {
in, out := &in.Include, &out.Include
*out = make([]*CozystackResourceDefinitionResourceSelector, len(*in))
for i := range *in {
if (*in)[i] != nil {
in, out := &(*in)[i], &(*out)[i]
*out = new(CozystackResourceDefinitionResourceSelector)
(*in).DeepCopyInto(*out)
}
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CozystackResourceDefinitionResources.
func (in *CozystackResourceDefinitionResources) DeepCopy() *CozystackResourceDefinitionResources {
if in == nil {
return nil
}
out := new(CozystackResourceDefinitionResources)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CozystackResourceDefinitionSpec) DeepCopyInto(out *CozystackResourceDefinitionSpec) {
*out = *in
out.Application = in.Application
in.Release.DeepCopyInto(&out.Release)
in.Secrets.DeepCopyInto(&out.Secrets)
in.Services.DeepCopyInto(&out.Services)
in.Ingresses.DeepCopyInto(&out.Ingresses)
if in.Dashboard != nil {
in, out := &in.Dashboard, &out.Dashboard
*out = new(CozystackResourceDefinitionDashboard)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CozystackResourceDefinitionSpec.
func (in *CozystackResourceDefinitionSpec) DeepCopy() *CozystackResourceDefinitionSpec {
if in == nil {
return nil
}
out := new(CozystackResourceDefinitionSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in Selector) DeepCopyInto(out *Selector) {
{
@@ -277,21 +46,6 @@ func (in Selector) DeepCopy() Selector {
return *out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SourceRef) DeepCopyInto(out *SourceRef) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SourceRef.
func (in *SourceRef) DeepCopy() *SourceRef {
if in == nil {
return nil
}
out := new(SourceRef)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Workload) DeepCopyInto(out *Workload) {
*out = *in

View File

@@ -26,8 +26,8 @@ import (
func main() {
ctx := genericapiserver.SetupSignalContext()
options := server.NewCozyServerOptions(os.Stdout, os.Stderr)
cmd := server.NewCommandStartCozyServer(ctx, options)
options := server.NewAppsServerOptions(os.Stdout, os.Stderr)
cmd := server.NewCommandStartAppsServer(ctx, options)
code := cli.Run(cmd)
os.Exit(code)
}

View File

@@ -38,7 +38,6 @@ import (
cozystackiov1alpha1 "github.com/cozystack/cozystack/api/v1alpha1"
"github.com/cozystack/cozystack/internal/controller"
"github.com/cozystack/cozystack/internal/controller/dashboard"
"github.com/cozystack/cozystack/internal/telemetry"
helmv2 "github.com/fluxcd/helm-controller/api/v2"
@@ -54,7 +53,6 @@ func init() {
utilruntime.Must(clientgoscheme.AddToScheme(scheme))
utilruntime.Must(cozystackiov1alpha1.AddToScheme(scheme))
utilruntime.Must(dashboard.AddToScheme(scheme))
utilruntime.Must(helmv2.AddToScheme(scheme))
// +kubebuilder:scaffold:scheme
}
@@ -69,7 +67,6 @@ func main() {
var telemetryEndpoint string
var telemetryInterval string
var cozystackVersion string
var reconcileDeployment bool
var tlsOpts []func(*tls.Config)
flag.StringVar(&metricsAddr, "metrics-bind-address", "0", "The address the metrics endpoint binds to. "+
"Use :8443 for HTTPS or :8080 for HTTP, or leave as 0 to disable the metrics service.")
@@ -89,8 +86,6 @@ func main() {
"Interval between telemetry data collection (e.g. 15m, 1h)")
flag.StringVar(&cozystackVersion, "cozystack-version", "unknown",
"Version of Cozystack")
flag.BoolVar(&reconcileDeployment, "reconcile-deployment", false,
"If set, the Cozystack API server is assumed to run as a Deployment, else as a DaemonSet.")
opts := zap.Options{
Development: false,
}
@@ -155,12 +150,7 @@ func main() {
// this setup is not recommended for production.
}
// Configure rate limiting for the Kubernetes client
config := ctrl.GetConfigOrDie()
config.QPS = 50.0 // Increased from default 5.0
config.Burst = 100 // Increased from default 10
mgr, err := ctrl.NewManager(config, ctrl.Options{
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Scheme: scheme,
Metrics: metricsServerOptions,
WebhookServer: webhookServer,
@@ -204,37 +194,7 @@ func main() {
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "TenantHelmReconciler")
os.Exit(1)
}
if err = (&controller.CozystackConfigReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "CozystackConfigReconciler")
os.Exit(1)
}
cozyAPIKind := "DaemonSet"
if reconcileDeployment {
cozyAPIKind = "Deployment"
}
if err = (&controller.CozystackResourceDefinitionReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
CozystackAPIKind: cozyAPIKind,
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "CozystackResourceDefinitionReconciler")
os.Exit(1)
}
dashboardManager := &dashboard.Manager{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}
if err = dashboardManager.SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "DashboardReconciler")
setupLog.Error(err, "unable to create controller", "controller", "Workload")
os.Exit(1)
}
@@ -263,9 +223,7 @@ func main() {
}
setupLog.Info("starting manager")
ctx := ctrl.SetupSignalHandler()
dashboardManager.InitializeStaticResources(ctx)
if err := mgr.Start(ctx); err != nil {
if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
setupLog.Error(err, "problem running manager")
os.Exit(1)
}

View File

@@ -1,176 +0,0 @@
/*
Copyright 2025.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"crypto/tls"
"flag"
"os"
// Import all Kubernetes client auth plugins (e.g. Azure, GCP, OIDC, etc.)
// to ensure that exec-entrypoint and run can make use of them.
_ "k8s.io/client-go/plugin/pkg/client/auth"
"k8s.io/apimachinery/pkg/runtime"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/healthz"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
"sigs.k8s.io/controller-runtime/pkg/metrics"
"sigs.k8s.io/controller-runtime/pkg/metrics/filters"
metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
"sigs.k8s.io/controller-runtime/pkg/webhook"
"github.com/cozystack/cozystack/internal/controller/kubeovnplunger"
// +kubebuilder:scaffold:imports
)
var (
scheme = runtime.NewScheme()
setupLog = ctrl.Log.WithName("setup")
)
func init() {
utilruntime.Must(clientgoscheme.AddToScheme(scheme))
// +kubebuilder:scaffold:scheme
}
func main() {
var metricsAddr string
var enableLeaderElection bool
var probeAddr string
var kubeOVNNamespace string
var ovnCentralName string
var secureMetrics bool
var enableHTTP2 bool
var disableTelemetry bool
var tlsOpts []func(*tls.Config)
flag.StringVar(&metricsAddr, "metrics-bind-address", "0", "The address the metrics endpoint binds to. "+
"Use :8443 for HTTPS or :8080 for HTTP, or leave as 0 to disable the metrics service.")
flag.StringVar(&probeAddr, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
flag.StringVar(&kubeOVNNamespace, "kube-ovn-namespace", "cozy-kubeovn", "Namespace where kube-OVN is deployed.")
flag.StringVar(&ovnCentralName, "ovn-central-name", "ovn-central", "Ovn-central deployment name.")
flag.BoolVar(&enableLeaderElection, "leader-elect", false,
"Enable leader election for controller manager. "+
"Enabling this will ensure there is only one active controller manager.")
flag.BoolVar(&secureMetrics, "metrics-secure", true,
"If set, the metrics endpoint is served securely via HTTPS. Use --metrics-secure=false to use HTTP instead.")
flag.BoolVar(&enableHTTP2, "enable-http2", false,
"If set, HTTP/2 will be enabled for the metrics and webhook servers")
flag.BoolVar(&disableTelemetry, "disable-telemetry", false,
"Disable telemetry collection")
opts := zap.Options{
Development: false,
}
opts.BindFlags(flag.CommandLine)
flag.Parse()
ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))
// if the enable-http2 flag is false (the default), http/2 should be disabled
// due to its vulnerabilities. More specifically, disabling http/2 will
// prevent from being vulnerable to the HTTP/2 Stream Cancellation and
// Rapid Reset CVEs. For more information see:
// - https://github.com/advisories/GHSA-qppj-fm5r-hxr3
// - https://github.com/advisories/GHSA-4374-p667-p6c8
disableHTTP2 := func(c *tls.Config) {
setupLog.Info("disabling http/2")
c.NextProtos = []string{"http/1.1"}
}
if !enableHTTP2 {
tlsOpts = append(tlsOpts, disableHTTP2)
}
webhookServer := webhook.NewServer(webhook.Options{
TLSOpts: tlsOpts,
})
// Metrics endpoint is enabled in 'config/default/kustomization.yaml'. The Metrics options configure the server.
// More info:
// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.19.1/pkg/metrics/server
// - https://book.kubebuilder.io/reference/metrics.html
metricsServerOptions := metricsserver.Options{
BindAddress: metricsAddr,
SecureServing: secureMetrics,
TLSOpts: tlsOpts,
}
if secureMetrics {
// FilterProvider is used to protect the metrics endpoint with authn/authz.
// These configurations ensure that only authorized users and service accounts
// can access the metrics endpoint. The RBAC are configured in 'config/rbac/kustomization.yaml'. More info:
// https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.19.1/pkg/metrics/filters#WithAuthenticationAndAuthorization
metricsServerOptions.FilterProvider = filters.WithAuthenticationAndAuthorization
// TODO(user): If CertDir, CertName, and KeyName are not specified, controller-runtime will automatically
// generate self-signed certificates for the metrics server. While convenient for development and testing,
// this setup is not recommended for production.
}
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Scheme: scheme,
Metrics: metricsServerOptions,
WebhookServer: webhookServer,
HealthProbeBindAddress: probeAddr,
LeaderElection: enableLeaderElection,
LeaderElectionID: "29a0338b.cozystack.io",
// LeaderElectionReleaseOnCancel defines if the leader should step down voluntarily
// when the Manager ends. This requires the binary to immediately end when the
// Manager is stopped, otherwise, this setting is unsafe. Setting this significantly
// speeds up voluntary leader transitions as the new leader doesn't have to wait
// LeaseDuration time first.
//
// In the default scaffold provided, the program ends immediately after
// the manager stops, so it would be fine to enable this option. However,
// if you are doing, or intend to do, any operation such as performing cleanups
// after the manager stops, then its usage might be unsafe.
// LeaderElectionReleaseOnCancel: true,
})
if err != nil {
setupLog.Error(err, "unable to create manager")
os.Exit(1)
}
if err = (&kubeovnplunger.KubeOVNPlunger{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
Registry: metrics.Registry,
}).SetupWithManager(mgr, kubeOVNNamespace, ovnCentralName); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "KubeOVNPlunger")
os.Exit(1)
}
// +kubebuilder:scaffold:builder
if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
setupLog.Error(err, "unable to set up health check")
os.Exit(1)
}
if err := mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
setupLog.Error(err, "unable to set up ready check")
os.Exit(1)
}
setupLog.Info("starting manager")
if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
setupLog.Error(err, "problem running manager")
os.Exit(1)
}
}

View File

@@ -1,179 +0,0 @@
/*
Copyright 2025.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"crypto/tls"
"flag"
"os"
// Import all Kubernetes client auth plugins (e.g. Azure, GCP, OIDC, etc.)
// to ensure that exec-entrypoint and run can make use of them.
_ "k8s.io/client-go/plugin/pkg/client/auth"
"k8s.io/apimachinery/pkg/runtime"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/healthz"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
"sigs.k8s.io/controller-runtime/pkg/metrics/filters"
metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
"sigs.k8s.io/controller-runtime/pkg/webhook"
cozystackiov1alpha1 "github.com/cozystack/cozystack/api/v1alpha1"
lcw "github.com/cozystack/cozystack/internal/lineagecontrollerwebhook"
// +kubebuilder:scaffold:imports
)
var (
scheme = runtime.NewScheme()
setupLog = ctrl.Log.WithName("setup")
)
func init() {
utilruntime.Must(clientgoscheme.AddToScheme(scheme))
utilruntime.Must(cozystackiov1alpha1.AddToScheme(scheme))
// +kubebuilder:scaffold:scheme
}
func main() {
var metricsAddr string
var enableLeaderElection bool
var probeAddr string
var secureMetrics bool
var enableHTTP2 bool
var tlsOpts []func(*tls.Config)
flag.StringVar(&metricsAddr, "metrics-bind-address", "0", "The address the metrics endpoint binds to. "+
"Use :8443 for HTTPS or :8080 for HTTP, or leave as 0 to disable the metrics service.")
flag.StringVar(&probeAddr, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
flag.BoolVar(&enableLeaderElection, "leader-elect", false,
"Enable leader election for controller manager. "+
"Enabling this will ensure there is only one active controller manager.")
flag.BoolVar(&secureMetrics, "metrics-secure", true,
"If set, the metrics endpoint is served securely via HTTPS. Use --metrics-secure=false to use HTTP instead.")
flag.BoolVar(&enableHTTP2, "enable-http2", false,
"If set, HTTP/2 will be enabled for the metrics and webhook servers")
opts := zap.Options{
Development: false,
}
opts.BindFlags(flag.CommandLine)
flag.Parse()
ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))
// if the enable-http2 flag is false (the default), http/2 should be disabled
// due to its vulnerabilities. More specifically, disabling http/2 will
// prevent from being vulnerable to the HTTP/2 Stream Cancellation and
// Rapid Reset CVEs. For more information see:
// - https://github.com/advisories/GHSA-qppj-fm5r-hxr3
// - https://github.com/advisories/GHSA-4374-p667-p6c8
disableHTTP2 := func(c *tls.Config) {
setupLog.Info("disabling http/2")
c.NextProtos = []string{"http/1.1"}
}
if !enableHTTP2 {
tlsOpts = append(tlsOpts, disableHTTP2)
}
webhookServer := webhook.NewServer(webhook.Options{
TLSOpts: tlsOpts,
})
// Metrics endpoint is enabled in 'config/default/kustomization.yaml'. The Metrics options configure the server.
// More info:
// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.19.1/pkg/metrics/server
// - https://book.kubebuilder.io/reference/metrics.html
metricsServerOptions := metricsserver.Options{
BindAddress: metricsAddr,
SecureServing: secureMetrics,
TLSOpts: tlsOpts,
}
if secureMetrics {
// FilterProvider is used to protect the metrics endpoint with authn/authz.
// These configurations ensure that only authorized users and service accounts
// can access the metrics endpoint. The RBAC are configured in 'config/rbac/kustomization.yaml'. More info:
// https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.19.1/pkg/metrics/filters#WithAuthenticationAndAuthorization
metricsServerOptions.FilterProvider = filters.WithAuthenticationAndAuthorization
// TODO(user): If CertDir, CertName, and KeyName are not specified, controller-runtime will automatically
// generate self-signed certificates for the metrics server. While convenient for development and testing,
// this setup is not recommended for production.
}
// Configure rate limiting for the Kubernetes client
config := ctrl.GetConfigOrDie()
config.QPS = 50.0 // Increased from default 5.0
config.Burst = 100 // Increased from default 10
mgr, err := ctrl.NewManager(config, ctrl.Options{
Scheme: scheme,
Metrics: metricsServerOptions,
WebhookServer: webhookServer,
HealthProbeBindAddress: probeAddr,
LeaderElection: enableLeaderElection,
LeaderElectionID: "8796f12d.cozystack.io",
// LeaderElectionReleaseOnCancel defines if the leader should step down voluntarily
// when the Manager ends. This requires the binary to immediately end when the
// Manager is stopped, otherwise, this setting is unsafe. Setting this significantly
// speeds up voluntary leader transitions as the new leader doesn't have to wait
// LeaseDuration time first.
//
// In the default scaffold provided, the program ends immediately after
// the manager stops, so it would be fine to enable this option. However,
// if you are doing, or intend to do, any operation such as performing cleanups
// after the manager stops, then its usage might be unsafe.
// LeaderElectionReleaseOnCancel: true,
})
if err != nil {
setupLog.Error(err, "unable to start manager")
os.Exit(1)
}
lineageControllerWebhook := &lcw.LineageControllerWebhook{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}
if err := lineageControllerWebhook.SetupWithManagerAsController(mgr); err != nil {
setupLog.Error(err, "unable to setup controller", "controller", "LineageController")
os.Exit(1)
}
if err := lineageControllerWebhook.SetupWithManagerAsWebhook(mgr); err != nil {
setupLog.Error(err, "unable to setup webhook", "webhook", "LineageWebhook")
os.Exit(1)
}
// +kubebuilder:scaffold:builder
if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
setupLog.Error(err, "unable to set up health check")
os.Exit(1)
}
if err := mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
setupLog.Error(err, "unable to set up ready check")
os.Exit(1)
}
setupLog.Info("starting manager")
if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
setupLog.Error(err, "problem running manager")
os.Exit(1)
}
}

File diff suppressed because it is too large

View File

@@ -1,666 +0,0 @@
# Changelog Generation Instructions
This file contains detailed instructions for AI-powered IDE on how to generate changelogs for Cozystack releases.
## When to use these instructions
Follow these instructions when the user explicitly asks to generate a changelog.
## Required Tools
Before generating changelogs, ensure you have access to the `gh` (GitHub CLI) tool. It is used to fetch commit and PR author information and to correctly identify PR authors from commits and pull requests.
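For example, a quick sanity check before starting (a sketch; it assumes `gh` is installed and already authenticated against github.com):
```bash
# Verify that the GitHub CLI is available and authenticated
gh --version
gh auth status
```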
## Changelog Generation Process
When the user asks to generate a changelog, follow these steps in the specified order:
**CHECKLIST - All actions that must be completed:**
- [ ] Step 1: Update information from remote (git fetch)
- [ ] Step 2: Check current branch (must be main)
- [ ] Step 3: Determine release type and previous version (minor vs patch release)
- [ ] Step 4: Determine versions and analyze existing changelogs
- [ ] Step 5: Get the list of commits for the release period
- [ ] Step 6: Check additional repositories (website is REQUIRED, optional repos if tags exist)
- [ ] **MANDATORY**: Check website repository for documentation changes WITH authors and PR links via GitHub CLI
- [ ] **MANDATORY**: Check ALL optional repositories (talm, boot-to-talos, cozypkg, cozy-proxy) for tags during release period
- [ ] **MANDATORY**: For ALL commits from additional repos, get GitHub username via CLI, prioritizing PR author over commit author.
- [ ] Step 7: Analyze commits (extract PR numbers, authors, user impact)
- [ ] **MANDATORY**: For EVERY PR in main repo, get PR author via `gh pr view <PR_NUMBER> --json author --jq .author.login` (do NOT skip this step)
- [ ] **MANDATORY**: Extract PR numbers from commit messages, then use `gh pr view` for each PR to get the PR author. Do NOT use commit author. Only for commits without PR numbers (rare), fall back to `gh api repos/cozystack/cozystack/commits/<hash> --jq '.author.login'`
- [ ] Step 8: Form new changelog (structure, format, generate contributors list)
- [ ] Step 9: Verify completeness and save
### 1. Updating information from remote
```bash
git fetch --tags --force --prune
```
This is necessary to get up-to-date information about tags and commits from the remote repository.
### 2. Checking current branch
Make sure we are on the `main` branch:
```bash
git branch --show-current
```
### 3. Determining release type and previous version
**Important**: Determine if you're generating a changelog for a **minor release** (vX.Y.0) or a **patch release** (vX.Y.Z where Z > 0).
**For minor releases (vX.Y.0):**
- Each minor version lives and evolves in its own branch (`release-X.Y`)
- You MUST compare with the **previous minor version** (v(X-1).Y.0), not the last patch release
- This ensures you capture all changes from the entire minor version cycle, including all patch releases
- Example: For v0.38.0, compare with v0.37.0 (not v0.37.8)
- Run a separate pass to check the diff against the `.0` release of the previous minor version
**For patch releases (vX.Y.Z where Z > 0):**
- Compare with the previous patch version (vX.Y.(Z-1))
- Example: For v0.37.2, compare with v0.37.1 (see the shell sketch after this list)
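The rules above can be condensed into a small shell sketch (`NEW_VERSION` is a placeholder for the release you are preparing):
```bash
# Derive the comparison base for a given target version
NEW_VERSION="v0.38.0"   # or, e.g., v0.37.2 for a patch release
IFS=. read -r MAJOR MINOR PATCH <<< "${NEW_VERSION#v}"
if [ "$PATCH" -eq 0 ]; then
  # Minor release: compare with the previous minor version's .0 tag
  PREV_VERSION="v${MAJOR}.$((MINOR - 1)).0"
else
  # Patch release: compare with the previous patch tag
  PREV_VERSION="v${MAJOR}.${MINOR}.$((PATCH - 1))"
fi
echo "Comparing ${PREV_VERSION}..${NEW_VERSION}"
```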
### 4. Determining versions and analyzing existing changelogs
**Determine the last published version:**
1. Get the list of version tags:
```bash
git tag -l 'v[0-9]*.[0-9]*.[0-9]*' | sort -V
```
2. Get the last tag:
```bash
git tag -l 'v[0-9]*.[0-9]*.[0-9]*' | sort -V | tail -1
```
3. Compare tags with existing changelog files in `docs/changelogs/` to determine the last published version (the newest file `vX.Y.Z.md`), as sketched below
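A minimal cross-check of the newest tag against the newest published changelog file (assuming the `docs/changelogs/vX.Y.Z.md` naming used in this repository):
```bash
# Newest tag vs. newest published changelog, both sorted by version
LAST_TAG=$(git tag -l 'v[0-9]*.[0-9]*.[0-9]*' | sort -V | tail -1)
LAST_CHANGELOG=$(ls docs/changelogs/v*.md 2>/dev/null | sed 's|.*/||; s|\.md$||' | sort -V | tail -1)
echo "last tag: ${LAST_TAG}, last published changelog: ${LAST_CHANGELOG}"
```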
**Study existing changelog format:**
- Review recent changelog files to understand the format and structure
- Pay attention to:
- **Feature Highlights format** (for minor releases): Use `## Feature Highlights` with `### Feature Name` subsections containing detailed descriptions (2-4 paragraphs each). See v0.35.0 and v0.36.0 for examples.
- Section structure (Major Features and Improvements, Security, Fixes, Dependencies, etc.)
- PR link format (e.g., `[**@username**](https://github.com/username) in #1234`)
- Change description style
- Presence of Breaking changes sections, etc.
### 5. Getting the list of commits
**Important**: Determine if you're generating a changelog for a **minor release** (vX.Y.0) or a **patch release** (vX.Y.Z where Z > 0).
**For patch releases (vX.Y.Z where Z > 0):**
Get the list of commits starting from the previous patch version to HEAD:
**⚠️ CRITICAL: Do NOT use --first-parent flag! It will skip merge commits including backports!**
```bash
# Get all commits including merge commits (backports)
git log <previous_version>..HEAD --pretty=format:"%h - %s (%an, %ar)"
```
For example, if generating changelog for `v0.37.2`:
```bash
git log v0.37.1..HEAD --pretty=format:"%h - %s (%an, %ar)"
```
**⚠️ IMPORTANT: Check for backports:**
- Look for commits with "[Backport release-X.Y]" in the commit message
- For backport PRs, find the original PR number mentioned in the backport commit message or PR description (see the sketch after this list)
- Use the original PR author (not the backport PR author) when creating changelog entries
- Include both the original PR number and backport PR number in the changelog entry (e.g., `#1606, #1609`)
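One possible way to look up the original PR behind a backport (a sketch; it assumes the backport PR body references the original PR as `#NNNN`, and the PR number used here is hypothetical):
```bash
# List PR numbers referenced in a backport PR's title and body
BACKPORT_PR=1609   # hypothetical backport PR number
gh pr view "$BACKPORT_PR" --repo cozystack/cozystack --json title,body \
  --jq '.title, .body' | grep -oE '#[0-9]+' | sort -u
# Review the output: one of the referenced PRs is the original change
```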
**For minor releases (vX.Y.0):**
Minor releases must include **all changes** from patch releases of the previous minor version. Get commits from the previous minor release:
**⚠️ CRITICAL: Do NOT use --first-parent flag! It will skip merge commits including backports!**
```bash
# For v0.38.0, get all commits since v0.37.0 (including all patch releases v0.37.1, v0.37.2, etc.)
git log v<previous_minor_version>..HEAD --pretty=format:"%h - %s (%an, %ar)"
```
For example, if generating changelog for `v0.38.0`:
```bash
git log v0.37.0..HEAD --pretty=format:"%h - %s (%an, %ar)"
```
This will include all commits from v0.37.1, v0.37.2, v0.37.3, etc., up to v0.38.0.
**⚠️ IMPORTANT: Always check merge commits:**
- Merge commits may contain backports that need to be included
- Check all commits in the range, including merge commits
- For backports, always find and reference the original PR
### 6. Analyzing additional repositories
**⚠️ CRITICAL: This step is MANDATORY and must NOT be skipped!**
A Cozystack release may include changes from related repositories. Check these repositories and include their commits if tags were released during the release period:
**Required repositories:**
- **Documentation**: [https://github.com/cozystack/website](https://github.com/cozystack/website)
- **MANDATORY**: Always check this repository for documentation changes during the release period
- **MANDATORY**: Get GitHub username for EVERY commit. Extract PR number from commit message, then use `gh pr view <PR_NUMBER> --repo cozystack/website --json author --jq .author.login` to get PR author. Only if no PR number, fall back to `gh api repos/cozystack/website/commits/<hash> --jq '.author.login'`
**Optional repositories (MUST check ALL of them for tags during release period):**
- [https://github.com/cozystack/talm](https://github.com/cozystack/talm)
- [https://github.com/cozystack/boot-to-talos](https://github.com/cozystack/boot-to-talos)
- [https://github.com/cozystack/cozypkg](https://github.com/cozystack/cozypkg)
- [https://github.com/cozystack/cozy-proxy](https://github.com/cozystack/cozy-proxy)
**⚠️ IMPORTANT**: You MUST check ALL optional repositories for tags created during the release period. Do NOT skip this step even if you think there might not be any tags. Use the process below to verify.
**Process for each repository:**
1. **Get release period dates:**
```bash
# Get dates for the release period
cd /path/to/cozystack
RELEASE_START=$(git log -1 --format=%ai v<previous_version>)
RELEASE_END=$(git log -1 --format=%ai HEAD)
```
2. **Check for commits in website repository (always required):**
```bash
# Ensure website repository is cloned and up-to-date
mkdir -p _repos
if [ ! -d "_repos/website" ]; then
cd _repos && git clone https://github.com/cozystack/website.git && cd ..
fi
cd _repos/website
git fetch --all --tags --force
git checkout main 2>/dev/null || git checkout master
git pull
# Get commits between release dates (with some buffer)
git log --since="$RELEASE_START" --until="$RELEASE_END" --format="%H|%s|%an" | while IFS='|' read -r commit_hash subject author_name; do
# Extract PR number from commit message
PR_NUMBER=$(git log -1 --format="%B" "$commit_hash" | grep -oE '#[0-9]+' | head -1 | tr -d '#')
# ALWAYS use PR author if PR number found, not commit author
if [ -n "$PR_NUMBER" ]; then
GITHUB_USERNAME=$(gh pr view "$PR_NUMBER" --repo cozystack/website --json author --jq '.author.login // empty' 2>/dev/null)
echo "$commit_hash|$subject|$author_name|$GITHUB_USERNAME|cozystack/website#$PR_NUMBER"
else
# Only fall back to the commit author if no PR number is found (rare)
GITHUB_USERNAME=$(gh api repos/cozystack/website/commits/$commit_hash --jq '.author.login // empty')
echo "$commit_hash|$subject|$author_name|$GITHUB_USERNAME|cozystack/website@${commit_hash:0:7}"
fi
done
# Look for documentation updates, new pages, or significant content changes
# Include these in the "Documentation" section of the changelog WITH authors and PR links
```
3. **For optional repositories, check if tags exist during release period:**
**⚠️ MANDATORY: You MUST check ALL optional repositories (talm, boot-to-talos, cozypkg, cozy-proxy). Do NOT skip any repository!**
**Use the helper script:**
```bash
# Get release period dates
RELEASE_START=$(git log -1 --format=%ai v<previous_version>)
RELEASE_END=$(git log -1 --format=%ai HEAD)
# Run the script to check all optional repositories
./docs/changelogs/hack/check-optional-repos.sh "$RELEASE_START" "$RELEASE_END"
```
The script will:
- Check ALL optional repositories (talm, boot-to-talos, cozypkg, cozy-proxy)
- Look for tags created during the release period
- Get commits between tags (if tags exist) or by date range (if no tags)
- Extract PR numbers from commit messages
- For EVERY commit with PR number, get PR author via CLI: `gh pr view <PR_NUMBER> --repo cozystack/<repo> --json author --jq .author.login` (ALWAYS use PR author, not commit author)
- For commits without PR numbers (rare), fallback to: `gh api repos/cozystack/<repo>/commits/<hash> --jq '.author.login'`
- Output results in format: `commit_hash|subject|author_name|github_username|cozystack/repo#PR_NUMBER` or `cozystack/repo@commit_hash`
4. **Extract PR numbers and authors using GitHub CLI:**
- **ALWAYS use PR author, not commit author** for commits from additional repositories
- For each commit, first extract the PR number from the commit message (the `#123` pattern)
- If PR number found, use `gh pr view <PR_NUMBER> --repo cozystack/<repo> --json author --jq .author.login` to get PR author (the person who wrote the code)
- Only if no PR number found (rare), fallback to commit author: `gh api repos/cozystack/<repo>/commits/<hash> --jq '.author.login'`
- **Prefer PR numbers**: Use format `cozystack/website#123` if PR number found in commit message
- **Fallback to commit hash**: Use format `cozystack/website@abc1234` if no PR number
- **ALWAYS include author**: Every entry from additional repositories MUST include author in format `([**@username**](https://github.com/username) in cozystack/repo#123)`
- Determine user impact and categorize appropriately
- Format entries with repository prefix: `[website]`, `[talm]`, etc.
**Example entry format for additional repositories:**
```markdown
# If PR number found in commit message (REQUIRED format):
* **[website] Update installation documentation**: Improved installation guide with new examples ([**@username**](https://github.com/username) in cozystack/website#123).
# If no PR number (fallback, use commit hash):
* **[website] Update installation documentation**: Improved installation guide with new examples ([**@username**](https://github.com/username) in cozystack/website@abc1234).
# For optional repositories:
* **[talm] Add new feature**: Description of the change ([**@username**](https://github.com/username) in cozystack/talm#456).
```
**CRITICAL**:
- **ALWAYS include author** for every entry from additional repositories
- **ALWAYS include PR link or commit hash** for every entry
- Never add entries without author and PR/commit reference
- **ALWAYS use PR author, not commit author**: Extract PR number from commit message, then use `gh pr view <PR_NUMBER> --repo cozystack/<repo> --json author --jq .author.login` to get the PR author (the person who wrote the code)
- Only if no PR number found (rare), fallback to commit author: `gh api repos/cozystack/<repo>/commits/<hash> --jq '.author.login'`
- The commit author (especially for squash/merge commits) is usually the person who merged the PR, not the person who wrote the code
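As a convenience, the formatting step can be scripted. The following is only a rough sketch (not an existing repository script): `render_entry` is a hypothetical helper, it expects the pipe-delimited output produced in steps 2–3, and the `<describe user impact here>` placeholder still has to be written by hand based on the PR.
```bash
# Hypothetical helper: turn one line of pipe-delimited output
# (commit_hash|subject|author_name|github_username|reference) into a changelog entry.
render_entry() {
  local subject username reference repo
  subject=$(echo "$1"   | cut -d'|' -f2)
  username=$(echo "$1"  | cut -d'|' -f4)
  reference=$(echo "$1" | cut -d'|' -f5)
  # Derive the [repo] prefix from references like cozystack/website#123 or cozystack/website@abc1234
  repo=$(echo "$reference" | cut -d'/' -f2 | cut -d'#' -f1 | cut -d'@' -f1)
  echo "* **[${repo}] ${subject}**: <describe user impact here> ([**@${username}**](https://github.com/${username}) in ${reference})."
}
# Example usage:
render_entry "abc1234|Update installation docs|Jane Doe|janedoe|cozystack/website#123"
```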
### 7. Analyzing commits and PRs
**⚠️ CRITICAL: You MUST get the author from PR, not from commit! Always use `gh pr view` to get the PR author. Do NOT use commit author!**
**Get all PR numbers from commits:**
**⚠️ CRITICAL: Do NOT use --no-merges flag! It will skip merge commits including backports!**
```bash
# Extract all PR numbers from commit messages in the release range (including merge commits)
git log <previous_version>..<new_version> --format="%s%n%b" | grep -oE '#[0-9]+' | sort -u | tr -d '#'
```
**⚠️ IMPORTANT: Handle backports correctly:**
- Backport PRs have format: `[Backport release-X.Y] <original title> (#BACKPORT_PR_NUMBER)`
- The backport commit message or PR description usually mentions the original PR number
- For backport entries in changelog, use the original PR author (not the backport PR author)
- Include both original and backport PR numbers in the changelog entry (e.g., `#1606, #1609`)
- To find original PR from backport: Check the backport PR description or commit message for "Backport of #ORIGINAL_PR"
**For each PR number, get the author:**
**CRITICAL**: The commit author (especially for squash/merge commits) is usually the person who merged the PR (or a GitHub bot), NOT the person who wrote the code. **ALWAYS use the PR author**, not the commit author.
**⚠️ MANDATORY: ALWAYS use `gh pr view` to get the PR author. Do NOT use commit author!**
**ALWAYS use GitHub CLI** to get the PR author:
```bash
# Usage: Get PR author - MANDATORY for EVERY PR
# Loop through ALL PR numbers and get PR author (including backports)
git log <previous_version>..<new_version> --format="%s%n%b" | grep -oE '#[0-9]+' | sort -u | tr -d '#' | while read PR_NUMBER; do
# Check if this is a backport PR
BACKPORT_INFO=$(gh pr view "$PR_NUMBER" --json body --jq '.body' 2>/dev/null | grep -i "backport of #" || echo "")
if [ -n "$BACKPORT_INFO" ]; then
# Extract original PR number from backport description
ORIGINAL_PR=$(echo "$BACKPORT_INFO" | grep -ioE 'backport of #[0-9]+' | grep -oE '[0-9]+' | head -1)
if [ -n "$ORIGINAL_PR" ]; then
# Use original PR author
GITHUB_USERNAME=$(gh pr view "$ORIGINAL_PR" --json author --jq '.author.login // empty')
PR_TITLE=$(gh pr view "$ORIGINAL_PR" --json title --jq '.title // empty')
echo "$PR_NUMBER|$ORIGINAL_PR|$GITHUB_USERNAME|$PR_TITLE|BACKPORT"
else
# Fallback to backport PR author if original not found
GITHUB_USERNAME=$(gh pr view "$PR_NUMBER" --json author --jq '.author.login // empty')
PR_TITLE=$(gh pr view "$PR_NUMBER" --json title --jq '.title // empty')
echo "$PR_NUMBER||$GITHUB_USERNAME|$PR_TITLE|BACKPORT"
fi
else
# Regular PR
GITHUB_USERNAME=$(gh pr view "$PR_NUMBER" --json author --jq '.author.login // empty')
PR_TITLE=$(gh pr view "$PR_NUMBER" --json title --jq '.title // empty')
echo "$PR_NUMBER||$GITHUB_USERNAME|$PR_TITLE|REGULAR"
fi
done
```
**⚠️ IMPORTANT**: You must run this for EVERY PR in the release period. Do NOT skip any PRs or assume the GitHub username based on the git author name.
**CRITICAL**: Always use `gh pr view <PR_NUMBER> --json author --jq .author.login` to get the PR author. This correctly identifies the person who wrote the code, not the person who merged it (which is especially important for squash merges).
**Why this matters**: Using the wrong author in changelogs gives incorrect credit and can confuse contributors. The merge/squash commit is created by the person who clicks "Merge" in GitHub, not the PR author.
**For commits without PR numbers (rare):**
- Only if a commit has no PR number, fall back to commit author: `gh api repos/cozystack/cozystack/commits/<hash> --jq '.author.login'`
- But this should be very rare - most commits should have PR numbers
**Extract PR number from commit messages:**
- Check commit message subject (`%s`) and body (`%b`) for PR references: `#1234` or `(#1234)`
- **Primary method**: Extract from commit message format `(#PR_NUMBER)` or `in #PR_NUMBER` or `Merge pull request #1234`
- Use regex: `grep -oE '#[0-9]+'` to find all PR numbers
**⚠️ CRITICAL: Verify PR numbers match commit messages!**
- Always verify that the PR number in the changelog matches the PR number in the commit message
- Common mistake: Using wrong PR number (e.g., #1614 instead of #1617) when multiple similar commits exist
- To verify: Check the actual commit message: `git log <commit_hash> -1 --format="%s%n%b" | grep -oE '#[0-9]+'`
- If multiple PR numbers appear in a commit, use the one that matches the PR title/description
- For merge commits, check the merged branch commits, not just the merge commit message
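A quick way to catch mismatched PR numbers is to cross-check the draft changelog against the commit messages. This is only a sketch (paths and versions are placeholders), and its output needs manual review: original PRs of backports and PRs from additional repositories will legitimately show up as "not found in commits".
```bash
# Sketch: list PR numbers cited in the draft changelog that never appear
# in a commit message within the release range (candidates for typos).
CHANGELOG=docs/changelogs/v<version>.md
COMMIT_PRS=$(git log v<previous_version>..v<new_version> --format="%s%n%b" | grep -oE '#[0-9]+' | sort -u)
CITED_PRS=$(grep -oE '#[0-9]+' "$CHANGELOG" | sort -u)
# Lines printed here are cited in the changelog but not referenced by any commit:
comm -13 <(echo "$COMMIT_PRS") <(echo "$CITED_PRS")
```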
3. **Understand the change:**
```bash
# Get PR details (preferred method)
gh pr view <PR_NUMBER> --json title,body,url
# Or get commit details if no PR number
git show <commit_hash> --stat
git show <commit_hash>
```
- Review PR description and changed files
- Understand functionality added/changed/fixed
- **Determine user impact**: What can users do now? What problems are fixed? What improvements do users experience?
4. **For release branches (backports):**
- If commit is from `release-X.Y` branch, check if it's a backport
- Find original commit in `main` to get correct PR number:
```bash
git log origin/main --grep="<part of commit message>" --oneline
```
### 8. Forming a new changelog
Create a new changelog file in the format matching previous versions:
1. **Determine the release type:**
- **Minor release (vX.Y.0)** - use full format with **Feature Highlights** section. **Must include all changes from patch releases of the previous minor version** (e.g., v0.38.0 should include changes from v0.37.1, v0.37.2, v0.37.3, etc.)
- **Patch release (vX.Y.Z, where Z > 0)** - use more compact format, includes only changes since the previous patch release
**Feature Highlights format for minor releases:**
- Use section header: `## Feature Highlights`
- Include 3-6 major features as subsections with `### Feature Name` headers
- Each feature subsection should contain:
- **Detailed description** (2-4 paragraphs) explaining:
- What the feature is and what problem it solves
- How it works and what users can do with it
- How to use it (if applicable)
- Benefits and impact for users
- **Links to documentation** when available (use markdown links)
- **Code examples or configuration snippets** if helpful
- Focus on user value and practical implications, not just technical details
- Each feature should be substantial enough to warrant its own subsection
- Order features by importance/impact (most important first)
- Example format:
```markdown
## Feature Highlights
### Feature Name
Detailed description paragraph explaining what the feature is...
Another paragraph explaining how it works and what users can do...
Learn more in the [documentation](https://cozystack.io/docs/...).
```
**Important for minor releases**: After collecting all commits, **systematically verify** that all PRs from patch releases are included:
```bash
# Extract all PR numbers from patch release changelogs
grep -h "#[0-9]\+" docs/changelogs/v<previous_minor>.*.md | sort -u
# Extract all PR numbers from the new minor release changelog
grep -h "#[0-9]\+" docs/changelogs/v<new_minor>.0.md | sort -u
# Compare and identify missing PRs
# Ensure every PR from patch releases appears in the minor release changelog
```
2. **Structure changes by categories:**
**For minor releases (vX.Y.0):**
- **Feature Highlights** (required) - see format above
- **Major Features and Improvements** - detailed list of all major features and improvements
- **Improvements (minor)** - smaller improvements and enhancements
- **Bug fixes** - all bug fixes
- **Security** - security-related changes
- **Dependencies & version updates** - dependency updates
- **System Configuration** - system-level configuration changes
- **Development, Testing, and CI/CD** - development and testing improvements
- **Documentation** (include changes from website repository here - **MUST include authors and PR links for all entries**)
- **Breaking changes & upgrade notes** (if any)
- **Refactors & chores** (if any)
**For patch releases (vX.Y.Z where Z > 0):**
- **Features and Improvements** - new features and improvements
- **Fixes** - bug fixes
- **Security** - security-related changes
- **Dependencies** - dependency updates
- **System Configuration** - system-level configuration changes
- **Development, Testing, and CI/CD** - development and testing improvements
- **Documentation** (include changes from website repository here - **MUST include authors and PR links for all entries**)
- **Migration and Upgrades** (if applicable)
**Note**: When including changes from additional repositories, group them logically with main repository changes, or create separate subsections if there are many changes from a specific repository.
3. **Entry format:**
- Use the format: `* **Brief description**: detailed description ([**@username**](https://github.com/username) in #PR_NUMBER)`
- **CRITICAL - Get authorship correctly**:
- **ALWAYS use PR author, not commit author**: Extract PR number from commit message, then use `gh pr view` to get the PR author. The commit author (especially for squash/merge commits) is usually the person who merged the PR (or GitHub bot), NOT the person who wrote the code.
```bash
# Get PR author from GitHub CLI (correct method)
# Step 1: Extract PR number from commit message
PR_NUMBER=$(git log <commit_hash> -1 --format="%s%n%b" | grep -oE '#[0-9]+' | head -1 | tr -d '#')
# Step 2: Get PR author (the person who wrote the code)
if [ -n "$PR_NUMBER" ]; then
GITHUB_USERNAME=$(gh pr view "$PR_NUMBER" --json author --jq '.author.login')
else
# Only fallback to commit author if no PR number found (rare)
GITHUB_USERNAME=$(gh api repos/cozystack/cozystack/commits/<commit_hash> --jq '.author.login')
fi
```
**Example**: For PR #1507, the squash commit has author "kvaps" (who merged), but the PR author is "lllamnyp" (who wrote the code). Using `gh pr view 1507 --json author --jq .author.login` correctly returns "lllamnyp".
- **For regular commits**: Use the commit author directly:
```bash
git log <commit_hash> -1 --format="%an|%ae"
```
- **Validation**: Before adding to changelog, verify the author by checking:
- For merge commits: Compare merge commit author vs PR author (they should be different)
- Check existing changelogs for author name to GitHub username mappings
- Verify with: `git log <merge_commit>^1..<merge_commit>^2 --format="%an" --no-merges`
- **Map author name to GitHub username**: Check existing changelogs for author name mappings, or extract from PR links in commit messages
- **Always include user impact**: Each entry must explain how the change affects users
- For new features: explain what users can now do
- For bug fixes: explain what problem is solved for users
- For improvements: explain what users will experience better
- For breaking changes: clearly state what users need to do
- Group related changes
- Use bold font for important components/modules
- Focus on user value, not just technical details
4. **Add a link to the full changelog:**
**For patch releases (vX.Y.Z where Z > 0):**
```markdown
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v<previous_patch_version>...v<new_version>
```
Example: For v0.37.2, use `v0.37.1...v0.37.2`
**For minor releases (vX.Y.0):**
```markdown
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v<previous_minor_version>...v<new_version>
```
Example: For v0.38.0, use `v0.37.0...v0.38.0` (NOT `v0.37.8...v0.38.0`)
**Important**: Minor releases must reference the previous minor release (vX.Y.0), not the last patch release, to include all changes from the entire minor version cycle.
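If helpful, the comparison base can be derived mechanically. This sketch assumes tags strictly follow the `vX.Y.Z` pattern and that no minor version was skipped; always double-check the result against the actual tag list.
```bash
# Sketch: derive the "Full Changelog" comparison base from the new version.
NEW_VERSION=v0.38.0   # example; set to the release being prepared
IFS=. read -r MAJOR MINOR PATCH <<< "${NEW_VERSION#v}"
if [ "$PATCH" -eq 0 ]; then
  # Minor release: compare against the previous minor release
  BASE="v${MAJOR}.$((MINOR - 1)).0"
else
  # Patch release: compare against the previous patch release
  BASE="v${MAJOR}.${MINOR}.$((PATCH - 1))"
fi
echo "**Full Changelog**: https://github.com/cozystack/cozystack/compare/${BASE}...${NEW_VERSION}"
```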
5. **Generate contributors list:**
**⚠️ SIMPLIFIED APPROACH: Extract contributors from the generated changelog itself!**
Since you've already generated the changelog with all PR authors correctly identified, simply extract GitHub usernames from the changelog entries:
```bash
# Extract all GitHub usernames from the current release changelog
# This method is simpler and more reliable than extracting from git history
# Works for both patch and minor releases: extract from the current changelog file
grep -oE '\[@[a-zA-Z0-9_-]+\]' docs/changelogs/v<version>.md | \
  sed 's/\[@/@/' | sed 's/\]//' | \
  sort -u
```
**Get all previous contributors (to identify new ones):**
```bash
# Extract GitHub usernames from all previous changelogs,
# excluding the changelog currently being prepared
grep -hE '\[@[a-zA-Z0-9_-]+\]' $(ls docs/changelogs/v*.md | grep -v "v<version>.md") | \
grep -oE '@[a-zA-Z0-9_-]+' | \
sort -u > /tmp/previous_contributors.txt
```
**Identify new contributors (first-time contributors):**
```bash
# Get current release contributors from the changelog
grep -oE '@[a-zA-Z0-9_-]+' docs/changelogs/v<version>.md | \
sort -u > /tmp/current_contributors.txt
# Get all previous contributors (excluding the changelog currently being prepared)
grep -hE '@[a-zA-Z0-9_-]+' $(ls docs/changelogs/v*.md | grep -v "v<version>.md") | \
grep -oE '@[a-zA-Z0-9_-]+' | \
sort -u > /tmp/all_previous_contributors.txt
# Find new contributors (those in current but not in previous)
comm -23 <(sort /tmp/current_contributors.txt) <(sort /tmp/all_previous_contributors.txt)
```
**Why this approach is better:**
- ✅ Uses the already-verified PR authors from the changelog (no need to query GitHub API again)
- ✅ Automatically handles backports correctly (original PR authors are already in the changelog)
- ✅ Simpler and faster (no git log parsing or API calls)
- ✅ More reliable (matches exactly what's in the changelog)
- ✅ Works for both patch and minor releases
**Add contributors section to changelog:**
Place the contributors section at the end of the changelog, before the "Full Changelog" link:
```markdown
## Contributors
We'd like to thank all contributors who made this release possible:
* [**@username1**](https://github.com/username1)
* [**@username2**](https://github.com/username2)
* [**@username3**](https://github.com/username3)
* ...
### New Contributors
We're excited to welcome our first-time contributors:
* [**@newuser1**](https://github.com/newuser1) - First contribution!
* [**@newuser2**](https://github.com/newuser2) - First contribution!
```
**Formatting guidelines:**
- List contributors in alphabetical order by GitHub username
- Use the format: `* [**@username**](https://github.com/username)`
- For new contributors, add " - First contribution!" note
- If GitHub username cannot be determined, you can skip that contributor or use their git author name
**When to include:**
- **For patch releases**: Contributors section is optional, but can be included for significant releases
- **For minor releases (vX.Y.0)**: Contributors section is required - you must generate and include the contributors list
- Always verify GitHub usernames by checking commit messages, PR links in changelog entries, or by examining PR details
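The section itself can be rendered from the username lists collected above. A rough sketch, assuming `/tmp/current_contributors.txt` and `/tmp/all_previous_contributors.txt` were produced by the commands earlier in this step (one `@username` per line):
```bash
# Sketch: print the Contributors section in the expected markdown format.
echo "## Contributors"
echo ""
echo "We'd like to thank all contributors who made this release possible:"
echo ""
sort -u /tmp/current_contributors.txt | while read -r handle; do
  user="${handle#@}"
  echo "* [**@${user}**](https://github.com/${user})"
done
echo ""
echo "### New Contributors"
echo ""
echo "We're excited to welcome our first-time contributors:"
echo ""
# New contributors: present in the current release but not in any previous changelog
comm -23 <(sort -u /tmp/current_contributors.txt) <(sort -u /tmp/all_previous_contributors.txt) | \
  while read -r handle; do
    user="${handle#@}"
    echo "* [**@${user}**](https://github.com/${user}) - First contribution!"
  done
```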
6. **Add a comment with a link to the GitHub release:**
```markdown
<!--
https://github.com/cozystack/cozystack/releases/tag/v<new_version>
-->
```
### 9. Verification and saving
**Before saving, verify completeness:**
**For ALL releases:**
- [ ] Step 5 completed: **ALL commits included** (including merge commits and backports) - do not skip any commits
- [ ] Step 5 completed: **Backports identified and handled correctly** - original PR author used, both original and backport PR numbers included
- [ ] Step 6 completed: Website repository checked for documentation changes WITH authors and PR links via GitHub CLI
- [ ] Step 6 completed: **ALL** optional repositories (talm, boot-to-talos, cozypkg, cozy-proxy) checked for tags during release period
- [ ] Step 6 completed: For ALL commits from additional repos, GitHub username obtained via GitHub CLI (not skipped). For commits with PR numbers, PR author used via `gh pr view` (not commit author)
- [ ] Step 7 completed: For EVERY PR in main repo (including backports), PR author obtained via `gh pr view <PR_NUMBER> --json author --jq .author.login` (not skipped or assumed). Commit author NOT used - always use PR author
- [ ] Step 7 completed: **Backports verified** - for each backport PR, original PR found and original PR author used in changelog
- [ ] Step 8 completed: Contributors list generated
- [ ] All commits from main repository included (including merge commits)
- [ ] User impact described for each change
- [ ] Format matches existing changelogs
**For patch releases:**
- [ ] All commits from the release period are included (including merge commits with backports)
- [ ] PR numbers match commit messages
- [ ] Backports are properly identified and linked to original PRs
**For minor releases (vX.Y.0):**
- [ ] All changes from patch releases (vX.Y.1, vX.Y.2, etc.) are included
- [ ] Contributors section is present and complete
- [ ] Full Changelog link references previous minor version (vX.Y.0), not last patch
- [ ] Verify all PRs from patch releases are included:
```bash
# Extract and compare PR numbers
PATCH_PRS=$(grep -hE "#[0-9]+" docs/changelogs/v<previous_minor>.*.md | grep -oE "#[0-9]+" | sort -u)
MINOR_PRS=$(grep -hE "#[0-9]+" docs/changelogs/v<new_minor>.0.md | grep -oE "#[0-9]+" | sort -u)
MISSING=$(comm -23 <(echo "$PATCH_PRS") <(echo "$MINOR_PRS"))
if [ -n "$MISSING" ]; then
echo "Missing PRs from patch releases:"
echo "$MISSING"
# For each missing PR, check if it's a backport and verify change is included by description
fi
```
**Only proceed to save after all checkboxes are verified!**
**Save the changelog:**
Save the changelog to file `docs/changelogs/v<version>.md` according to the version for which the changelog is being generated.
### Important notes
- **After fetch with --force** local tags are up-to-date, use them for work
- **For release branches** always check original commits in `main` to get correct PR numbers
- **Preserve the format** of existing changelog files
- **Group related changes** logically
- **Be accurate** in describing changes, based on actual commit diffs
- **Check for PR numbers** and verify authors via `gh pr view` (PR author, not commit author)
- **CRITICAL - Get authorship from PR, not from commit**:
- **ALWAYS use PR author**: Extract PR number from commit message, then use `gh pr view <PR_NUMBER> --json author --jq .author.login` to get the PR author
- Do NOT use commit author - the commit author (especially for squash/merge commits) is usually the person who merged the PR, not the person who wrote the code
- For commits without PR numbers (rare), fall back to commit author: `gh api repos/cozystack/cozystack/commits/<commit_hash> --jq '.author.login'`
- **Workflow**: Extract PR numbers from commits → Use `gh pr view` for each PR → Get PR author (the person who wrote the code)
- Example: For PR #1507, the commit author is `@kvaps` (who merged), but `gh pr view 1507 --json author --jq .author.login` correctly returns `@lllamnyp` (who wrote the code)
- Check existing changelogs for author name to GitHub username mappings
- **Validation**: Before adding to changelog, always verify the author using `gh pr view` - never use commit author for PRs
- **MANDATORY**: Always describe user impact: Every changelog entry must explain how the change affects end users, not just what was changed technically. Focus on user value and practical implications.
**Required steps:**
- **Additional repositories (Step 6) - MANDATORY**:
- **⚠️ CRITICAL**: Always check the **website** repository for documentation changes during the release period. This is a required step and MUST NOT be skipped.
- **⚠️ CRITICAL**: You MUST check ALL optional repositories (talm, boot-to-talos, cozypkg, cozy-proxy) for tags during the release period. Do NOT skip any repository even if you think there might not be tags.
- **CRITICAL**: For ALL entries from additional repositories (website and optional), you MUST:
- **MANDATORY**: Extract PR number from commit message first
- **MANDATORY**: For commits with PR numbers, ALWAYS use `gh pr view <PR_NUMBER> --repo cozystack/<repo> --json author --jq .author.login` to get PR author (not commit author)
- **MANDATORY**: Only for commits without PR numbers (rare), fallback to: `gh api repos/cozystack/<repo>/commits/<hash> --jq '.author.login'`
- **MANDATORY**: Do NOT skip getting GitHub username via CLI - do this for EVERY commit
- **MANDATORY**: Do NOT use commit author for PRs - always use PR author
- Include PR link or commit hash reference
- Format: `* **[repo] Description**: details ([**@username**](https://github.com/username) in cozystack/repo#123)`
- For **optional repositories** (talm, boot-to-talos, cozypkg, cozy-proxy), you MUST check ALL of them for tags during the release period. Use the loop provided in Step 6 to check each repository systematically.
- When including changes from additional repositories, use the format: `[repo-name] Description` and link to the repository's PR/issue if available
- **Prefer PR numbers over commit hashes**: For commits from additional repositories, extract PR number from commit message using GitHub API. Use PR format (`cozystack/website#123`) instead of commit hash (`cozystack/website@abc1234`) when available
- **Never add entries without author and PR/commit reference**: Every entry from additional repositories must have both author and link
- Group changes from additional repositories with main repository changes, or create separate subsections if there are many changes from a specific repository
- **PR author verification (Step 7) - MANDATORY**:
- **⚠️ CRITICAL**: You MUST get the author from PR using `gh pr view`, NOT from commit
- **⚠️ CRITICAL**: Extract PR numbers from commit messages, then use `gh pr view <PR_NUMBER> --json author --jq .author.login` for each PR
- **⚠️ CRITICAL**: Do NOT use commit author - commit author is usually the person who merged, not the person who wrote the code
- **⚠️ CRITICAL**: Do NOT skip this step for any PR, even if the author seems obvious
- For commits without PR numbers (rare), fall back to: `gh api repos/cozystack/cozystack/commits/<hash> --jq '.author.login'`
- This ensures correct attribution and prevents errors in changelog entries (especially important for squash/merge commits)
- **Contributors list (Step 8)**:
- For minor releases (vX.Y.0): You must generate a list of all contributors and identify first-time contributors.
- For patch releases: Contributors section is optional, but recommended for significant releases
- Extract GitHub usernames from PR links in commit messages or changelog entries
- This helps recognize community contributions and welcome new contributors
- **Minor releases (vX.Y.0)**:
- Must include **all changes** from patch releases of the previous minor version (e.g., v0.38.0 includes all changes from v0.37.1, v0.37.2, v0.37.3, etc.)
- The "Full Changelog" link must reference the previous minor release (v0.37.0...v0.38.0), NOT the last patch release (v0.37.8...v0.38.0)
- This ensures users can see the complete set of changes for the entire minor version cycle
- **Verification step**: After creating the changelog, extract all PR numbers from patch release changelogs and verify they all appear in the minor release changelog to prevent missing entries
- **Backport handling**: Patch releases may contain backports with different PR numbers (e.g., #1624 in patch release vs #1622 in main). For minor releases, use original PR numbers from main when available, but verify that all changes from patch releases are included regardless of PR number differences
- **Content verification**: Don't rely solely on PR number matching - verify that change descriptions from patch releases appear in the minor release changelog, as backports may have different PR numbers

View File

@@ -1,190 +0,0 @@
# Instructions for AI Agents
Guidelines for AI agents contributing to Cozystack.
## Checklist for Creating a Pull Request
- [ ] Changes are made and tested
- [ ] Commit message uses correct `[component]` prefix
- [ ] Commit is signed off with `--signoff`
- [ ] Branch is rebased on `upstream/main` (no extra commits)
- [ ] PR body includes description and release note
- [ ] PR is pushed and created with `gh pr create`
## How to Commit and Create Pull Requests
### 1. Make Your Changes
Edit the necessary files in the codebase.
### 2. Commit with Proper Format
Use the `[component]` prefix and `--signoff` flag:
```bash
git commit --signoff -m "[component] Brief description of changes"
```
**Component prefixes:**
- System: `[dashboard]`, `[platform]`, `[cilium]`, `[kube-ovn]`, `[linstor]`, `[fluxcd]`, `[cluster-api]`
- Apps: `[postgres]`, `[mysql]`, `[redis]`, `[kafka]`, `[clickhouse]`, `[virtual-machine]`, `[kubernetes]`
- Other: `[tests]`, `[ci]`, `[docs]`, `[maintenance]`
**Examples:**
```bash
git commit --signoff -m "[dashboard] Add config hash annotations to restart pods on config changes"
git commit --signoff -m "[postgres] Update operator to version 1.2.3"
git commit --signoff -m "[docs] Add installation guide"
```
### 3. Rebase on upstream/main (if needed)
If your branch has extra commits, clean it up:
```bash
# Fetch latest
git fetch upstream
# Create clean branch from upstream/main
git checkout -b my-feature upstream/main
# Cherry-pick only your commit
git cherry-pick <your-commit-hash>
# Force push to your branch
git push -f origin my-feature:my-branch-name
```
### 4. Push Your Branch
```bash
git push origin <branch-name>
```
### 5. Create Pull Request
Write the PR body to a temporary file:
````bash
cat > /tmp/pr_body.md << 'EOF'
## What this PR does
Brief description of the changes.
Changes:
- Change 1
- Change 2
### Release note
```release-note
[component] Description for changelog
```
EOF
````
Create the PR:
```bash
gh pr create --title "[component] Brief description" --body-file /tmp/pr_body.md
```
Clean up:
```bash
rm /tmp/pr_body.md
```
## Addressing AI Bot Reviewer Comments
When the user asks to fix comments from AI bot reviewers (like Qodo, Copilot, etc.):
### 1. Get PR Comments
View all comments on the pull request:
```bash
gh pr view <PR-number> --comments
```
Or for the current branch:
```bash
gh pr view --comments
```
### 2. Review Each Comment Carefully
**Important**: Do NOT blindly apply all suggestions. Each comment should be evaluated:
- **Consider context** - Does the suggestion make sense for this specific case?
- **Check project conventions** - Does it align with Cozystack patterns?
- **Evaluate impact** - Will this improve code quality or introduce issues?
- **Question validity** - AI bots can be wrong or miss context
**When to apply:**
- ✅ Legitimate bugs or security issues
- ✅ Clear improvements to code quality
- ✅ Better error handling or edge cases
- ✅ Conformance to project conventions
**When to skip:**
- ❌ Stylistic preferences that don't match project style
- ❌ Over-engineering simple code
- ❌ Changes that break existing patterns
- ❌ Suggestions that show misunderstanding of the code
### 3. Apply Valid Fixes
Make changes addressing the valid comments. Use your judgment.
### 4. Leave Changes Uncommitted
**Critical**: Do NOT commit or push the changes automatically.
Leave the changes in the working directory so the user can:
- Review the fixes
- Decide whether to commit them
- Make additional adjustments if needed
```bash
# After making changes, show status but DON'T commit
git status
git diff
```
The user will commit and push when ready.
### Example Workflow
```bash
# Get PR comments
gh pr view 1234 --comments
# Review comments and identify valid ones
# Make necessary changes to address valid comments
# ... edit files ...
# Show what was changed (but don't commit)
git status
git diff
# Tell the user what was fixed and what was skipped
```
## Git Permissions
Request these permissions when needed:
- `git_write` - For commit, rebase, cherry-pick, branch operations
- `network` - For push, fetch, pull operations
## Common Issues
**PR has extra commits?**
→ Rebase on `upstream/main` and cherry-pick only your commits
**Wrong commit message?**
→ `git commit --amend --signoff -m "[correct] message"` then `git push -f`
**Need to update PR?**
→ `gh pr edit <number> --body "new description"`

View File

@@ -1,115 +0,0 @@
# Cozystack Project Overview
This document provides detailed information about Cozystack project structure and conventions for AI agents.
## About Cozystack
Cozystack is an open-source Kubernetes-based platform and framework for building cloud infrastructure. It provides:
- **Managed Services**: Databases, VMs, Kubernetes clusters, object storage, and more
- **Multi-tenancy**: Full isolation and self-service for tenants
- **GitOps-driven**: FluxCD-based continuous delivery
- **Modular Architecture**: Extensible with custom packages and services
- **Developer Experience**: Simplified local development with cozypkg tool
The platform exposes infrastructure services via the Kubernetes API with ready-made configs, built-in monitoring, and alerts.
## Code Layout
```
.
├── packages/ # Main directory for cozystack packages
│ ├── core/ # Core platform logic charts (installer, platform)
│ ├── system/ # System charts (CSI, CNI, operators, etc.)
│ ├── apps/ # User-facing charts shown in dashboard catalog
│ └── extra/ # Tenant-specific modules, singleton charts which are used as dependencies
├── dashboards/ # Grafana dashboards for monitoring
├── hack/ # Helper scripts for local development
│ └── e2e-apps/ # End-to-end application tests
├── scripts/ # Scripts used by cozystack container
│ └── migrations/ # Version migration scripts
├── docs/ # Documentation
│ ├── agents/ # AI agent instructions
│ └── changelogs/ # Release changelogs
├── cmd/ # Go command entry points
│ ├── cozystack-api/
│ ├── cozystack-controller/
│ └── cozystack-assets-server/
├── internal/ # Internal Go packages
│ ├── controller/ # Controller implementations
│ └── lineagecontrollerwebhook/
├── pkg/ # Public Go packages
│ ├── apis/
│ ├── apiserver/
│ └── registry/
└── api/ # Kubernetes API definitions (CRDs)
└── v1alpha1/
```
## Package Structure
Every package is a Helm chart following the umbrella chart pattern:
```
packages/<category>/<package-name>/
├── Chart.yaml # Chart definition and parameter docs
├── Makefile # Development workflow targets
├── charts/ # Vendored upstream charts
├── images/ # Dockerfiles and image build context
├── patches/ # Optional upstream chart patches
├── templates/ # Additional manifests
├── templates/dashboard-resourcemap.yaml # Dashboard resource mapping
├── values.yaml # Override values for upstream
└── values.schema.json # JSON schema for validation and UI
```
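To see which packages follow this layout, one option is to list their chart definitions (a sketch, assuming it is run from the repository root):
```bash
# List every package chart; vendored upstream charts sit deeper and are excluded by -maxdepth.
find packages -maxdepth 3 -name Chart.yaml | sort
```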
## Conventions
### Helm Charts
- Follow **umbrella chart** pattern for system components
- Include upstream charts in `charts/` directory (vendored, not referenced)
- Override configuration in root `values.yaml`
- Use `values.schema.json` for input validation and dashboard UI rendering
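A quick local check that `values.yaml` conforms to `values.schema.json` is to run Helm against the package (a sketch; the chart path is only an example and some charts may need their dependencies prepared first):
```bash
# Helm validates values.yaml against values.schema.json during lint and template.
helm lint packages/apps/postgres
# Check custom overrides against the schema as well:
helm template packages/apps/postgres -f my-values.yaml > /dev/null
```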
### Go Code
- Follow standard **Go conventions** and idioms
- Use **controller-runtime** patterns for Kubernetes controllers
- Prefer **kubebuilder** for API definitions and controllers
- Add proper error handling and structured logging
### Git Commits
- Use format: `[component] Description`
- Always use `--signoff` flag
- Reference PR numbers when available
- Keep commits atomic and focused
- Follow conventional commit format for changelogs
### Documentation
Documentation is organized as follows:
- `docs/` - General documentation
- `docs/agents/` - Instructions for AI agents
- `docs/changelogs/` - Release changelogs
- Main website: https://github.com/cozystack/website
## Things Agents Should Not Do
### Never Edit These
- Do not modify files in `/vendor/` (Go dependencies)
- Do not edit generated files: `zz_generated.*.go`
- Do not change `go.mod`/`go.sum` manually (use `go get`)
- Do not edit upstream charts in `packages/*/charts/` directly (use patches)
- Do not modify image digests in `values.yaml` (generated by build)
### Version Control
- Do not commit built artifacts from `_out`
- Do not commit test artifacts or temporary files
### Git Operations
- Do not force push to main/master
- Do not update git config
- Do not perform destructive operations without explicit request
### Core Components
- Do not modify `packages/core/platform/` without understanding migration impact

View File

@@ -1,29 +0,0 @@
# Release Process
This document provides instructions for AI agents on how to handle release-related tasks.
## When to Use
Follow these instructions when the user asks to:
- Create a new release
- Prepare a release
- Tag a release
- Perform release-related tasks
## Instructions
For detailed release process instructions, follow the steps documented in:
**[docs/release.md](../release.md)**
## Quick Reference
The release process typically involves:
1. Preparing the release branch
2. Generating changelog
3. Updating version numbers
4. Creating git tags
5. Building and publishing artifacts
All detailed steps are documented in `docs/release.md`.

View File

@@ -1,18 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0..
-->
## Features and Improvements
## Security
## Fixes
## Dependencies
## Development, Testing, and CI/CD
---
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.36.0...main

View File

@@ -1,20 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0..
-->
## Major Features and Improvements
## Security
## Fixes
## Dependencies
## Documentation
## Development, Testing, and CI/CD
---
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.34.0...v0.35.0

View File

@@ -129,7 +129,7 @@ For more information, read the [Cozystack Release Workflow](https://github.com/c
* [platform] Reduce requested CPU and RAM for the `kamaji` provider. (@klinch0 in https://github.com/cozystack/cozystack/pull/825)
* [platform] Improve the reconciliation loop for the Cozystack system HelmReleases logic. (@klinch0 in https://github.com/cozystack/cozystack/pull/809 and https://github.com/cozystack/cozystack/pull/810, @kvaps in https://github.com/cozystack/cozystack/pull/811)
* [platform] Remove extra dependencies for the Piraeus operator. (@klinch0 in https://github.com/cozystack/cozystack/pull/856)
* [platform] Refactor dashboard values. (@kvaps in https://github.com/cozystack/cozystack/pull/928, patched by @lllamnyp in https://github.com/cozystack/cozystack/pull/952)
* [platform] Refactor dashboard values. (@kvaps in https://github.com/cozystack/cozystack/pull/928, patched by @llamnyp in https://github.com/cozystack/cozystack/pull/952)
* [platform] Make FluxCD artifact disabled by default. (@klinch0 in https://github.com/cozystack/cozystack/pull/964)
* [kubernetes] Update garbage collection of HelmReleases in tenant Kubernetes clusters. (@kvaps in https://github.com/cozystack/cozystack/pull/835)
* [kubernetes] Fix merging `valuesOverride` for tenant clusters. (@kvaps in https://github.com/cozystack/cozystack/pull/879)

View File

@@ -1,8 +0,0 @@
## Fixes
* [build] Update Talos Linux v1.10.3 and fix assets. (@kvaps in https://github.com/cozystack/cozystack/pull/1006)
* [ci] Fix uploading released artifacts to GitHub. (@kvaps in https://github.com/cozystack/cozystack/pull/1009)
* [ci] Separate build and testing jobs. (@kvaps in https://github.com/cozystack/cozystack/pull/1005)
* [docs] Write a full release post for v0.31.1. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/999)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.31.0...v0.31.1

View File

@@ -1,12 +0,0 @@
## Security
* Resolve a security problem that allowed a tenant administrator to gain enhanced privileges outside the tenant. (@kvaps in https://github.com/cozystack/cozystack/pull/1062, backported in https://github.com/cozystack/cozystack/pull/1066)
## Fixes
* [platform] Fix dependencies in `distro-full` bundle. (@klinch0 in https://github.com/cozystack/cozystack/pull/1056, backported in https://github.com/cozystack/cozystack/pull/1064)
* [platform] Fix RBAC for annotating namespaces. (@kvaps in https://github.com/cozystack/cozystack/pull/1031, backported in https://github.com/cozystack/cozystack/pull/1037)
* [platform] Reduce system resource consumption by using smaller resource presets for VerticalPodAutoscaler, SeaweedFS, and KubeOVN. (@klinch0 in https://github.com/cozystack/cozystack/pull/1054, backported in https://github.com/cozystack/cozystack/pull/1058)
* [dashboard] Fix a number of issues in the Cozystack Dashboard. (@kvaps in https://github.com/cozystack/cozystack/pull/1042, backported in https://github.com/cozystack/cozystack/pull/1066)
* [apps] Specify minimal working resource presets. (@kvaps in https://github.com/cozystack/cozystack/pull/1040, backported in https://github.com/cozystack/cozystack/pull/1041)
* [apps] Update built-in documentation and configuration reference for managed Clickhouse application. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1059, backported in https://github.com/cozystack/cozystack/pull/1065)

View File

@@ -1,71 +0,0 @@
Cozystack v0.32.0 is a significant release that brings new features, key fixes, and updates to underlying components.
## Major Features and Improvements
* [platform] Use `cozypkg` instead of Helm (@kvaps in https://github.com/cozystack/cozystack/pull/1057)
* [platform] Introduce the HelmRelease reconciler for system components. (@kvaps in https://github.com/cozystack/cozystack/pull/1033)
* [kubernetes] Enable using container registry mirrors by tenant Kubernetes clusters. Configure containerd for tenant Kubernetes clusters. (@klinch0 in https://github.com/cozystack/cozystack/pull/979, patched by @lllamnyp in https://github.com/cozystack/cozystack/pull/1032)
* [platform] Allow users to specify CPU requests in VCPUs. Use a library chart for resource management. (@lllamnyp in https://github.com/cozystack/cozystack/pull/972 and https://github.com/cozystack/cozystack/pull/1025)
* [platform] Annotate all child objects of apps with uniform labels for tracking by WorkloadMonitors. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1018 and https://github.com/cozystack/cozystack/pull/1024)
* [platform] Introduce `cluster-domain` option and un-hardcode `cozy.local`. (@kvaps in https://github.com/cozystack/cozystack/pull/1039)
* [platform] Get instance type when reconciling WorkloadMonitor (https://github.com/cozystack/cozystack/pull/1030)
* [virtual-machine] Add RBAC rules to allow port forwarding in KubeVirt for SSH via `virtctl`. (@mattia-eleuteri in https://github.com/cozystack/cozystack/pull/1027, patched by @klinch0 in https://github.com/cozystack/cozystack/pull/1028)
* [monitoring] Add events and audit inputs (@kevin880202 in https://github.com/cozystack/cozystack/pull/948)
## Security
* Resolve a security problem that allowed a tenant administrator to gain enhanced privileges outside the tenant. (@kvaps in https://github.com/cozystack/cozystack/pull/1062)
## Fixes
* [dashboard] Fix a number of issues in the Cozystack Dashboard. (@kvaps in https://github.com/cozystack/cozystack/pull/1042)
* [kafka] Specify minimal working resource presets. (@kvaps in https://github.com/cozystack/cozystack/pull/1040)
* [cilium] Fix Gateway API manifest. (@zdenekjanda in https://github.com/cozystack/cozystack/pull/1016)
* [platform] Fix RBAC for annotating namespaces. (@kvaps in https://github.com/cozystack/cozystack/pull/1031)
* [platform] Fix dependencies for paas-hosted bundle. (@kvaps in https://github.com/cozystack/cozystack/pull/1034)
* [platform] Reduce system resource consumption by using lesser resource presets for VerticalPodAutoscaler, SeaweedFS, and KubeOVN. (@klinch0 in https://github.com/cozystack/cozystack/pull/1054)
* [virtual-machine] Fix handling of cloudinit and ssh-key input for `virtual-machine` and `vm-instance` applications. (@gwynbleidd2106 in https://github.com/cozystack/cozystack/pull/1019 and https://github.com/cozystack/cozystack/pull/1020)
* [apps] Fix Clickhouse version parsing. (@kvaps in https://github.com/cozystack/cozystack/commit/28302e776e9d2bb8f424cf467619fa61d71ac49a)
* [apps] Add resource quotas for PostgreSQL jobs and fix application readme generation check in CI. (@klinch0 in https://github.com/cozystack/cozystack/pull/1051)
* [kube-ovn] Enable database health check. (@kvaps in https://github.com/cozystack/cozystack/pull/1047)
* [kubernetes] Fix upstream issue by updating Kubevirt-CCM. (@kvaps in https://github.com/cozystack/cozystack/pull/1052)
* [kubernetes] Fix resources and introduce a migration when upgrading tenant Kubernetes to v0.32.4. (@kvaps in https://github.com/cozystack/cozystack/pull/1073)
* [cluster-api] Add a missing migration for `capi-providers`. (@kvaps in https://github.com/cozystack/cozystack/pull/1072)
## Dependencies
* Introduce cozypkg, update to v1.1.0. (@kvaps in https://github.com/cozystack/cozystack/pull/1057 and https://github.com/cozystack/cozystack/pull/1063)
* Update flux-operator to 0.22.0, Flux to 2.6.x. (@kingdonb in https://github.com/cozystack/cozystack/pull/1035)
* Update Talos Linux to v1.10.3. (@kvaps in https://github.com/cozystack/cozystack/pull/1006)
* Update Cilium to v1.17.4. (@kvaps in https://github.com/cozystack/cozystack/pull/1046)
* Update MetalLB to v0.15.2. (@kvaps in https://github.com/cozystack/cozystack/pull/1045)
* Update Kube-OVN to v1.13.13. (@kvaps in https://github.com/cozystack/cozystack/pull/1047)
## Documentation
* [Oracle Cloud Infrastructure installation guide](https://cozystack.io/docs/operations/talos/installation/oracle-cloud/). (@kvaps, @lllamnyp, and @NickVolynkin in https://github.com/cozystack/website/pull/168)
* [Cluster configuration with `talosctl`](https://cozystack.io/docs/operations/talos/configuration/talosctl/). (@NickVolynkin in https://github.com/cozystack/website/pull/211)
* [Configuring container registry mirrors for tenant Kubernetes clusters](https://cozystack.io/docs/operations/talos/configuration/air-gapped/#5-configure-container-registry-mirrors-for-tenant-kubernetes). (@klinch0 in https://github.com/cozystack/website/pull/210)
* [Explain application management strategies and available versions for managed applications.](https://cozystack.io/docs/guides/applications/). (@NickVolynkin in https://github.com/cozystack/website/pull/219)
* [How to clean up etcd state](https://cozystack.io/docs/operations/faq/#how-to-clean-up-etcd-state). (@gwynbleidd2106 in https://github.com/cozystack/website/pull/214)
* [State that Cozystack is a CNCF Sandbox project](https://github.com/cozystack/cozystack?tab=readme-ov-file#cozystack). (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1055)
## Development, Testing, and CI/CD
* [tests] Add tests for applications `virtual-machine`, `vm-disk`, `vm-instance`, `postgresql`, `mysql`, and `clickhouse`. (@gwynbleidd2106 in https://github.com/cozystack/cozystack/pull/1048, patched by @kvaps in https://github.com/cozystack/cozystack/pull/1074)
* [tests] Fix concurrency for the `docker login` action. (@kvaps in https://github.com/cozystack/cozystack/pull/1014)
* [tests] Increase QEMU system disk size in tests. (@kvaps in https://github.com/cozystack/cozystack/pull/1011)
* [tests] Increase the waiting timeout for VMs in tests. (@kvaps in https://github.com/cozystack/cozystack/pull/1038)
* [ci] Separate build and testing jobs in CI. (@kvaps in https://github.com/cozystack/cozystack/pull/1005 and https://github.com/cozystack/cozystack/pull/1010)
* [ci] Fix the release assets. (@kvaps in https://github.com/cozystack/cozystack/pull/1006 and https://github.com/cozystack/cozystack/pull/1009)
## New Contributors
* @kevin880202 made their first contribution in https://github.com/cozystack/cozystack/pull/948
* @mattia-eleuteri made their first contribution in https://github.com/cozystack/cozystack/pull/1027
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.31.0...v0.32.0
<!--
HEAD https://github.com/cozystack/cozystack/commit/3ce6dbe8
-->

View File

@@ -1,38 +0,0 @@
## Major Features and Improvements
* [postgres] Introduce new functionality for backup and restore in PostgreSQL. (@klinch0 in https://github.com/cozystack/cozystack/pull/1086)
* [apps] Refactor resources in managed applications. (@kvaps in https://github.com/cozystack/cozystack/pull/1106)
* [system] Make VMAgent's `extraArgs` tunable. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1091)
## Fixes
* [postgres] Escape users and database names. (@kvaps in https://github.com/cozystack/cozystack/pull/1087)
* [tenant] Fix monitoring agents HelmReleases for tenant clusters. (@klinch0 in https://github.com/cozystack/cozystack/pull/1079)
* [kubernetes] Wrap cert-manager CRDs in a conditional. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1076)
* [kubernetes] Remove `useCustomSecretForPatchContainerd` option and enable it by default. (@kvaps in https://github.com/cozystack/cozystack/pull/1104)
* [apps] Increase default resource presets for Clickhouse and Kafka from `nano` to `small`. Update OpenAPI specs and READMEs. (@kvaps in https://github.com/cozystack/cozystack/pull/1103 and https://github.com/cozystack/cozystack/pull/1105)
* [linstor] Add configurable DRBD network options for connection and timeout settings, replacing scripted logic for detecting devices that lost connection. (@kvaps in https://github.com/cozystack/cozystack/pull/1094)
## Dependencies
* Update cozy-proxy to v0.2.0 (@kvaps in https://github.com/cozystack/cozystack/pull/1081)
* Update Kafka Operator to 0.45.1-rc1 (@kvaps in https://github.com/cozystack/cozystack/pull/1082 and https://github.com/cozystack/cozystack/pull/1102)
* Update Flux Operator to 0.23.0 (@kingdonb in https://github.com/cozystack/cozystack/pull/1078)
## Documentation
* [docs] Release notes for v0.32.0 and two beta-versions. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1043)
## Development, Testing, and CI/CD
* [tests] Add Kafka, Redis. (@gwynbleidd2106 in https://github.com/cozystack/cozystack/pull/1077)
* [tests] Increase disk space for VMs in tests. (@kvaps in https://github.com/cozystack/cozystack/pull/1097)
* [tests] Update Kubernetes to v1.33. (@kvaps in https://github.com/cozystack/cozystack/pull/1083)
* [tests] Increase PostgreSQL timeouts. (@kvaps in https://github.com/cozystack/cozystack/pull/1108)
* [tests] Don't wait for the PostgreSQL read-only (`ro`) service. (@kvaps in https://github.com/cozystack/cozystack/pull/1109)
* [ci] Setup systemd timer to tear down sandbox. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1092)
* [ci] Split testing job into several. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1075)
* [ci] Run E2E tests as separate parallel jobs. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1093)
* [ci] Refactor GitHub workflows. (@kvaps in https://github.com/cozystack/cozystack/pull/1107)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.32.0...v0.32.1

View File

@@ -1,91 +0,0 @@
> [!WARNING]
> A patch release [0.33.2](https://github.com/cozystack/cozystack/releases/tag/v0.33.2) fixing a regression in 0.33.0 has been released.
> It is recommended to skip this version and upgrade to [0.33.2](https://github.com/cozystack/cozystack/releases/tag/v0.33.2) instead.
## Feature Highlights
### Unified CPU and Memory Allocation Management
In version 0.31.0, Cozystack introduced a single-point-of-truth configuration variable `cpu-allocation-ratio`,
making CPU resource requests and limits uniform in Virtual Machines managed by KubeVirt.
The new release 0.33.0 introduces `memory-allocation-ratio` and expands both variables to all managed applications and tenant resource quotas.
Resource presets also respect the allocation ratios and behave in the same way as explicit resource definitions.
The new resource definition format is concise and simple for platform users.
```yaml
# resource definition in the configuration
resources:
cpu: <defined cpu value>
memory: <defined memory value>
```
It results in Kubernetes resource requests and limits, based on defined values and the universal allocation ratios:
```yaml
# actual requests and limits, provided to the application
resources:
limits:
cpu: <defined cpu value>
memory: <defined memory value>
requests:
cpu: <defined cpu value / cpu-allocation-ratio>
memory: <defined memory value / memory-allocation-ratio>
```
When updating from earlier Cozystack versions, resource configuration in managed applications will be automatically migrated to the new format.
### Backing up and Restoring Data in Tenant Kubernetes
One of the main features of the release is backup capability for PVCs in tenant Kubernetes clusters.
It enables platform and tenant administrators to back up and restore data used by services in the tenant clusters.
This new functionality in Cozystack is powered by [Velero](https://velero.io/) and needs an external S3-compatible storage.
### Support for NFS Storage
Cozystack now supports using NFS shared storage with a new optional system module.
See the documentation: https://cozystack.io/docs/operations/storage/nfs/.
## Features and Improvements
* [kubernetes] Enable PVC backups in tenant Kubernetes clusters, powered by [Velero](https://velero.io/). (@klinch0 in https://github.com/cozystack/cozystack/pull/1132)
* [nfs-driver] Enable NFS support by introducing a new optional system module `nfs-driver`. (@kvaps in https://github.com/cozystack/cozystack/pull/1133)
* [virtual-machine] Configure CPU sockets available to VMs with the `resources.cpu.sockets` configuration value. (@klinch0 in https://github.com/cozystack/cozystack/pull/1131)
* [virtual-machine] Add support for using pre-imported "golden image" disks for virtual machines, enabling faster provisioning by referencing existing images instead of downloading via HTTP. (@gwynbleidd2106 in https://github.com/cozystack/cozystack/pull/1112)
* [kubernetes] Add an option to expose the Ingress-NGINX controller in tenant Kubernetes cluster via LoadBalancer. New configuration value `exposeMethod` offers a choice of `Proxied` and `LoadBalancer`. (@kvaps in https://github.com/cozystack/cozystack/pull/1114)
* [apps] When updating from earlier Cozystack versions, automatically migrate to the new resource definition format: from `resources.requests.[cpu,memory]` and `resources.limits.[cpu,memory]` to `resources.[cpu,memory]`. (@kvaps in https://github.com/cozystack/cozystack/pull/1127)
* [apps] Give examples of new resource definitions in the managed app READMEs. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1120)
* [tenant] Respect `cpu-allocation-ratio` in tenant's `resourceQuotas`. (@kvaps in https://github.com/cozystack/cozystack/pull/1119)
* [cozy-lib] Introduce helper function to calculate Java heap params based on memory requests and limits. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1157)
## Security
* [monitoring] Disable sign up in Alerta. (@klinch0 in https://github.com/cozystack/cozystack/pull/1129)
## Fixes
* [platform] Always set resources for managed apps. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1156)
* [platform] Remove the memory limit for Keycloak deployment. (@klinch0 in https://github.com/cozystack/cozystack/pull/1122)
* [kubernetes] Fix a condition in the ingress template for tenant Kubernetes. (@kvaps in https://github.com/cozystack/cozystack/pull/1143)
* [kubernetes] Fix a deadlock on reattaching a KubeVirt-CSI volume. (@kvaps in https://github.com/cozystack/cozystack/pull/1135)
* [mysql] MySQL applications with a single replica now correctly create a `LoadBalancer` service. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1113)
* [etcd] Fix resources and headless services in the etcd application. (@kvaps in https://github.com/cozystack/cozystack/pull/1128)
* [apps] Enable selecting `resourcePreset` from a drop-down list for all applications by adding enum of allowed values in the config scheme. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1117)
* [apps] Refactor resource presets provided to managed apps by `cozy-lib`. (@kvaps in https://github.com/cozystack/cozystack/pull/1155)
* [keycloak] Calculate and pass Java heap parameters explicitly to prevent OOM errors. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1157)
## Development, Testing, and CI/CD
* [dx] Introduce cozyreport tool and gather reports in CI. (@kvaps in https://github.com/cozystack/cozystack/pull/1139)
* [ci] Use Nexus as a pull-through cache for CI. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1124)
* [ci] Save a list of observed images after each workflow run. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1089)
* [ci] Skip Cozystack tests on PRs that only change the docs. Don't restart CI when a PR is labeled. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1136)
* [dx] Fix Makefile variables for `capi-providers`. (@kvaps in https://github.com/cozystack/cozystack/pull/1115)
* [tests] Introduce self-destructing testing environments. (@kvaps in https://github.com/cozystack/cozystack/pull/1138, https://github.com/cozystack/cozystack/pull/1140, https://github.com/cozystack/cozystack/pull/1141, https://github.com/cozystack/cozystack/pull/1142)
* [e2e] Retry flaky application tests to improve total test time. (@kvaps in https://github.com/cozystack/cozystack/pull/1123)
* [maintenance] Add a PR template. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1121)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.32.1...v0.33.0

View File

@@ -1,3 +0,0 @@
## Fixes
* [kubevirt-csi] Fix a regression by updating the role of the CSI controller. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1165)

View File

@@ -1,19 +0,0 @@
## Features and Improvements
* [vm-instance] Enable running [Windows](https://cozystack.io/docs/operations/virtualization/windows/) and [MikroTik RouterOS](https://cozystack.io/docs/operations/virtualization/mikrotik/) in Cozystack. Add `bus` option and always specify `bootOrder` for all disks. (@kvaps in https://github.com/cozystack/cozystack/pull/1168)
* [cozystack-api] Refactor OpenAPI Schema and support reading it from config. (@kvaps in https://github.com/cozystack/cozystack/pull/1173)
* [cozystack-api] Enable using singular resource names in Cozystack API. For example, `kubectl get tenant` is now a valid command, in addition to `kubectl get tenants`. (@kvaps in https://github.com/cozystack/cozystack/pull/1169)
* [postgres] Explain how to back up and restore PostgreSQL using Velero backups. (@klinch0 and @NickVolynkin in https://github.com/cozystack/cozystack/pull/1141)
## Fixes
* [virtual-machine,vm-instance] Adjusted RBAC role to let users read the service associated with the VMs they create. Consequently, users can now see details of the service in the dashboard and therefore read the IP address of the VM. (@klinch0 in https://github.com/cozystack/cozystack/pull/1161)
* [cozystack-api] Fix an error with `resourceVersion` which resulted in message 'failed to update HelmRelease: helmreleases.helm.toolkit.fluxcd.io "xxx" is invalid...'. (@kvaps in https://github.com/cozystack/cozystack/pull/1170)
* [cozystack-api] Fix an error in updating lists in Cozystack objects, which resulted in message "Warning: resource ... is missing the kubectl.kubernetes.io/last-applied-configuration annotation". (@kvaps in https://github.com/cozystack/cozystack/pull/1171)
* [cozystack-api] Disable `strategic-json-patch` support. (@kvaps in https://github.com/cozystack/cozystack/pull/1179)
* [dashboard] Fix the code for removing dashboard comments which used to mistakenly remove shebang from cloudInit scripts. (@kvaps in https://github.com/cozystack/cozystack/pull/1175).
* [virtual-machine] Fix cloudInit and sshKeys processing. (@kvaps in https://github.com/cozystack/cozystack/pull/1175 and https://github.com/cozystack/cozystack/commit/da3ee5d0ea9e87529c8adc4fcccffabe8782292e)
* [applications] Fix a typo in preset resource tables in the built-in documentation of managed applications. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1172)
* [kubernetes] Enable deleting Velero component from a tenant Kubernetes cluster. (@klinch0 in https://github.com/cozystack/cozystack/pull/1176)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.33.1...v0.33.2

@@ -1,87 +0,0 @@
Cozystack v0.34.0 is a stable release.
It focuses on cluster reliability, virtualization capabilities, and enhancements to the Cozystack API.
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.34.0
-->
> [!WARNING]
> A regression was found in this release and fixed in patch [0.34.3](https://github.com/cozystack/cozystack/releases/tag/v0.34.3).
> When upgrading Cozystack, it's recommended to skip this version and upgrade directly to [0.34.3](https://github.com/cozystack/cozystack/releases/tag/v0.34.3).
## Major Features and Improvements
* [kubernetes] Enable users to select Kubernetes versions in tenant clusters. Supported versions range from 1.28 to 1.33, updated to the latest patches. (@lllamnyp and @IvanHunters in https://github.com/cozystack/cozystack/pull/1202)
* [kubernetes] Enable PVC snapshot capability in tenant Kubernetes clusters. (@klinch0 in https://github.com/cozystack/cozystack/pull/1203)
* [vpa] Implement autoscaling for the Vertical Pod Autoscaler itself, ensuring that VPA has sufficient resources and reducing the number of configuration parameters that platform administrators have to manage. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1198)
* [vm-instance] Enable running [Windows](https://cozystack.io/docs/operations/virtualization/windows/) and [MikroTik RouterOS](https://cozystack.io/docs/operations/virtualization/mikrotik/) in Cozystack. Add `bus` option and always specify `bootOrder` for all disks. (@kvaps in https://github.com/cozystack/cozystack/pull/1168)
* [cozystack-api] Specify OpenAPI schema for apps. (@kvaps in https://github.com/cozystack/cozystack/pull/1174)
* [cozystack-api] Refactor OpenAPI Schema and support reading it from config. (@kvaps in https://github.com/cozystack/cozystack/pull/1173)
* [cozystack-api] Enable using singular resource names in Cozystack API. For example, `kubectl get tenant` is now a valid command, in addition to `kubectl get tenants`. (@kvaps in https://github.com/cozystack/cozystack/pull/1169)
* [postgres] Explain how to back up and restore PostgreSQL using Velero backups. (@klinch0 and @NickVolynkin in https://github.com/cozystack/cozystack/pull/1141)
* [seaweedfs] Support multi-zone configuration for S3 storage. (@kvaps in https://github.com/cozystack/cozystack/pull/1194)
* [dashboard] Put YAML editor first when deploying and upgrading applications, as a more powerful option. Fix handling multiline strings. (@kvaps in https://github.com/cozystack/cozystack/pull/1227)
## Security
* [seaweedfs] Ensure that JWT signing keys in the SeaweedFS security configuration remain consistent across Helm upgrades. Resolve an upstream issue. (@kvaps in https://github.com/cozystack/cozystack/pull/1193 and https://github.com/seaweedfs/seaweedfs/pull/6967)
## Fixes
* [cozystack-controller] Fix stale workloads not being deleted when marked for deletion. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1210, @kvaps in https://github.com/cozystack/cozystack/pull/1229)
* [cozystack-controller] Improve reliability when updating HelmRelease objects to prevent unintended changes during reconciliation. (@klinch0 in https://github.com/cozystack/cozystack/pull/1205)
* [kubevirt-csi] Fix a regression by updating the role of the CSI controller. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1165)
* [virtual-machine,vm-instance] Adjusted RBAC role to let users read the service associated with the VMs they create. Consequently, users can now see details of the service in the dashboard and therefore read the IP address of the VM. (@klinch0 in https://github.com/cozystack/cozystack/pull/1161)
* [virtual-machine] Fix cloudInit and sshKeys processing. (@kvaps in https://github.com/cozystack/cozystack/pull/1175 and https://github.com/cozystack/cozystack/commit/da3ee5d0ea9e87529c8adc4fcccffabe8782292e)
* [cozystack-api] Fix an error with `resourceVersion` which resulted in message 'failed to update HelmRelease: helmreleases.helm.toolkit.fluxcd.io "xxx" is invalid...'. (@kvaps in https://github.com/cozystack/cozystack/pull/1170)
* [cozystack-api] Fix an error in updating lists in Cozystack objects, which resulted in message "Warning: resource ... is missing the kubectl.kubernetes.io/last-applied-configuration annotation". (@kvaps in https://github.com/cozystack/cozystack/pull/1171)
* [cozystack-api] Disable `strategic-json-patch` support. (@kvaps in https://github.com/cozystack/cozystack/pull/1179)
* [cozystack-api] Fix non-existing OpenAPI references. (@kvaps in https://github.com/cozystack/cozystack/pull/1208)
* [dashboard] Fix the code for removing dashboard comments which used to mistakenly remove shebang from `cloudInit` scripts. (@kvaps in https://github.com/cozystack/cozystack/pull/1175).
* [applications] Reorder configuration values in application README's for better readability. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1214)
* [applications] Disallow selecting `resourcePreset = none` in the visual editor when deploying and upgrading applications. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1196)
* [applications] Fix a typo in preset resource tables in the built-in documentation of managed applications. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1172)
* [kubernetes] Enable deleting Velero component from a tenant Kubernetes cluster. (@klinch0 in https://github.com/cozystack/cozystack/pull/1176)
* [kubernetes] Explicitly mention available K8s versions for tenant clusters in the README. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1212)
* [oidc] Enable deleting Keycloak service. (@klinch0 in https://github.com/cozystack/cozystack/pull/1178)
* [tenant] Enable deleting extra applications from a tenant. (@klinch0 and @kvaps in https://github.com/cozystack/cozystack/pull/1162)
* [nats] Fix a typo in the application template. (@klinch0 in https://github.com/cozystack/cozystack/pull/1195)
* [postgres] Resolve an issue with the visibility of PostgreSQL load balancer on the dashboard. (@klinch0 in https://github.com/cozystack/cozystack/pull/1204)
* [objectstorage] Update COSI controller and sidecar, including fixes from upstream. (@kvaps in https://github.com/cozystack/cozystack/pull/1209, https://github.com/kubernetes-sigs/container-object-storage-interface/pull/89, and https://github.com/kubernetes-sigs/container-object-storage-interface/pull/90)
## Dependencies
* Update FerretDB from v1 to v2.4.0.<br>**Breaking change:** before upgrading FerretDB instances, back up and restore the data following the [migration guide](https://docs.ferretdb.io/migration/migrating-from-v1/). (@kvaps in https://github.com/cozystack/cozystack/pull/1206)
* Update Talos Linux to v1.10.5. (@kvaps in https://github.com/cozystack/cozystack/pull/1186)
* Update LINSTOR to v1.31.2. (@kvaps in https://github.com/cozystack/cozystack/pull/1180)
* Update KubeVirt to v1.5.2. (@kvaps in https://github.com/cozystack/cozystack/pull/1183)
* Update CDI to v1.62.0. (@kvaps in https://github.com/cozystack/cozystack/pull/1183)
* Update Flux Operator to 0.24.0. (@kingdonb in https://github.com/cozystack/cozystack/pull/1167)
* Update Kamaji to edge-25.7.1. (@kvaps in https://github.com/cozystack/cozystack/pull/1184)
* Update Kube-OVN to v1.13.14. (@kvaps in https://github.com/cozystack/cozystack/pull/1182)
* Update Cilium to v1.17.5. (@kvaps in https://github.com/cozystack/cozystack/pull/1181)
* Update MariaDB Operator to v0.38.1. (@kvaps in https://github.com/cozystack/cozystack/pull/1188)
* Update SeaweedFS to v3.94. (@kvaps in https://github.com/cozystack/cozystack/pull/1194)
## Documentation
* [Updated Cozystack Roadmap and Backlog for 2024-2026](https://cozystack.io/docs/roadmap/). (@tym83 and @kvapsova in https://github.com/cozystack/website/pull/249)
* [Running Windows VMs](https://cozystack.io/docs/operations/virtualization/windows/). (@kvaps and @NickVolynkin in https://github.com/cozystack/website/pull/246)
* [Running MikroTik RouterOS VMs](https://cozystack.io/docs/operations/virtualization/mikrotik/). (@kvaps and @NickVolynkin in https://github.com/cozystack/website/pull/247)
* [Public-network Kubernetes Deployment](https://cozystack.io/docs/operations/faq/#public-network-kubernetes-deployment). (@klinch0 and @NickVolynkin in https://github.com/cozystack/website/pull/242)
* [How to allocate space on system disk for user storage](https://cozystack.io/docs/operations/faq/#how-to-allocate-space-on-system-disk-for-user-storage). (@klinch0 and @NickVolynkin in https://github.com/cozystack/website/pull/242)
* [Resource Management in Cozystack](https://cozystack.io/docs/guides/resource-management/). (@NickVolynkin in https://github.com/cozystack/website/pull/233)
* [Key Concepts of Cozystack](https://cozystack.io/docs/guides/concepts/). (@NickVolynkin in https://github.com/cozystack/website/pull/254)
* [Cozystack Architecture and Platform Stack](https://cozystack.io/docs/guides/platform-stack/). (@NickVolynkin in https://github.com/cozystack/website/pull/252)
* Fixed a parameter in KubeSpan: `cluster.discovery.enabled = true`. (@lb0o in https://github.com/cozystack/website/pull/241)
* Updated the Linux Foundation trademark text on the Cozystack website. (@krook in https://github.com/cozystack/website/pull/251)
* Auto-update the managed applications reference pages. (@NickVolynkin in https://github.com/cozystack/website/pull/243 and https://github.com/cozystack/website/pull/245)
## Development, Testing, and CI/CD
* [ci] Improve workflow for contributors submitting PRs from forks. Use Oracle Cloud Infrastructure Registry for non-release PRs, bypassing restrictions preventing pushing to ghcr.io with default GitHub token. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1226)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.33.0...v0.34.0

@@ -1,15 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.34.1
-->
> [!WARNING]
> A regression was found in this release and fixed in patch [0.34.3](https://github.com/cozystack/cozystack/releases/tag/v0.34.3).
> When upgrading Cozystack, it's recommended to skip this version and upgrade directly to [0.34.3](https://github.com/cozystack/cozystack/releases/tag/v0.34.3).
## Fixes
* [kubernetes] Fix regression in `volumesnapshotclass` installation from https://github.com/cozystack/cozystack/pull/1203. (@kvaps in https://github.com/cozystack/cozystack/pull/1238)
* [objectstorage] Fix building objectstorage images. (@kvaps in https://github.com/cozystack/cozystack/commit/a9e9dfca1fadde1bf2b4e100753e0731bbcfe923)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.34.0...v0.34.1

@@ -1,14 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.34.2
-->
> [!WARNING]
> A regression was found in this release and fixed in patch [0.34.3](https://github.com/cozystack/cozystack/releases/tag/v0.34.3).
> When upgrading Cozystack, it's recommended to skip this version and upgrade directly to [0.34.3](https://github.com/cozystack/cozystack/releases/tag/v0.34.3).
## Fixes
* [objectstorage] Fix recording image in objectstorage. (@kvaps in https://github.com/cozystack/cozystack/commit/4d9a8389d6bc7e86d63dd976ec853b374a91a637)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.34.1...v0.34.2

@@ -1,13 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.34.3
-->
## Fixes
* [tenant] Fix tenant network policy to allow traffic to additional tenant-related services across namespace hierarchies. (@klinch0 in https://github.com/cozystack/cozystack/pull/1232, backported in https://github.com/cozystack/cozystack/pull/1272)
* [kubernetes] Add dependency for snapshot CRD and migration to latest version. (@kvaps in https://github.com/cozystack/cozystack/pull/1275, backported in https://github.com/cozystack/cozystack/pull/1279)
* [seaweedfs] Add support for whitelisting and exporting via nginx-ingress. Update cosi-driver. (@kvaps in https://github.com/cozystack/cozystack/pull/1277)
* [kubevirt] Fix building KubeVirt CCM. (@kvaps in https://github.com/cozystack/cozystack/commit/3c7e256906e1dbb0f957dc3a205fa77a147d419d)
* [virtual-machine] Fix a regression with field `optional=true`. (@kvaps in https://github.com/cozystack/cozystack/commit/01053f7c3180d1bd045d7c5fb949984c2bdaf19d)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.34.2...v0.34.3

@@ -1,21 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.34.4
-->
## Security
* [keycloak] Store administrative passwords in the management cluster's secrets. (@IvanHunters in https://github.com/cozystack/cozystack/pull/1286)
* [keycloak] Update Keycloak client redirect URI to use HTTPS instead of HTTP. Enable `cookie-secure`. (@klinch0 in https://github.com/cozystack/cozystack/pull/1287, backported in https://github.com/cozystack/cozystack/pull/1291)
## Fixes
* [kubernetes] Resolve problems with pod names exceeding allowed length by shortening the name of volume snapshot CRD from `*-volumesnapshot-crd-for-tenant-k8s` to `*-vsnap-crd`. To apply this change, update each affected tenant Kubernetes cluster after updating Cozystack. (@klinch0 in https://github.com/cozystack/cozystack/pull/1284)
* [cozystack-api] Show correct `kind` values of `ApplicationList`. (@kvaps in https://github.com/cozystack/cozystack/pull/1290, backported in https://github.com/cozystack/cozystack/pull/1293)
## Development, Testing, and CI/CD
* [tests] Add tests for S3 buckets. (@IvanHunters in https://github.com/cozystack/cozystack/pull/1283, backported in https://github.com/cozystack/cozystack/pull/1292)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.34.3...v0.34.4

@@ -1,11 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.34.5
-->
## Fixes
* [virtual-machine] Enable using custom `instanceType` values in `virtual-machine` and `vm-instance` by disabling field validation. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1300, backported in https://github.com/cozystack/cozystack/pull/1303)
* [kubernetes] Disable VPA for VPA in tenant Kubernetes clusters. Tenant clusters have no need for this feature, and it was not designed to work in a tenant cluster, but was enabled by mistake. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1301, backported in https://github.com/cozystack/cozystack/pull/1305)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.34.4...v0.34.5

@@ -1,9 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.34.6
-->
## Fixes
* [dashboard] Fix filling multiline values in the visual editor. (@kvaps in https://github.com/cozystack/cozystack/commit/56fca9bd75efeca25f9483f6c514b6fec26d5d22 and https://github.com/cozystack/kubeapps/commit/4926bc68fabb0914afab574006643c85a597b371)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.34.5...v0.34.6

@@ -1,11 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.34.7
-->
## Fixes
* [seaweedfs] Disable proxy buffering and proxy request buffering for ingress. (@kvaps in https://github.com/cozystack/cozystack/pull/1330, backported in https://github.com/cozystack/cozystack/commit/96d462e911d4458704b596533d3f10e4b5e80862)
* [linstor] Update LINSTOR monitoring configuration to use label `controller_node` instead of `node`. (@kvaps in https://github.com/cozystack/cozystack/pull/1326, backported in https://github.com/cozystack/cozystack/pull/1327)
* [kubernetes] Disable VPA for VPA in tenant Kubernetes clusters, patched a fix from v0.34.5. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1318, backported in https://github.com/cozystack/cozystack/pull/1319)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.34.6...v0.34.7

@@ -1,11 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.34.8
-->
## Fixes
* [etcd] Fix `topologySpreadConstraints`. (@klinch0 in https://github.com/cozystack/cozystack/pull/1331, backported in https://github.com/cozystack/cozystack/pull/1332)
* [linstor] Update LINSTOR monitoring configuration: switch labels on `linstor-satellite` and `linstor-controller`. (@kvaps in https://github.com/cozystack/cozystack/pull/1335, backported in https://github.com/cozystack/cozystack/pull/1336)
* [kamaji] Fix broken migration jobs originating from missing environment variables in the in-tree build. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1338, backported in https://github.com/cozystack/cozystack/pull/1340)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.34.7...v0.34.8

@@ -1,138 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.35.0
-->
## Feature Highlights
### External Application Sources in Cozystack
Cozystack now supports adding external application packages to the platform's application catalog.
Platform administrators can include custom or third-party applications alongside built-in ones, using the Cozystack API.
Adding an application requires making an application package, similar to the ones included in Cozystack
under [`packages/apps`](https://github.com/cozystack/cozystack/tree/main/packages/apps).
Using external packages is enabled by a new CustomResourceDefinition (CRD) called `CozystackResourceDefinition` and
a corresponding controller (reconciler) that watches for these resources.
Add your own managed application using the [documentation](https://cozystack.io/docs/applications/external/)
and an example at [github.com/cozystack/external-apps-example](https://github.com/cozystack/external-apps-example).
<!--
* [platform] Enable using external application packages by adding a `CozystackResourceDefinition` reconciler. Read the documentation on [adding external applications to Cozystack](https://cozystack.io/docs/applications/external/) to learn more. (@klinch0 in https://github.com/cozystack/cozystack/pull/1313)
* [cozystack-api] Provide an API for administrators to define custom managed applications alongside existing managed apps. (@klinch in https://github.com/cozystack/cozystack/pull/1230)
-->
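For a rough idea of what such a definition looks like, here is a minimal sketch. The field names under `spec` are illustrative assumptions, not a verified schema; the documentation and example repository linked above are the source of truth.

```yaml
# Illustrative sketch only: the spec layout below is assumed, not verified.
# See https://cozystack.io/docs/applications/external/ and
# https://github.com/cozystack/external-apps-example for working examples.
apiVersion: cozystack.io/v1alpha1
kind: CozystackResourceDefinition
metadata:
  name: myapp
spec:
  application:
    kind: MyApp          # kind exposed through the Cozystack API (assumed key)
    plural: myapps       # plural resource name (assumed key)
    singular: myapp      # singular resource name (assumed key)
  release:
    prefix: myapp-       # prefix for generated HelmReleases (assumed key)
    chart:
      name: myapp        # chart packaged like the ones under packages/apps
      sourceRef:
        kind: HelmRepository
        name: external-apps    # hypothetical repository holding the packaged chart
        namespace: cozy-public
```

Once such a definition is applied, the new application kind appears in the Cozystack API and dashboard alongside the built-in ones.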
### Cozystack API Improvements
This release brings significant improvements to the OpenAPI specs for all managed applications in Cozystack,
including databases, tenant Kubernetes, virtual machines, monitoring, and others.
These changes include more precise type definitions for fields that were previously defined only as generic objects,
and many fields now have value constraints.
As a result, many possible misconfigurations are detected immediately at API request time, rather than surfacing later as a failed deployment.
The Cozystack API now also displays default values for application resources, and most fields come with sane defaults wherever possible.
All these changes pave the way for the new Cozystack UI, which is currently under development.
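As a rough illustration of the effect, the manifest below sets only a couple of fields and leaves the rest to the defaults now published by the API. The group, kind, and field names are assumptions made for this example; each application's actual schema is documented in its reference page.

```yaml
# Illustrative sketch, not a verified schema: the group, kind, and fields are assumed.
# Type errors and out-of-range values are now rejected when the manifest is applied,
# instead of surfacing later as a failed deployment.
apiVersion: apps.cozystack.io/v1alpha1   # assumed API group
kind: Postgres                           # assumed application kind
metadata:
  name: mydb
  namespace: tenant-example
spec:
  replicas: 2            # validated against the typed OpenAPI schema
  # everything omitted here (storage size, resource preset, backups, ...)
  # is filled in with the defaults exposed by the Cozystack API
```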
### Hetzner RobotLB Support
MetalLB, the default load balancer included in Cozystack, is built for bare metal and self-hosted VMs,
but is not supported on most cloud providers.
For example, Hetzner provides its own RobotLB service, which Cozystack now supports as an optional component.
Read the updated guide on [deploying Cozystack on Hetzner.com](https://cozystack.io/docs/install/providers/hetzner/)
to learn more and deploy your own Cozystack cluster on Hetzner.
### S3 Service: Dedicated Clusters and Monitoring
You can now deploy dedicated Cozystack clusters to run the S3 service, powered by SeaweedFS.
Thanks to the support for [integration with remote filer endpoints](https://cozystack.io/docs/operations/stretched/seaweedfs-multidc/),
you can connect your primary Cozystack cluster to use S3 storage in a dedicated cluster.
For security, platform administrators can now configure the SeaweedFS application with
a list of IP addresses or CIDR ranges that are allowed to access the filer service.
SeaweedFS has also been integrated into the monitoring stack and now has its own Grafana dashboard.
Together, these enhancements help Cozystack users build a more reliable, scalable, and observable S3 service.
### ClickHouse Keeper
The ClickHouse application now includes a ClickHouse Keeper service to improve cluster reliability and availability.
This component is deployed by default with every ClickHouse cluster.
Learn more in the [ClickHouse configuration reference](https://cozystack.io/docs/applications/clickhouse/#clickhouse-keeper-parameters).
## Major Features and Improvements
* [platform] Enable using external application packages by adding a `CozystackResourceDefinition` reconciler. Read the documentation on [adding external applications to Cozystack](https://cozystack.io/docs/applications/external/) to learn more. (@klinch0 in https://github.com/cozystack/cozystack/pull/1313)
* [cozystack-api, apps] Add default values, clear type definitions, value constraints and other improvements to the OpenAPI specs and READMEs by migrating to [cozyvalue-gen](https://github.com/cozystack/cozyvalues-gen). (@kvaps and @NickVolynkin in https://github.com/cozystack/cozystack/pull/1216, https://github.com/cozystack/cozystack/pull/1314, https://github.com/cozystack/cozystack/pull/1316, https://github.com/cozystack/cozystack/pull/1321, and https://github.com/cozystack/cozystack/pull/1333)
* [cozystack-api] Show default values from the OpenAPI spec in the application resources. (@kvaps in https://github.com/cozystack/cozystack/pull/1241)
* [cozystack-api] Provide an API for administrators to define custom managed applications alongside existing managed apps. (@klinch0 in https://github.com/cozystack/cozystack/pull/1230)
* [robotlb] Introduce the Hetzner RobotLB balancer. (@IvanHunters and @gwynbleidd2106 in https://github.com/cozystack/cozystack/pull/1233)
* [platform, robotlb] Autodetect if node ports should be assigned to load balancer services. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1271)
* [seaweedfs] Enable [integration with remote filer endpoints](https://cozystack.io/docs/operations/stretched/seaweedfs-multidc/) by adding new `Client` topology. (@kvaps in https://github.com/cozystack/cozystack/pull/1239)
* [seaweedfs] Add support for whitelisting and exporting via nginx-ingress. Update cosi-driver. (@kvaps in https://github.com/cozystack/cozystack/pull/1277)
* [monitoring, seaweedfs] Add monitoring and Grafana dashboard for SeaweedFS. (@IvanHunters in https://github.com/cozystack/cozystack/pull/1285)
* [clickhouse] Add the ClickHouse Keeper component. (@klinch0 in https://github.com/cozystack/cozystack/pull/1298 and https://github.com/cozystack/cozystack/pull/1320)
## Security
* [keycloak] Store administrative passwords in the management cluster's secrets. (@IvanHunters in https://github.com/cozystack/cozystack/pull/1286)
* [keycloak] Update Keycloak client redirect URI to use HTTPS instead of HTTP. Enable `cookie-secure`. (@klinch0 in https://github.com/cozystack/cozystack/pull/1287)
## Fixes
* [platform] Introduce a fixed 2-second delay at the start of reconciliation for system and tenant Helm operations. (@klinch0 in https://github.com/cozystack/cozystack/pull/1343)
* [kubernetes] Add dependency for snapshot CRD and migration to the latest version. (@kvaps in https://github.com/cozystack/cozystack/pull/1275)
* [kubernetes] Fix regression in `volumesnapshotclass` installation from https://github.com/cozystack/cozystack/pull/1203. (@kvaps in https://github.com/cozystack/cozystack/pull/1238)
* [kubernetes] Resolve problems with pod names exceeding allowed length by shortening the name of volume snapshot CRD from `*-volumesnapshot-crd-for-tenant-k8s` to `*-vsnap-crd`. To apply this change, update each affected tenant Kubernetes cluster after updating Cozystack. (@klinch0 in https://github.com/cozystack/cozystack/pull/1284)
* [kubernetes] Disable VPA for VPA in tenant Kubernetes clusters. Tenant clusters have no need for this feature, and it was not designed to work in a tenant cluster, but was enabled by mistake. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1301 and https://github.com/cozystack/cozystack/pull/1318)
* [kamaji] Fix broken migration jobs originating from missing environment variables in the in-tree build. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1338)
* [etcd] Fix the `topologySpreadConstraints` for etcd. (@klinch0 in https://github.com/cozystack/cozystack/pull/1331)
* [tenant] Fix tenant network policy to allow traffic to additional tenant-related services across namespace hierarchies. (@klinch0 in https://github.com/cozystack/cozystack/pull/1232)
* [tenant, monitoring] Improve the reliability of tenant monitoring by increasing the timeout and number of retries. (@IvanHunters in https://github.com/cozystack/cozystack/pull/1294)
* [kubevirt] Fix building KubeVirt CCM image. (@kvaps in https://github.com/cozystack/cozystack/commit/3c7e256906e1dbb0f957dc3a205fa77a147d419d)
* [virtual-machine] Fix a regression with `optional=true` field. (@kvaps in https://github.com/cozystack/cozystack/commit/01053f7c3180d1bd045d7c5fb949984c2bdaf19d)
* [virtual-machine] Enable using custom `instanceType` values in `virtual-machine` and `vm-instance` by disabling field validation. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1300, backported in https://github.com/cozystack/cozystack/pull/1303)
* [cozystack-api] Show correct `kind` values of `ApplicationList`. (@kvaps in https://github.com/cozystack/cozystack/pull/1290)
* [cozystack-api] Add missing roles to allow cozystack-controller to read Kubernetes deployments. (@klinch0 in https://github.com/cozystack/cozystack/pull/1342)
* [linstor] Update LINSTOR monitoring configuration to use label `controller_node` instead of `node`. (@kvaps in https://github.com/cozystack/cozystack/pull/1326 and https://github.com/cozystack/cozystack/pull/1335)
* [seaweedfs] Fix SeaweedFS volume configuration. Increase the volume size limit from 100MB to 30,000MB. (@kvaps in https://github.com/cozystack/cozystack/pull/1328)
* [seaweedfs] Disable proxy buffering and proxy request buffering for ingress. (@kvaps in https://github.com/cozystack/cozystack/pull/1330)
## Dependencies
* Update flux-operator to 0.28.0. (@kingdonb in https://github.com/cozystack/cozystack/pull/1315 and https://github.com/cozystack/cozystack/pull/1344)
## Documentation
* [Reimplement Cozystack Roadmap as a GitHub project](https://github.com/orgs/cozystack/projects/1). (@cozystack team)
* [SeaweedFS Multi-DC Configuration](https://cozystack.io/docs/operations/stretched/seaweedfs-multidc/). (@kvaps and @NickVolynkin in https://github.com/cozystack/website/pull/272)
* [Troubleshooting Kube-OVN](https://cozystack.io/docs/operations/troubleshooting/#kube-ovn-crash). (@kvaps and @NickVolynkin in https://github.com/cozystack/website/pull/273)
* [Removing failed nodes from Cozystack cluster](https://cozystack.io/docs/operations/troubleshooting/#remove-a-failed-node-from-the-cluster). (@kvaps and @NickVolynkin in https://github.com/cozystack/website/pull/273)
* [Installing Talos with `kexec`](https://cozystack.io/docs/talos/install/kexec/). (@kvaps and @NickVolynkin in https://github.com/cozystack/website/pull/268)
* [Rewrite Cozystack tutorial](https://cozystack.io/docs/getting-started/). (@NickVolynkin in https://github.com/cozystack/website/pull/262 and https://github.com/cozystack/website/pull/268)
* [How to install Cozystack in Hetzner](https://cozystack.io/docs/install/providers/hetzner/). (@NickVolynkin and @IvanHunters in https://github.com/cozystack/website/pull/280)
* [Adding External Applications to Cozystack Catalog](https://cozystack.io/docs/applications/external/). (@klinch0 and @NickVolynkin in https://github.com/cozystack/website/pull/283)
* [Creating and Using Named VM Images (Golden Images)](https://cozystack.io/docs/virtualization/vm-image/) (@NickVolynkin and @kvaps in https://github.com/cozystack/website/pull/276)
* [Creating Encrypted Storage on LINSTOR](https://cozystack.io/docs/operations/storage/disk-encryption/). (@kvaps and @NickVolynkin in https://github.com/cozystack/website/pull/282)
* [Adding and removing components on Cozystack installation using `bundle-enable` and `bundle-disable`](https://cozystack.io/docs/operations/bundles/#how-to-enable-and-disable-bundle-components) (@NickVolynkin in https://github.com/cozystack/website/pull/281)
* Restructure Cozystack documentation. Bring [managed Kubernetes](https://cozystack.io/docs/kubernetes/), [managed applications](https://cozystack.io/docs/applications/), [virtualization](https://cozystack.io/docs/virtualization/), and [networking](https://cozystack.io/docs/networking/) guides to the top level. (@NickVolynkin in https://github.com/cozystack/website/pull/266)
## Development, Testing, and CI/CD
* [tests] Add tests for S3 buckets. (@IvanHunters in https://github.com/cozystack/cozystack/pull/1283)
* [tests, ci] Simplify test discovery logic; run two k8s tests as separate jobs; delete Clickhouse application after a successful test. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1236)
* [dx] When running `make` commands with `BUILDER` value specified, `PLATFORM` is optional. (@kvaps in https://github.com/cozystack/cozystack/pull/1288)
* [tests] Fix resource specification in virtual machine tests. (@IvanHunters in https://github.com/cozystack/cozystack/pull/1308)
* [tests] Increase available space for e2e tests. (@kvaps in https://github.com/cozystack/cozystack/commit/168a24ffdf1202b3bf2e7d2b5ef54b72b7403baf)
* [tests, ci] Continue application tests after one of them fails. (@NickVolynkin in https://github.com/cozystack/cozystack/commit/634b77edad6c32c101f3e5daea6a5ffc0c83d904)
* [ci] Use a subdomain of aenix.org for Nexus service in CI. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1322)
---
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.34.0...v0.35.0

@@ -1,10 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.35.1
-->
## Fixes
* [cozy-lib] Fix malformed retrieval of `cozyConfig` in the cozy-lib template. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1348)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.35.0...v0.35.1

@@ -1,22 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.35.2
-->
## Features and Improvements
* [talos] Add LLDPD (`ghcr.io/siderolabs/lldpd`) as a built-in system extension, enabling LLDP-based neighbor discovery out of the box. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1351 and https://github.com/cozystack/cozystack/pull/1360)
## Fixes
* [cozystack-api] Sanitize the OpenAPI v2 schema. (@kvaps in https://github.com/cozystack/cozystack/pull/1353)
* [seaweedfs] Fix a problem where S3 gateway would be moved to an external pod, resulting in authentication failure. (@kvaps in https://github.com/cozystack/cozystack/pull/1361)
## Dependencies
* Update LINSTOR to v1.31.3. (@kvaps in https://github.com/cozystack/cozystack/pull/1358)
* Update SeaweedFS to v3.96. (@kvaps in https://github.com/cozystack/cozystack/pull/1361)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.35.1...v0.35.2

@@ -1,10 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.35.3
-->
## Fixes
* [seaweedfs] Add a liveness check for the SeaweedFS S3 endpoint to improve health monitoring and enable automatic recovery. (@IvanHunters in https://github.com/cozystack/cozystack/pull/1368)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.35.2...v0.35.3

@@ -1,14 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.35.4
-->
## Fixes
* [virtual-machine] Fix the regression in VM update hook introduced in https://github.com/cozystack/cozystack/pull/1169 by targeting the correct API resource and avoiding conflicts with KubeVirt resources. (@kvaps in https://github.com/cozystack/cozystack/pull/1376, backported in https://github.com/cozystack/cozystack/pull/1377)
* [cozy-lib] Add the missing template `cozy-lib.resources.flatten`. (@kvaps in https://github.com/cozystack/cozystack/pull/1372, backported in https://github.com/cozystack/cozystack/pull/1375)
* [platform] Fix a boolean override bug in Helm merge. ConfigMap values now correctly take precedence over bundle defaults. (@dyudin0821 in https://github.com/cozystack/cozystack/pull/1385, backported in https://github.com/cozystack/cozystack/pull/1388)
* [seaweedfs] Resolve connectivity issues in SeaweedFS. Increase Nginx ingress timeouts for SeaweedFS S3 endpoint. (@kvaps in https://github.com/cozystack/cozystack/pull/1386, backported in https://github.com/cozystack/cozystack/pull/1390)
* [dx] Remove the BUILDER and PLATFORM autodetect logic in Makefiles. (@kvaps in https://github.com/cozystack/cozystack/pull/1391, backported in https://github.com/cozystack/cozystack/pull/1392)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.35.3...v0.35.4

@@ -1,11 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.35.5
-->
## Fixes
* [etcd] Ensure that TopologySpreadConstraints consistently target etcd pods. (@kvaps in https://github.com/cozystack/cozystack/pull/1405, backported in https://github.com/cozystack/cozystack/pull/1406)
* [tests] Add resource quota for testing namespaces. (@IvanHunters in https://github.com/cozystack/cozystack/commit/4982cdf5024c8bb9aa794b91d55545ea6b105d17)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.35.4...v0.35.5

@@ -1,117 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.36.0
-->
## Feature Highlights
Release v0.36.0 focuses on the stability, observability, and flexible configuration of managed applications.
### Per-Namespace Resource Limits for Tenants
Resource management for Cozystack tenants has received its final patch and has now graduated to a stable feature.
Platform administrators can define explicit CPU, memory, and storage limits for each tenant's namespace
via the tenant specification.
This prevents any single tenant from consuming more than their share of cluster resources,
ensuring cluster stability and a guaranteed service level for each tenant.
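As a minimal sketch, assuming quota keys along these lines (check the tenant application reference for the actual names), a tenant with explicit limits could be declared as:

```yaml
# Illustrative sketch: the quota keys below are assumptions, not a verified schema.
apiVersion: apps.cozystack.io/v1alpha1   # assumed API group
kind: Tenant
metadata:
  name: team-a
spec:
  resourceQuotas:        # assumed key for the per-namespace limits
    cpu: "16"            # total CPU available to the tenant's namespace
    memory: 64Gi         # total memory
    storage: 500Gi       # total persistent storage
```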
### Kube-OVN Cluster Health Monitor
A new component called the Kube-OVN Plunger continuously monitors the health of the Kube-OVN network's central control cluster.
This external agent gathers OVN cluster status and consensus information, exposing Prometheus metrics and a live event stream via SSE.
As a result, it provides much better visibility into the virtual network layer and helps keep the Cozystack network reliable and observable.
This change also opens the way to automated Kube-OVN database operations and recovery in specific corner cases.
### Configurable CoreDNS Addon for Kubernetes
Cozystack introduces a dedicated CoreDNS addon for managing cluster DNS with greater flexibility.
CoreDNS is now deployed via a Helm chart and can be tuned through custom values in the cluster specification,
including autoscaling, the replica count, and the service IP.
CoreDNS can now be configured both in the dashboard and through the Cozystack API.
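A minimal sketch of what such tuning might look like in a tenant Kubernetes spec is shown below; the addon key and value names are assumptions, except for `valuesOverride`, which is mentioned in the change list further down.

```yaml
# Illustrative sketch: addon and value names are assumed; consult the managed
# Kubernetes application reference for the actual schema.
apiVersion: apps.cozystack.io/v1alpha1   # assumed API group
kind: Kubernetes
metadata:
  name: demo
spec:
  addons:
    coredns:                    # assumed addon key
      valuesOverride:           # passed through to the packaged CoreDNS chart
        replicaCount: 3
        service:
          clusterIP: 10.95.0.10   # hypothetical custom service IP
```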
### Granular SeaweedFS Service Configuration
The SeaweedFS S3 storage service in Cozystack is now far more configurable at a component level.
The Helm chart for SeaweedFS now includes independent configuration for each component and its resources.
Configurable components include the master nodes, volume servers with support for multiple zones, filers, the backing database, and the S3 gateway.
Administrators can set per-component parameters such as the number of replicas, available CPU, memory, and storage size.
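A sketch of such a per-component layout, under assumed field names (the SeaweedFS application reference has the real schema), might look like:

```yaml
# Illustrative sketch: component and field names below are assumptions.
apiVersion: apps.cozystack.io/v1alpha1   # assumed API group
kind: SeaweedFS                          # assumed application kind
metadata:
  name: s3
spec:
  master:
    replicas: 3
  volume:
    zones:                               # hypothetical multi-zone layout
      zone-a: { replicas: 2, size: 500Gi }
      zone-b: { replicas: 2, size: 500Gi }
  filer:
    replicas: 2
  db:
    size: 20Gi
  s3:
    replicas: 2
```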
### Server-side Encryption for S3
Cozystack v0.36.0 includes SeaweedFS 3.97, bringing support for server-side encryption of S3 buckets (SSE-C, SSE-KMS, and SSE-S3).
**Breaking change:** upon updating Cozystack, SeaweedFS will be updated to a newer version, and the services specification
will be converted to the new format.
### Custom Resource Profiles for Ingress Controller
The NGINX Ingress Controller is now configurable on a per-replica basis.
Configurations include the ingress controller pods' CPU and memory requests/limits, either with direct values or using one of the available presets.
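A minimal sketch, with assumed field names (see the ingress application reference for the real keys and the list of presets):

```yaml
# Illustrative sketch: field names are assumptions, not a verified schema.
apiVersion: apps.cozystack.io/v1alpha1   # assumed API group
kind: Ingress                            # assumed kind for the tenant ingress controller
metadata:
  name: ingress
spec:
  replicas: 3
  resourcesPreset: medium    # one of the documented presets...
  # ...or explicit per-replica requests/limits instead (assumed keys):
  # resources:
  #   cpu: "2"
  #   memory: 4Gi
```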
### Cozystack REST API Documentation
[Cozystack REST API reference](https://cozystack.io/docs/cozystack-api/rest/) is now published on the website.
It includes endpoints and methods for listing, creating, updating, and removing each managed application, defined as Cozystack CRD.
### Built-in LLDP-Based Neighbor Discovery in Talos
Cozystack now includes the LLDPD extension in its Talos OS image, enabling Link Layer Discovery Protocol (LLDP) out of the box.
This means each node can automatically discover and advertise its network neighbors and topology without any manual setup.
### Use external IP for Egress Traffic in VMs
When a virtual machine has an external IP assigned, it now always uses that IP for egress traffic, regardless of the external access method used.
## Major Features and Improvements
* [talos] Add LLDPD (`ghcr.io/siderolabs/lldpd`) as a built-in system extension, enabling LLDP-based neighbor discovery out of the box. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1351 and https://github.com/cozystack/cozystack/pull/1360)
* [kubernetes] Add a configurable CoreDNS addon with valuesOverride, packaged chart, and managed deployment (metrics, autoscaling, HPA, customizable Service). (@klinch0 in https://github.com/cozystack/cozystack/pull/1362)
* [kube-ovn] Implement the Kube-OVN plunger, an external monitoring agent for the ovn-central cluster. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1380, patched in https://github.com/cozystack/cozystack/pull/1414 and https://github.com/cozystack/cozystack/pull/1418)
* [tenant] Enable per-namespace resource quota settings in tenants, with explicit cpu, memory, and storage values. (@IvanHunters in https://github.com/cozystack/cozystack/pull/1389)
* [seaweedfs] Add detailed resource configuration for each component of the SeaweedFS service. (@klinch0 and @kvaps in https://github.com/cozystack/cozystack/pull/1415)
* [ingress] Enable per-replica resource configuration for the ingress controller. (@kvaps in https://github.com/cozystack/cozystack/pull/1416)
* [virtual-machine] Use external IP for egress traffic with `PortList` method. (@kvaps in https://github.com/cozystack/cozystack/pull/1349)
## Fixes
* [cozy-lib] Fix malformed retrieval of `cozyConfig` in the cozy-lib template. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1348)
* [cozy-lib] Add the missing template `cozy-lib.resources.flatten`. (@kvaps in https://github.com/cozystack/cozystack/pull/1372)
* [cozystack-api] Sanitize the OpenAPI v2 schema. (@kvaps in https://github.com/cozystack/cozystack/pull/1353)
* [kube-ovn] Improve northd leader detection. Patch the northd leader check to test against all endpoints instead of just the first one marked as ready. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1363)
* [seaweedfs] Add a liveness check for the SeaweedFS S3 endpoint to improve health monitoring and enable automatic recovery. (@IvanHunters in https://github.com/cozystack/cozystack/pull/1368)
* [seaweedfs] Resolve race conditions in SeaweedFS. Increase deployment timeouts and set install/upgrade remediation to unlimited retries to improve deployment resilience. (@IvanHunters in https://github.com/cozystack/cozystack/pull/1371)
* [seaweedfs] Resolve connectivity issues in SeaweedFS. Increase Nginx ingress timeouts for SeaweedFS S3 endpoint. (@kvaps in https://github.com/cozystack/cozystack/pull/1386)
* [virtual-machine] Fix the regression in VM update hook introduced in https://github.com/cozystack/cozystack/pull/1169. Target the correct API resource and avoid conflicts with KubeVirt resources. (@kvaps in https://github.com/cozystack/cozystack/pull/1376)
* [virtual-machine] Correct app version references in `virtual-machine` and `vm-instance`, ensuring accurate versioning during migrations. (@kvaps in https://github.com/cozystack/cozystack/pull/1378).
* [cozyreport] Fix an error where cozyreport tried to parse non-existent objects and generated garbage output in CI debug logs. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1383)
* [platform] Fix a boolean override bug in Helm merge. ConfigMap values now correctly take precedence over bundle defaults. (@dyudin0821 in https://github.com/cozystack/cozystack/pull/1385)
* [kubernetes] CoreDNS release now installs and stores state in the `kube-system` namespace. (@kvaps in https://github.com/cozystack/cozystack/pull/1395)
* [kubernetes] Expose configuration for CoreDNS, enabling setting the image repository and replica count via `values.yaml`. (@kvaps in https://github.com/cozystack/cozystack/pull/1410)
* [etcd] Ensure that TopologySpreadConstraints consistently target etcd pods. (@kvaps in https://github.com/cozystack/cozystack/pull/1405)
* [tenant] Use force-upgrade for ingress controller charts. (@klinch0 in https://github.com/cozystack/cozystack/pull/1404)
* [cozystack-controller] Fix an RBAC error that prevented the workload labelling feature from working. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1419)
* [seaweedfs] Remove VerticalPodAutoscaler for SeaweedFS. (@kvaps in https://github.com/cozystack/cozystack/pull/1421)
## Dependencies
* Update LINSTOR to v1.31.3. (@kvaps in https://github.com/cozystack/cozystack/pull/1358)
* Update SeaweedFS to v3.97. (@kvaps in https://github.com/cozystack/cozystack/pull/1361 and https://github.com/cozystack/cozystack/pull/1373)
* Update Kube-OVN to 1.14.5. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1363)
* Replace Bitnami images with alternatives in all charts. (@kvaps in https://github.com/cozystack/cozystack/pull/1374)
## Documentation
## Development, Testing, and CI/CD
* [dx] Remove the BUILDER and PLATFORM autodetect logic in Makefiles. (@kvaps in https://github.com/cozystack/cozystack/pull/1391)
* [ci] Use the host buildx config in CI. (@kvaps in https://github.com/cozystack/cozystack/pull/1015)
* [ci] Add `jq` and `git` to the installer image. (@kvaps in https://github.com/cozystack/cozystack/pull/1417)
* [ci] Source the `REGISTRY` environment variable from actions' variables, not secrets, so external pull requests can work. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1423)
---
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.35.0...v0.36.0

@@ -1,22 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.36.1
-->
## Major Features and Improvements
* [cozystack-api] Implement recursive, Kubernetes-like defaulting for applications: missing fields in nested objects and arrays are auto-populated safely without mutating shared defaults. (@kvaps in https://github.com/cozystack/cozystack/pull/1432)
## Fixes
* [cozystack-api] Update defaulting API schemas. (@kvaps in https://github.com/cozystack/cozystack/pull/1433)
* [dashboard] Fix Bitnami dependencies. (@kvaps in https://github.com/cozystack/cozystack/pull/1431)
* [seaweedfs] Fix SeaweedFS migration. (@kvaps in https://github.com/cozystack/cozystack/pull/1430)
## Development, Testing, and CI/CD
* [adopters] Add [Hidora](https://hikube.cloud) to the Cozystack adopters list. (@matthieu-robin in https://github.com/cozystack/cozystack/pull/1429)
---
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.36.0...v0.36.1

@@ -1,18 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.36.2
-->
## Features and Improvements
## Security
## Fixes
## Dependencies
## Development, Testing, and CI/CD
---
**Full Changelog**: [v0.36.1...v0.36.2](https://github.com/cozystack/cozystack/compare/v0.36.1...v0.36.2)

@@ -1,117 +0,0 @@
# Cozystack v0.37 — “OpenAPI Dashboard & Lineage Everywhere”
We've shipped a big usability push this cycle: a brand-new **OpenAPI-driven dashboard**, lineage labeling across core resource types, and several reliability improvements to smooth upgrades from 0.36 → 0.37. Below are the highlights and the full categorized lists.
## Highlights
* **New OpenAPI-based Dashboard** replaces the old UI, adds module-aware navigation, dynamic branding, and richer Kubernetes resource views ([**@kvaps**](https://github.com/kvaps) in #1269, #1463, #1460).
* **Lineage Webhook** tags Pods, PVCs, Services, Ingresses, and Secrets, adding labels referencing the managing Cozystack application ([**@lllamnyp**](https://github.com/lllamnyp) in #1448, #1452, #1477, #1486, #1497; [**@kvaps**](https://github.com/kvaps) in #1454).
* **Smoother upgrades** with installer and migration hardening, decoupled CRDs vs. API server ([**@lllamnyp**](https://github.com/lllamnyp) in #1494, #1498; [**@kvaps**](https://github.com/kvaps) in #1506).
* **Operations quality**: Kubernetes tests with smarter waits/readiness checks ([**@IvanHunters**](https://github.com/IvanHunters) in #1485).
---
## New features
### Dashboard
* Introduce the OpenAPI-based dashboard and controller; implement TenantNamespace, TenantModules, TenantSecret/SecretsTable resources ([**@kvaps**](https://github.com/kvaps) in #1269).
* Module-aware navigation, richer detail views (Services/Secrets/Ingresses), improved sidebars; “Tenant Modules” grouping ([**@kvaps**](https://github.com/kvaps) in #1463).
* Dynamic branding via cluster config (tenant name, footer/title, logo/icon SVGs) ([**@kvaps**](https://github.com/kvaps) in #1460).
* Dashboard: fix namespace listing for unprivileged users and stabilize streamed requests; build-time patching ([**@kvaps**](https://github.com/kvaps) in #1456).
* Dashboard UX set: marketplace hides module resources; consistent navigation/links; prefill “name” in forms; ingress factory; formatted TenantNamespaces tables ([**@kvaps**](https://github.com/kvaps) in #1463).
* **Dashboard**: list modules reliably; remove Tenant from Marketplace; fix field override while typing ([**@kvaps**](https://github.com/kvaps) in #1501, #1503).
* **Dashboard**: correct API group for applications; sidebars; disable auto-expand; fix `/docs` redirect ([**@kvaps**](https://github.com/kvaps) in #1463, #1465, #1462).
* **Dashboard**: show Secrets with empty values correctly ([**@kvaps**](https://github.com/kvaps) in #1480).
* Dashboard configuration refactor: generate static resources at startup; auto-cleanup stale objects; higher controller client throughput ([**@kvaps**](https://github.com/kvaps) in #1457).
### Migration to v0.37
* **Installer/Migrations**: prevent unintended deletion of platform resource definitions; resilient timestamping; tolerant annotations; stronger migrate-then-reconcile flow ([**@kvaps**](https://github.com/kvaps) in #1475; Andrei Kvapil & [**@lllamnyp**](https://github.com/lllamnyp) in #1498).
* Installer hardening for **migration #20**: the migration now runs from packaged Helm charts with ordered waits and readiness checks, RFC3339(nano) timestamping, tolerant annotations, and Helm shipped in the installer image (Andrei Kvapil & [**@lllamnyp**](https://github.com/lllamnyp) in #1498; [**@kvaps**](https://github.com/kvaps) in #1475).
* **Decoupled API & CozyRDs**: the Cozystack API server can now be upgraded independently of CRDs/CozyRD instances, easing 0.36 → 0.37 migrations ([**@lllamnyp**](https://github.com/lllamnyp) in #1494).
### Webhook / Lineage
* Add a lineage mutating webhook to auto-label Pods/Secrets/PVCs/Ingresses/WorkloadMonitors with owning app ([**@lllamnyp**](https://github.com/lllamnyp) in #1448, #1497, [**@kvaps**](https://github.com/kvaps) in #1454).
* **Name-based** selectors for Secret visibility (templates supported) ([**@lllamnyp**](https://github.com/lllamnyp) in #1477).
* Select **Services** and **Ingresses** in CRDs/API; treat them as user-facing when configured ([**@lllamnyp**](https://github.com/lllamnyp) in #1486).
* **VictoriaMetrics integration**: Lineage labels are explicitly set on VM resources; `managedMetadata` is configured to avoid controller “fights” over labels ([**@lllamnyp**](https://github.com/lllamnyp) in #1452).
* Webhook **excludes** `default` and `kube-system` to avoid unintended mutations (part of the installer/migration hardening by Andrei Kvapil & [**@lllamnyp**](https://github.com/lllamnyp) in #1498).
### API / Platform
* Decouple the Cozystack API from Cozystack Resource Definitions to allow independent upgrades ([**@lllamnyp**](https://github.com/lllamnyp) in #1494).
* Add **label selectors** to app definitions for Secret include/exclude ([**@lllamnyp**](https://github.com/lllamnyp) in #1447).
### Monitoring & Ops
* Reduce node labelsets in target relabeling configs for cadvisor/kubelet metrics, lowering cardinality while keeping useful CPU metrics ([**@IvanHunters**](https://github.com/IvanHunters) in #1455).
### Storage & Backups
* PVC expansion in tenant clusters via KubeVirt CSI resizer; RBAC updates ([**klinch0**](https://github.com/klinch0) in #1438).
* Velero upgraded to **v1.17.0**; node agent enabled by default and a raft of usability features ([**@kvaps**](https://github.com/kvaps) in #1484).
### Kubernetes/tests & Tooling
* Smarter Kubernetes test flows: node readiness checks, kubelet version validation, longer rollout waits, per-component readiness ([**@IvanHunters**](https://github.com/IvanHunters) in #1485).
### UI/Icons
* New **VM-Disk** SVG icon ([**@kvapsova**](https://github.com/kvapsova) in #1435).
---
## Improvements (minor)
* Make the **Info** app deploy irrespective of OIDC settings ([**klinch0**](https://github.com/klinch0) in #1474).
* Move SA token Secret creation to **Info** app ([**@lllamnyp**](https://github.com/lllamnyp) in #1446).
* Explicitly set lineage labels for VictoriaMetrics resources ([**@lllamnyp**](https://github.com/lllamnyp) in #1452).
---
## Bug fixes
* **Kubernetes**: fix MachineDeployment `spec.selector` mismatch to ensure proper targeting ([**@kvaps**](https://github.com/kvaps) in #1502).
* **Old dashboard**: FerretDB spec typo prevented deploy/display ([**@lllamnyp**](https://github.com/lllamnyp) in #1440).
* **SeaweedFS**: fix per-zone size fallback for multi-DC volumes; make migrations more robust ([**@kvaps**](https://github.com/kvaps) in #1476, #1430).
* **CoreDNS**: pin tag to v1.12.4 ([**@kvaps**](https://github.com/kvaps) in #1469).
* **OIDC**: avoid creating KeycloakRealmGroup before operator API is available ([**@lllamnyp**](https://github.com/lllamnyp) in #1495).
* **Kafka**: disable noisy alerts when Kafka isn't deployed ([**@lllamnyp**](https://github.com/lllamnyp) in #1488).
---
## Dependency & version updates
* **Velero → v1.17.0**; Helm chart v11; node agent default-on ([**@kvaps**](https://github.com/kvaps) in #1484).
* **Cilium → v1.17.8** ([**@kvaps**](https://github.com/kvaps) in #1473).
* **Flux Operator → v0.29.0** (Kingdon Barrett in #1466).
---
## Refactors & chores
* Remove legacy `versions_map`; unify packaging targets; tighten HelmRelease defaults; replace many chart versions with build-time placeholders ([**@kvaps**](https://github.com/kvaps) in #1453).
* Pin CoreDNS image and refresh numerous images ([**@kvaps**](https://github.com/kvaps) in #1469; related image refreshes across #1448 work).
---
## Documentation & governance
* **Contributor Ladder** created and later updated (Timur Tukaev in #1224; Andrei Kvapil & Timur Tukaev in #1492).
* **Code of Conduct** updated with a Vendor Neutrality Manifesto (Timur Tukaev in #1493).
* **Adopters**: add Hidora (Matthieu Robin in #1429).
* **MAINTAINERS**: add/remove entries (Nikita Bykov in #1487; Timur Tukaev in #1491).
* **Issue templates**: new bug-report template and tweaks (Moriarti).
* **README**: updated dark-theme screenshot ([**@kvaps**](https://github.com/kvaps) in #1459).
---
## Breaking changes & upgrade notes
---
## Security & stability

@@ -1,31 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.37.1
-->
## Features and Improvements
* **[api] Efficient listing of TenantNamespaces**: Optimized TenantNamespace listing by replacing per-namespace SubjectAccessReview calls with group-based rolebinding checks, significantly reducing API latency and improving performance ([**@lllamnyp**](https://github.com/lllamnyp) in #1507).
## Fixes
* **[api] Fix RBAC for listing of TenantNamespaces and handle system:masters**: Fixed regression in TenantNamespace listing RBAC and added proper handling for system:masters group to ensure correct authorization ([**@kvaps**](https://github.com/kvaps) in #1511).
* **[dashboard] Fix logout**: Fixed dashboard logout functionality to properly clear session and redirect users ([**@kvaps**](https://github.com/kvaps) in #1510).
* **[installer] Add additional check to wait for lineage-webhook**: Added additional readiness check to ensure lineage-webhook is fully ready before proceeding with installation, improving upgrade reliability ([**@kvaps**](https://github.com/kvaps) in #1506).
## Development, Testing, and CI/CD
* **[tests] Make Kubernetes tests POSIX-compatible**: Replaced bash-specific constructs with POSIX-compliant code, ensuring tests work reliably with /bin/sh and improving compatibility across different shell environments ([**@IvanHunters**](https://github.com/IvanHunters) in #1509).
## Documentation
* **[website] Update troubleshooting documentation**: Updated Kubernetes installation troubleshooting guide with additional information and fixes ([**@lb0o**](https://github.com/lb0o) in cozystack/website@82beddd).
* **[website] Add LLDPD disabling documentation**: Added minimal patch documentation for disabling lldpd based on official LLDPD usage guide ([**@lb0o**](https://github.com/lb0o) in cozystack/website@7ec5d7b).
* **[website] Fix typo in utility command**: Fixed typo in utility command documentation ([**@lb0o**](https://github.com/lb0o) in cozystack/website@6c76cb5).
* **[website] Update backup and recovery docs**: Updated backup and recovery documentation with latest information ([**@kvaps**](https://github.com/kvaps) in cozystack/website@2781aa5).
* **[website] Add Troubleshooting checklist**: Added troubleshooting checklist to help users diagnose and resolve common issues ([**@kvaps**](https://github.com/kvaps) in cozystack/website@59fc304).
---
**Full Changelog**: [v0.37.0...v0.37.1](https://github.com/cozystack/cozystack/compare/v0.37.0...v0.37.1)

@@ -1,21 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.37.2
-->
## Features and Improvements
* **[lineage] Separate webhook from cozy controller**: Separated the lineage-controller-webhook from cozystack-controller into a separate daemonset component deployed on all control-plane nodes, reducing API server latency and improving performance by decreasing outgoing API calls. Introduced internal label to track resources already handled by the webhook ([**@lllamnyp**](https://github.com/lllamnyp) in #1515).
## Fixes
* **[api] Fix listing tenantnamespaces for non-oidc users**: Fixed TenantNamespace listing functionality for users not using OIDC authentication, ensuring proper namespace visibility for all authentication methods ([**@kvaps**](https://github.com/kvaps) in #1517, #1519).
## Migration and Upgrades
* **[platform] Better migration for 0.36.2->0.37.2+**: Improved migration script for users upgrading directly from 0.36.2 to 0.37.2+, ensuring the new lineage webhook daemonset is properly deployed and fixing a bug where webhook readiness was not appropriately verified during migration ([**@lllamnyp**](https://github.com/lllamnyp) in #1521, #1522).
---
**Full Changelog**: [v0.37.1...v0.37.2](https://github.com/cozystack/cozystack/compare/v0.37.1...v0.37.2)

View File

@@ -1,45 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.37.3
-->
## Features and Improvements
* **[apps] Make VM service user facing**: Virtual machine services are now marked as user-facing, improving service discovery and visibility in the dashboard ([**@lllamnyp**](https://github.com/lllamnyp) in #1523).
* **[seaweedfs] Allow users to discover their buckets**: Users can now discover and list their S3 buckets in SeaweedFS, improving usability and bucket management ([**@kvaps**](https://github.com/kvaps) in #1528).
* **[seaweedfs] Update SeaweedFS v3.99 and deploy S3 as stacked service**: Updated SeaweedFS to version 3.99 and deployed S3 gateway as a stacked service for better integration and performance ([**@kvaps**](https://github.com/kvaps) in #1562).
* **[dashboard] Show service LB IP**: Fixed JSON path issue to correctly display Service LoadBalancer IPs in the dashboard table view, improving visibility of service endpoints ([**@lllamnyp**](https://github.com/lllamnyp) in #1524).
* **[dashboard] Update openapi-ui v1.0.3 + fixes**: Updated OpenAPI UI to version 1.0.3 with various fixes and improvements ([**@kvaps**](https://github.com/kvaps) in #1564).
* **[kubernetes] Use controlPlane.replicas field**: Fixed managed Kubernetes app to properly use the `controlPlane.replicas` field instead of hardcoding the value, allowing users to configure control plane replica count ([**@lllamnyp**](https://github.com/lllamnyp) in #1556).
* **[monitoring] add settings alert for slack**: Added Slack integration configuration for Alerta alerts, enabling notifications to Slack channels ([**@scooby87**](https://github.com/scooby87) in #1545).
## Fixes
* **[lineage] Check for nil chart in HelmRelease**: Added nil check to prevent crashes when lineage webhook encounters HelmReleases using `chartRef` instead of `chart`, improving stability ([**@lllamnyp**](https://github.com/lllamnyp) in #1525).
* **[kamaji] Respect 3rd party labels**: Applied patch to Kamaji controller to respect third-party labels, preventing reconciliation loops between lineage webhook and Kamaji controller ([**@lllamnyp**](https://github.com/lllamnyp) in #1531, #1534).
* **[redis-operator] Build patched operator in-tree**: Moved Redis operator build into Cozystack organization and patched it to prevent overwriting third-party labels on owned resources ([**@lllamnyp**](https://github.com/lllamnyp) in #1547).
* **[mariadb-operator] Add post-delete job to remove PVCs**: Added post-delete job to automatically remove PersistentVolumeClaims when MariaDB instances are deleted, preventing orphaned storage resources ([**@IvanHunters**](https://github.com/IvanHunters) in #1553).
* **[velero] Set defaultItemOperationTimeout=24h**: Set default item operation timeout to 24 hours for Velero backups, preventing timeouts on large backup operations ([**@kvaps**](https://github.com/kvaps) in #1542).
## Dependencies
* **Update LINSTOR v1.32.3**: Updated LINSTOR to version 1.32.3 with latest features and bug fixes ([**@kvaps**](https://github.com/kvaps) in #1565).
## System Configuration
* **[system] kube-ovn: turn off enableLb**: Disabled load balancer functionality in Kube-OVN configuration ([**@nbykov0**](https://github.com/nbykov0) in #1548).
## Documentation
* **[website] Update LINSTOR documentation**: Updated LINSTOR guide and set failmode=continue for ZFS configurations ([**@kvaps**](https://github.com/kvaps) in cozystack/website@033804e).
* **[website] Update managed apps reference**: Updated managed applications reference documentation ([**@kvaps**](https://github.com/kvaps) in cozystack/website@b886a74).
* **[website] Update external apps documentation**: Updated documentation for external applications ([**@kvaps**](https://github.com/kvaps) in cozystack/website@565dad9).
* **[website] Add naming conventions**: Added naming conventions documentation ([**@kvaps**](https://github.com/kvaps) in cozystack/website@b227abb).
* **[website] Update golden image documentation**: Updated documentation for creating golden images for virtual machines ([**@kvaps**](https://github.com/kvaps) in cozystack/website@34c2f3a, cozystack/website@ef65593).
* **[website] Fix documentation formatting**: Fixed alerts, infoboxes, tabs styles and main page formatting ([**@kvaps**](https://github.com/kvaps) in cozystack/website@e992e97, cozystack/website@b2c4dee).
* **[website] Fix typo in blog article**: Fixed typo in blog article ([**@kvaps**](https://github.com/kvaps) in cozystack/website@0a4bbf3).
---
**Full Changelog**: [v0.37.2...v0.37.3](https://github.com/cozystack/cozystack/compare/v0.37.2...v0.37.3)

View File

@@ -1,29 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.37.4
-->
## Features and Improvements
* **[tenant] Allow listing workloads**: Enabled listing of workloads for tenants, improving visibility and management of tenant resources ([**@kvaps**](https://github.com/kvaps) in #1576, #1577).
## Fixes
* **[seaweedfs] Fix migration to v3.99**: Fixed migration issues when upgrading SeaweedFS to version 3.99, ensuring smooth upgrades ([**@kvaps**](https://github.com/kvaps) in #1572, #1575).
* **[nats] Merge container spec, not podTemplate**: Fixed NATS configuration to properly merge container specifications instead of podTemplate, ensuring correct container configuration ([**@lllamnyp**](https://github.com/lllamnyp) in #1571, #1574).
## Development, Testing, and CI/CD
* **[e2e] Increase Kubernetes connection timeouts**: Increased connection and request timeouts in E2E tests when communicating with Kubernetes API, improving test stability under high load and slow cluster response conditions ([**@IvanHunters**](https://github.com/IvanHunters) in #1570, #1573).
## Documentation
* **[website] Optimize website for mobile devices**: Improved website layout and responsiveness for mobile devices ([**@kvaps**](https://github.com/kvaps) in cozystack/website@3ab2338).
* **[website] Add OpenAPI UI**: Added OpenAPI UI documentation and integration ([**@kvaps**](https://github.com/kvaps) in cozystack/website@b1c1668).
* **[website] Update Cozystack video in hero banner**: Updated hero banner with new Cozystack video ([**@kvaps**](https://github.com/kvaps) in cozystack/website@e351137).
* **[website] Add screenshots carousel**: Added screenshots carousel to showcase Cozystack features ([**@kvaps**](https://github.com/kvaps) in cozystack/website@8422bd0).
---
**Full Changelog**: [v0.37.3...v0.37.4](https://github.com/cozystack/cozystack/compare/v0.37.3...v0.37.4)

View File

@@ -1,28 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.37.5
-->
## Features and Improvements
* **[dashboard-controller] Move badges generation logic to internal dashboard component**: Moved badges generation logic to internal dashboard component for better code organization and maintainability ([**@kvaps**](https://github.com/kvaps) in #1567).
## Security
* **[redis] Bump Redis image version for security fixes**: Updated Redis image version to include latest security fixes, improving cluster security ([**@IvanHunters**](https://github.com/IvanHunters) in #1580).
* **[flux] Close Flux Operator ports to external access**: Removed hostPort and hostNetwork from Flux Operator Deployment, ensuring ports 8080 and 8081 are only accessible within the cluster, preventing external exposure and improving security ([**@IvanHunters**](https://github.com/IvanHunters) in #1581).
* **[ingress] Enforce HTTPS-only for API**: Added force-ssl-redirect annotation to default API Ingress, ensuring all HTTP traffic is redirected to HTTPS, preventing unencrypted external access and improving security ([**@IvanHunters**](https://github.com/IvanHunters) in #1582, #1585).
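For reference, the annotation in question is the standard ingress-nginx `force-ssl-redirect` annotation. A minimal sketch of applying it manually to some other Ingress follows; the Ingress name and namespace are placeholders, not part of this release:

```bash
# Hypothetical example: Ingress name and namespace are placeholders;
# only the force-ssl-redirect annotation itself comes from this release note.
kubectl annotate ingress my-ingress -n my-namespace \
  nginx.ingress.kubernetes.io/force-ssl-redirect=true --overwrite
```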
## Fixes
* **[nats] Fixes for NATS App Helm chart, fix template issues with config.merge**: Fixed template issues in NATS Helm chart related to config.merge value, ensuring correct configuration ([**@insignia96**](https://github.com/insignia96) in #1583, #1591).
* **[kubevirt] Fix: kubevirt metrics rule**: Fixed KubeVirt metrics rule configuration ([**@kvaps**](https://github.com/kvaps) in #1584, #1588).
## System Configuration
* **[core] rm talos lldp extension**: Removed Talos LLDP extension from core configuration ([**@nbykov0**](https://github.com/nbykov0) in #1586).
---
**Full Changelog**: [v0.37.4...v0.37.5](https://github.com/cozystack/cozystack/compare/v0.37.4...v0.37.5)

View File

@@ -1,30 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.37.6
-->
## Features and Improvements
* **[api] Use shared informer cache**: Optimized API server by using shared informer cache, reducing API server load and improving performance ([**@lllamnyp**](https://github.com/lllamnyp) in #1539).
* **[dashboard] sync with upstream & enhancements**: Synchronized dashboard with upstream and added various enhancements ([**@kvaps**](https://github.com/kvaps) in #1603).
* **[cozystack-api][dashboard] Fix filtering for application services/ingresses/secrets**: Fixed filtering functionality for application services, ingresses, and secrets in both API and dashboard ([**@kvaps**](https://github.com/kvaps) in #1612).
## Fixes
* **[controller] Remove crdmem, handle DaemonSet**: Removed crdmem and improved DaemonSet handling in controller ([**@lllamnyp**](https://github.com/lllamnyp) in #1555).
* **[dashboard] Revert reconciler removal**: Reverted reconciler removal to restore proper dashboard functionality ([**@lllamnyp**](https://github.com/lllamnyp) in #1559).
* **[dashboard-controller] Fix static resources reconciliation and showing secrets**: Fixed static resources reconciliation and improved secret display in dashboard controller ([**@kvaps**](https://github.com/kvaps) in #1605).
* **[api,lineage] Ensure node-local traffic**: Ensured node-local traffic handling for API and lineage components ([**@lllamnyp**](https://github.com/lllamnyp) in #1606).
* **[virtual-machine] Revert per-vm network policies**: Reverted per-VM network policies to previous behavior ([**@lllamnyp**](https://github.com/lllamnyp) in #1611).
* **[cozy-lib] Fix: handling resources=nil**: Fixed handling of nil resources in cozy-lib templates ([**@kvaps**](https://github.com/kvaps) in #1607).
* **[nats] Use dig function to check for existing secret and prevent nil indexing**: Fixed NATS app chart to use dig function for checking existing secrets and prevent nil indexing errors ([**@kvaps**](https://github.com/kvaps) in #1609, #1610).
## Development, Testing, and CI/CD
* **[cozystack-controller] improve API tests**: Improved API tests for cozystack-controller ([**@lllamnyp**](https://github.com/lllamnyp) in #1599).
* **[kubernetes] Helm hooks for cleanup**: Added Helm hooks for cleanup operations in Kubernetes app ([**@lllamnyp**](https://github.com/lllamnyp) in #1616).
---
**Full Changelog**: [v0.37.5...v0.37.6](https://github.com/cozystack/cozystack/compare/v0.37.5...v0.37.6)

View File

@@ -1,18 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.37.7
-->
## Fixes
* **[kubernetes] Cleanup loadbalancer services**: Added cleanup functionality for load balancer services in Kubernetes app ([**@lllamnyp**](https://github.com/lllamnyp) in #1622).
* **[rbac] Fix permissions for high-privilege users**: Fixed RBAC permissions for high-privilege users, ensuring proper access control ([**@lllamnyp**](https://github.com/lllamnyp) in #1624).
## System Configuration
* **[system] kubeovn: increase limits**: Increased resource limits for Kube-OVN components to improve stability and performance ([**@nbykov0**](https://github.com/nbykov0) in #1629).
---
**Full Changelog**: [v0.37.6...v0.37.7](https://github.com/cozystack/cozystack/compare/v0.37.6...v0.37.7)

View File

@@ -1,19 +0,0 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.37.8
-->
## Fixes
* **[cozy-lib] Fix malformed ResourceQuota rendering for LoadBalancer services**: Fixed malformed ResourceQuota rendering for LoadBalancer services in cozy-lib templates ([**@IvanHunters**](https://github.com/IvanHunters) in #1642).
* **[extra] ingress: rm spaces from external ip list**: Removed spaces from external IP list in ingress configuration, fixing formatting issues ([**@nbykov0**](https://github.com/nbykov0) in #1652).
* **scripts: fix 20 migration**: Fixed migration script #20 to ensure proper execution during upgrades ([**@nbykov0**](https://github.com/nbykov0) in #1653).
## System Configuration
* **Increase strimzi memory limit**: Increased memory limit for Strimzi Kafka operator to improve stability and performance ([**@nbykov0**](https://github.com/nbykov0) in #1651).
---
**Full Changelog**: [v0.37.7...v0.37.8](https://github.com/cozystack/cozystack/compare/v0.37.7...v0.37.8)

View File

@@ -1,235 +0,0 @@
# Cozystack v0.38 — "VPC & Enhanced Networking"
This release introduces **Virtual Private Cloud (VPC)** support, enabling advanced networking capabilities for tenant applications. We've also added VNC console support in the dashboard, made Kubernetes worker versions configurable, and delivered numerous improvements and fixes across the platform.
### Virtual Private Cloud (VPC) Networking
Cozystack v0.38.0 introduces Virtual Private Cloud (VPC) support, enabling platform administrators to create isolated network segments for tenant applications. VPCs provide network isolation and allow fine-grained control over network topology, subnets, and routing. Each VPC can contain multiple subnets, and administrators can configure subnet details including IP ranges, gateway settings, and DNS configuration.
The VPC feature integrates seamlessly with the Cozystack dashboard, allowing users to view and manage VPCs and their subnets through an intuitive interface. Subnet details are exposed in the dashboard as tables, making it easy to understand network configuration at a glance. VPC configuration is stored in ConfigMaps with predictable naming, ensuring reliable access to subnet information.
This feature is particularly valuable for multi-tenant environments where network isolation is critical, and for applications that require specific network configurations or routing rules.
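As a rough illustration of the ConfigMap-based subnet exposure described above, subnet details can be inspected with plain kubectl. The namespace and ConfigMap names below are placeholders; the actual predictable naming scheme is defined by the VPC module:

```bash
# Hypothetical sketch: find and read the subnets ConfigMap of a VPC.
# "tenant-example" and "vpc-example-subnets" are placeholder names.
kubectl get configmaps -n tenant-example | grep subnets
kubectl get configmap vpc-example-subnets -n tenant-example -o yaml
```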
### VNC Console for Virtual Machines
The Cozystack dashboard now includes a built-in VNC console for virtual machines, enabling users to access VM consoles directly from the web interface without requiring external tools. This feature provides immediate access to virtual machine consoles for troubleshooting, configuration, and maintenance tasks. The VNC console integration streamlines VM management workflows and improves the user experience by keeping all VM operations within the Cozystack dashboard.
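For users who prefer a terminal, the same VM console can typically also be reached with KubeVirt's `virtctl` client; the VM name and namespace below are placeholders:

```bash
# Hypothetical example: open a VNC session to a VM using KubeVirt's virtctl.
# "my-vm" and "tenant-example" are placeholder names.
virtctl vnc my-vm -n tenant-example
```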
## Highlights
* **Virtual Private Cloud (VPC)**: New VPC system module enables advanced networking with Multus CNI, subnet management, and network isolation for tenant applications ([**@nbykov0**](https://github.com/nbykov0) in #1543; [**@lllamnyp**](https://github.com/lllamnyp) in #1587, #1590, #1600, #1621, #1638).
* **VNC Console in Dashboard**: Users can now access virtual machine consoles directly from the dashboard, improving VM management experience ([**@kvaps**](https://github.com/kvaps) in #1627).
* **Configurable Kubernetes Worker Versions**: Platform administrators can now configure Kubernetes worker node versions independently, providing more flexibility in cluster management ([**@lllamnyp**](https://github.com/lllamnyp) in #1619).
* **Security Enhancements**: Multiple security improvements including HTTPS-only enforcement for API, closed Flux Operator ports, and Redis security updates ([**@IvanHunters**](https://github.com/IvanHunters) in #1580, #1581, #1582).
* **Cozy-lib Improvements**: Enhanced flatten function with better ResourceQuota handling and nil resource support ([**@lllamnyp**](https://github.com/lllamnyp) in #1647; [**@IvanHunters**](https://github.com/IvanHunters) in #1642; [**@kvaps**](https://github.com/kvaps) in #1607).
---
## New features
### VPC (Virtual Private Cloud)
* **[system] Add VPC**: Introduced Virtual Private Cloud system module with Multus CNI integration, enabling advanced networking capabilities for tenant applications ([**@nbykov0**](https://github.com/nbykov0) in #1543).
* **[vpc] Install Multus by default**: Multus CNI is now installed by default when VPC is enabled, providing multi-network interface support ([**@lllamnyp**](https://github.com/lllamnyp) in #1587).
* **[vpc] Give predictable name to subnet configmap**: Subnet configuration maps now use predictable naming for better management and debugging ([**@lllamnyp**](https://github.com/lllamnyp) in #1590).
* **[vpc] Entry per subnet in the subnets configmap**: Each subnet now has its own entry in the subnets configmap, improving subnet organization and management ([**@lllamnyp**](https://github.com/lllamnyp) in #1600).
* **[vpc,dashboard] Print subnet details as table**: Subnet details are now displayed as a table in the dashboard, improving visibility and management ([**@lllamnyp**](https://github.com/lllamnyp) in #1621).
* **[apps] Add VPC app**: Added VPC application for tenant use, enabling users to create and manage VPCs ([**@nbykov0**](https://github.com/nbykov0) in #1543).
### Dashboard
* **[dashboard] Introduce VNC console**: Added VNC console support in the dashboard, allowing users to access virtual machine consoles directly from the web interface ([**@kvaps**](https://github.com/kvaps) in #1627).
* **[dashboard] sync with upstream & enhancements**: Synchronized dashboard with upstream project and added various enhancements ([**@kvaps**](https://github.com/kvaps) in #1603).
* **[dashboard] Migrate patches to upstream project**: Migrated dashboard patches to upstream project for better maintainability ([**@kvaps**](https://github.com/kvaps) in #1569).
### Kubernetes
* **[kubernetes] Make worker version configurable**: Platform administrators can now configure Kubernetes worker node versions independently from control plane versions, providing more flexibility ([**@lllamnyp**](https://github.com/lllamnyp) in #1619).
* **[kubernetes] Use controlPlane.replicas field**: Fixed managed Kubernetes app to properly use the `controlPlane.replicas` field instead of hardcoding the value, allowing users to configure control plane replica count; see the sketch after this list ([**@lllamnyp**](https://github.com/lllamnyp) in #1556).
* **[kubernetes] Helm hooks for cleanup**: Added Helm hooks for cleanup operations in Kubernetes app ([**@lllamnyp**](https://github.com/lllamnyp) in #1606).
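A minimal sketch of setting the control-plane replica count, assuming the managed Kubernetes app is driven through the `apps.cozystack.io` API: the resource kind and surrounding field layout are assumptions here, and only the `controlPlane.replicas` field itself is confirmed by the notes above.

```bash
# Hypothetical sketch: only controlPlane.replicas is confirmed by these notes;
# the kind, metadata, and spec layout are assumptions.
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Kubernetes
metadata:
  name: demo
  namespace: tenant-root
spec:
  controlPlane:
    replicas: 3
EOF
```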
### API & Platform
* **[api] Efficient listing of TenantNamespaces**: Optimized TenantNamespace listing by replacing per-namespace SubjectAccessReview calls with group-based rolebinding checks, significantly reducing API latency ([**@lllamnyp**](https://github.com/lllamnyp) in #1507).
* **[api] Use shared informer cache**: Optimized API server by using shared informer cache, reducing API server load and improving performance ([**@lllamnyp**](https://github.com/lllamnyp) in #1539).
* **[api] Fix representation of dynamic list kinds**: Fixed API representation of dynamic list kinds for better compatibility ([**@lllamnyp**](https://github.com/lllamnyp) in #1630).
* **[api] Delete previous instance when changing type**: API now properly deletes previous instance when changing application type ([**@lllamnyp**](https://github.com/lllamnyp) in #1579).
### Applications
* **[tenant] Allow listing workloads**: Enabled listing of workloads for tenants, improving visibility and management of tenant resources ([**@kvaps**](https://github.com/kvaps) in #1576).
* **[apps] Make VM service user facing**: Virtual machine services are now marked as user-facing, improving service discovery and visibility in the dashboard ([**@lllamnyp**](https://github.com/lllamnyp) in #1523).
* **[foundationdb] Upgrade FDB app for latest Cozy**: Upgraded FoundationDB application for compatibility with latest Cozystack version ([**@lllamnyp**](https://github.com/lllamnyp) in #1505).
### Storage & Backups
* **[seaweedfs] Update SeaweedFS v3.99 and deploy S3 as stacked service**: Updated SeaweedFS to version 3.99 and deployed S3 gateway as a stacked service for better integration and performance ([**@kvaps**](https://github.com/kvaps) in #1562).
* **[seaweedfs] Allow users to discover their buckets**: Users can now discover and list their S3 buckets in SeaweedFS, improving usability and bucket management; see the example after this list ([**@kvaps**](https://github.com/kvaps) in #1528).
* **[velero] Set defaultItemOperationTimeout=24h**: Set default item operation timeout to 24 hours for Velero backups, preventing timeouts on large backup operations ([**@kvaps**](https://github.com/kvaps) in #1542).
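As a usage illustration for the bucket-discovery item above, buckets can be listed with any standard S3 client once the credentials from the bucket's Secret are known; the endpoint URL and credential values below are placeholders:

```bash
# Hypothetical example: list buckets against a SeaweedFS S3 endpoint.
# Endpoint and credentials are placeholders taken from the bucket's Secret.
export AWS_ACCESS_KEY_ID="<access-key-from-secret>"
export AWS_SECRET_ACCESS_KEY="<secret-key-from-secret>"
aws s3 ls --endpoint-url https://s3.example.org
```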
### Monitoring & Operations
* **[monitoring] add settings alert for slack**: Added Slack integration configuration for Alerta alerts, enabling notifications to Slack channels ([**@scooby87**](https://github.com/scooby87) in #1545).
---
## Improvements (minor)
* **[lineage] Separate webhook from cozy controller**: Separated the lineage-controller-webhook from cozystack-controller into a separate daemonset component deployed on all control-plane nodes, reducing API server latency ([**@lllamnyp**](https://github.com/lllamnyp) in #1515).
* **[dashboard] Show service LB IP**: Fixed JSON path issue to correctly display Service LoadBalancer IPs in the dashboard table view ([**@lllamnyp**](https://github.com/lllamnyp) in #1524).
* **[dashboard] Update openapi-ui v1.0.3 + fixes**: Updated OpenAPI UI to version 1.0.3 with various fixes and improvements ([**@kvaps**](https://github.com/kvaps) in #1564).
* **[dashboard-controller] Move badges generation logic to internal dashboard component**: Moved badges generation logic to internal dashboard component for better code organization ([**@kvaps**](https://github.com/kvaps) in #1567).
* **[bucket] Expose bucket name in secrets**: Bucket names are now exposed in secrets for better integration with applications ([**@lllamnyp**](https://github.com/lllamnyp) in #1518).
* **[platform] Better migration for 0.36.2->0.37.2+**: Improved migration script for users upgrading directly from 0.36.2 to 0.37.2+ ([**@lllamnyp**](https://github.com/lllamnyp) in #1521).
* **[cozy-lib] Improve flatten function**: Improved flatten function in cozy-lib with better handling of complex resource structures ([**@lllamnyp**](https://github.com/lllamnyp) in #1647).
* **[dx] JSDoc compatible syntax for values.yaml**: Added JSDoc compatible syntax for values.yaml documentation ([**@kvaps**](https://github.com/kvaps) in #1536).
* **[system] Tune kubevirt rollout and eviction settings**: Tuned KubeVirt rollout and eviction settings for better stability ([**@nbykov0**](https://github.com/nbykov0) in #1544).
* **[system] multus: update to the latest version**: Updated Multus CNI to the latest version ([**@nbykov0**](https://github.com/nbykov0) in #1628).
* **[system] kubeovn: increase limits**: Increased resource limits for Kube-OVN components to improve stability and performance ([**@nbykov0**](https://github.com/nbykov0) in #1629).
* **[linstor] Update Piraeus Operator to v2.10.1 to enable RWX support**: Updated Piraeus Operator to v2.10.1, enabling ReadWriteMany (RWX) volume support ([**@kvaps**](https://github.com/kvaps) in #1650).
* **[ci,dx] Bump MariaDB operator version**: Bumped MariaDB operator version for latest features and bug fixes ([**@IvanHunters**](https://github.com/IvanHunters) in #1646).
---
## Bug fixes
* **[api] Fix RBAC for listing of TenantNamespaces and handle system:masters**: Fixed regression in TenantNamespace listing RBAC and added proper handling for system:masters group ([**@kvaps**](https://github.com/kvaps) in #1511).
* **[api] Fix listing tenantnamespaces for non-oidc users**: Fixed TenantNamespace listing functionality for users not using OIDC authentication ([**@kvaps**](https://github.com/kvaps) in #1517).
* **[dashboard] Fix logout**: Fixed dashboard logout functionality to properly clear session and redirect users ([**@kvaps**](https://github.com/kvaps) in #1510).
* **[installer] Add additional check to wait for lineage-webhook**: Added additional readiness check to ensure lineage-webhook is fully ready before proceeding with installation ([**@kvaps**](https://github.com/kvaps) in #1506).
* **[lineage] Check for nil chart in HelmRelease**: Added nil check to prevent crashes when lineage webhook encounters HelmReleases using `chartRef` instead of `chart` ([**@lllamnyp**](https://github.com/lllamnyp) in #1525).
* **[kamaji] Respect 3rd party labels**: Applied patch to Kamaji controller to respect third-party labels, preventing reconciliation loops ([**@lllamnyp**](https://github.com/lllamnyp) in #1531).
* **[redis-operator] Build patched operator in-tree**: Moved Redis operator build into Cozystack organization and patched it to prevent overwriting third-party labels ([**@lllamnyp**](https://github.com/lllamnyp) in #1547).
* **[mariadb-operator] Add post-delete job to remove PVCs**: Added post-delete job to automatically remove PersistentVolumeClaims when MariaDB instances are deleted ([**@IvanHunters**](https://github.com/IvanHunters) in #1553).
* **[seaweedfs] Fix migration to v3.99**: Fixed migration issues when upgrading SeaweedFS to version 3.99 ([**@kvaps**](https://github.com/kvaps) in #1572).
* **[nats] Merge container spec, not podTemplate**: Fixed NATS configuration to properly merge container specifications instead of podTemplate ([**@lllamnyp**](https://github.com/lllamnyp) in #1571).
* **[nats] Fixes for NATS App Helm chart, fix template issues with config.merge**: Fixed template issues in NATS Helm chart related to config.merge value ([**@insignia96**](https://github.com/insignia96) in #1583).
* **[nats] Fix NATS app chart to use existing secret credentials when present**: Fixed NATS app chart to use existing secret credentials when present, preventing credential regeneration ([**@insignia96**](https://github.com/insignia96) in #1599).
* **[kubevirt] Fix: kubevirt metrics rule**: Fixed KubeVirt metrics rule configuration ([**@kvaps**](https://github.com/kvaps) in #1584).
* **[controller] Remove crdmem, handle DaemonSet**: Removed crdmem and improved DaemonSet handling in controller ([**@lllamnyp**](https://github.com/lllamnyp) in #1555).
* **[dashboard] Revert reconciler removal**: Reverted reconciler removal to restore proper dashboard functionality ([**@lllamnyp**](https://github.com/lllamnyp) in #1559).
* **[dashboard-controller] Fix static resources reconciliation and showing secrets**: Fixed static resources reconciliation and improved secret display in dashboard controller ([**@kvaps**](https://github.com/kvaps) in #1615).
* **[cozystack-api][dashboard] Fix filtering for application services/ingresses/secrets**: Fixed filtering functionality for application services, ingresses, and secrets in both API and dashboard ([**@kvaps**](https://github.com/kvaps) in #1612).
* **[virtual-machine] Revert per-vm network policies**: Reverted per-VM network policies to previous behavior ([**@kvaps**](https://github.com/kvaps) in #1611).
* **[cozy-lib] Fix: handling resources=nil**: Fixed handling of nil resources in cozy-lib templates ([**@kvaps**](https://github.com/kvaps) in #1607).
* **[cozy-lib] Fix malformed ResourceQuota rendering for LoadBalancer services**: Fixed malformed ResourceQuota rendering for LoadBalancer services in cozy-lib templates ([**@IvanHunters**](https://github.com/IvanHunters) in #1642).
* **[kubernetes] Cleanup loadbalancer services**: Added cleanup functionality for load balancer services in Kubernetes app ([**@lllamnyp**](https://github.com/lllamnyp) in #1631).
* **[rbac] Fix permissions for high-privilege users**: Fixed RBAC permissions for high-privilege users, ensuring proper access control ([**@lllamnyp**](https://github.com/lllamnyp) in #1622).
* **[vpc] Fix access to subnet details configmap**: Fixed access to subnet details configmap in VPC functionality ([**@lllamnyp**](https://github.com/lllamnyp) in #1638).
* **[api,lineage] Ensure node-local traffic**: Ensured node-local traffic handling for API and lineage components ([**@lllamnyp**](https://github.com/lllamnyp) in #1554).
* **[extra] ingress: rm spaces from external ip list**: Removed spaces from external IP list in ingress configuration, fixing formatting issues ([**@nbykov0**](https://github.com/nbykov0) in #1652).
* **scripts: fix 20 migration**: Fixed migration script #20 to ensure proper execution during upgrades ([**@nbykov0**](https://github.com/nbykov0) in #1653).
---
## Security
* **[redis] Bump Redis image version for security fixes**: Updated Redis image version to include latest security fixes, improving cluster security ([**@IvanHunters**](https://github.com/IvanHunters) in #1580).
* **[flux] Close Flux Operator ports to external access**: Removed hostPort and hostNetwork from Flux Operator Deployment, ensuring ports 8080 and 8081 are only accessible within the cluster ([**@IvanHunters**](https://github.com/IvanHunters) in #1581).
* **[ingress] Enforce HTTPS-only for API**: Added force-ssl-redirect annotation to default API Ingress, ensuring all HTTP traffic is redirected to HTTPS ([**@IvanHunters**](https://github.com/IvanHunters) in #1582).
---
## Dependencies & version updates
* **Update LINSTOR v1.32.3**: Updated LINSTOR to version 1.32.3 with latest features and bug fixes ([**@kvaps**](https://github.com/kvaps) in #1565).
* **Update Talos Linux v1.11.3**: Updated Talos Linux to version 1.11.3 ([**@kvaps**](https://github.com/kvaps) in #1527).
* **Update Kube-OVN v1.14.11**: Updated Kube-OVN to version 1.14.11 ([**@kvaps**](https://github.com/kvaps) in #1514).
* **[linstor] Update Piraeus Operator to v2.10.1**: Updated Piraeus Operator to v2.10.1 to enable RWX support ([**@kvaps**](https://github.com/kvaps) in #1650).
* **[system] multus: update to the latest version**: Updated Multus CNI to the latest version ([**@nbykov0**](https://github.com/nbykov0) in #1628).
* **[ci,dx] Bump MariaDB operator version**: Bumped MariaDB operator version ([**@IvanHunters**](https://github.com/IvanHunters) in #1646).
* **Increase strimzi memory limit**: Increased memory limit for Strimzi Kafka operator to improve stability and performance ([**@nbykov0**](https://github.com/nbykov0) in #1651).
---
## System Configuration
* **[system] kube-ovn: turn off enableLb**: Disabled load balancer functionality in Kube-OVN configuration ([**@nbykov0**](https://github.com/nbykov0) in #1548).
* **[core] rm talos lldp extension**: Removed Talos LLDP extension from core configuration ([**@nbykov0**](https://github.com/nbykov0) in #1586).
---
## Development, Testing, and CI/CD
* **[tests] Make Kubernetes tests POSIX-compatible**: Replaced bash-specific constructs with POSIX-compliant code, ensuring tests work reliably with /bin/sh ([**@IvanHunters**](https://github.com/IvanHunters) in #1509).
* **[ferretdb] fix tests**: Fixed FerretDB tests to ensure proper execution ([**@IvanHunters**](https://github.com/IvanHunters) in #1540).
* **[e2e] Increase Kubernetes connection timeouts**: Increased connection and request timeouts in E2E tests when communicating with Kubernetes API ([**@IvanHunters**](https://github.com/IvanHunters) in #1570).
* **[cozystack-controller] improve API tests**: Improved API tests for cozystack-controller ([**@kvaps**](https://github.com/kvaps) in #1617).
* **[ci] Fix build from external forks**: Fixed build process to work correctly from external forks ([**@kvaps**](https://github.com/kvaps) in #1530).
* **[ci,dx] Add unit tests for cozy-lib**: Added unit tests for cozy-lib to improve code quality and reliability ([**@lllamnyp**](https://github.com/lllamnyp) in #1643).
---
## Documentation
* **[website] Add VPC page**: Added VPC documentation page explaining VPC features and usage ([**@nbykov0**](https://github.com/nbykov0) in cozystack/website@9ccac78).
* **[website] Add VPC to auto-update list**: Added VPC to auto-update list in documentation ([**@nbykov0**](https://github.com/nbykov0) in cozystack/website@ca2bce6).
* **[website] Update dashboard part in OIDC configuration doc**: Updated OIDC configuration documentation with dashboard information ([**@nbykov0**](https://github.com/nbykov0) in cozystack/website@6c44b93).
* **[website] Update storage requirements**: Updated storage requirements documentation ([**@nbykov0**](https://github.com/nbykov0) in cozystack/website@cac3af6).
* **[website] Add System Resource Planning Recommendations**: Added system resource planning recommendations documentation ([**@kvaps**](https://github.com/kvaps) in cozystack/website@c877c2a).
* **[website] Optimize website for mobile devices**: Improved website layout and responsiveness for mobile devices ([**@kvaps**](https://github.com/kvaps) in cozystack/website@3ab2338).
* **[website] Add OpenAPI UI**: Added OpenAPI UI documentation and integration ([**@kvaps**](https://github.com/kvaps) in cozystack/website@b1c1668).
* **[website] Update Cozystack video in hero banner**: Updated hero banner with new Cozystack video ([**@kvaps**](https://github.com/kvaps) in cozystack/website@e351137).
* **[website] Add screenshots carousel**: Added screenshots carousel to showcase Cozystack features ([**@kvaps**](https://github.com/kvaps) in cozystack/website@8422bd0).
* **[website] Update LINSTOR documentation**: Updated LINSTOR guide and set failmode=continue for ZFS configurations ([**@kvaps**](https://github.com/kvaps) in cozystack/website@033804e).
* **[website] Update managed apps reference**: Updated managed applications reference documentation ([**@kvaps**](https://github.com/kvaps) in cozystack/website@b886a74, cozystack/website@41c1849, cozystack/website@0ab71fd).
* **[website] Update external apps documentation**: Updated documentation for external applications ([**@kvaps**](https://github.com/kvaps) in cozystack/website@565dad9).
* **[website] Add naming conventions**: Added naming conventions documentation ([**@kvaps**](https://github.com/kvaps) in cozystack/website@b227abb).
* **[website] Update golden image documentation**: Updated documentation for creating golden images for virtual machines ([**@kvaps**](https://github.com/kvaps) in cozystack/website@34c2f3a, cozystack/website@ef65593).
* **[website] Fix documentation formatting**: Fixed alerts, infoboxes, tabs styles and main page formatting ([**@kvaps**](https://github.com/kvaps) in cozystack/website@e992e97, cozystack/website@b2c4dee).
* **[website] Fix typo in blog article**: Fixed typo in blog article ([**@kvaps**](https://github.com/kvaps) in cozystack/website@0a4bbf3).
* **[apps] vpc: more docs**: Added more VPC documentation ([**@nbykov0**](https://github.com/nbykov0) in #1594).
* **[apps] vpc: fix typo in README**: Fixed typo in VPC README ([**@nbykov0**](https://github.com/nbykov0) in #1637).
---
## Additional Repositories
### boot-to-talos
* **[boot-to-talos] Introduce boot/install mode**: Introduced boot/install mode in boot-to-talos tool ([**@kvaps**](https://github.com/kvaps) in cozystack/boot-to-talos#5).
### cozypkg
* **[cozypkg] Handle valuesFiles from cozypkg.cozystack.io/values-files annotation**: Added support for handling valuesFiles from annotation in cozypkg ([**@kvaps**](https://github.com/kvaps) in cozystack/cozypkg#8).
---
## Refactors & chores
* **[dashboard] Migrate patches to upstream project**: Migrated dashboard patches to upstream project for better maintainability ([**@kvaps**](https://github.com/kvaps) in #1569).
* **Update CODEOWNERS**: Updated CODEOWNERS file ([**@nbykov0**](https://github.com/nbykov0) in #1537).
* **Add QOSI to ADOPTERS.md**: Added QOSI to adopters list ([**@tabu-a**](https://github.com/tabu-a) in #1589).
---
## Breaking changes & upgrade notes
No breaking changes in this release.
---
## Contributors
We'd like to thank all contributors who made this release possible:
* [**@IvanHunters**](https://github.com/IvanHunters)
* [**@insignia96**](https://github.com/insignia96)
* [**@kvaps**](https://github.com/kvaps)
* [**@lllamnyp**](https://github.com/lllamnyp)
* [**@nbykov0**](https://github.com/nbykov0)
* [**@scooby87**](https://github.com/scooby87)
* [**@tabu-a**](https://github.com/tabu-a)
### New Contributors
We're excited to welcome our first-time contributors:
* [**@tabu-a**](https://github.com/tabu-a) - First contribution!
---
**Full Changelog**: [v0.37.0...v0.38.0](https://github.com/cozystack/cozystack/compare/v0.37.0...v0.38.0)
<!--
https://github.com/cozystack/cozystack/releases/tag/v0.38.0
-->

16
go.mod
View File

@@ -37,7 +37,6 @@ require (
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
github.com/evanphx/json-patch v4.12.0+incompatible // indirect
github.com/evanphx/json-patch/v5 v5.9.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fluxcd/pkg/apis/kustomize v1.6.1 // indirect
@@ -59,7 +58,6 @@ require (
github.com/google/go-cmp v0.6.0 // indirect
github.com/google/pprof v0.0.0-20240727154555-813a5fbdbec8 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/gorilla/websocket v1.5.0 // indirect
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0 // indirect
github.com/imdario/mergo v0.3.6 // indirect
@@ -67,11 +65,9 @@ require (
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/moby/spdystream v0.4.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus/client_golang v1.19.1 // indirect
@@ -95,14 +91,14 @@ require (
go.opentelemetry.io/proto/otlp v1.3.1 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.0 // indirect
golang.org/x/crypto v0.31.0 // indirect
golang.org/x/crypto v0.28.0 // indirect
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 // indirect
golang.org/x/net v0.33.0 // indirect
golang.org/x/net v0.30.0 // indirect
golang.org/x/oauth2 v0.23.0 // indirect
golang.org/x/sync v0.10.0 // indirect
golang.org/x/sys v0.28.0 // indirect
golang.org/x/term v0.27.0 // indirect
golang.org/x/text v0.21.0 // indirect
golang.org/x/sync v0.8.0 // indirect
golang.org/x/sys v0.26.0 // indirect
golang.org/x/term v0.25.0 // indirect
golang.org/x/text v0.19.0 // indirect
golang.org/x/time v0.7.0 // indirect
golang.org/x/tools v0.26.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect

34
go.sum
View File

@@ -2,8 +2,6 @@ github.com/NYTimes/gziphandler v1.1.1 h1:ZUDjpQae29j0ryrS0u/B8HZfJBtBQHjqw2rQ2cq
github.com/NYTimes/gziphandler v1.1.1/go.mod h1:n/CVRwUEOgIxrgPvAQhUUr9oeUtvrhMomdKFjzJNB0c=
github.com/antlr4-go/antlr/v4 v4.13.0 h1:lxCg3LAv+EUK6t1i0y1V6/SLeUi0eKEKdhQAlS8TVTI=
github.com/antlr4-go/antlr/v4 v4.13.0/go.mod h1:pfChB/xh/Unjila75QW7+VU4TSnWnnk9UTnmpPaOR2g=
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5 h1:0CwZNZbxp69SHPdPJAN/hZIm0C4OItdklCFmMRWYpio=
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a h1:idn718Q4B6AGu/h5Sxe66HYVdqdGu2l9Iebqhi/AEoA=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
@@ -28,8 +26,8 @@ github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkp
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g=
github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/evanphx/json-patch v4.12.0+incompatible h1:4onqiflcdA9EOZ4RxV643DvftH5pOlLGNtQ5lPWQu84=
github.com/evanphx/json-patch v4.12.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch v0.5.2 h1:xVCHIVMUu1wtM/VkR9jVZ45N3FhZfYMMYGorLCR8P3k=
github.com/evanphx/json-patch v0.5.2/go.mod h1:ZWS5hhDbVDyob71nXKNL0+PWn6ToqBHMikGIFbs31qQ=
github.com/evanphx/json-patch/v5 v5.9.0 h1:kcBlZQbplgElYIlo/n1hJbls2z/1awpXxpRi0/FOJfg=
github.com/evanphx/json-patch/v5 v5.9.0/go.mod h1:VNkHZ/282BpEyt/tObQO8s5CMPmYYq14uClGH4abBuQ=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
@@ -117,8 +115,6 @@ github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/moby/spdystream v0.4.0 h1:Vy79D6mHeJJjiPdFEL2yku1kl0chZpJfZcPpb16BRl8=
github.com/moby/spdystream v0.4.0/go.mod h1:xBAYlnt/ay+11ShkdFKNAG7LsyK/tmNBVvVOwrfMgdI=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -126,8 +122,6 @@ github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9G
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f h1:y5//uYreIhSUg3J1GEMiLbxo1LJaP8RfCpH6pymGZus=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
github.com/onsi/ginkgo/v2 v2.19.0 h1:9Cnnf7UHo57Hy3k6/m5k3dRfGTMXGvxhHFvkDTCTpvA=
github.com/onsi/ginkgo/v2 v2.19.0/go.mod h1:rlwLi9PilAFJ8jCg9UE1QP6VBpd6/xj3SRC0d6TU0To=
github.com/onsi/gomega v1.33.1 h1:dsYjIxxSR755MDmKVsaFQTE22ChNBcuuTWgkUDSubOk=
@@ -218,8 +212,8 @@ go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.31.0 h1:ihbySMvVjLAeSH1IbfcRTkD/iNscyz8rGzjF/E5hV6U=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/crypto v0.28.0 h1:GBDwsMXVQi34v5CCYUm2jkJvu4cbtru2U4TN2PSyQnw=
golang.org/x/crypto v0.28.0/go.mod h1:rmgy+3RHxRZMyY0jjAJShp2zgEdOqj2AO7U0pYmeQ7U=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 h1:2dVuKD2vS7b0QIHQbpyTISPd0LeHDbnYEryqj5Q1ug8=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56/go.mod h1:M4RDyNAINzryxdtnbRXRL/OHtkFuWGRjvuhBJpk2IlY=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
@@ -228,26 +222,26 @@ golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.33.0 h1:74SYHlV8BIgHIFC/LrYkOGIwL19eTYXQ5wc6TBuO36I=
golang.org/x/net v0.33.0/go.mod h1:HXLR5J+9DxmrqMwG9qjGCxZ+zKXxBru04zlTvWlWuN4=
golang.org/x/net v0.30.0 h1:AcW1SDZMkb8IpzCdQUaIq2sP4sZ4zw+55h6ynffypl4=
golang.org/x/net v0.30.0/go.mod h1:2wGyMJ5iFasEhkwi13ChkO/t1ECNC4X4eBKkVFyYFlU=
golang.org/x/oauth2 v0.23.0 h1:PbgcYx2W7i4LvjJWEbf0ngHV6qJYr86PkAV3bXdLEbs=
golang.org/x/oauth2 v0.23.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.8.0 h1:3NFvSEYkUoMifnESzZl15y791HH1qU2xm6eCJU5ZPXQ=
golang.org/x/sync v0.8.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.27.0 h1:WP60Sv1nlK1T6SupCHbXzSaN0b9wUmsPoRS9b61A23Q=
golang.org/x/term v0.27.0/go.mod h1:iMsnZpn0cago0GOrHO2+Y7u7JPn5AylBrcoWkElMTSM=
golang.org/x/sys v0.26.0 h1:KHjCJyddX0LoSTb3J+vWpupP9p0oznkqVk/IfjymZbo=
golang.org/x/sys v0.26.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.25.0 h1:WtHI/ltw4NvSUig5KARz9h521QvRC8RmF/cuYqifU24=
golang.org/x/term v0.25.0/go.mod h1:RPyXicDX+6vLxogjjRxjgD2TKtmAO6NZBsBRfrOLu7M=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/text v0.19.0 h1:kTxAhCbGbxhK0IwgSKiMO5awPoDQ0RpfiVYBfK860YM=
golang.org/x/text v0.19.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY=
golang.org/x/time v0.7.0 h1:ntUhktv3OPE6TgYxXWv9vKvUSJyIFJlyohwbkEwPrKQ=
golang.org/x/time v0.7.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=

View File

@@ -1,32 +0,0 @@
#!/bin/bash
set -e
name="$1"
url="$2"
if [ -z "$name" ] || [ -z "$url" ]; then
echo "Usage: <name> <url>"
echo "Example: 'ubuntu' 'https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img'"
exit 1
fi
#### create DV ubuntu source for CDI image cloning
kubectl create -f - <<EOF
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
name: "vm-image-$name"
namespace: cozy-public
annotations:
cdi.kubevirt.io/storage.bind.immediate.requested: "true"
spec:
source:
http:
url: "$url"
storage:
resources:
requests:
storage: 5Gi
storageClassName: replicated
EOF

View File

@@ -1,145 +0,0 @@
#!/bin/bash
###############################################################################
# check-optional-repos.sh - Check optional repositories for tags and commits #
# during a release period #
###############################################################################
set -eu
# Function to ensure repository is cloned and up-to-date
update_repo() {
local repo_name=$1
local repo_url="https://github.com/cozystack/${repo_name}.git"
mkdir -p _repos
cd _repos
if [ -d "$repo_name" ]; then
cd "$repo_name"
git fetch --all --tags --force
git checkout main 2>/dev/null || git checkout master
git pull
else
git clone "$repo_url"
cd "$repo_name"
fi
cd ../..
}
# Check if required parameters are provided
if [ $# -lt 2 ]; then
echo "Usage: $0 <RELEASE_START> <RELEASE_END>"
echo "Example: $0 '2025-10-10 12:27:31 +0400' '2025-10-13 16:04:33 +0200'"
exit 1
fi
RELEASE_START="$1"
RELEASE_END="$2"
# Get the script directory to return to it later
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
COZYSTACK_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
cd "$COZYSTACK_ROOT"
echo "Checking optional repositories for tags and commits between:"
echo " Start: $RELEASE_START"
echo " End: $RELEASE_END"
echo ""
# Loop through ALL optional repositories
for repo_name in talm boot-to-talos cozypkg cozy-proxy; do
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Checking repository: $repo_name"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
# Update/clone repository
update_repo "$repo_name"
cd "_repos/$repo_name"
REPO_NAME=$(basename "$(pwd)")
git fetch --all --tags --force
# Check for tags matching release version pattern or created during release period
# Use ISO 8601 dates so that string comparison against the release period works
TAGS=$(git for-each-ref --format='%(refname:short)|%(creatordate:iso8601)' refs/tags 2>/dev/null | \
awk -F'|' -v start="$RELEASE_START" -v end="$RELEASE_END" '$2 >= start && $2 <= end {print $1}' || true)
if [ -n "$TAGS" ]; then
echo "Found tags in $repo_name: $TAGS"
PREV_TAG=$(echo "$TAGS" | head -1)
NEW_TAG=$(echo "$TAGS" | tail -1)
echo ""
echo "Commits between $PREV_TAG and $NEW_TAG:"
# Include merge commits to capture backports
git log "$PREV_TAG..$NEW_TAG" --format="%H|%s|%an" 2>/dev/null | while IFS='|' read -r commit_hash subject author_name; do
if [ -z "$commit_hash" ]; then
continue
fi
# Get PR number from commit message
COMMIT_MSG=$(git log -1 --format=%B "$commit_hash" 2>/dev/null || echo "")
PR_NUMBER=$(echo "$COMMIT_MSG" | grep -oE '#[0-9]+' | head -1 | tr -d '#' || echo "")
# Get author: prioritize PR author, fallback to commit author
GITHUB_USERNAME=""
if [ -n "$PR_NUMBER" ]; then
GITHUB_USERNAME=$(gh pr view "$PR_NUMBER" --repo "cozystack/$REPO_NAME" --json author --jq '.author.login // empty' 2>/dev/null || echo "")
fi
if [ -z "$GITHUB_USERNAME" ]; then
GITHUB_USERNAME=$(gh api "repos/cozystack/$REPO_NAME/commits/$commit_hash" --jq '.author.login // empty' 2>/dev/null || echo "")
fi
if [ -n "$PR_NUMBER" ]; then
echo " $commit_hash|$subject|$author_name|$GITHUB_USERNAME|cozystack/$REPO_NAME#$PR_NUMBER"
else
echo " $commit_hash|$subject|$author_name|$GITHUB_USERNAME|cozystack/$REPO_NAME@${commit_hash:0:7}"
fi
done
else
echo "No tags found in $repo_name during release period"
# Check for commits by dates if no exact version tags
# Include merge commits to capture backports
COMMITS=$(git log --since="$RELEASE_START" --until="$RELEASE_END" --format="%H|%s|%an" 2>/dev/null || true)
if [ -n "$COMMITS" ]; then
echo ""
echo "Commits found by date range:"
echo "$COMMITS" | while IFS='|' read -r commit_hash subject author_name; do
if [ -z "$commit_hash" ]; then
continue
fi
# Get PR number from commit message
COMMIT_MSG=$(git log -1 --format=%B "$commit_hash" 2>/dev/null || echo "")
PR_NUMBER=$(echo "$COMMIT_MSG" | grep -oE '#[0-9]+' | head -1 | tr -d '#' || echo "")
# Get author: prioritize PR author, fallback to commit author
GITHUB_USERNAME=""
if [ -n "$PR_NUMBER" ]; then
GITHUB_USERNAME=$(gh pr view "$PR_NUMBER" --repo "cozystack/$REPO_NAME" --json author --jq '.author.login // empty' 2>/dev/null || echo "")
fi
if [ -z "$GITHUB_USERNAME" ]; then
GITHUB_USERNAME=$(gh api "repos/cozystack/$REPO_NAME/commits/$commit_hash" --jq '.author.login // empty' 2>/dev/null || echo "")
fi
if [ -n "$PR_NUMBER" ]; then
echo " $commit_hash|$subject|$author_name|$GITHUB_USERNAME|cozystack/$REPO_NAME#$PR_NUMBER"
else
echo " $commit_hash|$subject|$author_name|$GITHUB_USERNAME|cozystack/$REPO_NAME@${commit_hash:0:7}"
fi
done
else
echo "No commits found in $repo_name during release period"
fi
fi
echo ""
cd "$COZYSTACK_ROOT"
done
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Finished checking all optional repositories"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"

View File

@@ -1,8 +0,0 @@
#!/bin/sh
for node in 11 12 13; do
talosctl -n 192.168.123.${node} -e 192.168.123.${node} images ls >> images.tmp
talosctl -n 192.168.123.${node} -e 192.168.123.${node} images --namespace system ls >> images.tmp
done
while read _ name sha _ ; do echo $sha $name ; done < images.tmp | sort -u > images.txt

View File

@@ -1,147 +0,0 @@
#!/bin/sh
REPORT_DATE=$(date +%Y-%m-%d_%H-%M-%S)
REPORT_NAME=${1:-cozyreport-$REPORT_DATE}
REPORT_PDIR=$(mktemp -d)
REPORT_DIR=$REPORT_PDIR/$REPORT_NAME
# -- check dependencies
command -V kubectl >/dev/null || exit $?
command -V tar >/dev/null || exit $?
# -- cozystack module
echo "Collecting Cozystack information..."
mkdir -p $REPORT_DIR/cozystack
kubectl get deploy -n cozy-system cozystack -o jsonpath='{.spec.template.spec.containers[0].image}' > $REPORT_DIR/cozystack/image.txt 2>&1
kubectl get cm -n cozy-system --no-headers | awk '$1 ~ /^cozystack/' |
while read NAME _; do
DIR=$REPORT_DIR/cozystack/configs
mkdir -p $DIR
kubectl get cm -n cozy-system $NAME -o yaml > $DIR/$NAME.yaml 2>&1
done
# -- kubernetes module
echo "Collecting Kubernetes information..."
mkdir -p $REPORT_DIR/kubernetes
kubectl version > $REPORT_DIR/kubernetes/version.txt 2>&1
echo "Collecting nodes..."
kubectl get nodes -o wide > $REPORT_DIR/kubernetes/nodes.txt 2>&1
kubectl get nodes --no-headers | awk '$2 != "Ready"' |
while read NAME _; do
DIR=$REPORT_DIR/kubernetes/nodes/$NAME
mkdir -p $DIR
kubectl get node $NAME -o yaml > $DIR/node.yaml 2>&1
kubectl describe node $NAME > $DIR/describe.txt 2>&1
done
echo "Collecting namespaces..."
kubectl get ns -o wide > $REPORT_DIR/kubernetes/namespaces.txt 2>&1
kubectl get ns --no-headers | awk '$2 != "Active"' |
while read NAME _; do
DIR=$REPORT_DIR/kubernetes/namespaces/$NAME
mkdir -p $DIR
kubectl get ns $NAME -o yaml > $DIR/namespace.yaml 2>&1
kubectl describe ns $NAME > $DIR/describe.txt 2>&1
done
echo "Collecting helmreleases..."
kubectl get hr -A > $REPORT_DIR/kubernetes/helmreleases.txt 2>&1
kubectl get hr -A --no-headers | awk '$4 != "True"' | \
while read NAMESPACE NAME _; do
DIR=$REPORT_DIR/kubernetes/helmreleases/$NAMESPACE/$NAME
mkdir -p $DIR
kubectl get hr -n $NAMESPACE $NAME -o yaml > $DIR/hr.yaml 2>&1
kubectl describe hr -n $NAMESPACE $NAME > $DIR/describe.txt 2>&1
done
echo "Collecting pods..."
kubectl get pod -A -o wide > $REPORT_DIR/kubernetes/pods.txt 2>&1
kubectl get pod -A --no-headers | awk '$4 !~ /Running|Succeeded|Completed/' |
while read NAMESPACE NAME _ STATE _; do
DIR=$REPORT_DIR/kubernetes/pods/$NAMESPACE/$NAME
mkdir -p $DIR
CONTAINERS=$(kubectl get pod -o jsonpath='{.spec.containers[*].name}' -n $NAMESPACE $NAME)
kubectl get pod -n $NAMESPACE $NAME -o yaml > $DIR/pod.yaml 2>&1
kubectl describe pod -n $NAMESPACE $NAME > $DIR/describe.txt 2>&1
if [ "$STATE" != "Pending" ]; then
for CONTAINER in $CONTAINERS; do
kubectl logs -n $NAMESPACE $NAME $CONTAINER > $DIR/logs-$CONTAINER.txt 2>&1
kubectl logs -n $NAMESPACE $NAME $CONTAINER --previous > $DIR/logs-$CONTAINER-previous.txt 2>&1
done
fi
done
echo "Collecting virtualmachines..."
kubectl get vm -A > $REPORT_DIR/kubernetes/vms.txt 2>&1
kubectl get vm -A --no-headers | awk '$5 != "True"' |
while read NAMESPACE NAME _; do
DIR=$REPORT_DIR/kubernetes/vm/$NAMESPACE/$NAME
mkdir -p $DIR
kubectl get vm -n $NAMESPACE $NAME -o yaml > $DIR/vm.yaml 2>&1
kubectl describe vm -n $NAMESPACE $NAME > $DIR/describe.txt 2>&1
done
echo "Collecting virtualmachine instances..."
kubectl get vmi -A > $REPORT_DIR/kubernetes/vmis.txt 2>&1
kubectl get vmi -A --no-headers | awk '$4 != "Running"' |
while read NAMESPACE NAME _; do
DIR=$REPORT_DIR/kubernetes/vmi/$NAMESPACE/$NAME
mkdir -p $DIR
kubectl get vmi -n $NAMESPACE $NAME -o yaml > $DIR/vmi.yaml 2>&1
kubectl describe vmi -n $NAMESPACE $NAME > $DIR/describe.txt 2>&1
done
echo "Collecting services..."
kubectl get svc -A > $REPORT_DIR/kubernetes/services.txt 2>&1
kubectl get svc -A --no-headers | awk '$4 == "<pending>"' |
while read NAMESPACE NAME _; do
DIR=$REPORT_DIR/kubernetes/services/$NAMESPACE/$NAME
mkdir -p $DIR
kubectl get svc -n $NAMESPACE $NAME -o yaml > $DIR/service.yaml 2>&1
kubectl describe svc -n $NAMESPACE $NAME > $DIR/describe.txt 2>&1
done
echo "Collecting pvcs..."
kubectl get pvc -A > $REPORT_DIR/kubernetes/pvcs.txt 2>&1
kubectl get pvc -A --no-headers | awk '$3 != "Bound"' |
while read NAMESPACE NAME _; do
DIR=$REPORT_DIR/kubernetes/pvc/$NAMESPACE/$NAME
mkdir -p $DIR
kubectl get pvc -n $NAMESPACE $NAME -o yaml > $DIR/pvc.yaml 2>&1
kubectl describe pvc -n $NAMESPACE $NAME > $DIR/describe.txt 2>&1
done
# -- kamaji module
if kubectl get deploy -n cozy-kamaji kamaji >/dev/null 2>&1; then
echo "Collecting kamaji resources..."
DIR=$REPORT_DIR/kamaji
mkdir -p $DIR
kubectl logs -n cozy-kamaji deployment/kamaji > $DIR/kamaji-controller.log 2>&1
kubectl get kamajicontrolplanes.controlplane.cluster.x-k8s.io -A > $DIR/kamajicontrolplanes.txt 2>&1
kubectl get kamajicontrolplanes.controlplane.cluster.x-k8s.io -A -o yaml > $DIR/kamajicontrolplanes.yaml 2>&1
kubectl get tenantcontrolplanes.kamaji.clastix.io -A > $DIR/tenantcontrolplanes.txt 2>&1
kubectl get tenantcontrolplanes.kamaji.clastix.io -A -o yaml > $DIR/tenantcontrolplanes.yaml 2>&1
fi
# -- linstor module
if kubectl get deploy -n cozy-linstor linstor-controller >/dev/null 2>&1; then
echo "Collecting linstor resources..."
DIR=$REPORT_DIR/linstor
mkdir -p $DIR
kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor --no-color n l > $DIR/nodes.txt 2>&1
kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor --no-color sp l > $DIR/storage-pools.txt 2>&1
kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor --no-color r l > $DIR/resources.txt 2>&1
fi
# -- finalization
echo "Creating archive..."
tar -czf $REPORT_NAME.tgz -C $REPORT_PDIR .
echo "Report created: $REPORT_NAME.tgz"
echo "Cleaning up..."
rm -rf $REPORT_PDIR
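
A minimal usage sketch for the report script above, assuming it is saved as hack/cozyreport.sh (the path is illustrative, not part of this change):
# collect a bundle with an explicit name; without an argument the name defaults to cozyreport-<timestamp>
sh hack/cozyreport.sh my-report
# inspect the resulting archive before attaching it to a bug report
tar -tzf my-report.tgz | head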


@@ -24,7 +24,7 @@ run_one() {
echo "╭ » Run test: $title"
START=$(date +%s)
skip_next="+ $fn"
skip_next="+ $fn" # skip the first trace line, which carries the function name
{
(
@@ -83,11 +83,11 @@ awk '
}
printf("### %s\n", title)
printf("%s() {\n", fname)
print " set -e"
print " set -e" # ошибка → падение теста
next
}
/^}$/ {
print " return 0"
print " return 0" # если автор не сделал exit 1 — тест ОК
print "}"
next
}


@@ -81,7 +81,6 @@ modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//main/capacity-p
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//flux/flux-control-plane.json
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//flux/flux-stats.json
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//kafka/strimzi-kafka.json
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//seaweedfs/seaweedfs.json
modules/340-monitoring-kubernetes/monitoring/grafana-dashboards//goldpinger/goldpinger.json
EOT

94
hack/e2e-apps.bats Executable file
View File

@@ -0,0 +1,94 @@
#!/usr/bin/env bats
# -----------------------------------------------------------------------------
# Cozystack end-to-end provisioning test (Bats)
# -----------------------------------------------------------------------------
@test "Create tenant with isolated mode enabled" {
kubectl create -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Tenant
metadata:
name: test
namespace: tenant-root
spec:
etcd: false
host: ""
ingress: false
isolated: true
monitoring: false
resourceQuotas: {}
seaweedfs: false
EOF
kubectl wait hr/tenant-test -n tenant-root --timeout=1m --for=condition=ready
kubectl wait namespace tenant-test --timeout=20s --for=jsonpath='{.status.phase}'=Active
}
@test "Create a tenant Kubernetes control plane" {
kubectl create -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Kubernetes
metadata:
name: test
namespace: tenant-test
spec:
addons:
certManager:
enabled: false
valuesOverride: {}
cilium:
valuesOverride: {}
fluxcd:
enabled: false
valuesOverride: {}
gatewayAPI:
enabled: false
gpuOperator:
enabled: false
valuesOverride: {}
ingressNginx:
enabled: true
hosts: []
valuesOverride: {}
monitoringAgents:
enabled: false
valuesOverride: {}
verticalPodAutoscaler:
valuesOverride: {}
controlPlane:
apiServer:
resources: {}
resourcesPreset: small
controllerManager:
resources: {}
resourcesPreset: micro
konnectivity:
server:
resources: {}
resourcesPreset: micro
replicas: 2
scheduler:
resources: {}
resourcesPreset: micro
host: ""
nodeGroups:
md0:
ephemeralStorage: 20Gi
gpus: []
instanceType: u1.medium
maxReplicas: 10
minReplicas: 0
resources:
cpu: ""
memory: ""
roles:
- ingress-nginx
storageClass: replicated
EOF
kubectl wait namespace tenant-test --timeout=20s --for=jsonpath='{.status.phase}'=Active
timeout 10 sh -ec 'until kubectl get kamajicontrolplane -n tenant-test kubernetes-test; do sleep 1; done'
kubectl wait --for=condition=TenantControlPlaneCreated kamajicontrolplane -n tenant-test kubernetes-test --timeout=4m
kubectl wait tcp -n tenant-test kubernetes-test --timeout=2m --for=jsonpath='{.status.kubernetesResources.version.status}'=Ready
kubectl wait deploy --timeout=4m --for=condition=available -n tenant-test kubernetes-test kubernetes-test-cluster-autoscaler kubernetes-test-kccm kubernetes-test-kcsi-controller
kubectl wait machinedeployment kubernetes-test-md0 -n tenant-test --timeout=1m --for=jsonpath='{.status.replicas}'=2
kubectl wait machinedeployment kubernetes-test-md0 -n tenant-test --timeout=5m --for=jsonpath='{.status.v1beta2.readyReplicas}'=2
}
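
A minimal sketch of running the new application suite locally, assuming bats and kubectl are installed and the current kubeconfig points at a Cozystack management cluster:
# run from the repository root; the @test blocks above execute in file order
bats hack/e2e-apps.bats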


@@ -1,47 +0,0 @@
#!/usr/bin/env bats
@test "Create and Verify Seeweedfs Bucket" {
# Create the bucket resource
name='test'
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Bucket
metadata:
name: ${name}
namespace: tenant-test
spec: {}
EOF
# Wait for the bucket to be ready
kubectl -n tenant-test wait hr bucket-${name} --timeout=100s --for=condition=ready
kubectl -n tenant-test wait bucketclaims.objectstorage.k8s.io bucket-${name} --timeout=300s --for=jsonpath='{.status.bucketReady}'
kubectl -n tenant-test wait bucketaccesses.objectstorage.k8s.io bucket-${name} --timeout=300s --for=jsonpath='{.status.accessGranted}'
# Get and decode credentials
kubectl -n tenant-test get secret bucket-${name} -ojsonpath='{.data.BucketInfo}' | base64 -d > bucket-test-credentials.json
# Get credentials from the secret
ACCESS_KEY=$(jq -r '.spec.secretS3.accessKeyID' bucket-test-credentials.json)
SECRET_KEY=$(jq -r '.spec.secretS3.accessSecretKey' bucket-test-credentials.json)
BUCKET_NAME=$(jq -r '.spec.bucketName' bucket-test-credentials.json)
# Start port-forwarding
bash -c 'timeout 100s kubectl port-forward service/seaweedfs-s3 -n tenant-root 8333:8333 > /dev/null 2>&1 &'
# Wait for port-forward to be ready
timeout 30 sh -ec 'until nc -z localhost 8333; do sleep 1; done'
# Set up MinIO client alias
mc alias set local https://localhost:8333 $ACCESS_KEY $SECRET_KEY --insecure
# Upload file to bucket
mc cp bucket-test-credentials.json $BUCKET_NAME/bucket-test-credentials.json
# Verify file was uploaded
mc ls $BUCKET_NAME/bucket-test-credentials.json
# Clean up uploaded file
mc rm $BUCKET_NAME/bucket-test-credentials.json
kubectl -n tenant-test delete bucket.apps.cozystack.io ${name}
}


@@ -1,46 +0,0 @@
#!/usr/bin/env bats
@test "Create DB ClickHouse" {
name='test'
kubectl apply -f- <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: ClickHouse
metadata:
name: $name
namespace: tenant-test
spec:
size: 10Gi
logStorageSize: 2Gi
shards: 1
replicas: 2
storageClass: ""
logTTL: 15
users:
testuser:
password: xai7Wepo
backup:
enabled: false
s3Region: us-east-1
s3Bucket: s3.example.org/clickhouse-backups
schedule: "0 2 * * *"
cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
clickhouseKeeper:
enabled: true
resourcesPreset: "micro"
size: "1Gi"
resources: {}
resourcesPreset: "nano"
EOF
sleep 5
kubectl -n tenant-test wait hr clickhouse-$name --timeout=20s --for=condition=ready
timeout 180 sh -ec "until kubectl -n tenant-test get svc chendpoint-clickhouse-$name -o jsonpath='{.spec.ports[*].port}' | grep -q '8123 9000'; do sleep 10; done"
kubectl -n tenant-test wait statefulset.apps/chi-clickhouse-$name-clickhouse-0-0 --timeout=120s --for=jsonpath='{.status.replicas}'=1
timeout 80 sh -ec "until kubectl -n tenant-test get endpoints chi-clickhouse-$name-clickhouse-0-0 -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
timeout 100 sh -ec "until kubectl -n tenant-test get svc chi-clickhouse-$name-clickhouse-0-0 -o jsonpath='{.spec.ports[*].port}' | grep -q '9000 8123 9009'; do sleep 10; done"
timeout 80 sh -ec "until kubectl -n tenant-test get sts chi-clickhouse-$name-clickhouse-0-1 ; do sleep 10; done"
kubectl -n tenant-test wait statefulset.apps/chi-clickhouse-$name-clickhouse-0-1 --timeout=140s --for=jsonpath='{.status.replicas}'=1
kubectl -n tenant-test delete clickhouse $name
}


@@ -1,44 +0,0 @@
#!/usr/bin/env bats
@test "Create DB FerretDB" {
name='test'
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: FerretDB
metadata:
name: $name
namespace: tenant-test
spec:
backup:
destinationPath: "s3://bucket/path/to/folder/"
enabled: false
endpointURL: "http://minio-gateway-service:9000"
retentionPolicy: "30d"
s3AccessKey: "<your-access-key>"
s3SecretKey: "<your-secret-key>"
schedule: "0 2 * * * *"
bootstrap:
enabled: false
external: false
quorum:
maxSyncReplicas: 0
minSyncReplicas: 0
replicas: 2
resources: {}
resourcesPreset: "micro"
size: "10Gi"
users:
testuser:
password: xai7Wepo
EOF
sleep 5
kubectl -n tenant-test wait hr ferretdb-$name --timeout=100s --for=condition=ready
timeout 40 sh -ec "until kubectl -n tenant-test get svc ferretdb-$name-postgres-r -o jsonpath='{.spec.ports[0].port}' | grep -q '5432'; do sleep 10; done"
timeout 40 sh -ec "until kubectl -n tenant-test get svc ferretdb-$name-postgres-ro -o jsonpath='{.spec.ports[0].port}' | grep -q '5432'; do sleep 10; done"
timeout 40 sh -ec "until kubectl -n tenant-test get svc ferretdb-$name-postgres-rw -o jsonpath='{.spec.ports[0].port}' | grep -q '5432'; do sleep 10; done"
timeout 120 sh -ec "until kubectl -n tenant-test get endpoints ferretdb-$name-postgres-r -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
# for some reason it takes longer for the read-only endpoint to be ready
#timeout 120 sh -ec "until kubectl -n tenant-test get endpoints ferretdb-$name-postgres-ro -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
timeout 120 sh -ec "until kubectl -n tenant-test get endpoints ferretdb-$name-postgres-rw -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test delete ferretdb.apps.cozystack.io $name
}


@@ -1,121 +0,0 @@
#!/usr/bin/env bats
@test "Create DB FoundationDB" {
name='test'
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: FoundationDB
metadata:
name: $name
namespace: tenant-test
spec:
cluster:
version: "7.3.63"
processCounts:
storage: 3
stateless: -1
cluster_controller: 1
redundancyMode: "double"
storageEngine: "ssd-2"
faultDomain:
key: "foundationdb.org/none"
valueFrom: "\$FDB_ZONE_ID"
storage:
size: "1Gi"
storageClass: ""
resourcesPreset: "small"
backup:
enabled: false
s3:
bucket: ""
endpoint: ""
region: ""
credentials:
accessKeyId: ""
secretAccessKey: ""
retentionPolicy: "7d"
monitoring:
enabled: true
customParameters:
- "knob_disable_posix_kernel_aio=1"
imageType: "unified"
automaticReplacements: true
EOF
sleep 15
# Wait for HelmRelease to be ready
kubectl -n tenant-test wait hr foundationdb-$name --timeout=300s --for=condition=ready
# Wait for FoundationDBCluster to be created (name has foundationdb- prefix)
timeout 300 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name; do sleep 15; done"
# Wait for cluster to become available (initial reconciliation takes time - allow 5 minutes)
timeout 300 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.databaseConfiguration.usable_regions}' | grep -q '1'; do sleep 30; done"
# Check that storage processes are running
timeout 300 sh -ec "until [ \$(kubectl -n tenant-test get pods -l foundationdb.org/fdb-cluster-name=foundationdb-$name,foundationdb.org/fdb-process-class=storage --field-selector=status.phase=Running --no-headers | wc -l) -eq 3 ]; do sleep 15; done"
# Check that log processes are running (these are the stateless processes)
timeout 300 sh -ec "until [ \$(kubectl -n tenant-test get pods -l foundationdb.org/fdb-cluster-name=foundationdb-$name,foundationdb.org/fdb-process-class=log --field-selector=status.phase=Running --no-headers | wc -l) -ge 1 ]; do sleep 15; done"
# Check that cluster controller is running
timeout 300 sh -ec "until [ \$(kubectl -n tenant-test get pods -l foundationdb.org/fdb-cluster-name=foundationdb-$name,foundationdb.org/fdb-process-class=cluster_controller --field-selector=status.phase=Running --no-headers | wc -l) -eq 1 ]; do sleep 15; done"
# Check WorkloadMonitor is created and configured
timeout 120 sh -ec "until kubectl -n tenant-test get workloadmonitor foundationdb-$name; do sleep 10; done"
timeout 60 sh -ec "until kubectl -n tenant-test get workloadmonitor foundationdb-$name -o jsonpath='{.spec.replicas}' | grep -q '3'; do sleep 5; done"
# Check dashboard resource map is created
kubectl -n tenant-test get configmap foundationdb-$name-resourcemap
# Verify cluster is healthy (check cluster status) - allow extra time for initial setup
timeout 300 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.health.available}' | grep -q 'true'; do sleep 20; done"
# Validate status.configured field
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.configured}' | grep -q 'true'; do sleep 10; done"
# Validate status.connectionString field exists and contains expected format
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.connectionString}' | grep -q '@.*\.svc\.cozy\.local'; do sleep 10; done"
# Validate comprehensive status.databaseConfiguration fields
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.databaseConfiguration.logs}' | grep -q '3'; do sleep 10; done"
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.databaseConfiguration.proxies}' | grep -q '3'; do sleep 10; done"
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.databaseConfiguration.redundancy_mode}' | grep -q 'double'; do sleep 10; done"
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.databaseConfiguration.resolvers}' | grep -q '1'; do sleep 10; done"
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.databaseConfiguration.storage_engine}' | grep -q 'ssd-2'; do sleep 10; done"
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.databaseConfiguration.usable_regions}' | grep -q '1'; do sleep 10; done"
# Validate status.desiredProcessGroups field
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.desiredProcessGroups}' | grep -q '^[0-9][0-9]*$'; do sleep 10; done"
# Validate status.generations.reconciled field
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.generations.reconciled}' | grep -q '^[0-9][0-9]*$'; do sleep 10; done"
# Validate status.hasListenIPsForAllPods field
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.hasListenIPsForAllPods}' | grep -q 'true'; do sleep 10; done"
# Validate comprehensive status.health fields
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.health.fullReplication}' | grep -q 'true'; do sleep 10; done"
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.status.health.healthy}' | grep -q 'true'; do sleep 10; done"
# Verify security context is applied correctly (non-root user)
storage_pod=$(kubectl -n tenant-test get pods -l foundationdb.org/fdb-cluster-name=foundationdb-$name,foundationdb.org/fdb-process-class=storage --no-headers | head -n1 | awk '{print $1}')
kubectl -n tenant-test get pod "$storage_pod" -o jsonpath='{.spec.containers[0].securityContext.runAsUser}' | grep -q '4059'
kubectl -n tenant-test get pod "$storage_pod" -o jsonpath='{.spec.containers[0].securityContext.runAsGroup}' | grep -q '4059'
# Verify volumeClaimTemplate is properly configured in FoundationDBCluster CRD
timeout 60 sh -ec "until kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name -o jsonpath='{.spec.processes.general.volumeClaimTemplate.spec.resources.requests.storage}' | grep -q '1Gi'; do sleep 10; done"
# Verify PVCs are created with correct storage size (1Gi as specified in test)
timeout 120 sh -ec "until [ \$(kubectl -n tenant-test get pvc -l foundationdb.org/fdb-cluster-name=foundationdb-$name --no-headers | wc -l) -ge 3 ]; do sleep 10; done"
kubectl -n tenant-test get pvc -l foundationdb.org/fdb-cluster-name=foundationdb-$name -o jsonpath='{.items[*].spec.resources.requests.storage}' | grep -q '1Gi'
# Verify actual PVC storage capacity matches requested size
kubectl -n tenant-test get pvc -l foundationdb.org/fdb-cluster-name=foundationdb-$name -o jsonpath='{.items[*].status.capacity.storage}' | grep -q '1Gi'
# Clean up
kubectl -n tenant-test delete foundationdb $name
# Wait for cleanup to complete
timeout 120 sh -ec "while kubectl -n tenant-test get foundationdbclusters.apps.foundationdb.org foundationdb-$name 2>/dev/null; do sleep 10; done"
}


@@ -1,51 +0,0 @@
#!/usr/bin/env bats
@test "Create Kafka" {
name='test'
kubectl apply -f- <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Kafka
metadata:
name: $name
namespace: tenant-test
spec:
external: false
kafka:
size: 10Gi
replicas: 2
storageClass: ""
resources: {}
resourcesPreset: "nano"
zookeeper:
size: 5Gi
replicas: 2
storageClass: ""
resources:
resourcesPreset: "nano"
topics:
- name: testResults
partitions: 1
replicas: 2
config:
min.insync.replicas: 2
- name: testOrders
config:
cleanup.policy: compact
segment.ms: 3600000
max.compaction.lag.ms: 5400000
min.insync.replicas: 2
partitions: 1
replicas: 2
EOF
sleep 5
kubectl -n tenant-test wait hr kafka-$name --timeout=30s --for=condition=ready
kubectl wait kafkas -n tenant-test test --timeout=60s --for=condition=ready
timeout 60 sh -ec "until kubectl -n tenant-test get pvc data-kafka-$name-zookeeper-0; do sleep 10; done"
kubectl -n tenant-test wait pvc data-kafka-$name-zookeeper-0 --timeout=50s --for=jsonpath='{.status.phase}'=Bound
timeout 40 sh -ec "until kubectl -n tenant-test get svc kafka-$name-zookeeper-client -o jsonpath='{.spec.ports[0].port}' | grep -q '2181'; do sleep 10; done"
timeout 40 sh -ec "until kubectl -n tenant-test get svc kafka-$name-zookeeper-nodes -o jsonpath='{.spec.ports[*].port}' | grep -q '2181 2888 3888'; do sleep 10; done"
timeout 80 sh -ec "until kubectl -n tenant-test get endpoints kafka-$name-zookeeper-nodes -o jsonpath='{.subsets[*].addresses[0].ip}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test delete kafka.apps.cozystack.io $name
kubectl -n tenant-test delete pvc data-kafka-$name-zookeeper-0
kubectl -n tenant-test delete pvc data-kafka-$name-zookeeper-1
}


@@ -1,6 +0,0 @@
#!/usr/bin/env bats
@test "Create a tenant Kubernetes control plane with latest version" {
. hack/e2e-apps/run-kubernetes.sh
run_kubernetes_test 'keys | sort_by(.) | .[-1]' 'test-latest-version' '59991'
}


@@ -1,6 +0,0 @@
#!/usr/bin/env bats
@test "Create a tenant Kubernetes control plane with previous version" {
. hack/e2e-apps/run-kubernetes.sh
run_kubernetes_test 'keys | sort_by(.) | .[-2]' 'test-previous-version' '59992'
}


@@ -1,46 +0,0 @@
#!/usr/bin/env bats
@test "Create DB MySQL" {
name='test'
kubectl apply -f- <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: MySQL
metadata:
name: $name
namespace: tenant-test
spec:
external: false
size: 10Gi
replicas: 2
storageClass: ""
users:
testuser:
maxUserConnections: 1000
password: xai7Wepo
databases:
testdb:
roles:
admin:
- testuser
backup:
enabled: false
s3Region: us-east-1
s3Bucket: s3.example.org/postgres-backups
schedule: "0 2 * * *"
cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
resources: {}
resourcesPreset: "nano"
EOF
sleep 5
kubectl -n tenant-test wait hr mysql-$name --timeout=30s --for=condition=ready
timeout 80 sh -ec "until kubectl -n tenant-test get svc mysql-$name -o jsonpath='{.spec.ports[0].port}' | grep -q '3306'; do sleep 10; done"
timeout 80 sh -ec "until kubectl -n tenant-test get endpoints mysql-$name -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test wait statefulset.apps/mysql-$name --timeout=110s --for=jsonpath='{.status.replicas}'=2
timeout 80 sh -ec "until kubectl -n tenant-test get svc mysql-$name-metrics -o jsonpath='{.spec.ports[0].port}' | grep -q '9104'; do sleep 10; done"
timeout 40 sh -ec "until kubectl -n tenant-test get endpoints mysql-$name-metrics -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test wait deployment.apps/mysql-$name-metrics --timeout=90s --for=jsonpath='{.status.replicas}'=1
kubectl -n tenant-test delete mysqls.apps.cozystack.io $name
}


@@ -1,54 +0,0 @@
#!/usr/bin/env bats
@test "Create DB PostgreSQL" {
name='test'
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Postgres
metadata:
name: $name
namespace: tenant-test
spec:
external: false
size: 10Gi
replicas: 2
storageClass: ""
postgresql:
parameters:
max_connections: 100
quorum:
minSyncReplicas: 0
maxSyncReplicas: 0
users:
testuser:
password: xai7Wepo
databases:
testdb:
roles:
admin:
- testuser
backup:
enabled: false
s3Region: us-east-1
s3Bucket: s3.example.org/postgres-backups
schedule: "0 2 * * *"
cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
resources: {}
resourcesPreset: "nano"
EOF
sleep 5
kubectl -n tenant-test wait hr postgres-$name --timeout=100s --for=condition=ready
kubectl -n tenant-test wait job.batch postgres-$name-init-job --timeout=50s --for=condition=Complete
timeout 40 sh -ec "until kubectl -n tenant-test get svc postgres-$name-r -o jsonpath='{.spec.ports[0].port}' | grep -q '5432'; do sleep 10; done"
timeout 40 sh -ec "until kubectl -n tenant-test get svc postgres-$name-ro -o jsonpath='{.spec.ports[0].port}' | grep -q '5432'; do sleep 10; done"
timeout 40 sh -ec "until kubectl -n tenant-test get svc postgres-$name-rw -o jsonpath='{.spec.ports[0].port}' | grep -q '5432'; do sleep 10; done"
timeout 120 sh -ec "until kubectl -n tenant-test get endpoints postgres-$name-r -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
# for some reason it takes longer for the read-only endpoint to be ready
#timeout 120 sh -ec "until kubectl -n tenant-test get endpoints postgres-$name-ro -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
timeout 120 sh -ec "until kubectl -n tenant-test get endpoints postgres-$name-rw -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test delete postgreses.apps.cozystack.io $name
kubectl -n tenant-test delete job.batch/postgres-$name-init-job
}


@@ -1,26 +0,0 @@
#!/usr/bin/env bats
@test "Create Redis" {
name='test'
kubectl apply -f- <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Redis
metadata:
name: $name
namespace: tenant-test
spec:
external: false
size: 1Gi
replicas: 2
storageClass: ""
authEnabled: true
resources: {}
resourcesPreset: "nano"
EOF
sleep 5
kubectl -n tenant-test wait hr redis-$name --timeout=20s --for=condition=ready
kubectl -n tenant-test wait pvc redisfailover-persistent-data-rfr-redis-$name-0 --timeout=50s --for=jsonpath='{.status.phase}'=Bound
kubectl -n tenant-test wait deploy rfs-redis-$name --timeout=90s --for=condition=available
kubectl -n tenant-test wait sts rfr-redis-$name --timeout=90s --for=jsonpath='{.status.replicas}'=2
kubectl -n tenant-test delete redis.apps.cozystack.io $name
}


@@ -1,137 +0,0 @@
run_kubernetes_test() {
local version_expr="$1"
local test_name="$2"
local port="$3"
local k8s_version=$(yq "$version_expr" packages/apps/kubernetes/files/versions.yaml)
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Kubernetes
metadata:
name: "${test_name}"
namespace: tenant-test
spec:
addons:
certManager:
enabled: false
valuesOverride: {}
cilium:
valuesOverride: {}
fluxcd:
enabled: false
valuesOverride: {}
gatewayAPI:
enabled: false
gpuOperator:
enabled: false
valuesOverride: {}
ingressNginx:
enabled: true
hosts: []
valuesOverride: {}
monitoringAgents:
enabled: false
valuesOverride: {}
verticalPodAutoscaler:
valuesOverride: {}
controlPlane:
apiServer:
resources: {}
resourcesPreset: small
controllerManager:
resources: {}
resourcesPreset: micro
konnectivity:
server:
resources: {}
resourcesPreset: micro
replicas: 2
scheduler:
resources: {}
resourcesPreset: micro
host: ""
nodeGroups:
md0:
ephemeralStorage: 20Gi
gpus: []
instanceType: u1.medium
maxReplicas: 10
minReplicas: 0
roles:
- ingress-nginx
storageClass: replicated
version: "${k8s_version}"
EOF
# Wait for the tenant-test namespace to be active
kubectl wait namespace tenant-test --timeout=20s --for=jsonpath='{.status.phase}'=Active
# Wait for the Kamaji control plane to be created (retry for up to 10 seconds)
timeout 10 sh -ec 'until kubectl get kamajicontrolplane -n tenant-test kubernetes-'"${test_name}"'; do sleep 1; done'
# Wait for the tenant control plane to be fully created (timeout after 4 minutes)
kubectl wait --for=condition=TenantControlPlaneCreated kamajicontrolplane -n tenant-test kubernetes-${test_name} --timeout=4m
# Wait for Kubernetes resources to be ready (timeout after 2 minutes)
kubectl wait tcp -n tenant-test kubernetes-${test_name} --timeout=2m --for=jsonpath='{.status.kubernetesResources.version.status}'=Ready
# Wait for all required deployments to be available (timeout after 4 minutes)
kubectl wait deploy --timeout=4m --for=condition=available -n tenant-test kubernetes-${test_name} kubernetes-${test_name}-cluster-autoscaler kubernetes-${test_name}-kccm kubernetes-${test_name}-kcsi-controller
# Wait for the machine deployment to scale to 2 replicas (timeout after 1 minute)
kubectl wait machinedeployment kubernetes-${test_name}-md0 -n tenant-test --timeout=1m --for=jsonpath='{.status.replicas}'=2
# Get the admin kubeconfig and save it to a file
kubectl get secret kubernetes-${test_name}-admin-kubeconfig -ojsonpath='{.data.super-admin\.conf}' -n tenant-test | base64 -d > tenantkubeconfig-${test_name}
# Update the kubeconfig to use localhost for the API server
yq -i ".clusters[0].cluster.server = \"https://localhost:${port}\"" tenantkubeconfig-${test_name}
# Set up port forwarding to the Kubernetes API server in the background (300-second timeout)
bash -c 'timeout 300s kubectl port-forward service/kubernetes-'"${test_name}"' -n tenant-test '"${port}"':6443 > /dev/null 2>&1 &'
# Verify the Kubernetes version matches what we expect (retry for up to 20 seconds)
timeout 20 sh -ec 'until kubectl --kubeconfig tenantkubeconfig-'"${test_name}"' version 2>/dev/null | grep -Fq "Server Version: ${k8s_version}"; do sleep 5; done'
# Wait for the nodes to be ready (timeout after 3 minutes)
timeout 3m bash -c '
until [ "$(kubectl --kubeconfig tenantkubeconfig-'"${test_name}"' get nodes -o jsonpath="{.items[*].metadata.name}" | wc -w)" -eq 2 ]; do
sleep 2
done
'
# Verify the nodes are ready
kubectl --kubeconfig tenantkubeconfig-${test_name} wait node --all --timeout=2m --for=condition=Ready
kubectl --kubeconfig tenantkubeconfig-${test_name} get nodes -o wide
# Verify the kubelet version matches what we expect
versions=$(kubectl --kubeconfig "tenantkubeconfig-${test_name}" \
get nodes -o jsonpath='{.items[*].status.nodeInfo.kubeletVersion}')
node_ok=true
for v in $versions; do
case "$v" in
"${k8s_version}" | "${k8s_version}".* | "${k8s_version}"-*)
# acceptable
;;
*)
node_ok=false
break
;;
esac
done
if [ "$node_ok" != true ]; then
echo "Kubelet versions did not match expected ${k8s_version}" >&2
exit 1
fi
# Wait for all machine deployment replicas to be ready (timeout after 10 minutes)
kubectl wait machinedeployment kubernetes-${test_name}-md0 -n tenant-test --timeout=10m --for=jsonpath='{.status.v1beta2.readyReplicas}'=2
for component in cilium coredns csi ingress-nginx vsnap-crd; do
kubectl wait hr kubernetes-${test_name}-${component} -n tenant-test --timeout=1m --for=condition=ready
done
# Clean up by deleting the Kubernetes resource
kubectl -n tenant-test delete kuberneteses.apps.cozystack.io $test_name
}


@@ -1,45 +0,0 @@
#!/usr/bin/env bats
@test "Create a Virtual Machine" {
name='test'
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: VirtualMachine
metadata:
name: $name
namespace: tenant-test
spec:
external: false
externalMethod: PortList
externalPorts:
- 22
instanceType: "u1.medium"
instanceProfile: ubuntu
systemDisk:
image: ubuntu
storage: 5Gi
storageClass: replicated
gpus: []
resources: {}
sshKeys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF
test@test
cloudInit: |
#cloud-config
users:
- name: test
shell: /bin/bash
sudo: ['ALL=(ALL) NOPASSWD: ALL']
groups: sudo
ssh_authorized_keys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF test@test
cloudInitSeed: ""
EOF
sleep 5
kubectl -n tenant-test wait hr virtual-machine-$name --timeout=10s --for=condition=ready
kubectl -n tenant-test wait dv virtual-machine-$name --timeout=150s --for=condition=ready
kubectl -n tenant-test wait pvc virtual-machine-$name --timeout=100s --for=jsonpath='{.status.phase}'=Bound
kubectl -n tenant-test wait vm virtual-machine-$name --timeout=100s --for=condition=ready
timeout 120 sh -ec "until kubectl -n tenant-test get vmi virtual-machine-$name -o jsonpath='{.status.interfaces[0].ipAddress}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test delete virtualmachines.apps.cozystack.io $name
}


@@ -1,65 +0,0 @@
#!/usr/bin/env bats
@test "Create a VM Disk" {
name='test'
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: VMDisk
metadata:
name: $name
namespace: tenant-test
spec:
source:
http:
url: https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
optical: false
storage: 5Gi
storageClass: replicated
EOF
sleep 5
kubectl -n tenant-test wait hr vm-disk-$name --timeout=5s --for=condition=ready
kubectl -n tenant-test wait dv vm-disk-$name --timeout=250s --for=condition=ready
kubectl -n tenant-test wait pvc vm-disk-$name --timeout=200s --for=jsonpath='{.status.phase}'=Bound
}
@test "Create a VM Instance" {
diskName='test'
name='test'
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: VMInstance
metadata:
name: $name
namespace: tenant-test
spec:
external: false
externalMethod: PortList
externalPorts:
- 22
running: true
instanceType: "u1.medium"
instanceProfile: ubuntu
disks:
- name: $diskName
gpus: []
sshKeys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF
test@test
cloudInit: |
#cloud-config
users:
- name: test
shell: /bin/bash
sudo: ['ALL=(ALL) NOPASSWD: ALL']
groups: sudo
ssh_authorized_keys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF test@test
cloudInitSeed: ""
EOF
sleep 5
timeout 20 sh -ec "until kubectl -n tenant-test get vmi vm-instance-$name -o jsonpath='{.status.interfaces[0].ipAddress}' | grep -q '[0-9]'; do sleep 5; done"
kubectl -n tenant-test wait hr vm-instance-$name --timeout=5s --for=condition=ready
kubectl -n tenant-test wait vm vm-instance-$name --timeout=20s --for=condition=ready
kubectl -n tenant-test delete vminstances.apps.cozystack.io $name
kubectl -n tenant-test delete vmdisks.apps.cozystack.io $diskName
}

hack/e2e-cluster.bats Executable file (391 lines added)

@@ -0,0 +1,391 @@
#!/usr/bin/env bats
# -----------------------------------------------------------------------------
# Cozystack end-to-end provisioning test (Bats)
# -----------------------------------------------------------------------------
@test "Required installer assets exist" {
if [ ! -f _out/assets/cozystack-installer.yaml ]; then
echo "Missing: _out/assets/cozystack-installer.yaml" >&2
exit 1
fi
if [ ! -f _out/assets/nocloud-amd64.raw.xz ]; then
echo "Missing: _out/assets/nocloud-amd64.raw.xz" >&2
exit 1
fi
}
@test "IPv4 forwarding is enabled" {
if [ "$(cat /proc/sys/net/ipv4/ip_forward)" != 1 ]; then
echo "IPv4 forwarding is disabled!" >&2
echo >&2
echo "Enable it with:" >&2
echo " echo 1 > /proc/sys/net/ipv4/ip_forward" >&2
exit 1
fi
}
@test "Clean previous VMs" {
kill $(cat srv1/qemu.pid srv2/qemu.pid srv3/qemu.pid 2>/dev/null) 2>/dev/null || true
rm -rf srv1 srv2 srv3
}
@test "Prepare networking and masquerading" {
ip link del cozy-br0 2>/dev/null || true
ip link add cozy-br0 type bridge
ip link set cozy-br0 up
ip address add 192.168.123.1/24 dev cozy-br0
# Make the masquerading rule idempotent (delete first, then add)
iptables -t nat -D POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE 2>/dev/null || true
iptables -t nat -A POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE
}
@test "Prepare cloudinit drive for VMs" {
mkdir -p srv1 srv2 srv3
# Generate cloud-init ISOs
for i in 1 2 3; do
echo "hostname: srv${i}" > "srv${i}/meta-data"
cat > "srv${i}/user-data" <<'EOF'
#cloud-config
EOF
cat > "srv${i}/network-config" <<EOF
version: 2
ethernets:
eth0:
dhcp4: false
addresses:
- "192.168.123.1${i}/26"
gateway4: "192.168.123.1"
nameservers:
search: [cluster.local]
addresses: [8.8.8.8]
EOF
( cd "srv${i}" && genisoimage \
-output seed.img \
-volid cidata -rational-rock -joliet \
user-data meta-data network-config )
done
}
@test "Use Talos NoCloud image from assets" {
if [ ! -f _out/assets/nocloud-amd64.raw.xz ]; then
echo "Missing _out/assets/nocloud-amd64.raw.xz" 2>&1
exit 1
fi
rm -f nocloud-amd64.raw
cp _out/assets/nocloud-amd64.raw.xz .
xz --decompress nocloud-amd64.raw.xz
}
@test "Prepare VM disks" {
for i in 1 2 3; do
cp nocloud-amd64.raw srv${i}/system.img
qemu-img resize srv${i}/system.img 20G
qemu-img create srv${i}/data.img 100G
done
}
@test "Create tap devices" {
for i in 1 2 3; do
ip link del cozy-srv${i} 2>/dev/null || true
ip tuntap add dev cozy-srv${i} mode tap
ip link set cozy-srv${i} up
ip link set cozy-srv${i} master cozy-br0
done
}
@test "Boot QEMU VMs" {
for i in 1 2 3; do
qemu-system-x86_64 -machine type=pc,accel=kvm -cpu host -smp 8 -m 16384 \
-device virtio-net,netdev=net0,mac=52:54:00:12:34:5${i} \
-netdev tap,id=net0,ifname=cozy-srv${i},script=no,downscript=no \
-drive file=srv${i}/system.img,if=virtio,format=raw \
-drive file=srv${i}/seed.img,if=virtio,format=raw \
-drive file=srv${i}/data.img,if=virtio,format=raw \
-display none -daemonize -pidfile srv${i}/qemu.pid
done
# Give qemu a few seconds to start up networking
sleep 5
}
@test "Wait until Talos API port 50000 is reachable on all machines" {
timeout 60 sh -ec 'until nc -nz 192.168.123.11 50000 && nc -nz 192.168.123.12 50000 && nc -nz 192.168.123.13 50000; do sleep 1; done'
}
@test "Generate Talos cluster configuration" {
# Cluster-wide patches
cat > patch.yaml <<'EOF'
machine:
kubelet:
nodeIP:
validSubnets:
- 192.168.123.0/24
extraConfig:
maxPods: 512
kernel:
modules:
- name: openvswitch
- name: drbd
parameters:
- usermode_helper=disabled
- name: zfs
- name: spl
registries:
mirrors:
docker.io:
endpoints:
- https://mirror.gcr.io
files:
- content: |
[plugins]
[plugins."io.containerd.cri.v1.runtime"]
device_ownership_from_security_context = true
path: /etc/cri/conf.d/20-customization.part
op: create
cluster:
apiServer:
extraArgs:
oidc-issuer-url: "https://keycloak.example.org/realms/cozy"
oidc-client-id: "kubernetes"
oidc-username-claim: "preferred_username"
oidc-groups-claim: "groups"
network:
cni:
name: none
dnsDomain: cozy.local
podSubnets:
- 10.244.0.0/16
serviceSubnets:
- 10.96.0.0/16
EOF
# Control-plane-only patches
cat > patch-controlplane.yaml <<'EOF'
machine:
nodeLabels:
node.kubernetes.io/exclude-from-external-load-balancers:
$patch: delete
network:
interfaces:
- interface: eth0
vip:
ip: 192.168.123.10
cluster:
allowSchedulingOnControlPlanes: true
controllerManager:
extraArgs:
bind-address: 0.0.0.0
scheduler:
extraArgs:
bind-address: 0.0.0.0
apiServer:
certSANs:
- 127.0.0.1
proxy:
disabled: true
discovery:
enabled: false
etcd:
advertisedSubnets:
- 192.168.123.0/24
EOF
# Generate secrets once
if [ ! -f secrets.yaml ]; then
talosctl gen secrets
fi
rm -f controlplane.yaml worker.yaml talosconfig kubeconfig
talosctl gen config --with-secrets secrets.yaml cozystack https://192.168.123.10:6443 \
--config-patch=@patch.yaml --config-patch-control-plane @patch-controlplane.yaml
}
@test "Apply Talos configuration to the node" {
# Apply the configuration to all three nodes
for node in 11 12 13; do
talosctl apply -f controlplane.yaml -n 192.168.123.${node} -e 192.168.123.${node} -i
done
# Wait for Talos services to come up again
timeout 60 sh -ec 'until nc -nz 192.168.123.11 50000 && nc -nz 192.168.123.12 50000 && nc -nz 192.168.123.13 50000; do sleep 1; done'
}
@test "Bootstrap Talos cluster" {
# Bootstrap etcd on the first node
timeout 10 sh -ec 'until talosctl bootstrap -n 192.168.123.11 -e 192.168.123.11; do sleep 1; done'
# Wait until etcd is healthy
timeout 180 sh -ec 'until talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 >/dev/null 2>&1; do sleep 1; done'
timeout 60 sh -ec 'while talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 2>&1 | grep -q "rpc error"; do sleep 1; done'
# Retrieve kubeconfig
rm -f kubeconfig
talosctl kubeconfig kubeconfig -e 192.168.123.10 -n 192.168.123.10
# Wait until all three nodes register in Kubernetes
timeout 60 sh -ec 'until [ $(kubectl get node --no-headers | wc -l) -eq 3 ]; do sleep 1; done'
}
@test "Install Cozystack" {
# Create namespace & configmap required by installer
kubectl create namespace cozy-system --dry-run=client -o yaml | kubectl apply -f -
kubectl create configmap cozystack -n cozy-system \
--from-literal=bundle-name=paas-full \
--from-literal=ipv4-pod-cidr=10.244.0.0/16 \
--from-literal=ipv4-pod-gateway=10.244.0.1 \
--from-literal=ipv4-svc-cidr=10.96.0.0/16 \
--from-literal=ipv4-join-cidr=100.64.0.0/16 \
--from-literal=root-host=example.org \
--from-literal=api-server-endpoint=https://192.168.123.10:6443 \
--dry-run=client -o yaml | kubectl apply -f -
# Apply installer manifests from file
kubectl apply -f _out/assets/cozystack-installer.yaml
# Wait for the installer deployment to become available
kubectl wait deployment/cozystack -n cozy-system --timeout=1m --for=condition=Available
# Wait until HelmReleases appear & reconcile them
timeout 60 sh -ec 'until kubectl get hr -A | grep -q cozys; do sleep 1; done'
sleep 5
kubectl get hr -A | awk 'NR>1 {print "kubectl wait --timeout=15m --for=condition=ready -n "$1" hr/"$2" &"} END {print "wait"}' | sh -ex
# Fail the test if any HelmRelease is not Ready
if kubectl get hr -A | grep -v " True " | grep -v NAME; then
kubectl get hr -A
fail "Some HelmReleases failed to reconcile"
fi
}
@test "Wait for ClusterAPI provider deployments" {
# Wait for ClusterAPI provider deployments
timeout 60 sh -ec 'until kubectl get deploy -n cozy-cluster-api capi-controller-manager capi-kamaji-controller-manager capi-kubeadm-bootstrap-controller-manager capi-operator-cluster-api-operator capk-controller-manager >/dev/null 2>&1; do sleep 1; done'
kubectl wait deployment/capi-controller-manager deployment/capi-kamaji-controller-manager deployment/capi-kubeadm-bootstrap-controller-manager deployment/capi-operator-cluster-api-operator deployment/capk-controller-manager -n cozy-cluster-api --timeout=1m --for=condition=available
}
@test "Wait for LINSTOR and configure storage" {
# Linstor controller and nodes
kubectl wait deployment/linstor-controller -n cozy-linstor --timeout=5m --for=condition=available
timeout 60 sh -ec 'until [ $(kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor node list | grep -c Online) -eq 3 ]; do sleep 1; done'
for node in srv1 srv2 srv3; do
kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor ps cdp zfs ${node} /dev/vdc --pool-name data --storage-pool data
done
# Storage classes
kubectl apply -f - <<'EOF'
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: linstor.csi.linbit.com
parameters:
linstor.csi.linbit.com/storagePool: "data"
linstor.csi.linbit.com/layerList: "storage"
linstor.csi.linbit.com/allowRemoteVolumeAccess: "false"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: replicated
provisioner: linstor.csi.linbit.com
parameters:
linstor.csi.linbit.com/storagePool: "data"
linstor.csi.linbit.com/autoPlace: "3"
linstor.csi.linbit.com/layerList: "drbd storage"
linstor.csi.linbit.com/allowRemoteVolumeAccess: "true"
property.linstor.csi.linbit.com/DrbdOptions/auto-quorum: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-no-data-accessible: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-suspended-primary-outdated: force-secondary
property.linstor.csi.linbit.com/DrbdOptions/Net/rr-conflict: retry-connect
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF
}
@test "Wait for MetalLB and configure address pool" {
# MetalLB address pool
kubectl apply -f - <<'EOF'
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: cozystack
namespace: cozy-metallb
spec:
ipAddressPools: [cozystack]
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: cozystack
namespace: cozy-metallb
spec:
addresses: [192.168.123.200-192.168.123.250]
autoAssign: true
avoidBuggyIPs: false
EOF
}
@test "Check Cozystack API service" {
kubectl wait --for=condition=Available apiservices/v1alpha1.apps.cozystack.io --timeout=2m
}
@test "Configure Tenant and wait for applications" {
# Patch root tenant and wait for its releases
kubectl patch tenants/root -n tenant-root --type merge -p '{"spec":{"host":"example.org","ingress":true,"monitoring":true,"etcd":true,"isolated":true}}'
timeout 60 sh -ec 'until kubectl get hr -n tenant-root etcd ingress monitoring tenant-root >/dev/null 2>&1; do sleep 1; done'
kubectl wait hr/etcd hr/ingress hr/tenant-root -n tenant-root --timeout=2m --for=condition=ready
if ! kubectl wait hr/monitoring -n tenant-root --timeout=2m --for=condition=ready; then
flux reconcile hr monitoring -n tenant-root --force
kubectl wait hr/monitoring -n tenant-root --timeout=2m --for=condition=ready
fi
# Expose Cozystack services through ingress
kubectl patch configmap/cozystack -n cozy-system --type merge -p '{"data":{"expose-services":"api,dashboard,cdi-uploadproxy,vm-exportproxy,keycloak"}}'
# NGINX ingress controller
timeout 60 sh -ec 'until kubectl get deploy root-ingress-controller -n tenant-root >/dev/null 2>&1; do sleep 1; done'
kubectl wait deploy/root-ingress-controller -n tenant-root --timeout=5m --for=condition=available
# etcd statefulset
kubectl wait sts/etcd -n tenant-root --for=jsonpath='{.status.readyReplicas}'=3 --timeout=5m
# VictoriaMetrics components
kubectl wait vmalert/vmalert-shortterm vmalertmanager/alertmanager -n tenant-root --for=jsonpath='{.status.updateStatus}'=operational --timeout=5m
kubectl wait vlogs/generic -n tenant-root --for=jsonpath='{.status.updateStatus}'=operational --timeout=5m
kubectl wait vmcluster/shortterm vmcluster/longterm -n tenant-root --for=jsonpath='{.status.clusterStatus}'=operational --timeout=5m
# Grafana
kubectl wait clusters.postgresql.cnpg.io/grafana-db -n tenant-root --for=condition=ready --timeout=5m
kubectl wait deploy/grafana-deployment -n tenant-root --for=condition=available --timeout=5m
# Verify Grafana via ingress
ingress_ip=$(kubectl get svc root-ingress-controller -n tenant-root -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
if ! curl -sS -k "https://${ingress_ip}" -H 'Host: grafana.example.org' --max-time 30 | grep -q Found; then
echo "Failed to access Grafana via ingress at ${ingress_ip}" >&2
exit 1
fi
}
@test "Keycloak OIDC stack is healthy" {
kubectl patch configmap/cozystack -n cozy-system --type merge -p '{"data":{"oidc-enabled":"true"}}'
timeout 120 sh -ec 'until kubectl get hr -n cozy-keycloak keycloak keycloak-configure keycloak-operator >/dev/null 2>&1; do sleep 1; done'
kubectl wait hr/keycloak hr/keycloak-configure hr/keycloak-operator -n cozy-keycloak --timeout=10m --for=condition=ready
}
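
The cluster suite above appears to leave the QEMU VMs and host networking in place after it finishes; a hedged cleanup sketch reusing the names it creates (srvN/qemu.pid, the cozy-srvN tap devices, the cozy-br0 bridge):
# assumption: run as root on the host that executed hack/e2e-cluster.bats
kill $(cat srv1/qemu.pid srv2/qemu.pid srv3/qemu.pid 2>/dev/null) 2>/dev/null || true
for i in 1 2 3; do ip link del cozy-srv${i} 2>/dev/null || true; done
ip link del cozy-br0 2>/dev/null || true
rm -rf srv1 srv2 srv3 nocloud-amd64.raw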


@@ -1,210 +0,0 @@
#!/usr/bin/env bats
@test "Required installer assets exist" {
if [ ! -f _out/assets/cozystack-installer.yaml ]; then
echo "Missing: _out/assets/cozystack-installer.yaml" >&2
exit 1
fi
}
@test "Install Cozystack" {
# Create namespace & configmap required by installer
kubectl create namespace cozy-system --dry-run=client -o yaml | kubectl apply -f -
kubectl create configmap cozystack -n cozy-system \
--from-literal=bundle-name=paas-full \
--from-literal=ipv4-pod-cidr=10.244.0.0/16 \
--from-literal=ipv4-pod-gateway=10.244.0.1 \
--from-literal=ipv4-svc-cidr=10.96.0.0/16 \
--from-literal=ipv4-join-cidr=100.64.0.0/16 \
--from-literal=root-host=example.org \
--from-literal=api-server-endpoint=https://192.168.123.10:6443 \
--dry-run=client -o yaml | kubectl apply -f -
# Apply installer manifests from file
kubectl apply -f _out/assets/cozystack-installer.yaml
# Wait for the installer deployment to become available
kubectl wait deployment/cozystack -n cozy-system --timeout=1m --for=condition=Available
# Wait until HelmReleases appear & reconcile them
timeout 60 sh -ec 'until kubectl get hr -A -l cozystack.io/system-app=true | grep -q cozys; do sleep 1; done'
sleep 5
kubectl get hr -A -l cozystack.io/system-app=true | awk 'NR>1 {print "kubectl wait --timeout=15m --for=condition=ready -n "$1" hr/"$2" &"} END {print "wait"}' | sh -ex
# Fail the test if any HelmRelease is not Ready
if kubectl get hr -A | grep -v " True " | grep -v NAME; then
kubectl get hr -A
echo "Some HelmReleases failed to reconcile" >&2
fi
}
@test "Wait for ClusterAPI provider deployments" {
# Wait for ClusterAPI provider deployments
timeout 60 sh -ec 'until kubectl get deploy -n cozy-cluster-api capi-controller-manager capi-kamaji-controller-manager capi-kubeadm-bootstrap-controller-manager capi-operator-cluster-api-operator capk-controller-manager >/dev/null 2>&1; do sleep 1; done'
kubectl wait deployment/capi-controller-manager deployment/capi-kamaji-controller-manager deployment/capi-kubeadm-bootstrap-controller-manager deployment/capi-operator-cluster-api-operator deployment/capk-controller-manager -n cozy-cluster-api --timeout=1m --for=condition=available
}
@test "Wait for LINSTOR and configure storage" {
# Linstor controller and nodes
kubectl wait deployment/linstor-controller -n cozy-linstor --timeout=5m --for=condition=available
timeout 60 sh -ec 'until [ $(kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor node list | grep -c Online) -eq 3 ]; do sleep 1; done'
created_pools=$(kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor sp l -s data --pastable | awk '$2 == "data" {printf " " $4} END{printf " "}')
for node in srv1 srv2 srv3; do
case $created_pools in
*" $node "*) echo "Storage pool 'data' already exists on node $node"; continue;;
esac
kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor ps cdp zfs ${node} /dev/vdc --pool-name data --storage-pool data
done
# Storage classes
kubectl apply -f - <<'EOF'
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: linstor.csi.linbit.com
parameters:
linstor.csi.linbit.com/storagePool: "data"
linstor.csi.linbit.com/layerList: "storage"
linstor.csi.linbit.com/allowRemoteVolumeAccess: "false"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: replicated
provisioner: linstor.csi.linbit.com
parameters:
linstor.csi.linbit.com/storagePool: "data"
linstor.csi.linbit.com/autoPlace: "3"
linstor.csi.linbit.com/layerList: "drbd storage"
linstor.csi.linbit.com/allowRemoteVolumeAccess: "true"
property.linstor.csi.linbit.com/DrbdOptions/auto-quorum: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-no-data-accessible: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-suspended-primary-outdated: force-secondary
property.linstor.csi.linbit.com/DrbdOptions/Net/rr-conflict: retry-connect
volumeBindingMode: Immediate
allowVolumeExpansion: true
EOF
}
@test "Wait for MetalLB and configure address pool" {
# MetalLB address pool
kubectl apply -f - <<'EOF'
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: cozystack
namespace: cozy-metallb
spec:
ipAddressPools: [cozystack]
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: cozystack
namespace: cozy-metallb
spec:
addresses: [192.168.123.200-192.168.123.250]
autoAssign: true
avoidBuggyIPs: false
EOF
}
@test "Check Cozystack API service" {
kubectl wait --for=condition=Available apiservices/v1alpha1.apps.cozystack.io apiservices/v1alpha1.core.cozystack.io --timeout=2m
}
@test "Configure Tenant and wait for applications" {
# Patch root tenant and wait for its releases
kubectl patch tenants/root -n tenant-root --type merge -p '{"spec":{"host":"example.org","ingress":true,"monitoring":true,"etcd":true,"isolated":true, "seaweedfs": true}}'
timeout 60 sh -ec 'until kubectl get hr -n tenant-root etcd ingress monitoring seaweedfs tenant-root >/dev/null 2>&1; do sleep 1; done'
kubectl wait hr/etcd hr/ingress hr/tenant-root hr/seaweedfs -n tenant-root --timeout=4m --for=condition=ready
# TODO: Work around ingress unavailability issue
if ! kubectl wait hr/monitoring -n tenant-root --timeout=2m --for=condition=ready; then
flux reconcile hr monitoring -n tenant-root --force
kubectl wait hr/monitoring -n tenant-root --timeout=2m --for=condition=ready
fi
if ! kubectl wait hr/seaweedfs-system -n tenant-root --timeout=2m --for=condition=ready; then
flux reconcile hr seaweedfs-system -n tenant-root --force
kubectl wait hr/seaweedfs-system -n tenant-root --timeout=2m --for=condition=ready
fi
# Expose Cozystack services through ingress
kubectl patch configmap/cozystack -n cozy-system --type merge -p '{"data":{"expose-services":"api,dashboard,cdi-uploadproxy,vm-exportproxy,keycloak"}}'
# NGINX ingress controller
timeout 60 sh -ec 'until kubectl get deploy root-ingress-controller -n tenant-root >/dev/null 2>&1; do sleep 1; done'
kubectl wait deploy/root-ingress-controller -n tenant-root --timeout=5m --for=condition=available
# etcd statefulset
kubectl wait sts/etcd -n tenant-root --for=jsonpath='{.status.readyReplicas}'=3 --timeout=5m
# VictoriaMetrics components
kubectl wait vmalert/vmalert-shortterm vmalertmanager/alertmanager -n tenant-root --for=jsonpath='{.status.updateStatus}'=operational --timeout=15m
kubectl wait vlogs/generic -n tenant-root --for=jsonpath='{.status.updateStatus}'=operational --timeout=5m
kubectl wait vmcluster/shortterm vmcluster/longterm -n tenant-root --for=jsonpath='{.status.clusterStatus}'=operational --timeout=5m
# Grafana
kubectl wait clusters.postgresql.cnpg.io/grafana-db -n tenant-root --for=condition=ready --timeout=5m
kubectl wait deploy/grafana-deployment -n tenant-root --for=condition=available --timeout=5m
# Verify Grafana via ingress
ingress_ip=$(kubectl get svc root-ingress-controller -n tenant-root -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
if ! curl -sS -k "https://${ingress_ip}" -H 'Host: grafana.example.org' --max-time 30 | grep -q Found; then
echo "Failed to access Grafana via ingress at ${ingress_ip}" >&2
exit 1
fi
}
@test "Keycloak OIDC stack is healthy" {
kubectl patch configmap/cozystack -n cozy-system --type merge -p '{"data":{"oidc-enabled":"true"}}'
timeout 120 sh -ec 'until kubectl get hr -n cozy-keycloak keycloak keycloak-configure keycloak-operator >/dev/null 2>&1; do sleep 1; done'
kubectl wait hr/keycloak hr/keycloak-configure hr/keycloak-operator -n cozy-keycloak --timeout=10m --for=condition=ready
}
@test "Create tenant with isolated mode enabled" {
kubectl -n tenant-root get tenants.apps.cozystack.io test ||
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Tenant
metadata:
name: test
namespace: tenant-root
spec:
etcd: false
host: ""
ingress: false
isolated: true
monitoring: false
resourceQuotas:
cpu: "60"
memory: "128Gi"
storage: "100Gi"
seaweedfs: false
EOF
kubectl wait hr/tenant-test -n tenant-root --timeout=1m --for=condition=ready
kubectl wait namespace tenant-test --timeout=20s --for=jsonpath='{.status.phase}'=Active
# Wait for ResourceQuota to appear and assert values
timeout 60 sh -ec 'until [ "$(kubectl get quota -n tenant-test --no-headers 2>/dev/null | wc -l)" -ge 1 ]; do sleep 1; done'
kubectl get quota -n tenant-test \
-o jsonpath='{range .items[*]}{.spec.hard.requests\.memory}{" "}{.spec.hard.requests\.storage}{"\n"}{end}' \
| grep -qx '137438953472 100Gi'
# Assert LimitRange defaults for containers
kubectl get limitrange -n tenant-test \
-o jsonpath='{range .items[*].spec.limits[*]}{.default.cpu}{" "}{.default.memory}{" "}{.defaultRequest.cpu}{" "}{.defaultRequest.memory}{"\n"}{end}' \
| grep -qx '250m 128Mi 25m 128Mi'
}


@@ -1,248 +0,0 @@
#!/usr/bin/env bats
# -----------------------------------------------------------------------------
# Cozystack end-to-end provisioning test (Bats)
# -----------------------------------------------------------------------------
@test "Required installer assets exist" {
if [ ! -f _out/assets/nocloud-amd64.raw.xz ]; then
echo "Missing: _out/assets/nocloud-amd64.raw.xz" >&2
exit 1
fi
}
@test "IPv4 forwarding is enabled" {
if [ "$(cat /proc/sys/net/ipv4/ip_forward)" != 1 ]; then
echo "IPv4 forwarding is disabled!" >&2
echo >&2
echo "Enable it with:" >&2
echo " echo 1 > /proc/sys/net/ipv4/ip_forward" >&2
exit 1
fi
}
@test "Clean previous VMs" {
kill $(cat srv1/qemu.pid srv2/qemu.pid srv3/qemu.pid 2>/dev/null) 2>/dev/null || true
rm -rf srv1 srv2 srv3
}
@test "Prepare networking and masquerading" {
ip link del cozy-br0 2>/dev/null || true
ip link add cozy-br0 type bridge
ip link set cozy-br0 up
ip address add 192.168.123.1/24 dev cozy-br0
# Make the masquerading rule idempotent (delete first, then add)
iptables -t nat -D POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE 2>/dev/null || true
iptables -t nat -A POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE
}
@test "Prepare cloudinit drive for VMs" {
mkdir -p srv1 srv2 srv3
# Generate cloud-init ISOs
for i in 1 2 3; do
echo "hostname: srv${i}" > "srv${i}/meta-data"
cat > "srv${i}/user-data" <<'EOF'
#cloud-config
EOF
cat > "srv${i}/network-config" <<EOF
version: 2
ethernets:
eth0:
dhcp4: false
addresses:
- "192.168.123.1${i}/26"
gateway4: "192.168.123.1"
nameservers:
search: [cluster.local]
addresses: [8.8.8.8]
EOF
( cd "srv${i}" && genisoimage \
-output seed.img \
-volid cidata -rational-rock -joliet \
user-data meta-data network-config )
done
}
@test "Use Talos NoCloud image from assets" {
if [ ! -f _out/assets/nocloud-amd64.raw.xz ]; then
echo "Missing _out/assets/nocloud-amd64.raw.xz" 2>&1
exit 1
fi
rm -f nocloud-amd64.raw
cp _out/assets/nocloud-amd64.raw.xz .
xz --decompress nocloud-amd64.raw.xz
}
@test "Prepare VM disks" {
for i in 1 2 3; do
cp nocloud-amd64.raw srv${i}/system.img
qemu-img resize srv${i}/system.img 50G
qemu-img create srv${i}/data.img 200G
done
}
@test "Create tap devices" {
for i in 1 2 3; do
ip link del cozy-srv${i} 2>/dev/null || true
ip tuntap add dev cozy-srv${i} mode tap
ip link set cozy-srv${i} up
ip link set cozy-srv${i} master cozy-br0
done
}
@test "Boot QEMU VMs" {
for i in 1 2 3; do
qemu-system-x86_64 -machine type=pc,accel=kvm -cpu host -smp 8 -m 24576 \
-device virtio-net,netdev=net0,mac=52:54:00:12:34:5${i} \
-netdev tap,id=net0,ifname=cozy-srv${i},script=no,downscript=no \
-drive file=srv${i}/system.img,if=virtio,format=raw \
-drive file=srv${i}/seed.img,if=virtio,format=raw \
-drive file=srv${i}/data.img,if=virtio,format=raw \
-display none -daemonize -pidfile srv${i}/qemu.pid
done
# Give qemu a few seconds to start up networking
sleep 5
}
@test "Wait until Talos API port 50000 is reachable on all machines" {
timeout 60 sh -ec 'until nc -nz 192.168.123.11 50000 && nc -nz 192.168.123.12 50000 && nc -nz 192.168.123.13 50000; do sleep 1; done'
}
@test "Generate Talos cluster configuration" {
  # Cluster-wide patches
cat > patch.yaml <<'EOF'
machine:
kubelet:
nodeIP:
validSubnets:
- 192.168.123.0/24
extraConfig:
maxPods: 512
kernel:
modules:
- name: openvswitch
- name: drbd
parameters:
- usermode_helper=disabled
- name: zfs
- name: spl
registries:
mirrors:
docker.io:
endpoints:
- https://dockerio.nexus.aenix.org
cr.fluentbit.io:
endpoints:
- https://fluentbit.nexus.aenix.org
docker-registry3.mariadb.com:
endpoints:
- https://mariadb.nexus.aenix.org
gcr.io:
endpoints:
- https://gcr.nexus.aenix.org
ghcr.io:
endpoints:
- https://ghcr.nexus.aenix.org
quay.io:
endpoints:
- https://quay.nexus.aenix.org
registry.k8s.io:
endpoints:
- https://k8s.nexus.aenix.org
files:
- content: |
[plugins]
[plugins."io.containerd.cri.v1.runtime"]
device_ownership_from_security_context = true
path: /etc/cri/conf.d/20-customization.part
op: create
cluster:
apiServer:
extraArgs:
oidc-issuer-url: "https://keycloak.example.org/realms/cozy"
oidc-client-id: "kubernetes"
oidc-username-claim: "preferred_username"
oidc-groups-claim: "groups"
network:
cni:
name: none
dnsDomain: cozy.local
podSubnets:
- 10.244.0.0/16
serviceSubnets:
- 10.96.0.0/16
EOF
  # Control-plane-only patches
cat > patch-controlplane.yaml <<'EOF'
machine:
nodeLabels:
node.kubernetes.io/exclude-from-external-load-balancers:
$patch: delete
network:
interfaces:
- interface: eth0
vip:
ip: 192.168.123.10
cluster:
allowSchedulingOnControlPlanes: true
controllerManager:
extraArgs:
bind-address: 0.0.0.0
scheduler:
extraArgs:
bind-address: 0.0.0.0
apiServer:
certSANs:
- 127.0.0.1
proxy:
disabled: true
discovery:
enabled: false
etcd:
advertisedSubnets:
- 192.168.123.0/24
EOF
# Generate secrets once
if [ ! -f secrets.yaml ]; then
talosctl gen secrets
fi
rm -f controlplane.yaml worker.yaml talosconfig kubeconfig
talosctl gen config --with-secrets secrets.yaml cozystack https://192.168.123.10:6443 \
--config-patch=@patch.yaml --config-patch-control-plane @patch-controlplane.yaml
}
@test "Apply Talos configuration to the node" {
# Apply the configuration to all three nodes
for node in 11 12 13; do
talosctl apply -f controlplane.yaml -n 192.168.123.${node} -e 192.168.123.${node} -i
done
# Wait for Talos services to come up again
timeout 60 sh -ec 'until nc -nz 192.168.123.11 50000 && nc -nz 192.168.123.12 50000 && nc -nz 192.168.123.13 50000; do sleep 1; done'
}
@test "Bootstrap Talos cluster" {
# Bootstrap etcd on the first node
timeout 10 sh -ec 'until talosctl bootstrap -n 192.168.123.11 -e 192.168.123.11; do sleep 1; done'
# Wait until etcd is healthy
timeout 180 sh -ec 'until talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 >/dev/null 2>&1; do sleep 1; done'
timeout 60 sh -ec 'while talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 2>&1 | grep -q "rpc error"; do sleep 1; done'
# Retrieve kubeconfig
rm -f kubeconfig
talosctl kubeconfig kubeconfig -e 192.168.123.10 -n 192.168.123.10
# Wait until all three nodes register in Kubernetes
timeout 60 sh -ec 'until [ $(kubectl get node --no-headers | wc -l) -eq 3 ]; do sleep 1; done'
}


@@ -1,44 +0,0 @@
#!/usr/bin/env bats
# -----------------------------------------------------------------------------
# Test OpenAPI endpoints in a Kubernetes cluster
# -----------------------------------------------------------------------------
@test "Test OpenAPI v2 endpoint" {
kubectl get -v7 --raw '/openapi/v2?timeout=32s' > /dev/null
}
@test "Test OpenAPI v3 endpoint" {
kubectl get -v7 --raw '/openapi/v3/apis/apps.cozystack.io/v1alpha1' > /dev/null
kubectl get -v7 --raw '/openapi/v3/apis/core.cozystack.io/v1alpha1' > /dev/null
}
@test "Test OpenAPI v2 endpoint (protobuf)" {
(
kubectl proxy --port=21234 & sleep 0.5
trap "kill $!" EXIT
curl -sS --fail 'http://localhost:21234/openapi/v2?timeout=32s' -H 'Accept: application/com.github.proto-openapi.spec.v2@v1.0+protobuf' > /dev/null
)
}
@test "Test kinds" {
val=$(kubectl get --raw /apis/apps.cozystack.io/v1alpha1/tenants | jq -r '.kind')
if [ "$val" != "TenantList" ]; then
echo "Expected kind to be TenantList, got $val"
exit 1
fi
val=$(kubectl get --raw /apis/apps.cozystack.io/v1alpha1/tenants | jq -r '.items[0].kind')
if [ "$val" != "Tenant" ]; then
echo "Expected kind to be Tenant, got $val"
exit 1
fi
val=$(kubectl get --raw /apis/apps.cozystack.io/v1alpha1/ingresses | jq -r '.kind')
if [ "$val" != "IngressList" ]; then
echo "Expected kind to be IngressList, got $val"
exit 1
fi
val=$(kubectl get --raw /apis/apps.cozystack.io/v1alpha1/ingresses | jq -r '.items[0].kind')
if [ "$val" != "Ingress" ]; then
echo "Expected kind to be Ingress, got $val"
exit 1
fi
}

hack/gen_versions_map.sh Executable file

@@ -0,0 +1,64 @@
#!/bin/sh
set -e
file=versions_map
charts=$(find . -mindepth 2 -maxdepth 2 -name Chart.yaml | awk 'sub("/Chart.yaml", "")')
new_map=$(
for chart in $charts; do
awk '/^name:/ {chart=$2} /^version:/ {version=$2} END{printf "%s %s %s\n", chart, version, "HEAD"}' "$chart/Chart.yaml"
done
)
if [ ! -f "$file" ] || [ ! -s "$file" ]; then
echo "$new_map" > "$file"
exit 0
fi
miss_map=$(mktemp)
trap 'rm -f "$miss_map"' EXIT
echo -n "$new_map" | awk 'NR==FNR { nm[$1 " " $2] = $3; next } { if (!($1 " " $2 in nm)) print $1, $2, $3}' - "$file" > $miss_map
# Search across all tags, sorted by version
search_commits=$(git ls-remote --tags origin | awk -F/ '$3 ~ /v[0-9]+.[0-9]+.[0-9]+/ {print}' | sort -k2,2 -rV | awk '{print $1}')
resolved_miss_map=$(
while read -r chart version commit; do
# if version is found in HEAD, it's HEAD
if [ "$(awk '$1 == "version:" {print $2}' ./${chart}/Chart.yaml)" = "${version}" ]; then
echo "$chart $version HEAD"
continue
fi
# if commit is not HEAD, check if it's valid
if [ "$commit" != "HEAD" ]; then
if [ "$(git show "${commit}:./${chart}/Chart.yaml" | awk '$1 == "version:" {print $2}')" != "${version}" ]; then
echo "Commit $commit for $chart $version is not valid" >&2
exit 1
fi
commit=$(git rev-parse --short "$commit")
echo "$chart $version $commit"
continue
fi
# if commit is HEAD, but version is not found in HEAD, check all tags
found_tag=""
for tag in $search_commits; do
if [ "$(git show "${tag}:./${chart}/Chart.yaml" | awk '$1 == "version:" {print $2}')" = "${version}" ]; then
found_tag=$(git rev-parse --short "${tag}")
break
fi
done
if [ -z "$found_tag" ]; then
echo "Can't find $chart $version in any version tag, removing it" >&2
continue
fi
echo "$chart $version $found_tag"
done < $miss_map
)
printf "%s\n" "$new_map" "$resolved_miss_map" | sort -k1,1 -k2,2 -V | awk '$1' > "$file"


@@ -1,59 +0,0 @@
#!/bin/sh
set -eu
# Script to run unit tests for all Helm charts.
# It iterates through directories in packages/apps, packages/extra,
# packages/system, and packages/library and runs the 'test' Makefile
# target if it exists.
FAILED_DIRS_FILE="$(mktemp)"
trap 'rm -f "$FAILED_DIRS_FILE"' EXIT
tests_found=0
check_and_run_test() {
dir="$1"
makefile="$dir/Makefile"
if [ ! -f "$makefile" ]; then
return 0
fi
if make -C "$dir" -n test >/dev/null 2>&1; then
echo "Running tests in $dir"
tests_found=$((tests_found + 1))
if ! make -C "$dir" test; then
printf '%s\n' "$dir" >> "$FAILED_DIRS_FILE"
return 1
fi
fi
return 0
}
for package_dir in packages/apps packages/extra packages/system packages/library; do
if [ ! -d "$package_dir" ]; then
echo "Warning: Directory $package_dir does not exist, skipping..." >&2
continue
fi
for dir in "$package_dir"/*; do
[ -d "$dir" ] || continue
check_and_run_test "$dir" || true
done
done
if [ "$tests_found" -eq 0 ]; then
echo "No directories with 'test' Makefile targets found."
exit 0
fi
if [ -s "$FAILED_DIRS_FILE" ]; then
echo "ERROR: Tests failed in the following directories:" >&2
while IFS= read -r dir; do
echo " - $dir" >&2
done < "$FAILED_DIRS_FILE"
exit 1
fi
echo "All Helm unit tests passed."

hack/package_chart.sh Executable file

@@ -0,0 +1,65 @@
#!/bin/sh
set -e
usage() {
printf "%s\n" "Usage:" >&2 ;
printf -- "%s\n" '---' >&2 ;
printf "%s %s\n" "$0" "INPUT_DIR OUTPUT_DIR TMP_DIR [DEPENDENCY_DIR]" >&2 ;
printf -- "%s\n" '---' >&2 ;
printf "%s\n" "Takes a helm repository from INPUT_DIR, with an optional library repository in" >&2 ;
printf "%s\n" "DEPENDENCY_DIR, prepares a view of the git archive at select points in history" >&2 ;
printf "%s\n" "in TMP_DIR and packages helm charts, outputting the tarballs to OUTPUT_DIR" >&2 ;
}
if [ "x$(basename $PWD)" != "xpackages" ]
then
echo "Error: This script must run from the ./packages/ directory" >&2
echo >&2
usage
exit 1
fi
if [ "x$#" != "x3" ] && [ "x$#" != "x4" ]
then
echo "Error: This script takes 3 or 4 arguments" >&2
echo "Got $# arguments:" "$@" >&2
echo >&2
usage
exit 1
fi
input_dir=$1
output_dir=$2
tmp_dir=$3
if [ "x$#" = "x4" ]
then
dependency_dir=$4
fi
rm -rf "${output_dir:?}"
mkdir -p "${output_dir}"
while read package _ commit
do
# this lets devs build the packages from a dirty repo for quick local testing
if [ "x$commit" = "xHEAD" ]
then
helm package "${input_dir}/${package}" -d "${output_dir}"
continue
fi
git archive --format tar "${commit}" "${input_dir}/${package}" | tar -xf- -C "${tmp_dir}/"
# the library chart is not present in older commits and git archive doesn't fail gracefully if the path is not found
if [ "x${dependency_dir}" != "x" ] && git ls-tree --name-only "${commit}" "${dependency_dir}" | grep -qx "${dependency_dir}"
then
git archive --format tar "${commit}" "${dependency_dir}" | tar -xf- -C "${tmp_dir}/"
fi
helm package "${tmp_dir}/${input_dir}/${package}" -d "${output_dir}"
rm -rf "${tmp_dir:?}/${input_dir:?}/${package:?}"
if [ "x${dependency_dir}" != "x" ]
then
rm -rf "${tmp_dir:?}/${dependency_dir:?}"
fi
done < "${input_dir}/versions_map"
helm repo index "${output_dir}"
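A minimal invocation sketch for the packaging script above, following its usage() description: it must be run from the ./packages/ directory and takes INPUT_DIR, OUTPUT_DIR, TMP_DIR, and an optional DEPENDENCY_DIR, where INPUT_DIR must already contain a versions_map file (the concrete directory names below are assumptions for illustration):

cd packages
tmp_dir=$(mktemp -d)
../hack/package_chart.sh apps _out/repo "$tmp_dir" library
rm -rf "$tmp_dir"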


@@ -32,10 +32,6 @@ kube::codegen::gen_helpers \
--boilerplate "${SCRIPT_ROOT}/hack/boilerplate.go.txt" \
"${SCRIPT_ROOT}/pkg/apis"
kube::codegen::gen_helpers \
--boilerplate "${SCRIPT_ROOT}/hack/boilerplate.go.txt" \
"${SCRIPT_ROOT}/api"
if [[ -n "${API_KNOWN_VIOLATIONS_DIR:-}" ]]; then
report_filename="${API_KNOWN_VIOLATIONS_DIR}/cozystack_api_violation_exceptions.list"
if [[ "${UPDATE_API_KNOWN_VIOLATIONS:-}" == "true" ]]; then
@@ -53,6 +49,4 @@ kube::codegen::gen_openapi \
"${SCRIPT_ROOT}/pkg/apis"
$CONTROLLER_GEN object:headerFile="hack/boilerplate.go.txt" paths="./api/..."
$CONTROLLER_GEN rbac:roleName=manager-role crd paths="./api/..." output:crd:artifacts:config=packages/system/cozystack-controller/crds
mv packages/system/cozystack-controller/crds/cozystack.io_cozystackresourcedefinitions.yaml \
packages/system/cozystack-resource-definition-crd/definition/cozystack.io_cozystackresourcedefinitions.yaml
$CONTROLLER_GEN rbac:roleName=manager-role crd paths="./api/..." output:crd:artifacts:config=packages/system/cozystack-controller/templates/crds


@@ -1,139 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
# Requirements: yq (v4), jq, base64
need() { command -v "$1" >/dev/null 2>&1 || { echo "need $1"; exit 1; }; }
need yq; need jq; need base64
CHART_YAML="${CHART_YAML:-Chart.yaml}"
VALUES_YAML="${VALUES_YAML:-values.yaml}"
SCHEMA_JSON="${SCHEMA_JSON:-values.schema.json}"
CRD_DIR="../../system/cozystack-resource-definitions/cozyrds"
[[ -f "$CHART_YAML" ]] || { echo "No $CHART_YAML found"; exit 1; }
[[ -f "$SCHEMA_JSON" ]] || { echo "No $SCHEMA_JSON found"; exit 1; }
# Read basics from Chart.yaml
NAME="$(yq -r '.name // ""' "$CHART_YAML")"
DESC="$(yq -r '.description // ""' "$CHART_YAML")"
ICON_PATH_RAW="$(yq -r '.icon // ""' "$CHART_YAML")"
if [[ -z "$NAME" ]]; then
echo "Chart.yaml: .name is empty"; exit 1
fi
# Resolve icon path
# Accepts:
# /logos/foo.svg -> ./logos/foo.svg
# logos/foo.svg -> logos/foo.svg
# ./logos/foo.svg -> ./logos/foo.svg
# Fallback: ./logos/${NAME}.svg
resolve_icon_path() {
local p="$1"
if [[ -z "$p" || "$p" == "null" ]]; then
echo "./logos/${NAME}.svg"; return
fi
if [[ "$p" == /* ]]; then
echo ".${p}"
else
echo "$p"
fi
}
ICON_PATH="$(resolve_icon_path "$ICON_PATH_RAW")"
if [[ ! -f "$ICON_PATH" ]]; then
# try fallback
ALT="./logos/${NAME}.svg"
if [[ -f "$ALT" ]]; then
ICON_PATH="$ALT"
else
echo "Icon not found: $ICON_PATH"; exit 1
fi
fi
# Base64 (portable: no -w / -b options)
ICON_B64="$(base64 < "$ICON_PATH" | tr -d '\n' | tr -d '\r')"
# Decide which HelmRepository name to use based on path
# .../apps/... -> cozystack-apps
# .../extra/... -> cozystack-extra
# default: cozystack-apps
SOURCE_NAME="cozystack-apps"
case "$PWD" in
*"/apps/"*) SOURCE_NAME="cozystack-apps" ;;
*"/extra/"*) SOURCE_NAME="cozystack-extra" ;;
esac
# If file doesn't exist, create a minimal skeleton
OUT="${OUT:-$CRD_DIR/$NAME.yaml}"
if [[ ! -f "$OUT" ]]; then
cat >"$OUT" <<EOF
apiVersion: cozystack.io/v1alpha1
kind: CozystackResourceDefinition
metadata:
name: ${NAME}
spec: {}
EOF
fi
# Export vars for yq env()
export RES_NAME="$NAME"
export PREFIX="$NAME-"
if [ "$SOURCE_NAME" == "cozystack-extra" ]; then
export PREFIX=""
fi
export DESCRIPTION="$DESC"
export ICON_B64="$ICON_B64"
export SOURCE_NAME="$SOURCE_NAME"
export SCHEMA_JSON_MIN="$(jq -c . "$SCHEMA_JSON")"
# Generate keysOrder from values.yaml
export KEYS_ORDER="$(
yq -o=json '.' "$VALUES_YAML" | jq -c '
def get_paths_recursive(obj; path):
obj | to_entries | map(
.key as $key |
.value as $value |
if $value | type == "object" then
[path + [$key]] + get_paths_recursive($value; path + [$key])
else
[path + [$key]]
end
) | flatten(1)
;
(
[ ["apiVersion"], ["appVersion"], ["kind"], ["metadata"], ["metadata","name"] ]
)
+
(
get_paths_recursive(.; []) # get all paths in order
| map(select(length>0)) # drop root
| map(map(select(type != "number"))) # drop array indices
| map(["spec"] + .) # prepend "spec"
)
'
)"
# Update only necessary fields in-place
# - openAPISchema is loaded from file as a multi-line string (block scalar)
# - labels ensure cozystack.io/ui: "true"
# - prefix = "<name>-"
# - sourceRef derived from directory (apps|extra)
yq -i '
.apiVersion = (.apiVersion // "cozystack.io/v1alpha1") |
.kind = (.kind // "CozystackResourceDefinition") |
.metadata.name = strenv(RES_NAME) |
.spec.application.openAPISchema = strenv(SCHEMA_JSON_MIN) |
(.spec.application.openAPISchema style="literal") |
.spec.release.prefix = (strenv(PREFIX)) |
.spec.release.labels."cozystack.io/ui" = "true" |
.spec.release.chart.name = strenv(RES_NAME) |
.spec.release.chart.sourceRef.kind = "HelmRepository" |
.spec.release.chart.sourceRef.name = strenv(SOURCE_NAME) |
.spec.release.chart.sourceRef.namespace = "cozy-public" |
.spec.dashboard.description = strenv(DESCRIPTION) |
.spec.dashboard.icon = strenv(ICON_B64) |
.spec.dashboard.keysOrder = env(KEYS_ORDER)
' "$OUT"
echo "Updated $OUT"

Some files were not shown because too many files have changed in this diff.