Compare commits


34 Commits

Author SHA1 Message Date
Timofei Larkin
69c3bff41d Fix virtual machine resource tracking (#904) (#916)
* Count Workload resources for pods by requests, not limits
* Do not count init container requests
* Prefix Workloads for pods with `pod-`, just like the other types to
prevent possible name collisions (closes #787)

The previous version of the WorkloadMonitor controller incorrectly
summed resource limits on pods rather than requests. This prevented it
from tracking resource allocation for pods that only have requests
specified, which is particularly the case for KubeVirt's virtual machine
pods. Additionally, it counted the limits of all containers, including
init containers, which are short-lived and do not contribute much to the
total resource usage.
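
A quick way to inspect what the controller now sums (the requests of the
regular containers only) is a jsonpath query against the pod spec; `mypod`
is a placeholder name, not anything from this change:

```
# Print per-container CPU and memory requests for the normal containers only;
# init containers live under .spec.initContainers and are intentionally skipped.
kubectl get pod mypod -o jsonpath='{range .spec.containers[*]}{.name}{"\t"}{.resources.requests.cpu}{"\t"}{.resources.requests.memory}{"\n"}{end}'
```

KubeVirt's virtual machine pods typically set only requests, so the old
limit-based sum tracked nothing for them.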

(cherry picked from commit 1e59e5fbb6)
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-05-05 18:16:50 +04:00
Timofei Larkin
34991d2cdb Fix virtual machine resource tracking (#904)
* Count Workload resources for pods by requests, not limits
* Do not count init container requests
* Prefix Workloads for pods with `pod-`, just like the other types to
prevent possible name collisions (closes #787)

The previous version of the WorkloadMonitor controller incorrectly
summed resource limits on pods rather than requests. This prevented it
from tracking resource allocation for pods that only have requests
specified, which is particularly the case for KubeVirt's virtual machine
pods. Additionally, it counted the limits of all containers, including
init containers, which are short-lived and do not contribute much to the
total resource usage.

## Summary by CodeRabbit

- **Bug Fixes**
  - Improved handling of workloads with unrecognized prefixes by ensuring
    they are properly deleted and not processed further.
  - Corrected resource aggregation for Pods to sum container resource
    requests instead of limits, and now only includes normal containers.

- **New Features**
  - Added support for monitoring workloads with names prefixed by "pod-".

- **Tests**
  - Introduced unit tests to verify correct handling of workload name
    prefixes and monitored object creation.

(cherry picked from commit 1e59e5fbb6)
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-05-05 13:55:06 +03:00
Andrei Kvapil
32c12ae8f7 Release v0.30.4 (#881)
This PR prepares the release `v0.30.4`.
2025-04-24 15:16:14 +02:00
github-actions
75f9aacecc Prepare release v0.30.4
Signed-off-by: github-actions <github-actions@github.com>
2025-04-24 12:50:06 +00:00
Andrei Kvapil
630bd55b1a [Backport release-0.30] [ci] Fix uploading assets to release (#877)
Backport of #876 to branch `release-0.30`
2025-04-24 14:25:08 +02:00
Andrei Kvapil
7627b1e47e [ci] Fix uploading assets to release
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
(cherry picked from commit 59ef3296f0)
2025-04-24 15:03:32 +03:00
Andrei Kvapil
d70cdfd854 [Backport release-0.30] [postgres] remove duplicated template from backup manifest (#874)
# Description
Backport of #872 to `release-0.30`.
2025-04-24 11:40:08 +02:00
Ian Simon
dfe5b937ac [postgres] remove duplicated template from backup manifest
Signed-off-by: Ian Simon <cheatmaster114@gmail.com>
(cherry picked from commit 19409d801d)
2025-04-24 09:34:46 +00:00
Andrei Kvapil
cde49eb055 [Backport release-0.30] [ci,dx] Suppress wget progress bar (#868)
Backport of #865 to release 0.30
2025-04-23 18:08:26 +02:00
Timofei Larkin
77648f1716 Suppress wget progress bar (#865)
In our CI, wget spams thousands of progress-bar lines into the output,
making it hard to read. It turns out wget has no option to simply drop
the progress bar, but explicitly directing its log to stdout and passing
--show-progress sends the bar to stderr, which we redirect to /dev/null.
The downloaded size is still reported at regular intervals, but
--progress=dot:giga shortens that to one line per 32M, which is
manageable.
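
The resulting invocation, as applied to the e2e script later in this diff,
looks like this (`$URL` stands in for the full release-asset URL):

```
# Send wget's log to stdout; --show-progress then draws the progress bar on
# stderr, which is discarded. --progress=dot:giga keeps the log to about one
# line per 32M downloaded.
wget "$URL" -O nocloud-amd64.raw.xz --show-progress --output-file /dev/stdout --progress=dot:giga 2>/dev/null
```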

## Summary by CodeRabbit

- **Chores**
- Improved file download process to display clearer progress updates
during downloads.

(cherry picked from commit 07d7fadb1a)
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-04-23 19:01:19 +03:00
Andrei Kvapil
a9cbed9617 [Backport release-0.30] [virtual-machine] Fix: Add GPU names to virtual machines spec (#864)
# Description
Backport of #862 to `release-0.30`.
2025-04-23 16:41:10 +02:00
Nick Volynkin
05729ebb07 [backport] Backport several patches to 0.30.x (#852)
Cherry-picking patches that came before
https://github.com/cozystack/cozystack/pull/841
was merged. 

* Used `git cherry-pick -x -m1 <sha1>` on merge commits of respective
pull requests (see the example below).
* Added `Co-authored-by` where the author of the changes was not the one
who merged the PR (and authored the merge commit).
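
For reference, a typical backport of one merged PR looks like this, with the
branch and SHA as placeholders:

```
# -x records "(cherry picked from commit ...)" in the message;
# -m1 takes the diff against the first (mainline) parent of the merge commit.
git checkout release-0.30
git cherry-pick -x -m1 <merge-commit-sha>
```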

Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-04-23 17:34:37 +03:00
Andrei Kvapil
baf1bd9bfe [virtual-machine] Fix: Add GPU names to virtual machines spec
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
(cherry picked from commit 8547dc3b21)
2025-04-23 14:26:23 +00:00
klinch0
6f3aa9abbe [kubernetes] Fix tenant addons removal (#835)
Backport of #835

**New Features**
- Expanded the pre-delete operation to target additional components,
including cert-manager and vertical pod autoscaler resources.

**Chores**
- Updated chart version to 0.18.1 and revised version mappings for
improved tracking.

(cherry picked from commit ccedcb7419)

Co-authored-by: Andrei Kvapil <kvapss@gmail.com>
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-04-23 17:03:56 +03:00
Andrei Kvapil
b70df68a5d [monitoring] Drop legacy label condition. (#826)
Backport of #826

Updated dashboard metrics filters to exclude containers with empty
names instead of specifically excluding containers named "POD". This
change applies to all relevant CPU, memory, network, and storage metrics
across capacity planning, controller, namespace, namespaces, and pod
dashboards. No other dashboard functionality or structure was changed.

(cherry picked from commit 277b438f68)

Co-authored-by: Denis Seleznev <kto.3decb@gmail.com>
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-04-23 17:03:56 +03:00
Andrei Kvapil
9257dfe230 [ci] Fix checkout and improve error output for gen_versions_map.sh (#845)
Backport of #845 to release-v0.30

Third attempt to fix https://github.com/cozystack/cozystack/pull/842 and
https://github.com/cozystack/cozystack/pull/836

tested in
https://github.com/cozystack/cozystack/actions/runs/14599981710/job/40955508728?pr=808

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

**Chores**
- Improved GitHub Actions workflow to fetch full git history and tags
during pre-commit checks.

**Refactor**
- Updated script behavior to display error messages when version
extraction from git fails, making troubleshooting easier.

(cherry picked from commit a6b02bf381)
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-04-23 17:03:56 +03:00
Andrei Kvapil
4a72cc4fa6 [ci] Fix escaping for gen_versions_map.sh script (#842)
Backport of #842 to release-v0.30

Second attempt at https://github.com/cozystack/cozystack/pull/836

- Improved reliability of version generation by handling empty or
special values safely in the process.

(cherry picked from commit e505857832)
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-04-23 17:03:55 +03:00
klinch0
9ca2595bab [ci] Fix escaping for gen_versions_map.sh script (#836)
Backport of #836

fixes errors like this:

```
make: Entering directory '/home/runner/work/cozystack/cozystack/packages/apps'
find . -maxdepth 2 -name Chart.yaml  | awk -F/ '{print $2}' | while read i; do sed -i "s/^name: .*/name: $i/" "$i/Chart.yaml"; done
../../hack/gen_versions_map.sh
../../hack/gen_versions_map.sh: 34: [: !=: unexpected operator
fatal: Needed a single revision
make: *** [Makefile:17: gen-versions-map] Error 128
make: Leaving directory '/home/runner/work/cozystack/cozystack/packages/apps'
```
https://github.com/cozystack/cozystack/actions/runs/14591720553/job/40928276862?pr=835

Improved reliability of version generation by handling empty or
special values safely in the process.
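
The `[: !=: unexpected operator` message is what POSIX `sh` prints when an
unquoted operand expands to nothing and the test collapses into a malformed
expression. A minimal reproduction and the usual style of fix look like this
(the variable name is illustrative; the actual change in `gen_versions_map.sh`
may differ):

```
#!/bin/sh
version=""                      # e.g. git failed to resolve a revision
if [ $version != HEAD ]; then   # expands to `[ != HEAD ]` -> "[: !=: unexpected operator"
  echo "released"
fi
if [ "$version" != HEAD ]; then # quoting keeps the empty operand in place, so the test stays well-formed
  echo "released"
fi
```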

(cherry picked from commit 7a9a1fcba4)

Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-04-23 17:03:52 +03:00
Andrei Kvapil
2ba6059dbe [Backport release-0.30] [tenant] Fix networkpolicy for accessing externalIPs from the cluster (#861)
# Description
Backport of #854 to `release-0.30`.
2025-04-23 14:48:40 +02:00
Andrei Kvapil
6f5e307415 Fix: networkpolicy for tenant to access from cluster
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
(cherry picked from commit 7bfad655c2)
2025-04-23 12:48:04 +00:00
Andrei Kvapil
6c8d1138cd [Backport release-0.30] [e2e] fix timeouts for capi and keycloak (#860)
# Description
Backport of #858 to `release-0.30`.
2025-04-23 14:26:51 +02:00
Andrei Kvapil
2c4bd23f9f [e2e] fix timeouts for capi and keycloak
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
(cherry picked from commit 1c53a6f9f6)
2025-04-23 12:26:27 +00:00
Andrei Kvapil
3445e2d23f [Backport release-0.30] [ci] Enable release-candidates and backport functionality (#853)
# Description
Backport of #841 to `release-0.30`.
2025-04-23 12:23:54 +02:00
Andrei Kvapil
abddefb1b0 [ci] Enable release-candidates and backport functionality
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
(cherry picked from commit 63ebab5c2a)
2025-04-23 10:07:01 +00:00
Andrei Kvapil
f78aefda8f [platform]: make lower resource request for capi-kamaji-controller-manager (#839)
Backport of #825 

cherry picked from commit a14bcf98dd
2025-04-22 17:47:47 +02:00
Andrei Kvapil
ad3684508f [platform]: make lower resource request for capi-kamaji-controller-manager (#825)
(cherry picked from commit a14bcf98dd)

Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-04-22 15:09:37 +03:00
Andrei Kvapil
7ca8ff0e69 [ci] Fix matching tag for release branch (#805)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


## Summary by CodeRabbit

- **Chores**
  - Updated the automated release process to format version tags with a
    "v" prefix for consistent version naming.
  - Performed minor cleanup to improve overall code clarity.

2025-04-18 00:49:09 +02:00
Andrei Kvapil
721c12a758 Release v0.30.3 (#821)
This PR prepares the release `v0.30.3`.
(Please merge it before releasing draft)
2025-04-18 00:44:01 +02:00
kvaps
9f63cbbb5a Prepare release v0.30.3
Signed-off-by: github-actions <github-actions@github.com>
2025-04-17 21:59:15 +00:00
Andrei Kvapil
e8e911fea1 [ci] Fix: do not run tests in case of release skipped (#822)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-04-17 23:32:25 +02:00
Andrei Kvapil
2b23300f25 [ci] Revert: Workflows: Use real username to commit changes and fix assets (#823)
Let's revert 3c511023f3, because DCO doesn't like such commits.
2025-04-17 23:32:21 +02:00
Andrei Kvapil
53c5c8223c [ci] Update pipeline for patch releases (#816)
This PR includes the following changes:

* Do not remove the version tag as part of the release pipeline
* Overwrite the tag only once the release pull request is merged (see the sketch below)
* Automatically detect the merge base and open the pull request against it
* Allow the pipeline to run only for tags created on `main` and
`release-X.Y` branches
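
The tag-overwriting step boils down to a forced tag move and push, as the
workflow diff below shows; the version and SHA here are placeholders:

```
# Re-point the release tag at the merge commit and overwrite it on the remote.
git tag -f v0.42.0 "$MERGE_COMMIT_SHA"
git push -f origin v0.42.0
```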


Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


## Summary by CodeRabbit

- **Chores**
  - Improved workflow reliability by forcing Git tag creation and push to
    overwrite existing tags if necessary.
  - Enhanced workflow documentation with detailed, numbered comments for
    greater clarity.
  - Updated tag-based workflow to dynamically determine the base branch,
    ensuring only valid branches are used.
  - Removed the automatic deletion of pushed tags in the workflow.

2025-04-17 23:32:09 +02:00
Andrei Kvapil
96ea3a5d1f [monitoring] fix vpa for vmagent delete resources (#820)

## Summary by CodeRabbit

- **Chores**
- Updated resource allocation settings for monitoring agents by removing
predefined CPU and memory limits.
- Added an option to specify separate resource settings for the config
reloader component.

2025-04-17 23:20:16 +02:00
Andrei Kvapil
159b87d593 Release v0.30.2 (#813)
This PR prepares the release `v0.30.2`.
(Please merge it before releasing draft)
2025-04-17 23:19:03 +02:00
65 changed files with 452 additions and 647 deletions

.github/CODEOWNERS
View File

@@ -1 +1 @@
* @kvaps @lllamnyp @klinch0
* @kvaps @lllamnyp

.github/workflows/backport.yaml (new file)
View File

@@ -0,0 +1,49 @@
name: Automatic Backport
on:
pull_request_target:
types: [closed] # fires when PR is closed (merged)
permissions:
contents: write
pull-requests: write
jobs:
backport:
if: |
github.event.pull_request.merged == true &&
contains(github.event.pull_request.labels.*.name, 'backport')
runs-on: [self-hosted]
steps:
# 1. Decide which maintenance branch should receive the backport
- name: Determine target maintenance branch
id: target
uses: actions/github-script@v7
with:
script: |
let rel;
try {
rel = await github.rest.repos.getLatestRelease({
owner: context.repo.owner,
repo: context.repo.repo
});
} catch (e) {
core.setFailed('No existing releases found; cannot determine backport target.');
return;
}
const [maj, min] = rel.data.tag_name.replace(/^v/, '').split('.');
const branch = `release-${maj}.${min}`;
core.setOutput('branch', branch);
console.log(`Latest release ${rel.data.tag_name}; backporting to ${branch}`);
# 2. Checkout (required by backport-action)
- name: Checkout repository
uses: actions/checkout@v4
# 3. Create the backport pull request
- name: Create backport PR
uses: korthout/backport-action@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
label_pattern: '' # don't read labels for targets
target_branches: ${{ steps.target.outputs.branch }}

View File

@@ -39,38 +39,82 @@ jobs:
runs-on: [self-hosted]
permissions:
contents: write
if: |
github.event.pull_request.merged == true &&
contains(github.event.pull_request.labels.*.name, 'release')
steps:
# Extract tag from branch name (branch = release-X.Y.Z*)
- name: Extract tag from branch name
id: get_tag
uses: actions/github-script@v7
with:
script: |
const branch = context.payload.pull_request.head.ref;
const match = branch.match(/^release-(\d+\.\d+\.\d+(?:[-\w\.]+)?)$/);
if (!match) {
core.setFailed(`Branch '${branch}' does not match expected format 'release-X.Y.Z[-suffix]'`);
} else {
const tag = `v${match[1]}`;
core.setOutput('tag', tag);
console.log(`✅ Extracted tag: ${tag}`);
const m = branch.match(/^release-(\d+\.\d+\.\d+(?:[-\w\.]+)?)$/);
if (!m) {
core.setFailed(`Branch '${branch}' does not match 'release-X.Y.Z[-suffix]'`);
return;
}
const tag = `v${m[1]}`;
core.setOutput('tag', tag);
console.log(`✅ Tag to publish: ${tag}`);
# Checkout repo & create / push annotated tag
- name: Checkout repo
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Create tag on merged commit
- name: Create tag on merge commit
run: |
git tag ${{ steps.get_tag.outputs.tag }} ${{ github.sha }} --force
git push origin ${{ steps.get_tag.outputs.tag }} --force
git tag -f ${{ steps.get_tag.outputs.tag }} ${{ github.sha }}
git push -f origin ${{ steps.get_tag.outputs.tag }}
# Get the latest published release
- name: Get the latest published release
id: latest_release
uses: actions/github-script@v7
with:
script: |
try {
const rel = await github.rest.repos.getLatestRelease({
owner: context.repo.owner,
repo: context.repo.repo
});
core.setOutput('tag', rel.data.tag_name);
} catch (_) {
core.setOutput('tag', '');
}
# Compare current tag vs latest using semver-utils
- name: Semver compare
id: semver
uses: madhead/semver-utils@v4.3.0
with:
version: ${{ steps.get_tag.outputs.tag }}
compare-to: ${{ steps.latest_release.outputs.tag }}
# Derive flags: prerelease? make_latest?
- name: Calculate publish flags
id: flags
uses: actions/github-script@v7
with:
script: |
const tag = '${{ steps.get_tag.outputs.tag }}'; // v0.31.5-rc1
const m = tag.match(/^v(\d+\.\d+\.\d+)(-rc\d+)?$/);
if (!m) {
core.setFailed(`❌ tag '${tag}' must match 'vX.Y.Z' or 'vX.Y.Z-rcN'`);
return;
}
const version = m[1] + (m[2] ?? ''); // 0.31.5-rc1
const isRc = Boolean(m[2]);
core.setOutput('is_rc', isRc);
const outdated = '${{ steps.semver.outputs.comparison-result }}' === '<';
core.setOutput('make_latest', isRc || outdated ? 'false' : 'legacy');
# Publish draft release with correct flags
- name: Publish draft release
uses: actions/github-script@v7
with:
@@ -78,19 +122,17 @@ jobs:
const tag = '${{ steps.get_tag.outputs.tag }}';
const releases = await github.rest.repos.listReleases({
owner: context.repo.owner,
repo: context.repo.repo
repo: context.repo.repo
});
const release = releases.data.find(r => r.tag_name === tag && r.draft);
if (!release) {
throw new Error(`Draft release with tag ${tag} not found`);
}
const draft = releases.data.find(r => r.tag_name === tag && r.draft);
if (!draft) throw new Error(`Draft release for ${tag} not found`);
await github.rest.repos.updateRelease({
owner: context.repo.owner,
repo: context.repo.repo,
release_id: release.id,
draft: false
owner: context.repo.owner,
repo: context.repo.repo,
release_id: draft.id,
draft: false,
prerelease: ${{ steps.flags.outputs.is_rc }},
make_latest: '${{ steps.flags.outputs.make_latest }}'
});
console.log(` Published release for ${tag}`);
console.log(`🚀 Published release for ${tag}`);

View File

@@ -12,9 +12,20 @@ jobs:
contents: read
packages: write
# ─────────────────────────────────────────────────────────────
# Run automatically for internal PRs (same repo).
# For external PRs (forks) require the "ok-to-test" label.
# Never run when the PR carries the "release" label.
# ─────────────────────────────────────────────────────────────
if: |
contains(github.event.pull_request.labels.*.name, 'ok-to-test') &&
!contains(github.event.pull_request.labels.*.name, 'release')
!contains(github.event.pull_request.labels.*.name, 'release') &&
(
github.event.pull_request.head.repo.full_name == github.repository ||
(
github.event.pull_request.head.repo.full_name != github.repository &&
contains(github.event.pull_request.labels.*.name, 'ok-to-test')
)
)
steps:
- name: Checkout code
@@ -30,10 +41,8 @@ jobs:
password: ${{ secrets.GITHUB_TOKEN }}
registry: ghcr.io
- name: make build
run: |
make build
- name: Build
run: make build
- name: make test
run: |
make test
- name: Test
run: make test

View File

@@ -1,10 +1,9 @@
name: Versioned Tag
on:
# Trigger on push if it includes a tag like vX.Y.Z
push:
tags:
- 'v*.*.*'
- 'v*.*.*' # vX.Y.Z or vX.Y.Z-rcN
jobs:
prepare-release:
@@ -16,7 +15,7 @@ jobs:
pull-requests: write
steps:
# 1) Check if a non-draft release with this tag already exists
# Check if a non-draft release with this tag already exists
- name: Check if release already exists
id: check_release
uses: actions/github-script@v7
@@ -25,57 +24,67 @@ jobs:
const tag = context.ref.replace('refs/tags/', '');
const releases = await github.rest.repos.listReleases({
owner: context.repo.owner,
repo: context.repo.repo
repo: context.repo.repo
});
const existing = releases.data.find(r => r.tag_name === tag && !r.draft);
if (existing) {
core.setOutput('skip', 'true');
} else {
core.setOutput('skip', 'false');
}
const exists = releases.data.some(r => r.tag_name === tag && !r.draft);
core.setOutput('skip', exists);
console.log(exists ? `Release ${tag} already published` : `No published release ${tag}`);
# If a published release already exists, skip the rest of the workflow
- name: Skip if release already exists
if: steps.check_release.outputs.skip == 'true'
run: echo "Release already exists, skipping workflow."
# 2) Determine the base branch from which the tag was pushed
# Parse tag metadata (rc?, maintenance line, etc.)
- name: Parse tag
if: steps.check_release.outputs.skip == 'false'
id: tag
uses: actions/github-script@v7
with:
script: |
const ref = context.ref.replace('refs/tags/', ''); // e.g. v0.31.5-rc1
const m = ref.match(/^v(\d+\.\d+\.\d+)(-rc\d+)?$/);
if (!m) {
core.setFailed(`❌ tag '${ref}' must match 'vX.Y.Z' or 'vX.Y.Z-rcN'`);
return;
}
const version = m[1] + (m[2] ?? ''); // 0.31.5-rc1
const isRc = Boolean(m[2]);
const [maj, min] = m[1].split('.');
core.setOutput('tag', ref);
core.setOutput('version', version);
core.setOutput('is_rc', isRc);
core.setOutput('line', `${maj}.${min}`); // 0.31
# Detect base branch (main or release-X.Y) the tag was pushed from
- name: Get base branch
if: steps.check_release.outputs.skip == 'false'
id: get_base
uses: actions/github-script@v7
with:
script: |
/*
For a push event with a tag, GitHub sets context.payload.base_ref
if the tag was pushed from a branch.
If it's empty, we can't determine the correct base branch and must fail.
*/
const baseRef = context.payload.base_ref;
if (!baseRef) {
core.setFailed(`❌ base_ref is empty. Make sure you push the tag from a branch (e.g. 'git push origin HEAD:refs/tags/vX.Y.Z').`);
core.setFailed(`❌ base_ref is empty. Push the tag via 'git push origin HEAD:refs/tags/<tag>'.`);
return;
}
const shortBranch = baseRef.replace("refs/heads/", "");
const releasePattern = /^release-\d+\.\d+$/;
if (shortBranch !== "main" && !releasePattern.test(shortBranch)) {
core.setFailed(`❌ Tagged commit must belong to branch 'main' or 'release-X.Y'. Got '${shortBranch}'`);
const branch = baseRef.replace('refs/heads/', '');
const ok = branch === 'main' || /^release-\d+\.\d+$/.test(branch);
if (!ok) {
core.setFailed(`❌ Tagged commit must belong to 'main' or 'release-X.Y'. Got '${branch}'`);
return;
}
core.setOutput('branch', branch);
core.setOutput('branch', shortBranch);
# 3) Checkout full git history and tags
# Checkout & login once
- name: Checkout code
if: steps.check_release.outputs.skip == 'false'
uses: actions/checkout@v4
with:
fetch-depth: 0
fetch-tags: true
fetch-tags: true
# 4) Login to GitHub Container Registry
- name: Login to GitHub Container Registry
- name: Login to GHCR
if: steps.check_release.outputs.skip == 'false'
uses: docker/login-action@v3
with:
@@ -83,113 +92,160 @@ jobs:
password: ${{ secrets.GITHUB_TOKEN }}
registry: ghcr.io
# 5) Build project artifacts
# Build project artifacts
- name: Build
if: steps.check_release.outputs.skip == 'false'
run: make build
# 6) Optionally commit built artifacts to the repository
# Commit built artifacts
- name: Commit release artifacts
if: steps.check_release.outputs.skip == 'false'
env:
GIT_AUTHOR_NAME: ${{ github.actor }}
GIT_AUTHOR_EMAIL: ${{ github.actor }}@users.noreply.github.com
run: |
git config user.name "github-actions"
git config user.name "github-actions"
git config user.email "github-actions@github.com"
git add .
git commit -m "Prepare release ${GITHUB_REF#refs/tags/}" -s || echo "No changes to commit"
git push origin HEAD || true
# 7) Create a release branch like release-X.Y.Z
# Get `latest_version` from latest published release
- name: Get latest published release
if: steps.check_release.outputs.skip == 'false'
id: latest_release
uses: actions/github-script@v7
with:
script: |
try {
const rel = await github.rest.repos.getLatestRelease({
owner: context.repo.owner,
repo: context.repo.repo
});
core.setOutput('tag', rel.data.tag_name);
} catch (_) {
core.setOutput('tag', '');
}
# Compare tag (A) with latest (B)
- name: Semver compare
if: steps.check_release.outputs.skip == 'false'
id: semver
uses: madhead/semver-utils@v4.3.0
with:
version: ${{ steps.tag.outputs.tag }} # A
compare-to: ${{ steps.latest_release.outputs.tag }} # B
# Create or reuse DRAFT GitHub Release
- name: Create / reuse draft release
if: steps.check_release.outputs.skip == 'false'
id: release
uses: actions/github-script@v7
with:
script: |
const tag = '${{ steps.tag.outputs.tag }}';
const isRc = ${{ steps.tag.outputs.is_rc }};
const outdated = '${{ steps.semver.outputs.comparison-result }}' === '<';
const makeLatest = outdated ? false : 'legacy';
const releases = await github.rest.repos.listReleases({
owner: context.repo.owner,
repo: context.repo.repo
});
let rel = releases.data.find(r => r.tag_name === tag);
if (!rel) {
rel = await github.rest.repos.createRelease({
owner: context.repo.owner,
repo: context.repo.repo,
tag_name: tag,
name: tag,
draft: true,
prerelease: isRc,
make_latest: makeLatest
});
console.log(`Draft release created for ${tag}`);
} else {
console.log(`Reusing existing release ${tag}`);
}
core.setOutput('upload_url', rel.upload_url);
# Build + upload assets (optional)
- name: Build & upload assets
if: steps.check_release.outputs.skip == 'false'
run: |
make assets
make upload_assets VERSION=${{ steps.tag.outputs.tag }}
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Ensure long-lived maintenance branch release-X.Y
- name: Ensure maintenance branch release-${{ steps.tag.outputs.line }}
if: |
steps.check_release.outputs.skip == 'false' &&
steps.get_base.outputs.branch == 'main'
uses: actions/github-script@v7
with:
script: |
const branch = `release-${'${{ steps.tag.outputs.line }}'}`;
try {
await github.rest.repos.getBranch({
owner: context.repo.owner,
repo: context.repo.repo,
branch
});
console.log(`Branch '${branch}' already exists`);
} catch (_) {
await github.git.createRef({
owner: context.repo.owner,
repo: context.repo.repo,
ref: `refs/heads/${branch}`,
sha: context.sha
});
console.log(`Branch '${branch}' created at ${context.sha}`);
}
# Create release-X.Y.Z branch and push (force-update)
- name: Create release branch
if: steps.check_release.outputs.skip == 'false'
run: |
BRANCH_NAME="release-${GITHUB_REF#refs/tags/v}"
git branch -f "$BRANCH_NAME"
git push origin "$BRANCH_NAME" --force
BRANCH="release-${GITHUB_REF#refs/tags/v}"
git branch -f "$BRANCH"
git push -f origin "$BRANCH"
# 8) Create a pull request from release-X.Y.Z to the original base branch
# Create pull request into original base branch (if absent)
- name: Create pull request if not exists
if: steps.check_release.outputs.skip == 'false'
uses: actions/github-script@v7
with:
script: |
const version = context.ref.replace('refs/tags/v', '');
const base = '${{ steps.get_base.outputs.branch }}';
const head = `release-${version}`;
const base = '${{ steps.get_base.outputs.branch }}';
const head = `release-${version}`;
const prs = await github.rest.pulls.list({
owner: context.repo.owner,
repo: context.repo.repo,
head: `${context.repo.owner}:${head}`,
repo: context.repo.repo,
head: `${context.repo.owner}:${head}`,
base
});
if (prs.data.length === 0) {
const newPr = await github.rest.pulls.create({
const pr = await github.rest.pulls.create({
owner: context.repo.owner,
repo: context.repo.repo,
repo: context.repo.repo,
head,
base,
title: `Release v${version}`,
body:
`This PR prepares the release \`v${version}\`.\n` +
`(Please merge it before releasing draft)`,
body: `This PR prepares the release \`v${version}\`.`,
draft: false
});
console.log(`Created pull request #${newPr.data.number} from ${head} to ${base}`);
await github.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: newPr.data.number,
repo: context.repo.repo,
issue_number: pr.data.number,
labels: ['release']
});
console.log(`Created PR #${pr.data.number}`);
} else {
console.log(`Pull request already exists from ${head} to ${base}`);
console.log(`PR already exists from ${head} to ${base}`);
}
# 9) Create or reuse an existing draft GitHub release for this tag
- name: Create or reuse draft release
if: steps.check_release.outputs.skip == 'false'
id: create_release
uses: actions/github-script@v7
with:
script: |
const tag = context.ref.replace('refs/tags/', '');
const releases = await github.rest.repos.listReleases({
owner: context.repo.owner,
repo: context.repo.repo
});
let release = releases.data.find(r => r.tag_name === tag);
if (!release) {
release = await github.rest.repos.createRelease({
owner: context.repo.owner,
repo: context.repo.repo,
tag_name: tag,
name: `${tag}`,
draft: true,
prerelease: false
});
}
core.setOutput('upload_url', release.upload_url);
# 10) Build additional assets for the release (if needed)
- name: Build assets
if: steps.check_release.outputs.skip == 'false'
run: make assets
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# 11) Upload assets to the draft release
- name: Upload assets
if: steps.check_release.outputs.skip == 'false'
run: make upload_assets VERSION=${GITHUB_REF#refs/tags/}
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# 12) Run tests
- name: Run tests
# Run tests
- name: Test
if: steps.check_release.outputs.skip == 'false'
run: make test

View File

@@ -1,139 +0,0 @@
# Release Workflow
This section explains how Cozystack builds and releases are made.
## Regular Releases
When making regular releases, we take a commit in `main` and decide to make it a release `x.y.0`.
In this explanation, we'll use version `v0.42.0` as an example:
```mermaid
gitGraph
commit id: "feature"
commit id: "feature 2"
commit id: "feature 3" tag: "v0.42.0"
```
A regular release sequence starts in the following way:
1. Maintainer tags a commit in `main` with `v0.42.0` and pushes it to GitHub.
2. CI workflow triggers on tag push:
1. Creates a draft page for release `v0.42.0`, if it wasn't created before.
2. Takes code from tag `v0.42.0`, builds images, and pushes them to ghcr.io.
3. Makes a new commit `Prepare release v0.42.0` with updated digests, pushes it to the new branch `release-0.42.0`, and opens a PR to `main`.
4. Builds Cozystack release assets from the new commit `Prepare release v0.42.0` and uploads them to the release draft page.
3. Maintainer reviews PR, tests build artifacts, and edits changelogs on the release draft page.
```mermaid
gitGraph
commit id: "feature"
commit id: "feature 2"
commit id: "feature 3" tag: "v0.42.0"
branch release-0.42.0
checkout release-0.42.0
commit id: "Prepare release v0.42.0"
checkout main
merge release-0.42.0 id: "Pull Request"
```
When testing and editing are completed, the sequence goes on.
4. Maintainer merges the PR. GitHub removes the merged branch `release-0.42.0`.
5. CI workflow triggers on merge:
1. Moves the tag `v0.42.0` to the newly created merge commit by force-pushing a tag to GitHub.
2. Publishes the release page (`draft` → `latest`).
6. The maintainer can now announce the release to the community.
```mermaid
gitGraph
commit id: "feature"
commit id: "feature 2"
commit id: "feature 3"
branch release-0.42.0
checkout release-0.42.0
commit id: "Prepare release v0.42.0"
checkout main
merge release-0.42.0 id: "Release v0.42.0" tag: "v0.42.0"
```
## Patch Releases
Making a patch release has a lot in common with a regular release, with a couple of differences:
* A release branch is used instead of `main`
* Patch commits are cherry-picked to the release branch.
* A pull request is opened against the release branch.
Let's assume that we've released `v0.42.0` and that development is ongoing.
We have introduced a couple of new features and some fixes to features that we have released
in `v0.42.0`.
Once problems were found and fixed, a patch release is due.
```mermaid
gitGraph
commit id: "Release v0.42.0" tag: "v0.42.0"
checkout main
commit id: "feature 4"
commit id: "patch 1"
commit id: "feature 5"
commit id: "patch 2"
```
1. The maintainer creates a release branch, `release-0.42`, and cherry-picks patch commits from `main` to `release-0.42`.
These must be only patches to features that were present in version `v0.42.0`.
Cherry-picking can be done as soon as each patch is merged into `main`,
or directly before the release.
```mermaid
gitGraph
commit id: "Release v0.42.0" tag: "v0.42.0"
branch release-0.42
checkout main
commit id: "feature 4"
commit id: "patch 1"
commit id: "feature 5"
commit id: "patch 2"
checkout release-0.42
cherry-pick id: "patch 1"
cherry-pick id: "patch 2"
```
When all relevant patch commits are cherry-picked, the branch is ready for release.
2. The maintainer tags the `HEAD` commit of branch `release-0.42` as `v0.42.1` and then pushes it to GitHub.
3. CI workflow triggers on tag push:
1. Creates a draft page for release `v0.42.1`, if it wasn't created before.
2. Takes code from tag `v0.42.1`, builds images, and pushes them to ghcr.io.
3. Makes a new commit `Prepare release v0.42.1` with updated digests, pushes it to the new branch `release-0.42.1`, and opens a PR to `release-0.42`.
4. Builds Cozystack release assets from the new commit `Prepare release v0.42.1` and uploads them to the release draft page.
4. Maintainer reviews PR, tests build artifacts, and edits changelogs on the release draft page.
```mermaid
gitGraph
commit id: "Release v0.42.0" tag: "v0.42.0"
branch release-0.42
checkout main
commit id: "feature 4"
commit id: "patch 1"
commit id: "feature 5"
commit id: "patch 2"
checkout release-0.42
cherry-pick id: "patch 1"
cherry-pick id: "patch 2" tag: "v0.42.1"
branch release-0.42.1
commit id: "Prepare release v0.42.1"
checkout release-0.42
merge release-0.42.1 id: "Pull request"
```
Finally, when release is confirmed, the release sequence goes on.
5. Maintainer merges the PR. GitHub removes the merged branch `release-0.42.1`.
6. CI workflow triggers on merge:
1. Moves the tag `v0.42.1` to the newly created merge commit by force-pushing a tag to GitHub.
2. Publishes the release page (`draft` → `latest`).
7. The maintainer can now announce the release to the community.

View File

@@ -60,7 +60,7 @@ done
# Prepare system drive
if [ ! -f nocloud-amd64.raw ]; then
wget https://github.com/cozystack/cozystack/releases/latest/download/nocloud-amd64.raw.xz -O nocloud-amd64.raw.xz
wget https://github.com/cozystack/cozystack/releases/latest/download/nocloud-amd64.raw.xz -O nocloud-amd64.raw.xz --show-progress --output-file /dev/stdout --progress=dot:giga 2>/dev/null
rm -f nocloud-amd64.raw
xz --decompress nocloud-amd64.raw.xz
fi
@@ -234,8 +234,8 @@ sleep 5
kubectl get hr -A | awk 'NR>1 {print "kubectl wait --timeout=15m --for=condition=ready -n " $1 " hr/" $2 " &"} END{print "wait"}' | sh -x
# Wait for Cluster-API providers
timeout 30 sh -c 'until kubectl get deploy -n cozy-cluster-api capi-controller-manager capi-kamaji-controller-manager capi-kubeadm-bootstrap-controller-manager capi-operator-cluster-api-operator capk-controller-manager; do sleep 1; done'
kubectl wait deploy --timeout=30s --for=condition=available -n cozy-cluster-api capi-controller-manager capi-kamaji-controller-manager capi-kubeadm-bootstrap-controller-manager capi-operator-cluster-api-operator capk-controller-manager
timeout 60 sh -c 'until kubectl get deploy -n cozy-cluster-api capi-controller-manager capi-kamaji-controller-manager capi-kubeadm-bootstrap-controller-manager capi-operator-cluster-api-operator capk-controller-manager; do sleep 1; done'
kubectl wait deploy --timeout=1m --for=condition=available -n cozy-cluster-api capi-controller-manager capi-kamaji-controller-manager capi-kubeadm-bootstrap-controller-manager capi-operator-cluster-api-operator capk-controller-manager
# Wait for linstor controller
kubectl wait deploy --timeout=5m --for=condition=available -n cozy-linstor linstor-controller
@@ -357,5 +357,5 @@ kubectl patch -n cozy-system cm/cozystack --type=merge -p '{"data":{
"oidc-enabled": "true"
}}'
timeout 60 sh -c 'until kubectl get hr -n cozy-keycloak keycloak keycloak-configure keycloak-operator; do sleep 1; done'
timeout 120 sh -c 'until kubectl get hr -n cozy-keycloak keycloak keycloak-configure keycloak-operator; do sleep 1; done'
kubectl wait --timeout=10m --for=condition=ready -n cozy-keycloak hr keycloak keycloak-configure keycloak-operator

View File

@@ -39,6 +39,15 @@ func (r *WorkloadReconciler) Reconcile(ctx context.Context, req ctrl.Request) (c
}
t := getMonitoredObject(w)
if t == nil {
err = r.Delete(ctx, w)
if err != nil {
logger.Error(err, "failed to delete workload")
}
return ctrl.Result{}, err
}
err = r.Get(ctx, types.NamespacedName{Name: t.GetName(), Namespace: t.GetNamespace()}, t)
// found object, nothing to do
@@ -68,20 +77,23 @@ func (r *WorkloadReconciler) SetupWithManager(mgr ctrl.Manager) error {
}
func getMonitoredObject(w *cozyv1alpha1.Workload) client.Object {
if strings.HasPrefix(w.Name, "pvc-") {
switch {
case strings.HasPrefix(w.Name, "pvc-"):
obj := &corev1.PersistentVolumeClaim{}
obj.Name = strings.TrimPrefix(w.Name, "pvc-")
obj.Namespace = w.Namespace
return obj
}
if strings.HasPrefix(w.Name, "svc-") {
case strings.HasPrefix(w.Name, "svc-"):
obj := &corev1.Service{}
obj.Name = strings.TrimPrefix(w.Name, "svc-")
obj.Namespace = w.Namespace
return obj
case strings.HasPrefix(w.Name, "pod-"):
obj := &corev1.Pod{}
obj.Name = strings.TrimPrefix(w.Name, "pod-")
obj.Namespace = w.Namespace
return obj
}
obj := &corev1.Pod{}
obj.Name = w.Name
obj.Namespace = w.Namespace
var obj client.Object
return obj
}

View File

@@ -0,0 +1,26 @@
package controller
import (
"testing"
cozyv1alpha1 "github.com/cozystack/cozystack/api/v1alpha1"
corev1 "k8s.io/api/core/v1"
)
func TestUnprefixedMonitoredObjectReturnsNil(t *testing.T) {
w := &cozyv1alpha1.Workload{}
w.Name = "unprefixed-name"
obj := getMonitoredObject(w)
if obj != nil {
t.Errorf(`getMonitoredObject(&Workload{Name: "%s"}) == %v, want nil`, w.Name, obj)
}
}
func TestPodMonitoredObject(t *testing.T) {
w := &cozyv1alpha1.Workload{}
w.Name = "pod-mypod"
obj := getMonitoredObject(w)
if pod, ok := obj.(*corev1.Pod); !ok || pod.Name != "mypod" {
t.Errorf(`getMonitoredObject(&Workload{Name: "%s"}) == %v, want &Pod{Name: "mypod"}`, w.Name, obj)
}
}

View File

@@ -116,24 +116,15 @@ func (r *WorkloadMonitorReconciler) reconcileServiceForMonitor(
resources := make(map[string]resource.Quantity)
quantity := resource.MustParse("0")
q := resource.MustParse("0")
for _, ing := range svc.Status.LoadBalancer.Ingress {
if ing.IP != "" {
quantity.Add(resource.MustParse("1"))
q.Add(resource.MustParse("1"))
}
}
var resourceLabel string
if svc.Annotations != nil {
var ok bool
resourceLabel, ok = svc.Annotations["metallb.universe.tf/ip-allocated-from-pool"]
if !ok {
resourceLabel = "default"
}
}
resourceLabel = fmt.Sprintf("%s.ipaddresspool.metallb.io/requests.ipaddresses", resourceLabel)
resources[resourceLabel] = quantity
resources["public-ips"] = q
_, err := ctrl.CreateOrUpdate(ctx, r.Client, workload, func() error {
// Update owner references with the new monitor
@@ -174,12 +165,7 @@ func (r *WorkloadMonitorReconciler) reconcilePVCForMonitor(
resources := make(map[string]resource.Quantity)
for resourceName, resourceQuantity := range pvc.Status.Capacity {
storageClass := "default"
if pvc.Spec.StorageClassName != nil || *pvc.Spec.StorageClassName == "" {
storageClass = *pvc.Spec.StorageClassName
}
resourceLabel := fmt.Sprintf("%s.storageclass.storage.k8s.io/requests.%s", storageClass, resourceName.String())
resources[resourceLabel] = resourceQuantity
resources[resourceName.String()] = resourceQuantity
}
_, err := ctrl.CreateOrUpdate(ctx, r.Client, workload, func() error {
@@ -212,15 +198,12 @@ func (r *WorkloadMonitorReconciler) reconcilePodForMonitor(
) error {
logger := log.FromContext(ctx)
// Combine both init containers and normal containers to sum resources properly
combinedContainers := append(pod.Spec.InitContainers, pod.Spec.Containers...)
// totalResources will store the sum of all container resource limits
// totalResources will store the sum of all container resource requests
totalResources := make(map[string]resource.Quantity)
// Iterate over all containers to aggregate their Limits
for _, container := range combinedContainers {
for name, qty := range container.Resources.Limits {
// Iterate over all containers to aggregate their requests
for _, container := range pod.Spec.Containers {
for name, qty := range container.Resources.Requests {
if existing, exists := totalResources[name.String()]; exists {
existing.Add(qty)
totalResources[name.String()] = existing
@@ -249,7 +232,7 @@ func (r *WorkloadMonitorReconciler) reconcilePodForMonitor(
workload := &cozyv1alpha1.Workload{
ObjectMeta: metav1.ObjectMeta{
Name: pod.Name,
Name: fmt.Sprintf("pod-%s", pod.Name),
Namespace: pod.Namespace,
},
}

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/postgres-backup:0.10.0@sha256:10179ed56457460d95cd5708db2a00130901255fa30c4dd76c65d2ef5622b61f
ghcr.io/cozystack/cozystack/postgres-backup:0.10.1@sha256:10179ed56457460d95cd5708db2a00130901255fa30c4dd76c65d2ef5622b61f

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/nginx-cache:0.4.0@sha256:bef7344da098c4dc400a9e20ffad10ac991df67d09a30026207454abbc91f28b
ghcr.io/cozystack/cozystack/nginx-cache:0.4.0@sha256:4da14241052d2c4bd29d1766c4a569446f808a19538ef7f6acc05a981913df8e

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/cluster-autoscaler:0.18.0@sha256:85371c6aabf5a7fea2214556deac930c600e362f92673464fe2443784e2869c3
ghcr.io/cozystack/cozystack/cluster-autoscaler:0.18.1@sha256:85371c6aabf5a7fea2214556deac930c600e362f92673464fe2443784e2869c3

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/kubevirt-cloud-provider:0.18.0@sha256:795d8e1ef4b2b0df2aa1e09d96cd13476ebb545b4bf4b5779b7547a70ef64cf9
ghcr.io/cozystack/cozystack/kubevirt-cloud-provider:0.18.1@sha256:795d8e1ef4b2b0df2aa1e09d96cd13476ebb545b4bf4b5779b7547a70ef64cf9

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.18.0@sha256:6f9091c3e7e4951c5e43fdafd505705fcc9f1ead290ee3ae42e97e9ec2b87b20
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.18.1@sha256:5717919c75e609902c6d67138311a2a8fd07be822e2173f3802b67cf5f3486e9

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/ubuntu-container-disk:v1.30.1@sha256:07392e7a87a3d4ef1c86c1b146e6c5de5c2b524aed5a53bf48870dc8a296f99a
ghcr.io/cozystack/cozystack/ubuntu-container-disk:v1.30.1@sha256:6359b7877f04c6ac6641c0ebcc2a1d03cabfe1718464cd43f82e97724ad6aad8

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.10.0
version: 0.10.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/postgres-backup:0.10.0@sha256:10179ed56457460d95cd5708db2a00130901255fa30c4dd76c65d2ef5622b61f
ghcr.io/cozystack/cozystack/postgres-backup:0.10.1@sha256:10179ed56457460d95cd5708db2a00130901255fa30c4dd76c65d2ef5622b61f

View File

@@ -13,9 +13,6 @@ spec:
jobTemplate:
spec:
backoffLimit: 2
template:
spec:
restartPolicy: OnFailure
template:
metadata:
annotations:
@@ -24,7 +21,7 @@ spec:
spec:
imagePullSecrets:
- name: {{ .Release.Name }}-regsecret
restartPolicy: Never
restartPolicy: OnFailure
containers:
- name: pgdump
image: "{{ $.Files.Get "images/postgres-backup.tag" | trim }}"

View File

@@ -4,4 +4,4 @@ description: Separated tenant namespace
icon: /logos/tenant.svg
type: application
version: 1.10.0
version: 1.9.2

View File

@@ -1,7 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: cozy-tenant-configuration-hash
namespace: {{ include "tenant.name" . }}
data:
cozyTenantConfigurationHash: {{ sha256sum (toJson .Values) | quote }}

View File

@@ -24,6 +24,7 @@ spec:
ingress:
- fromEntities:
- world
- cluster
egress:
- toEntities:
- world

View File

@@ -89,7 +89,8 @@ postgres 0.7.0 4b90bf5a
postgres 0.7.1 1ec10165
postgres 0.8.0 4e68e65c
postgres 0.9.0 8267072d
postgres 0.10.0 HEAD
postgres 0.10.0 721c12a7
postgres 0.10.1 HEAD
rabbitmq 0.1.0 263e47be
rabbitmq 0.2.0 53f2365e
rabbitmq 0.3.0 6c5cf5bf
@@ -131,7 +132,7 @@ tenant 1.7.0 24fa7222
tenant 1.8.0 160e4e2a
tenant 1.9.0 728743db
tenant 1.9.1 721c12a7
tenant 1.10.0 HEAD
tenant 1.9.2 HEAD
virtual-machine 0.1.4 f2015d65
virtual-machine 0.1.5 263e47be
virtual-machine 0.2.0 c0685f43
@@ -144,7 +145,8 @@ virtual-machine 0.7.1 0ab39f20
virtual-machine 0.8.0 3fa4dd3a
virtual-machine 0.8.1 93c46161
virtual-machine 0.8.2 de19450f
virtual-machine 0.9.0 HEAD
virtual-machine 0.9.0 721c12a7
virtual-machine 0.9.1 HEAD
vm-disk 0.1.0 d971f2ff
vm-disk 0.1.1 HEAD
vm-instance 0.1.0 1ec10165
@@ -154,7 +156,8 @@ vm-instance 0.4.0 e23286a3
vm-instance 0.4.1 0ab39f20
vm-instance 0.5.0 3fa4dd3a
vm-instance 0.5.1 de19450f
vm-instance 0.6.0 HEAD
vm-instance 0.6.0 721c12a7
vm-instance 0.6.1 HEAD
vpn 0.1.0 263e47be
vpn 0.2.0 53f2365e
vpn 0.3.0 6c5cf5bf

View File

@@ -17,7 +17,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.9.0
version: 0.9.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -74,7 +74,8 @@ spec:
{{- if .Values.gpus }}
gpus:
{{- range $i, $gpu := .Values.gpus }}
- deviceName: {{ $gpu.name }}
- name: gpu{{ add $i 1 }}
deviceName: {{ $gpu.name }}
{{- end }}
{{- end }}
disks:

View File

@@ -17,7 +17,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.6.0
version: 0.6.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -46,7 +46,8 @@ spec:
{{- if .Values.gpus }}
gpus:
{{- range $i, $gpu := .Values.gpus }}
- deviceName: {{ $gpu.name }}
- name: gpu{{ add $i 1 }}
deviceName: {{ $gpu.name }}
{{- end }}
{{- end }}
disks:

View File

@@ -30,8 +30,6 @@ FROM alpine:3.21
RUN apk add --no-cache make
RUN apk add helm kubectl --repository=https://dl-cdn.alpinelinux.org/alpine/edge/community
RUN apk add yq
RUN apk add coreutils
COPY scripts /cozystack/scripts
COPY --from=builder /src/packages/core /cozystack/packages/core

View File

@@ -1,2 +1,2 @@
cozystack:
image: ghcr.io/cozystack/cozystack/installer:v0.30.2@sha256:59996588b5d59b5593fb34442b2f2ed8ef466d138b229a8d37beb6f70141a690
image: ghcr.io/cozystack/cozystack/installer:v0.30.4@sha256:d474e9c3f90dadb24f2fc325acfa42648053e2b21949c91169769795b8b8217c

View File

@@ -7,11 +7,7 @@ show:
helm template -n $(NAMESPACE) $(NAME) . --dry-run=server $(API_VERSIONS_FLAGS)
apply:
helm template -n $(NAMESPACE) $(NAME) . --dry-run=server $(API_VERSIONS_FLAGS) \
| kubectl apply -f-
kubectl delete helmreleases.helm.toolkit.fluxcd.io -l cozystack.io/marked-for-deletion=true -A
reconcile: apply
helm template -n $(NAMESPACE) $(NAME) . --dry-run=server $(API_VERSIONS_FLAGS) | kubectl apply -f-
namespaces-show:
helm template -n $(NAMESPACE) $(NAME) . --dry-run=server $(API_VERSIONS_FLAGS) -s templates/namespaces.yaml

View File

@@ -270,10 +270,7 @@ releases:
{{- end }}
{{- end }}
{{- end }}
frontend:
resourcesPreset: "none"
dashboard:
resourcesPreset: "none"
{{- $cozystackBranding:= lookup "v1" "ConfigMap" "cozy-system" "cozystack-branding" }}
{{- $branding := dig "data" "branding" "" $cozystackBranding }}
{{- if $branding }}

View File

@@ -168,10 +168,7 @@ releases:
{{- end }}
{{- end }}
{{- end }}
frontend:
resourcesPreset: "none"
dashboard:
resourcesPreset: "none"
{{- $cozystackBranding:= lookup "v1" "ConfigMap" "cozy-system" "cozystack-branding" }}
{{- $branding := dig "data" "branding" "" $cozystackBranding }}
{{- if $branding }}

View File

@@ -54,12 +54,6 @@ spec:
namespace: cozy-public
values:
host: "{{ $host }}"
valuesFrom:
- kind: ConfigMap
name: "cozy-system-configuration-hash"
valuesKey: "cozyTenantConfigurationHash"
targetPath: "cozyTenantConfigurationHash"
optional: true
dependsOn:
{{- range $x := $bundle.releases }}
{{- if has $x.name (list "cilium" "kubeovn") }}

View File

@@ -1,14 +0,0 @@
{{- $rootTenantConfiguration := dict "values" .Values }}
{{- $cozyConfig := index (lookup "v1" "ConfigMap" "cozy-system" "cozystack" ) "data" }}
{{- $cozyScheduling := index (lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling") "data" }}
{{- $cozyBranding := index (lookup "v1" "ConfigMap" "cozy-system" "cozystack-branding" ) "data" }}
{{- $_ := set $rootTenantConfiguration "config" $cozyConfig }}
{{- $_ := set $rootTenantConfiguration "scheduling" $cozyScheduling }}
{{- $_ := set $rootTenantConfiguration "branding" $cozyBranding }}
apiVersion: v1
kind: ConfigMap
metadata:
name: cozy-system-configuration-hash
namespace: tenant-root
data:
cozyTenantConfigurationHash: {{ sha256sum (toJson $rootTenantConfiguration) | quote }}

View File

@@ -7,23 +7,12 @@
{{/* collect dependency namespaces from releases */}}
{{- range $x := $bundle.releases }}
{{- $_ := set $dependencyNamespaces $x.name $x.namespace }}
{{- $_ := set $dependencyNamespaces $x.name $x.namespace }}
{{- end }}
{{- range $x := $bundle.releases }}
{{- $shouldInstall := true }}
{{- $shouldDelete := false }}
{{- if or (has $x.name $disabledComponents) (and ($x.optional) (not (has $x.name $enabledComponents))) }}
{{- $shouldInstall = false }}
{{- if $.Capabilities.APIVersions.Has "helm.toolkit.fluxcd.io/v2" }}
{{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" $x.namespace $x.name }}
{{- $shouldDelete = true }}
{{- end }}
{{- end }}
{{- end }}
{{- if or $shouldInstall $shouldDelete }}
{{- if not (has $x.name $disabledComponents) }}
{{- if or (not $x.optional) (and ($x.optional) (has $x.name $enabledComponents)) }}
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
@@ -33,9 +22,6 @@ metadata:
labels:
cozystack.io/repository: system
cozystack.io/system-app: "true"
{{- if $shouldDelete }}
cozystack.io/marked-for-deletion: "true"
{{- end }}
spec:
interval: 5m
releaseName: {{ $x.releaseName | default $x.name }}
@@ -61,10 +47,10 @@ spec:
{{- end }}
{{- $values := dict }}
{{- with $x.values }}
{{- $values = merge . $values }}
{{- $values = merge . $values }}
{{- end }}
{{- with index $cozyConfig.data (printf "values-%s" $x.name) }}
{{- $values = merge (fromYaml .) $values }}
{{- $values = merge (fromYaml .) $values }}
{{- end }}
{{- with $values }}
values:
@@ -84,12 +70,13 @@ spec:
{{- with $x.dependsOn }}
dependsOn:
{{- range $dep := . }}
{{- if not (has $dep $disabledComponents) }}
{{- range $dep := . }}
{{- if not (has $dep $disabledComponents) }}
- name: {{ $dep }}
namespace: {{ index $dependencyNamespaces $dep }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -1,2 +1,2 @@
e2e:
image: ghcr.io/cozystack/cozystack/e2e-sandbox:v0.30.2@sha256:31273d6b42dc88c2be2ff9ba64564d1b12e70ae8a5480953341b0d113ac7d4bd
image: ghcr.io/cozystack/cozystack/e2e-sandbox:v0.30.4@sha256:1f35a80c22b4ae3909216892e44f7ba50b00bd135b64081ffe5296eb936a5ca3

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/matchbox:v0.30.2@sha256:307d382f75f1dcb39820c73b93b2ce576cdb6d58032679bda7d926999c677900
ghcr.io/cozystack/cozystack/matchbox:v0.30.4@sha256:8eb6da7d616bd4f91fbe6a1bf3a4cb5448976c1ade2e1ecca9bf6a2bd1772851

View File

@@ -3,4 +3,4 @@ name: ingress
description: NGINX Ingress Controller
icon: /logos/ingress-nginx.svg
type: application
version: 1.5.0
version: 1.4.0

View File

@@ -4,13 +4,12 @@
### Common parameters
| Name | Description | Value |
| ----------------- | ----------------------------------------------------------------- | ------- |
| `replicas` | Number of ingress-nginx replicas | `2` |
| `externalIPs` | List of externalIPs for service. | `[]` |
| `whitelist` | List of client networks | `[]` |
| `clouflareProxy` | Restoring original visitor IPs when Cloudflare proxied is enabled | `false` |
| `dashboard` | Should ingress serve Cozystack service dashboard | `false` |
| `cdiUploadProxy` | Should ingress serve CDI upload proxy | `false` |
| `virtExportProxy` | Should ingress serve KubeVirt export proxy | `false` |
| Name | Description | Value |
| ---------------- | ----------------------------------------------------------------- | ------- |
| `replicas` | Number of ingress-nginx replicas | `2` |
| `externalIPs` | List of externalIPs for service. | `[]` |
| `whitelist` | List of client networks | `[]` |
| `clouflareProxy` | Restoring original visitor IPs when Cloudflare proxied is enabled | `false` |
| `dashboard` | Should ingress serve Cozystack service dashboard | `false` |
| `cdiUploadProxy` | Should ingress serve CDI upload proxy | `false` |

View File

@@ -35,11 +35,6 @@
"type": "boolean",
"description": "Should ingress serve CDI upload proxy",
"default": false
},
"virtExportProxy": {
"type": "boolean",
"description": "Should ingress serve KubeVirt export proxy",
"default": false
}
}
}

View File

@@ -30,6 +30,3 @@ dashboard: false
## @param cdiUploadProxy Should ingress serve CDI upload proxy
cdiUploadProxy: false
## @param virtExportProxy Should ingress serve KubeVirt export proxy
virtExportProxy: false

View File

@@ -1,37 +0,0 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $issuerType := (index $cozyConfig.data "clusterissuer") | default "http01" }}
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}
{{- if .Values.virtExportProxy }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
cert-manager.io/cluster-issuer: letsencrypt-prod
{{- if eq $issuerType "cloudflare" }}
{{- else }}
acme.cert-manager.io/http01-ingress-class: {{ .Release.Namespace }}
{{- end }}
name: virt-exportproxy-{{ .Release.Namespace }}
namespace: cozy-kubevirt
spec:
ingressClassName: {{ .Release.Namespace }}
rules:
- host: virt-exportproxy.{{ $host }}
http:
paths:
- backend:
service:
name: virt-exportproxy
port:
number: 443
path: /
pathType: ImplementationSpecific
tls:
- hosts:
virt-exportproxy.{{ $host }}
secretName: virt-exportproxy-{{ .Release.Namespace }}-tls
{{- end }}

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/grafana:1.9.2@sha256:c63978e1ed0304e8518b31ddee56c4e8115541b997d8efbe1c0a74da57140399
ghcr.io/cozystack/cozystack/grafana:1.9.2@sha256:24382d445bf7a39ed988ef4dc7a0d9f084db891fcb5f42fd2e64622710b9457e

View File

@@ -16,8 +16,7 @@ ingress 1.0.0 d7cfa53c
ingress 1.1.0 5bbc488e
ingress 1.2.0 28fca4ef
ingress 1.3.0 fde4bcfa
ingress 1.4.0 fd240701
ingress 1.5.0 HEAD
ingress 1.4.0 HEAD
monitoring 1.0.0 d7cfa53c
monitoring 1.1.0 25221fdc
monitoring 1.2.0 f81be075

View File

@@ -79,7 +79,7 @@ annotations:
Pod IP Pool\n description: |\n CiliumPodIPPool defines an IP pool that can
be used for pooled IPAM (i.e. the multi-pool IPAM mode).\n"
apiVersion: v2
appVersion: 1.17.3
appVersion: 1.17.2
description: eBPF-based Networking, Security, and Observability
home: https://cilium.io/
icon: https://cdn.jsdelivr.net/gh/cilium/cilium@main/Documentation/images/logo-solo.svg
@@ -95,4 +95,4 @@ kubeVersion: '>= 1.21.0-0'
name: cilium
sources:
- https://github.com/cilium/cilium
version: 1.17.3
version: 1.17.2

View File

@@ -1,6 +1,6 @@
# cilium
![Version: 1.17.3](https://img.shields.io/badge/Version-1.17.3-informational?style=flat-square) ![AppVersion: 1.17.3](https://img.shields.io/badge/AppVersion-1.17.3-informational?style=flat-square)
![Version: 1.17.2](https://img.shields.io/badge/Version-1.17.2-informational?style=flat-square) ![AppVersion: 1.17.2](https://img.shields.io/badge/AppVersion-1.17.2-informational?style=flat-square)
Cilium is open source software for providing and transparently securing
network connectivity and loadbalancing between application workloads such as
@@ -85,7 +85,7 @@ contributors across the globe, there is almost always someone available to help.
| authentication.mutual.spire.install.agent.tolerations | list | `[{"effect":"NoSchedule","key":"node.kubernetes.io/not-ready"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"},{"effect":"NoSchedule","key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true"},{"key":"CriticalAddonsOnly","operator":"Exists"}]` | SPIRE agent tolerations configuration By default it follows the same tolerations as the agent itself to allow the Cilium agent on this node to connect to SPIRE. ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
| authentication.mutual.spire.install.enabled | bool | `true` | Enable SPIRE installation. This will only take effect only if authentication.mutual.spire.enabled is true |
| authentication.mutual.spire.install.existingNamespace | bool | `false` | SPIRE namespace already exists. Set to true if Helm should not create, manage, and import the SPIRE namespace. |
| authentication.mutual.spire.install.initImage | object | `{"digest":"sha256:37f7b378a29ceb4c551b1b5582e27747b855bbfaa73fa11914fe0df028dc581f","override":null,"pullPolicy":"IfNotPresent","repository":"docker.io/library/busybox","tag":"1.37.0","useDigest":true}` | init container image of SPIRE agent and server |
| authentication.mutual.spire.install.initImage | object | `{"digest":"sha256:498a000f370d8c37927118ed80afe8adc38d1edcbfc071627d17b25c88efcab0","override":null,"pullPolicy":"IfNotPresent","repository":"docker.io/library/busybox","tag":"1.37.0","useDigest":true}` | init container image of SPIRE agent and server |
| authentication.mutual.spire.install.namespace | string | `"cilium-spire"` | SPIRE namespace to install into |
| authentication.mutual.spire.install.server.affinity | object | `{}` | SPIRE server affinity configuration |
| authentication.mutual.spire.install.server.annotations | object | `{}` | SPIRE server annotations |
@@ -197,7 +197,7 @@ contributors across the globe, there is almost always someone available to help.
| clustermesh.apiserver.extraVolumeMounts | list | `[]` | Additional clustermesh-apiserver volumeMounts. |
| clustermesh.apiserver.extraVolumes | list | `[]` | Additional clustermesh-apiserver volumes. |
| clustermesh.apiserver.healthPort | int | `9880` | TCP port for the clustermesh-apiserver health API. |
| clustermesh.apiserver.image | object | `{"digest":"sha256:98d5feaf67dd9b5d8d219ff5990de10539566eedc5412bcf52df75920896ad42","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/clustermesh-apiserver","tag":"v1.17.3","useDigest":true}` | Clustermesh API server image. |
| clustermesh.apiserver.image | object | `{"digest":"sha256:981250ebdc6e66e190992eaf75cfca169113a8f08d5c3793fe15822176980398","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/clustermesh-apiserver","tag":"v1.17.2","useDigest":true}` | Clustermesh API server image. |
| clustermesh.apiserver.kvstoremesh.enabled | bool | `true` | Enable KVStoreMesh. KVStoreMesh caches the information retrieved from the remote clusters in the local etcd instance. |
| clustermesh.apiserver.kvstoremesh.extraArgs | list | `[]` | Additional KVStoreMesh arguments. |
| clustermesh.apiserver.kvstoremesh.extraEnv | list | `[]` | Additional KVStoreMesh environment variables. |
@@ -377,7 +377,7 @@ contributors across the globe, there is almost always someone available to help.
| envoy.healthPort | int | `9878` | TCP port for the health API. |
| envoy.httpRetryCount | int | `3` | Maximum number of retries for each HTTP request |
| envoy.idleTimeoutDurationSeconds | int | `60` | Set Envoy upstream HTTP idle connection timeout seconds. Does not apply to connections with pending requests. Default 60s |
| envoy.image | object | `{"digest":"sha256:a01cadf7974409b5c5c92ace3d6afa298408468ca24cab1cb413c04f89d3d1f9","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium-envoy","tag":"v1.32.5-1744305768-f9ddca7dcd91f7ca25a505560e655c47d3dec2cf","useDigest":true}` | Envoy container image. |
| envoy.image | object | `{"digest":"sha256:377c78c13d2731f3720f931721ee309159e782d882251709cb0fac3b42c03f4b","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium-envoy","tag":"v1.31.5-1741765102-efed3defcc70ab5b263a0fc44c93d316b846a211","useDigest":true}` | Envoy container image. |
| envoy.initialFetchTimeoutSeconds | int | `30` | Time in seconds after which the initial fetch on an xDS stream is considered timed out |
| envoy.livenessProbe.failureThreshold | int | `10` | failure threshold of liveness probe |
| envoy.livenessProbe.periodSeconds | int | `30` | interval between checks of the liveness probe |
@@ -518,7 +518,7 @@ contributors across the globe, there is almost always someone available to help.
| hubble.relay.extraVolumes | list | `[]` | Additional hubble-relay volumes. |
| hubble.relay.gops.enabled | bool | `true` | Enable gops for hubble-relay |
| hubble.relay.gops.port | int | `9893` | Configure gops listen port for hubble-relay |
| hubble.relay.image | object | `{"digest":"sha256:f8674b5139111ac828a8818da7f2d344b4a5bfbaeb122c5dc9abed3e74000c55","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-relay","tag":"v1.17.3","useDigest":true}` | Hubble-relay container image. |
| hubble.relay.image | object | `{"digest":"sha256:42a8db5c256c516cacb5b8937c321b2373ad7a6b0a1e5a5120d5028433d586cc","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-relay","tag":"v1.17.2","useDigest":true}` | Hubble-relay container image. |
| hubble.relay.listenHost | string | `""` | Host to listen to. Specify an empty string to bind to all the interfaces. |
| hubble.relay.listenPort | string | `"4245"` | Port to listen to. |
| hubble.relay.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
@@ -625,7 +625,7 @@ contributors across the globe, there is almost always someone available to help.
| hubble.ui.updateStrategy | object | `{"rollingUpdate":{"maxUnavailable":1},"type":"RollingUpdate"}` | hubble-ui update strategy. |
| identityAllocationMode | string | `"crd"` | Method to use for identity allocation (`crd`, `kvstore` or `doublewrite-readkvstore` / `doublewrite-readcrd` for migrating between identity backends). |
| identityChangeGracePeriod | string | `"5s"` | Time to wait before using new identity on endpoint identity change. |
| image | object | `{"digest":"sha256:1782794aeac951af139315c10eff34050aa7579c12827ee9ec376bb719b82873","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.17.3","useDigest":true}` | Agent container image. |
| image | object | `{"digest":"sha256:3c4c9932b5d8368619cb922a497ff2ebc8def5f41c18e410bcc84025fcd385b1","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.17.2","useDigest":true}` | Agent container image. |
| imagePullSecrets | list | `[]` | Configure image pull secrets for pulling container images |
| ingressController.default | bool | `false` | Set cilium ingress controller to be the default ingress controller This will let cilium ingress controller route entries without ingress class set |
| ingressController.defaultSecretName | string | `nil` | Default secret name for ingresses without .spec.tls[].secretName set. |
@@ -762,7 +762,7 @@ contributors across the globe, there is almost always someone available to help.
| operator.hostNetwork | bool | `true` | HostNetwork setting |
| operator.identityGCInterval | string | `"15m0s"` | Interval for identity garbage collection. |
| operator.identityHeartbeatTimeout | string | `"30m0s"` | Timeout for identity heartbeats. |
| operator.image | object | `{"alibabacloudDigest":"sha256:e9a9ab227c6e833985bde6537b4d1540b0907f21a84319de4b7d62c5302eed5c","awsDigest":"sha256:40f235111fb2bca209ee65b12f81742596e881a0a3ee4d159776d78e3091ba7f","azureDigest":"sha256:6a3294ec8a2107048254179c3ac5121866f90d20fccf12f1d70960e61f304713","genericDigest":"sha256:8bd38d0e97a955b2d725929d60df09d712fb62b60b930551a29abac2dd92e597","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/operator","suffix":"","tag":"v1.17.3","useDigest":true}` | cilium-operator image. |
| operator.image | object | `{"alibabacloudDigest":"sha256:7cb8c23417f65348bb810fe92fb05b41d926f019d77442f3fa1058d17fea7ffe","awsDigest":"sha256:955096183e22a203bbb198ca66e3266ce4dbc2b63f1a2fbd03f9373dcd97893c","azureDigest":"sha256:455fb88b558b1b8ba09d63302ccce76b4930581be89def027184ab04335c20e0","genericDigest":"sha256:81f2d7198366e8dec2903a3a8361e4c68d47d19c68a0d42f0b7b6e3f0523f249","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/operator","suffix":"","tag":"v1.17.2","useDigest":true}` | cilium-operator image. |
| operator.nodeGCInterval | string | `"5m0s"` | Interval for cilium node garbage collection. |
| operator.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for cilium-operator pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
| operator.podAnnotations | object | `{}` | Annotations to be added to cilium-operator pods |
@@ -812,7 +812,7 @@ contributors across the globe, there is almost always someone available to help.
| preflight.extraEnv | list | `[]` | Additional preflight environment variables. |
| preflight.extraVolumeMounts | list | `[]` | Additional preflight volumeMounts. |
| preflight.extraVolumes | list | `[]` | Additional preflight volumes. |
| preflight.image | object | `{"digest":"sha256:1782794aeac951af139315c10eff34050aa7579c12827ee9ec376bb719b82873","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.17.3","useDigest":true}` | Cilium pre-flight image. |
| preflight.image | object | `{"digest":"sha256:3c4c9932b5d8368619cb922a497ff2ebc8def5f41c18e410bcc84025fcd385b1","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.17.2","useDigest":true}` | Cilium pre-flight image. |
| preflight.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for preflight pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
| preflight.podAnnotations | object | `{}` | Annotations to be added to preflight pods |
| preflight.podDisruptionBudget.enabled | bool | `false` | enable PodDisruptionBudget ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/ |

View File

@@ -191,10 +191,10 @@ image:
  # @schema
  override: ~
  repository: "quay.io/cilium/cilium"
  tag: "v1.17.3"
  tag: "v1.17.2"
  pullPolicy: "IfNotPresent"
  # cilium-digest
  digest: "sha256:1782794aeac951af139315c10eff34050aa7579c12827ee9ec376bb719b82873"
  digest: "sha256:3c4c9932b5d8368619cb922a497ff2ebc8def5f41c18e410bcc84025fcd385b1"
  useDigest: true
# -- Scheduling configurations for cilium pods
scheduling:
@@ -1440,9 +1440,9 @@ hubble:
      # @schema
      override: ~
      repository: "quay.io/cilium/hubble-relay"
      tag: "v1.17.3"
      tag: "v1.17.2"
      # hubble-relay-digest
      digest: "sha256:f8674b5139111ac828a8818da7f2d344b4a5bfbaeb122c5dc9abed3e74000c55"
      digest: "sha256:42a8db5c256c516cacb5b8937c321b2373ad7a6b0a1e5a5120d5028433d586cc"
      useDigest: true
      pullPolicy: "IfNotPresent"
    # -- Specifies the resources for the hubble-relay pods
@@ -2351,9 +2351,9 @@ envoy:
    # @schema
    override: ~
    repository: "quay.io/cilium/cilium-envoy"
    tag: "v1.32.5-1744305768-f9ddca7dcd91f7ca25a505560e655c47d3dec2cf"
    tag: "v1.31.5-1741765102-efed3defcc70ab5b263a0fc44c93d316b846a211"
    pullPolicy: "IfNotPresent"
    digest: "sha256:a01cadf7974409b5c5c92ace3d6afa298408468ca24cab1cb413c04f89d3d1f9"
    digest: "sha256:377c78c13d2731f3720f931721ee309159e782d882251709cb0fac3b42c03f4b"
    useDigest: true
  # -- Additional containers added to the cilium Envoy DaemonSet.
  extraContainers: []
@@ -2708,15 +2708,15 @@ operator:
    # @schema
    override: ~
    repository: "quay.io/cilium/operator"
    tag: "v1.17.3"
    tag: "v1.17.2"
    # operator-generic-digest
    genericDigest: "sha256:8bd38d0e97a955b2d725929d60df09d712fb62b60b930551a29abac2dd92e597"
    genericDigest: "sha256:81f2d7198366e8dec2903a3a8361e4c68d47d19c68a0d42f0b7b6e3f0523f249"
    # operator-azure-digest
    azureDigest: "sha256:6a3294ec8a2107048254179c3ac5121866f90d20fccf12f1d70960e61f304713"
    azureDigest: "sha256:455fb88b558b1b8ba09d63302ccce76b4930581be89def027184ab04335c20e0"
    # operator-aws-digest
    awsDigest: "sha256:40f235111fb2bca209ee65b12f81742596e881a0a3ee4d159776d78e3091ba7f"
    awsDigest: "sha256:955096183e22a203bbb198ca66e3266ce4dbc2b63f1a2fbd03f9373dcd97893c"
    # operator-alibabacloud-digest
    alibabacloudDigest: "sha256:e9a9ab227c6e833985bde6537b4d1540b0907f21a84319de4b7d62c5302eed5c"
    alibabacloudDigest: "sha256:7cb8c23417f65348bb810fe92fb05b41d926f019d77442f3fa1058d17fea7ffe"
    useDigest: true
    pullPolicy: "IfNotPresent"
    suffix: ""
@@ -2991,9 +2991,9 @@ preflight:
    # @schema
    override: ~
    repository: "quay.io/cilium/cilium"
    tag: "v1.17.3"
    tag: "v1.17.2"
    # cilium-digest
    digest: "sha256:1782794aeac951af139315c10eff34050aa7579c12827ee9ec376bb719b82873"
    digest: "sha256:3c4c9932b5d8368619cb922a497ff2ebc8def5f41c18e410bcc84025fcd385b1"
    useDigest: true
    pullPolicy: "IfNotPresent"
  # -- The priority class to use for the preflight pod.
@@ -3140,9 +3140,9 @@ clustermesh:
      # @schema
      override: ~
      repository: "quay.io/cilium/clustermesh-apiserver"
      tag: "v1.17.3"
      tag: "v1.17.2"
      # clustermesh-apiserver-digest
      digest: "sha256:98d5feaf67dd9b5d8d219ff5990de10539566eedc5412bcf52df75920896ad42"
      digest: "sha256:981250ebdc6e66e190992eaf75cfca169113a8f08d5c3793fe15822176980398"
      useDigest: true
      pullPolicy: "IfNotPresent"
    # -- TCP port for the clustermesh-apiserver health API.
@@ -3649,7 +3649,7 @@ authentication:
          override: ~
          repository: "docker.io/library/busybox"
          tag: "1.37.0"
          digest: "sha256:37f7b378a29ceb4c551b1b5582e27747b855bbfaa73fa11914fe0df028dc581f"
          digest: "sha256:498a000f370d8c37927118ed80afe8adc38d1edcbfc071627d17b25c88efcab0"
          useDigest: true
          pullPolicy: "IfNotPresent"
        # SPIRE agent configuration

View File

@@ -1,2 +1,2 @@
ARG VERSION=v1.17.3
ARG VERSION=v1.17.2
FROM quay.io/cilium/cilium:${VERSION}

View File

@@ -1,2 +1,2 @@
cozystackAPI:
  image: ghcr.io/cozystack/cozystack/cozystack-api:v0.30.2@sha256:7ef370dc8aeac0a6b2a50b7d949f070eb21d267ba0a70e7fc7c1564bfe6d4f83
  image: ghcr.io/cozystack/cozystack/cozystack-api:v0.30.4@sha256:299b50de88aa945ab90ee41eeb1a0ac7ba20d858adacd1ef125af7d676ce440f

View File

@@ -1,5 +1,5 @@
cozystackController:
  image: ghcr.io/cozystack/cozystack/cozystack-controller:v0.30.2@sha256:5b87a8ea0dcde1671f44532c1ee6db11a5dd922d1a009078ecf6495ec193e52a
  image: ghcr.io/cozystack/cozystack/cozystack-controller:v0.30.4@sha256:a39395a6ce995d91bee8817c4032b8e073e0387f8b1e0de9d78909cb64189f80
  debug: false
  disableTelemetry: false
cozystackVersion: "v0.30.2"
cozystackVersion: "v0.30.4"

View File

@@ -76,7 +76,7 @@ data:
"kubeappsNamespace": {{ .Release.Namespace | quote }},
"helmGlobalNamespace": {{ include "kubeapps.helmGlobalPackagingNamespace" . | quote }},
"carvelGlobalNamespace": {{ .Values.kubeappsapis.pluginConfig.kappController.packages.v1alpha1.globalPackagingNamespace | quote }},
"appVersion": "v0.30.2",
"appVersion": "v0.30.4",
"authProxyEnabled": {{ .Values.authProxy.enabled }},
"oauthLoginURI": {{ .Values.authProxy.oauthLoginURI | quote }},
"oauthLogoutURI": {{ .Values.authProxy.oauthLogoutURI | quote }},

View File

@@ -1,80 +0,0 @@
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: dashboard-internal-dashboard
  namespace: cozy-dashboard
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: dashboard-internal-dashboard
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
    - containerName: dashboard
      controlledResources: ["cpu", "memory"]
      minAllowed:
        cpu: 50m
        memory: 64Mi
      maxAllowed:
        cpu: 500m
        memory: 512Mi
---
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: dashboard-internal-kubeappsapis
  namespace: cozy-dashboard
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: dashboard-internal-kubeappsapis
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
    - containerName: kubeappsapis
      controlledResources: ["cpu", "memory"]
      minAllowed:
        cpu: 50m
        memory: 100Mi
      maxAllowed:
        cpu: 1000m
        memory: 1Gi
---
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: dashboard-vpa
  namespace: cozy-dashboard
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: dashboard
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
    - containerName: nginx
      controlledResources: ["cpu", "memory"]
      minAllowed:
        cpu: "50m"
        memory: "64Mi"
      maxAllowed:
        cpu: "500m"
        memory: "512Mi"
{{- $dashboardKCconfig := lookup "v1" "ConfigMap" "cozy-dashboard" "kubeapps-auth-config" }}
{{- $dashboardKCValues := dig "data" "values.yaml" "" $dashboardKCconfig }}
{{- if $dashboardKCValues }}
    - containerName: auth-proxy
      controlledResources: ["cpu", "memory"]
      minAllowed:
        cpu: "50m"
        memory: "64Mi"
      maxAllowed:
        cpu: "500m"
        memory: "512Mi"
{{- end }}

View File

@@ -15,19 +15,17 @@ kubeapps:
    flux:
      enabled: true
  dashboard:
    resourcesPreset: "none"
    image:
      registry: ghcr.io/cozystack/cozystack
      repository: dashboard
      tag: v0.30.2
      tag: v0.30.4
      digest: "sha256:a83fe4654f547469cfa469a02bda1273c54bca103a41eb007fdb2e18a7a91e93"
  kubeappsapis:
    resourcesPreset: "none"
    image:
      registry: ghcr.io/cozystack/cozystack
      repository: kubeapps-apis
      tag: v0.30.2
      digest: "sha256:3b5805b56f2fb9fd25f4aa389cdfbbb28a3f2efb02245c52085a45d1dc62bf92"
      tag: v0.30.4
      digest: "sha256:a5ff1d3fdd69c78184554bd07e7675662201d7b27387717b4a70432a81db6301"
    pluginConfig:
      flux:
        packages:

View File

@@ -3,7 +3,7 @@ kamaji:
  deploy: false
  image:
    pullPolicy: IfNotPresent
    tag: v0.30.2@sha256:e04f68e4cc5b023ed39ce2242b32aee51f97235371602239d0c4a9cea97c8d0d
    tag: v0.30.4@sha256:aeb2d728818fd63a0ea2e64a79dcd6a8b01f0b8b0bab03e6e0fe3b9522edc0a3
    repository: ghcr.io/cozystack/cozystack/kamaji
  resources:
    limits:

View File

@@ -216,7 +216,6 @@ data:
  values.yaml: |
    kubeapps:
      authProxy:
        resourcesPreset: "none"
        enabled: true
        provider: "oidc"
        clientID: "kubeapps"

View File

@@ -1,3 +1,3 @@
portSecurity: true
routes: ""
image: ghcr.io/cozystack/cozystack/kubeovn-webhook:v0.30.2@sha256:fa14fa7a0ffa628eb079ddcf6ce41d75b43de92e50f489422f8fb15c4dab2dbf
image: ghcr.io/cozystack/cozystack/kubeovn-webhook:v0.30.4@sha256:ec3e3b63ebefe5d11ad2b4a576c7b4cd0bbe6ab27ce2732e25e06c40692e8459

View File

@@ -22,4 +22,4 @@ global:
  images:
    kubeovn:
      repository: kubeovn
      tag: v1.13.8@sha256:071a93df2dce484b347bbace75934ca9e1743668bfe6f1161ba307dee204767d
      tag: v1.13.8@sha256:385329464045cdf5e01364e9f2293edc71065a0910576a8e26ea9ac7097aae71

View File

@@ -17,7 +17,6 @@ spec:
      - AutoResourceLimitsGate
      - CPUManager
      - GPU
      - VMExport
    evictionStrategy: LiveMigrate
  customizeComponents: {}
  imagePullPolicy: IfNotPresent

View File

@@ -3,8 +3,8 @@ name: piraeus
description: |
  The Piraeus Operator manages software defined storage clusters using LINSTOR in Kubernetes.
type: application
version: 2.8.1
appVersion: "v2.8.1"
version: 2.7.1
appVersion: "v2.7.1"
maintainers:
  - name: Piraeus Datastore
    url: https://piraeus.io

View File

@@ -17,19 +17,20 @@ data:
    # quay.io/piraeusdatastore/piraeus-server:v1.24.2
    components:
      linstor-controller:
        tag: v1.31.0
        tag: v1.29.2
        image: piraeus-server
      linstor-satellite:
        tag: v1.31.0
        # Pin with digest to ensure we pull the version with downgraded thin-send-recv
        tag: v1.29.2
        image: piraeus-server
      linstor-csi:
        tag: v1.7.1
        tag: v1.6.4
        image: piraeus-csi
      drbd-reactor:
        tag: v1.8.0
        tag: v1.6.0
        image: drbd-reactor
      ha-controller:
        tag: v1.3.0
        tag: v1.2.3
        image: piraeus-ha-controller
      drbd-shutdown-guard:
        tag: v1.0.0
@@ -38,7 +39,7 @@ data:
        tag: v0.11
        image: ktls-utils
      drbd-module-loader:
        tag: v9.2.13
        tag: v9.2.12
        # The special "match" attribute is used to select an image based on the node's reported OS.
        # The operator will first check the k8s node's ".status.nodeInfo.osImage" field, and compare it against the list
        # here. If one matches, that specific image name will be used instead of the fallback image.
@@ -89,25 +90,25 @@ data:
    base: registry.k8s.io/sig-storage
    components:
      csi-attacher:
        tag: v4.8.1
        tag: v4.7.0
        image: csi-attacher
      csi-livenessprobe:
        tag: v2.15.0
        tag: v2.14.0
        image: livenessprobe
      csi-provisioner:
        tag: v5.2.0
        tag: v5.1.0
        image: csi-provisioner
      csi-snapshotter:
        tag: v8.2.1
        tag: v8.1.0
        image: csi-snapshotter
      csi-resizer:
        tag: v1.13.2
        tag: v1.12.0
        image: csi-resizer
      csi-external-health-monitor-controller:
        tag: v0.14.0
        tag: v0.13.0
        image: csi-external-health-monitor-controller
      csi-node-driver-registrar:
        tag: v2.13.0
        tag: v2.12.0
        image: csi-node-driver-registrar
  {{- range $idx, $value := .Values.imageConfigOverride }}
  {{ add $idx 1 }}_helm_override.yaml: |

View File

@@ -21,12 +21,6 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
const (
CozySystemConfigurationHashConfigMapName = "cozy-system-configuration-hash"
CozyTenantConfigurationHashConfigMapName = "cozy-tenant-configuration-hash"
CozyTenantConfigurationHashKey = "cozyTenantConfigurationHash"
)
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// ApplicationList is a list of Application objects.

View File

@@ -988,18 +988,6 @@ func (r *REST) convertApplicationToHelmRelease(app *appsv1alpha1.Application) (*
		},
	}

	valuesFromConfigMap := appsv1alpha1.CozyTenantConfigurationHashConfigMapName
	if helmRelease.Name == "tenant-root" && helmRelease.Namespace == "tenant-root" {
		valuesFromConfigMap = appsv1alpha1.CozySystemConfigurationHashConfigMapName
	}
	helmRelease.Spec.ValuesFrom = []helmv2.ValuesReference{{
		Kind:       "ConfigMap",
		Name:       valuesFromConfigMap,
		ValuesKey:  appsv1alpha1.CozyTenantConfigurationHashKey,
		TargetPath: appsv1alpha1.CozyTenantConfigurationHashKey,
		Optional:   true,
	}}

	return helmRelease, nil
}

View File

@@ -3,7 +3,7 @@ set -o pipefail
set -e
BUNDLE=$(set -x; kubectl get configmap -n cozy-system cozystack -o 'go-template={{index .data "bundle-name"}}')
VERSION=$(find scripts/migrations -mindepth 1 -maxdepth 1 -type f | sort -V | awk -F/ 'END {print $NF+1}')
VERSION=10
run_migrations() {
  if ! kubectl get configmap -n cozy-system cozystack-version; then
@@ -70,7 +70,7 @@ make -C packages/core/platform namespaces-apply
ensure_fluxcd
# Install platform chart
make -C packages/core/platform reconcile
make -C packages/core/platform apply
# Install basic charts
if ! flux_is_ok; then
@@ -93,5 +93,5 @@ done
trap 'exit' INT TERM
while true; do
  sleep 60 & wait
  make -C packages/core/platform reconcile
  make -C packages/core/platform apply
done

View File

@@ -1,15 +0,0 @@
#!/bin/sh
# Migration 10 --> 11

# Force reconcile hr keycloak-configure
if kubectl get helmrelease keycloak-configure -n cozy-keycloak; then
  kubectl delete po -l app=source-controller -n cozy-fluxcd
  timestamp=$(date --rfc-3339=ns)
  kubectl annotate helmrelease keycloak-configure -n cozy-keycloak \
    reconcile.fluxcd.io/forceAt="$timestamp" \
    reconcile.fluxcd.io/requestedAt="$timestamp" \
    --overwrite
fi

# Write version to cozystack-version config
kubectl create configmap -n cozy-system cozystack-version --from-literal=version=11 --dry-run=client -o yaml | kubectl apply -f-

View File

@@ -1,21 +0,0 @@
#!/bin/sh
# Migration 11 --> 12

# Recreate daemonset kube-rbac-proxy
if kubectl get daemonset kube-rbac-proxy -n cozy-monitoring; then
  kubectl delete daemonset kube-rbac-proxy --cascade=orphan -n cozy-monitoring
fi

if kubectl get helmrelease monitoring-agents -n cozy-monitoring; then
  timestamp=$(date --rfc-3339=ns)
  kubectl annotate helmrelease monitoring-agents -n cozy-monitoring \
    reconcile.fluxcd.io/forceAt="$timestamp" \
    reconcile.fluxcd.io/requestedAt="$timestamp" \
    --overwrite
fi

kubectl delete pods -l app.kubernetes.io/component=kube-rbac-proxy -n cozy-monitoring

# Write version to cozystack-version config
kubectl create configmap -n cozy-system cozystack-version --from-literal=version=12 --dry-run=client -o yaml | kubectl apply -f-