Compare commits

...

24 Commits

Author SHA1 Message Date
Timofei Larkin
7c8823a835 [platform] Cozy values secret replicator
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2026-01-06 18:10:47 +03:00
Timofei Larkin
43df3a1b70 [testing] Add aliases and autocomplete (#1803)
## What this PR does

Adds a `k=kubectl` alias and bash completion for kubectl to the
e2e-testing sandbox container so that maintainers have an easier time
exec'ing into the CI container when something needs to be debugged.

### Release note

```release-note
[testing] Add k=kubectl alias and enable kubectl completion in the CI
container.
```

## Summary by CodeRabbit

* **Chores**
* Enhanced the e2e sandbox image to enable shell bash-completion and
kubectl command completion.
* Added an alias (k) and completion wiring for kubectl to improve
interactive command use.
* These changes augment the test environment shell during image build to
provide a smoother developer/testing experience.

2026-01-05 22:48:01 +04:00
Andrei Kvapil
07b406e9bc [kubernetes] Fix endpoints for cilium-gateway (#1729)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

## What this PR does

Integrates an upstream fix:
- https://github.com/kubevirt/cloud-provider-kubevirt/pull/379

### Release note

```release-note
[kubernetes] Fix endpoints for cilium-gateway
```

## Summary by CodeRabbit

* **Bug Fixes**
* Improved handling of incomplete endpoint data by introducing fallback
detection mechanisms.
* Enhanced service discovery to gather endpoints from all available
resources when standard detection fails.
* Updated logging to provide better visibility into fallback operations
and current resource status.

2026-01-05 18:11:41 +01:00
Andrei Kvapil
5f36396ccc [platform] Replace Helm lookup with valuesFrom mechanism (#1787)
## What this PR does

Replaces Helm lookup functions with the FluxCD valuesFrom mechanism for
passing configuration to HelmReleases. This provides cleaner configuration
propagation and eliminates the need for force-reconcile controllers.

### Changes:

**Platform/Tenant charts:**
- Add Secret `cozystack-values` creation in platform chart (for
tenant-root and system namespaces)
- Add Secret `cozystack-values` creation in tenant chart (for child
namespaces)

**cozystack-api:**
- Add `valuesFrom` references to HelmRelease when creating applications
- Filter keys starting with `_` when returning Application specs
- Validate that user values don't contain `_` prefixed keys

**cozystack-controller:**
- Add validation that HelmRelease contains correct valuesFrom
configuration
- Remove `CozystackConfigReconciler` (no longer needed)
- Remove `TenantHelmReconciler` (no longer needed)

**Helm charts (40+ files):**
- Add helper templates in cozy-lib for `_cluster`/`_namespace` access
- Replace ConfigMap lookups with `.Values._cluster.*`
- Replace Namespace annotation lookups with `.Values._namespace.*`

### Architecture:

```
Secret cozystack-values (in each namespace)
├── _cluster: YAML with data from ConfigMaps (cozystack, cozystack-branding, cozystack-scheduling)
└── _namespace: YAML with namespace service references (etcd, host, ingress, monitoring, seaweedfs)

HelmRelease
└── spec.valuesFrom:
    ├── Secret/cozystack-values → _namespace → .Values._namespace
    └── Secret/cozystack-values → _cluster → .Values._cluster
```
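
For illustration, a minimal sketch of the two objects involved, assuming the FluxCD `helm.toolkit.fluxcd.io/v2` HelmRelease API; the release, chart, and source names are hypothetical, and the secret layout follows the single `values.yaml` key described in the underlying commit:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cozystack-values
  namespace: tenant-root
stringData:
  values.yaml: |
    _cluster: {}    # data aggregated from the cozystack* ConfigMaps
    _namespace: {}  # namespace service references (etcd, host, ingress, ...)
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: example-app           # hypothetical release
  namespace: tenant-root
spec:
  interval: 5m
  chart:
    spec:
      chart: example-app      # hypothetical chart
      sourceRef:
        kind: HelmRepository
        name: cozystack-apps  # hypothetical source
  valuesFrom:
    - kind: Secret
      name: cozystack-values
      valuesKey: values.yaml  # the default, so it can be omitted
```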

### Release note

```release-note
[platform] Replace Helm lookup functions with FluxCD valuesFrom mechanism for configuration propagation
```

## Summary by CodeRabbit

* **New Features**
* Helm releases and namespaces now source centralized cluster/namespace
configuration via a new secret (cozystack-values), and many templates
read values from chart-provided _cluster/_namespace entries.

* **Bug Fixes**
* API now rejects application specs containing reserved keys prefixed
with "_" to prevent invalid configurations.

* **Refactor**
* Two background reconciler controllers were removed from startup,
simplifying controller initialization.

2026-01-05 17:53:33 +01:00
Andrei Kvapil
811fde9993 fix(ci): ensure correct latest release after backport publishing (#1800)
## What this PR does

Fixes an issue where backport releases incorrectly became marked as
"Latest" despite passing `make_latest: 'false'` to the GitHub API.

**Root cause:** The `getLatestRelease()` API returns the release with
the "Latest" flag, not the highest semver version. Combined with race
conditions during parallel release publishing and GitHub API potentially
ignoring `make_latest: 'false'`, backport releases were incorrectly
marked as latest.

**Solution:**
- Replace `getLatestRelease()` with semver-based max version detection
across all published releases
- After publishing a backport release, explicitly restore the latest
flag on the highest semver release
- Remove unused dead code from `tags.yaml` workflow

### Release note

```release-note
[ci] Fix latest release detection to use semver comparison instead of GitHub's "Latest" flag
```
2026-01-05 16:29:51 +01:00
Aleksei Sviridkin
695fc05dec [kubevirt-operator] Revert incorrect case change in VM alerts (#1804)
## What this PR does

Reverts PR #1770 which incorrectly changed status/phase checks from
lowercase to uppercase.

The actual KubeVirt metrics use **lowercase**:
- `kubevirt_vm_info` uses `status="running"` (not `"Running"`)
- `kubevirt_vmi_info` uses `phase="running"` (not `"Running"`)

### Verification

Queried virt-controller metrics directly in the instories cluster:

```
kubevirt_vm_info{...,status="running",status_group="running",...} 1
kubevirt_vmi_info{...,phase="running",...} 1
```

Note: `kubectl get vm` shows `STATUS: Running` with capital R, but this
is display formatting — the actual metric labels use lowercase.
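
As a sketch of what the corrected rules select (the rule group, alert name, and duration are hypothetical; only the lowercase label values come from this PR):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: kubevirt-vm-alerts              # hypothetical name
spec:
  groups:
    - name: kubevirt
      rules:
        - alert: VirtualMachineNotRunning   # hypothetical alert
          # lowercase "running" matches the actual metric labels
          expr: kubevirt_vm_info{status!="running"} == 1
          for: 10m
```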

### Release note

```release-note
[kubevirt-operator] Fix VM alert rules to use correct lowercase status values
```
2026-01-05 18:19:36 +03:00
Andrei Kvapil
2e61810547 refactor: replace Helm lookup with valuesFrom mechanism
Replace Helm lookup functions with FluxCD valuesFrom mechanism for
reading cluster and namespace configuration.

Changes:
- Create Secret cozystack-values in each namespace with values.yaml key
  containing _cluster and _namespace configuration as nested YAML
- Configure HelmReleases to read from this Secret via valuesFrom
  (valuesKey defaults to values.yaml, so it can be omitted)
- Update cozy-lib helpers to access config via .Values._cluster
- Add default values for required _cluster keys to ensure all fields exist
- Update Go code (cozystack-api and helm reconciler) to use new format

This eliminates the need for Helm lookup functions while maintaining
the same configuration interface for charts.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-01-05 16:10:55 +01:00
Aleksei Sviridkin
36836fd84e [kubevirt-operator] Revert incorrect case change in VM alerts
Revert PR #1770 which incorrectly changed status check from lowercase
to uppercase. The actual metrics use lowercase:
- kubevirt_vm_info uses status="running" (not "Running")
- kubevirt_vmi_info uses phase="running" (not "Running")

Verified by querying virt-controller metrics in the instories cluster.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>
2026-01-05 17:35:56 +03:00
Timofei Larkin
bf1928c96f [testing] Add aliases and autocomplete
## What this PR does

Adds a `k=kubectl` alias and bash completion for kubectl to the
e2e-testing sandbox container so that maintainers have an easier time
exec'ing into the CI container when something needs to be debugged.

### Release note

```release-note
[testing] Add k=kubectl alias and enable kubectl completion in the CI
container.
```

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2026-01-05 16:29:39 +03:00
Timofei Larkin
069a3ca9b0 Velero backup controller impl (#1762)
## What this PR does

Implements a controller for BackupJobs that reference a Velero strategy. It creates a `Backup.velero.io` object according to the template in the referenced `Velero.strategy.backups.cozystack.io` resource.
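
A minimal sketch of such a strategy object, with `spec.template.spec` holding a `velerov1.BackupSpec` as introduced by this PR's API change; the scope and backup options shown are illustrative:

```yaml
apiVersion: strategy.backups.cozystack.io/v1alpha1
kind: Velero
metadata:
  name: velero-strategy-default
spec:
  template:
    spec:                  # a velerov1.BackupSpec
      ttl: 720h            # hypothetical Velero backup options
      snapshotMoveData: true
```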

### Release note

```release-note
[backups] Implement a backup strategy controller for Velero strategies.
```

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
Co-authored-by: Andrey Kolkov <androndo@gmail.com>
Co-authored-by: Timofei Larkin <lllamnyp@gmail.com>
2026-01-05 16:29:38 +04:00
Timofei Larkin
f4228ffc20 [backups] Add templating of velero backups
## What this PR does

This patch narrows the scope of the Velero backup strategy controller to
simply template Velero Backups according to the application being backed
up and the template in the strategy. Creating storage locations is now
out of scope.

### Release note

```release-note
[backups] Implement templating for Velero backups and remove creation of
backup storage locations and volume snapshot locations.
```

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2026-01-04 13:31:15 +03:00
Andrey Kolkov
bfafcaa3ab [backups] Implement Velero strategy controller
## What this PR does

This patch implements the Reconcile function for BackupJobs with a
Velero strategy ref.

### Release note

```release-note
[backups] Implement the Velero backup strategy controller.
```

Signed-off-by: Andrey Kolkov <androndo@gmail.com>
2026-01-04 12:55:24 +03:00
Andrei Kvapil
f59665208c [kubernetes] Fix endpoints for cilium-gateway
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-01-04 10:35:16 +01:00
Andrei Kvapil
66a756b606 fix(ci): ensure correct latest release after backport publishing
Replace unreliable getLatestRelease() API with semver-based max version
detection. After publishing a backport release, explicitly restore the
latest flag on the highest semver release to handle cases where GitHub
API ignores make_latest: 'false'.

Also remove dead code (unused steps) from tags.yaml workflow.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-01-04 10:31:31 +01:00
Andrei Kvapil
a8688744e9 [platform] refactor: split cozystack-resource-definitions into separate packages (#1778)
## What this PR does

This PR splits the monolithic `cozystack-resource-definitions` package
into 25 individual resource definition packages (`*-rd`) for better
modularity and independent versioning.

**Changes:**
- Created 25 separate `*-rd` packages (bootbox-rd, bucket-rd,
clickhouse-rd, etcd-rd, ferretdb-rd, foundationdb-rd, http-cache-rd,
info-rd, ingress-rd, kafka-rd, kubernetes-rd, monitoring-rd, mysql-rd,
nats-rd, postgres-rd, rabbitmq-rd, redis-rd, seaweedfs-rd,
tcp-balancer-rd, tenant-rd, virtual-machine-rd, virtualprivatecloud-rd,
vm-disk-rd, vm-instance-rd, vpn-rd)
- Removed `packages/system/cozystack-resource-definitions`
- Updated platform bundles (paas-hosted, paas-full, distro-full) to
reference individual -rd packages
- Updated `hack/update-crd.sh` to use package-specific directories

Each `*-rd` package contains (see the `Chart.yaml` sketch after this list):
- `Chart.yaml` - package metadata
- `values.yaml` - default values
- `Makefile` - build instructions
- `cozyrds/<name>.yaml` - CRD definition
- `templates/cozyrd.yaml` - Helm template
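
For example, a minimal `Chart.yaml` for one of these packages might look like the following; the name, description, and version are illustrative:

```yaml
apiVersion: v2
name: postgres-rd
description: Cozystack resource definition for Postgres
type: application
version: 1.0.0   # each *-rd package can now bump this independently
```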

**Benefits:**
- **Modularity**: Each resource definition is now a standalone package
- **Independent versioning**: Resources can be versioned independently
- **Maintainability**: Easier to update individual resources
- **Build efficiency**: Parallel building of resource packages

### Release note

```release-note
[platform] Split cozystack-resource-definitions into 25 separate *-rd packages for better modularity and independent versioning. Each resource definition is now a standalone package.

```

## Summary by CodeRabbit

* **Refactor**
  * Split the monolithic resource-definitions into many independent resource-definition packages (e.g., bootbox-rd, bucket-rd, clickhouse-rd, etc.) for modular deployment and finer-grained management
  * Updated deployment bundles to reference the new per-resource releases with uniform cozy-system namespace and CRD dependency

* **Chores**
  * Added packaging/Helm stubs (Chart.yaml, Makefile, values, templates) for each new resource-definition
* **Bug Fixes**
  * Made CRD path resolution dynamic (NAME validated before assignment)

2026-01-04 09:06:50 +01:00
Andrei Kvapil
7a964eb7de [kubernetes] Add lb tests for tenant k8s (#1783)

## What this PR does


### Release note

```release-note
[kubernetes] Add lb tests for tenant k8s
```

## Summary by CodeRabbit

* **Tests**
  * Increased readiness and port-forward timeouts to improve stability.
* Added full end-to-end provisioning and validation: automated namespace
and backend deployment, load balancer provisioning, health checks with
retries, reachability validation, and cleanup.
* Provisioning sequence now runs earlier and is duplicated within the
test flow, altering execution order and adding extra validation/cleanup
steps.

2026-01-03 08:30:31 +01:00
Andrei Kvapil
3a5977ff60 fix(e2e): correct Service selector to match Deployment labels
The Service selector was using app: "${test_name}-backend" but the
Deployment pod template has app: backend. Fixed selector to match
the actual pod labels so endpoints are created correctly.
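
A sketch of the corrected wiring, using the test's naming; the essential point is that the Service selector must equal the pod-template labels rather than the Deployment name:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${test_name}-backend
spec:
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend            # the label endpoints are matched on
    spec:
      containers:
        - name: backend
          image: nginx          # hypothetical backend image
---
apiVersion: v1
kind: Service
metadata:
  name: ${test_name}-backend
spec:
  selector:
    app: backend                # was app: "${test_name}-backend"
  ports:
    - port: 80
```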

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-01-02 21:28:18 +01:00
Andrei Kvapil
bbeaaccd0c feat(ci): add /retest command to rerun tests from Prepare environment
Allows maintainers to trigger test rerun by commenting /retest on a PR.
The workflow finds the latest run for the PR and reruns starting from
the "Prepare environment" job, skipping the build step.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-01-02 21:23:27 +01:00
Andrei Kvapil
08e5a25ce7 fix(ci): remove GITHUB_TOKEN extraheader to trigger workflows
actions/checkout configures http.extraheader with GITHUB_TOKEN which
takes priority over URL credentials. This caused tag pushes to
authenticate as github-actions instead of cozystack-bot, preventing
the Versioned Tag workflow from being triggered.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-01-02 21:10:12 +01:00
Andrei Kvapil
dd0bbd375f fix(e2e): run LB check curl from testing environment
Run curl directly from the testing container instead of creating
a separate pod with kubectl run. This avoids PodSecurity policy
violations and simplifies the test execution.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2026-01-02 21:02:06 +01:00
Andrei Kvapil
d26d99c925 [system] Add resource requests and limits to etcd-defrag (#1785)
## What this PR does
Added resource requests and limits for the etcd-defrag container.
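
The change has this shape; the figures below are placeholders, not the values that were merged:

```yaml
containers:
  - name: etcd-defrag
    resources:
      requests:
        cpu: 10m      # placeholder
        memory: 64Mi  # placeholder
      limits:
        cpu: 100m
        memory: 128Mi
```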

### Release note

```release-note
[system] Add resource requests and limits to etcd-defrag
```

## Summary by CodeRabbit

* **Chores**
* Enhanced resource configurations for the etcd defragmentation process
to ensure optimal performance and system stability.

2026-01-02 19:12:30 +01:00
Matthieu ROBIN
675eaa6178 Add resource requests and limits to etcd-defrag
Added resource requests and limits for the etcd-defrag container.

Signed-off-by: Matthieu ROBIN <info@matthieurobin.com>
2026-01-02 15:53:54 +01:00
IvanHunters
5638a7eae9 add lb tests for tenant k8s
Signed-off-by: IvanHunters <xorokhotnikov@gmail.com>
2026-01-02 17:41:42 +03:00
Andrei Kvapil
1fc48da514 refactor: split cozystack-resource-definitions into separate packages
Split the monolithic cozystack-resource-definitions package into 25
individual resource definition packages (*-rd) for better modularity
and independent versioning.

Changes:
- Create 25 separate *-rd packages (bootbox-rd, bucket-rd, clickhouse-rd,
  etcd-rd, ferretdb-rd, foundationdb-rd, http-cache-rd, info-rd,
  ingress-rd, kafka-rd, kubernetes-rd, monitoring-rd, mysql-rd, nats-rd,
  postgres-rd, rabbitmq-rd, redis-rd, seaweedfs-rd, tcp-balancer-rd,
  tenant-rd, virtual-machine-rd, virtualprivatecloud-rd, vm-disk-rd,
  vm-instance-rd, vpn-rd)
- Remove packages/system/cozystack-resource-definitions
- Update platform bundles (paas-hosted, paas-full, distro-full) to
  reference individual -rd packages instead of the monolithic package
- Update hack/update-crd.sh to use package-specific directories

Each *-rd package contains:
- Chart.yaml with package metadata
- values.yaml with default values
- Makefile for build instructions
- cozyrds/<name>.yaml with CRD definition
- templates/cozyrd.yaml with Helm template

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-12-30 14:00:31 +01:00
219 changed files with 3642 additions and 826 deletions

View File

@@ -33,6 +33,7 @@ jobs:
git config user.name "cozystack-bot"
git config user.email "217169706+cozystack-bot@users.noreply.github.com"
git remote set-url origin https://cozystack-bot:${GH_PAT}@github.com/${GITHUB_REPOSITORY}
git config --unset-all http.https://github.com/.extraheader || true
- name: Process release branches
uses: actions/github-script@v7
@@ -47,6 +48,8 @@ jobs:
execSync('git config user.name "cozystack-bot"', { encoding: 'utf8' });
execSync('git config user.email "217169706+cozystack-bot@users.noreply.github.com"', { encoding: 'utf8' });
execSync(`git remote set-url origin https://cozystack-bot:${process.env.GH_PAT}@github.com/${process.env.GITHUB_REPOSITORY}`, { encoding: 'utf8' });
// Remove GITHUB_TOKEN extraheader to ensure PAT is used (needed to trigger other workflows)
execSync('git config --unset-all http.https://github.com/.extraheader || true', { encoding: 'utf8' });
// Get all release-X.Y branches
const branches = execSync('git branch -r | grep -E "origin/release-[0-9]+\\.[0-9]+$" | sed "s|origin/||" | tr -d " "', { encoding: 'utf8' })

View File

@@ -110,67 +110,95 @@ jobs:
}
}
# Get the latest published release
- name: Get the latest published release
id: latest_release
uses: actions/github-script@v7
with:
script: |
try {
const rel = await github.rest.repos.getLatestRelease({
owner: context.repo.owner,
repo: context.repo.repo
});
core.setOutput('tag', rel.data.tag_name);
} catch (_) {
core.setOutput('tag', '');
}
# Compare current tag vs latest using semver-utils
- name: Semver compare
id: semver
uses: madhead/semver-utils@v4.3.0
with:
version: ${{ steps.get_tag.outputs.tag }}
compare-to: ${{ steps.latest_release.outputs.tag }}
# Derive flags: prerelease? make_latest?
- name: Calculate publish flags
id: flags
uses: actions/github-script@v7
with:
script: |
const tag = '${{ steps.get_tag.outputs.tag }}'; // v0.31.5-rc.1
const m = tag.match(/^v(\d+\.\d+\.\d+)(-(?:alpha|beta|rc)\.\d+)?$/);
if (!m) {
core.setFailed(`❌ tag '${tag}' must match 'vX.Y.Z' or 'vX.Y.Z-(alpha|beta|rc).N'`);
return;
}
const version = m[1] + (m[2] ?? ''); // 0.31.5-rc.1
const isRc = Boolean(m[2]);
core.setOutput('is_rc', isRc);
const outdated = '${{ steps.semver.outputs.comparison-result }}' === '<';
core.setOutput('make_latest', isRc || outdated ? 'false' : 'legacy');
# Publish draft release with correct flags
# Publish draft release and ensure correct latest flag
- name: Publish draft release
uses: actions/github-script@v7
with:
script: |
const tag = '${{ steps.get_tag.outputs.tag }}';
const m = tag.match(/^v(\d+\.\d+\.\d+)(-(?:alpha|beta|rc)\.\d+)?$/);
if (!m) {
core.setFailed(`❌ tag '${tag}' must match 'vX.Y.Z' or 'vX.Y.Z-(alpha|beta|rc).N'`);
return;
}
const isRc = Boolean(m[2]);
// Parse semver string to comparable numbers
function parseSemver(v) {
const match = v.replace(/^v/, '').match(/^(\d+)\.(\d+)\.(\d+)/);
if (!match) return null;
return {
major: parseInt(match[1]),
minor: parseInt(match[2]),
patch: parseInt(match[3])
};
}
// Compare two semver objects
function compareSemver(a, b) {
if (a.major !== b.major) return a.major - b.major;
if (a.minor !== b.minor) return a.minor - b.minor;
return a.patch - b.patch;
}
const currentSemver = parseSemver(tag);
// Get all releases
const releases = await github.rest.repos.listReleases({
owner: context.repo.owner,
repo: context.repo.repo
});
const draft = releases.data.find(r => r.tag_name === tag && r.draft);
if (!draft) throw new Error(`Draft release for ${tag} not found`);
await github.rest.repos.updateRelease({
owner: context.repo.owner,
repo: context.repo.repo,
release_id: draft.id,
draft: false,
prerelease: ${{ steps.flags.outputs.is_rc }},
make_latest: '${{ steps.flags.outputs.make_latest }}'
repo: context.repo.repo,
per_page: 100
});
console.log(`🚀 Published release for ${tag}`);
// Find draft release to publish
const draft = releases.data.find(r => r.tag_name === tag && r.draft);
if (!draft) throw new Error(`Draft release for ${tag} not found`);
// Find max semver among published releases (excluding current draft)
const publishedReleases = releases.data
.filter(r => !r.draft && !r.prerelease)
.filter(r => /^v\d+\.\d+\.\d+$/.test(r.tag_name))
.map(r => ({ id: r.id, tag: r.tag_name, semver: parseSemver(r.tag_name) }))
.filter(r => r.semver !== null);
let maxRelease = null;
for (const rel of publishedReleases) {
if (!maxRelease || compareSemver(rel.semver, maxRelease.semver) > 0) {
maxRelease = rel;
}
}
// Determine if this release should be latest
const isOutdated = maxRelease && compareSemver(currentSemver, maxRelease.semver) < 0;
const makeLatest = (isRc || isOutdated) ? 'false' : 'true';
if (isRc) {
console.log(`🏷️ ${tag} is a prerelease, make_latest: false`);
} else if (isOutdated) {
console.log(`🏷️ ${tag} < ${maxRelease.tag} (max semver), make_latest: false`);
} else {
console.log(`🏷️ ${tag} is the highest version, make_latest: true`);
}
// Publish the release
await github.rest.repos.updateRelease({
owner: context.repo.owner,
repo: context.repo.repo,
release_id: draft.id,
draft: false,
prerelease: isRc,
make_latest: makeLatest
});
console.log(`🚀 Published release ${tag}`);
// If this is a backport/outdated release, ensure the correct release is marked as latest
if (isOutdated && maxRelease) {
console.log(`🔧 Ensuring ${maxRelease.tag} remains the latest release...`);
await github.rest.repos.updateRelease({
owner: context.repo.owner,
repo: context.repo.repo,
release_id: maxRelease.id,
make_latest: 'true'
});
console.log(`✅ Restored ${maxRelease.tag} as latest release`);
}

.github/workflows/retest.yaml (new file)
View File

@@ -0,0 +1,78 @@
name: Retest
on:
issue_comment:
types: [created]
jobs:
retest:
name: Retest PR
runs-on: ubuntu-latest
if: |
github.event.issue.pull_request &&
contains(github.event.comment.body, '/retest')
permissions:
actions: write
pull-requests: read
steps:
- name: Rerun from Prepare environment
uses: actions/github-script@v7
with:
script: |
const prNumber = context.issue.number;
// Get the PR to find the head SHA
const pr = await github.rest.pulls.get({
owner: context.repo.owner,
repo: context.repo.repo,
pull_number: prNumber
});
// Find the latest workflow run for this PR
const runs = await github.rest.actions.listWorkflowRuns({
owner: context.repo.owner,
repo: context.repo.repo,
workflow_id: 'pull-requests.yaml',
head_sha: pr.data.head.sha
});
if (runs.data.workflow_runs.length === 0) {
core.setFailed('No workflow runs found for this PR');
return;
}
const latestRun = runs.data.workflow_runs[0];
console.log(`Found workflow run: ${latestRun.id} (${latestRun.status})`);
// Check if workflow is waiting for approval (fork PRs)
if (latestRun.conclusion === 'action_required') {
core.setFailed('Workflow is waiting for approval. A maintainer must approve the workflow first.');
return;
}
// Get jobs for this run
const jobs = await github.rest.actions.listJobsForWorkflowRun({
owner: context.repo.owner,
repo: context.repo.repo,
run_id: latestRun.id
});
// Find "Prepare environment" job
const prepareJob = jobs.data.jobs.find(j => j.name === 'Prepare environment');
if (!prepareJob) {
core.setFailed('Could not find "Prepare environment" job');
return;
}
console.log(`Found job: ${prepareJob.name} (id: ${prepareJob.id}, status: ${prepareJob.status})`);
// Rerun the job
await github.rest.actions.reRunJobForWorkflowRun({
owner: context.repo.owner,
repo: context.repo.repo,
job_id: prepareJob.id
});
console.log(`✅ Triggered rerun of job "${prepareJob.name}" (${prepareJob.id})`);

View File

@@ -123,32 +123,6 @@ jobs:
git commit -m "Prepare release ${GITHUB_REF#refs/tags/}" -s || echo "No changes to commit"
git push origin HEAD || true
# Get `latest_version` from latest published release
- name: Get latest published release
if: steps.check_release.outputs.skip == 'false'
id: latest_release
uses: actions/github-script@v7
with:
script: |
try {
const rel = await github.rest.repos.getLatestRelease({
owner: context.repo.owner,
repo: context.repo.repo
});
core.setOutput('tag', rel.data.tag_name);
} catch (_) {
core.setOutput('tag', '');
}
# Compare tag (A) with latest (B)
- name: Semver compare
if: steps.check_release.outputs.skip == 'false'
id: semver
uses: madhead/semver-utils@v4.3.0
with:
version: ${{ steps.tag.outputs.tag }} # A
compare-to: ${{ steps.latest_release.outputs.tag }} # B
# Create or reuse draft release
- name: Create / reuse draft release
if: steps.check_release.outputs.skip == 'false'

View File

@@ -6,6 +6,7 @@
package v1alpha1
import (
velerov1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
)
@@ -47,7 +48,15 @@ type VeleroList struct {
}
// VeleroSpec specifies the desired strategy for backing up with Velero.
type VeleroSpec struct{}
type VeleroSpec struct {
Template VeleroTemplate `json:"template"`
}
// VeleroTemplate describes the data a backup.velero.io should have when
// templated from a Velero backup strategy.
type VeleroTemplate struct {
Spec velerov1.BackupSpec `json:"spec"`
}
type VeleroStatus struct {
Conditions []metav1.Condition `json:"conditions,omitempty"`

View File

@@ -127,7 +127,7 @@ func (in *Velero) DeepCopyInto(out *Velero) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
out.Spec = in.Spec
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
@@ -184,6 +184,7 @@ func (in *VeleroList) DeepCopyObject() runtime.Object {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *VeleroSpec) DeepCopyInto(out *VeleroSpec) {
*out = *in
in.Template.DeepCopyInto(&out.Template)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new VeleroSpec.
@@ -217,3 +218,19 @@ func (in *VeleroStatus) DeepCopy() *VeleroStatus {
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *VeleroTemplate) DeepCopyInto(out *VeleroTemplate) {
*out = *in
in.Spec.DeepCopyInto(&out.Spec)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new VeleroTemplate.
func (in *VeleroTemplate) DeepCopy() *VeleroTemplate {
if in == nil {
return nil
}
out := new(VeleroTemplate)
in.DeepCopyInto(out)
return out
}

View File

@@ -21,6 +21,11 @@ func init() {
})
}
const (
OwningJobNameLabel = thisGroup + "/owned-by.BackupJobName"
OwningJobNamespaceLabel = thisGroup + "/owned-by.BackupJobNamespace"
)
// BackupJobPhase represents the lifecycle phase of a BackupJob.
type BackupJobPhase string
@@ -85,6 +90,8 @@ type BackupJobStatus struct {
// The field indexing on applicationRef will be needed later to display per-app backup resources.
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:name="Phase",type="string",JSONPath=".status.phase",priority=0
// +kubebuilder:selectablefield:JSONPath=`.spec.applicationRef.apiGroup`
// +kubebuilder:selectablefield:JSONPath=`.spec.applicationRef.kind`
// +kubebuilder:selectablefield:JSONPath=`.spec.applicationRef.name`

View File

@@ -25,8 +25,13 @@ import (
"k8s.io/apimachinery/pkg/runtime/schema"
)
const (
thisGroup = "backups.cozystack.io"
thisVersion = "v1alpha1"
)
var (
GroupVersion = schema.GroupVersion{Group: "backups.cozystack.io", Version: "v1alpha1"}
GroupVersion = schema.GroupVersion{Group: thisGroup, Version: thisVersion}
SchemeBuilder = runtime.NewSchemeBuilder(addGroupVersion)
AddToScheme = SchemeBuilder.AddToScheme
)

View File

@@ -35,8 +35,10 @@ import (
metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
"sigs.k8s.io/controller-runtime/pkg/webhook"
strategyv1alpha1 "github.com/cozystack/cozystack/api/backups/strategy/v1alpha1"
backupsv1alpha1 "github.com/cozystack/cozystack/api/backups/v1alpha1"
"github.com/cozystack/cozystack/internal/backupcontroller"
velerov1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
// +kubebuilder:scaffold:imports
)
@@ -49,6 +51,8 @@ func init() {
utilruntime.Must(clientgoscheme.AddToScheme(scheme))
utilruntime.Must(backupsv1alpha1.AddToScheme(scheme))
utilruntime.Must(strategyv1alpha1.AddToScheme(scheme))
utilruntime.Must(velerov1.AddToScheme(scheme))
// +kubebuilder:scaffold:scheme
}
@@ -155,6 +159,15 @@ func main() {
os.Exit(1)
}
if err = (&backupcontroller.BackupJobReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
Recorder: mgr.GetEventRecorderFor("backup-controller"),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "BackupJob")
os.Exit(1)
}
// +kubebuilder:scaffold:builder
if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {

View File

@@ -200,22 +200,6 @@ func main() {
os.Exit(1)
}
if err = (&controller.TenantHelmReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "TenantHelmReconciler")
os.Exit(1)
}
if err = (&controller.CozystackConfigReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "CozystackConfigReconciler")
os.Exit(1)
}
cozyAPIKind := "DaemonSet"
if reconcileDeployment {
cozyAPIKind = "Deployment"

View File

@@ -32,12 +32,16 @@ import (
helmv2 "github.com/fluxcd/helm-controller/api/v2"
sourcev1 "github.com/fluxcd/source-controller/api/v1"
sourcewatcherv1beta1 "github.com/fluxcd/source-watcher/api/v2/v1beta1"
corev1 "k8s.io/api/core/v1"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/cache"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/healthz"
"sigs.k8s.io/controller-runtime/pkg/log"
@@ -45,6 +49,7 @@ import (
metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
"sigs.k8s.io/controller-runtime/pkg/webhook"
"github.com/cozystack/cozystack/internal/cozyvaluesreplicator"
"github.com/cozystack/cozystack/internal/fluxinstall"
"github.com/cozystack/cozystack/internal/operator"
// +kubebuilder:scaffold:imports
@@ -73,6 +78,9 @@ func main() {
var enableHTTP2 bool
var installFlux bool
var cozystackVersion string
var cozyValuesSecretName string
var cozyValuesSecretNamespace string
var cozyValuesNamespaceSelector string
var platformSourceURL string
var platformSourceName string
var platformSourceRef string
@@ -92,6 +100,9 @@ func main() {
flag.StringVar(&platformSourceURL, "platform-source-url", "", "Platform source URL (oci:// or https://). If specified, generates OCIRepository or GitRepository resource.")
flag.StringVar(&platformSourceName, "platform-source-name", "cozystack-packages", "Name for the generated platform source resource (default: cozystack-packages)")
flag.StringVar(&platformSourceRef, "platform-source-ref", "", "Reference specification as key=value pairs (e.g., 'branch=main' or 'digest=sha256:...,tag=v1.0'). For OCI: digest, semver, semverFilter, tag. For Git: branch, tag, semver, name, commit.")
flag.StringVar(&cozyValuesSecretName, "cozy-values-secret-name", "cozystack-values", "The name of the secret containing cluster-wide configuration values.")
flag.StringVar(&cozyValuesSecretNamespace, "cozy-values-secret-namespace", "cozy-system", "The namespace of the secret containing cluster-wide configuration values.")
flag.StringVar(&cozyValuesNamespaceSelector, "cozy-values-namespace-selector", "cozystack.io/system=true", "The label selector for namespaces where the cluster-wide configuration values must be replicated.")
opts := zap.Options{
Development: true,
@@ -110,10 +121,29 @@ func main() {
os.Exit(1)
}
targetNSSelector, err := labels.Parse(cozyValuesNamespaceSelector)
if err != nil {
setupLog.Error(err, "could not parse namespace label selector")
os.Exit(1)
}
// Start the controller manager
setupLog.Info("Starting controller manager")
mgr, err := ctrl.NewManager(config, ctrl.Options{
Scheme: scheme,
Cache: cache.Options{
ByObject: map[client.Object]cache.ByObject{
// Cache only Secrets named <secretName> (in any namespace)
&corev1.Secret{}: {
Field: fields.OneTermEqualSelector("metadata.name", cozyValuesSecretName),
},
// Cache only Namespaces that match a label selector
&corev1.Namespace{}: {
Label: targetNSSelector,
},
},
},
Metrics: metricsserver.Options{
BindAddress: metricsAddr,
SecureServing: secureMetrics,
@@ -169,6 +199,16 @@ func main() {
}
}
if err := (&cozyvaluesreplicator.SecretReplicatorReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
SourceNamespace: cozyValuesSecretNamespace,
SecretName: cozyValuesSecretName,
TargetNamespaceSelector: targetNSSelector,
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "CozyValuesReplicator")
os.Exit(1)
}
// Setup PackageSource reconciler
if err := (&operator.PackageSourceReconciler{
Client: mgr.GetClient(),

View File

@@ -0,0 +1,20 @@
apiVersion: backups.cozystack.io/v1alpha1
kind: BackupJob
metadata:
name: desired-backup
namespace: tenant-root
labels:
backups.cozystack.io/triggered-by: manual
spec:
applicationRef:
apiGroup: apps.cozystack.io
kind: VirtualMachine
name: vm1
storageRef:
apiGroup: apps.cozystack.io
kind: Bucket
name: test-bucket
strategyRef:
apiGroup: strategy.backups.cozystack.io
kind: Velero
name: velero-strategy-default

go.mod
View File

@@ -17,6 +17,7 @@ require (
github.com/prometheus/client_golang v1.22.0
github.com/robfig/cron/v3 v3.0.1
github.com/spf13/cobra v1.9.1
github.com/vmware-tanzu/velero v1.17.1
go.uber.org/zap v1.27.0
gopkg.in/yaml.v2 v2.4.0
k8s.io/api v0.34.1
@@ -80,8 +81,8 @@ require (
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/common v0.62.0 // indirect
github.com/prometheus/client_model v0.6.2 // indirect
github.com/prometheus/common v0.65.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/spf13/pflag v1.0.7 // indirect
github.com/stoewer/go-strcase v1.3.0 // indirect
@@ -90,14 +91,14 @@ require (
go.etcd.io/etcd/client/pkg/v3 v3.6.4 // indirect
go.etcd.io/etcd/client/v3 v3.6.4 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.60.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.58.0 // indirect
go.opentelemetry.io/otel v1.35.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 // indirect
go.opentelemetry.io/otel v1.37.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.34.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.34.0 // indirect
go.opentelemetry.io/otel/metric v1.35.0 // indirect
go.opentelemetry.io/otel/sdk v1.34.0 // indirect
go.opentelemetry.io/otel/trace v1.35.0 // indirect
go.opentelemetry.io/otel/metric v1.37.0 // indirect
go.opentelemetry.io/otel/sdk v1.37.0 // indirect
go.opentelemetry.io/otel/trace v1.37.0 // indirect
go.opentelemetry.io/proto/otlp v1.5.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.yaml.in/yaml/v2 v2.4.2 // indirect
@@ -105,18 +106,18 @@ require (
golang.org/x/crypto v0.42.0 // indirect
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 // indirect
golang.org/x/net v0.45.0 // indirect
golang.org/x/oauth2 v0.29.0 // indirect
golang.org/x/oauth2 v0.30.0 // indirect
golang.org/x/sync v0.17.0 // indirect
golang.org/x/sys v0.36.0 // indirect
golang.org/x/term v0.35.0 // indirect
golang.org/x/text v0.29.0 // indirect
golang.org/x/time v0.11.0 // indirect
golang.org/x/time v0.12.0 // indirect
golang.org/x/tools v0.37.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250303144028-a0af3efb3deb // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250303144028-a0af3efb3deb // indirect
google.golang.org/grpc v1.72.1 // indirect
google.golang.org/protobuf v1.36.5 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 // indirect
google.golang.org/grpc v1.73.0 // indirect
google.golang.org/protobuf v1.36.6 // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect

go.sum
View File

@@ -144,10 +144,10 @@ github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRI
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.22.0 h1:rb93p9lokFEsctTys46VnV1kLCDpVZ0a/Y92Vm0Zc6Q=
github.com/prometheus/client_golang v1.22.0/go.mod h1:R7ljNsLXhuQXYZYtw6GAE9AZg8Y7vEW5scdCXrWRXC0=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.62.0 h1:xasJaQlnWAeyHdUBeGjXmutelfJHWMRr+Fg4QszZ2Io=
github.com/prometheus/common v0.62.0/go.mod h1:vyBcEuLSvWos9B1+CyL7JZ2up+uFzXhkqml0W5zIY1I=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/common v0.65.0 h1:QDwzd+G1twt//Kwj/Ww6E9FQq1iVMmODnILtW1t2VzE=
github.com/prometheus/common v0.65.0/go.mod h1:0gZns+BLRQ3V6NdaerOhMbwwRbNh9hkGINtQAsP5GS8=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
@@ -179,6 +179,8 @@ github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/tmc/grpc-websocket-proxy v0.0.0-20220101234140-673ab2c3ae75 h1:6fotK7otjonDflCTK0BCfls4SPy3NcCVb5dqqmbRknE=
github.com/tmc/grpc-websocket-proxy v0.0.0-20220101234140-673ab2c3ae75/go.mod h1:KO6IkyS8Y3j8OdNO85qEYBsRPuteD+YciPomcXdrMnk=
github.com/vmware-tanzu/velero v1.17.1 h1:ldKeiTuUwkThOw7zrUucNA1NwnLG66zl13YetWAoE0I=
github.com/vmware-tanzu/velero v1.17.1/go.mod h1:3KTxuUN6Un38JzmYAX+8U6j2k6EexGoNNxa8jrJML8U=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
github.com/xiang90/probing v0.0.0-20221125231312-a49e3df8f510 h1:S2dVYn90KE98chqDkyE9Z4N61UnQd+KOfgp5Iu53llk=
@@ -201,24 +203,24 @@ go.etcd.io/raft/v3 v3.6.0 h1:5NtvbDVYpnfZWcIHgGRk9DyzkBIXOi8j+DDp1IcnUWQ=
go.etcd.io/raft/v3 v3.6.0/go.mod h1:nLvLevg6+xrVtHUmVaTcTz603gQPHfh7kUAwV6YpfGo=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.60.0 h1:x7wzEgXfnzJcHDwStJT+mxOz4etr2EcexjqhBvmoakw=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.60.0/go.mod h1:rg+RlpR5dKwaS95IyyZqj5Wd4E13lk/msnTS0Xl9lJM=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.58.0 h1:yd02MEjBdJkG3uabWP9apV+OuWRIXGDuJEUJbOHmCFU=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.58.0/go.mod h1:umTcuxiv1n/s/S6/c2AT/g2CQ7u5C59sHDNmfSwgz7Q=
go.opentelemetry.io/otel v1.35.0 h1:xKWKPxrxB6OtMCbmMY021CqC45J+3Onta9MqjhnusiQ=
go.opentelemetry.io/otel v1.35.0/go.mod h1:UEqy8Zp11hpkUrL73gSlELM0DupHoiq72dR+Zqel/+Y=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0 h1:q4XOmH/0opmeuJtPsbFNivyl7bCt7yRBbeEm2sC/XtQ=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0/go.mod h1:snMWehoOh2wsEwnvvwtDyFCxVeDAODenXHtn5vzrKjo=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 h1:F7Jx+6hwnZ41NSFTO5q4LYDtJRXBf2PD0rNBkeB/lus=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0/go.mod h1:UHB22Z8QsdRDrnAtX4PntOl36ajSxcdUMt1sF7Y6E7Q=
go.opentelemetry.io/otel v1.37.0 h1:9zhNfelUvx0KBfu/gb+ZgeAfAgtWrfHJZcAqFC228wQ=
go.opentelemetry.io/otel v1.37.0/go.mod h1:ehE/umFRLnuLa/vSccNq9oS1ErUlkkK71gMcN34UG8I=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.34.0 h1:OeNbIYk/2C15ckl7glBlOBp5+WlYsOElzTNmiPW/x60=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.34.0/go.mod h1:7Bept48yIeqxP2OZ9/AqIpYS94h2or0aB4FypJTc8ZM=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.34.0 h1:tgJ0uaNS4c98WRNUEx5U3aDlrDOI5Rs+1Vifcw4DJ8U=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.34.0/go.mod h1:U7HYyW0zt/a9x5J1Kjs+r1f/d4ZHnYFclhYY2+YbeoE=
go.opentelemetry.io/otel/metric v1.35.0 h1:0znxYu2SNyuMSQT4Y9WDWej0VpcsxkuklLa4/siN90M=
go.opentelemetry.io/otel/metric v1.35.0/go.mod h1:nKVFgxBZ2fReX6IlyW28MgZojkoAkJGaE8CpgeAU3oE=
go.opentelemetry.io/otel/sdk v1.34.0 h1:95zS4k/2GOy069d321O8jWgYsW3MzVV+KuSPKp7Wr1A=
go.opentelemetry.io/otel/sdk v1.34.0/go.mod h1:0e/pNiaMAqaykJGKbi+tSjWfNNHMTxoC9qANsCzbyxU=
go.opentelemetry.io/otel/sdk/metric v1.34.0 h1:5CeK9ujjbFVL5c1PhLuStg1wxA7vQv7ce1EK0Gyvahk=
go.opentelemetry.io/otel/sdk/metric v1.34.0/go.mod h1:jQ/r8Ze28zRKoNRdkjCZxfs6YvBTG1+YIqyFVFYec5w=
go.opentelemetry.io/otel/trace v1.35.0 h1:dPpEfJu1sDIqruz7BHFG3c7528f6ddfSWfFDVt/xgMs=
go.opentelemetry.io/otel/trace v1.35.0/go.mod h1:WUk7DtFp1Aw2MkvqGdwiXYDZZNvA/1J8o6xRXLrIkyc=
go.opentelemetry.io/otel/metric v1.37.0 h1:mvwbQS5m0tbmqML4NqK+e3aDiO02vsf/WgbsdpcPoZE=
go.opentelemetry.io/otel/metric v1.37.0/go.mod h1:04wGrZurHYKOc+RKeye86GwKiTb9FKm1WHtO+4EVr2E=
go.opentelemetry.io/otel/sdk v1.37.0 h1:ItB0QUqnjesGRvNcmAcU0LyvkVyGJ2xftD29bWdDvKI=
go.opentelemetry.io/otel/sdk v1.37.0/go.mod h1:VredYzxUvuo2q3WRcDnKDjbdvmO0sCzOvVAiY+yUkAg=
go.opentelemetry.io/otel/sdk/metric v1.36.0 h1:r0ntwwGosWGaa0CrSt8cuNuTcccMXERFwHX4dThiPis=
go.opentelemetry.io/otel/sdk/metric v1.36.0/go.mod h1:qTNOhFDfKRwX0yXOqJYegL5WRaW376QbB7P4Pb0qva4=
go.opentelemetry.io/otel/trace v1.37.0 h1:HLdcFNbRQBE2imdSEgm/kwqmQj1Or1l/7bW6mxVK7z4=
go.opentelemetry.io/otel/trace v1.37.0/go.mod h1:TlgrlQ+PtQO5XFerSPUYG0JSgGyryXewPGyayAWSBS0=
go.opentelemetry.io/proto/otlp v1.5.0 h1:xJvq7gMzB31/d406fB8U5CBdyQGw4P399D1aQWU/3i4=
go.opentelemetry.io/proto/otlp v1.5.0/go.mod h1:keN8WnHxOy8PG0rQZjJJ5A2ebUoafqWp0eVQ4yIXvJ4=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
@@ -246,8 +248,8 @@ golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.45.0 h1:RLBg5JKixCy82FtLJpeNlVM0nrSqpCRYzVU1n8kj0tM=
golang.org/x/net v0.45.0/go.mod h1:ECOoLqd5U3Lhyeyo/QDCEVQ4sNgYsqvCZ722XogGieY=
golang.org/x/oauth2 v0.29.0 h1:WdYw2tdTK1S8olAzWHdgeqfy+Mtm9XNhv/xJsY65d98=
golang.org/x/oauth2 v0.29.0/go.mod h1:onh5ek6nERTohokkhCD/y2cV4Do3fxFHFuAejCkRWT8=
golang.org/x/oauth2 v0.30.0 h1:dnDm7JmhM45NNpd8FDDeLhK6FwqbOf4MLCM9zb1BOHI=
golang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -264,8 +266,8 @@ golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.29.0 h1:1neNs90w9YzJ9BocxfsQNHKuAT4pkghyXc4nhZ6sJvk=
golang.org/x/text v0.29.0/go.mod h1:7MhJOA9CD2qZyOKYazxdYMF85OwPdEr9jTtBpO7ydH4=
golang.org/x/time v0.11.0 h1:/bpjEDfN9tkoN/ryeYHnv5hcMlc8ncjMcM4XBk5NWV0=
golang.org/x/time v0.11.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE=
golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
@@ -278,14 +280,14 @@ golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8T
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gomodules.xyz/jsonpatch/v2 v2.4.0 h1:Ci3iUJyx9UeRx7CeFN8ARgGbkESwJK+KB9lLcWxY/Zw=
gomodules.xyz/jsonpatch/v2 v2.4.0/go.mod h1:AH3dM2RI6uoBZxn3LVrfvJ3E0/9dG4cSrbuBJT4moAY=
google.golang.org/genproto/googleapis/api v0.0.0-20250303144028-a0af3efb3deb h1:p31xT4yrYrSM/G4Sn2+TNUkVhFCbG9y8itM2S6Th950=
google.golang.org/genproto/googleapis/api v0.0.0-20250303144028-a0af3efb3deb/go.mod h1:jbe3Bkdp+Dh2IrslsFCklNhweNTBgSYanP1UXhJDhKg=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250303144028-a0af3efb3deb h1:TLPQVbx1GJ8VKZxz52VAxl1EBgKXXbTiU9Fc5fZeLn4=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250303144028-a0af3efb3deb/go.mod h1:LuRYeWDFV6WOn90g357N17oMCaxpgCnbi/44qJvDn2I=
google.golang.org/grpc v1.72.1 h1:HR03wO6eyZ7lknl75XlxABNVLLFc2PAb6mHlYh756mA=
google.golang.org/grpc v1.72.1/go.mod h1:wH5Aktxcg25y1I3w7H69nHfXdOG3UiadoBtjh3izSDM=
google.golang.org/protobuf v1.36.5 h1:tPhr+woSbjfYvY6/GPufUoYizxw1cF/yFoxJ2fmpwlM=
google.golang.org/protobuf v1.36.5/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
google.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822 h1:oWVWY3NzT7KJppx2UKhKmzPq4SRe0LdCijVRwvGeikY=
google.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822/go.mod h1:h3c4v36UTKzUiuaOKQ6gr3S+0hovBtUrXzTG/i3+XEc=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 h1:fc6jSaCT0vBduLYZHYrBBNY4dsWuvgyff9noRNDdBeE=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A=
google.golang.org/grpc v1.73.0 h1:VIWSmpI2MegBtTuFt5/JWy2oXxtjJ/e89Z70ImfD2ok=
google.golang.org/grpc v1.73.0/go.mod h1:50sbHOUqWoCQGI8V2HQLJM0B+LMlIUjNSZmow7EVBQc=
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=

View File

@@ -0,0 +1,226 @@
#!/usr/bin/env bats
# Test variables - stored for teardown
TEST_NAMESPACE='tenant-test'
TEST_BUCKET_NAME='test-backup-bucket'
TEST_VM_NAME='test-backup-vm'
TEST_BACKUPJOB_NAME='test-backup-job'
teardown() {
# Clean up resources (runs even if test fails)
namespace="${TEST_NAMESPACE}"
bucket_name="${TEST_BUCKET_NAME}"
vm_name="${TEST_VM_NAME}"
backupjob_name="${TEST_BACKUPJOB_NAME}"
# Clean up port-forward if still running
pkill -f "kubectl.*port-forward.*seaweedfs-s3" 2>/dev/null || true
# Clean up Velero resources in cozy-velero namespace
# Find Velero backup by pattern matching namespace-backupjob
for backup in $(kubectl -n cozy-velero get backups.velero.io -o jsonpath='{.items[*].metadata.name}' 2>/dev/null || true); do
if echo "$backup" | grep -q "^${namespace}-${backupjob_name}-"; then
kubectl -n cozy-velero delete backups.velero.io ${backup} --wait=false 2>/dev/null || true
fi
done
# Clean up BackupStorageLocation and VolumeSnapshotLocation (named: namespace-backupjob)
BSL_NAME="${namespace}-${backupjob_name}"
kubectl -n cozy-velero delete backupstoragelocations.velero.io ${BSL_NAME} --wait=false 2>/dev/null || true
kubectl -n cozy-velero delete volumesnapshotlocations.velero.io ${BSL_NAME} --wait=false 2>/dev/null || true
# Clean up Velero credentials secret
SECRET_NAME="backup-${namespace}-${backupjob_name}-s3-credentials"
kubectl -n cozy-velero delete secret ${SECRET_NAME} --wait=false 2>/dev/null || true
# Clean up BackupJob
kubectl -n ${namespace} delete backupjob ${backupjob_name} --wait=false 2>/dev/null || true
# Clean up Virtual Machine
kubectl -n ${namespace} delete virtualmachines.apps.cozystack.io ${vm_name} --wait=false 2>/dev/null || true
# Clean up Bucket
kubectl -n ${namespace} delete bucket.apps.cozystack.io ${bucket_name} --wait=false 2>/dev/null || true
# Clean up temporary files
rm -f /tmp/bucket-backup-credentials.json
}
print_log() {
echo "# $1" >&3
}
@test "Create Backup for Virtual Machine" {
# Test variables
bucket_name="${TEST_BUCKET_NAME}"
vm_name="${TEST_VM_NAME}"
backupjob_name="${TEST_BACKUPJOB_NAME}"
namespace="${TEST_NAMESPACE}"
print_log "Step 0:Ensure BackupJob and Velero strategy CRDs are installed"
kubectl apply -f packages/system/backup-controller/definitions/backups.cozystack.io_backupjobs.yaml
kubectl apply -f packages/system/backupstrategy-controller/definitions/strategy.backups.cozystack.io_veleroes.yaml
# Wait for CRDs to be ready
kubectl wait --for condition=established --timeout=30s crd backupjobs.backups.cozystack.io
kubectl wait --for condition=established --timeout=30s crd veleroes.strategy.backups.cozystack.io
# Ensure velero-strategy-default resource exists
kubectl apply -f packages/system/backup-controller/templates/strategy.yaml
print_log "Step 1: Create the bucket resource"
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Bucket
metadata:
name: ${bucket_name}
namespace: ${namespace}
spec: {}
EOF
print_log "Wait for the bucket to be ready"
kubectl -n ${namespace} wait hr bucket-${bucket_name} --timeout=100s --for=condition=ready
kubectl -n ${namespace} wait bucketclaims.objectstorage.k8s.io bucket-${bucket_name} --timeout=300s --for=jsonpath='{.status.bucketReady}'=true
kubectl -n ${namespace} wait bucketaccesses.objectstorage.k8s.io bucket-${bucket_name} --timeout=300s --for=jsonpath='{.status.accessGranted}'=true
# Get bucket credentials for later S3 verification
kubectl -n ${namespace} get secret bucket-${bucket_name} -ojsonpath='{.data.BucketInfo}' | base64 -d > /tmp/bucket-backup-credentials.json
ACCESS_KEY=$(jq -r '.spec.secretS3.accessKeyID' /tmp/bucket-backup-credentials.json)
SECRET_KEY=$(jq -r '.spec.secretS3.accessSecretKey' /tmp/bucket-backup-credentials.json)
BUCKET_NAME=$(jq -r '.spec.bucketName' /tmp/bucket-backup-credentials.json)
print_log "Step 2: Create the Virtual Machine"
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: VirtualMachine
metadata:
name: ${vm_name}
namespace: ${namespace}
spec:
external: false
externalMethod: PortList
externalPorts:
- 22
instanceType: "u1.medium"
instanceProfile: ubuntu
systemDisk:
image: ubuntu
storage: 5Gi
storageClass: replicated
gpus: []
resources: {}
sshKeys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF
test@test
cloudInit: |
#cloud-config
users:
- name: test
shell: /bin/bash
sudo: ['ALL=(ALL) NOPASSWD: ALL']
groups: sudo
ssh_authorized_keys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF test@test
cloudInitSeed: ""
EOF
print_log "Wait for VM to be ready"
sleep 5
kubectl -n ${namespace} wait hr virtual-machine-${vm_name} --timeout=10s --for=condition=ready
kubectl -n ${namespace} wait dv virtual-machine-${vm_name} --timeout=150s --for=condition=ready
kubectl -n ${namespace} wait pvc virtual-machine-${vm_name} --timeout=100s --for=jsonpath='{.status.phase}'=Bound
kubectl -n ${namespace} wait vm virtual-machine-${vm_name} --timeout=100s --for=condition=ready
print_log "Step 3: Create BackupJob"
kubectl apply -f - <<EOF
apiVersion: backups.cozystack.io/v1alpha1
kind: BackupJob
metadata:
name: ${backupjob_name}
namespace: ${namespace}
labels:
backups.cozystack.io/triggered-by: e2e-test
spec:
applicationRef:
apiGroup: apps.cozystack.io
kind: VirtualMachine
name: ${vm_name}
storageRef:
apiGroup: apps.cozystack.io
kind: Bucket
name: ${bucket_name}
strategyRef:
apiGroup: strategy.backups.cozystack.io
kind: Velero
name: velero-strategy-default
EOF
print_log "Wait for BackupJob to start"
kubectl -n ${namespace} wait backupjob ${backupjob_name} --timeout=60s --for=jsonpath='{.status.phase}'=Running
print_log "Wait for BackupJob to complete"
kubectl -n ${namespace} wait backupjob ${backupjob_name} --timeout=300s --for=jsonpath='{.status.phase}'=Succeeded
print_log "Verify BackupJob status"
PHASE=$(kubectl -n ${namespace} get backupjob ${backupjob_name} -o jsonpath='{.status.phase}')
[ "$PHASE" = "Succeeded" ]
# Verify BackupJob has a backupRef
BACKUP_REF=$(kubectl -n ${namespace} get backupjob ${backupjob_name} -o jsonpath='{.status.backupRef.name}')
[ -n "$BACKUP_REF" ]
# Find the Velero backup created for this BackupJob
# Format: <namespace>.<backupjob>-<random suffix>, per the controller's generateName
VELERO_BACKUP_NAME=""
VELERO_BACKUP_PHASE=""
print_log "Wait a bit for the backup to be created and appear in the API"
sleep 30
# Find the backup by matching the generateName prefix
for backup in $(kubectl -n cozy-velero get backups.velero.io -o jsonpath='{.items[*].metadata.name}' 2>/dev/null); do
if echo "$backup" | grep -q "^${namespace}\.${backupjob_name}-"; then
VELERO_BACKUP_NAME=$backup
VELERO_BACKUP_PHASE=$(kubectl -n cozy-velero get backups.velero.io $backup -o jsonpath='{.status.phase}' 2>/dev/null || echo "")
break
fi
done
print_log "Verify Velero Backup was found"
[ -n "$VELERO_BACKUP_NAME" ]
print_log "Wait for Velero Backup to complete"
timeout 300 sh -ec "until kubectl -n cozy-velero get backups.velero.io ${VELERO_BACKUP_NAME} -o jsonpath='{.status.phase}' | grep -q 'Completed\|Failed'; do sleep 5; done"
print_log "Verify Velero Backup is Completed"
timeout 90 sh -ec "until [ \"\$(kubectl -n cozy-velero get backups.velero.io ${VELERO_BACKUP_NAME} -o jsonpath='{.status.phase}' 2>/dev/null)\" = \"Completed\" ]; do sleep 30; done"
# Final verification
VELERO_BACKUP_PHASE=$(kubectl -n cozy-velero get backups.velero.io ${VELERO_BACKUP_NAME} -o jsonpath='{.status.phase}' 2>/dev/null || echo "")
[ "$VELERO_BACKUP_PHASE" = "Completed" ]
print_log "Step 4: Verify S3 has backup data"
# Start port-forwarding to S3 service (with timeout to keep it alive)
bash -c 'timeout 100s kubectl port-forward service/seaweedfs-s3 -n tenant-root 8333:8333 > /dev/null 2>&1 &'
# Wait for port-forward to be ready
timeout 30 sh -ec "until nc -z localhost 8333; do sleep 1; done"
# Wait a bit for backup data to be written to S3
sleep 30
# Set up MinIO client with insecure flag (use environment variable for all commands)
export MC_INSECURE=1
mc alias set local https://localhost:8333 $ACCESS_KEY $SECRET_KEY
# Verify backup directory exists in S3
BACKUP_PATH="${BUCKET_NAME}/backups/${VELERO_BACKUP_NAME}"
mc ls local/${BACKUP_PATH}/ 2>/dev/null
[ $? -eq 0 ]
# Verify backup files exist (at least metadata files)
BACKUP_FILES=$(mc ls local/${BACKUP_PATH}/ 2>/dev/null | wc -l || echo "0")
[ "$BACKUP_FILES" -gt "0" ]
}

View File

@@ -72,7 +72,7 @@ EOF
kubectl wait --for=condition=TenantControlPlaneCreated kamajicontrolplane -n tenant-test kubernetes-${test_name} --timeout=4m
# Wait for Kubernetes resources to be ready (timeout after 2 minutes)
kubectl wait tcp -n tenant-test kubernetes-${test_name} --timeout=2m --for=jsonpath='{.status.kubernetesResources.version.status}'=Ready
kubectl wait tcp -n tenant-test kubernetes-${test_name} --timeout=5m --for=jsonpath='{.status.kubernetesResources.version.status}'=Ready
# Wait for all required deployments to be available (timeout after 4 minutes)
kubectl wait deploy --timeout=4m --for=condition=available -n tenant-test kubernetes-${test_name} kubernetes-${test_name}-cluster-autoscaler kubernetes-${test_name}-kccm kubernetes-${test_name}-kcsi-controller
@@ -87,7 +87,7 @@ EOF
# Set up port forwarding to the Kubernetes API server for a 200 second timeout
bash -c 'timeout 300s kubectl port-forward service/kubernetes-'"${test_name}"' -n tenant-test '"${port}"':6443 > /dev/null 2>&1 &'
bash -c 'timeout 500s kubectl port-forward service/kubernetes-'"${test_name}"' -n tenant-test '"${port}"':6443 > /dev/null 2>&1 &'
# Verify the Kubernetes version matches what we expect (retry for up to 20 seconds)
timeout 20 sh -ec 'until kubectl --kubeconfig tenantkubeconfig-'"${test_name}"' version 2>/dev/null | grep -Fq "Server Version: ${k8s_version}"; do sleep 5; done'
@@ -124,6 +124,100 @@ EOF
exit 1
fi
kubectl --kubeconfig tenantkubeconfig-${test_name} apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
name: tenant-test
EOF
# Backend 1
kubectl apply --kubeconfig tenantkubeconfig-${test_name} -f- <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: "${test_name}-backend"
namespace: tenant-test
spec:
replicas: 1
selector:
matchLabels:
app: backend
backend: "${test_name}-backend"
template:
metadata:
labels:
app: backend
backend: "${test_name}-backend"
spec:
containers:
- name: nginx
image: nginx:alpine
ports:
- containerPort: 80
readinessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 2
periodSeconds: 2
EOF
# LoadBalancer Service
kubectl apply --kubeconfig tenantkubeconfig-${test_name} -f- <<EOF
apiVersion: v1
kind: Service
metadata:
name: "${test_name}-backend"
namespace: tenant-test
spec:
type: LoadBalancer
selector:
app: backend
backend: "${test_name}-backend"
ports:
- port: 80
targetPort: 80
EOF
# Wait for pods readiness
kubectl wait deployment --kubeconfig tenantkubeconfig-${test_name} ${test_name}-backend -n tenant-test --for=condition=Available --timeout=90s
# Wait for LoadBalancer to be provisioned (IP or hostname)
timeout 90 sh -ec "
until kubectl get svc ${test_name}-backend --kubeconfig tenantkubeconfig-${test_name} -n tenant-test \
-o jsonpath='{.status.loadBalancer.ingress[0]}' | grep -q .; do
sleep 5
done
"
LB_ADDR=$(
kubectl get svc --kubeconfig tenantkubeconfig-${test_name} "${test_name}-backend" \
-n tenant-test \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}{.status.loadBalancer.ingress[0].hostname}'
)
if [ -z "$LB_ADDR" ]; then
echo "LoadBalancer address is empty" >&2
exit 1
fi
reachable=0
for i in $(seq 1 20); do
echo "Attempt $i"
if curl --silent --fail "http://${LB_ADDR}"; then
reachable=1
break
fi
sleep 3
done
if [ "$reachable" -ne 1 ]; then
echo "LoadBalancer not reachable" >&2
exit 1
fi
# Cleanup
kubectl delete deployment --kubeconfig tenantkubeconfig-${test_name} "${test_name}-backend" -n tenant-test
kubectl delete service --kubeconfig tenantkubeconfig-${test_name} "${test_name}-backend" -n tenant-test
# Wait for all machine deployment replicas to be ready (timeout after 10 minutes)
kubectl wait machinedeployment kubernetes-${test_name}-md0 -n tenant-test --timeout=10m --for=jsonpath='{.status.v1beta2.readyReplicas}'=2

View File

@@ -8,7 +8,6 @@ need yq; need jq; need base64
CHART_YAML="${CHART_YAML:-Chart.yaml}"
VALUES_YAML="${VALUES_YAML:-values.yaml}"
SCHEMA_JSON="${SCHEMA_JSON:-values.schema.json}"
CRD_DIR="../../system/cozystack-resource-definitions/cozyrds"
[[ -f "$CHART_YAML" ]] || { echo "No $CHART_YAML found"; exit 1; }
[[ -f "$SCHEMA_JSON" ]] || { echo "No $SCHEMA_JSON found"; exit 1; }
@@ -22,6 +21,8 @@ if [[ -z "$NAME" ]]; then
echo "Chart.yaml: .name is empty"; exit 1
fi
CRD_DIR="../../system/${NAME}-rd/cozyrds"
# Resolve icon path
# Accepts:
# /logos/foo.svg -> ./logos/foo.svg

View File

@@ -2,12 +2,18 @@ package backupcontroller
import (
"context"
"net/http"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/record"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/apiutil"
"sigs.k8s.io/controller-runtime/pkg/log"
strategyv1alpha1 "github.com/cozystack/cozystack/api/backups/strategy/v1alpha1"
@@ -18,35 +24,69 @@ import (
// Velero.strategy.backups.cozystack.io objects.
type BackupJobReconciler struct {
client.Client
Scheme *runtime.Scheme
dynamic.Interface
meta.RESTMapper
Scheme *runtime.Scheme
Recorder record.EventRecorder
}
func (r *BackupJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
_ = log.FromContext(ctx)
logger := log.FromContext(ctx)
logger.Info("reconciling BackupJob", "namespace", req.Namespace, "name", req.Name)
j := &backupsv1alpha1.BackupJob{}
err := r.Get(ctx, types.NamespacedName{Namespace: req.Namespace, Name: req.Name}, j)
if err != nil {
if apierrors.IsNotFound(err) {
logger.V(1).Info("BackupJob not found, skipping")
return ctrl.Result{}, nil
}
logger.Error(err, "failed to get BackupJob")
return ctrl.Result{}, err
}
if j.Spec.StrategyRef.APIGroup == nil || *j.Spec.StrategyRef.APIGroup != strategyv1alpha1.GroupVersion.Group {
if j.Spec.StrategyRef.APIGroup == nil {
logger.V(1).Info("BackupJob has nil StrategyRef.APIGroup, skipping", "backupjob", j.Name)
return ctrl.Result{}, nil
}
if *j.Spec.StrategyRef.APIGroup != strategyv1alpha1.GroupVersion.Group {
logger.V(1).Info("BackupJob StrategyRef.APIGroup doesn't match, skipping",
"backupjob", j.Name,
"expected", strategyv1alpha1.GroupVersion.Group,
"got", *j.Spec.StrategyRef.APIGroup)
return ctrl.Result{}, nil
}
logger.Info("processing BackupJob", "backupjob", j.Name, "strategyKind", j.Spec.StrategyRef.Kind)
switch j.Spec.StrategyRef.Kind {
case strategyv1alpha1.JobStrategyKind:
return r.reconcileJob(ctx, j)
case strategyv1alpha1.VeleroStrategyKind:
return r.reconcileVelero(ctx, j)
default:
logger.V(1).Info("BackupJob StrategyRef.Kind not supported, skipping",
"backupjob", j.Name,
"kind", j.Spec.StrategyRef.Kind,
"supported", []string{strategyv1alpha1.JobStrategyKind, strategyv1alpha1.VeleroStrategyKind})
return ctrl.Result{}, nil
}
}
// SetupWithManager registers our controller with the Manager and sets up watches.
func (r *BackupJobReconciler) SetupWithManager(mgr ctrl.Manager) error {
cfg := mgr.GetConfig()
var err error
if r.Interface, err = dynamic.NewForConfig(cfg); err != nil {
return err
}
var h *http.Client
if h, err = rest.HTTPClientFor(cfg); err != nil {
return err
}
if r.RESTMapper, err = apiutil.NewDynamicRESTMapper(cfg, h); err != nil {
return err
}
return ctrl.NewControllerManagedBy(mgr).
For(&backupsv1alpha1.BackupJob{}).
Complete(r)

View File

@@ -2,14 +2,626 @@ package backupcontroller
import (
"context"
"encoding/json"
"fmt"
"reflect"
"time"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/meta"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/log"
strategyv1alpha1 "github.com/cozystack/cozystack/api/backups/strategy/v1alpha1"
backupsv1alpha1 "github.com/cozystack/cozystack/api/backups/v1alpha1"
"github.com/cozystack/cozystack/internal/template"
"github.com/go-logr/logr"
velerov1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
)
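// getLogger wraps the context logger so call sites can write logger.Debug(...)
// instead of logger.V(1).Info(...).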
func getLogger(ctx context.Context) loggerWithDebug {
return loggerWithDebug{Logger: log.FromContext(ctx)}
}
// loggerWithDebug wraps a logr.Logger and provides a Debug() method
// that maps to V(1).Info() for convenience.
type loggerWithDebug struct {
logr.Logger
}
// Debug logs at debug level (equivalent to V(1).Info())
func (l loggerWithDebug) Debug(msg string, keysAndValues ...interface{}) {
l.Logger.V(1).Info(msg, keysAndValues...)
}
// S3Credentials holds the discovered S3 credentials from a Bucket storageRef
type S3Credentials struct {
BucketName string
Endpoint string
Region string
AccessKeyID string
AccessSecretKey string
}
// bucketInfo represents the structure of BucketInfo stored in the secret
type bucketInfo struct {
Spec struct {
BucketName string `json:"bucketName"`
SecretS3 struct {
Endpoint string `json:"endpoint"`
Region string `json:"region"`
AccessKeyID string `json:"accessKeyID"`
AccessSecretKey string `json:"accessSecretKey"`
} `json:"secretS3"`
} `json:"spec"`
}
const (
defaultRequeueAfter = 5 * time.Second
defaultActiveJobPollingInterval = defaultRequeueAfter
// Velero requires API objects and secrets to be in the cozy-velero namespace
veleroNamespace = "cozy-velero"
virtualMachinePrefix = "virtual-machine-"
)
func storageS3SecretName(namespace, backupJobName string) string {
return fmt.Sprintf("backup-%s-%s-s3-credentials", namespace, backupJobName)
}
func boolPtr(b bool) *bool {
return &b
}
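// reconcileVelero drives a BackupJob through its lifecycle: set StartedAt,
// create the Velero Backup (phase Running), then either record a Backup
// resource on Completed (phase Succeeded) or mark the job Failed on
// Failed/PartiallyFailed.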
func (r *BackupJobReconciler) reconcileVelero(ctx context.Context, j *backupsv1alpha1.BackupJob) (ctrl.Result, error) {
_ = log.FromContext(ctx)
logger := getLogger(ctx)
logger.Debug("reconciling Velero strategy", "backupjob", j.Name, "phase", j.Status.Phase)
// If already completed, no need to reconcile
if j.Status.Phase == backupsv1alpha1.BackupJobPhaseSucceeded ||
j.Status.Phase == backupsv1alpha1.BackupJobPhaseFailed {
logger.Debug("BackupJob already completed, skipping", "phase", j.Status.Phase)
return ctrl.Result{}, nil
}
// Step 1: On first reconcile, set startedAt (but not phase yet - phase will be set after backup creation)
logger.Debug("checking BackupJob status", "startedAt", j.Status.StartedAt, "phase", j.Status.Phase)
if j.Status.StartedAt == nil {
logger.Debug("setting BackupJob StartedAt")
now := metav1.Now()
j.Status.StartedAt = &now
// Don't set phase to Running yet - will be set after Velero backup is successfully created
if err := r.Status().Update(ctx, j); err != nil {
logger.Error(err, "failed to update BackupJob status")
return ctrl.Result{}, err
}
logger.Debug("set BackupJob StartedAt", "startedAt", j.Status.StartedAt)
} else {
logger.Debug("BackupJob already started", "startedAt", j.Status.StartedAt, "phase", j.Status.Phase)
}
// Step 2: Resolve inputs - Read Strategy, Storage, Application, optionally Plan
logger.Debug("fetching Velero strategy", "strategyName", j.Spec.StrategyRef.Name)
veleroStrategy := &strategyv1alpha1.Velero{}
if err := r.Get(ctx, client.ObjectKey{Name: j.Spec.StrategyRef.Name}, veleroStrategy); err != nil {
if errors.IsNotFound(err) {
logger.Error(err, "Velero strategy not found", "strategyName", j.Spec.StrategyRef.Name)
return r.markBackupJobFailed(ctx, j, fmt.Sprintf("Velero strategy not found: %s", j.Spec.StrategyRef.Name))
}
logger.Error(err, "failed to get Velero strategy")
return ctrl.Result{}, err
}
logger.Debug("fetched Velero strategy", "strategyName", veleroStrategy.Name)
// Step 3: Execute backup logic
// Check whether we already created a Velero Backup for this job,
// matched via the owning-job labels set at creation time
if j.Status.StartedAt == nil {
logger.Error(nil, "StartedAt is nil after status update, this should not happen")
return ctrl.Result{RequeueAfter: defaultRequeueAfter}, nil
}
logger.Debug("checking for existing Velero Backup", "namespace", veleroNamespace)
veleroBackupList := &velerov1.BackupList{}
opts := []client.ListOption{
client.InNamespace(veleroNamespace),
client.MatchingLabels{
backupsv1alpha1.OwningJobNamespaceLabel: j.Namespace,
backupsv1alpha1.OwningJobNameLabel: j.Name,
},
}
if err := r.List(ctx, veleroBackupList, opts...); err != nil {
logger.Error(err, "failed to get Velero Backup")
return ctrl.Result{}, err
}
if len(veleroBackupList.Items) == 0 {
// Create Velero Backup
logger.Debug("Velero Backup not found, creating new one")
if err := r.createVeleroBackup(ctx, j, veleroStrategy); err != nil {
logger.Error(err, "failed to create Velero Backup")
return r.markBackupJobFailed(ctx, j, fmt.Sprintf("failed to create Velero Backup: %v", err))
}
// After successful Velero backup creation, set phase to Running
if j.Status.Phase != backupsv1alpha1.BackupJobPhaseRunning {
logger.Debug("setting BackupJob phase to Running after successful Velero backup creation")
j.Status.Phase = backupsv1alpha1.BackupJobPhaseRunning
if err := r.Status().Update(ctx, j); err != nil {
logger.Error(err, "failed to update BackupJob phase to Running")
return ctrl.Result{}, err
}
}
logger.Debug("created Velero Backup, requeuing")
// Requeue to check status
return ctrl.Result{RequeueAfter: defaultRequeueAfter}, nil
}
if len(veleroBackupList.Items) > 1 {
logger.Error(fmt.Errorf("too many Velero backups for BackupJob"), "found more than one Velero Backup referencing a single BackupJob as owner")
j.Status.Phase = backupsv1alpha1.BackupJobPhaseFailed
if err := r.Status().Update(ctx, j); err != nil {
logger.Error(err, "failed to update BackupJob status")
}
return ctrl.Result{}, nil
}
veleroBackup := veleroBackupList.Items[0].DeepCopy()
logger.Debug("found existing Velero Backup", "phase", veleroBackup.Status.Phase)
// If Velero backup exists but phase is not Running, set it to Running
// This handles the case where the backup was created but phase wasn't set yet
if j.Status.Phase != backupsv1alpha1.BackupJobPhaseRunning {
logger.Debug("setting BackupJob phase to Running (Velero backup already exists)")
j.Status.Phase = backupsv1alpha1.BackupJobPhaseRunning
if err := r.Status().Update(ctx, j); err != nil {
logger.Error(err, "failed to update BackupJob phase to Running")
return ctrl.Result{}, err
}
}
// Check Velero Backup status
phase := string(veleroBackup.Status.Phase)
if phase == "" {
// Still in progress, requeue
return ctrl.Result{RequeueAfter: defaultActiveJobPollingInterval}, nil
}
// Step 4: On success - Create Backup resource and update status
if phase == "Completed" {
// Check if we already created the Backup resource
if j.Status.BackupRef == nil {
backup, err := r.createBackupResource(ctx, j, veleroBackup)
if err != nil {
return r.markBackupJobFailed(ctx, j, fmt.Sprintf("failed to create Backup resource: %v", err))
}
now := metav1.Now()
j.Status.BackupRef = &corev1.LocalObjectReference{Name: backup.Name}
j.Status.CompletedAt = &now
j.Status.Phase = backupsv1alpha1.BackupJobPhaseSucceeded
if err := r.Status().Update(ctx, j); err != nil {
logger.Error(err, "failed to update BackupJob status")
return ctrl.Result{}, err
}
logger.Debug("BackupJob succeeded", "backup", backup.Name)
}
return ctrl.Result{}, nil
}
// Step 5: On failure
if phase == "Failed" || phase == "PartiallyFailed" {
message := fmt.Sprintf("Velero Backup failed with phase: %s", phase)
if len(veleroBackup.Status.ValidationErrors) > 0 {
message = fmt.Sprintf("%s: %v", message, veleroBackup.Status.ValidationErrors)
}
return r.markBackupJobFailed(ctx, j, message)
}
// Still in progress (InProgress, New, etc.)
return ctrl.Result{RequeueAfter: defaultActiveJobPollingInterval}, nil
}
// resolveBucketStorageRef discovers S3 credentials from a Bucket storageRef
// It follows this flow:
// 1. Get the Bucket resource (apps.cozystack.io/v1alpha1)
// 2. Find the BucketAccess that references this bucket
// 3. Get the secret from BucketAccess.spec.credentialsSecretName
// 4. Decode BucketInfo from secret.data.BucketInfo and extract S3 credentials
func (r *BackupJobReconciler) resolveBucketStorageRef(ctx context.Context, storageRef corev1.TypedLocalObjectReference, namespace string) (*S3Credentials, error) {
logger := getLogger(ctx)
// Step 1: Validate the storage APIGroup, then get the Bucket resource
if storageRef.APIGroup == nil {
return nil, fmt.Errorf("storage APIGroup is empty, expected apps.cozystack.io")
}
if *storageRef.APIGroup != "apps.cozystack.io" {
return nil, fmt.Errorf("unsupported storage APIGroup %q, expected apps.cozystack.io", *storageRef.APIGroup)
}
bucket := &unstructured.Unstructured{}
bucket.SetGroupVersionKind(schema.GroupVersionKind{
Group: *storageRef.APIGroup,
Version: "v1alpha1",
Kind: storageRef.Kind,
})
bucketKey := client.ObjectKey{Namespace: namespace, Name: storageRef.Name}
if err := r.Get(ctx, bucketKey, bucket); err != nil {
return nil, fmt.Errorf("failed to get Bucket %s: %w", storageRef.Name, err)
}
// Step 2: Determine the bucket claim name
// For apps.cozystack.io Bucket, the BucketClaim name is typically the same as the Bucket name
// or follows a pattern. Based on the templates, it's usually the Release.Name which equals the Bucket name
bucketName := storageRef.Name
// Step 3: Get the BucketAccess by name (it is named "bucket-<bucketName>")
bucketAccess := &unstructured.Unstructured{}
bucketAccess.SetGroupVersionKind(schema.GroupVersionKind{
Group: "objectstorage.k8s.io",
Version: "v1alpha1",
Kind: "BucketAccess",
})
bucketAccessKey := client.ObjectKey{Name: "bucket-" + bucketName, Namespace: namespace}
if err := r.Get(ctx, bucketAccessKey, bucketAccess); err != nil {
return nil, fmt.Errorf("failed to get BucketAccess %s in namespace %s: %w", bucketAccessKey.Name, namespace, err)
}
// Step 4: Get the secret name from BucketAccess
secretName, found, err := unstructured.NestedString(bucketAccess.Object, "spec", "credentialsSecretName")
if err != nil {
return nil, fmt.Errorf("failed to get credentialsSecretName from BucketAccess: %w", err)
}
if !found || secretName == "" {
return nil, fmt.Errorf("credentialsSecretName not found in BucketAccess %s", bucketAccessKey.Name)
}
// Step 5: Get the secret
secret := &corev1.Secret{}
secretKey := client.ObjectKey{Namespace: namespace, Name: secretName}
if err := r.Get(ctx, secretKey, secret); err != nil {
return nil, fmt.Errorf("failed to get secret %s: %w", secretName, err)
}
// Step 6: Decode BucketInfo from secret.data.BucketInfo
bucketInfoData, found := secret.Data["BucketInfo"]
if !found {
return nil, fmt.Errorf("BucketInfo key not found in secret %s", secretName)
}
// Parse JSON value
var info bucketInfo
if err := json.Unmarshal(bucketInfoData, &info); err != nil {
return nil, fmt.Errorf("failed to unmarshal BucketInfo from secret %s: %w", secretName, err)
}
// Step 7: Extract and return S3 credentials
creds := &S3Credentials{
BucketName: info.Spec.BucketName,
Endpoint: info.Spec.SecretS3.Endpoint,
Region: info.Spec.SecretS3.Region,
AccessKeyID: info.Spec.SecretS3.AccessKeyID,
AccessSecretKey: info.Spec.SecretS3.AccessSecretKey,
}
logger.Debug("resolved S3 credentials from Bucket storageRef",
"bucket", storageRef.Name,
"bucketName", creds.BucketName,
"endpoint", creds.Endpoint)
return creds, nil
}
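// For reference, the BucketInfo payload decoded above has roughly this shape
// (a sketch; field values are illustrative, not taken from a real cluster):
//
//	{
//	  "spec": {
//	    "bucketName": "bucket-example-abc123",
//	    "secretS3": {
//	      "endpoint": "https://s3.example.svc:8333",
//	      "region": "us-east-1",
//	      "accessKeyID": "...",
//	      "accessSecretKey": "..."
//	    }
//	  }
//	}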
// createS3CredsForVelero creates or updates a Kubernetes Secret containing
// Velero S3 credentials in the format expected by Velero's cloud-credentials plugin.
func (r *BackupJobReconciler) createS3CredsForVelero(ctx context.Context, backupJob *backupsv1alpha1.BackupJob, creds *S3Credentials) error {
logger := getLogger(ctx)
secretName := storageS3SecretName(backupJob.Namespace, backupJob.Name)
secretNamespace := veleroNamespace
secret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: secretName,
Namespace: secretNamespace,
},
Type: corev1.SecretTypeOpaque,
StringData: map[string]string{
"cloud": fmt.Sprintf(`[default]
aws_access_key_id=%s
aws_secret_access_key=%s
services = seaweed-s3
[services seaweed-s3]
s3 =
endpoint_url = %s
`, creds.AccessKeyID, creds.AccessSecretKey, creds.Endpoint),
},
}
foundSecret := &corev1.Secret{}
secretKey := client.ObjectKey{Name: secretName, Namespace: secretNamespace}
err := r.Get(ctx, secretKey, foundSecret)
if err != nil && errors.IsNotFound(err) {
// Create the Secret
if err := r.Create(ctx, secret); err != nil {
r.Recorder.Event(backupJob, corev1.EventTypeWarning, "SecretCreationFailed",
fmt.Sprintf("Failed to create Velero credentials secret %s/%s: %v", secretNamespace, secretName, err))
return fmt.Errorf("failed to create Velero credentials secret: %w", err)
}
logger.Debug("created Velero credentials secret", "secret", secretName)
r.Recorder.Event(backupJob, corev1.EventTypeNormal, "SecretCreated",
fmt.Sprintf("Created Velero credentials secret %s/%s", secretNamespace, secretName))
} else if err == nil {
// Update if necessary - only update if the secret data has actually changed
// Compare the new secret data with existing secret data
existingData := foundSecret.Data
if existingData == nil {
existingData = make(map[string][]byte)
}
newData := make(map[string][]byte)
for k, v := range secret.StringData {
newData[k] = []byte(v)
}
// Check if data has changed
dataChanged := false
if len(existingData) != len(newData) {
dataChanged = true
} else {
for k, newVal := range newData {
existingVal, exists := existingData[k]
if !exists || !reflect.DeepEqual(existingVal, newVal) {
dataChanged = true
break
}
}
}
if dataChanged {
foundSecret.StringData = secret.StringData
foundSecret.Data = nil // Clear .Data so .StringData will be used
if err := r.Update(ctx, foundSecret); err != nil {
r.Recorder.Event(backupJob, corev1.EventTypeWarning, "SecretUpdateFailed",
fmt.Sprintf("Failed to update Velero credentials secret %s/%s: %v", secretNamespace, secretName, err))
return fmt.Errorf("failed to update Velero credentials secret: %w", err)
}
logger.Debug("updated Velero credentials secret", "secret", secretName)
r.Recorder.Event(backupJob, corev1.EventTypeNormal, "SecretUpdated",
fmt.Sprintf("Updated Velero credentials secret %s/%s", secretNamespace, secretName))
} else {
logger.Debug("Velero credentials secret data unchanged, skipping update", "secret", secretName)
}
} else if err != nil {
return fmt.Errorf("error checking for existing Velero credentials secret: %w", err)
}
return nil
}
// createBackupStorageLocation creates or updates a Velero BackupStorageLocation resource.
func (r *BackupJobReconciler) createBackupStorageLocation(ctx context.Context, bsl *velerov1.BackupStorageLocation) error {
logger := getLogger(ctx)
foundBSL := &velerov1.BackupStorageLocation{}
bslKey := client.ObjectKey{Name: bsl.Name, Namespace: bsl.Namespace}
err := r.Get(ctx, bslKey, foundBSL)
if err != nil && errors.IsNotFound(err) {
// Create the BackupStorageLocation
if err := r.Create(ctx, bsl); err != nil {
return fmt.Errorf("failed to create BackupStorageLocation: %w", err)
}
logger.Debug("created BackupStorageLocation", "name", bsl.Name, "namespace", bsl.Namespace)
} else if err == nil {
// Update if necessary - use patch to avoid conflicts with Velero's status updates
// Only update if the spec has actually changed
if !reflect.DeepEqual(foundBSL.Spec, bsl.Spec) {
// Retry on conflict since Velero may be updating status concurrently
for i := 0; i < 3; i++ {
if err := r.Get(ctx, bslKey, foundBSL); err != nil {
return fmt.Errorf("failed to get BackupStorageLocation for update: %w", err)
}
foundBSL.Spec = bsl.Spec
if err := r.Update(ctx, foundBSL); err != nil {
if errors.IsConflict(err) && i < 2 {
logger.Debug("conflict updating BackupStorageLocation, retrying", "attempt", i+1)
time.Sleep(100 * time.Millisecond)
continue
}
return fmt.Errorf("failed to update BackupStorageLocation: %w", err)
}
logger.Debug("updated BackupStorageLocation", "name", bsl.Name, "namespace", bsl.Namespace)
return nil
}
} else {
logger.Debug("BackupStorageLocation spec unchanged, skipping update", "name", bsl.Name, "namespace", bsl.Namespace)
}
} else if err != nil {
return fmt.Errorf("error checking for existing BackupStorageLocation: %w", err)
}
return nil
}
// createVolumeSnapshotLocation creates or updates a Velero VolumeSnapshotLocation resource.
func (r *BackupJobReconciler) createVolumeSnapshotLocation(ctx context.Context, vsl *velerov1.VolumeSnapshotLocation) error {
logger := getLogger(ctx)
foundVSL := &velerov1.VolumeSnapshotLocation{}
vslKey := client.ObjectKey{Name: vsl.Name, Namespace: vsl.Namespace}
err := r.Get(ctx, vslKey, foundVSL)
if err != nil && errors.IsNotFound(err) {
// Create the VolumeSnapshotLocation
if err := r.Create(ctx, vsl); err != nil {
return fmt.Errorf("failed to create VolumeSnapshotLocation: %w", err)
}
logger.Debug("created VolumeSnapshotLocation", "name", vsl.Name, "namespace", vsl.Namespace)
} else if err == nil {
// Update if necessary - only update if the spec has actually changed
if !reflect.DeepEqual(foundVSL.Spec, vsl.Spec) {
// Retry on conflict since Velero may be updating status concurrently
for i := 0; i < 3; i++ {
if err := r.Get(ctx, vslKey, foundVSL); err != nil {
return fmt.Errorf("failed to get VolumeSnapshotLocation for update: %w", err)
}
foundVSL.Spec = vsl.Spec
if err := r.Update(ctx, foundVSL); err != nil {
if errors.IsConflict(err) && i < 2 {
logger.Debug("conflict updating VolumeSnapshotLocation, retrying", "attempt", i+1)
time.Sleep(100 * time.Millisecond)
continue
}
return fmt.Errorf("failed to update VolumeSnapshotLocation: %w", err)
}
logger.Debug("updated VolumeSnapshotLocation", "name", vsl.Name, "namespace", vsl.Namespace)
return nil
}
} else {
logger.Debug("VolumeSnapshotLocation spec unchanged, skipping update", "name", vsl.Name, "namespace", vsl.Namespace)
}
} else if err != nil {
return fmt.Errorf("error checking for existing VolumeSnapshotLocation: %w", err)
}
return nil
}
func (r *BackupJobReconciler) markBackupJobFailed(ctx context.Context, backupJob *backupsv1alpha1.BackupJob, message string) (ctrl.Result, error) {
logger := getLogger(ctx)
now := metav1.Now()
backupJob.Status.CompletedAt = &now
backupJob.Status.Phase = backupsv1alpha1.BackupJobPhaseFailed
backupJob.Status.Message = message
// Add condition
backupJob.Status.Conditions = append(backupJob.Status.Conditions, metav1.Condition{
Type: "Ready",
Status: metav1.ConditionFalse,
Reason: "BackupFailed",
Message: message,
LastTransitionTime: now,
})
if err := r.Status().Update(ctx, backupJob); err != nil {
logger.Error(err, "failed to update BackupJob status to Failed")
return ctrl.Result{}, err
}
logger.Debug("BackupJob failed", "message", message)
return ctrl.Result{}, nil
}
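// createVeleroBackup renders the strategy's backup template against the
// referenced application object and creates the resulting Velero Backup in
// the cozy-velero namespace, labeled with the owning BackupJob.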
func (r *BackupJobReconciler) createVeleroBackup(ctx context.Context, backupJob *backupsv1alpha1.BackupJob, strategy *strategyv1alpha1.Velero) error {
logger := getLogger(ctx)
logger.Debug("createVeleroBackup called", "strategy", strategy.Name)
mapping, err := r.RESTMapping(schema.GroupKind{Group: *backupJob.Spec.ApplicationRef.APIGroup, Kind: backupJob.Spec.ApplicationRef.Kind})
if err != nil {
return err
}
ns := backupJob.Namespace
if mapping.Scope.Name() != meta.RESTScopeNameNamespace {
ns = ""
}
app, err := r.Resource(mapping.Resource).Namespace(ns).Get(ctx, backupJob.Spec.ApplicationRef.Name, metav1.GetOptions{})
if err != nil {
return err
}
veleroBackupSpec, err := template.Template(&strategy.Spec.Template.Spec, app.Object)
if err != nil {
return err
}
veleroBackup := &velerov1.Backup{
ObjectMeta: metav1.ObjectMeta{
GenerateName: fmt.Sprintf("%s.%s-", backupJob.Namespace, backupJob.Name),
Namespace: veleroNamespace,
Labels: map[string]string{
backupsv1alpha1.OwningJobNameLabel: backupJob.Name,
backupsv1alpha1.OwningJobNamespaceLabel: backupJob.Namespace,
},
},
Spec: *veleroBackupSpec,
}
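// The API server derives the final metadata.name from GenerateName
// ("<namespace>.<job>-" plus a random suffix) once Create succeeds.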
name := veleroBackup.GenerateName
if err := r.Create(ctx, veleroBackup); err != nil {
if veleroBackup.Name != "" {
name = veleroBackup.Name
}
logger.Error(err, "failed to create Velero Backup", "name", veleroBackup.Name)
r.Recorder.Event(backupJob, corev1.EventTypeWarning, "VeleroBackupCreationFailed",
fmt.Sprintf("Failed to create Velero Backup %s/%s: %v", veleroNamespace, name, err))
return err
}
logger.Debug("created Velero Backup", "name", veleroBackup.Name, "namespace", veleroBackup.Namespace)
r.Recorder.Event(backupJob, corev1.EventTypeNormal, "VeleroBackupCreated",
fmt.Sprintf("Created Velero Backup %s/%s", veleroNamespace, name))
return nil
}
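// createBackupResource records a completed Velero Backup as a
// backups.cozystack.io Backup object owned by the BackupJob.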
func (r *BackupJobReconciler) createBackupResource(ctx context.Context, backupJob *backupsv1alpha1.BackupJob, veleroBackup *velerov1.Backup) (*backupsv1alpha1.Backup, error) {
logger := getLogger(ctx)
// Extract artifact information from Velero Backup
// Create a basic artifact referencing the Velero backup
artifact := &backupsv1alpha1.BackupArtifact{
URI: fmt.Sprintf("velero://%s/%s", backupJob.Namespace, veleroBackup.Name),
}
// Get takenAt from Velero Backup creation timestamp or status
takenAt := metav1.Now()
if veleroBackup.Status.StartTimestamp != nil {
takenAt = *veleroBackup.Status.StartTimestamp
} else if !veleroBackup.CreationTimestamp.IsZero() {
takenAt = veleroBackup.CreationTimestamp
}
// Extract driver metadata (e.g., Velero backup name)
driverMetadata := map[string]string{
"velero.io/backup-name": veleroBackup.Name,
"velero.io/backup-namespace": veleroBackup.Namespace,
}
backup := &backupsv1alpha1.Backup{
ObjectMeta: metav1.ObjectMeta{
Name: backupJob.Name,
Namespace: backupJob.Namespace,
OwnerReferences: []metav1.OwnerReference{
{
APIVersion: backupJob.APIVersion,
Kind: backupJob.Kind,
Name: backupJob.Name,
UID: backupJob.UID,
Controller: boolPtr(true),
},
},
},
Spec: backupsv1alpha1.BackupSpec{
ApplicationRef: backupJob.Spec.ApplicationRef,
StorageRef: backupJob.Spec.StorageRef,
StrategyRef: backupJob.Spec.StrategyRef,
TakenAt: takenAt,
DriverMetadata: driverMetadata,
},
Status: backupsv1alpha1.BackupStatus{
Phase: backupsv1alpha1.BackupPhaseReady,
},
}
if backupJob.Spec.PlanRef != nil {
backup.Spec.PlanRef = backupJob.Spec.PlanRef
}
backup.Status.Artifact = artifact
if err := r.Create(ctx, backup); err != nil {
logger.Error(err, "failed to create Backup resource")
return nil, err
}
logger.Debug("created Backup resource", "name", backup.Name)
return backup, nil
}

View File

@@ -97,7 +97,34 @@ func (r *CozystackResourceDefinitionHelmReconciler) updateHelmReleasesForCRD(ctx
return nil
}
// updateHelmReleaseChart updates the chart in HelmRelease based on CozystackResourceDefinition
// expectedValuesFrom returns the expected valuesFrom configuration for HelmReleases
func expectedValuesFrom() []helmv2.ValuesReference {
return []helmv2.ValuesReference{
{
Kind: "Secret",
Name: "cozystack-values",
},
}
}
// valuesFromEqual compares two ValuesReference slices
func valuesFromEqual(a, b []helmv2.ValuesReference) bool {
if len(a) != len(b) {
return false
}
for i := range a {
if a[i].Kind != b[i].Kind ||
a[i].Name != b[i].Name ||
a[i].ValuesKey != b[i].ValuesKey ||
a[i].TargetPath != b[i].TargetPath ||
a[i].Optional != b[i].Optional {
return false
}
}
return true
}
// updateHelmReleaseChart updates the chart and valuesFrom in HelmRelease based on CozystackResourceDefinition
func (r *CozystackResourceDefinitionHelmReconciler) updateHelmReleaseChart(ctx context.Context, hr *helmv2.HelmRelease, crd *cozyv1alpha1.CozystackResourceDefinition) error {
logger := log.FromContext(ctx)
hrCopy := hr.DeepCopy()
@@ -154,6 +181,14 @@ func (r *CozystackResourceDefinitionHelmReconciler) updateHelmReleaseChart(ctx c
}
}
// Check and update valuesFrom configuration
expected := expectedValuesFrom()
if !valuesFromEqual(hrCopy.Spec.ValuesFrom, expected) {
logger.V(4).Info("Updating HelmRelease valuesFrom", "name", hr.Name, "namespace", hr.Namespace)
hrCopy.Spec.ValuesFrom = expected
updated = true
}
if updated {
logger.V(4).Info("Updating HelmRelease chart", "name", hr.Name, "namespace", hr.Namespace)
if err := r.Update(ctx, hrCopy); err != nil {

View File

@@ -1,140 +0,0 @@
package controller
import (
"context"
"crypto/sha256"
"encoding/hex"
"fmt"
"sort"
"time"
helmv2 "github.com/fluxcd/helm-controller/api/v2"
corev1 "k8s.io/api/core/v1"
kerrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/predicate"
)
type CozystackConfigReconciler struct {
client.Client
Scheme *runtime.Scheme
}
var configMapNames = []string{"cozystack", "cozystack-branding", "cozystack-scheduling"}
const configMapNamespace = "cozy-system"
const digestAnnotation = "cozystack.io/cozy-config-digest"
const forceReconcileKey = "reconcile.fluxcd.io/forceAt"
const requestedAt = "reconcile.fluxcd.io/requestedAt"
func (r *CozystackConfigReconciler) Reconcile(ctx context.Context, _ ctrl.Request) (ctrl.Result, error) {
log := log.FromContext(ctx)
time.Sleep(2 * time.Second)
digest, err := r.computeDigest(ctx)
if err != nil {
log.Error(err, "failed to compute config digest")
return ctrl.Result{}, nil
}
var helmList helmv2.HelmReleaseList
if err := r.List(ctx, &helmList); err != nil {
return ctrl.Result{}, fmt.Errorf("failed to list HelmReleases: %w", err)
}
now := time.Now().Format(time.RFC3339Nano)
updated := 0
for _, hr := range helmList.Items {
isSystemApp := hr.Labels["cozystack.io/system-app"] == "true"
isTenantRoot := hr.Namespace == "tenant-root" && hr.Name == "tenant-root"
if !isSystemApp && !isTenantRoot {
continue
}
patchTarget := hr.DeepCopy()
if hr.Annotations == nil {
hr.Annotations = map[string]string{}
}
if hr.Annotations[digestAnnotation] == digest {
continue
}
patchTarget.Annotations[digestAnnotation] = digest
patchTarget.Annotations[forceReconcileKey] = now
patchTarget.Annotations[requestedAt] = now
patch := client.MergeFrom(hr.DeepCopy())
if err := r.Patch(ctx, patchTarget, patch); err != nil {
log.Error(err, "failed to patch HelmRelease", "name", hr.Name, "namespace", hr.Namespace)
continue
}
updated++
log.Info("patched HelmRelease with new config digest", "name", hr.Name, "namespace", hr.Namespace)
}
log.Info("finished reconciliation", "updatedHelmReleases", updated)
return ctrl.Result{}, nil
}
func (r *CozystackConfigReconciler) computeDigest(ctx context.Context) (string, error) {
hash := sha256.New()
for _, name := range configMapNames {
var cm corev1.ConfigMap
err := r.Get(ctx, client.ObjectKey{Namespace: configMapNamespace, Name: name}, &cm)
if err != nil {
if kerrors.IsNotFound(err) {
continue // ignore missing
}
return "", err
}
// Sort keys for consistent hashing
var keys []string
for k := range cm.Data {
keys = append(keys, k)
}
sort.Strings(keys)
for _, k := range keys {
v := cm.Data[k]
fmt.Fprintf(hash, "%s:%s=%s\n", name, k, v)
}
}
return hex.EncodeToString(hash.Sum(nil)), nil
}
func (r *CozystackConfigReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
WithEventFilter(predicate.Funcs{
UpdateFunc: func(e event.UpdateEvent) bool {
cm, ok := e.ObjectNew.(*corev1.ConfigMap)
return ok && cm.Namespace == configMapNamespace && contains(configMapNames, cm.Name)
},
CreateFunc: func(e event.CreateEvent) bool {
cm, ok := e.Object.(*corev1.ConfigMap)
return ok && cm.Namespace == configMapNamespace && contains(configMapNames, cm.Name)
},
DeleteFunc: func(e event.DeleteEvent) bool {
cm, ok := e.Object.(*corev1.ConfigMap)
return ok && cm.Namespace == configMapNamespace && contains(configMapNames, cm.Name)
},
}).
For(&corev1.ConfigMap{}).
Complete(r)
}
func contains(slice []string, val string) bool {
for _, s := range slice {
if s == val {
return true
}
}
return false
}

View File

@@ -1,159 +0,0 @@
package controller
import (
"context"
"fmt"
"strings"
"time"
e "errors"
helmv2 "github.com/fluxcd/helm-controller/api/v2"
"gopkg.in/yaml.v2"
corev1 "k8s.io/api/core/v1"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/log"
)
type TenantHelmReconciler struct {
client.Client
Scheme *runtime.Scheme
}
func (r *TenantHelmReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
logger := log.FromContext(ctx)
time.Sleep(2 * time.Second)
hr := &helmv2.HelmRelease{}
if err := r.Get(ctx, req.NamespacedName, hr); err != nil {
if errors.IsNotFound(err) {
return ctrl.Result{}, nil
}
logger.Error(err, "unable to fetch HelmRelease")
return ctrl.Result{}, err
}
if !strings.HasPrefix(hr.Name, "tenant-") {
return ctrl.Result{}, nil
}
if len(hr.Status.Conditions) == 0 || hr.Status.Conditions[0].Type != "Ready" {
return ctrl.Result{}, nil
}
if len(hr.Status.History) == 0 {
logger.Info("no history in HelmRelease status", "name", hr.Name)
return ctrl.Result{}, nil
}
if hr.Status.History[0].Status != "deployed" {
return ctrl.Result{}, nil
}
newDigest := hr.Status.History[0].Digest
var hrList helmv2.HelmReleaseList
childNamespace := getChildNamespace(hr.Namespace, hr.Name)
if childNamespace == "tenant-root" && hr.Name == "tenant-root" {
if hr.Spec.Values == nil {
logger.Error(e.New("hr.Spec.Values is nil"), "cant annotate tenant-root ns")
return ctrl.Result{}, nil
}
err := annotateTenantRootNs(*hr.Spec.Values, r.Client)
if err != nil {
logger.Error(err, "cant annotate tenant-root ns")
return ctrl.Result{}, nil
}
logger.Info("namespace 'tenant-root' annotated")
}
if err := r.List(ctx, &hrList, client.InNamespace(childNamespace)); err != nil {
logger.Error(err, "unable to list HelmReleases in namespace", "namespace", hr.Name)
return ctrl.Result{}, err
}
for _, item := range hrList.Items {
if item.Name == hr.Name {
continue
}
oldDigest := item.GetAnnotations()["cozystack.io/tenant-config-digest"]
if oldDigest == newDigest {
continue
}
patchTarget := item.DeepCopy()
if patchTarget.Annotations == nil {
patchTarget.Annotations = map[string]string{}
}
ts := time.Now().Format(time.RFC3339Nano)
patchTarget.Annotations["cozystack.io/tenant-config-digest"] = newDigest
patchTarget.Annotations["reconcile.fluxcd.io/forceAt"] = ts
patchTarget.Annotations["reconcile.fluxcd.io/requestedAt"] = ts
patch := client.MergeFrom(item.DeepCopy())
if err := r.Patch(ctx, patchTarget, patch); err != nil {
logger.Error(err, "failed to patch HelmRelease", "name", patchTarget.Name)
continue
}
logger.Info("patched HelmRelease with new digest", "name", patchTarget.Name, "digest", newDigest, "version", hr.Status.History[0].Version)
}
return ctrl.Result{}, nil
}
func (r *TenantHelmReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&helmv2.HelmRelease{}).
Complete(r)
}
func getChildNamespace(currentNamespace, hrName string) string {
tenantName := strings.TrimPrefix(hrName, "tenant-")
switch {
case currentNamespace == "tenant-root" && hrName == "tenant-root":
// 1) root tenant inside root namespace
return "tenant-root"
case currentNamespace == "tenant-root":
// 2) any other tenant in root namespace
return fmt.Sprintf("tenant-%s", tenantName)
default:
// 3) tenant in a dedicated namespace
return fmt.Sprintf("%s-%s", currentNamespace, tenantName)
}
}
func annotateTenantRootNs(values apiextensionsv1.JSON, c client.Client) error {
var data map[string]interface{}
if err := yaml.Unmarshal(values.Raw, &data); err != nil {
return fmt.Errorf("failed to parse HelmRelease values: %w", err)
}
host, ok := data["host"].(string)
if !ok || host == "" {
return fmt.Errorf("host field not found or not a string")
}
var ns corev1.Namespace
if err := c.Get(context.TODO(), client.ObjectKey{Name: "tenant-root"}, &ns); err != nil {
return fmt.Errorf("failed to get namespace tenant-root: %w", err)
}
if ns.Annotations == nil {
ns.Annotations = map[string]string{}
}
ns.Annotations["namespace.cozystack.io/host"] = host
if err := c.Update(context.TODO(), &ns); err != nil {
return fmt.Errorf("failed to update namespace: %w", err)
}
return nil
}

View File

@@ -0,0 +1,160 @@
package cozyvaluesreplicator
import (
"context"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
)
// Reconciler fields this setup relies on.
type SecretReplicatorReconciler struct {
client.Client
Scheme *runtime.Scheme
// Source of truth:
SourceNamespace string
SecretName string
// Namespaces to replicate into:
// (e.g. labels.SelectorFromSet(labels.Set{"tenant":"true"}), or metav1.LabelSelectorAsSelector(...))
TargetNamespaceSelector labels.Selector
}
func (r *SecretReplicatorReconciler) SetupWithManager(mgr ctrl.Manager) error {
// 1) Primary watch for requirement (b):
// Reconcile any Secret named r.SecretName in any namespace (including the source).
// This keeps those Secrets cached and ensures a changed copy triggers its own reconcile.
secretNameOnly := predicate.NewPredicateFuncs(func(obj client.Object) bool {
return obj.GetName() == r.SecretName
})
// 2) Secondary watch for requirement (c):
// When the *source* Secret changes, fan-out reconcile requests to every matching namespace.
onlySourceSecret := predicate.Funcs{
CreateFunc: func(e event.CreateEvent) bool { return isSourceSecret(e.Object, r) },
UpdateFunc: func(e event.UpdateEvent) bool { return isSourceSecret(e.ObjectNew, r) },
DeleteFunc: func(e event.DeleteEvent) bool { return isSourceSecret(e.Object, r) },
GenericFunc: func(e event.GenericEvent) bool {
return isSourceSecret(e.Object, r)
},
}
// Fan-out mapper for source Secret events -> one request per matching target namespace.
fanOutOnSourceSecret := handler.EnqueueRequestsFromMapFunc(func(ctx context.Context, _ client.Object) []reconcile.Request {
// List namespaces *from the cache* (because we also watch Namespaces below).
var nsList corev1.NamespaceList
if err := r.List(ctx, &nsList); err != nil {
// If list fails, best-effort: return nothing; reconcile will be retried by next event.
return nil
}
reqs := make([]reconcile.Request, 0, len(nsList.Items))
for i := range nsList.Items {
ns := &nsList.Items[i]
if ns.Name == r.SourceNamespace {
continue
}
if r.TargetNamespaceSelector != nil && !r.TargetNamespaceSelector.Matches(labels.Set(ns.Labels)) {
continue
}
reqs = append(reqs, reconcile.Request{
NamespacedName: types.NamespacedName{
Namespace: ns.Name,
Name: r.SecretName,
},
})
}
return reqs
})
// 3) Namespace watch for requirement (a):
// When a namespace is created/updated to match selector, enqueue reconcile for the Secret copy in that namespace.
enqueueOnNamespaceMatch := handler.EnqueueRequestsFromMapFunc(func(ctx context.Context, obj client.Object) []reconcile.Request {
ns, ok := obj.(*corev1.Namespace)
if !ok {
return nil
}
if ns.Name == r.SourceNamespace {
return nil
}
if r.TargetNamespaceSelector != nil && !r.TargetNamespaceSelector.Matches(labels.Set(ns.Labels)) {
return nil
}
return []reconcile.Request{{
NamespacedName: types.NamespacedName{
Namespace: ns.Name,
Name: r.SecretName,
},
}}
})
// Only trigger from namespace events where the label match may be (or become) true.
// (You can keep this simple; it's fine if it fires on any update, since Reconcile should be idempotent.)
namespaceMayMatter := predicate.Funcs{
CreateFunc: func(e event.CreateEvent) bool {
ns, ok := e.Object.(*corev1.Namespace)
return ok && (r.TargetNamespaceSelector == nil || r.TargetNamespaceSelector.Matches(labels.Set(ns.Labels)))
},
UpdateFunc: func(e event.UpdateEvent) bool {
oldNS, okOld := e.ObjectOld.(*corev1.Namespace)
newNS, okNew := e.ObjectNew.(*corev1.Namespace)
if !okOld || !okNew {
return false
}
// Fire if it matches now OR matched before (covers transitions both ways; reconcile can decide what to do).
oldMatch := r.TargetNamespaceSelector == nil || r.TargetNamespaceSelector.Matches(labels.Set(oldNS.Labels))
newMatch := r.TargetNamespaceSelector == nil || r.TargetNamespaceSelector.Matches(labels.Set(newNS.Labels))
return oldMatch || newMatch
},
DeleteFunc: func(event.DeleteEvent) bool { return false }, // nothing to do on namespace delete
GenericFunc: func(event.GenericEvent) bool { return false },
}
return ctrl.NewControllerManagedBy(mgr).
// (b) Watch all Secrets with the chosen name; this also ensures Secret objects are cached.
For(&corev1.Secret{}, builder.WithPredicates(secretNameOnly)).
// (c) Add a second watch on Secret, but only for the source secret, and fan-out to all namespaces.
Watches(
&corev1.Secret{},
fanOutOnSourceSecret,
builder.WithPredicates(onlySourceSecret),
).
// (a) Watch Namespaces so they're cached and so “namespace appears / starts matching” enqueues reconcile.
Watches(
&corev1.Namespace{},
enqueueOnNamespaceMatch,
builder.WithPredicates(namespaceMayMatter),
).
Complete(r)
}
func isSourceSecret(obj client.Object, r *SecretReplicatorReconciler) bool {
if obj == nil {
return false
}
return obj.GetNamespace() == r.SourceNamespace && obj.GetName() == r.SecretName
}
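// Reconcile copies the source Secret into the requested target namespace,
// creating the copy when it is missing and refreshing its data otherwise.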
func (r *SecretReplicatorReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
if req.Name != r.SecretName || req.Namespace == r.SourceNamespace {
return ctrl.Result{}, nil
}
originalSecret := &corev1.Secret{}
if err := r.Get(ctx, types.NamespacedName{Namespace: r.SourceNamespace, Name: r.SecretName}, originalSecret); err != nil {
// Source secret missing or unreadable: nothing to replicate yet.
return ctrl.Result{}, client.IgnoreNotFound(err)
}
// Build a fresh copy without server-assigned metadata so it is valid in the target namespace.
replicatedSecret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{Namespace: req.Namespace, Name: r.SecretName},
Type: originalSecret.Type,
Data: originalSecret.Data,
}
err := r.Create(ctx, replicatedSecret)
if apierrors.IsAlreadyExists(err) {
existing := &corev1.Secret{}
if err := r.Get(ctx, types.NamespacedName{Namespace: req.Namespace, Name: r.SecretName}, existing); err != nil {
return ctrl.Result{}, err
}
existing.Data, existing.Type = originalSecret.Data, originalSecret.Type
err = r.Update(ctx, existing)
}
return ctrl.Result{}, err
}
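A minimal wiring sketch for this reconciler (a sketch under assumptions: the mgr variable, the cozy-system source namespace, and the tenant label selector are illustrative, not taken from this diff):
if err := (&cozyvaluesreplicator.SecretReplicatorReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
SourceNamespace: "cozy-system",
SecretName: "cozystack-values",
TargetNamespaceSelector: labels.SelectorFromSet(labels.Set{"cozystack.io/tenant": "true"}),
}).SetupWithManager(mgr); err != nil {
return fmt.Errorf("unable to set up secret replicator: %w", err)
}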

View File

@@ -0,0 +1,68 @@
package template
import (
"bytes"
"encoding/json"
tmpl "text/template"
)
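// Template renders every string field of obj as a Go text/template against
// templateContext. Strings that fail to parse or execute are left unchanged.
// obj round-trips through JSON, so any JSON-serializable type works.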
func Template[T any](obj *T, templateContext map[string]any) (*T, error) {
b, err := json.Marshal(obj)
if err != nil {
return nil, err
}
var unstructured any
err = json.Unmarshal(b, &unstructured)
if err != nil {
return nil, err
}
templateFunc := func(in string) string {
out, err := template(in, templateContext)
if err != nil {
return in
}
return out
}
unstructured = mapAtStrings(unstructured, templateFunc)
b, err = json.Marshal(unstructured)
if err != nil {
return nil, err
}
var out T
err = json.Unmarshal(b, &out)
if err != nil {
return nil, err
}
return &out, nil
}
func mapAtStrings(v any, f func(string) string) any {
switch x := v.(type) {
case map[string]any:
for k, val := range x {
x[k] = mapAtStrings(val, f)
}
return x
case []any:
for i, val := range x {
x[i] = mapAtStrings(val, f)
}
return x
case string:
return f(x)
default:
return v
}
}
func template(in string, templateContext map[string]any) (string, error) {
tpl, err := tmpl.New("this").Parse(in)
if err != nil {
return "", err
}
var buf bytes.Buffer
if err := tpl.Execute(&buf, templateContext); err != nil {
return "", err
}
return buf.String(), nil
}

View File

@@ -0,0 +1,68 @@
package template
import (
"encoding/json"
"testing"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func TestTemplate_PodTemplateSpec(t *testing.T) {
original := corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Name: "my-pod",
Labels: map[string]string{
"app": "demo",
},
Annotations: map[string]string{
"note": "hello",
},
},
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "{{ .Release.Name }}",
Image: "nginx:1.21",
Args: []string{"--flag={{ .Values.value }}"},
Env: []corev1.EnvVar{
{
Name: "FOO",
Value: "{{ .Release.Namespace }}",
},
},
},
},
},
}
templateContext := map[string]any{
"Release": map[string]any{
"Name": "foo",
"Namespace": "notdefault",
},
"Values": map[string]any{
"value": 3,
},
}
reference := *original.DeepCopy()
reference.Spec.Containers[0].Name = "foo"
reference.Spec.Containers[0].Args[0] = "--flag=3"
reference.Spec.Containers[0].Env[0].Value = "notdefault"
got, err := Template(&original, templateContext)
if err != nil {
t.Fatalf("Template returned error: %v", err)
}
b1, err := json.Marshal(reference)
if err != nil {
t.Fatalf("failed to marshal reference value: %v", err)
}
t.Logf("reference:\n%s", string(b1))
b2, err := json.Marshal(got)
if err != nil {
t.Fatalf("failed to marshal transformed value: %v", err)
}
t.Logf("got:\n%s", string(b2))
if string(b1) != string(b2) {
t.Fatalf("transformed value not equal to reference value, expected: %s, got: %s", string(b1), string(b2))
}
}

View File

@@ -1,5 +1,4 @@
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $seaweedfs := index $myNS.metadata.annotations "namespace.cozystack.io/seaweedfs" }}
{{- $seaweedfs := .Values._namespace.seaweedfs }}
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClaim
metadata:

View File

@@ -21,5 +21,8 @@ spec:
force: true
remediation:
retries: -1
valuesFrom:
- kind: Secret
name: cozystack-values
values:
bucketName: {{ .Release.Name }}

View File

@@ -1,5 +1,4 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $clusterDomain := (index $cozyConfig.data "cluster-domain") | default "cozy.local" }}
{{- $clusterDomain := (index .Values._cluster "cluster-domain") | default "cozy.local" }}
{{- if .Values.clickhouseKeeper.enabled }}
apiVersion: "clickhouse-keeper.altinity.com/v1"

View File

@@ -1,5 +1,4 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $clusterDomain := (index $cozyConfig.data "cluster-domain") | default "cozy.local" }}
{{- $clusterDomain := (index .Values._cluster "cluster-domain") | default "cozy.local" }}
{{- $existingSecret := lookup "v1" "Secret" .Release.Namespace (printf "%s-credentials" .Release.Name) }}
{{- $passwords := dict }}
{{- $users := .Values.users }}

View File

@@ -50,9 +50,8 @@ spec:
postgresUID: 999
postgresGID: 999
enableSuperuserAccess: true
{{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }}
{{- if $configMap }}
{{- $rawConstraints := get $configMap.data "globalAppTopologySpreadConstraints" }}
{{- if .Values._cluster.scheduling }}
{{- $rawConstraints := get .Values._cluster.scheduling "globalAppTopologySpreadConstraints" }}
{{- if $rawConstraints }}
{{- $rawConstraints | fromYaml | toYaml | nindent 2 }}
labelSelector:

View File

@@ -1,5 +1,4 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" | default (dict "data" (dict)) }}
{{- $clusterDomain := index $cozyConfig.data "cluster-domain" | default "cozy.local" }}
{{- $clusterDomain := index .Values._cluster "cluster-domain" | default "cozy.local" }}
---
apiVersion: apps.foundationdb.org/v1beta2
kind: FoundationDBCluster

View File

@@ -1,5 +1,5 @@
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller.go b/pkg/controller/kubevirteps/kubevirteps_controller.go
index 53388eb8e..28644236f 100644
index 53388eb8e..873060251 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller.go
@@ -12,7 +12,6 @@ import (
@@ -10,12 +10,17 @@ index 53388eb8e..28644236f 100644
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
@@ -669,35 +668,50 @@ func (c *Controller) getDesiredEndpoints(service *v1.Service, tenantSlices []*di
@@ -666,38 +665,62 @@ func (c *Controller) getDesiredEndpoints(service *v1.Service, tenantSlices []*di
// for extracting the nodes it does not matter what type of address we are dealing with
// all nodes with an endpoint for a corresponding slice will be selected.
nodeSet := sets.Set[string]{}
+ hasEndpointsWithoutNodeName := false
for _, slice := range tenantSlices {
for _, endpoint := range slice.Endpoints {
// find all unique nodes that correspond to an endpoint in a tenant slice
+ if endpoint.NodeName == nil {
+ klog.Warningf("Skipping endpoint without NodeName in slice %s/%s", slice.Namespace, slice.Name)
+ hasEndpointsWithoutNodeName = true
+ continue
+ }
nodeSet.Insert(*endpoint.NodeName)
@@ -23,6 +28,13 @@ index 53388eb8e..28644236f 100644
}
- klog.Infof("Desired nodes for service %s in namespace %s: %v", service.Name, service.Namespace, sets.List(nodeSet))
+ // Fallback: if no endpoints with NodeName were found, but there are endpoints without NodeName,
+ // distribute traffic to all VMIs (similar to ExternalTrafficPolicy=Cluster behavior)
+ if nodeSet.Len() == 0 && hasEndpointsWithoutNodeName {
+ klog.Infof("No endpoints with NodeName found for service %s/%s, falling back to all VMIs", service.Namespace, service.Name)
+ return c.getAllVMIEndpoints()
+ }
+
+ klog.Infof("Desired nodes for service %s/%s: %v", service.Namespace, service.Name, sets.List(nodeSet))
for _, node := range sets.List(nodeSet) {
@@ -68,7 +80,7 @@ index 53388eb8e..28644236f 100644
desiredEndpoints = append(desiredEndpoints, &discovery.Endpoint{
Addresses: []string{i.IP},
Conditions: discovery.EndpointConditions{
@@ -705,9 +719,9 @@ func (c *Controller) getDesiredEndpoints(service *v1.Service, tenantSlices []*di
@@ -705,9 +728,9 @@ func (c *Controller) getDesiredEndpoints(service *v1.Service, tenantSlices []*di
Serving: &serving,
Terminating: &terminating,
},
@@ -80,6 +92,71 @@ index 53388eb8e..28644236f 100644
}
}
}
@@ -716,6 +739,64 @@ func (c *Controller) getDesiredEndpoints(service *v1.Service, tenantSlices []*di
return desiredEndpoints
}
+// getAllVMIEndpoints returns endpoints for all VMIs in the infra namespace.
+// This is used as a fallback when tenant endpoints don't have NodeName specified,
+// similar to ExternalTrafficPolicy=Cluster behavior where traffic is distributed to all nodes.
+func (c *Controller) getAllVMIEndpoints() []*discovery.Endpoint {
+ var endpoints []*discovery.Endpoint
+
+ // List all VMIs in the infra namespace
+ vmiList, err := c.infraDynamic.
+ Resource(kubevirtv1.VirtualMachineInstanceGroupVersionKind.GroupVersion().WithResource("virtualmachineinstances")).
+ Namespace(c.infraNamespace).
+ List(context.TODO(), metav1.ListOptions{})
+ if err != nil {
+ klog.Errorf("Failed to list VMIs in namespace %q: %v", c.infraNamespace, err)
+ return endpoints
+ }
+
+ for _, obj := range vmiList.Items {
+ vmi := &kubevirtv1.VirtualMachineInstance{}
+ err = runtime.DefaultUnstructuredConverter.FromUnstructured(obj.Object, vmi)
+ if err != nil {
+ klog.Errorf("Failed to convert Unstructured to VirtualMachineInstance: %v", err)
+ continue
+ }
+
+ if vmi.Status.NodeName == "" {
+ klog.Warningf("Skipping VMI %s/%s: NodeName is empty", vmi.Namespace, vmi.Name)
+ continue
+ }
+ nodeNamePtr := &vmi.Status.NodeName
+
+ ready := vmi.Status.Phase == kubevirtv1.Running
+ serving := vmi.Status.Phase == kubevirtv1.Running
+ terminating := vmi.Status.Phase == kubevirtv1.Failed || vmi.Status.Phase == kubevirtv1.Succeeded
+
+ for _, i := range vmi.Status.Interfaces {
+ if i.Name == "default" {
+ if i.IP == "" {
+ klog.Warningf("VMI %s/%s interface %q has no IP, skipping", vmi.Namespace, vmi.Name, i.Name)
+ continue
+ }
+ endpoints = append(endpoints, &discovery.Endpoint{
+ Addresses: []string{i.IP},
+ Conditions: discovery.EndpointConditions{
+ Ready: &ready,
+ Serving: &serving,
+ Terminating: &terminating,
+ },
+ NodeName: nodeNamePtr,
+ })
+ break
+ }
+ }
+ }
+
+ klog.Infof("Fallback: created %d endpoints from all VMIs in namespace %s", len(endpoints), c.infraNamespace)
+ return endpoints
+}
+
func (c *Controller) ensureEndpointSliceLabels(slice *discovery.EndpointSlice, svc *v1.Service) (map[string]string, bool) {
labels := make(map[string]string)
labelsChanged := false
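
A minimal sketch of the endpoint entry this fallback appends, assuming a VMI named vm-worker-1 running on node node-a with a "default" interface IP of 10.244.1.15 (all names and addresses hypothetical):

    # discovery.k8s.io/v1 Endpoint produced by the getAllVMIEndpoints fallback
    addresses:
      - 10.244.1.15          # IP of the VMI's "default" interface
    conditions:
      ready: true            # phase == Running
      serving: true          # same check as ready in this fallback
      terminating: false     # true only for Failed/Succeeded phases
    nodeName: node-a         # copied from vmi.Status.NodeName
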
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller_test.go b/pkg/controller/kubevirteps/kubevirteps_controller_test.go
index 1c97035b4..d205d0bed 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller_test.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller_test.go


@@ -1,7 +1,6 @@
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $etcd := index $myNS.metadata.annotations "namespace.cozystack.io/etcd" }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}
{{- $etcd := .Values._namespace.etcd }}
{{- $ingress := .Values._namespace.ingress }}
{{- $host := .Values._namespace.host }}
{{- $kubevirtmachinetemplateNames := list }}
{{- define "kubevirtmachinetemplate" -}}
spec:
@@ -31,9 +30,8 @@ spec:
{{- end }}
cluster.x-k8s.io/deployment-name: {{ $.Release.Name }}-{{ .groupName }}
spec:
{{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }}
{{- if $configMap }}
{{- $rawConstraints := get $configMap.data "globalAppTopologySpreadConstraints" }}
{{- if .Values._cluster.scheduling }}
{{- $rawConstraints := get .Values._cluster.scheduling "globalAppTopologySpreadConstraints" }}
{{- if $rawConstraints }}
{{- $rawConstraints | fromYaml | toYaml | nindent 10 }}
labelSelector:


@@ -1,5 +1,4 @@
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $targetTenant := index $myNS.metadata.annotations "namespace.cozystack.io/monitoring" }}
{{- $targetTenant := .Values._namespace.monitoring }}
{{- if .Values.addons.monitoringAgents.enabled }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease


@@ -1,8 +1,6 @@
{{- define "cozystack.defaultVPAValues" -}}
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $clusterDomain := (index $cozyConfig.data "cluster-domain") | default "cozy.local" }}
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $targetTenant := index $myNS.metadata.annotations "namespace.cozystack.io/monitoring" }}
{{- $clusterDomain := (index .Values._cluster "cluster-domain") | default "cozy.local" }}
{{- $targetTenant := .Values._namespace.monitoring }}
vpaForVPA: false
vertical-pod-autoscaler:
recommender:


@@ -1,5 +1,4 @@
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- $ingress := .Values._namespace.ingress }}
{{- if and (eq .Values.addons.ingressNginx.exposeMethod "Proxied") .Values.addons.ingressNginx.hosts }}
---
apiVersion: networking.k8s.io/v1


@@ -1,5 +1,4 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $clusterDomain := (index $cozyConfig.data "cluster-domain") | default "cozy.local" }}
{{- $clusterDomain := (index .Values._cluster "cluster-domain") | default "cozy.local" }}
{{- $existingSecret := lookup "v1" "Secret" .Release.Namespace (printf "%s-credentials" .Release.Name) }}
{{- $passwords := dict }}
@@ -53,6 +52,9 @@ spec:
force: true
remediation:
retries: -1
valuesFrom:
- kind: Secret
name: cozystack-values
values:
nats:
container:


@@ -46,9 +46,8 @@ spec:
imageName: ghcr.io/cloudnative-pg/postgresql:{{ include "postgres.versionMap" $ | trimPrefix "v" }}
enableSuperuserAccess: true
{{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }}
{{- if $configMap }}
{{- $rawConstraints := get $configMap.data "globalAppTopologySpreadConstraints" }}
{{- if .Values._cluster.scheduling }}
{{- $rawConstraints := get .Values._cluster.scheduling "globalAppTopologySpreadConstraints" }}
{{- if $rawConstraints }}
{{- $rawConstraints | fromYaml | toYaml | nindent 2 }}
labelSelector:


@@ -31,4 +31,7 @@ spec:
force: true
remediation:
retries: -1
valuesFrom:
- kind: Secret
name: cozystack-values
{{- end }}


@@ -30,3 +30,6 @@ spec:
force: true
remediation:
retries: -1
valuesFrom:
- kind: Secret
name: cozystack-values


@@ -31,4 +31,7 @@ spec:
force: true
remediation:
retries: -1
valuesFrom:
- kind: Secret
name: cozystack-values
{{- end }}


@@ -1,5 +1,4 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $oidcEnabled := index $cozyConfig.data "oidc-enabled" }}
{{- $oidcEnabled := index .Values._cluster "oidc-enabled" }}
{{- if eq $oidcEnabled "true" }}
{{- if .Capabilities.APIVersions.Has "v1.edp.epam.com/v1" }}
apiVersion: v1.edp.epam.com/v1


@@ -31,4 +31,7 @@ spec:
force: true
remediation:
retries: -1
valuesFrom:
- kind: Secret
name: cozystack-values
{{- end }}


@@ -1,46 +1,63 @@
{{- define "cozystack.namespace-anotations" }}
{{- $context := index . 0 }}
{{- $existingNS := index . 1 }}
{{- range $x := list "etcd" "monitoring" "ingress" "seaweedfs" }}
{{- if (index $context.Values $x) }}
namespace.cozystack.io/{{ $x }}: "{{ include "tenant.name" $context }}"
{{- else }}
namespace.cozystack.io/{{ $x }}: "{{ index $existingNS.metadata.annotations (printf "namespace.cozystack.io/%s" $x) | required (printf "namespace %s has no namespace.cozystack.io/%s annotation" $context.Release.Namespace $x) }}"
{{- end }}
{{- end }}
{{- end }}
{{/* Lookup for namespace uid (needed for ownerReferences) */}}
{{- $existingNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- if not $existingNS }}
{{- fail (printf "error lookup existing namespace: %s" .Release.Namespace) }}
{{- end }}
{{- if ne (include "tenant.name" .) "tenant-root" }}
{{/* Compute namespace values once for use in both Secret and labels */}}
{{- $tenantName := include "tenant.name" . }}
{{- $parentNamespace := .Values._namespace | default dict }}
{{- $parentHost := $parentNamespace.host | default "" }}
{{/* Compute host */}}
{{- $computedHost := "" }}
{{- if .Values.host }}
{{- $computedHost = .Values.host }}
{{- else if $parentHost }}
{{- $computedHost = printf "%s.%s" (splitList "-" $tenantName | last) $parentHost }}
{{- end }}
{{/* Compute service references */}}
{{- $etcd := $parentNamespace.etcd | default "" }}
{{- if .Values.etcd }}
{{- $etcd = $tenantName }}
{{- end }}
{{- $ingress := $parentNamespace.ingress | default "" }}
{{- if .Values.ingress }}
{{- $ingress = $tenantName }}
{{- end }}
{{- $monitoring := $parentNamespace.monitoring | default "" }}
{{- if .Values.monitoring }}
{{- $monitoring = $tenantName }}
{{- end }}
{{- $seaweedfs := $parentNamespace.seaweedfs | default "" }}
{{- if .Values.seaweedfs }}
{{- $seaweedfs = $tenantName }}
{{- end }}
---
apiVersion: v1
kind: Namespace
metadata:
name: {{ include "tenant.name" . }}
name: {{ $tenantName }}
{{- if hasPrefix "tenant-" .Release.Namespace }}
annotations:
{{- if .Values.host }}
namespace.cozystack.io/host: "{{ .Values.host }}"
{{- else }}
{{ $parentHost := index $existingNS.metadata.annotations "namespace.cozystack.io/host" | required (printf "namespace %s has no namespace.cozystack.io/host annotation" .Release.Namespace) }}
namespace.cozystack.io/host: "{{ splitList "-" (include "tenant.name" .) | last }}.{{ $parentHost }}"
{{- end }}
{{- include "cozystack.namespace-anotations" (list . $existingNS) | nindent 4 }}
labels:
tenant.cozystack.io/{{ include "tenant.name" $ }}: ""
{{- if hasPrefix "tenant-" .Release.Namespace }}
tenant.cozystack.io/{{ $tenantName }}: ""
{{- $parts := splitList "-" .Release.Namespace }}
{{- range $i, $v := $parts }}
{{- if ne $i 0 }}
tenant.cozystack.io/{{ join "-" (slice $parts 0 (add $i 1)) }}: ""
{{- end }}
{{- end }}
{{- end }}
{{- include "cozystack.namespace-anotations" (list $ $existingNS) | nindent 4 }}
{{/* Labels for network policies */}}
namespace.cozystack.io/etcd: {{ $etcd | quote }}
namespace.cozystack.io/ingress: {{ $ingress | quote }}
namespace.cozystack.io/monitoring: {{ $monitoring | quote }}
namespace.cozystack.io/seaweedfs: {{ $seaweedfs | quote }}
namespace.cozystack.io/host: {{ $computedHost | quote }}
alpha.kubevirt.io/auto-memory-limits-ratio: "1.0"
ownerReferences:
- apiVersion: v1
@@ -50,4 +67,23 @@ metadata:
name: {{ .Release.Namespace }}
uid: {{ $existingNS.metadata.uid }}
{{- end }}
---
apiVersion: v1
kind: Secret
metadata:
name: cozystack-values
namespace: {{ $tenantName }}
labels:
reconcile.fluxcd.io/watch: Enabled
type: Opaque
stringData:
values.yaml: |
_cluster:
{{- .Values._cluster | toYaml | nindent 6 }}
_namespace:
etcd: {{ $etcd | quote }}
ingress: {{ $ingress | quote }}
monitoring: {{ $monitoring | quote }}
seaweedfs: {{ $seaweedfs | quote }}
host: {{ $computedHost | quote }}
{{- end }}
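
For illustration, a child tenant named tenant-team1 under tenant-root (host example.org) that enables its own ingress but inherits everything else would get roughly this Secret (all values hypothetical):

    apiVersion: v1
    kind: Secret
    metadata:
      name: cozystack-values
      namespace: tenant-team1
    stringData:
      values.yaml: |
        _cluster:
          cluster-domain: "cozy.local"   # parent's _cluster block, copied as-is
        _namespace:
          etcd: "tenant-root"            # inherited from the parent namespace
          ingress: "tenant-team1"        # this tenant runs its own ingress
          monitoring: "tenant-root"
          seaweedfs: "tenant-root"
          host: "team1.example.org"      # <last name segment>.<parent host>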


@@ -31,4 +31,7 @@ spec:
force: true
remediation:
retries: -1
valuesFrom:
- kind: Secret
name: cozystack-values
{{- end }}


@@ -74,9 +74,8 @@ Generate a stable UUID for cloud-init re-initialization upon upgrade.
Node Affinity for Windows VMs
*/}}
{{- define "virtual-machine.nodeAffinity" -}}
{{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" -}}
{{- if $configMap -}}
{{- $dedicatedNodesForWindowsVMs := get $configMap.data "dedicatedNodesForWindowsVMs" -}}
{{- if .Values._cluster.scheduling -}}
{{- $dedicatedNodesForWindowsVMs := get .Values._cluster.scheduling "dedicatedNodesForWindowsVMs" -}}
{{- if eq $dedicatedNodesForWindowsVMs "true" -}}
{{- $isWindows := hasPrefix "windows" (toString .Values.instanceProfile) -}}
affinity:


@@ -74,9 +74,8 @@ Generate a stable UUID for cloud-init re-initialization upon upgrade.
Node Affinity for Windows VMs
*/}}
{{- define "virtual-machine.nodeAffinity" -}}
{{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" -}}
{{- if $configMap -}}
{{- $dedicatedNodesForWindowsVMs := get $configMap.data "dedicatedNodesForWindowsVMs" -}}
{{- if .Values._cluster.scheduling -}}
{{- $dedicatedNodesForWindowsVMs := get .Values._cluster.scheduling "dedicatedNodesForWindowsVMs" -}}
{{- if eq $dedicatedNodesForWindowsVMs "true" -}}
{{- $isWindows := hasPrefix "windows" (toString .Values.instanceProfile) -}}
affinity:


@@ -1,5 +1,4 @@
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}
{{- $host := .Values._namespace.host }}
{{- $existingSecret := lookup "v1" "Secret" .Release.Namespace (printf "%s-vpn" .Release.Name) }}
{{- $accessKeys := list }}
{{- $passwords := dict }}


@@ -62,11 +62,155 @@ releases:
namespace: cozy-system
dependsOn: [cilium]
- name: cozystack-resource-definitions
releaseName: cozystack-resource-definitions
chart: cozystack-resource-definitions
- name: bootbox-rd
releaseName: bootbox-rd
chart: bootbox-rd
namespace: cozy-system
dependsOn: [cilium,cozystack-controller,cozystack-resource-definition-crd]
dependsOn: [cozystack-resource-definition-crd]
- name: bucket-rd
releaseName: bucket-rd
chart: bucket-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: clickhouse-rd
releaseName: clickhouse-rd
chart: clickhouse-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: etcd-rd
releaseName: etcd-rd
chart: etcd-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: ferretdb-rd
releaseName: ferretdb-rd
chart: ferretdb-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: foundationdb-rd
releaseName: foundationdb-rd
chart: foundationdb-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: http-cache-rd
releaseName: http-cache-rd
chart: http-cache-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: info-rd
releaseName: info-rd
chart: info-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: ingress-rd
releaseName: ingress-rd
chart: ingress-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: kafka-rd
releaseName: kafka-rd
chart: kafka-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: kubernetes-rd
releaseName: kubernetes-rd
chart: kubernetes-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: monitoring-rd
releaseName: monitoring-rd
chart: monitoring-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: mysql-rd
releaseName: mysql-rd
chart: mysql-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: nats-rd
releaseName: nats-rd
chart: nats-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: postgres-rd
releaseName: postgres-rd
chart: postgres-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: rabbitmq-rd
releaseName: rabbitmq-rd
chart: rabbitmq-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: redis-rd
releaseName: redis-rd
chart: redis-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: seaweedfs-rd
releaseName: seaweedfs-rd
chart: seaweedfs-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: tcp-balancer-rd
releaseName: tcp-balancer-rd
chart: tcp-balancer-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: tenant-rd
releaseName: tenant-rd
chart: tenant-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: virtual-machine-rd
releaseName: virtual-machine-rd
chart: virtual-machine-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: virtualprivatecloud-rd
releaseName: virtualprivatecloud-rd
chart: virtualprivatecloud-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: vm-disk-rd
releaseName: vm-disk-rd
chart: vm-disk-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: vm-instance-rd
releaseName: vm-instance-rd
chart: vm-instance-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: vpn-rd
releaseName: vpn-rd
chart: vpn-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: cert-manager
releaseName: cert-manager


@@ -112,11 +112,155 @@ releases:
namespace: cozy-system
dependsOn: [cilium,kubeovn,cozystack-api,cozystack-controller]
- name: cozystack-resource-definitions
releaseName: cozystack-resource-definitions
chart: cozystack-resource-definitions
- name: bootbox-rd
releaseName: bootbox-rd
chart: bootbox-rd
namespace: cozy-system
dependsOn: [cilium,kubeovn,cozystack-api,cozystack-controller,cozystack-resource-definition-crd]
dependsOn: [cozystack-resource-definition-crd]
- name: bucket-rd
releaseName: bucket-rd
chart: bucket-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: clickhouse-rd
releaseName: clickhouse-rd
chart: clickhouse-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: etcd-rd
releaseName: etcd-rd
chart: etcd-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: ferretdb-rd
releaseName: ferretdb-rd
chart: ferretdb-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: foundationdb-rd
releaseName: foundationdb-rd
chart: foundationdb-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: http-cache-rd
releaseName: http-cache-rd
chart: http-cache-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: info-rd
releaseName: info-rd
chart: info-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: ingress-rd
releaseName: ingress-rd
chart: ingress-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: kafka-rd
releaseName: kafka-rd
chart: kafka-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: kubernetes-rd
releaseName: kubernetes-rd
chart: kubernetes-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: monitoring-rd
releaseName: monitoring-rd
chart: monitoring-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: mysql-rd
releaseName: mysql-rd
chart: mysql-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: nats-rd
releaseName: nats-rd
chart: nats-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: postgres-rd
releaseName: postgres-rd
chart: postgres-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: rabbitmq-rd
releaseName: rabbitmq-rd
chart: rabbitmq-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: redis-rd
releaseName: redis-rd
chart: redis-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: seaweedfs-rd
releaseName: seaweedfs-rd
chart: seaweedfs-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: tcp-balancer-rd
releaseName: tcp-balancer-rd
chart: tcp-balancer-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: tenant-rd
releaseName: tenant-rd
chart: tenant-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: virtual-machine-rd
releaseName: virtual-machine-rd
chart: virtual-machine-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: virtualprivatecloud-rd
releaseName: virtualprivatecloud-rd
chart: virtualprivatecloud-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: vm-disk-rd
releaseName: vm-disk-rd
chart: vm-disk-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: vm-instance-rd
releaseName: vm-instance-rd
chart: vm-instance-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: vpn-rd
releaseName: vpn-rd
chart: vpn-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: cert-manager
releaseName: cert-manager


@@ -55,11 +55,155 @@ releases:
namespace: cozy-system
dependsOn: [cozystack-api,cozystack-controller]
- name: cozystack-resource-definitions
releaseName: cozystack-resource-definitions
chart: cozystack-resource-definitions
- name: bootbox-rd
releaseName: bootbox-rd
chart: bootbox-rd
namespace: cozy-system
dependsOn: [cozystack-api,cozystack-controller,cozystack-resource-definition-crd]
dependsOn: [cozystack-resource-definition-crd]
- name: bucket-rd
releaseName: bucket-rd
chart: bucket-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: clickhouse-rd
releaseName: clickhouse-rd
chart: clickhouse-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: etcd-rd
releaseName: etcd-rd
chart: etcd-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: ferretdb-rd
releaseName: ferretdb-rd
chart: ferretdb-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: foundationdb-rd
releaseName: foundationdb-rd
chart: foundationdb-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: http-cache-rd
releaseName: http-cache-rd
chart: http-cache-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: info-rd
releaseName: info-rd
chart: info-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: ingress-rd
releaseName: ingress-rd
chart: ingress-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: kafka-rd
releaseName: kafka-rd
chart: kafka-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: kubernetes-rd
releaseName: kubernetes-rd
chart: kubernetes-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: monitoring-rd
releaseName: monitoring-rd
chart: monitoring-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: mysql-rd
releaseName: mysql-rd
chart: mysql-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: nats-rd
releaseName: nats-rd
chart: nats-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: postgres-rd
releaseName: postgres-rd
chart: postgres-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: rabbitmq-rd
releaseName: rabbitmq-rd
chart: rabbitmq-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: redis-rd
releaseName: redis-rd
chart: redis-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: seaweedfs-rd
releaseName: seaweedfs-rd
chart: seaweedfs-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: tcp-balancer-rd
releaseName: tcp-balancer-rd
chart: tcp-balancer-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: tenant-rd
releaseName: tenant-rd
chart: tenant-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: virtual-machine-rd
releaseName: virtual-machine-rd
chart: virtual-machine-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: virtualprivatecloud-rd
releaseName: virtualprivatecloud-rd
chart: virtualprivatecloud-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: vm-disk-rd
releaseName: vm-disk-rd
chart: vm-disk-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: vm-instance-rd
releaseName: vm-instance-rd
chart: vm-instance-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: vpn-rd
releaseName: vpn-rd
chart: vpn-rd
namespace: cozy-system
dependsOn: [cozystack-resource-definition-crd]
- name: cert-manager
releaseName: cert-manager


@@ -1,6 +1,22 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $cozystackBranding := lookup "v1" "ConfigMap" "cozy-system" "cozystack-branding" }}
{{- $cozystackScheduling := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }}
{{- $kubeRootCa := lookup "v1" "ConfigMap" "kube-system" "kube-root-ca.crt" }}
{{- $bundleName := index $cozyConfig.data "bundle-name" }}
{{- $bundle := tpl (.Files.Get (printf "bundles/%s.yaml" $bundleName)) . | fromYaml }}
{{/* Default values for _cluster config to ensure all required keys exist */}}
{{- $clusterDefaults := dict
"root-host" ""
"bundle-name" ""
"clusterissuer" "http01"
"oidc-enabled" "false"
"expose-services" ""
"expose-ingress" "tenant-root"
"expose-external-ips" ""
"cluster-domain" "cozy.local"
"api-server-endpoint" ""
}}
{{- $clusterConfig := mergeOverwrite $clusterDefaults ($cozyConfig.data | default dict) }}
{{- $host := "example.org" }}
{{- $host := "example.org" }}
{{- if $cozyConfig.data }}
@@ -22,6 +38,8 @@ kind: Namespace
metadata:
annotations:
helm.sh/resource-policy: keep
labels:
tenant.cozystack.io/tenant-root: ""
namespace.cozystack.io/etcd: tenant-root
namespace.cozystack.io/monitoring: tenant-root
namespace.cozystack.io/ingress: tenant-root
@@ -29,6 +47,36 @@ metadata:
namespace.cozystack.io/host: "{{ $host }}"
name: tenant-root
---
apiVersion: v1
kind: Secret
metadata:
name: cozystack-values
namespace: tenant-root
labels:
reconcile.fluxcd.io/watch: Enabled
type: Opaque
stringData:
values.yaml: |
_cluster:
{{- $clusterConfig | toYaml | nindent 6 }}
{{- with $cozystackBranding.data }}
branding:
{{- . | toYaml | nindent 8 }}
{{- end }}
{{- with $cozystackScheduling.data }}
scheduling:
{{- . | toYaml | nindent 8 }}
{{- end }}
{{- with $kubeRootCa.data }}
kube-root-ca: {{ index . "ca.crt" | b64enc | quote }}
{{- end }}
_namespace:
etcd: tenant-root
monitoring: tenant-root
ingress: tenant-root
seaweedfs: tenant-root
host: {{ $host | quote }}
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
@@ -56,6 +104,9 @@ spec:
kind: HelmRepository
name: cozystack-apps
namespace: cozy-public
valuesFrom:
- kind: Secret
name: cozystack-values
values:
host: "{{ $host }}"
dependsOn:
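
Note on merge order: Flux applies valuesFrom entries first and merges the inline values block last, so chart-level values can still override anything coming from the cozystack-values Secret. A sketch of the effective values for the HelmRelease above (contents hypothetical):

    # merged first, from Secret cozystack-values (valuesFrom)
    _cluster:
      cluster-domain: cozy.local
    _namespace:
      host: example.org
    # merged last, from spec.values; wins on any conflicting keys
    host: example.org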


@@ -83,6 +83,9 @@ spec:
values:
{{- toYaml . | nindent 4}}
{{- end }}
valuesFrom:
- kind: Secret
name: cozystack-values
{{- with $x.dependsOn }}
dependsOn:


@@ -1,5 +1,20 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $cozystackBranding := lookup "v1" "ConfigMap" "cozy-system" "cozystack-branding" }}
{{- $cozystackScheduling := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }}
{{- $bundleName := index $cozyConfig.data "bundle-name" }}
{{/* Default values for _cluster config to ensure all required keys exist */}}
{{- $clusterDefaults := dict
"root-host" ""
"bundle-name" ""
"clusterissuer" "http01"
"oidc-enabled" "false"
"expose-services" ""
"expose-ingress" "tenant-root"
"expose-external-ips" ""
"cluster-domain" "cozy.local"
"api-server-endpoint" ""
}}
{{- $clusterConfig := mergeOverwrite $clusterDefaults ($cozyConfig.data | default dict) }}
{{- $bundle := tpl (.Files.Get (printf "bundles/%s.yaml" $bundleName)) . | fromYaml }}
{{- $disabledComponents := splitList "," ((index $cozyConfig.data "bundle-disable") | default "") }}
{{- $enabledComponents := splitList "," ((index $cozyConfig.data "bundle-enable") | default "") }}
@@ -37,4 +52,25 @@ metadata:
pod-security.kubernetes.io/enforce: privileged
{{- end }}
name: {{ $namespace }}
---
apiVersion: v1
kind: Secret
metadata:
name: cozystack-values
namespace: {{ $namespace }}
labels:
reconcile.fluxcd.io/watch: Enabled
type: Opaque
stringData:
values.yaml: |
_cluster:
{{- $clusterConfig | toYaml | nindent 6 }}
{{- with $cozystackBranding.data }}
branding:
{{- . | toYaml | nindent 8 }}
{{- end }}
{{- with $cozystackScheduling.data }}
scheduling:
{{- . | toYaml | nindent 8 }}
{{- end }}
{{- end }}

packages/core/testing/Chart.yaml Executable file → Normal file

packages/core/testing/Makefile Executable file → Normal file

packages/core/testing/images/e2e-sandbox/Dockerfile Executable file → Normal file

@@ -9,7 +9,7 @@ ARG TARGETOS
ARG TARGETARCH
RUN apt update -q
RUN apt install -yq --no-install-recommends psmisc genisoimage ca-certificates qemu-kvm qemu-utils iproute2 iptables wget xz-utils netcat curl jq make git
RUN apt install -yq --no-install-recommends psmisc genisoimage ca-certificates qemu-kvm qemu-utils iproute2 iptables wget xz-utils netcat curl jq make git bash-completion
RUN curl -sSL "https://github.com/siderolabs/talos/releases/download/v${TALOSCTL_VERSION}/talosctl-${TARGETOS}-${TARGETARCH}" -o /usr/local/bin/talosctl \
&& chmod +x /usr/local/bin/talosctl
RUN curl -sSL "https://dl.k8s.io/release/v${KUBECTL_VERSION}/bin/${TARGETOS}/${TARGETARCH}/kubectl" -o /usr/local/bin/kubectl \
@@ -21,5 +21,13 @@ RUN curl -sSL "https://fluxcd.io/install.sh" | bash
RUN curl -sSL "https://github.com/cozystack/cozyhr/raw/refs/heads/main/hack/install.sh" | sh -s -- -v "${COZYHR_VERSION}"
RUN curl https://dl.min.io/client/mc/release/${TARGETOS}-${TARGETARCH}/mc --create-dirs -o /usr/local/bin/mc \
&& chmod +x /usr/local/bin/mc
RUN <<'EOF'
cat <<'EOT' >> /etc/bash.bashrc
. /etc/bash_completion
. <(kubectl completion bash)
alias k=kubectl
complete -F __start_kubectl k
EOT
EOF
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]

packages/core/testing/values.yaml Executable file → Normal file

@@ -1,9 +1,6 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $issuerType := (index $cozyConfig.data "clusterissuer") | default "http01" }}
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}
{{- $issuerType := (index .Values._cluster "clusterissuer") | default "http01" }}
{{- $ingress := .Values._namespace.ingress }}
{{- $host := .Values._namespace.host }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:


@@ -1,9 +1,4 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $issuerType := (index $cozyConfig.data "clusterissuer") | default "http01" }}
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}
{{- $host := .Values._namespace.host }}
{{ range $m := .Values.machines }}
---


@@ -49,10 +49,9 @@ spec:
{{- with .Values.resources }}
resources: {{- include "cozy-lib.resources.sanitize" (list . $) | nindent 10 }}
{{- end }}
{{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }}
{{- $rawConstraints := "" }}
{{- if $configMap }}
{{- $rawConstraints = get $configMap.data "globalAppTopologySpreadConstraints" }}
{{- if .Values._cluster.scheduling }}
{{- $rawConstraints = get .Values._cluster.scheduling "globalAppTopologySpreadConstraints" }}
{{- end }}
{{- if $rawConstraints }}
{{- $rawConstraints | fromYaml | toYaml | nindent 6 }}


@@ -12,6 +12,13 @@ spec:
containers:
- name: etcd-defrag
image: ghcr.io/ahrtr/etcd-defrag:v0.13.0
resources:
requests:
cpu: 200m
memory: 256Mi
limits:
cpu: 500m
memory: 512Mi
args:
- --endpoints={{ range $i, $e := until (int .Values.replicas) }}{{ if $i }},{{ end }}https://{{ $.Release.Name }}-{{ $i }}.{{ $.Release.Name }}-headless.{{ $.Release.Namespace }}.svc:2379{{ end }}
- --cacert=/etc/etcd/pki/client/cert/ca.crt


@@ -1,5 +1,4 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $oidcEnabled := index $cozyConfig.data "oidc-enabled" }}
{{- $oidcEnabled := index .Values._cluster "oidc-enabled" }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:


@@ -1,23 +1,14 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $host := index $cozyConfig.data "root-host" }}
{{- $host := .Values._namespace.host | default (index .Values._cluster "root-host") }}
{{- $k8sClientSecret := lookup "v1" "Secret" "cozy-keycloak" "k8s-client" }}
{{- if $k8sClientSecret }}
{{- $apiServerEndpoint := index $cozyConfig.data "api-server-endpoint" }}
{{- $managementKubeconfigEndpoint := default "" (get $cozyConfig.data "management-kubeconfig-endpoint") }}
{{- $apiServerEndpoint := index .Values._cluster "api-server-endpoint" }}
{{- $managementKubeconfigEndpoint := default "" (index .Values._cluster "management-kubeconfig-endpoint") }}
{{- if and $managementKubeconfigEndpoint (ne $managementKubeconfigEndpoint "") }}
{{- $apiServerEndpoint = $managementKubeconfigEndpoint }}
{{- end }}
{{- $k8sClient := index $k8sClientSecret.data "client-secret-key" | b64dec }}
{{- $rootSaConfigMap := lookup "v1" "ConfigMap" "kube-system" "kube-root-ca.crt" }}
{{- $k8sCa := index $rootSaConfigMap.data "ca.crt" | b64enc }}
{{- if .Capabilities.APIVersions.Has "helm.toolkit.fluxcd.io/v2" }}
{{- $tenantRoot := lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" "tenant-root" "tenant-root" }}
{{- if and $tenantRoot $tenantRoot.spec $tenantRoot.spec.values $tenantRoot.spec.values.host }}
{{- $host = $tenantRoot.spec.values.host }}
{{- end }}
{{- end }}
{{- $k8sCa := index .Values._cluster "kube-root-ca" }}
---
apiVersion: v1
kind: Secret


@@ -1,6 +1,5 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $exposeIngress := index $cozyConfig.data "expose-ingress" | default "tenant-root" }}
{{- $exposeExternalIPs := (index $cozyConfig.data "expose-external-ips") | default "" | nospace }}
{{- $exposeIngress := (index .Values._cluster "expose-ingress") | default "tenant-root" }}
{{- $exposeExternalIPs := (index .Values._cluster "expose-external-ips") | default "" | nospace }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
@@ -24,6 +23,9 @@ spec:
force: true
remediation:
retries: -1
valuesFrom:
- kind: Secret
name: cozystack-values
values:
ingress-nginx:
fullnameOverride: {{ trimPrefix "tenant-" .Release.Namespace }}-ingress


@@ -5,9 +5,8 @@ metadata:
name: alerta-db
spec:
instances: 2
{{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }}
{{- if $configMap }}
{{- $rawConstraints := get $configMap.data "globalAppTopologySpreadConstraints" }}
{{- if .Values._cluster.scheduling }}
{{- $rawConstraints := get .Values._cluster.scheduling "globalAppTopologySpreadConstraints" }}
{{- if $rawConstraints }}
{{- $rawConstraints | fromYaml | toYaml | nindent 2 }}
labelSelector:


@@ -1,9 +1,6 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $issuerType := (index $cozyConfig.data "clusterissuer") | default "http01" }}
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}
{{- $issuerType := (index .Values._cluster "clusterissuer") | default "http01" }}
{{- $ingress := .Values._namespace.ingress }}
{{- $host := .Values._namespace.host }}
{{- $apiKey := randAlphaNum 32 }}
{{- $existingSecret := lookup "v1" "Secret" .Release.Namespace "alerta" }}


@@ -6,9 +6,8 @@ spec:
instances: 2
storage:
size: {{ .Values.grafana.db.size }}
{{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }}
{{- if $configMap }}
{{- $rawConstraints := get $configMap.data "globalAppTopologySpreadConstraints" }}
{{- if .Values._cluster.scheduling }}
{{- $rawConstraints := get .Values._cluster.scheduling "globalAppTopologySpreadConstraints" }}
{{- if $rawConstraints }}
{{- $rawConstraints | fromYaml | toYaml | nindent 2 }}
labelSelector:


@@ -1,9 +1,6 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $issuerType := (index $cozyConfig.data "clusterissuer") | default "http01" }}
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}
{{- $issuerType := (index .Values._cluster "clusterissuer") | default "http01" }}
{{- $ingress := .Values._namespace.ingress }}
{{- $host := .Values._namespace.host }}
---
apiVersion: grafana.integreatly.org/v1beta1
kind: Grafana


@@ -1,7 +1,5 @@
{{- if eq .Values.topology "Client" }}
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}
{{- $host := .Values._namespace.host }}
---
apiVersion: apps/v1
kind: Deployment


@@ -1,9 +1,5 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $issuerType := (index $cozyConfig.data "clusterissuer") | default "http01" }}
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}
{{- $ingress := .Values._namespace.ingress }}
{{- $host := .Values._namespace.host }}
{{- if and (not (eq .Values.topology "Client")) (.Values.filer.grpcHost) }}
---
apiVersion: networking.k8s.io/v1


@@ -34,9 +34,8 @@
{{- end }}
{{- if not (eq .Values.topology "Client") }}
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}
{{- $ingress := .Values._namespace.ingress }}
{{- $host := .Values._namespace.host }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
@@ -60,6 +59,9 @@ spec:
force: true
remediation:
retries: -1
valuesFrom:
- kind: Secret
name: cozystack-values
values:
global:
serviceAccountName: "{{ .Release.Namespace }}-seaweedfs"


@@ -1,7 +1,130 @@
{{/*
Cluster-wide configuration helpers.
These helpers read from .Values._cluster which is populated via valuesFrom from Secret cozystack-values.
*/}}
{{/*
Get the root host for the cluster.
Usage: {{ include "cozy-lib.root-host" . }}
*/}}
{{- define "cozy-lib.root-host" -}}
{{- (index .Values._cluster "root-host") | default "" }}
{{- end }}
{{/*
Get the bundle name for the cluster.
Usage: {{ include "cozy-lib.bundle-name" . }}
*/}}
{{- define "cozy-lib.bundle-name" -}}
{{- (index .Values._cluster "bundle-name") | default "" }}
{{- end }}
{{/*
Get the images registry.
Usage: {{ include "cozy-lib.images-registry" . }}
*/}}
{{- define "cozy-lib.images-registry" -}}
{{- (index .Values._cluster "images-registry") | default "" }}
{{- end }}
{{/*
Get the ipv4 cluster CIDR.
Usage: {{ include "cozy-lib.ipv4-cluster-cidr" . }}
*/}}
{{- define "cozy-lib.ipv4-cluster-cidr" -}}
{{- (index .Values._cluster "ipv4-cluster-cidr") | default "" }}
{{- end }}
{{/*
Get the ipv4 service CIDR.
Usage: {{ include "cozy-lib.ipv4-service-cidr" . }}
*/}}
{{- define "cozy-lib.ipv4-service-cidr" -}}
{{- (index .Values._cluster "ipv4-service-cidr") | default "" }}
{{- end }}
{{/*
Get the ipv4 join CIDR.
Usage: {{ include "cozy-lib.ipv4-join-cidr" . }}
*/}}
{{- define "cozy-lib.ipv4-join-cidr" -}}
{{- (index .Values._cluster "ipv4-join-cidr") | default "" }}
{{- end }}
{{/*
Get scheduling configuration.
Usage: {{ include "cozy-lib.scheduling" . }}
Returns: YAML string of scheduling configuration
*/}}
{{- define "cozy-lib.scheduling" -}}
{{- if .Values._cluster.scheduling }}
{{- .Values._cluster.scheduling | toYaml }}
{{- end }}
{{- end }}
{{/*
Get branding configuration.
Usage: {{ include "cozy-lib.branding" . }}
Returns: YAML string of branding configuration
*/}}
{{- define "cozy-lib.branding" -}}
{{- if .Values._cluster.branding }}
{{- .Values._cluster.branding | toYaml }}
{{- end }}
{{- end }}
{{/*
Namespace-specific configuration helpers.
These helpers read from .Values._namespace which is populated via valuesFrom from Secret cozystack-values.
*/}}
{{/*
Get the host for this namespace.
Usage: {{ include "cozy-lib.ns-host" . }}
*/}}
{{- define "cozy-lib.ns-host" -}}
{{- .Values._namespace.host | default "" }}
{{- end }}
{{/*
Get the etcd namespace reference.
Usage: {{ include "cozy-lib.ns-etcd" . }}
*/}}
{{- define "cozy-lib.ns-etcd" -}}
{{- .Values._namespace.etcd | default "" }}
{{- end }}
{{/*
Get the ingress namespace reference.
Usage: {{ include "cozy-lib.ns-ingress" . }}
*/}}
{{- define "cozy-lib.ns-ingress" -}}
{{- .Values._namespace.ingress | default "" }}
{{- end }}
{{/*
Get the monitoring namespace reference.
Usage: {{ include "cozy-lib.ns-monitoring" . }}
*/}}
{{- define "cozy-lib.ns-monitoring" -}}
{{- .Values._namespace.monitoring | default "" }}
{{- end }}
{{/*
Get the seaweedfs namespace reference.
Usage: {{ include "cozy-lib.ns-seaweedfs" . }}
*/}}
{{- define "cozy-lib.ns-seaweedfs" -}}
{{- .Values._namespace.seaweedfs | default "" }}
{{- end }}
{{/*
Legacy helper - kept for backward compatibility during migration.
Loads config into context. Deprecated: use direct .Values._cluster access instead.
*/}}
{{- define "cozy-lib.loadCozyConfig" }}
{{- include "cozy-lib.checkInput" . }}
{{- if not (hasKey (index . 1) "cozyConfig") }}
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $_ := set (index . 1) "cozyConfig" $cozyConfig }}
{{- $_ := set (index . 1) "cozyConfig" (dict "data" ((index . 1).Values._cluster | default dict)) }}
{{- end }}
{{- end }}
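
A minimal sketch of consuming these helpers from an app chart that receives _cluster/_namespace via valuesFrom (the resource name and host prefix are hypothetical, and using the tenant reference as the ingress class name is an assumption):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app
    spec:
      ingressClassName: {{ include "cozy-lib.ns-ingress" . }}
      rules:
        - host: my-app.{{ include "cozy-lib.ns-host" . }}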


@@ -14,7 +14,11 @@ spec:
singular: backupjob
scope: Namespaced
versions:
- name: v1alpha1
- additionalPrinterColumns:
- jsonPath: .status.phase
name: Phase
type: string
name: v1alpha1
schema:
openAPIV3Schema:
description: |-
@@ -233,3 +237,5 @@ spec:
- jsonPath: .spec.applicationRef.name
served: true
storage: true
subresources:
status: {}
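
With the status subresource enabled, controllers can update .status independently of .spec, and the new printer column surfaces the phase in kubectl get backupjobs. A hypothetical object:

    apiVersion: backups.cozystack.io/v1alpha1
    kind: BackupJob
    metadata:
      name: nightly-backup     # hypothetical
    status:
      phase: Completed         # rendered under the Phase column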


@@ -8,4 +8,25 @@ rules:
verbs: ["get", "list", "watch"]
- apiGroups: ["backups.cozystack.io"]
resources: ["backupjobs"]
verbs: ["create", "get", "list", "watch", "update", "patch"]
- apiGroups: ["backups.cozystack.io"]
resources: ["backupjobs/status"]
verbs: ["get", "update", "patch"]
- apiGroups: ["backups.cozystack.io"]
resources: ["backups"]
verbs: ["create", "get", "list", "watch"]
- apiGroups: ["apps.cozystack.io"]
resources: ["buckets", "bucketaccesses", "virtualmachines"]
verbs: ["get", "list", "watch"]
- apiGroups: ["objectstorage.k8s.io"]
resources: ["buckets", "bucketaccesses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create", "get", "list", "watch", "update", "patch"]
- apiGroups: ["kubevirt.io"]
resources: ["virtualmachines"]
verbs: ["get", "list", "watch"]
- apiGroups: ["velero.io"]
resources: ["backups", "backupstoragelocations", "volumesnapshotlocations", "restores"]
verbs: ["create", "get", "list", "watch", "update", "patch"]


@@ -0,0 +1,5 @@
apiVersion: strategy.backups.cozystack.io/v1alpha1
kind: Velero
metadata:
name: velero-strategy-default
spec: {}


@@ -1,3 +1,3 @@
apiVersion: v2
name: cozystack-resource-definitions
name: bootbox-rd
version: 0.0.0 # Placeholder, the actual version will be automatically set during the build process


@@ -1,4 +1,4 @@
export NAME=cozystack-resource-definitions
export NAME=bootbox-rd
export NAMESPACE=cozy-system
include ../../../scripts/package.mk


@@ -0,0 +1,3 @@
apiVersion: v2
name: bucket-rd
version: 0.0.0 # Placeholder, the actual version will be automatically set during the build process


@@ -0,0 +1,4 @@
export NAME=bucket-rd
export NAMESPACE=cozy-system
include ../../../scripts/package.mk


@@ -0,0 +1,4 @@
{{- range $path, $_ := .Files.Glob "cozyrds/*" }}
---
{{ $.Files.Get $path }}
{{- end }}
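
This template inlines every manifest under cozyrds/ verbatim, one YAML document per file. A sketch, assuming the chart ships a single file cozyrds/bucket.yaml (hypothetical name):

    ---
    # contents of cozyrds/bucket.yaml, emitted unmodified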


@@ -0,0 +1 @@
{}


@@ -1,8 +1,6 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- $issuerType := (index $cozyConfig.data "clusterissuer") | default "http01" }}
{{- $host := .Values._namespace.host }}
{{- $ingress := .Values._namespace.ingress }}
{{- $issuerType := (index .Values._cluster "clusterissuer") | default "http01" }}
apiVersion: networking.k8s.io/v1
kind: Ingress


@@ -1,5 +1,4 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $issuerType := (index $cozyConfig.data "clusterissuer") | default "http01" }}
{{- $issuerType := (index .Values._cluster "clusterissuer") | default "http01" }}
apiVersion: cert-manager.io/v1
kind: ClusterIssuer


@@ -0,0 +1,3 @@
apiVersion: v2
name: clickhouse-rd
version: 0.0.0 # Placeholder, the actual version will be automatically set during the build process


@@ -0,0 +1,4 @@
export NAME=clickhouse-rd
export NAMESPACE=cozy-system
include ../../../scripts/package.mk


@@ -0,0 +1,4 @@
{{- range $path, $_ := .Files.Glob "cozyrds/*" }}
---
{{ $.Files.Get $path }}
{{- end }}


@@ -0,0 +1 @@
{}


@@ -1,7 +1,6 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $host := index $cozyConfig.data "root-host" }}
{{- $exposeServices := splitList "," ((index $cozyConfig.data "expose-services") | default "") }}
{{- $exposeIngress := index $cozyConfig.data "expose-ingress" | default "tenant-root" }}
{{- $host := index .Values._cluster "root-host" }}
{{- $exposeServices := splitList "," ((index .Values._cluster "expose-services") | default "") }}
{{- $exposeIngress := (index .Values._cluster "expose-ingress") | default "tenant-root" }}
{{- if and (has "api" $exposeServices) }}
apiVersion: networking.k8s.io/v1


@@ -1,4 +1,4 @@
{{- $brandingConfig:= lookup "v1" "ConfigMap" "cozy-system" "cozystack-branding" }}
{{- $brandingConfig := .Values._cluster.branding | default dict }}
{{- $tenantText := "v0.38.2" }}
{{- $footerText := "Cozystack" }}
@@ -16,9 +16,9 @@ metadata:
app.kubernetes.io/instance: incloud-web
app.kubernetes.io/name: web
data:
CUSTOM_TENANT_TEXT: {{ $brandingConfig | dig "data" "tenantText" $tenantText | quote }}
FOOTER_TEXT: {{ $brandingConfig | dig "data" "footerText" $footerText | quote }}
TITLE_TEXT: {{ $brandingConfig | dig "data" "titleText" $titleText | quote }}
LOGO_TEXT: {{ $brandingConfig | dig "data" "logoText" $logoText | quote }}
CUSTOM_LOGO_SVG: {{ $brandingConfig | dig "data" "logoSvg" $logoSvg | quote }}
ICON_SVG: {{ $brandingConfig | dig "data" "iconSvg" $iconSvg | quote }}
CUSTOM_TENANT_TEXT: {{ $brandingConfig.tenantText | default $tenantText | quote }}
FOOTER_TEXT: {{ $brandingConfig.footerText | default $footerText | quote }}
TITLE_TEXT: {{ $brandingConfig.titleText | default $titleText | quote }}
LOGO_TEXT: {{ $brandingConfig.logoText | default $logoText | quote }}
CUSTOM_LOGO_SVG: {{ $brandingConfig.logoSvg | default $logoSvg | quote }}
ICON_SVG: {{ $brandingConfig.iconSvg | default $iconSvg | quote }}
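
A sketch of the _cluster.branding block this template now reads, as it would appear inside the cozystack-values Secret when the cozystack-branding ConfigMap is populated (values hypothetical):

    _cluster:
      branding:
        titleText: "Acme Cloud"
        footerText: "Acme"
        logoText: "ACME"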


@@ -1,6 +1,5 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $host := index $cozyConfig.data "root-host" }}
{{- $oidcEnabled := index $cozyConfig.data "oidc-enabled" }}
{{- $host := index .Values._cluster "root-host" }}
{{- $oidcEnabled := index .Values._cluster "oidc-enabled" }}
apiVersion: apps/v1
kind: Deployment


@@ -1,8 +1,7 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $issuerType := (index $cozyConfig.data "clusterissuer") | default "http01" }}
{{- $host := index $cozyConfig.data "root-host" }}
{{- $exposeServices := splitList "," ((index $cozyConfig.data "expose-services") | default "") }}
{{- $exposeIngress := index $cozyConfig.data "expose-ingress" | default "tenant-root" }}
{{- $issuerType := (index .Values._cluster "clusterissuer") | default "http01" }}
{{- $host := index .Values._cluster "root-host" }}
{{- $exposeServices := splitList "," ((index .Values._cluster "expose-services") | default "") }}
{{- $exposeIngress := (index .Values._cluster "expose-ingress") | default "tenant-root" }}
{{- if and (has "dashboard" $exposeServices) }}
apiVersion: networking.k8s.io/v1


@@ -1,6 +1,5 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $host := index $cozyConfig.data "root-host" }}
{{- $extraRedirectUris := splitList "," ((index $cozyConfig.data "extra-keycloak-redirect-uri-for-dashboard") | default "") }}
{{- $host := index .Values._cluster "root-host" }}
{{- $extraRedirectUris := splitList "," ((index .Values._cluster "extra-keycloak-redirect-uri-for-dashboard") | default "") }}
{{- $existingK8sSecret := lookup "v1" "Secret" .Release.Namespace "k8s-client" }}
{{- $existingDashboardSecret := lookup "v1" "Secret" .Release.Namespace "dashboard-client" }}


@@ -1,5 +1,4 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $host := index $cozyConfig.data "root-host" }}
{{- $host := index .Values._cluster "root-host" }}
apiVersion: v1
data:

Some files were not shown because too many files have changed in this diff.