Compare commits


343 Commits

Author SHA1 Message Date
Timur Tukaev
14aba9edb2 Create CONTRIBUTOR_LADDER.md
A contributor ladder is an important tool for community participants who are loyal to the project and would like to take on more responsibility in it. It is also needed for CNCF Incubation applications.

Signed-off-by: Timur Tukaev <90071493+tym83@users.noreply.github.com>
2025-07-20 15:56:25 +05:00
Andrei Kvapil
28c9fcd61c [tenant] Enable deleting extra applications from a tenant (#1162)

## What this PR does
- make extra apps deletable

### Release note
- make extra apps deletable

```release-note
- make extra apps deletable
```

## Summary by CodeRabbit

* **Chores**
  * Incremented the tenant application version to 1.11.1.
  * Updated version mappings for the tenant package.

* **Refactor**
  * Removed resource policy annotations and version wildcards from multiple tenant components for streamlined configuration.
  * Simplified monitoring settings by removing detailed storage and feature flag configurations.
2025-07-19 03:44:35 +02:00
Andrei Kvapil
a010fde4b0 Merge branch 'main' into make-extra-apps-deletable
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-19 03:42:55 +02:00
Andrei Kvapil
379e0da6d2 Remove default values
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-19 03:41:56 +02:00
Andrei Kvapil
a60dff1215 Release v0.34.0-beta.2 (#1213)
This PR prepares the release `v0.34.0-beta.2`.

## Summary by CodeRabbit

* **Chores**
* Updated various container image tags and digests across multiple
components to newer versions, including cozystack, kubeapps, Kamaji,
kubeovn, kubevirt, nginx-cache, mariadb-backup, clickhouse-backup,
cluster-autoscaler, and related services.
* Refreshed version references in configuration files to ensure
consistency with the latest releases.
2025-07-18 08:41:04 +02:00
cozystack-bot
a5896be36a Prepare release v0.34.0-beta.2
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2025-07-18 01:01:06 +00:00
Andrei Kvapil
9022b8bda8 Fix arrays in OpenAPI spec
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-18 02:54:33 +02:00
Andrei Kvapil
190f94c485 Get rid of bitnami's readme-generator (#1218)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


## What this PR does


### Release note


```release-note
[tests,dx] Replace bitnami's readme-generator with go version
```

## Summary by CodeRabbit

* **Chores**
* Updated the tool used for generating README files across multiple
projects to a new version tailored for Helm charts, ensuring consistent
documentation generation.
* Simplified the workflow for installing the documentation generator,
reducing dependencies and installation steps for improved reliability.
* Enhanced JSON schemas for various charts by adding default values,
reorganizing properties, and expanding configuration options for
improved clarity and usability.
* Added new resource configuration parameters and expanded documentation
for several components to provide more detailed customization.
* Improved error handling in pre-commit hooks to enforce stricter
failure detection during code generation steps.
* Cleaned up README files by removing trailing blank lines and
simplifying content in select packages.
* Added new chart and schema files for the `extra/info` package,
including initial values and README generation support.
* Disabled generation of `openapi-schemas` directory in system Makefile
to streamline build process.
2025-07-18 00:43:22 +02:00
Andrei Kvapil
72e7b5e0b5 Get rid of bitnami's readme-generator
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-18 00:40:31 +02:00
Andrei Kvapil
def5a612c6 [applications] Reorder values.yaml for better readability (#1214)
Use the same order for values in all applications:

1. Common configuration parameters in the specified order, if present:
   - replicas
   - shards
   - resources
   - resourcesPreset
   - size
   - storageClass
   - external (goes last, because we don't want to promote this practice)

2. Application-specific parameters, such as database and users
3. Component-specific, each component under its own section
4. Backup
5. Bootstrap (recovery)


## What this PR does


### Release note


```release-note
[]
```

## Summary by CodeRabbit

* **Documentation**
* Improved organization and clarity of configuration documentation
across multiple apps by restructuring parameter groupings, adding
section headers, and enhancing parameter descriptions.
* Added or updated parameter documentation for resource configuration
options, including explicit CPU/memory settings and sizing presets.
* Enhanced usage examples and reordered parameters for better
readability.

* **New Features**
* Introduced new configuration options for explicit CPU and memory
resource settings and resource sizing presets in several app
configuration files.

* **Style**
* Refined formatting, indentation, and comments throughout configuration
and documentation files for consistency and easier navigation.
2025-07-17 23:32:34 +02:00
Andrei Kvapil
725f94f347 fix add vm job resources (#1217)

## What this PR does


### Release note


```release-note
- add resources for vm and vmi jobs
```


## Summary by CodeRabbit

* **New Features**
* Added explicit CPU and memory resource requests and limits for update
jobs in both virtual-machine and vm-instance applications to improve
resource management.

* **Chores**
* Updated version mappings and chart versions for virtual-machine (to
0.12.2) and vm-instance (to 0.10.1).

2025-07-17 23:19:52 +02:00
kklinch0
a0b1914972 fix add vm job resources
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-07-17 22:05:19 +03:00
Nick Volynkin
bb907e5e7d [applications] Reorder values.yaml for better readability
Use the same order for values in all applications:

1. Common configuration parameters in the specified order, if present:
   - replicas
   - shards
   - resources
   - resourcesPreset
   - size
   - storageClass
   - external (goes last, because we don't want to promote this practice)

2. Application-specific parameters, such as database and users
3. Component-specific, each component under its own section
4. Backup
5. Bootstrap (recovery)

Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-07-17 19:36:20 +03:00
Andrei Kvapil
909208baec [kubernetes] Explicitly mention available K8s versions (#1212)
Follow-up to cozystack/cozystack#1191

Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>



## Summary by CodeRabbit

* **Documentation**
* Updated documentation to clarify that users can select Kubernetes
patch versions ranging from 1.28 to 1.33 for tenant clusters.
* Revised descriptions and comments to explicitly specify the supported
Kubernetes version range (1.28–1.33) in relevant documentation and
configuration files.

2025-07-17 11:46:01 +02:00
Andrei Kvapil
7abca1bdf5 [platform] Fix stale workloads not being deleted (#1210)
Workloads tracking an object undergoing deletion can be reconciled when
the object is marked for deletion, but is not yet removed. After the
object is deleted, there is no event to trigger another reconciliation
of the workload and it might never get deleted until a global reconcile
happens or the controller is restarted. This patch ensures they are
requeued in the reconciliation loop.
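
A minimal sketch of this requeue pattern in controller-runtime terms (illustrative only, not the actual patch; the helper name and the 30-second delay are assumptions made for the example):

```go
package controller

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	ctrl "sigs.k8s.io/controller-runtime"
)

// requeueForPendingDeletion sketches the requeue-on-pending-deletion idea:
// if the object a workload tracks is marked for deletion but not yet gone,
// ask the controller to reconcile the workload again after a delay instead
// of relying on a watch event that may never arrive.
func requeueForPendingDeletion(tracked metav1.Object) (ctrl.Result, bool) {
	if tracked == nil {
		// Tracked object already removed: nothing to wait for;
		// the caller can delete the stale workload right away.
		return ctrl.Result{}, false
	}
	if tracked.GetDeletionTimestamp() != nil {
		// Deletion in progress: requeue so the workload is re-checked
		// even if no further events are delivered for it.
		return ctrl.Result{RequeueAfter: 30 * time.Second}, true
	}
	return ctrl.Result{}, false
}
```

In a real controller the returned `ctrl.Result` would be propagated from `Reconcile`, which keeps the workload in the queue until the tracked object is finally removed.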


## What this PR does


### Release note


```release-note
[platform] Fix stale workloads not being deleted
```


## Summary by CodeRabbit

* **Improvements**
* Added a delay before reprocessing items that are being deleted,
resulting in more efficient handling of deletions.

2025-07-17 11:44:07 +02:00
Andrei Kvapil
4728127253 [cozystack-api] Fix non-existing OpenAPI refs (#1208)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


## What this PR does


### Release note


```release-note
[cozystack-api] Fix non-existing OpenAPI refs
```


## Summary by CodeRabbit

* **Refactor**
* Improved and unified the processing of OpenAPI schemas for both v3 and
v2 formats, resulting in more consistent and maintainable API
documentation.
* Enhanced support for status schemas and improved handling of schema
references across different resource types.

* **Bug Fixes**
* Fixed issues with schema references to ensure they correctly point to
kind-specific definitions in generated OpenAPI documentation.

2025-07-17 11:42:53 +02:00
Andrei Kvapil
d919dcc05a [seaweedfs] Update Seaweedfs and support Multizone configuration (#1194)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


## What this PR does


### Release note


```release-note
[seaweedfs] Update Seaweedfs and support Multizone configuration
```
2025-07-17 11:42:29 +02:00
Andrei Kvapil
8a1929038b [objectstorage] Update COSI controller and sidecar (#1209)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


## What this PR does

This PR updates the COSI image and also includes these fixes:
- https://github.com/kubernetes-sigs/container-object-storage-interface/pull/89
- https://github.com/kubernetes-sigs/container-object-storage-interface/pull/90

### Release note


```release-note
[objectstorage] Update COSI controller and sidecar
```

## Summary by CodeRabbit

* **New Features**
* Introduced automated image building and version injection for the
object storage controller, including support for both controller and
sidecar images.
* Added comprehensive Kubernetes CustomResourceDefinitions (CRDs) for
object storage resources, including Bucket, BucketClaim, BucketClass,
BucketAccess, and BucketAccessClass.
* Added a dedicated namespace and updated resource naming conventions
for improved clarity and consistency.

* **Bug Fixes**
* Improved and unified deletion handling for object storage resources,
ensuring proper cleanup and event recording.

* **Chores**
* Updated configuration and deployment manifests to use new image
locations and naming conventions.
* Added a configuration file for specifying the controller image used in
deployments.
2025-07-17 11:42:08 +02:00
Nick Volynkin
1d6b9a025a [kubernetes] Explicitly mention available K8s versions
Follow-up to cozystack/cozystack#1191

Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-07-17 09:45:37 +03:00
Andrei Kvapil
3475cdb17a [objectstorage] Update COSI controller and sidecar
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-16 23:23:03 +02:00
Andrei Kvapil
181e8dce28 [cozystack-api] Fix non-existing OpenAPI refs
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-16 22:59:34 +02:00
Timofei Larkin
38f76f6ad0 [platform] Fix stale workloads not being deleted
Workloads tracking an object undergoing deletion can be reconciled when
the object is marked for deletion, but is not yet removed. After the
object is deleted, there is no event to trigger another reconciliation
of the workload and it might never get deleted until a global reconcile
happens or the controller is restarted. This patch ensures they are
requeued in the reconciliation loop.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-07-16 23:51:40 +03:00
Andrei Kvapil
2c2b44e8fd [cozystack-controller] cozy controller fix system reconciliations (#1205)

## What this PR does


### Release note


```release-note
- fix system reconciliations
```


## Summary by CodeRabbit

* **Bug Fixes**
* Improved reliability when updating HelmRelease objects to prevent
unintended changes during reconciliation.

2025-07-16 22:22:41 +02:00
Andrei Kvapil
5199021b8d Update FerretDB v2.4.0 (#1206)
## What this PR does

This PR updates FerretDB from v1 to v2

**Breaking change**: before upgrading your FerretDB, please back up and restore your data using this guide:
- https://docs.ferretdb.io/migration/migrating-from-v1/

### Release note


```release-note
[ferretdb] Introduce FerretDB v2.4.0
```

## Summary by CodeRabbit

* **New Features**
* Upgraded FerretDB application to version 2.4.0 with Helm chart version
1.0.0.
* Added support for scheduled backups via a new `ScheduledBackup`
resource.

* **Improvements**
* Default resource sizing for FerretDB replicas increased from "nano" to
"micro" for better performance.
* PostgreSQL configuration enhanced with additional extensions, improved
security settings, and automated extension setup.
* Streamlined environment variable configuration for PostgreSQL
connection.
* Backup configuration updated for more flexible retention, scheduling
(including seconds), destination paths, and bootstrap recovery options.

* **Removals**
* Removed Kubernetes initialization job and related scripts for
PostgreSQL user and role management, simplifying deployment.
* Deleted legacy backup CronJob, backup scripts, and backup secrets
templates.

* **Chores**
* Updated version mappings and added a new Makefile target to streamline
image and version updates.
2025-07-16 22:20:26 +02:00
Timofei Larkin
f2a8c3d0d1 [kubernetes] User-selectable cluster version (#1191)
## What this PR does

This patch adds a new version field to the kubernetes chart, letting
end-users specify the version of kubernetes they want to deploy.

### Release note

```release-note
[kubernetes] Let users specify desired version of tenant k8s cluster.
```

## Summary by CodeRabbit

* **New Features**
* Added a configurable Kubernetes version parameter, allowing selection
of specific minor versions for cluster deployments.
* Introduced a version mapping system to ensure clusters use precise
Kubernetes patch versions.
* **Bug Fixes**
* Ensured only supported Kubernetes versions can be selected, reducing
configuration errors.
* **Documentation**
* Updated documentation to describe the new version parameter and its
usage.
* **Tests**
* Enhanced end-to-end tests to cover deployments with both the latest
and previous Kubernetes versions.
* **Chores**
* Consolidated version references for multiple packages to streamline
version management.
2025-07-16 22:12:00 +04:00
IvanHunters
5b6ebbc796 [review] compact _versions.tpl
Signed-off-by: IvanHunters <xorokhotnikov@gmail.com>
2025-07-16 17:41:15 +03:00
IvanHunters
7b87d555e4 [review] disable caching and remove reusing root context
Signed-off-by: IvanHunters <xorokhotnikov@gmail.com>
2025-07-16 17:36:42 +03:00
Andrei Kvapil
e5cde60311 [ferretdb] Reuse backup logic from postgres
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-16 16:05:02 +02:00
Andrei Kvapil
d0fba985e2 [docs] Changelogs for the release series v0.33.x (#1189)
- **[docs] Changelog for v0.33.0**
- **[docs] Feature highlights for v0.33.0**
- **[docs] Changelogs for v0.33.1 and v0.33.2 plus regression warning in
0.33.0**



## Summary by CodeRabbit

* **Documentation**
* Added detailed changelogs for versions 0.33.0, 0.33.1, and 0.33.2,
outlining new features, improvements, bug fixes, and development
updates.
* Included important upgrade guidance and links for further information.
* Enhanced documentation with backup and restore instructions for
PostgreSQL using Velero.

2025-07-16 15:56:54 +02:00
Andrei Kvapil
7d5ab78b84 Add SeaweedFS update hook
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-16 15:54:00 +02:00
Andrei Kvapil
493ad821c1 [seaweedfs] Support MultiZone topology
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-16 15:53:40 +02:00
Andrei Kvapil
c01462d3f9 Update Seaweedfs v3.94
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-16 15:52:17 +02:00
Andrei Kvapil
bccf6113cc [mariadb-operator] Update mariadb-operator v0.38.1 (#1188)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


## What this PR does


### Release note


```release-note
[mariadb-operator] Update mariadb-operator v0.38.1
```
2025-07-16 15:40:13 +02:00
Andrei Kvapil
a862d41aa4 k8s add snapshotter and snapshot-controller to tenant k8s (#1203)

## What this PR does


### Release note


```release-note
- Add snapshotter and snapshot-controller to tenant k8s
```

## Summary by CodeRabbit

* **New Features**
* Introduced support for Kubernetes volume snapshots, enabling creation
and management of persistent volume snapshots.
* Added deployment of snapshot-related controllers to enhance snapshot
functionality.
* Integrated new CustomResourceDefinitions (CRDs) for `VolumeSnapshot`,
`VolumeSnapshotContent`, and `VolumeSnapshotClass`.
* Provided automated deployment and management of volume snapshot CRDs
via Helm chart and HelmRelease resources.
* Enhanced security for CSI-related containers by enforcing read-only
root filesystems and dropping Linux capabilities.

* **Chores**
* Added supporting files for packaging and updating volume snapshot
CRDs.
2025-07-16 15:36:48 +02:00
Andrei Kvapil
096227d025 Update Flux Operator (v0.24.1) (#1207)
Comes with Flux `v2.6.4` manifests included. Other release notes:

https://github.com/controlplaneio-fluxcd/flux-operator/releases/tag/v0.24.1


## Summary by CodeRabbit

* **Chores**
* Updated Helm chart versions and app versions for Flux Operator and
Flux Instance from 0.24.0 to 0.24.1.
* Refreshed version badges in related documentation to reflect the new
release.

2025-07-16 15:30:36 +02:00
Andrei Kvapil
4d62961c89 Update FerretDB v2.4.0
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-16 15:22:16 +02:00
Kingdon B
2466a0ae6c update FluxInstance chart to v0.24.1
Signed-off-by: Kingdon B <kingdon@urmanac.com>
2025-07-16 09:11:12 -04:00
Kingdon B
8042c85bca update Flux Operator to 0.24.1
Signed-off-by: Kingdon B <kingdon@urmanac.com>
2025-07-16 09:10:51 -04:00
kklinch0
79f7300474 cozy controller fix system reconciliations
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-07-16 14:29:00 +03:00
klinch0
7a74936d6b bugfix fix pg LB frontend (#1204)

## What this PR does


### Release note


```release-note
- fix pg LB frontend
```
2025-07-16 11:59:48 +03:00
kklinch0
c5d3fe9aaa bugfix fix pg LB frontend
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-07-16 11:56:45 +03:00
kklinch0
d201e03d5e k8s add snapshotter and snapshot-controller to tenant k8s
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-07-16 01:35:44 +03:00
IvanHunters
168a9ae7f4 [conflicts] merge from fixed conflicts branch
Signed-off-by: IvanHunters <xorokhotnikov@gmail.com>
2025-07-15 22:36:39 +03:00
Andrei Kvapil
c664d4550f [platform] Autoscale the autoscaler (#1198)
## What this PR does

The Vertical Pod Autoscaler is a component with resource requirements
highly dependent on the environment it is running in, hence it also
needs to be autoscaled to reduce the number of configuration parameters
that platform admins need to manage. This patch introduces an ancillary
autoscaler that watches only the primary autoscaler's namespace and
adjusts its resource requests and limits, since the autoscaler cannot
autoscale itself. In turn, the primary autoscaler can autoscale the
ancillary autoscaler.

### Release note

```release-note
[platform] Implement autoscaling for the Vertical Pod Autoscaler itself.
```


## Summary by CodeRabbit

* **New Features**
  * Added an option to enable a dedicated Vertical Pod Autoscaler (VPA) for managing the VPA itself, including new namespace and resource creation when enabled.

* **Configuration**
  * Introduced a new setting to toggle the VPA-for-VPA feature.
  * Updated resource configuration for the recommender component by removing specific CPU and memory settings.

2025-07-15 19:31:07 +02:00
Timofei Larkin
19b79b7ca4 Merge branch 'main' into feat/select-k8s-fix-conflict
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-07-15 15:48:56 +04:00
Timofei Larkin
0de9a0a262 Fixing versions_map
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-07-15 14:43:19 +03:00
IvanHunters
edc9995832 [tests] fix versions.yaml path
Signed-off-by: IvanHunters <xorokhotnikov@gmail.com>
2025-07-15 14:40:23 +03:00
IvanHunters
6023dffd6d [tests] fix versions.yaml path
Signed-off-by: IvanHunters <xorokhotnikov@gmail.com>
2025-07-15 13:06:48 +03:00
IvanHunters
6fdde29723 Merge branch 'main' into feat/selectable-k8s-version
Signed-off-by: IvanHunters <49371933+IvanHunters@users.noreply.github.com>
2025-07-15 12:56:32 +03:00
IvanHunters
d63aac727c [kubernetes] refactor k8s version check to address nitpick comment by rabbit
Signed-off-by: IvanHunters <xorokhotnikov@gmail.com>
2025-07-15 12:48:10 +03:00
IvanHunters
7b9a19c94b [kubernetes] refactor duplicated code for different k8s versions
Signed-off-by: IvanHunters <xorokhotnikov@gmail.com>
2025-07-15 12:48:10 +03:00
IvanHunters
f78ab1c867 [kubernetes] add caching for loading kubernetes versions file
Signed-off-by: IvanHunters <xorokhotnikov@gmail.com>
2025-07-15 12:48:10 +03:00
IvanHunters
7c918125e5 [kubernetes] add check for deployed Kubernetes server version using kubectl
Signed-off-by: IvanHunters <xorokhotnikov@gmail.com>
2025-07-15 12:48:10 +03:00
IvanHunters
d3f1dca1ad generate kubeversions from versions.yaml
Signed-off-by: IvanHunters <xorokhotnikov@gmail.com>
2025-07-15 12:48:10 +03:00
IvanHunters
259a2f5cab [kubernetes] modify tests for user-selectable cluster version case
Signed-off-by: IvanHunters <xorokhotnikov@gmail.com>
2025-07-15 12:48:10 +03:00
Timofei Larkin
c7376ef3c9 [kubernetes] User-selectable cluster version
This patch adds a new version field to the kubernetes chart, letting
end-users specify the version of kubernetes they want to deploy.

[kubernetes] Let users specify desired version of tenant k8s cluster.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
Signed-off-by: IvanHunters <xorokhotnikov@gmail.com>
2025-07-15 12:48:10 +03:00
klinch0
7a619d8b04 bugfix fix nats (#1195)

## What this PR does


### Release note


```release-note
- fix nats helm chart
```


## Summary by CodeRabbit

* **Chores**
  * Updated the NATS application chart version to 0.8.1.
  * Adjusted version mapping entries for the NATS package.

* **Refactor**
  * Reorganized the NATS configuration by moving the routeURLs setting under the cluster section for improved clarity.

2025-07-15 12:41:23 +03:00
Nick Volynkin
c58aa798a4 [apps] Remove preset 'none' from app charts and README (#1196)
Preset 'none' is in fact disallowed since cozystack/cozystack#1156

Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>


## Summary by CodeRabbit

* **Documentation**
  * Updated documentation across all supported applications to remove "none" from the list of allowed values for the `resourcesPreset` parameter. Only sizing presets from "nano" to "2xlarge" are now listed as valid options.
* **Chores**
  * Incremented chart versions for all affected applications.
  * Updated version mapping to reference specific commits for released versions.
  * Removed "none" from allowed enum values for `resourcesPreset` in JSON schemas across all applications.
  * Refactored Makefiles to centralize and update resource preset enums, removing "none" from allowed values.
2025-07-15 09:05:59 +03:00
Andrei Kvapil
378e6e018e [seaweedfs] Fix drift for security config (#1193)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


upstream issue: https://github.com/seaweedfs/seaweedfs/pull/6967

## What this PR does


### Release note


```release-note
[seaweedfs] Fix drift for security config
```


## Summary by CodeRabbit

* **Bug Fixes**
* Ensured JWT signing keys in the SeaweedFS security configuration
remain consistent across Helm upgrades, preventing unintentional key
rotation and maintaining stable authentication.

2025-07-14 21:37:08 +02:00
Nick Volynkin
55cfdb3a38 [apps] Remove preset 'none' from app charts and README
Preset 'none' is in fact disallowed since cozystack/cozystack#1156

Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-07-14 19:31:10 +03:00
Timofei Larkin
83e0ab3adf [platform] Autoscale the autoscaler
The Vertical Pod Autoscaler is a component with resource requirements
highly dependent on the environment it is running in, hence it also
needs to be autoscaled to reduce the number of configuration parameters
that platform admins need to manage. This patch introduces an ancillary
autoscaler that watches only the primary autoscaler's namespace and
adjusts its resource requests and limits, since the autoscaler cannot
autoscale itself. In turn, the primary autoscaler can autoscale the
ancillary autoscaler.

[platform] Implement autoscaling for the Vertical Pod Autoscaler itself.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-07-14 15:15:44 +03:00
kklinch0
cc2b36fbe0 bugfix fix nats
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-07-11 22:47:19 +03:00
Andrei Kvapil
76c8de7f4d [seaweedfs] Fix drift for security config
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-11 16:45:10 +02:00
klinch0
c1a4a58500 [oidc] make keycloak deletable (#1178)

## What this PR does


### Release note


```release-note
[oidc] make keycloak deletable
```


## Summary by CodeRabbit

* **New Features**
  * Added automated cleanup of Keycloak-related resources during uninstallation to ensure smooth deletion.
* **Bug Fixes**
  * Improved conditional logic for enabling OIDC and Keycloak-related resources, ensuring they are only activated when explicitly set to "true".
* **Chores**
  * Updated version numbers and references for the tenant application.
2025-07-11 09:02:34 +03:00
kklinch0
1faf40cd81 [oidc] make keycloak deletable
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-07-10 20:10:54 +03:00
Andrei Kvapil
1b7a597f1c [talos] Update Talos Linux v1.10.5 (#1186)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


## What this PR does


### Release note


```release-note
[talos] Update Talos Linux v1.10.5
```


## Summary by CodeRabbit

* **Chores**
* Updated system firmware, microcode, and storage extension versions to
the latest releases across all installer profiles.
* Increased profile version from v1.10.3 to v1.10.5 for improved
component compatibility and reliability.

2025-07-10 14:29:15 +02:00
Andrei Kvapil
aa84b1c054 [talos] Update Talos Linux v1.10.5
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-10 14:28:49 +02:00
Nick Volynkin
8b0fc77202 [docs] Changelogs for v0.33.1 and v0.33.2 plus regression warning in 0.33.0
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-07-10 15:20:36 +03:00
Timofei Larkin
6e96dd0a33 [docs] Changelogs for v0.32.1 and changelog template (#1111)
## Summary by CodeRabbit

* **Documentation**
* Added a new changelog template with predefined sections for consistent
release documentation.
* Published a detailed changelog for version 0.32.1, outlining major
features, fixes, dependency updates, documentation changes, testing
improvements, and CI/CD enhancements.
2025-07-10 13:13:17 +04:00
Nick Volynkin
adc2c17c38 [docs] Feature highlights for v0.33.0
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-07-10 12:08:26 +03:00
Nick Volynkin
56f230391d [docs] Changelog for v0.33.0
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-07-10 12:08:26 +03:00
Andrei Kvapil
08cb7c0f28 Release v0.34.0-beta.1 (#1187)
This PR prepares the release `v0.34.0-beta.1`.


## Summary by CodeRabbit

* **Chores**
  * Updated multiple container image versions and tags across various components to newer releases, including several beta versions.
  * Refreshed image digests to ensure the latest builds are used.
  * Updated dashboard configuration to reflect the new app version.
  * No changes to functionality or user interface.

2025-07-10 11:03:03 +02:00
Nick Volynkin
ef30e69245 [docs] Changelog for v0.32.1 and changelog template
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-07-10 11:50:39 +03:00
Timofei Larkin
847980f03d [release-v0.31] [docs] Release notes for v0.31.1 and v0.31.2 (#1068)

## Summary by CodeRabbit

- **Documentation**
  - Added detailed changelog entries for versions 0.31.1 and 0.31.2, highlighting recent fixes, improvements, and security updates.
  - Included a summary of key changes, security fixes, and platform, dashboard, and application enhancements.
  - Provided links and references for further details on each release.

2025-07-10 12:48:06 +04:00
Andrei Kvapil
999faa7f66 [mariadb-operator] Update mariadb-operator v0.38.1
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-10 10:12:50 +02:00
cozystack-bot
0ecb8585bc Prepare release v0.34.0-beta.1
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2025-07-10 08:09:47 +00:00
Andrei Kvapil
32aea4254b [cilium] Update Cilium v1.17.5 (#1181)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


## What this PR does


### Release note


```release-note
[cilium] Update Cilium v1.17.5
```
2025-07-10 10:05:47 +02:00
Andrei Kvapil
e49918745e [kube-ovn] Update Kube-OVN v1.13.14 (#1182)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


## What this PR does


### Release note


```release-note
[kube-ovn] Update Kube-OVN v1.13.14
```
2025-07-10 09:31:06 +02:00
Andrei Kvapil
220c347cc5 [kamaji] Update Kamaji edge-25.7.1 (#1184)

## What this PR does


### Release note


```release-note
[kamaji] Update Kamaji edge-25.7.1
```


## Summary by CodeRabbit

* **Chores**
* Removed all Helm chart files, templates, configuration, documentation,
and related scripts for the Kamaji Etcd component.
* Deleted Kubernetes resource definitions, backup/defrag jobs,
monitoring, RBAC, and ServiceAccount templates associated with Kamaji
Etcd.
* Removed supporting patches and Makefiles for managing the Kamaji Etcd
Helm chart.
* All user-facing configuration and deployment options for Kamaji Etcd
via Helm are no longer available.

2025-07-10 01:32:23 +02:00
Andrei Kvapil
a4ec46a941 [cozystack-api] Specify OpenAPI schema for apps (#1174)
Depends on https://github.com/cozystack/cozystack/pull/1173

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


## What this PR does


### Release note


```release-note
[cozystack-api] Specify OpenAPI schema for apps
```
2025-07-10 01:23:19 +02:00
Andrei Kvapil
2c126786b3 Update Flux Operator (0.24.0) (#1167)
This PR updates Flux Operator to 0.24.0. Some changes have been made to make upgrading Flux on any version of the flux-operator more reliable; these relate to `spec.distribution.artifact`, which I think you have already seen:

https://fluxcd.control-plane.io/operator/fluxinstance/#distribution-artifact

This may be relevant to air-gapped environments.


## Summary by CodeRabbit

* **New Features**
* Added support for specifying extra pod volumes and container volume
mounts via new configuration options in the Helm chart.
* Extended CRD schemas to support additional provider types, new
filtering options, and enhanced validation and authentication fields.
* Introduced new fields for improved authentication and workload
identity federation in CRDs.

* **Documentation**
* Updated README files to document new configuration options and reflect
the latest chart versions.

* **Chores**
* Bumped Helm chart and app versions to 0.24.0 for both operator and
instance charts.

2025-07-10 00:19:35 +02:00
Andrei Kvapil
784f1454ba [kubevirt][cdi] Update KubeVirt v1.5.2 and CDI v1.62.0 (#1183)
- [kubevirt] Update KubeVirt v1.5.2
- [cdi] Update CDI v1.62.0


## What this PR does


### Release note


```release-note
[kubevirt] Update KubeVirt v1.5.2
[cdi] Update CDI v1.62.0
```
2025-07-10 00:16:45 +02:00
Andrei Kvapil
9d9226b575 [linstor] Update LINSTOR v1.31.2 (#1180)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


## What this PR does


### Release note


```release-note
[linstor] Update LINSTOR v1.31.2
```
2025-07-10 00:16:28 +02:00
Andrei Kvapil
9ec5863a75 Release v0.33.2 (#1177)
This PR prepares the release `v0.33.2`.

## Summary by CodeRabbit

* **Chores**
  * Updated container image versions and digests for multiple components, including cluster-autoscaler, kubevirt-cloud-provider, kubevirt-csi-driver, cozystack installer, e2e service, matchbox, s3manager, cozystackAPI, cozystack-controller, dashboard, kubeapps-apis, Kamaji, kubeovn-webhook, kubeovn, and kubevirt-csi-node.
  * Updated configuration fields to reflect new image versions where applicable.
  * No changes to user-facing features or functionality.
2025-07-09 22:14:03 +02:00
cozystack-bot
50f3089f14 Prepare release v0.33.2
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2025-07-09 19:36:09 +00:00
Andrei Kvapil
1aadefef75 [ci] overwrite checkout token
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-09 21:24:59 +02:00
Andrei Kvapil
5727110542 [kamaji] Update Kamaji edge-25.7.1
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-09 19:03:07 +02:00
Andrei Kvapil
f2fffb03e4 [cdi] Update CDI v1.62.0
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-09 18:58:43 +02:00
Andrei Kvapil
ab5eae3fbc [kubevirt] Update KubeVirt v1.5.2
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-09 18:58:39 +02:00
Andrei Kvapil
38cf5fd58c [cilium] Update Cilium v1.17.5
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-09 18:54:42 +02:00
Andrei Kvapil
cda554b58c [kube-ovn] Update Kube-OVN v1.13.14
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-09 18:54:01 +02:00
Andrei Kvapil
a73794d751 [linstor] Update LINSTOR v1.31.2
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-09 18:45:12 +02:00
Andrei Kvapil
81a412517c [cozystack-api] Disable strategic-json-patch support (#1179)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium],
[kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres],
[virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesistate to ask for opinion and review in the community chats,
even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported
to a previous version.
-->

## What this PR does

This PR adds a post-processing hook that removes
application/strategic-merge-patch+json from every PATCH operation in the
generated OpenAPI v2/v3 specs.

Strategic-merge-patch (SMP) is never supported for CRDs, and our
aggregated API implementation can’t handle it either. When the spec
advertises SMP, kubectl picks that media-type by default and sends an
SMP body, which the apiserver then rejects with
unable to find api field in struct JSON for the json field ….

By dropping SMP from consumes / content:
* kubectl apply|patch … transparently falls back to
application/merge-patch+json or application/json-patch+json.
* Server-side-apply (kubectl apply --server-side …) keeps working via
application/apply-patch+yaml.

No changes are required on the handler side—only the advertised
media-types are updated.
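
For illustration, a minimal sketch of the effect on a generated OpenAPI v2 spec; the path is taken from the VMInstance example quoted elsewhere in this log, and only the `consumes` list is the point:

```yaml
# Sketch of a PATCH operation before post-processing (OpenAPI v2).
paths:
  '/apis/apps.cozystack.io/v1alpha1/namespaces/{namespace}/vminstances/{name}':
    patch:
      consumes:
        - application/json-patch+json
        - application/merge-patch+json
        - application/strategic-merge-patch+json  # dropped by the post-processing hook
        - application/apply-patch+yaml
# After processing, only the three supported media types remain advertised,
# so kubectl no longer defaults to strategic-merge-patch.
```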


### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
[cozystack-api] Disable strategic-json-patch support
```
2025-07-09 18:34:43 +02:00
Andrei Kvapil
23a7281fbf [cozystack-api] Disable strategic-json-patch support
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-09 18:31:14 +02:00
Andrei Kvapil
f32c6426a9 [cozystack-api] Refactor OpenAPI Schema (#1173)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium],
[kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres],
[virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesistate to ask for opinion and review in the community chats,
even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported
to a previous version.
-->

## What this PR does


### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
[cozystack-api] Fix updating lists on cozystack objects
[cozystack-api] Refactor OpenAPI Schema
[cozystack-api] Support reading OpenAPI Schema from config
[cozystack-api] Disable strategic-json-patch support
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Added support for dynamic OpenAPI schema post-processing for both
OpenAPI v2 and v3 specifications, enabling custom schema injection per
resource kind.
* Introduced a new configuration field to allow specifying a custom
OpenAPI schema.

* **Refactor**
* Streamlined OpenAPI schema handling by moving from inline logic to
modular post-processing functions.
* Implemented dynamic versioning for OpenAPI specs based on resource
configuration changes.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-09 18:30:13 +02:00
Andrei Kvapil
91583a4e1a [cozystack-api] Refactor OpenAPI Schema
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-09 18:28:06 +02:00
Andrei Kvapil
f628e7d9c7 [docs] Add backup and restore instructions for PostgreSQL (#1141)
## What this PR does

Rephrase the descriptions for backup and restore variables
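
As a rough sketch of the documented approach (field names here are illustrative assumptions, not the chart's exact schema), enabling backups to S3-compatible storage and bootstrapping from an existing backup might look like:

```yaml
# Illustrative values only; consult the PostgreSQL app README for the real field names.
backup:
  enabled: true
  schedule: "0 2 * * *"                    # assumed: cron schedule for base backups
  destinationPath: s3://backups/postgres   # assumed: S3-compatible bucket path
  endpointURL: https://s3.example.org      # assumed: S3 endpoint
  s3AccessKey: <access-key>                # assumed
  s3SecretKey: <secret-key>                # assumed
bootstrap:
  enabled: true                            # assumed: restore a new instance from a backup
  recoveryTime: ""                         # assumed: empty means latest available point
```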

### Release note

```release-note
[docs] Add backup and restore instructions for PostgreSQL 
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Documentation**
* Updated PostgreSQL backup restore instructions to use a YAML
configuration approach for bootstrapping from a backup, replacing
previous shell command examples.
* Clarified and restructured backup and recovery documentation,
including detailed configuration examples for enabling backups with
S3-compatible storage.
* Improved descriptions and default values for backup-related
configuration parameters for better clarity and consistency.

* **Chores**
  * Incremented the PostgreSQL app chart version.
  * Updated version mapping for the PostgreSQL package.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-09 11:24:10 +02:00
klinch0
68d1646ae7 make velero deletable (#1176)
<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium],
[kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres],
[virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesistate to ask for opinion and review in the community chats,
even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported
to a previous version.
-->

## What this PR does


### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
- make velero deletable
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Bug Fixes**
* Included the Velero Helm release in the pre-delete suspension process
to ensure proper cleanup during teardown.

* **Chores**
  * Updated the Kubernetes application chart version to 0.25.2.
  * Adjusted version mapping for improved tracking of releases.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-09 12:18:35 +03:00
kklinch0
8fde834e39 make velero addon deletable
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-07-09 11:52:44 +03:00
kklinch0
e99d238647 [docs] Add backup and restore instructions for PostgreSQL
Rephrase the descriptions for backup and restore variables

Co-authored-by: Nick Volynkin <nick.volynkin@gmail.com>
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-07-09 10:47:09 +02:00
Andrei Kvapil
e9435c2d3d [docs] Fix a typo in preset resource tables in the README's (#1172)
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Documentation**
* Updated documentation across multiple applications to reflect a change
in the CPU allocation for the "large" resource preset from 3 CPUs to 2
CPUs. Memory allocation for this preset remains unchanged at 2Gi. No
other documentation changes were made.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-09 10:42:39 +02:00
Andrei Kvapil
da3ee5d0ea [virtual-machine] add comment about sshKeys logic
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-09 10:37:39 +02:00
Andrei Kvapil
411a465b14 [virtual-machine] Fix cloudInit and sshKeys (#1175)
<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium],
[kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres],
[virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesistate to ask for opinion and review in the community chats,
even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported
to a previous version.
-->

## What this PR does

fixes https://github.com/cozystack/cozystack/issues/1148

This PR does two things:
1. **Fixes the cloud-init shebang**
(e1382f51c6)
Dashboard comments were removed unintentionally, which also stripped out
the cloud-init shebang. This fix puts it back.
2. **Improves cloudInit option handling**
The update refines how various cloudInit options are processed, whether
or not sshKeys are provided.
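
A minimal sketch of the two input shapes being unified (the field names come from the parameters mentioned above; the exact layout is an assumption):

```yaml
# Case 1: only sshKeys set; a cloud-init document is generated for them.
sshKeys:
  - ssh-ed25519 AAAAC3Nza... user@example.com

# Case 2: explicit cloudInit provided; the "#cloud-config" shebang must be kept.
cloudInit: |
  #cloud-config
  package_update: true
  packages:
    - qemu-guest-agent
```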

### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
[dashboard] Fix removing shebang for cloud init
[virtual-machine] Fix cloudInit and sshKeys processing
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **New Features**
* Cloud-init configuration now supports providing SSH keys even when
explicit cloud-init data is not set, allowing for easier SSH access
setup.

* **Refactor**
* Simplified and unified the logic for handling cloud-init and SSH key
configuration in virtual machine templates, reducing complexity and
improving maintainability.

* **Chores**
* Updated the default commit reference for Kubeapps components to a
newer version in the dashboard build process.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-09 10:21:37 +02:00
Andrei Kvapil
cad57cd922 [cozystack-api] Fix updating lists (#1171)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium],
[kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres],
[virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesistate to ask for opinion and review in the community chats,
even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported
to a previous version.
-->

## What this PR does

When you update lists in cozystack objects, you might encounter the following error:

```
Warning: resource vminstances/mikrotik-demo is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used o
n resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
warning: error calculating patch from openapi v3 spec: unable to find api field "disks"
Error from server: error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps.cozystack.io/v1alpha1\",\"kind\":\"VMInstance\",\"metadata\":{\"annotations\":{},\"name
\":\"mikrotik-demo\",\"namespace\":\"tenant-vasya\"},\"spec\":{\"disks\":[{\"bus\":\"sata\",\"name\":\"mikrotik-system\"},{\"name\":\"mikrotik-iso\"}],\"instanceProfile\":\"ubuntu\",\"instan
ceType\":\"u1.medium\",\"running\":true}}\n"}},"spec":{"disks":[{"bus":"sata","name":"mikrotik-system"},{"name":"mikrotik-iso"}]}}
to:
Resource: "apps.cozystack.io/v1alpha1, Resource=vminstances", GroupVersionKind: "apps.cozystack.io/v1alpha1, Kind=VMInstance"
Name: "mikrotik-demo", Namespace: "tenant-vasya"
for: "/tmp/2": error when patching "/tmp/2": unable to find api field in struct JSON for the json field "disks"
```

This PR works around the issue.

Related to https://github.com/cozystack/cozystack/pull/1168
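
One standard way to express "any content under `spec`" in the published schema is the extension shown below; whether the workaround uses exactly this construct is an assumption, but it conveys the intent:

```yaml
# Illustrative schema fragment for a dynamically registered kind.
definitions:
  io.cozystack.apps.v1alpha1.VMInstance:   # definition name is illustrative
    type: object
    properties:
      spec:
        type: object
        x-kubernetes-preserve-unknown-fields: true  # accept arbitrary fields such as "disks"
```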

### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
[cozystack-api] Fix updating lists on cozystack objects
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Enhancements**
* Made resource specifications more flexible by allowing any content
under the specification property for dynamically registered resource
kinds.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-09 10:20:30 +02:00
Andrei Kvapil
fe1776b4c8 [cozystack-api] Fix resourceVersion error (#1170)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium],
[kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres],
[virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesistate to ask for opinion and review in the community chats,
even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported
to a previous version.
-->

## What this PR does

This PR fixes error:

```
failed to update HelmRelease: helmreleases.helm.toolkit.fluxcd.io "xxx" is invalid: metadata.resourceVersion: Invalid value: 0x0: must be specified for an update
```

### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
[cozystack-api] Fix resourceVersion error
```
2025-07-09 10:20:14 +02:00
Andrei Kvapil
d9779d55ea [cozystack-api] Fix singular name for cozystack resources (#1169)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium],
[kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres],
[virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesistate to ask for opinion and review in the community chats,
even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported
to a previous version.
-->

## What this PR does


### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
[cozystack-api] Fix singular name for cozystack resources
```
2025-07-09 10:19:57 +02:00
Andrei Kvapil
74d3c89235 [vm-instance] Add bus option; Always specify bootOrder (#1168)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium],
[kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres],
[virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesistate to ask for opinion and review in the community chats,
even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported
to a previous version.
-->

## What this PR does


### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
[vm-instance] Add bus option
[vm-instance] Always specify bootOrder for all disks
```
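
A short sketch of how the new options read in a vm-instance spec; the disk names mirror the VMInstance example quoted earlier in this log, and the bootOrder note describes the rendered template rather than a user-facing field:

```yaml
# Illustrative vm-instance disks configuration.
disks:
  - name: mikrotik-system
    bus: sata          # new: choose the disk bus explicitly
  - name: mikrotik-iso
# The rendered VirtualMachine now sets bootOrder for every disk,
# so the boot sequence follows the list order above.
```
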
2025-07-09 10:19:38 +02:00
Andrei Kvapil
9af6ce25bc [cozystack-api] Specify OpenAPI schema for apps
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-09 08:43:48 +02:00
Andrei Kvapil
c831f53444 [virtual-machine] Fix cloudInit and sshKeys
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-09 08:41:40 +02:00
Andrei Kvapil
2c68eee9f8 [cozystack-api] Fix updating lists
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-08 20:23:06 +02:00
Andrei Kvapil
e6ffb4f4e5 [cozystack-api] Fix resourceVersion error
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-08 18:45:19 +02:00
Andrei Kvapil
e63cc1890e [cozystack-api] Fix singular name for cozystack resources
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-08 18:09:12 +02:00
Andrei Kvapil
1079472a2a [vm-instance] Add bus option; Always specify bootOrder
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-08 17:47:34 +02:00
Kingdon B
e70dfdec31 Update Flux Operator - 0.24.0
Signed-off-by: Kingdon B <kingdon@urmanac.com>
2025-07-08 10:39:45 -04:00
Kingdon B
08c0eecbc5 Update flux-instance chart
Signed-off-by: Kingdon B <kingdon@urmanac.com>
2025-07-08 10:38:38 -04:00
Nick Volynkin
1609931e3f [docs] Fix a typo in preset resource tables in the README's
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-07-08 16:17:23 +03:00
Andrei Kvapil
699d38d8b9 bugfix: vm and vmi add svc to dashboard (#1161)
<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium],
[kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres],
[virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesistate to ask for opinion and review in the community chats,
even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported
to a previous version.
-->

## What this PR does


### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
- vm and vmi add svc to dashboard
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Enhanced dashboard permissions to allow viewing and monitoring of
specific service resources in both the virtual-machine and vm-instance
applications.

* **Chores**
* Updated chart versions for virtual-machine (to 0.12.1) and vm-instance
(to 0.9.1).
* Refreshed version mappings for virtual-machine and vm-instance
components.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-08 10:49:27 +02:00
Andrei Kvapil
acd4663aee Release v0.33.1 (#1166)
This PR prepares the release `v0.33.1`.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Chores**
* Updated container image tags and digests across multiple components to
newer patch versions, including cluster-autoscaler,
kubevirt-cloud-provider, kubevirt-csi-driver, cozystack installer, e2e
testing service, matchbox, s3manager, cozystackAPI,
cozystack-controller, dashboard, kubeapps, Kamaji, kubeovn-webhook,
kubeovn, and kubevirt-csi-node.
* Updated related configuration files to reflect the new image versions
and digests.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-08 11:48:09 +03:00
kklinch0
f251cba363 bugfix: vm and vmi add svc to dashboard
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-07-08 10:02:34 +03:00
Andrei Kvapil
91a07dcda6 [postgres] Restrict password change for user postgres (#1164)
Restrict password change for user postgres

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Chores**
  * Updated the chart version for Postgres from 0.16.0 to 0.17.0.
* Updated the versions map to reference the latest commit and added the
new version.

* **Bug Fixes**
* Enhanced initialization script to forbid creating a user named
"postgres," providing clear error messaging.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-08 09:25:11 +03:00
cozystack-bot
99552bf792 Prepare release v0.33.1
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2025-07-08 06:24:09 +00:00
Andrei Kvapil
45031055f8 [kubevirt-csi] Update Role of CSI controller (#1165)
## What this PR does

Following a [recent
update](0171916b01),
the KubeVirt CSI controller now needs new permissions to manage volumes
for tenant k8s clusters. This patch updates the role granted to the
kcsi-controller deployment of each tenant k8s cluster.

### Release note

```release-note
[kubevirt-csi] Update kcsi-controller role to align with the requirements of the version of the controller in use.
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Expanded permissions for Kubernetes infrastructure service accounts,
including enhanced access to virtual machines, volume snapshots, and
persistent volume claims.

* **Chores**
  * Updated chart version to 0.25.1.
  * Refreshed version mapping for the Kubernetes package.
* Made the CSI driver container image configurable via deployment
settings.
* Integrated CSI driver image reference into deployment configuration
automatically.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-08 09:20:02 +03:00
Andrei Kvapil
d200017f74 Automatically set image for kubevirt-csi-node
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-08 09:19:03 +03:00
Ahmad Murzahmatov
f6eaca3843 [postgres] do not allow change postgres pwd
Signed-off-by: Ahmad Murzahmatov <gwynbleidd2106@yandex.com>
2025-07-08 08:52:29 +06:00
Timofei Larkin
8d3324f958 [kubevirt-csi] Update Role of CSI controller
Following a [recent update](0171916b01),
the KubeVirt CSI controller now needs new permissions to manage volumes
for tenant k8s clusters.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-07-07 19:12:51 +03:00
kklinch0
dd16b8f27f vm add svc to dashboard
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-07-05 12:43:06 +03:00
Andrei Kvapil
70f8266767 Release v0.33.0 (#1159)
This PR prepares the release `v0.33.0`.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Chores**
* Updated container image versions and digests across multiple
components, including ClickHouse backup, nginx-cache,
cluster-autoscaler, kubevirt-cloud-provider, kubevirt-csi-driver,
mariadb-backup, Grafana, s3manager, and others.
* Upgraded image tags and digests for core and system services such as
the installer, API, controller, dashboard, Kamaji, kubeovn, and related
components.
* Updated configuration files to reflect new image versions and digests,
ensuring consistency across deployments.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-04 00:21:43 +03:00
cozystack-bot
a9674d2ae7 Prepare release v0.33.0
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
2025-07-03 20:57:31 +00:00
Andrei Kvapil
cb6a55bc4a [ci] fix releasing pipeline
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-03 23:53:36 +03:00
Andrei Kvapil
3ecbaf23a4 [apps] Give examples of new resources in managed app README's (#1120)
Merge after https://github.com/cozystack/cozystack/pull/1117 and
https://github.com/cozystack/cozystack/pull/1155


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Documentation**
* Improved clarity and detail in parameter descriptions across multiple
app documentation files, especially for resource configuration options.
* Expanded explanations for `resources` and `resourcesPreset`
parameters, including explicit usage, allowed values, and fallback
behavior.
* Added new sections with YAML configuration examples and reference
tables for resource presets in several app READMEs.
* Corrected typos, improved formatting, and updated terminology for
better readability and consistency.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-03 23:26:43 +03:00
Andrei Kvapil
946fad8bb8 [apps] Give examples of new resources in managed app README's
- Change wording for `resources` and `resourcesPreset` variables.
- Explain and give examples of other object-type variables,
  if their child fields are not annotated.
- Fix a few typos, improve wording.
- Bump all application charts to ensure that new texts are shown
  immediately after updating Cozystack.

Co-authored-by: Andrei Kvapil <kvapss@gmail.com>
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-03 22:58:06 +03:00
Andrei Kvapil
f1d86e5045 [keycloak, cozy-lib] Calculate Java heap params (#1157)
## What this PR does

This patch passes Java heap parameters to Keycloak to prevent OOM errors
when the JVM lacks compatibility with cgroups v2 and fails to recognize
container memory requests and limits. A new function is introduced in
cozy-lib to calculate the heap parameters from requests and limits,
setting Xmx to 75% of the memory limit and Xms to the lesser of the
memory request and 25% of the memory limit.
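
A worked example of that rule (the environment variable used to pass the flags is an assumption):

```yaml
# Given these container memory settings:
resources:
  requests:
    memory: 512Mi
  limits:
    memory: 2Gi
# the helper yields:
#   Xmx = 75% of 2Gi             = 1536Mi
#   Xms = min(512Mi, 25% of 2Gi) = 512Mi
# which could be passed to Keycloak as, for example:
#   JAVA_OPTS_APPEND: "-Xmx1536m -Xms512m"
```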

## Release note

```release-note
[keycloak] Calculate and pass Java heap parameters explicitly to prevent OOM errors.
[cozy-lib] Introduce helper function to calculate Java heap params based on memory requests and limits.
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **New Features**
* Added automatic calculation and injection of Java heap size settings
for the Keycloak container, based on resource requests and limits.
* **Improvements**
* Enhanced resource handling to ensure all resource values are
consistently formatted and sanitized.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-03 22:55:11 +03:00
Timofei Larkin
9adcd48c44 [keycloak, cozy-lib] Calculate Java heap params
This patch passes Java heap parameters to Keycloak to prevent OOM errors
when the JVM lacks compatibility with cgroups v2 and fails to recognize
container memory requests and limits. A new function is introduced in
cozy-lib to calculate the heap parameters from requests and limits,
setting Xmx to 75% of the memory limit and Xms to the lesser of the
memory request and 25% of the memory limit.

Change log:
[keycloak] Calculate and pass Java heap parameters explicitly to prevent
OOM errors.
[cozy-lib] Introduce helper function to calculate Java heap params based
on memory requests and limits.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-07-03 22:15:04 +03:00
Timofei Larkin
fb82bfae11 [platform] Always set resources for managed apps (#1156)
## What this PR does

This patch removes the loophole that allowed resource requests and limits
to be left unspecified in managed apps. CPU, memory, and ephemeral storage
are now filled in from the resource preset (default or user-specified)
whenever they are not explicitly set in .Values.resources. "none" is no
longer an accepted value in resourcePresets, and the primary resources
always have an explicit value for proper billing and isolation.
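
In values terms the behavior sketches out as follows (the flat `resources` keys follow the managed-app READMEs referenced in this log; the preset name is just an example):

```yaml
# Explicit values win; anything omitted is filled from the preset.
resourcesPreset: small      # "none" is no longer accepted
resources:
  cpu: 500m                 # explicit: kept as-is
  # memory and ephemeral-storage omitted: taken from the "small" preset,
  # so the deployed app always ends up with complete requests and limits.
```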

<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium],
[kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres],
[virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesistate to ask for opinion and review in the community chats,
even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported
to a previous version.
-->



### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
[platform] Always set resources for managed apps. "none" is no longer valid in resourcePresets, deployed apps now always have explicitly specified cpu, memory, ephemeral-storage requests and limits.
```
2025-07-03 19:56:39 +04:00
Timofei Larkin
bd9e283d3b [platform] Always set resources for managed apps
This patch removes the loophole that allowed resource requests and limits
to be left unspecified in managed apps. CPU, memory, and ephemeral storage
are now filled in from the resource preset (default or user-specified)
whenever they are not explicitly set in .Values.resources. "none" is no
longer an accepted value in resourcePresets, and the primary resources
always have an explicit value for proper billing and isolation.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-07-03 17:45:32 +03:00
Andrei Kvapil
d2126b6703 Save a list of observed images after workflow (#1089)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Added a process to list images used in the environment before deletion
during cleanup operations.
- **Chores**
- Enhanced environment cleanup workflow with improved visibility into
used images.
- Introduced a shared writable directory between host and container for
better file management during testing.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-03 15:45:12 +03:00
Andrei Kvapil
73fe621da1 [cozy-lib] refactor resources (#1155)
Adds missing commits from https://github.com/cozystack/cozystack/pull/1127,
which were skipped by mistake:

- [cozy-lib, bug] divf by cpu ratio, not mulf (#1125)
- [cozy-lib] remove handler for nested resources/requests map
- [cozy-lib] Introduce memory-allocation-ratio and
ephemeral-strorage-allocation-ratio options (see the configuration sketch after this list)
- [system] Reduce resources for some system apps
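
A sketch of what that configuration might look like, assuming the new ratios sit next to the existing cpu-allocation-ratio platform option (ConfigMap name, namespace, and values are assumptions; the keys are spelled as in the commit titles):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cozystack          # assumed ConfigMap name
  namespace: cozy-system   # assumed namespace
data:
  cpu-allocation-ratio: "8"                  # example value
  memory-allocation-ratio: "1"               # example value
  ephemeral-strorage-allocation-ratio: "1"   # key spelled as in the commit title
```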

<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium],
[kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres],
[virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesistate to ask for opinion and review in the community chats,
even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported
to a previous version.
-->

## What this PR does


### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
[cozy-lib] refactor resources
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **New Features**
* Introduced support for memory and ephemeral storage allocation ratios,
allowing more flexible resource allocation.

* **Refactor**
* Simplified resource preset structure for easier configuration and
management.
* Updated resource preset logic to use a new sanitization process for
resource values.

* **Bug Fixes**
  * Improved error handling for invalid resource preset keys.

* **Chores**
* Adjusted resource requests and limits for Redis master, FluxCD
operator, and Vertical Pod Autoscaler components to optimize resource
usage.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-03 15:40:35 +03:00
Andrei Kvapil
0b7bbb1ba9 [system] Reduce resources for some system apps
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-03 15:00:41 +03:00
Andrei Kvapil
bb46aa4b7d [cozy-lib] Introduce memory-allocation-ratio and ephemeral-strorage-allocation-ratio options
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-03 15:00:41 +03:00
Andrei Kvapil
6256e40169 [cozy-lib] remove handler for nested resources/requests map
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-03 15:00:40 +03:00
Andrei Kvapil
22cda073b9 [cozy-lib, bug] divf by cpu ratio, not mulf (#1125)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

* **Refactor**
* Updated the structure of resource presets for improved clarity and
processing.
* Adjusted template logic to streamline resource handling and removed
previous resource limit calculations.
* Modified template parameters to enhance flexibility in resource
processing.
* **Chores**
* Improved internal template invocation for better compatibility with
resource data.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-03 15:00:24 +03:00
Andrei Kvapil
0d46393e8c [nfs-driver] Introduce new module (#1133)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


## What this PR does

This PR adds a new optional module to support NFS shares.

## Way to test it:

#### driver and provisioner setup

```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  labels:
    cozystack.io/system: "true"
    pod-security.kubernetes.io/enforce: privileged
  name: cozy-nfs-driver
spec:
  finalizers:
  - kubernetes
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  labels:
    cozystack.io/repository: system
    cozystack.io/system-app: "true"
  name: nfs-driver
  namespace: cozy-nfs-driver
spec:
  chart:
    spec:
      chart: cozy-nfs-driver
      reconcileStrategy: Revision
      sourceRef:
        kind: HelmRepository
        name: cozystack-system
        namespace: cozy-system
      version: '>= 0.0.0-0'
  dependsOn:
  - name: cilium
    namespace: cozy-cilium
  - name: kubeovn
    namespace: cozy-kubeovn
  install:
    crds: CreateReplace
    remediation:
      retries: -1
  interval: 5m
  releaseName: nfs-driver
  suspend: true
  upgrade:
    crds: CreateReplace
    remediation:
      retries: -1
```

Then `cd packages/system/csi-driver-nfs` and:

```
make apply
```

#### export share

```bash
# Install an NFS server on the host and export /data world-writable (test setup only)
apt install nfs-server
mkdir /data
chmod 777 /data
echo '/data *(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -a
```

#### configure connection

```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.244.57.210
  share: /data
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
mountOptions:
  - nfsvers=4.1
```

#### order volume

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
```

### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
[nfs-driver] Introduce new optional module to order volumes from NFS shares
```
2025-07-03 14:32:51 +03:00
Andrei Kvapil
193f43d7bb [kubernetes] Fix dead-lock while reattaching a KubeVirt-CSI volume (#1135)
## What this PR does


This PR imports the upstream fix for the volume reattaching procedure:
- https://github.com/kubevirt/csi-driver/pull/143

### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
[kubernetes] Fix dead-lock while reattaching a KubeVirt-CSI volume
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Improved volume management for virtual machines by adding checks to
skip unnecessary attach or detach operations when the volume is already
in the desired state.

* **Tests**
* Added new unit tests to verify optimized volume attach/detach
workflows and ensure fast-path logic is functioning correctly.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-03 14:27:10 +03:00
Andrei Kvapil
8ec882ca5f [dx] Refactor collect-images functionality
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-03 14:26:56 +03:00
Andrei Kvapil
c596805b60 [virtual-machines] Introduce golden disks functionality (#1112)
Use Golden Images to speed up VM / VMI deploy

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Added support for using pre-imported "golden image" disks for virtual
machines, enabling faster provisioning by referencing existing images
instead of downloading via HTTP.
* Introduced a script to automate the import of golden images into the
system.

* **Improvements**
* Updated documentation and configuration to clarify and demonstrate how
to use golden images.
* Enhanced permission settings to support secure cloning of data
volumes.

* **Versioning**
  * Updated vm-disk package to version 0.3.0.
  * Updated virtual-machine app version to 0.12.0.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
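
A rough sketch of the difference for a vm-disk, with purely illustrative field names (the chart's actual schema may differ):

```yaml
# Previous flow: every disk downloads its image over HTTP.
source:
  http:
    url: https://example.org/images/ubuntu.img   # illustrative
# Golden-image flow: clone a pre-imported image instead of downloading.
# source:
#   image:
#     name: ubuntu-golden                        # illustrative field and name
```
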
2025-07-03 14:25:12 +03:00
Timofei Larkin
f891d0bee6 Add exec bit to script, sanitize image list
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-07-03 13:56:41 +03:00
Andrei Kvapil
1f748d563f Copy contents of directory instead of directory
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-07-03 13:56:35 +03:00
Timofei Larkin
210f3c7b6b Save images with unique filename
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-07-03 13:54:32 +03:00
Timofei Larkin
433bfe7b6c Save image list outside of sandbox
Because the sandbox is torn down after successful tests

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-07-03 13:54:32 +03:00
Timofei Larkin
fa6442998a Save a list of observed images after workflow
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-07-03 13:54:32 +03:00
Andrei Kvapil
6d06d3b1fb [nfs-driver] Introduce new module
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-03 13:46:24 +03:00
Andrei Kvapil
4c347cc026 [kubernetes] Fix dead-lock while reattaching a KubeVirt-CSI volume
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-03 13:40:54 +03:00
Andrei Kvapil
986de717f1 [virtual-machine] Refactor golden images
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-03 13:33:44 +03:00
Andrei Kvapil
d38c8aa5ab [CDI] golden disks feature for reuse
Use Golden Images to speed up VM / VMI deploy

Signed-off-by: gwynbleidd <gwynbleidd2106@yandex.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-03 13:23:44 +03:00
Andrei Kvapil
7f9f850b47 [tests] Fix pre-commit check for kubernetes options
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-03 13:08:20 +03:00
klinch0
ca772fae2e platform add velero (#1132)
<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium],
[kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres],
[virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesistate to ask for opinion and review in the community chats,
even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported
to a previous version.
-->

## What this PR does


### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
[]
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Added Velero integration as an optional addon for Kubernetes cluster
backup and restore.
* Introduced configurable parameters to enable Velero and override its
settings.
* Included a comprehensive Helm chart, manifests, and configuration
files for deploying Velero.
* Added support for Velero-related Kubernetes resources, including
backup, restore, schedule, and data mover management.
* Enabled Prometheus monitoring and metrics for Velero components with
PodMonitor and ServiceMonitor support.
* Provided customizable backup storage and volume snapshot location
settings.
  * Added automated Helm hooks for CRD upgrades and cleanup jobs.
  * Included node-agent DaemonSet deployment for Velero.

* **Documentation**
* Updated documentation to describe new Velero addon parameters,
installation, upgrade, and usage instructions.

* **Chores**
  * Incremented Kubernetes app chart version to reflect new features.
  * Updated version mapping and bundle configurations to include Velero.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
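
In the kubernetes app's values this reads roughly as below; the addon key follows the pattern of the other addons, so treat the exact path as an assumption:

```yaml
addons:
  velero:
    enabled: true        # turn the backup/restore addon on
    valuesOverride: {}   # assumed field for overriding the bundled Velero chart settings
```
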
2025-07-03 09:59:34 +03:00
Andrei Kvapil
fb831c05c0 vms add sockets to resources (#1131)
<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium],
[kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres],
[virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesistate to ask for opinion and review in the community chats,
even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported
to a previous version.
-->

## What this PR does


### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
- Allow setting the socket count for VM and VMI
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Added support for specifying the number of CPU sockets
(resources.sockets) in virtual machine configurations for both
virtual-machine and vm-instance applications.

* **Documentation**
* Updated documentation to describe the new resources.sockets parameter
and its role in defining vCPU topology.

* **Chores**
* Incremented chart versions for virtual-machine (to 0.12.0) and
vm-instance (to 0.9.0).
  * Updated version mappings to reflect the latest releases.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
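
The summary names the parameter `resources.sockets`; a minimal sketch of its use:

```yaml
# Illustrative vm-instance / virtual-machine values.
resources:
  cpu: 4
  memory: 8Gi
  sockets: 2   # vCPU topology: 4 vCPUs spread over 2 sockets
```
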
2025-07-02 17:33:29 +03:00
Andrei Kvapil
f7f8020b9b [tenant] Respect cpu-allocation-ratio in resourceQuotas (#1119)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Chores**
  * Updated the tenant application version to 1.11.0.
  * Updated version mapping for the tenant package.

* **Refactor**
* Improved the formatting and processing of resource quota
specifications in the Kubernetes manifest template.

* **Documentation**
* Simplified and clarified the example resource quota configuration in
the configuration file comments.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-02 17:12:14 +03:00
kklinch0
98194a7414 platform add velero
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-07-02 16:47:44 +03:00
kklinch0
70c7978306 vms add sockets to resources
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-07-02 15:17:30 +03:00
Andrei Kvapil
d5521df9bd [tenant] Respect cpu-allocation-ratio in resourceQuotas
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-02 15:14:56 +03:00
Andrei Kvapil
6ed1243f86 [kubernetes] fix ingress template (#1143)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium],
[kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres],
[virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesistate to ask for opinion and review in the community chats,
even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported
to a previous version.
-->

## What this PR does


### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
[]
```
2025-07-02 15:14:25 +03:00
Andrei Kvapil
d1275ecd08 [kubernetes] fix ingress template
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-02 15:13:50 +03:00
Andrei Kvapil
6c9d8bb47f [dx] fix: exclude ps from self destructing environments check (#1142)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium],
[kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres],
[virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesistate to ask for opinion and review in the community chats,
even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported
to a previous version.
-->

## What this PR does


### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
[dx] fix: exclude ps from self destructing environments check
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Bug Fixes**
* Improved process filtering to exclude both "qemu" and "ps" commands
when identifying external processes during testing.
* Updated error handling in installation tests to provide warnings
without failing the test immediately.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-02 13:21:46 +02:00
Andrei Kvapil
1f240387f9 [dx] fix: exclude ps from self destructing environments check
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-02 13:37:15 +03:00
Andrei Kvapil
1d3964352e [ci] Skip Cozystack tests on PRs that only change the docs (#1136)
- Skip long workflows on PRs that only change files inside the `./docs`
directory (see the sketch after this list).
- Not applicable to other docs in this repository, such as
`packages/apps/**/*.md`, as they're part of the build.
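
A minimal sketch of such a docs-only filter in GitHub Actions (the workflow name and trigger set are illustrative):

```yaml
# .github/workflows/tests.yaml (illustrative)
name: tests
on:
  pull_request:
    paths-ignore:
      - 'docs/**'   # docs-only PRs no longer trigger the long Cozystack test jobs
```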



<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium],
[kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres],
[virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesistate to ask for opinion and review in the community chats,
even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported
to a previous version.
-->

## What this PR does


### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
[ci] Skip Cozystack tests on PRs that only change the docs
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Chores**
* Updated automated workflows to skip running on pull requests that only
modify documentation files, reducing unnecessary workflow runs.
* Refined workflow triggers to exclude events triggered by labeling pull
requests, streamlining automation processes.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-02 10:57:03 +02:00
Andrei Kvapil
512277fa93 [kubernetes] Add option for exposing ingress-nginx via LoadBalancer (#1114)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Added a new configuration option to choose the method for exposing the
Ingress-NGINX controller: "Proxied" or "LoadBalancer".
- **Documentation**
- Updated documentation to describe the new `exposeMethod` option and
clarified the conditions under which domain names are used.
- **Bug Fixes**
- Improved conditional logic to ensure Ingress resources are only
created when the appropriate expose method is selected.
- **Chores**
	- Incremented the chart version to 0.25.0.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
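
In the kubernetes app's values this looks roughly as follows; the option name comes from the summary above, while the surrounding structure is an assumption:

```yaml
addons:
  ingressNginx:
    enabled: true
    exposeMethod: LoadBalancer   # the other documented value is "Proxied"
```
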
2025-07-02 10:52:44 +02:00
Andrei Kvapil
cd7fec68fc [e2e] Add retries (#1123)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Chores**
* Improved reliability of automated testing workflows by adding retry
logic to key setup and test steps.
* Simplified resource management in end-to-end tests by switching to a
consistent apply command for creating or updating Kubernetes resources.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-02 10:46:09 +02:00
Andrei Kvapil
d12d07fd5c [etcd] Update etcd application (fix resources and headless services) (#1128)
ref to https://github.com/cozystack/cozystack/pull/1127,
https://github.com/clastix/kamaji/issues/856 and
https://github.com/aenix-io/etcd-operator/pull/291

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **New Features**
  * Updated etcd chart to version 2.9.0.
* **Improvements**
* Simplified etcd endpoint configuration to use a single static
endpoint.
* Expanded TLS certificate DNS names to include additional service
addresses.
  * Streamlined resource configuration for etcd deployment.
* **Chores**
  * Updated version mapping for etcd package.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-02 10:45:37 +02:00
Andrei Kvapil
00bd212886 [dx] Introduce cozyreport tool (#1139)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium],
[kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres],
[virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesistate to ask for opinion and review in the community chats,
even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported
to a previous version.
-->

## What this PR does


### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
[dx] Introduce cozyreport tool and enable collecting report in CI pipeline
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Introduced automated collection of detailed diagnostic reports from
Kubernetes clusters after test runs.
* Diagnostic reports are packaged and uploaded as artifacts for each
pull request.
* **Chores**
* Updated workflow to ensure cleanup steps wait until diagnostic report
collection is complete.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-02 10:45:06 +02:00
Andrei Kvapil
d19d6b58d0 [dx] better check for processes in self destructing environments
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-02 11:37:32 +03:00
Andrei Kvapil
f953db50da [dx] Introduce cozyreport tool
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-02 10:37:40 +03:00
Andrei Kvapil
55e11fcc7b [cozy-lib] refactor resources (#1127)
- [cozy-lib, bug] divf by cpu ratio, not mulf
- [cozy-lib] remove handler for nested resources/requests map
- [cozy-lib] Introduce memory-allocation-ratio and
ephemeral-strorage-allocation-ratio options
- [system] Reduce resources for some system apps
- [hack] Add migration script for fixing nested resource maps


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Introduced a migration process to enhance resource configuration by
consolidating CPU and memory settings.
* System version is automatically updated to reflect the latest changes.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-02 09:23:42 +02:00
Andrei Kvapil
12184bc2b9 [dx] better check for processes in self destructing environments (#1140)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium],
[kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres],
[virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesistate to ask for opinion and review in the community chats,
even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported
to a previous version.
-->

## What this PR does


### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
[dx] better check for processes in self destructing environments
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Chores**
* Updated system image to include additional utilities for process
management.

* **Refactor**
* Simplified internal process filtering to improve reliability and
maintainability.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-02 09:07:58 +02:00
Andrei Kvapil
39daa3a38a [dx] better check for processes in self destructing environments
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-02 09:54:15 +03:00
Andrei Kvapil
a5ff9bf65b [etcd] Update etcd application (fix resources and headless services)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-02 06:15:38 +03:00
Andrei Kvapil
036fa6f888 [hack] Add migration script for fixing nested resource maps
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-02 06:15:04 +03:00
Andrei Kvapil
792f6b4af8 [tests] Introduce self destructing environments (#1138)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium],
[kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres],
[virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesitate to ask for opinions and reviews in the community chats,
even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported
to a previous version.
-->

## What this PR does


### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
[tests] Introduce self destructing environments
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Introduced a process-monitoring entrypoint script for end-to-end
testing containers, allowing for customizable timeout intervals.

* **Chores**
* Updated the Docker image used for end-to-end testing to the latest
available version.
* Modified Docker build context and container runtime options for
testing environments.
* Removed systemd timer and service management steps from workflow
automation.
* Added a new test to verify the presence of required installer assets
before running end-to-end tests.
* Removed redundant installer asset checks from cluster preparation
tests.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-07-02 04:25:42 +02:00
Andrei Kvapil
52714f5cce [tests] Introduce self destructing environments
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-07-02 03:42:14 +03:00
Nick Volynkin
bc54bd7bb0 [ci] Don't restart tests and pre-commit checks when PR is labeled
I labeled my PR and CI was restarted, so now I have to wait even more.
We have no labels governing CI, so there's no reason to restart it on `labeled`.

Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-07-01 19:37:27 +03:00
Nick Volynkin
0b85a52bee [ci] Skip Cozystack tests on PRs that only change the docs
- Skip long workflows on PRs that only change files inside the `./docs` directory (see the sketch below).
- Not applicable to other docs in this repository, such as `packages/apps/**/*.md`,
  as they're part of the build.
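A minimal sketch combining these two CI tweaks (no restart on `labeled`, skip on docs-only changes), assuming a standard GitHub Actions `pull_request` trigger; the actual workflow files may differ:

```yaml
on:
  pull_request:
    # `labeled` is intentionally absent, so adding a label does not restart CI
    types: [opened, synchronize, reopened]
    # docs-only changes don't trigger the long test workflows; other Markdown
    # (e.g. packages/apps/**/*.md) still does, as it's part of the build
    paths-ignore:
      - 'docs/**'
```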

Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-07-01 16:29:29 +03:00
klinch0
b3a2bc85e3 Disable sign up in alerta (monitoring) (#1129)
<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium],
[kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres],
[virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesitate to ask for opinions and reviews in the community chats,
even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported
to a previous version.
-->

## What this PR does


### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
[]
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **New Features**
* Added a new environment variable to the monitoring alert system to
control signup availability.

* **Chores**
  * Updated the monitoring package version to 1.12.0.
* Revised version mapping for improved tracking of monitoring package
releases.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-30 18:42:21 +03:00
Andrei Kvapil
d097433266 [e2e] Add retries
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-30 11:54:31 +02:00
kklinch0
2d294f0546 monitoring disable alerta sign up
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-06-29 01:18:38 +03:00
Andrei Kvapil
78b4d06b25 [apps] Add enum of allowed values to resourcePreset in all applications (#1117)
It was present in some apps, such as managed kubernetes, but was missing
in others.

bitnami/readme-generator removes enums after re-generating README, so
now we patch them back using `yq` in Makefiles.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Resource preset options are now strictly limited to a predefined set
of values across multiple apps, ensuring only valid selections such as
"none", "nano", "micro", "small", "medium", "large", "xlarge", and
"2xlarge" can be used.
- **Bug Fixes**
- Improved validation for resource presets to prevent invalid entries
and enhance consistency in configuration.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-28 14:03:13 +02:00
Andrei Kvapil
ae90969b7e [platform] rm kk memory limit (#1122)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Chores**
* Removed the memory limit for Keycloak deployment, retaining only
resource requests for memory and CPU.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-28 13:56:12 +02:00
Andrei Kvapil
6732205b24 Create LoadBalancer service for single-node MySQL (#1113)
## Changelog
```
[mysql] Bugfix: external=true did not work for MySQL deployed with a single replica,
since the MariaDB operator does not create separate primary and secondary services for a single-node DB.
A special condition is added to make the "all-node" service a LoadBalancer if external=true and replicas=1.
```
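A condensed sketch of that condition as a Helm template fragment (illustrative only; value and service names follow the changelog wording, not necessarily the actual chart):

```yaml
# Sketch: expose the "all-node" service directly when external access is
# requested and there is only one replica (no primary/secondary split).
{{- if and .Values.external (eq (int .Values.replicas) 1) }}
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
spec:
  type: LoadBalancer
{{- end }}
```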

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Improved handling of external service exposure for MySQL deployments,
with refined logic for LoadBalancer configuration based on the number of
replicas.
- **Chores**
  - Updated MySQL chart version to 0.8.2.
  - Adjusted version mapping to reflect the latest changes.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Resolves https://github.com/cozystack/cozystack/issues/1095
2025-06-28 13:36:47 +02:00
Andrei Kvapil
60dee45a61 [dx] Fix Makefile envs for capi-providers (#1115)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Updated package naming conventions for multiple components to improve
consistency in build and deployment processes.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-28 13:36:01 +02:00
Andrei Kvapil
70cd3ce3e7 [maintenance] Add a PR template (#1121)
<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium],
[kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres],
[virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesitate to ask for opinions and reviews in the community chats,
even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported
to a previous version.
-->

## What this PR does

Adds a PR template that will be used for all new pull requests.
It promotes some good practices and has a designated space for a release
note that we can later compile to form a changelog.

### Release note

<!--  Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at
https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->

```release-note
[maintenance] Add a pull request template for promoting good practices and automating release notes generation.
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **Documentation**
* Added a new pull request template to guide contributors on formatting
PR titles, labeling, and writing release notes. The template also
encourages marking work-in-progress PRs as drafts and provides sections
for PR descriptions and release notes.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-28 13:35:27 +02:00
Andrei Kvapil
9dc21c6c2d [ci] Use Nexus as a pull-through cache for CI (#1124)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Chores**
* Updated registry mirror endpoints for improved cluster configuration,
adding multiple new mirrors for various registries.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
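For illustration, a pull-through cache like this is usually wired in through registry mirror entries in the nodes' machine configuration; a hedged sketch with placeholder endpoints:

```yaml
machine:
  registries:
    mirrors:
      docker.io:
        endpoints:
          - https://nexus.example.org/repository/docker-proxy  # placeholder URL
      ghcr.io:
        endpoints:
          - https://nexus.example.org/repository/ghcr-proxy    # placeholder URL
```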
2025-06-27 09:29:16 +02:00
Timofei Larkin
4648c7b4c1 [ci] Use Nexus as a pull-through cache for CI
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-26 16:45:45 +03:00
kklinch0
6a080fbf5d [platform] rm kk memory limit 2025-06-26 11:19:25 +03:00
Nick Volynkin
72f40f32ad [maintenance] Add a PR template
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-06-26 10:14:25 +03:00
Nick Volynkin
cfc8c269f3 [apps] Add enum of allowed values to resourcePreset in all applications
It was present in some apps, such as managed kubernetes, but missing in others.

bitnami/readme-generator removes enums after re-generating README,
so now we patch them back using `yq` in Makefiles.

Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-06-25 16:48:20 +03:00
Andrei Kvapil
1da45ff039 [dx] Fix Makefile envs for capi-providers
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-25 14:50:12 +02:00
Andrei Kvapil
c6ee006d6b [kubernetes] Add option for exposing ingress-nginx via LoadBalancer
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-25 14:44:52 +02:00
Timofei Larkin
848abc4bd1 Create LoadBalancer service for single-node MySQL
[mysql] Bugfix: external=true did not work for MySQL deployed with a
single replica, since the MariaDB operator does not create separate
primary and secondary services for a single-node DB. A special condition
is added to make the "all-node" service a LoadBalancer if external=true
and replicas=1.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-25 14:24:45 +03:00
Andrei Kvapil
4369b03141 Release v0.32.1 (#1110)
This PR prepares the release `v0.32.1`.
2025-06-25 01:34:54 +02:00
github-actions
baefc78bfe Prepare release v0.32.1
Signed-off-by: github-actions <github-actions@github.com>
2025-06-24 23:07:51 +00:00
Nick Volynkin
1db08d0b73 [docs] Add release notes for v0.31.2
Resolves #1060

Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-06-25 01:07:01 +02:00
Nick Volynkin
b2ed7525cd [docs] Add release notes for v0.31.1
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-06-25 01:07:00 +02:00
Andrei Kvapil
4f11814551 [kubernetes] remove useCustomSecretForPatchContainerd option, enable it by default (#1104)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Bug Fixes**
- The application now always attempts to copy the "patch-containerd"
secret if it exists, removing previous conditional behavior.
- **Documentation**
- Removed references to the `useCustomSecretForPatchContainerd`
parameter from user documentation and configuration files for improved
clarity.
- **Chores**
- Updated the chart version to 0.24.2 and revised the version mapping to
reflect the latest release.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-25 01:04:11 +02:00
Andrei Kvapil
307b5617f0 [tests] don't wait for postgres ro service (#1109)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-25 01:03:30 +02:00
Andrei Kvapil
7cf0ce1abf [tests] don't wait for postgres ro service
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-25 01:03:17 +02:00
Andrei Kvapil
5602e9753f [ci] Refactor Github workflows (#1107)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

- **New Features**
- Pull request workflows now support release pull requests by fetching
artifacts from draft releases and running all jobs without label-based
exclusions.
- Test matrices are now generated dynamically, improving flexibility in
end-to-end application testing.
- Added a new end-to-end test verifying tenant creation with isolated
mode enabled.

- **Refactor**
- Workflow steps and job dependencies have been streamlined for improved
efficiency and maintainability.
- Workflow names and concurrency group names have been updated for
clarity.
- Environment preparation and artifact handling have been unified into
consolidated jobs.
	- Release-related workflow simplified to a single finalize job.
- Makefile targets for asset copying and test execution have been
reorganized for better modularity.

- **Tests**
	- End-to-end application and cluster test scripts have been removed.
- Removed collective end-to-end test target; individual app test targets
remain.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-25 00:18:42 +02:00
Andrei Kvapil
ab20502b37 [tests] increase postgres timeouts (#1108)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-25 00:17:20 +02:00
Andrei Kvapil
8369fcddbf [tests] increase postgres timeouts
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-25 00:16:38 +02:00
Andrei Kvapil
9f9ca50dd9 [ci] Refactor Github workflows (#1107)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Pull request workflows now support release pull requests by fetching
artifacts from draft releases and running all jobs without label-based
exclusions.
- Test matrices are now generated dynamically, improving flexibility in
end-to-end application testing.
- Added a new end-to-end test verifying tenant creation with isolated
mode enabled.

- **Refactor**
- Workflow steps and job dependencies have been streamlined for improved
efficiency and maintainability.
- Workflow names and concurrency group names have been updated for
clarity.
- Environment preparation and artifact handling have been unified into
consolidated jobs.
	- Release-related workflow simplified to a single finalize job.
- Makefile targets for asset copying and test execution have been
reorganized for better modularity.

- **Tests**
	- End-to-end application and cluster test scripts have been removed.
- Removed collective end-to-end test target; individual app test targets
remain.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-25 00:15:10 +02:00
Andrei Kvapil
e7681debe2 [ci] Refactor Github workflows
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-24 23:41:10 +02:00
Andrei Kvapil
36b10341ca [apps] Refactor resources (#1106)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Documentation**
- Clarified and simplified descriptions for the `resourcesPreset`
parameter across all app documentation, emphasizing it is used only when
`resources` is not explicitly set and listing allowed values.
- Reformatted and improved consistency in parameter tables and comments
for better readability.

- **Style**
- Simplified commented examples for resource configuration in values
files, using flat CPU and memory entries instead of nested structures.

- **Chores**
  - Incremented chart versions for multiple applications.
  - Updated version mappings to reflect new patch releases.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-24 19:27:45 +02:00
Andrei Kvapil
0c234e400b [Tests] Add Kafka, Redis (#1077)
Add extra tests into e2e apps

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Tests**
- Added automated end-to-end tests for Kafka and Redis resources in
Kubernetes, including creation, readiness verification, and cleanup.
These tests ensure that the Kafka and Redis clusters are properly
deployed and their components are functioning as expected.
- Updated PostgreSQL test to improve cleanup by removing initialization
jobs after resource deletion.
- **Chores**
- Expanded the pull request testing workflow to include Kafka and Redis
applications in the test matrix for broader coverage.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-24 19:00:21 +02:00
Ahmad Murzahmatov
c0b7f4e938 [Tests] Add Kafka, Redis, also add to workflow
Remove postgres job after completion

Signed-off-by: Ahmad Murzahmatov <gwynbleidd2106@yandex.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-24 18:10:31 +02:00
Andrei Kvapil
654778a0c7 [apps] Refactor resources
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-24 17:35:26 +02:00
Andrei Kvapil
86fdb51236 [clickhouse][kafka] fix openapispec generation (#1105)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-24 17:11:29 +02:00
Andrei Kvapil
e8b83fbbda [clickhouse][kafka] fix openapispec generation
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-24 17:10:55 +02:00
Andrei Kvapil
29f26f4dd0 [clickhouse][kafka] increase resources (#1103)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-24 16:57:06 +02:00
Andrei Kvapil
a0526be17d [clickhouse][kafka] increase resources
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-24 16:56:00 +02:00
Andrei Kvapil
4e41c133b4 [kafka] downgrade operator to 0.45.1-rc1 (#1102)
Fix regression introduced by
https://github.com/cozystack/cozystack/pull/1082, since Strimzi v0.46
does not support ZooKeeper.
2025-06-24 13:39:04 +02:00
Andrei Kvapil
587904e8cc [kafka] downgrade operator to 0.45.1-rc1
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-24 13:37:42 +02:00
Andrei Kvapil
6358fd7a45 Release v0.32.1 (#1101)
This PR prepares the release `v0.32.1`.
2025-06-24 12:49:45 +02:00
Andrei Kvapil
af595f34dc Update workflow
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-24 11:38:35 +02:00
github-actions
2832058036 Prepare release v0.32.1
Signed-off-by: github-actions <github-actions@github.com>
2025-06-24 08:55:52 +00:00
Andrei Kvapil
b9d3b43c3e Update Flux Operator (0.23.0) (#1078)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Added new configuration options for workload identity, storage
selection, and scheduling in Flux operator CRDs.
- Enhanced support for semantic version filtering and new input provider
types.
- **Bug Fixes**
- Improved default values and descriptions for several configuration
fields.
- **Chores**
	- Updated Helm chart and documentation versions to 0.23.0.
	- Upgraded CRDs to use the latest controller-gen version.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-24 10:45:38 +02:00
Andrei Kvapil
bd0bc64c2a linstor fixes (#1094)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Added configurable DRBD network options to the cluster resource,
allowing adjustment of connection and timeout settings.

- **Bug Fixes**
- Removed automatic reconnection attempts for DRBD devices stuck in the
"Connecting" state to improve stability.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-24 10:41:57 +02:00
Andrei Kvapil
2dd62f052e [docs] Release notes for v0.32.0 and two beta-versions (#1043)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Documentation**
- Added a changelog detailing new features, security and bug fixes,
dependency updates, and CI/CD improvements for the latest development
release.
- Included information on enhanced Kubernetes cluster configurations,
virtual machine support, monitoring enhancements, and updated
installation and management guides.
- Provided acknowledgments for new contributors and links to the full
changelog comparison.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-24 10:41:28 +02:00
Andrei Kvapil
778577e0d5 Wrap cert-manager CRDs in conditional (#1076)
There's no point in installing the CRDs if cert-manager itself is
disabled.
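A minimal sketch of the conditional, assuming a value along the lines of `.Values.addons.certManager.enabled` (the actual value path in the chart may differ):

```yaml
{{- if .Values.addons.certManager.enabled }}
# cert-manager CRDs are rendered only when the addon itself is enabled
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: certificates.cert-manager.io
# (remaining CRD spec omitted in this sketch)
{{- end }}
```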

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- cert-manager CRDs are now only installed when the cert-manager addon
is enabled, providing more control over addon management.

- **Chores**
  - Updated the Kubernetes chart version to 0.24.1.
- Adjusted version mapping to reflect the new chart version and
associated commit.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-24 10:40:05 +02:00
Andrei Kvapil
8568b9925f Make VMAgent extraArgs tunable (#1091)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Improved flexibility for VMAgent configuration by allowing users to
override default extra arguments through Helm values.

- **Chores**
- Centralized default argument definitions for VMAgent to simplify
configuration management.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
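For example, an override might look roughly like this in Helm values (the key layout is assumed for illustration; the flags shown are standard vmagent flags):

```yaml
vmagent:
  extraArgs:
    remoteWrite.maxDiskUsagePerURL: "1GiB"   # cap the on-disk buffer per remote-write URL
    promscrape.maxScrapeSize: "32MB"         # allow larger scrape responses
```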
2025-06-24 10:39:28 +02:00
Andrei Kvapil
46ad1b1cd8 [tests] Upd Kubernetes v1.33 (#1083)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Chores**
- Updated tool versions for kubectl, talosctl, and helm to the latest
releases in the testing environment.
- Introduced a configurable version for cozypkg to improve version
management.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-24 10:38:58 +02:00
Andrei Kvapil
066ed77918 add some linstor fixes
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-24 10:38:15 +02:00
Andrei Kvapil
c7be1a5572 [tests] increase disk space for vms (#1097)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Increased the disk size for VM data images from 100GB to 200GB in
end-to-end cluster tests.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-24 10:37:33 +02:00
Andrei Kvapil
439e927f6b Update Kafka-operator v0.46.0 (#1082)
Fixes https://github.com/cozystack/cozystack/issues/937

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Enhanced support for KRaft mode, with related schema and documentation
updates.
- Added advanced DNS and pod security context configuration options for
Kafka, KafkaConnect, KafkaBridge, KafkaMirrorMaker2, and KafkaNodePool
resources.

- **Bug Fixes**
- Improved accuracy and clarity of Grafana dashboards, including unit
corrections and better descriptions.

- **Documentation**
- Updated documentation to reflect removal of ZooKeeper-based Kafka
clusters and MirrorMaker 1 support.
- Clarified upgrade instructions and revised image references to latest
versions.

- **Chores**
  - Upgraded default image tags to Strimzi 0.46.0 and Kafka 4.0.0.
- Removed deprecated MirrorMaker 1 CRD, configuration, and permissions.
  - Deleted ZooKeeper monitoring dashboard and related configuration.
  - Refined resource permissions for operator and admin roles.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-24 10:35:43 +02:00
Andrei Kvapil
c354d5adc6 [tests] increase disk space for vms
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-24 10:33:13 +02:00
Andrei Kvapil
5ffe11dfc6 [postgres] add backup and restore (#1086)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced support for cluster restoration from backup with new
bootstrap configuration options.
- Added a ScheduledBackup resource for automated PostgreSQL backups
using a more flexible backup configuration.

- **Improvements**
- Simplified and modernized backup configuration with new parameters for
retention policy, destination path, and endpoint URL.
- Updated backup scheduling to use a 6-field cron expression for more
precise timing.
- Changed default resource preset from "nano" to "micro" for improved
performance.

- **Removals**
- Removed legacy backup scripts, Docker image, and Kubernetes CronJob
templates related to the old backup system.

- **Documentation**
- Updated documentation to reflect the new backup and bootstrap
parameters, and revised backup instructions.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-24 10:28:17 +02:00
Andrei Kvapil
37a8bfaa06 [postgres] Escape users and database names (#1087)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Style**
- Updated initialization script to consistently use double quotes around
all PostgreSQL role and database identifiers in SQL commands.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
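A small sketch of that quoting, as a hypothetical templated init script (names are illustrative, not the actual chart): double-quoting keeps mixed-case or otherwise unusual identifiers intact.

```yaml
# Illustrative only; not the actual postgres chart template.
initScript: |
  CREATE ROLE "{{ .Values.user }}" LOGIN;
  CREATE DATABASE "{{ .Values.database }}" OWNER "{{ .Values.user }}";
```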
2025-06-24 10:18:42 +02:00
Nick Volynkin
0b03768482 [docs] Release notes for v0.32.0
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-06-24 10:59:02 +03:00
Nick Volynkin
620d626887 [docs] Release notes for v0.32.0-beta2
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-06-24 10:59:02 +03:00
Nick Volynkin
4e2a081c8b [docs] Release notes for v0.32.0-beta1
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-06-24 10:59:01 +03:00
Timofei Larkin
fa09845ef9 wrap cron in quotes to avoid YAML issues
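The underlying issue: a cron expression that begins with `*` is invalid as an unquoted YAML scalar (the `*` starts an alias), so the value must be quoted. A generic illustration:

```yaml
# schedule: */5 * * * *     <- parse error: plain scalars cannot start with "*"
schedule: "*/5 * * * *"      # quoted, parsed as an ordinary string
```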
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-23 17:59:05 +03:00
Kingdon B
a2a79cb5d9 Upgrade to Flux Operator 0.23.0
Signed-off-by: Kingdon B <kingdon@urmanac.com>
2025-06-23 15:40:27 +02:00
Kingdon B
7f7cb019e6 Update to Flux Instance chart 0.23.0
Signed-off-by: Kingdon B <kingdon@urmanac.com>
2025-06-23 15:40:26 +02:00
Andrei Kvapil
ba74f397f5 [postgres] Escape users and database names
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-23 15:39:09 +02:00
kklinch0
7c45335abb [postgres] add backup and restore
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-06-23 15:38:34 +02:00
Andrei Kvapil
ae13b58d5f [tests] Upd Kubernetes v1.33
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-23 15:37:57 +02:00
Andrei Kvapil
3c7f7d1127 Update Kafka-operator v0.46.0
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-23 15:36:17 +02:00
Timofei Larkin
f0fc3238ca Run E2E tests as separate parallel jobs (#1093)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced comprehensive end-to-end tests for Kubernetes tenant
control planes, tenants, databases (MySQL, PostgreSQL, ClickHouse),
virtual machines, and VM disks/instances.
  - Added granular test targets to enable running individual app tests.

- **Chores**
- Improved workflow by centralizing workspace handling and automating
workspace cleanup.
- Enhanced CI jobs to streamline environment preparation and test
execution.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-23 16:29:09 +03:00
Timofei Larkin
b3380d8365 Copy instead of move to not confuse pull action
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-23 15:44:27 +03:00
Timofei Larkin
d97d6cb81d Merge branch 'main' into maintenance/parallel-tests
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-23 14:33:09 +03:00
Timofei Larkin
b2a697f98d Run E2E tests as separate parallel jobs
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-23 14:31:27 +03:00
Timofei Larkin
6e6a05d11e Setup systemd timer to tear down sandbox (#1092)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Added automated scheduling to delete sandboxes 24 hours after creation
in pull request workflows.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-23 13:58:41 +03:00
Timofei Larkin
5d76294ff0 Setup systemd timer to tear down sandbox
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-23 12:56:20 +03:00
Timofei Larkin
62a6da0063 Make VMAgent extraArgs tunable
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-23 11:15:42 +03:00
Andrei Kvapil
6a8530a00a Update cozy-proxy v0.2.0 (#1081)
This PR includes the following change
https://github.com/cozystack/cozy-proxy/pull/6

It makes source-based routing work with wholeIP services.


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Updated Helm chart and Docker image versions for cozy-proxy to v0.2.0.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-20 15:56:52 +02:00
Andrei Kvapil
b3b40dcf9c Update cozy-proxy v0.2.0
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-19 16:23:25 +02:00
Andrei Kvapil
4479ed5e95 [bugfix] fix monitoring agents hr for tenant clusters (#1079)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Bug Fixes**
- Updated monitoring agents to use the correct namespaces for deployment
and data storage.

- **Chores**
  - Bumped the Kubernetes chart version to 0.24.1.
- Updated the versions map to reflect the latest chart version and
commit references.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-19 09:03:59 +02:00
kklinch0
b16e73ad42 [bugfix] fix monitoring agents hr for tenant clusters
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-06-18 18:38:54 +03:00
Andrei Kvapil
4631f85114 Split testing job into several (#1075)
This patch separates the Test job of the PR workflow into several
smaller jobs: 1) create a testing sandbox and deploy Talos, 2) install
Cozystack and configure it, 3) install managed applications and run e2e
tests. This lets developers shorten the feedback loop if tests are
merely acting flaky and aren't really broken. It's not the right way,
but it's 80/20.

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced a multi-stage workflow for environment preparation,
Cozystack installation, application testing, and cleanup.
- Added automated end-to-end scripts for provisioning Talos clusters and
validating Cozystack installations.
- Added new Makefile targets to streamline cluster preparation and
Cozystack installation processes.
- **Bug Fixes**
- Removed obsolete annotation step in application testing to improve
resource handling.
- Added pre-checks and resource cleanup in application testing to
enhance test reliability.
- **Chores**
- Improved workflow structure for enhanced setup and testing
reliability.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-18 13:42:25 +02:00
Timofei Larkin
746641e523 Split testing job into several
This patch separates the Test job of the PR workflow into several
smaller jobs: 1) create a testing sandbox and deploy Talos, 2) install
Cozystack and configure it, 3) install managed applications and run e2e
tests. This lets developers shorten the feedback loop if tests are
merely acting flaky and aren't really broken. It's not the right way,
but it's 80/20.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-17 18:47:09 +03:00
Timofei Larkin
e848dde422 Wrap cert-manager CRDs in conditional
There's no point in installing the CRDs if cert-manager itself is
disabled.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-17 16:53:57 +03:00
Andrei Kvapil
3ce6dbe850 Release v0.32.0 (#1074)
This PR prepares the release `v0.32.0`.
2025-06-17 11:30:28 +02:00
Andrei Kvapil
8d5007919f [tests] fix waiting for vm-disk
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-17 10:32:59 +02:00
github-actions
08e569918b Prepare release v0.32.0
Signed-off-by: github-actions <github-actions@github.com>
2025-06-16 23:54:35 +00:00
Andrei Kvapil
6498000721 [tests] VM Disk, VMI, VM, DBs (#1048)
Add 'Apps' tests for
Virtual Machine Disk
Virtual Machine Instance
Virtual Machine
PostgreSQL
MySQL
ClickHouse

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Tests**
- Added new end-to-end tests for creating and validating VM disks, VM
instances, virtual machines, and multiple database types (PostgreSQL,
MySQL, ClickHouse), ensuring correct provisioning and readiness of these
resources.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-17 01:50:32 +02:00
Andrei Kvapil
8486e6b3aa [kubernetes] Fixes for resources and migration to v0.32.4 (#1073)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-17 01:38:17 +02:00
Andrei Kvapil
3f6b6798f4 [kubernetes] Fixes for resources and migration to v0.32.4
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-17 01:34:54 +02:00
Andrei Kvapil
c1b928b8ef [cluster-api] Add missing migration for capi-providers (#1072)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Introduced a new migration script to update the system version and
manage related resources during the upgrade from version 14 to 15.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-17 01:34:11 +02:00
Andrei Kvapil
c2e8fba483 [cluster-api] Add missing migration for capi-providers
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-17 01:33:58 +02:00
Andrei Kvapil
62cb694d72 Release v0.32.0-beta.2 (#1049)
This PR prepares the release `v0.32.0-beta.2`.
2025-06-16 21:36:04 +02:00
github-actions
c619343aa2 Prepare release v0.32.0-beta.2
Signed-off-by: github-actions <github-actions@github.com>
2025-06-16 19:06:14 +00:00
Ahmad Murzahmatov
75ad26989d [tests] VM Disk, VMI, VM
Add 'Apps' tests for
Virtual Machine Disk
Virtual Machine Instance
Virtual Machine
PostgreSQL
MySQL
ClickHouse

Signed-off-by: Ahmad Murzahmatov <gwynbleidd2106@yandex.com>
2025-06-16 21:00:22 +02:00
Andrei Kvapil
c4fc8c18df Use library chart with k8s managed app (#1026)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Refactor**
- Updated resource configuration rendering in cluster templates to use
standardized resource handling from a shared library, improving
consistency in resource definitions.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-16 20:55:57 +02:00
Timofei Larkin
8663dc940f Use library chart with k8s managed app
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-16 20:54:30 +02:00
Andrei Kvapil
cf983a8f9c [dashboard] Remove dependency on listing secrets (#1062)
This change includes the following commit
6856b66f92

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Updated the version of a core dependency used in the dashboard and
related services to a newer commit. No user-facing changes.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-16 20:48:01 +02:00
Andrei Kvapil
ad6aa0ca94 Refactor roles and permissions for tenants (#1067)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced advanced Helm template helpers for managing Kubernetes RBAC
(Role-Based Access Control), including access level mapping,
hierarchy-aware group subject generation, and tenant parsing.
- Added dynamic RoleBinding resources across multiple applications to
bind roles to appropriate subjects based on access levels and tenant
namespaces.
- **Bug Fixes**
- Refined tenant application roles by restricting resource permissions
to specific core Kubernetes resources, enhancing security and access
control granularity.
- **Chores**
- Updated chart versions across numerous applications to reflect new
releases.
- Added reference files linking to the shared library in multiple
application chart directories.
- Pinned package versions to specific commits for improved version
stability and tracking.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-16 20:47:09 +02:00
Andrei Kvapil
9dc5d62f47 [dashboard] Remove dependency on listing secrets
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-16 20:32:51 +02:00
Andrei Kvapil
3b8a9f9d2c Configure all apps to use new function to generate subjects
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-16 20:32:11 +02:00
Andrei Kvapil
ab9926a177 Update cozypkg v1.1.0 (#1063) 2025-06-16 20:12:21 +02:00
Andrei Kvapil
f83741eb09 Add extra helper function to generate subjects
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-16 20:11:41 +02:00
Timofei Larkin
028f2e4e8d Add helper function to generate subjects
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-16 19:19:57 +03:00
Andrei Kvapil
255fa8cbe1 [docs] Review the Clickhouse app docs (#1059)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Documentation**
- Improved and clarified documentation for the Managed ClickHouse
Service, including enhanced introductory content and clearer backup
instructions.
- Updated and corrected parameter descriptions for accuracy, especially
regarding shards, replicas, storage sizes, and backup options.
- Expanded explanations and examples for resource configuration in
production environments.
  - Reformatted tables and notes for better readability and usability.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-16 18:14:02 +02:00
Andrei Kvapil
b42f5cdc01 [bugfix] fix distro full bundle (#1056)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Added a new template to automatically create a self-signed
ClusterIssuer for certificate management if one does not already exist.
- **Chores**
- Updated dependency configuration for the snapshot-controller to
simplify its setup process.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-16 18:13:44 +02:00
Andrei Kvapil
74633ad699 Update cozypkg v1.1.0
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-16 18:11:27 +02:00
Nick Volynkin
980185ca2b [docs] Review the Clickhouse app docs
Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-06-16 08:40:46 +03:00
Andrei Kvapil
8eabe30548 [platform] Use cozypkg instead of helm (#1057)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced the use of the CozyPkg tool for package deployment and
management, replacing previous Helm-based workflows across installer,
platform, and system components.

- **Refactor**
- Updated Makefiles and scripts to use CozyPkg commands for showing,
applying, diffing, suspending, resuming, and deleting packages.
- Removed dynamic API version handling and simplified deployment command
structures.

- **Chores**
- Updated Docker images to newer base versions and included CozyPkg
installation steps.
- Changed installer image references to use the latest available build.
- Removed obsolete scripts and dependencies related to Helm and
Kustomize.
- Consolidated package installations and updated tooling in Dockerfiles
for improved efficiency.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-14 20:50:12 +02:00
Andrei Kvapil
0c9c688e6d [platform] decrease resources for system applications (#1054)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Added resource constraints for the flux-operator and multiple kube-ovn
components, specifying CPU and memory requests and limits.

- **Improvements**
- Reduced default minimum CPU and memory requests for monitoring and
seaweedfs components, as well as for the Redis master in the dashboard,
to optimize resource usage.

- **Chores**
	- Updated version numbers for monitoring and seaweedfs packages.
	- Refreshed version mappings to reflect new releases.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-14 08:21:32 +02:00
Andrei Kvapil
908c75927e [platform] Use cozypkg instead of helm
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-13 19:02:15 +02:00
Andrei Kvapil
0a1f078384 [docs] Note that Cozystack is a CNCF Sandbox project in the readme (#1055)
Fix a few other things in the readme

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Documentation**
- Updated the README to highlight Cozystack's CNCF Sandbox status and
original sponsorship.
- Moved the user interface screenshot to appear directly after the
introduction.
- Reorganized community information into a dedicated section with
clearer invitations and calendar links for meetings.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-13 12:58:44 +02:00
kklinch0
6a713e5eb4 [bugfix] fix distro full bundle
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-06-13 10:59:14 +03:00
Nick Volynkin
8f0a28bad5 [docs] Note that Cozystack is a CNCF Sandbox project in the readme
Fix a few other things in the readme

Signed-off-by: Nick Volynkin <nick.volynkin@gmail.com>
2025-06-13 09:47:11 +03:00
kklinch0
0fa70d9d38 [platform] cut resources
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-06-13 01:06:05 +03:00
klinch0
b14c82d606 [bugfix] add-resource-quotas-for-pg-jobs-and-fix-install-generate (#1051)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Added default resource specifications for PostgreSQL jobs to ensure
consistent CPU and memory allocation.
- **Chores**
  - Updated the chart version for the PostgreSQL application.
  - Refreshed version mapping to reflect the latest release.
- Improved Node.js setup and package installation in the pre-commit
workflow.
- **Tests**
- Increased memory allocation for QEMU virtual machines in end-to-end
tests.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-12 10:10:26 +03:00
kklinch0
8e79f24c5b add rq for pg
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-06-11 22:48:09 +03:00
Andrei Kvapil
3266a5514e Get instance type when reconciling WorkloadMonitor (#1030)
When the WorkloadMonitor is reconciled and child Workload objects are
created, they will now get additional labels in the
`workloads.cozystack.io` namespace, containing metadata about the
workload. This particular commit checks if a pod targeted by a Workload
is owned by a VirtualMachineInstance (i.e. it launches a KubeVirt VMI)
and, if so, gets the VMI instance type and puts it in the
`kubevirt-vmi-instance-type` label.
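Concretely, this implies a child Workload ends up with a label roughly like the one below (the exact key and the Workload API group are inferred from the description and may differ in the controller):

```yaml
kind: Workload
metadata:
  labels:
    # assumed label key, based on the "workloads.cozystack.io" namespace mentioned above
    workloads.cozystack.io/kubevirt-vmi-instance-type: u1.medium
```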

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Workload objects created for Pods now include additional labels
extracted from their owner references, specifically for
VirtualMachineInstance resources.
- If a VirtualMachineInstance has a relevant annotation, its instance
type is now reflected as a label on the associated Workload.
- **Chores**
- Updated and added several dependencies to improve compatibility and
maintainability.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-11 12:55:42 +02:00
Andrei Kvapil
0c37323a15 [kubernetes] Update Kubevirt-CCM (#1052)
Fixes panic, upstream issue:

- https://github.com/kubevirt/cloud-provider-kubevirt/pull/354

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Bug Fixes**
- Improved filtering and error handling for endpoints and virtual
machines with missing or invalid data, ensuring only valid endpoints are
processed.
- **New Features**
- Enhanced support for multi-cluster environments by introducing cluster
name filtering for service and endpoint management.
- **Tests**
- Added new tests to verify correct handling of endpoints and services
across clusters and improved coverage for edge cases.
- **Chores**
- Updated Kubernetes app and image versions for improved tracking and
deployment consistency.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-11 12:55:06 +02:00
Andrei Kvapil
10af98e158 [kubernetes] Update Kubevirt-CCM
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-11 10:30:13 +02:00
Andrei Kvapil
632224a30a Update Kube-OVN v1.13.13 and enable db healthcheck (#1047)
This PR updates Kube-OVN to the latest version and also includes fix
https://github.com/kubeovn/kube-ovn/pull/5294

Ref
https://github.com/kubeovn/kube-ovn/issues/5125#issuecomment-2921920661

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-10 13:56:31 +02:00
Andrei Kvapil
e8d11e64a6 Update Metallb v0.15.2 (#1045)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Added new configuration options to exclude specific address pools from
Prometheus alerts for address pool exhaustion and usage.
- Introduced a new CRD for ServiceBGPStatus to provide detailed BGP peer
status per service and node.
- Added new status fields to track assigned and available IPv4/IPv6
addresses in IPAddressPool.

- **Improvements**
  - Updated Helm chart and dependency versions to the latest releases.
- Enhanced validation for speaker configuration to prevent invalid
settings.
  - Clarified configuration descriptions for easier understanding.
- Increased file descriptor limits for FRR daemons to improve
reliability.
- Simplified Docker image handling by using pre-built MetalLB images
instead of local builds.

- **Bug Fixes**
- Updated RBAC roles to grant necessary permissions for new resources
and status updates.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-10 13:36:40 +02:00
Andrei Kvapil
27c7a2feb5 Update Cilium v1.17.4 (#1046)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Added a new configuration option to require Kubernetes connectivity in
liveness probes.
  - Enabled Kafka API key redaction by default in Hubble settings.

- **Bug Fixes**
- Improved conditional logic for resource creation to prevent
unnecessary resources during preflight mode.
  - Corrected YAML indentation and formatting in configuration files.

- **Chores**
- Upgraded Cilium and related component images from version 1.17.3 to
1.17.4.
- Updated documentation and default configuration values to reflect new
versions and settings.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-10 11:53:33 +02:00
Andrei Kvapil
9555386bd7 Release v0.32.0-beta.1 (#1044)
This PR prepares the release `v0.32.0-beta.1`.
2025-06-10 11:37:31 +02:00
Andrei Kvapil
9733de38a3 Update Kube-OVN v1.13.13 and enable db healthcheck
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-10 11:33:19 +02:00
Andrei Kvapil
775a05cc3a Update Metallb v0.15.2
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-10 11:13:36 +02:00
Andrei Kvapil
4e5cc2ae61 Update Cilium v1.17.4
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-10 11:03:47 +02:00
github-actions
32adf5ab38 Prepare release v0.32.0-beta.1
Signed-off-by: github-actions <github-actions@github.com>
2025-06-10 08:28:28 +00:00
Andrei Kvapil
28302e776e [ci] fix clickhouse version parsing
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-10 10:22:06 +02:00
Timofei Larkin
911ca64de0 Get instance type when reconciling WorkloadMonitor
When the WorkloadMonitor is reconciled and child Workload objects are
created, they will now get additional labels in the
`workloads.cozystack.io` namespace, containing metadata about the
workload. This particular commit checks if a pod targeted by a Workload
is owned by a VirtualMachineInstance (i.e. it launches a KubeVirt VMI)
and, if so, gets the VMI instance type and puts it in the
`kubevirt-vmi-instance-type` label.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-10 11:17:40 +03:00
Andrei Kvapil
045ea76539 [platform] Introduce cluster-domain option and unhardcode cozy.local (#1039)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Dynamic cluster domain configuration is now propagated to multiple
components, allowing them to use the cluster domain value from a central
ConfigMap instead of a hardcoded value.
- The cluster domain is now injected into ClickHouse, Kubernetes, NATS,
Keycloak, and various operator releases for improved flexibility and
consistency.

- **Chores**
- Updated chart versions for ClickHouse, Kubernetes, and NATS
applications.
- Refreshed version references in the versions map to reflect the latest
releases.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-10 10:12:11 +02:00
Andrei Kvapil
cee820e82c [platform] Introduce cluster-domain option and unhardcode cozy.local
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-10 10:11:09 +02:00
Andrei Kvapil
6183b715b7 [dashboard] Cumulative update (#1042)
This PR includes fixes and updates for cozystack dashboard:

### [fix client rate
limiter](b1467cecc1)

Fixes the error `client rate limiter Wait returned an error: context canceled`.
The QPS and Burst options were set after the Kubernetes client was initialized,
so they had no effect.

The limits are also increased fivefold:

```diff
-         - --kube-api-qps=50.0
-         - --kube-api-burst=100
+         - --kube-api-qps=250.0
+         - --kube-api-burst=500
```


### [fix relative
urls](e2153e26dd)

Fixes a regression introduced in
https://github.com/cozystack/cozystack/pull/935, which inadvertently removed
the previous workaround from https://github.com/cozystack/cozystack/pull/102.

Now the proper fix is in place.

Related to upstream issue
https://github.com/vmware-tanzu/kubeapps/issues/7740

### [remove version
selector](f412a6aba4)

from both the package installation page and the upgrade page
<img width="505" alt="Screenshot 2025-06-10 at 1 47 10"
src="https://github.com/user-attachments/assets/36068264-2878-4b82-a159-6c911f1c1eef"
/>

Now it will always default to the latest package version.

### [always fetch details from the latest
version](741a7ddb93)

If an old package version is installed, the dashboard will display information
from the latest package in the repository. Together with the previous fix, this
effectively removes the need for the versions_map logic and for packing multiple
charts per release, while still informing the user about newer versions and
letting them upgrade on demand at a convenient time:

<img width="423" alt="Screenshot 2025-06-10 at 1 52 53"
src="https://github.com/user-attachments/assets/dd571c9f-c2bc-403f-9aa0-3d8853600241"
/>

### [Remove plugin name from header](ffc0b0246b)

We always use Flux anyway.

<img width="386" alt="Screenshot 2025-06-10 at 1 55 39"
src="https://github.com/user-attachments/assets/df6f52b5-82ab-4e7a-a973-2a82eb38ebfb"
/>

### [Fix switching context from app
view](d89e721fcb)

Fixes the error shown when switching tenants from the application view

```
An error occurred while fetching the application: Unable to get installed package.
```

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Added new configuration options for API request rate limits in the
dashboard settings.

- **Style**
- Updated dashboard appearance to hide version information and specific
label elements.

- **Chores**
- Updated internal references to the latest version of the dashboard
source code.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-10 10:04:38 +02:00
Andrei Kvapil
2669ab6072 [dashboard] Cumulative update
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-10 02:00:22 +02:00
Andrei Kvapil
96506c7cce [kafka] specify minimal working resource presets (#1040)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Updated default resource presets for Kafka (now "small") and ZooKeeper
(now "micro") to provide improved baseline resources.
- **Documentation**
- Updated documentation to reflect new default resource presets for
Kafka and ZooKeeper.
- **Chores**
- Incremented Kafka chart version to 0.6.1 and updated version mapping.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-10 00:34:28 +02:00
Andrei Kvapil
7bb70c839e [cozystack-controller] Introduce system helm reconciler (#1033)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced automated monitoring of key configuration changes to ensure
system applications are promptly updated when relevant settings are
modified.
- **Bug Fixes**
- Improved error logging for controller setup to display accurate
controller names and ensure consistent error handling.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-09 22:56:28 +02:00
Andrei Kvapil
ba97a4593c [kafka] specify minimal working resource presets
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-09 15:52:08 +02:00
Andrei Kvapil
c467ed798a Update flux-operator to 0.22.0, Flux to 2.6.x (#1035)
Flux 2.6.1 is the latest Flux release now

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Enhanced validation for custom resources to ensure consistent naming
and conditional field requirements.
- Added support for referencing input providers using label selectors,
and expanded input provider types.
	- Extended reporting with new cluster information fields.

- **Bug Fixes**
- Improved schema constraints to prevent invalid or inconsistent
resource configurations.

- **Documentation**
- Updated version information in documentation and Helm chart metadata
to reflect the latest release.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-09 12:39:57 +02:00
Andrei Kvapil
ed881f0741 [Fix] CloudInit for VM Instance (#1020)
Made the same changes as in
[PR](https://github.com/cozystack/cozystack/pull/1019)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Added support for defining a system disk with customizable storage and
image sources for virtual machines.
- **Improvements**
- Enhanced cloud-init configuration to require both SSH keys and
cloud-init data for certain volume setups, improving user data handling.
- Simplified disk configuration for virtual machines, making setup more
straightforward.
- Shortened and clarified error messages for missing configuration
fields.
- **Chores**
- Updated chart and package versions for virtual-machine and vm-instance
applications.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-09 12:37:22 +02:00
Andrei Kvapil
0e0dabdd08 [platform] Fix deps for paas-hosted bundle
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-09 12:36:32 +02:00
Andrei Kvapil
bd8f8bde95 [e2e] Increase timeout awaiting for machines (#1038)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Increased the timeout duration for waiting on specific resources
during end-to-end testing, improving reliability for longer operations.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-09 12:36:14 +02:00
Andrei Kvapil
646dab497c [e2e] Increase timeout awaiting for machines
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-09 12:34:54 +02:00
Andrei Kvapil
dc3b61d164 [cozystack-controller] Fix RBAC for annotating namespaces (#1031)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Expanded permissions for managing namespaces, now allowing patch and
update actions in addition to viewing and listing.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-09 10:16:27 +02:00
Andrei Kvapil
4479a038cd [platform] Fix deps for paas-hosted bundle (#1034)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
  - Introduced a new release for object storage management.

- **Improvements**
- Updated dependencies for certain platform components to simplify
deployment.
- Network policy templates are now only applied when supported by the
cluster.

- **Chores**
  - Removed obsolete monitoring namespace configurations.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-09 10:16:12 +02:00
Andrei Kvapil
dfd01ff118 [platform] Fix deps for paas-hosted bundle
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-09 10:12:06 +02:00
Kingdon B
d2bb66db31 bump Flux to 2.6
Signed-off-by: Kingdon B <kingdon@urmanac.com>
2025-06-08 10:27:01 -04:00
Kingdon B
7af97e2d9f Update flux-operator to 0.22.0
Signed-off-by: Kingdon B <kingdon@urmanac.com>
2025-06-06 19:07:21 -04:00
Andrei Kvapil
ac5145be87 [cozystack-controller] Fix RBAC for annotating namespaces
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2025-06-06 15:45:35 +02:00
Timofei Larkin
4779db2dda Use global scope for .Release object (#1032)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Bug Fixes**
- Improved reliability of secret references in Kubernetes cluster
templates to ensure correct retrieval and usage of release-specific
secrets.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-06 15:56:23 +03:00
Timofei Larkin
25c2774bc8 Use global scope for .Release object
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-06 15:14:46 +03:00
Ahmad Murzahmatov
bbee8103eb [fix] vm-instance: cloud-init
Made the same change as in [PR](https://github.com/cozystack/cozystack/pull/1019)

Signed-off-by: Ahmad Murzahmatov <gwynbleidd2106@yandex.com>
2025-06-06 16:52:54 +06:00
klinch0
730ea4d5ef [fix] CloudInit (#1019)
If an SSH key is provided, deploy it.
If cloud-init data is provided, deploy it.
If both are provided, deploy both.
If neither is provided, initialize an empty config to avoid network issues.
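
A minimal sketch of how this conditional could look in the VM chart's cloud-init template (illustrative only; the value keys `sshKeys` and `cloudInit` and the KubeVirt `cloudInitNoCloud` volume shown here are assumptions, not the chart's actual code):

```yaml
# Illustrative Helm template sketch, not the chart's actual code.
# Assumes .Values.sshKeys (list of strings) and .Values.cloudInit (cloud-config string).
cloudInitNoCloud:
  userData: |
    {{- if .Values.cloudInit }}
    {{- .Values.cloudInit | nindent 4 }}
    {{- else }}
    #cloud-config
    {{- end }}
    {{- if .Values.sshKeys }}
    ssh_authorized_keys:
    {{- range .Values.sshKeys }}
      - {{ . }}
    {{- end }}
    {{- end }}
```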

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **Refactor**
- Improved handling of SSH keys and cloud-init data in the Virtual
Machine setup, clearly distinguishing cases when SSH keys, cloud-init,
or both are provided.
  - Enhanced template readability with added spacing for better clarity.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-05 15:53:08 +03:00
klinch0
13fccdc465 bump tenant version (#1028) 2025-06-05 15:44:53 +03:00
kklinch0
f1b66c80f6 bump tenant version
Signed-off-by: kklinch0 <kklinch0@gmail.com>
2025-06-05 15:40:45 +03:00
klinch0
f34f140d49 Add RBAC rules to allow portforward in kubevirt for SSH via virtctl (#1027)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Expanded user permissions to allow port forwarding for virtual machine
instances, enabling enhanced remote access capabilities.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-05 11:07:53 +03:00
mattia
520fbfb2e4 Add RBAC rules to allow portforward in kubevirt for SSH via virtctl
Signed-off-by: mattia <mattia@hidora.io>
2025-06-05 09:38:40 +02:00
klinch0
25016580c1 (k8s) configure containerd for client k8s cluster (#979)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced granular Helm charts for Cluster API providers: bootstrap,
core, control plane, and infrastructure, each with dedicated
configuration, metadata, and compressed component packaging.
- Added a new configuration option to the Kubernetes app to enable using
a custom secret for patching containerd.
- Enhanced Kubernetes deployment to conditionally manage containerd
registry certificates and configuration using custom or copied secrets.

- **Documentation**
- Updated Kubernetes app documentation to include the new containerd
patching secret configuration option.

- **Chores**
- Updated version mappings and chart versions for Kubernetes and Cluster
API-related components.
- Decomposed the monolithic Cluster API provider release into multiple,
more manageable releases with explicit namespaces and dependencies.

- **Refactor**
- Removed the previous unified Cluster API provider template in favor of
new, separate provider resource definitions.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-04 11:07:58 +03:00
kklinch0
f10f8455fc (k8s) configure containerd for client k8s cluster 2025-06-04 10:40:10 +03:00
Timofei Larkin
974581d39b [monitoring-agents] Add events and audit inputs (#948)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Enhanced log monitoring by adding support for Kubernetes events and
audit logs.
  - Introduced custom log parsers for improved log format handling.
  - Added log source tagging for easier identification of log origins.

- **Improvements**
- Refined log filtering and output formatting for better log
organization and delivery.
- Updated log outputs to support compressed JSON lines and ISO8601 date
formatting.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-04 10:33:58 +03:00
Timofei Larkin
7e24297913 Use library chart for resource management (#1025)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Introduced a shared library for resource configuration management
across multiple application charts.

- **Refactor**
- Updated resource configuration handling in several application charts
to use new centralized helpers for improved consistency and
sanitization.

- **Chores**
- Added references to the shared library in various application chart
directories.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-04 09:42:31 +03:00
Timofei Larkin
b6142cd4f5 Use library chart for resource management
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-04 09:05:21 +03:00
Timofei Larkin
e87994c769 Capture all resources by WorkloadMonitors (#1024)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

- **New Features**
- Introduced WorkloadMonitor resources for tcp-balancer, vm-disk, and
VPN applications, enabling enhanced workload monitoring capabilities.

- **Bug Fixes**
- Standardized Kubernetes resource labels across multiple applications
for improved consistency and compatibility.

- **Chores**
- Updated chart versions for several applications, including ClickHouse,
FerretDB, http-cache, MySQL, Postgres, Redis, tcp-balancer,
virtual-machine, vm-disk, vm-instance, and VPN.
- Updated Docker image reference for the installer to use the latest
version.
  - Refreshed internal version mappings for multiple packages.
- Added standardized instance labels to Kubernetes resources across
multiple applications for better tracking and management.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-03 16:15:46 +03:00
Timofei Larkin
b140f1b57f Capture all resources by WorkloadMonitors
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-03 15:40:27 +03:00
Timofei Larkin
64936021d2 Release v0.31.0-rc.3 (#991)
This PR prepares the release `v0.31.0-rc.3`.

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-03 14:10:11 +06:00
Andrei Kvapil
a887e19e6c Capture all resources by WorkloadMonitors (#1018)
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Introduced monitoring resources for HAProxy, NGINX, and generic HTTP
cache workloads, allowing improved workload observability.
- **Enhancements**
- Added standardized labels to MariaDB, Postgres, and Redis resources
for better integration and management within Kubernetes environments.
- Updated label selectors in Postgres resources to use standardized
Kubernetes app labels.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-02 09:28:30 +02:00
Andrei Kvapil
92b97a569e Fixed Gateway API manifest (#1016)
In the current version of Cozystack, Flux's HelmRelease refuses to install
cozy-gateway-api-crds when gatewayAPI is enabled, complaining that version '*'
is not found and breaking the install of the entire kubernetes app. This
patch adds a working version match.
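
Conceptually, the fix pins the HelmRelease to an explicit semver range that Flux can resolve instead of the bare `*` wildcard. A minimal sketch, assuming chart and repository names that may differ from the actual manifest:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: gateway-api-crds          # assumed release name
spec:
  interval: 5m
  chart:
    spec:
      chart: cozy-gateway-api-crds
      version: ">= 0.0.0-0"       # explicit semver range instead of the failing "*"
      sourceRef:
        kind: HelmRepository
        name: cozystack-system    # assumed repository name
```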

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **Chores**
- Updated configuration to allow compatibility with all available
versions of the gateway-api-crds chart.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
2025-06-02 09:25:25 +02:00
Timofei Larkin
0e22358b30 Capture all resources by WorkloadMonitors
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-06-02 09:44:27 +03:00
Zdenek Deu Janda
7429daf99c Fixed Gateway API manifest
Signed-off-by: Zdenek Deu Janda <zdenek.janda@cloudevelops.com>
2025-06-01 23:49:42 +02:00
kevin880202
fc8b52d73d reset and add audit/event monitoring in fluentbit values
Signed-off-by: kevin880202 <dytoponts11@gmail.com>
2025-05-21 22:07:27 +08:00
640 changed files with 82319 additions and 54680 deletions

24
.github/PULL_REQUEST_TEMPLATE.md vendored Normal file
View File

@@ -0,0 +1,24 @@
<!-- Thank you for making a contribution! Here are some tips for you:
- Start the PR title with the [label] of Cozystack component:
- For system components: [platform], [system], [linstor], [cilium], [kube-ovn], [dashboard], [cluster-api], etc.
- For managed apps: [apps], [tenant], [kubernetes], [postgres], [virtual-machine] etc.
- For development and maintenance: [tests], [ci], [docs], [maintenance].
- If it's a work in progress, consider creating this PR as a draft.
- Don't hesitate to ask for opinions and reviews in the community chats, even if it's still a draft.
- Add the label `backport` if it's a bugfix that needs to be backported to a previous version.
-->
## What this PR does
### Release note
<!-- Write a release note:
- Explain what has changed internally and for users.
- Start with the same [label] as in the PR title
- Follow the guidelines at https://github.com/kubernetes/community/blob/master/contributors/guide/release-notes.md.
-->
```release-note
[]
```

View File

@@ -2,7 +2,7 @@ name: Pre-Commit Checks
on:
pull_request:
types: [labeled, opened, synchronize, reopened]
types: [opened, synchronize, reopened]
concurrency:
group: pre-commit-${{ github.workflow }}-${{ github.event.pull_request.number }}
@@ -28,15 +28,7 @@ jobs:
- name: Install generate
run: |
sudo apt update
sudo apt install curl -y
curl -fsSL https://deb.nodesource.com/setup_16.x | sudo -E bash -
sudo apt install nodejs -y
git clone https://github.com/bitnami/readme-generator-for-helm
cd ./readme-generator-for-helm
npm install
npm install -g pkg
pkg . -o /usr/local/bin/readme-generator
curl -sSL https://github.com/cozystack/readme-generator-for-helm/releases/download/v1.0.0/readme-generator-for-helm-linux-amd64.tar.gz | tar -xzvf- -C /usr/local/bin/ readme-generator-for-helm
- name: Run pre-commit hooks
run: |

View File

@@ -1,100 +1,17 @@
name: Releasing PR
name: "Releasing PR"
on:
pull_request:
types: [labeled, opened, synchronize, reopened, closed]
types: [closed]
paths-ignore:
- 'docs/**/*'
# Cancel inflight runs for the same PR when a new push arrives.
concurrency:
group: pull-requests-release-${{ github.workflow }}-${{ github.event.pull_request.number }}
group: pr-${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
jobs:
verify:
name: Test Release
runs-on: [self-hosted]
permissions:
contents: read
packages: write
if: |
contains(github.event.pull_request.labels.*.name, 'release') &&
github.event.action != 'closed'
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
fetch-tags: true
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
registry: ghcr.io
- name: Extract tag from PR branch
id: get_tag
uses: actions/github-script@v7
with:
script: |
const branch = context.payload.pull_request.head.ref;
const m = branch.match(/^release-(\d+\.\d+\.\d+(?:[-\w\.]+)?)$/);
if (!m) {
core.setFailed(`❌ Branch '${branch}' does not match 'release-X.Y.Z[-suffix]'`);
return;
}
const tag = `v${m[1]}`;
core.setOutput('tag', tag);
- name: Find draft release and get asset IDs
id: fetch_assets
uses: actions/github-script@v7
with:
github-token: ${{ secrets.GH_PAT }}
script: |
const tag = '${{ steps.get_tag.outputs.tag }}';
const releases = await github.rest.repos.listReleases({
owner: context.repo.owner,
repo: context.repo.repo,
per_page: 100
});
const draft = releases.data.find(r => r.tag_name === tag && r.draft);
if (!draft) {
core.setFailed(`Draft release '${tag}' not found`);
return;
}
const findAssetId = (name) =>
draft.assets.find(a => a.name === name)?.id;
const installerId = findAssetId("cozystack-installer.yaml");
const diskId = findAssetId("nocloud-amd64.raw.xz");
if (!installerId || !diskId) {
core.setFailed("Missing required assets");
return;
}
core.setOutput("installer_id", installerId);
core.setOutput("disk_id", diskId);
- name: Download assets from GitHub API
run: |
mkdir -p _out/assets
curl -sSL \
-H "Authorization: token ${GH_PAT}" \
-H "Accept: application/octet-stream" \
-o _out/assets/cozystack-installer.yaml \
"https://api.github.com/repos/${GITHUB_REPOSITORY}/releases/assets/${{ steps.fetch_assets.outputs.installer_id }}"
curl -sSL \
-H "Authorization: token ${GH_PAT}" \
-H "Accept: application/octet-stream" \
-o _out/assets/nocloud-amd64.raw.xz \
"https://api.github.com/repos/${GITHUB_REPOSITORY}/releases/assets/${{ steps.fetch_assets.outputs.disk_id }}"
env:
GH_PAT: ${{ secrets.GH_PAT }}
- name: Run tests
run: make test
finalize:
name: Finalize Release
runs-on: [self-hosted]

View File

@@ -2,10 +2,13 @@ name: Pull Request
on:
pull_request:
types: [labeled, opened, synchronize, reopened]
types: [opened, synchronize, reopened]
paths-ignore:
- 'docs/**/*'
# Cancel inflight runs for the same PR when a new push arrives.
concurrency:
group: pull-requests-${{ github.workflow }}-${{ github.event.pull_request.number }}
group: pr-${{ github.workflow }}-${{ github.event.pull_request.number }}
cancel-in-progress: true
jobs:
@@ -43,6 +46,17 @@ jobs:
- name: Build Talos image
run: make -C packages/core/installer talos-nocloud
- name: Save git diff as patch
if: "!contains(github.event.pull_request.labels.*.name, 'release')"
run: git diff HEAD > _out/assets/pr.patch
- name: Upload git diff patch
if: "!contains(github.event.pull_request.labels.*.name, 'release')"
uses: actions/upload-artifact@v4
with:
name: pr-patch
path: _out/assets/pr.patch
- name: Upload installer
uses: actions/upload-artifact@v4
@@ -55,28 +69,283 @@ jobs:
with:
name: talos-image
path: _out/assets/nocloud-amd64.raw.xz
test:
name: Test
runs-on: [self-hosted]
needs: build
# Never run when the PR carries the "release" label.
if: |
!contains(github.event.pull_request.labels.*.name, 'release')
resolve_assets:
name: "Resolve assets"
runs-on: ubuntu-latest
if: contains(github.event.pull_request.labels.*.name, 'release')
outputs:
installer_id: ${{ steps.fetch_assets.outputs.installer_id }}
disk_id: ${{ steps.fetch_assets.outputs.disk_id }}
steps:
- name: Download installer
uses: actions/download-artifact@v4
- name: Checkout code
if: contains(github.event.pull_request.labels.*.name, 'release')
uses: actions/checkout@v4
with:
name: cozystack-installer
path: _out/assets/
fetch-depth: 0
fetch-tags: true
- name: Download Talos image
- name: Extract tag from PR branch (release PR)
if: contains(github.event.pull_request.labels.*.name, 'release')
id: get_tag
uses: actions/github-script@v7
with:
script: |
const branch = context.payload.pull_request.head.ref;
const m = branch.match(/^release-(\d+\.\d+\.\d+(?:[-\w\.]+)?)$/);
if (!m) {
core.setFailed(`❌ Branch '${branch}' does not match 'release-X.Y.Z[-suffix]'`);
return;
}
core.setOutput('tag', `v${m[1]}`);
- name: Find draft release & asset IDs (release PR)
if: contains(github.event.pull_request.labels.*.name, 'release')
id: fetch_assets
uses: actions/github-script@v7
with:
github-token: ${{ secrets.GH_PAT }}
script: |
const tag = '${{ steps.get_tag.outputs.tag }}';
const releases = await github.rest.repos.listReleases({
owner: context.repo.owner,
repo: context.repo.repo,
per_page: 100
});
const draft = releases.data.find(r => r.tag_name === tag && r.draft);
if (!draft) {
core.setFailed(`Draft release '${tag}' not found`);
return;
}
const find = (n) => draft.assets.find(a => a.name === n)?.id;
const installerId = find('cozystack-installer.yaml');
const diskId = find('nocloud-amd64.raw.xz');
if (!installerId || !diskId) {
core.setFailed('Required assets missing in draft release');
return;
}
core.setOutput('installer_id', installerId);
core.setOutput('disk_id', diskId);
prepare_env:
name: "Prepare environment"
runs-on: [self-hosted]
permissions:
contents: read
packages: read
needs: ["build", "resolve_assets"]
if: ${{ always() && (needs.build.result == 'success' || needs.resolve_assets.result == 'success') }}
steps:
# ▸ Checkout and prepare the codebase
- name: Checkout code
uses: actions/checkout@v4
# ▸ Regular PR path: download artefacts produced by the *build* job
- name: "Download Talos image (regular PR)"
if: "!contains(github.event.pull_request.labels.*.name, 'release')"
uses: actions/download-artifact@v4
with:
name: talos-image
path: _out/assets/
path: _out/assets
- name: Download PR patch
if: "!contains(github.event.pull_request.labels.*.name, 'release')"
uses: actions/download-artifact@v4
with:
name: pr-patch
path: _out/assets
- name: Apply patch
if: "!contains(github.event.pull_request.labels.*.name, 'release')"
run: |
git apply _out/assets/pr.patch
# ▸ Release PR path: fetch artefacts from the corresponding draft release
- name: Download assets from draft release (release PR)
if: contains(github.event.pull_request.labels.*.name, 'release')
run: |
mkdir -p _out/assets
curl -sSL -H "Authorization: token ${GH_PAT}" -H "Accept: application/octet-stream" \
-o _out/assets/nocloud-amd64.raw.xz \
"https://api.github.com/repos/${GITHUB_REPOSITORY}/releases/assets/${{ needs.resolve_assets.outputs.disk_id }}"
env:
GH_PAT: ${{ secrets.GH_PAT }}
- name: Set sandbox ID
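# Derive a stable, unique sandbox name per repo/workflow/ref (first 10 hex chars of the SHA-256 hash)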
run: echo "SANDBOX_NAME=cozy-e2e-sandbox-$(echo "${GITHUB_REPOSITORY}:${GITHUB_WORKFLOW}:${GITHUB_REF}" | sha256sum | cut -c1-10)" >> $GITHUB_ENV
# ▸ Start actual job steps
- name: Prepare workspace
run: |
rm -rf /tmp/$SANDBOX_NAME
cp -r ${{ github.workspace }} /tmp/$SANDBOX_NAME
- name: Prepare environment
run: |
cd /tmp/$SANDBOX_NAME
attempt=0
until make SANDBOX_NAME=$SANDBOX_NAME prepare-env; do
attempt=$((attempt + 1))
if [ $attempt -ge 3 ]; then
echo "❌ Attempt $attempt failed, exiting..."
exit 1
fi
echo "❌ Attempt $attempt failed, retrying..."
done
echo "✅ The task completed successfully after $attempt attempts"
install_cozystack:
name: "Install Cozystack"
runs-on: [self-hosted]
permissions:
contents: read
packages: read
needs: ["prepare_env", "resolve_assets"]
if: ${{ always() && needs.prepare_env.result == 'success' }}
steps:
- name: Prepare _out/assets directory
run: mkdir -p _out/assets
# ▸ Regular PR path: download artefacts produced by the *build* job
- name: "Download installer (regular PR)"
if: "!contains(github.event.pull_request.labels.*.name, 'release')"
uses: actions/download-artifact@v4
with:
name: cozystack-installer
path: _out/assets
# ▸ Release PR path: fetch artefacts from the corresponding draft release
- name: Download assets from draft release (release PR)
if: contains(github.event.pull_request.labels.*.name, 'release')
run: |
mkdir -p _out/assets
curl -sSL -H "Authorization: token ${GH_PAT}" -H "Accept: application/octet-stream" \
-o _out/assets/cozystack-installer.yaml \
"https://api.github.com/repos/${GITHUB_REPOSITORY}/releases/assets/${{ needs.resolve_assets.outputs.installer_id }}"
env:
GH_PAT: ${{ secrets.GH_PAT }}
# ▸ Start actual job steps
- name: Set sandbox ID
run: echo "SANDBOX_NAME=cozy-e2e-sandbox-$(echo "${GITHUB_REPOSITORY}:${GITHUB_WORKFLOW}:${GITHUB_REF}" | sha256sum | cut -c1-10)" >> $GITHUB_ENV
- name: Sync _out/assets directory
run: |
mkdir -p /tmp/$SANDBOX_NAME/_out/assets
mv _out/assets/* /tmp/$SANDBOX_NAME/_out/assets/
- name: Install Cozystack into sandbox
run: |
cd /tmp/$SANDBOX_NAME
attempt=0
until make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME install-cozystack; do
attempt=$((attempt + 1))
if [ $attempt -ge 3 ]; then
echo "❌ Attempt $attempt failed, exiting..."
exit 1
fi
echo "❌ Attempt $attempt failed, retrying..."
done
echo "✅ The task completed successfully after $attempt attempts."
detect_test_matrix:
name: "Detect e2e test matrix"
runs-on: ubuntu-latest
outputs:
matrix: ${{ steps.set.outputs.matrix }}
steps:
- uses: actions/checkout@v4
- id: set
run: |
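# List hack/e2e-apps/*.bats files, strip paths and extensions, and emit the app names as a JSON matrix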
apps=$(find hack/e2e-apps -maxdepth 1 -mindepth 1 -name '*.bats' | \
awk -F/ '{sub(/\..+/, "", $NF); print $NF}' | jq -R . | jq -cs .)
echo "matrix={\"app\":$apps}" >> "$GITHUB_OUTPUT"
test_apps:
strategy:
matrix: ${{ fromJson(needs.detect_test_matrix.outputs.matrix) }}
name: Test ${{ matrix.app }}
runs-on: [self-hosted]
needs: [install_cozystack,detect_test_matrix]
if: ${{ always() && (needs.install_cozystack.result == 'success' && needs.detect_test_matrix.result == 'success') }}
steps:
- name: Set sandbox ID
run: echo "SANDBOX_NAME=cozy-e2e-sandbox-$(echo "${GITHUB_REPOSITORY}:${GITHUB_WORKFLOW}:${GITHUB_REF}" | sha256sum | cut -c1-10)" >> $GITHUB_ENV
- name: E2E Apps
run: |
cd /tmp/$SANDBOX_NAME
attempt=0
until make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME test-apps-${{ matrix.app }}; do
attempt=$((attempt + 1))
if [ $attempt -ge 3 ]; then
echo "❌ Attempt $attempt failed, exiting..."
exit 1
fi
echo "❌ Attempt $attempt failed, retrying..."
done
echo "✅ The task completed successfully after $attempt attempts"
collect_debug_information:
name: Collect debug information
runs-on: [self-hosted]
needs: [test_apps]
if: ${{ always() }}
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set sandbox ID
run: echo "SANDBOX_NAME=cozy-e2e-sandbox-$(echo "${GITHUB_REPOSITORY}:${GITHUB_WORKFLOW}:${GITHUB_REF}" | sha256sum | cut -c1-10)" >> $GITHUB_ENV
- name: Collect report
run: |
cd /tmp/$SANDBOX_NAME
make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME collect-report
- name: Upload cozyreport.tgz
uses: actions/upload-artifact@v4
with:
name: cozyreport
path: /tmp/${{ env.SANDBOX_NAME }}/_out/cozyreport.tgz
- name: Collect images list
run: |
cd /tmp/$SANDBOX_NAME
make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME collect-images
- name: Upload image list
uses: actions/upload-artifact@v4
with:
name: image-list
path: /tmp/${{ env.SANDBOX_NAME }}/_out/images.txt
cleanup:
name: Tear down environment
runs-on: [self-hosted]
needs: [collect_debug_information]
if: ${{ always() && needs.test_apps.result == 'success' }}
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
fetch-tags: true
- name: Set sandbox ID
run: echo "SANDBOX_NAME=cozy-e2e-sandbox-$(echo "${GITHUB_REPOSITORY}:${GITHUB_WORKFLOW}:${GITHUB_REF}" | sha256sum | cut -c1-10)" >> $GITHUB_ENV
- name: Tear down sandbox
run: make -C packages/core/testing SANDBOX_NAME=$SANDBOX_NAME delete
- name: Remove workspace
run: rm -rf /tmp/$SANDBOX_NAME
- name: Test
run: make test

View File

@@ -112,9 +112,13 @@ jobs:
# Commit built artifacts
- name: Commit release artifacts
if: steps.check_release.outputs.skip == 'false'
env:
GH_PAT: ${{ secrets.GH_PAT }}
run: |
git config user.name "github-actions"
git config user.email "github-actions@github.com"
git config user.name "cozystack-bot"
git config user.email "217169706+cozystack-bot@users.noreply.github.com"
git remote set-url origin https://cozystack-bot:${GH_PAT}@github.com/${GITHUB_REPOSITORY}
git config --unset-all http.https://github.com/.extraheader || true
git add .
git commit -m "Prepare release ${GITHUB_REF#refs/tags/}" -s || echo "No changes to commit"
git push origin HEAD || true
@@ -189,7 +193,12 @@ jobs:
# Create release-X.Y.Z branch and push (force-update)
- name: Create release branch
if: steps.check_release.outputs.skip == 'false'
env:
GH_PAT: ${{ secrets.GH_PAT }}
run: |
git config user.name "cozystack-bot"
git config user.email "217169706+cozystack-bot@users.noreply.github.com"
git remote set-url origin https://cozystack-bot:${GH_PAT}@github.com/${GITHUB_REPOSITORY}
BRANCH="release-${GITHUB_REF#refs/tags/v}"
git branch -f "$BRANCH"
git push -f origin "$BRANCH"
@@ -199,6 +208,7 @@ jobs:
if: steps.check_release.outputs.skip == 'false'
uses: actions/github-script@v7
with:
github-token: ${{ secrets.GH_PAT }}
script: |
const version = context.ref.replace('refs/tags/v', '');
const base = '${{ steps.get_base.outputs.branch }}';

View File

@@ -11,14 +11,14 @@ repos:
- id: run-make-generate
name: Run 'make generate' in all app directories
entry: |
/bin/bash -c '
for dir in ./packages/apps/*/; do
flock -x .git/pre-commit.lock sh -c '
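# Take an exclusive lock so concurrent pre-commit runs do not regenerate files simultaneously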
for dir in ./packages/apps/*/ ./packages/extra/*/ ./packages/system/cozystack-api/; do
if [ -d "$dir" ]; then
echo "Running make generate in $dir"
(cd "$dir" && make generate)
make generate -C "$dir" || exit $?
fi
done
git diff --color=always | cat
'
language: script
language: system
files: ^.*$

151
CONTRIBUTOR_LADDER.md Normal file
View File

@@ -0,0 +1,151 @@
# Contributor Ladder Template
* [Contributor Ladder](#contributor-ladder-template)
* [Community Participant](#community-participant)
* [Contributor](#contributor)
* [Reviewer](#reviewer)
* [Maintainer](#maintainer)
* [Inactivity](#inactivity)
* [Involuntary Removal](#involuntary-removal-or-demotion)
* [Stepping Down/Emeritus Process](#stepping-downemeritus-process)
* [Contact](#contact)
## Contributor Ladder
Hello! We are excited that you want to learn more about our project contributor ladder! This contributor ladder outlines the different contributor roles within the project, along with the responsibilities and privileges that come with them. Community members generally start at the first levels of the "ladder" and advance up it as their involvement in the project grows. Our project members are happy to help you advance along the contributor ladder.
Each of the contributor roles below is organized into lists of three types of things. "Responsibilities" are things that a contributor is expected to do. "Requirements" are qualifications a person needs to meet to be in that role, and "Privileges" are things contributors on that level are entitled to.
### Community Participant
Description: A Community Participant engages with the project and its community, contributing their time, thoughts, etc. Community participants are usually users who have stopped being anonymous and started being active in project discussions.
* Responsibilities:
* Must follow the [CNCF CoC](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)
* How users can get involved with the community:
* Participating in community discussions
* Helping other users
* Submitting bug reports
* Commenting on issues
* Trying out new releases
* Attending community events
### Contributor
Description: A Contributor contributes directly to the project and adds value to it. Contributions need not be code. People at the Contributor level may be new contributors, or they may only contribute occasionally.
* Responsibilities include:
* Follow the [CNCF CoC](https://github.com/cncf/foundation/blob/main/code-of-conduct.md)
* Follow the project [contributing guide](https://github.com/cozystack/cozystack/blob/main/CONTRIBUTING.md)
* Requirements (one or several of the below):
* Report and sometimes resolve issues
* Occasionally submit PRs
* Contribute to the documentation
* Show up at meetings, take notes
* Answer questions from other community members
* Submit feedback on issues and PRs
* Test releases and patches and submit reviews
* Run or help run events
* Promote the project in public
* Help run the project infrastructure
* Privileges:
* Invitations to contributor events
* Eligible to become a Maintainer
### Reviewer
Description: A Reviewer has responsibility for specific code, documentation, test, or other project areas. They are collectively responsible, with other Reviewers, for reviewing all changes to those areas and indicating whether those changes are ready to merge. They have a track record of contribution and review in the project.
Reviewers are responsible for a "specific area." This can be a specific code directory, driver, chapter of the docs, test job, event, or other clearly-defined project component that is smaller than an entire repository or subproject. Most often it is one or a set of directories in one or more Git repositories. The "specific area" below refers to this area of responsibility.
Reviewers have all the rights and responsibilities of a Contributor, plus:
* Responsibilities include:
* Continues to contribute regularly, with at least 15 PRs a year, as demonstrated by [Cozystack devstats](https://cozystack.devstats.cncf.io).
* Following the reviewing guide
* Reviewing most Pull Requests against their specific areas of responsibility
* Reviewing at least 40 PRs per year
* Helping other contributors become reviewers
* Requirements:
* Must have successful contributions to the project, including at least one of the following:
* 10 accepted PRs,
* Reviewed 20 PRs,
* Resolved and closed 20 Issues,
* Become responsible for a key project management area,
* Or some equivalent combination of contributions
* Must have been contributing for at least 6 months
* Must be actively contributing to at least one project area
* Must have two sponsors who are also Reviewers or Maintainers, at least one of whom does not work for the same employer
* Has reviewed, or helped review, at least 20 Pull Requests
* Has analyzed and resolved test failures in their specific area
* Has demonstrated an in-depth knowledge of the specific area
* Commits to being responsible for that specific area
* Is supportive of new and occasional contributors and helps get useful PRs in shape to commit
* Additional privileges:
* Has GitHub or CI/CD rights to approve pull requests in specific directories
* Can recommend and review other contributors to become Reviewers
* May be assigned Issues and Reviews
* May give commands to CI/CD automation
* Can recommend other contributors to become Reviewers
The process of becoming a Reviewer is:
1. The contributor is nominated by opening a PR against the appropriate repository, which adds their GitHub username to the OWNERS file for one or more directories.
2. At least two members of the team that owns that repository or main directory, who are already Approvers, approve the PR.
### Maintainer
Description: Maintainers are very established contributors who are responsible for the entire project. As such, they have the ability to approve PRs against any area of the project, and are expected to participate in making decisions about the strategy and priorities of the project.
A Maintainer must meet the responsibilities and requirements of a Reviewer, plus:
* Responsibilities include:
* Reviewing at least 40 PRs per year, especially PRs that involve multiple parts of the project
* Mentoring new Reviewers
* Writing refactoring PRs
* Participating in CNCF maintainer activities
* Determining strategy and policy for the project
* Participating in, and leading, community meetings
* Requirements
* Experience as a Reviewer for at least 6 months
* Demonstrates a broad knowledge of the project across multiple areas
* Is able to exercise judgment for the good of the project, independent of their employer, friends, or team
* Mentors other contributors
* Can commit to spending at least 10 hours per month working on the project
* Additional privileges:
* Approve PRs to any area of the project
* Represent the project in public as a Maintainer
* Communicate with the CNCF on behalf of the project
* Have a vote in Maintainer decision-making meetings
Process of becoming a maintainer:
1. Any current Maintainer may nominate a current Reviewer to become a new Maintainer, by opening a PR against the root of the cozystack repository adding the nominee as an Approver in the [MAINTAINERS](https://github.com/cozystack/cozystack/blob/main/MAINTAINERS.md) file.
2. The nominee will add a comment to the PR testifying that they agree to all requirements of becoming a Maintainer.
3. A majority of the current Maintainers must then approve the PR.
## Inactivity
It is important for contributors to be and stay active to set an example and show commitment to the project. Inactivity is harmful to the project as it may lead to unexpected delays, contributor attrition, and a loss of trust in the project.
* Inactivity is measured by:
* Periods of no contributions for longer than 6 months
* Periods of no communication for longer than 3 months
* Consequences of being inactive include:
* Involuntary removal or demotion
* Being asked to move to Emeritus status
## Involuntary Removal or Demotion
Involuntary removal/demotion of a contributor happens when responsibilities and requirements aren't being met. This may include repeated patterns of inactivity, an extended period of inactivity, a period of failing to meet the requirements of your role, and/or a violation of the Code of Conduct. This process is important because it protects the community and its deliverables while also opening up opportunities for new contributors to step in.
Involuntary removal or demotion is handled through a vote by a majority of the current Maintainers.
## Stepping Down/Emeritus Process
If and when contributors' commitment levels change, contributors can consider stepping down (moving down the contributor ladder) vs moving to emeritus status (completely stepping away from the project).
Contact the Maintainers about changing to Emeritus status, or reducing your contributor level.
## Contact
* For inquiries, please reach out to: @kvaps, @tym83

View File

@@ -9,7 +9,6 @@ build-deps:
build: build-deps
make -C packages/apps/http-cache image
make -C packages/apps/postgres image
make -C packages/apps/mysql image
make -C packages/apps/clickhouse image
make -C packages/apps/kubernetes image
@@ -49,6 +48,10 @@ test:
make -C packages/core/testing apply
make -C packages/core/testing test
prepare-env:
make -C packages/core/testing apply
make -C packages/core/testing prepare-cluster
generate:
hack/update-codegen.sh

View File

@@ -12,11 +12,15 @@
**Cozystack** is a free PaaS platform and framework for building clouds.
Cozystack is a [CNCF Sandbox Level Project](https://www.cncf.io/sandbox-projects/) that was originally built and sponsored by [Ænix](https://aenix.io/).
With Cozystack, you can transform a bunch of servers into an intelligent system with a simple REST API for spawning Kubernetes clusters,
Database-as-a-Service, virtual machines, load balancers, HTTP caching services, and other services with ease.
Use Cozystack to build your own cloud or provide a cost-effective development environment.
![Cozystack user interface](https://cozystack.io/img/screenshot.png)
## Use-Cases
* [**Using Cozystack to build a public cloud**](https://cozystack.io/docs/guides/use-cases/public-cloud/)
@@ -28,9 +32,6 @@ You can use Cozystack as a platform to build a private cloud powered by Infrastr
* [**Using Cozystack as a Kubernetes distribution**](https://cozystack.io/docs/guides/use-cases/kubernetes-distribution/)
You can use Cozystack as a Kubernetes distribution for Bare Metal
## Screenshot
![Cozystack screenshot](https://cozystack.io/img/screenshot.png)
## Documentation
@@ -59,7 +60,10 @@ Commits are used to generate the changelog, and their author will be referenced
If you have **Feature Requests** please use the [Discussion's Feature Request section](https://github.com/cozystack/cozystack/discussions/categories/feature-requests).
You are welcome to join our weekly community meetings (just add these events to your [Google Calendar](https://calendar.google.com/calendar?cid=ZTQzZDIxZTVjOWI0NWE5NWYyOGM1ZDY0OWMyY2IxZTFmNDMzZTJlNjUzYjU2ZGJiZGE3NGNhMzA2ZjBkMGY2OEBncm91cC5jYWxlbmRhci5nb29nbGUuY29t) or [iCal](https://calendar.google.com/calendar/ical/e43d21e5c9b45a95f28c5d649c2cb1e1f433e2e653b56dbbda74ca306f0d0f68%40group.calendar.google.com/public/basic.ics)) or [Telegram group](https://t.me/cozystack).
## Community
You are welcome to join our [Telegram group](https://t.me/cozystack) and come to our weekly community meetings.
Add them to your [Google Calendar](https://calendar.google.com/calendar?cid=ZTQzZDIxZTVjOWI0NWE5NWYyOGM1ZDY0OWMyY2IxZTFmNDMzZTJlNjUzYjU2ZGJiZGE3NGNhMzA2ZjBkMGY2OEBncm91cC5jYWxlbmRhci5nb29nbGUuY29t) or [iCal](https://calendar.google.com/calendar/ical/e43d21e5c9b45a95f28c5d649c2cb1e1f433e2e653b56dbbda74ca306f0d0f68%40group.calendar.google.com/public/basic.ics) for convenience.
## License

View File

@@ -194,7 +194,15 @@ func main() {
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "Workload")
setupLog.Error(err, "unable to create controller", "controller", "TenantHelmReconciler")
os.Exit(1)
}
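// Register CozystackConfigReconciler, which watches the Cozystack configuration and triggers
// updates of the affected system applications when relevant settings change.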
if err = (&controller.CozystackConfigReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "CozystackConfigReconciler")
os.Exit(1)
}

View File

@@ -0,0 +1,11 @@
## Major Features and Improvements
## Security
## Fixes
## Dependencies
## Documentation
## Development, Testing, and CI/CD

View File

@@ -0,0 +1,8 @@
## Fixes
* [build] Update Talos Linux v1.10.3 and fix assets. (@kvaps in https://github.com/cozystack/cozystack/pull/1006)
* [ci] Fix uploading released artifacts to GitHub. (@kvaps in https://github.com/cozystack/cozystack/pull/1009)
* [ci] Separate build and testing jobs. (@kvaps in https://github.com/cozystack/cozystack/pull/1005)
* [docs] Write a full release post for v0.31.1. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/999)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.31.0...v0.31.1

View File

@@ -0,0 +1,12 @@
## Security
* Resolve a security problem that allowed a tenant administrator to gain enhanced privileges outside the tenant. (@kvaps in https://github.com/cozystack/cozystack/pull/1062, backported in https://github.com/cozystack/cozystack/pull/1066)
## Fixes
* [platform] Fix dependencies in `distro-full` bundle. (@klinch0 in https://github.com/cozystack/cozystack/pull/1056, backported in https://github.com/cozystack/cozystack/pull/1064)
* [platform] Fix RBAC for annotating namespaces. (@kvaps in https://github.com/cozystack/cozystack/pull/1031, backported in https://github.com/cozystack/cozystack/pull/1037)
* [platform] Reduce system resource consumption by using smaller resource presets for VerticalPodAutoscaler, SeaweedFS, and KubeOVN. (@klinch0 in https://github.com/cozystack/cozystack/pull/1054, backported in https://github.com/cozystack/cozystack/pull/1058)
* [dashboard] Fix a number of issues in the Cozystack Dashboard (@kvaps in https://github.com/cozystack/cozystack/pull/1042, backported in https://github.com/cozystack/cozystack/pull/1066)
* [apps] Specify minimal working resource presets. (@kvaps in https://github.com/cozystack/cozystack/pull/1040, backported in https://github.com/cozystack/cozystack/pull/1041)
* [apps] Update built-in documentation and configuration reference for managed Clickhouse application. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1059, backported in https://github.com/cozystack/cozystack/pull/1065)

View File

@@ -0,0 +1,71 @@
Cozystack v0.32.0 is a significant release that brings new features, key fixes, and updates to underlying components.
## Major Features and Improvements
* [platform] Use `cozypkg` instead of Helm (@kvaps in https://github.com/cozystack/cozystack/pull/1057)
* [platform] Introduce the HelmRelease reconciler for system components. (@kvaps in https://github.com/cozystack/cozystack/pull/1033)
* [kubernetes] Enable using container registry mirrors by tenant Kubernetes clusters. Configure containerd for tenant Kubernetes clusters. (@klinch0 in https://github.com/cozystack/cozystack/pull/979, patched by @lllamnyp in https://github.com/cozystack/cozystack/pull/1032)
* [platform] Allow users to specify CPU requests in vCPUs. Use a library chart for resource management. (@lllamnyp in https://github.com/cozystack/cozystack/pull/972 and https://github.com/cozystack/cozystack/pull/1025)
* [platform] Annotate all child objects of apps with uniform labels for tracking by WorkloadMonitors. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1018 and https://github.com/cozystack/cozystack/pull/1024)
* [platform] Introduce `cluster-domain` option and un-hardcode `cozy.local`. (@kvaps in https://github.com/cozystack/cozystack/pull/1039)
* [platform] Get instance type when reconciling WorkloadMonitor (https://github.com/cozystack/cozystack/pull/1030)
* [virtual-machine] Add RBAC rules to allow port forwarding in KubeVirt for SSH via `virtctl`. (@mattia-eleuteri in https://github.com/cozystack/cozystack/pull/1027, patched by @klinch0 in https://github.com/cozystack/cozystack/pull/1028)
* [monitoring] Add events and audit inputs (@kevin880202 in https://github.com/cozystack/cozystack/pull/948)
## Security
* Resolve a security problem that allowed tenant administrator to gain enhanced privileges outside the tenant. (@kvaps in https://github.com/cozystack/cozystack/pull/1062)
## Fixes
* [dashboard] Fix a number of issues in the Cozystack Dashboard (@kvaps in https://github.com/cozystack/cozystack/pull/1042)
* [kafka] Specify minimal working resource presets. (@kvaps in https://github.com/cozystack/cozystack/pull/1040)
* [cilium] Fixed Gateway API manifest. (@zdenekjanda in https://github.com/cozystack/cozystack/pull/1016)
* [platform] Fix RBAC for annotating namespaces. (@kvaps in https://github.com/cozystack/cozystack/pull/1031)
* [platform] Fix dependencies for paas-hosted bundle. (@kvaps in https://github.com/cozystack/cozystack/pull/1034)
* [platform] Reduce system resource consumption by using smaller resource presets for VerticalPodAutoscaler, SeaweedFS, and KubeOVN. (@klinch0 in https://github.com/cozystack/cozystack/pull/1054)
* [virtual-machine] Fix handling of cloudinit and ssh-key input for `virtual-machine` and `vm-instance` applications. (@gwynbleidd2106 in https://github.com/cozystack/cozystack/pull/1019 and https://github.com/cozystack/cozystack/pull/1020)
* [apps] Fix Clickhouse version parsing. (@kvaps in https://github.com/cozystack/cozystack/commit/28302e776e9d2bb8f424cf467619fa61d71ac49a)
* [apps] Add resource quotas for PostgreSQL jobs and fix application readme generation check in CI. (@klinch0 in https://github.com/cozystack/cozystack/pull/1051)
* [kube-ovn] Enable database health check. (@kvaps in https://github.com/cozystack/cozystack/pull/1047)
* [kubernetes] Fix upstream issue by updating Kubevirt-CCM. (@kvaps in https://github.com/cozystack/cozystack/pull/1052)
* [kubernetes] Fix resources and introduce a migration when upgrading tenant Kubernetes to v0.32.4. (@kvaps in https://github.com/cozystack/cozystack/pull/1073)
* [cluster-api] Add a missing migration for `capi-providers`. (@kvaps in https://github.com/cozystack/cozystack/pull/1072)
## Dependencies
* Introduce cozypkg, update to v1.1.0. (@kvaps in https://github.com/cozystack/cozystack/pull/1057 and https://github.com/cozystack/cozystack/pull/1063)
* Update flux-operator to 0.22.0, Flux to 2.6.x. (@kingdonb in https://github.com/cozystack/cozystack/pull/1035)
* Update Talos Linux to v1.10.3. (@kvaps in https://github.com/cozystack/cozystack/pull/1006)
* Update Cilium to v1.17.4. (@kvaps in https://github.com/cozystack/cozystack/pull/1046)
* Update MetalLB to v0.15.2. (@kvaps in https://github.com/cozystack/cozystack/pull/1045)
* Update Kube-OVN to v1.13.13. (@kvaps in https://github.com/cozystack/cozystack/pull/1047)
## Documentation
* [Oracle Cloud Infrastructure installation guide](https://cozystack.io/docs/operations/talos/installation/oracle-cloud/). (@kvaps, @lllamnyp, and @NickVolynkin in https://github.com/cozystack/website/pull/168)
* [Cluster configuration with `talosctl`](https://cozystack.io/docs/operations/talos/configuration/talosctl/). (@NickVolynkin in https://github.com/cozystack/website/pull/211)
* [Configuring container registry mirrors for tenant Kubernetes clusters](https://cozystack.io/docs/operations/talos/configuration/air-gapped/#5-configure-container-registry-mirrors-for-tenant-kubernetes). (@klinch0 in https://github.com/cozystack/website/pull/210)
* [Explain application management strategies and available versions for managed applications.](https://cozystack.io/docs/guides/applications/). (@NickVolynkin in https://github.com/cozystack/website/pull/219)
* [How to clean up etcd state](https://cozystack.io/docs/operations/faq/#how-to-clean-up-etcd-state). (@gwynbleidd2106 in https://github.com/cozystack/website/pull/214)
* [State that Cozystack is a CNCF Sandbox project](https://github.com/cozystack/cozystack?tab=readme-ov-file#cozystack). (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1055)
## Development, Testing, and CI/CD
* [tests] Add tests for applications `virtual-machine`, `vm-disk`, `vm-instance`, `postgresql`, `mysql`, and `clickhouse`. (@gwynbleidd2106 in https://github.com/cozystack/cozystack/pull/1048, patched by @kvaps in https://github.com/cozystack/cozystack/pull/1074)
* [tests] Fix concurrency for the `docker login` action. (@kvaps in https://github.com/cozystack/cozystack/pull/1014)
* [tests] Increase QEMU system disk size in tests. (@kvaps in https://github.com/cozystack/cozystack/pull/1011)
* [tests] Increase the waiting timeout for VMs in tests. (@kvaps in https://github.com/cozystack/cozystack/pull/1038)
* [ci] Separate build and testing jobs in CI. (@kvaps in https://github.com/cozystack/cozystack/pull/1005 and https://github.com/cozystack/cozystack/pull/1010)
* [ci] Fix the release assets. (@kvaps in https://github.com/cozystack/cozystack/pull/1006 and https://github.com/cozystack/cozystack/pull/1009)
## New Contributors
* @kevin880202 made their first contribution in https://github.com/cozystack/cozystack/pull/948
* @mattia-eleuteri made their first contribution in https://github.com/cozystack/cozystack/pull/1027
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.31.0...v0.32.0
<!--
HEAD https://github.com/cozystack/cozystack/commit/3ce6dbe8
-->

View File

@@ -0,0 +1,38 @@
## Major Features and Improvements
* [postgres] Introduce new functionality for backup and restore in PostgreSQL. (@klinch0 in https://github.com/cozystack/cozystack/pull/1086)
* [apps] Refactor resources in managed applications. (@kvaps in https://github.com/cozystack/cozystack/pull/1106)
* [system] Make VMAgent's `extraArgs` tunable. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1091)
## Fixes
* [postgres] Escape users and database names. (@kvaps in https://github.com/cozystack/cozystack/pull/1087)
* [tenant] Fix monitoring agents HelmReleases for tenant clusters. (@klinch0 in https://github.com/cozystack/cozystack/pull/1079)
* [kubernetes] Wrap cert-manager CRDs in a conditional. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1076)
* [kubernetes] Remove `useCustomSecretForPatchContainerd` option and enable it by default. (@kvaps in https://github.com/cozystack/cozystack/pull/1104)
* [apps] Increase default resource presets for Clickhouse and Kafka from `nano` to `small`. Update OpenAPI specs and READMEs. (@kvaps in https://github.com/cozystack/cozystack/pull/1103 and https://github.com/cozystack/cozystack/pull/1105)
* [linstor] Add configurable DRBD network options for connection and timeout settings, replacing scripted logic for detecting devices that lost connection. (@kvaps in https://github.com/cozystack/cozystack/pull/1094)
## Dependencies
* Update cozy-proxy to v0.2.0 (@kvaps in https://github.com/cozystack/cozystack/pull/1081)
* Update Kafka Operator to 0.45.1-rc1 (@kvaps in https://github.com/cozystack/cozystack/pull/1082 and https://github.com/cozystack/cozystack/pull/1102)
* Update Flux Operator to 0.23.0 (@kingdonb in https://github.com/cozystack/cozystack/pull/1078)
## Documentation
* [docs] Release notes for v0.32.0 and two beta-versions. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1043)
## Development, Testing, and CI/CD
* [tests] Add Kafka, Redis. (@gwynbleidd2106 in https://github.com/cozystack/cozystack/pull/1077)
* [tests] Increase disk space for VMs in tests. (@kvaps in https://github.com/cozystack/cozystack/pull/1097)
* [tests] Update Kubernetes to v1.33. (@kvaps in https://github.com/cozystack/cozystack/pull/1083)
* [tests] Increase postgres timeouts. (@kvaps in https://github.com/cozystack/cozystack/pull/1108)
* [tests] Don't wait for the postgres ro service. (@kvaps in https://github.com/cozystack/cozystack/pull/1109)
* [ci] Setup systemd timer to tear down sandbox. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1092)
* [ci] Split testing job into several. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1075)
* [ci] Run E2E tests as separate parallel jobs. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1093)
* [ci] Refactor GitHub workflows. (@kvaps in https://github.com/cozystack/cozystack/pull/1107)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.32.0...v0.32.1

View File

@@ -0,0 +1,91 @@
> [!WARNING]
> A patch release [0.33.2](https://github.com/cozystack/cozystack/releases/tag/v0.33.2) fixing a regression in 0.33.0 has been released.
> It is recommended to skip this version and upgrade to [0.33.2](https://github.com/cozystack/cozystack/releases/tag/v0.33.2) instead.
## Feature Highlights
### Unified CPU and Memory Allocation Management
In version 0.31.0, Cozystack introduced a single-point-of-truth configuration variable `cpu-allocation-ratio`,
making CPU resource requests and limits uniform in Virtual Machines managed by KubeVirt.
The new release 0.33.0 introduces `memory-allocation-ratio` and expands both variables to all managed applications and tenant resource quotas.
Resource presets also respect the allocation ratios and behave in the same way as explicit resource definitions.
The new resource definition format is concise and simple for platform users.
```yaml
# resource definition in the configuration
resources:
cpu: <defined cpu value>
memory: <defined memory value>
```
It results in Kubernetes resource requests and limits, based on defined values and the universal allocation ratios:
```yaml
# actual requests and limits, provided to the application
resources:
limits:
cpu: <defined cpu value>
memory: <defined memory value>
requests:
cpu: <defined cpu value / cpu-allocation-ratio>
memory: <defined memory value / memory-allocation-ratio>
```
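For instance, with hypothetical ratios `cpu-allocation-ratio: 4` and `memory-allocation-ratio: 1`, defining `cpu: 2` and `memory: 4Gi` would yield:
```yaml
# Hypothetical example; the actual ratios are set by the platform administrator.
resources:
  limits:
    cpu: 2
    memory: 4Gi
  requests:
    cpu: 500m     # 2 / cpu-allocation-ratio (4)
    memory: 4Gi   # 4Gi / memory-allocation-ratio (1)
```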
When updating from earlier Cozystack versions, resource configuration in managed applications will be automatically migrated to the new format.
### Backing up and Restoring Data in Tenant Kubernetes
One of the main features of the release is backup capability for PVCs in tenant Kubernetes clusters.
It enables platform and tenant administrators to back up and restore data used by services in the tenant clusters.
This new functionality in Cozystack is powered by [Velero](https://velero.io/) and requires external S3-compatible storage.
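For reference, Velero's standard way of pointing at such storage is a `BackupStorageLocation` object; the namespace, bucket, and endpoint below are placeholders rather than Cozystack defaults:
```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero              # placeholder namespace
spec:
  provider: aws                  # any S3-compatible backend
  objectStorage:
    bucket: tenant-backups       # placeholder bucket name
  config:
    region: us-east-1
    s3ForcePathStyle: "true"
    s3Url: https://s3.example.com
```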
### Support for NFS Storage
Cozystack now supports using NFS shared storage with a new optional system module.
See the documentation: https://cozystack.io/docs/operations/storage/nfs/.
## Features and Improvements
* [kubernetes] Enable PVC backups in tenant Kubernetes clusters, powered by [Velero](https://velero.io/). (@klinch0 in https://github.com/cozystack/cozystack/pull/1132)
* [nfs-driver] Enable NFS support by introducing a new optional system module `nfs-driver`. (@kvaps in https://github.com/cozystack/cozystack/pull/1133)
* [virtual-machine] Configure CPU sockets available to VMs with the `resources.cpu.sockets` configuration value. (@klinch0 in https://github.com/cozystack/cozystack/pull/1131)
* [virtual-machine] Add support for using pre-imported "golden image" disks for virtual machines, enabling faster provisioning by referencing existing images instead of downloading via HTTP. (@gwynbleidd2106 in https://github.com/cozystack/cozystack/pull/1112)
* [kubernetes] Add an option to expose the Ingress-NGINX controller in tenant Kubernetes cluster via LoadBalancer. New configuration value `exposeMethod` offers a choice of `Proxied` and `LoadBalancer`. (@kvaps in https://github.com/cozystack/cozystack/pull/1114)
* [apps] When updating from earlier Cozystack versions, automatically migrate to the new resource definition format: from `resources.requests.[cpu,memory]` and `resources.limits.[cpu,memory]` to `resources.[cpu,memory]`. (@kvaps in https://github.com/cozystack/cozystack/pull/1127)
* [apps] Give examples of new resource definitions in the managed app READMEs. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1120)
* [tenant] Respect `cpu-allocation-ratio` in tenant's `resourceQuotas`. (@kvaps in https://github.com/cozystack/cozystack/pull/1119)
* [cozy-lib] Introduce helper function to calculate Java heap params based on memory requests and limits. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1157)
## Security
* [monitoring] Disable sign up in Alerta. (@klinch0 in https://github.com/cozystack/cozystack/pull/1129)
## Fixes
* [platform] Always set resources for managed apps. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1156)
* [platform] Remove the memory limit for Keycloak deployment. (@klinch0 in https://github.com/cozystack/cozystack/pull/1122)
* [kubernetes] Fix a condition in the ingress template for tenant Kubernetes. (@kvaps in https://github.com/cozystack/cozystack/pull/1143)
* [kubernetes] Fix a deadlock on reattaching a KubeVirt-CSI volume. (@kvaps in https://github.com/cozystack/cozystack/pull/1135)
* [mysql] MySQL applications with a single replica now correctly create a `LoadBalancer` service. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1113)
* [etcd] Fix resources and headless services in the etcd application. (@kvaps in https://github.com/cozystack/cozystack/pull/1128)
* [apps] Enable selecting `resourcePreset` from a drop-down list for all applications by adding enum of allowed values in the config scheme. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1117)
* [apps] Refactor resource presets provided to managed apps by `cozy-lib`. (@kvaps in https://github.com/cozystack/cozystack/pull/1155)
* [keycloak] Calculate and pass Java heap parameters explicitly to prevent OOM errors. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1157)
## Development, Testing, and CI/CD
* [dx] Introduce cozyreport tool and gather reports in CI. (@kvaps in https://github.com/cozystack/cozystack/pull/1139)
* [ci] Use Nexus as a pull-through cache for CI. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1124)
* [ci] Save a list of observed images after each workflow run. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1089)
* [ci] Skip Cozystack tests on PRs that only change the docs. Don't restart CI when a PR is labeled. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1136)
* [dx] Fix Makefile variables for `capi-providers`. (@kvaps in https://github.com/cozystack/cozystack/pull/1115)
* [tests] Introduce self-destructing testing environments. (@kvaps in https://github.com/cozystack/cozystack/pull/1138, https://github.com/cozystack/cozystack/pull/1140, https://github.com/cozystack/cozystack/pull/1141, https://github.com/cozystack/cozystack/pull/1142)
* [e2e] Retry flaky application tests to improve total test time. (@kvaps in https://github.com/cozystack/cozystack/pull/1123)
* [maintenance] Add a PR template. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1121)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.32.1...v0.33.0


@@ -0,0 +1,3 @@
## Fixes
* [kubevirt-csi] Fix a regression by updating the role of the CSI controller. (@lllamnyp in https://github.com/cozystack/cozystack/pull/1165)


@@ -0,0 +1,19 @@
## Features and Improvements
* [vm-instance] Enable running [Windows](https://cozystack.io/docs/operations/virtualization/windows/) and [MikroTik RouterOS](https://cozystack.io/docs/operations/virtualization/mikrotik/) in Cozystack. Add `bus` option and always specify `bootOrder` for all disks. (@kvaps in https://github.com/cozystack/cozystack/pull/1168)
* [cozystack-api] Refactor OpenAPI Schema and support reading it from config. (@kvaps in https://github.com/cozystack/cozystack/pull/1173)
* [cozystack-api] Enable using singular resource names in Cozystack API. For example, `kubectl get tenant` is now a valid command, in addition to `kubectl get tenants`. (@kvaps in https://github.com/cozystack/cozystack/pull/1169)
* [postgres] Explain how to back up and restore PostgreSQL using Velero backups. (@klinch0 and @NickVolynkin in https://github.com/cozystack/cozystack/pull/1141)
## Fixes
* [virtual-machine,vm-instance] Adjust the RBAC role to let users read the service associated with the VMs they create. As a result, users can see the service details in the dashboard, including the VM's IP address. (@klinch0 in https://github.com/cozystack/cozystack/pull/1161)
* [cozystack-api] Fix an error with `resourceVersion` which resulted in the message 'failed to update HelmRelease: helmreleases.helm.toolkit.fluxcd.io "xxx" is invalid...'. (@kvaps in https://github.com/cozystack/cozystack/pull/1170)
* [cozystack-api] Fix an error in updating lists in Cozystack objects, which resulted in the message "Warning: resource ... is missing the kubectl.kubernetes.io/last-applied-configuration annotation". (@kvaps in https://github.com/cozystack/cozystack/pull/1171)
* [cozystack-api] Disable `strategic-json-patch` support. (@kvaps in https://github.com/cozystack/cozystack/pull/1179)
* [dashboard] Fix the code for removing dashboard comments, which mistakenly removed the shebang from cloudInit scripts. (@kvaps in https://github.com/cozystack/cozystack/pull/1175)
* [virtual-machine] Fix cloudInit and sshKeys processing. (@kvaps in https://github.com/cozystack/cozystack/pull/1175 and https://github.com/cozystack/cozystack/commit/da3ee5d0ea9e87529c8adc4fcccffabe8782292e)
* [applications] Fix a typo in preset resource tables in the built-in documentation of managed applications. (@NickVolynkin in https://github.com/cozystack/cozystack/pull/1172)
* [kubernetes] Enable deleting Velero component from a tenant Kubernetes cluster. (@klinch0 in https://github.com/cozystack/cozystack/pull/1176)
**Full Changelog**: https://github.com/cozystack/cozystack/compare/v0.33.1...v0.33.2

go.mod

@@ -37,6 +37,7 @@ require (
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
github.com/evanphx/json-patch v4.12.0+incompatible // indirect
github.com/evanphx/json-patch/v5 v5.9.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fluxcd/pkg/apis/kustomize v1.6.1 // indirect
@@ -91,14 +92,14 @@ require (
go.opentelemetry.io/proto/otlp v1.3.1 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.0 // indirect
golang.org/x/crypto v0.28.0 // indirect
golang.org/x/crypto v0.31.0 // indirect
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 // indirect
golang.org/x/net v0.30.0 // indirect
golang.org/x/net v0.33.0 // indirect
golang.org/x/oauth2 v0.23.0 // indirect
golang.org/x/sync v0.8.0 // indirect
golang.org/x/sys v0.26.0 // indirect
golang.org/x/term v0.25.0 // indirect
golang.org/x/text v0.19.0 // indirect
golang.org/x/sync v0.10.0 // indirect
golang.org/x/sys v0.28.0 // indirect
golang.org/x/term v0.27.0 // indirect
golang.org/x/text v0.21.0 // indirect
golang.org/x/time v0.7.0 // indirect
golang.org/x/tools v0.26.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect

go.sum

@@ -26,8 +26,8 @@ github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkp
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g=
github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/evanphx/json-patch v0.5.2 h1:xVCHIVMUu1wtM/VkR9jVZ45N3FhZfYMMYGorLCR8P3k=
github.com/evanphx/json-patch v0.5.2/go.mod h1:ZWS5hhDbVDyob71nXKNL0+PWn6ToqBHMikGIFbs31qQ=
github.com/evanphx/json-patch v4.12.0+incompatible h1:4onqiflcdA9EOZ4RxV643DvftH5pOlLGNtQ5lPWQu84=
github.com/evanphx/json-patch v4.12.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch/v5 v5.9.0 h1:kcBlZQbplgElYIlo/n1hJbls2z/1awpXxpRi0/FOJfg=
github.com/evanphx/json-patch/v5 v5.9.0/go.mod h1:VNkHZ/282BpEyt/tObQO8s5CMPmYYq14uClGH4abBuQ=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
@@ -212,8 +212,8 @@ go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.28.0 h1:GBDwsMXVQi34v5CCYUm2jkJvu4cbtru2U4TN2PSyQnw=
golang.org/x/crypto v0.28.0/go.mod h1:rmgy+3RHxRZMyY0jjAJShp2zgEdOqj2AO7U0pYmeQ7U=
golang.org/x/crypto v0.31.0 h1:ihbySMvVjLAeSH1IbfcRTkD/iNscyz8rGzjF/E5hV6U=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 h1:2dVuKD2vS7b0QIHQbpyTISPd0LeHDbnYEryqj5Q1ug8=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56/go.mod h1:M4RDyNAINzryxdtnbRXRL/OHtkFuWGRjvuhBJpk2IlY=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
@@ -222,26 +222,26 @@ golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.30.0 h1:AcW1SDZMkb8IpzCdQUaIq2sP4sZ4zw+55h6ynffypl4=
golang.org/x/net v0.30.0/go.mod h1:2wGyMJ5iFasEhkwi13ChkO/t1ECNC4X4eBKkVFyYFlU=
golang.org/x/net v0.33.0 h1:74SYHlV8BIgHIFC/LrYkOGIwL19eTYXQ5wc6TBuO36I=
golang.org/x/net v0.33.0/go.mod h1:HXLR5J+9DxmrqMwG9qjGCxZ+zKXxBru04zlTvWlWuN4=
golang.org/x/oauth2 v0.23.0 h1:PbgcYx2W7i4LvjJWEbf0ngHV6qJYr86PkAV3bXdLEbs=
golang.org/x/oauth2 v0.23.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.8.0 h1:3NFvSEYkUoMifnESzZl15y791HH1qU2xm6eCJU5ZPXQ=
golang.org/x/sync v0.8.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.26.0 h1:KHjCJyddX0LoSTb3J+vWpupP9p0oznkqVk/IfjymZbo=
golang.org/x/sys v0.26.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.25.0 h1:WtHI/ltw4NvSUig5KARz9h521QvRC8RmF/cuYqifU24=
golang.org/x/term v0.25.0/go.mod h1:RPyXicDX+6vLxogjjRxjgD2TKtmAO6NZBsBRfrOLu7M=
golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.27.0 h1:WP60Sv1nlK1T6SupCHbXzSaN0b9wUmsPoRS9b61A23Q=
golang.org/x/term v0.27.0/go.mod h1:iMsnZpn0cago0GOrHO2+Y7u7JPn5AylBrcoWkElMTSM=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.19.0 h1:kTxAhCbGbxhK0IwgSKiMO5awPoDQ0RpfiVYBfK860YM=
golang.org/x/text v0.19.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY=
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/time v0.7.0 h1:ntUhktv3OPE6TgYxXWv9vKvUSJyIFJlyohwbkEwPrKQ=
golang.org/x/time v0.7.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=


@@ -0,0 +1,32 @@
#!/bin/bash
set -e
name="$1"
url="$2"
if [ -z "$name" ] || [ -z "$url" ]; then
echo "Usage: <name> <url>"
echo "Example: 'ubuntu' 'https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img'"
exit 1
fi
#### create DV ubuntu source for CDI image cloning
kubectl create -f - <<EOF
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
name: "vm-image-$name"
namespace: cozy-public
annotations:
cdi.kubevirt.io/storage.bind.immediate.requested: "true"
spec:
source:
http:
url: "$url"
storage:
resources:
requests:
storage: 5Gi
storageClassName: replicated
EOF
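A usage sketch for the helper above (the script path is an assumption, since the file name is not shown in this view; the arguments follow the script's own usage message):

```bash
# Assumed location of the script shown above; creates DataVolume "vm-image-ubuntu" in cozy-public.
./hack/create-vm-image.sh ubuntu \
  'https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img'
```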

hack/collect-images.sh

@@ -0,0 +1,8 @@
#!/bin/sh
for node in 11 12 13; do
talosctl -n 192.168.123.${node} -e 192.168.123.${node} images ls >> images.tmp
talosctl -n 192.168.123.${node} -e 192.168.123.${node} images --namespace system ls >> images.tmp
done
while read _ name sha _ ; do echo $sha $name ; done < images.tmp | sort -u > images.txt

hack/cozyreport.sh

@@ -0,0 +1,147 @@
#!/bin/sh
REPORT_DATE=$(date +%Y-%m-%d_%H-%M-%S)
REPORT_NAME=${1:-cozyreport-$REPORT_DATE}
REPORT_PDIR=$(mktemp -d)
REPORT_DIR=$REPORT_PDIR/$REPORT_NAME
# -- check dependencies
command -V kubectl >/dev/null || exit $?
command -V tar >/dev/null || exit $?
# -- cozystack module
echo "Collecting Cozystack information..."
mkdir -p $REPORT_DIR/cozystack
kubectl get deploy -n cozy-system cozystack -o jsonpath='{.spec.template.spec.containers[0].image}' > $REPORT_DIR/cozystack/image.txt 2>&1
kubectl get cm -n cozy-system --no-headers | awk '$1 ~ /^cozystack/' |
while read NAME _; do
DIR=$REPORT_DIR/cozystack/configs
mkdir -p $DIR
kubectl get cm -n cozy-system $NAME -o yaml > $DIR/$NAME.yaml 2>&1
done
# -- kubernetes module
echo "Collecting Kubernetes information..."
mkdir -p $REPORT_DIR/kubernetes
kubectl version > $REPORT_DIR/kubernetes/version.txt 2>&1
echo "Collecting nodes..."
kubectl get nodes -o wide > $REPORT_DIR/kubernetes/nodes.txt 2>&1
kubectl get nodes --no-headers | awk '$2 != "Ready"' |
while read NAME _; do
DIR=$REPORT_DIR/kubernetes/nodes/$NAME
mkdir -p $DIR
kubectl get node $NAME -o yaml > $DIR/node.yaml 2>&1
kubectl describe node $NAME > $DIR/describe.txt 2>&1
done
echo "Collecting namespaces..."
kubectl get ns -o wide > $REPORT_DIR/kubernetes/namespaces.txt 2>&1
kubectl get ns --no-headers | awk '$2 != "Active"' |
while read NAME _; do
DIR=$REPORT_DIR/kubernetes/namespaces/$NAME
mkdir -p $DIR
kubectl get ns $NAME -o yaml > $DIR/namespace.yaml 2>&1
kubectl describe ns $NAME > $DIR/describe.txt 2>&1
done
echo "Collecting helmreleases..."
kubectl get hr -A > $REPORT_DIR/kubernetes/helmreleases.txt 2>&1
kubectl get hr -A --no-headers | awk '$4 != "True"' | \
while read NAMESPACE NAME _; do
DIR=$REPORT_DIR/kubernetes/helmreleases/$NAMESPACE/$NAME
mkdir -p $DIR
kubectl get hr -n $NAMESPACE $NAME -o yaml > $DIR/hr.yaml 2>&1
kubectl describe hr -n $NAMESPACE $NAME > $DIR/describe.txt 2>&1
done
echo "Collecting pods..."
kubectl get pod -A -o wide > $REPORT_DIR/kubernetes/pods.txt 2>&1
kubectl get pod -A --no-headers | awk '$4 !~ /Running|Succeeded|Completed/' |
while read NAMESPACE NAME _ STATE _; do
DIR=$REPORT_DIR/kubernetes/pods/$NAMESPACE/$NAME
mkdir -p $DIR
CONTAINERS=$(kubectl get pod -o jsonpath='{.spec.containers[*].name}' -n $NAMESPACE $NAME)
kubectl get pod -n $NAMESPACE $NAME -o yaml > $DIR/pod.yaml 2>&1
kubectl describe pod -n $NAMESPACE $NAME > $DIR/describe.txt 2>&1
if [ "$STATE" != "Pending" ]; then
for CONTAINER in $CONTAINERS; do
kubectl logs -n $NAMESPACE $NAME $CONTAINER > $DIR/logs-$CONTAINER.txt 2>&1
kubectl logs -n $NAMESPACE $NAME $CONTAINER --previous > $DIR/logs-$CONTAINER-previous.txt 2>&1
done
fi
done
echo "Collecting virtualmachines..."
kubectl get vm -A > $REPORT_DIR/kubernetes/vms.txt 2>&1
kubectl get vm -A --no-headers | awk '$5 != "True"' |
while read NAMESPACE NAME _; do
DIR=$REPORT_DIR/kubernetes/vm/$NAMESPACE/$NAME
mkdir -p $DIR
kubectl get vm -n $NAMESPACE $NAME -o yaml > $DIR/vm.yaml 2>&1
kubectl describe vm -n $NAMESPACE $NAME > $DIR/describe.txt 2>&1
done
echo "Collecting virtualmachine instances..."
kubectl get vmi -A > $REPORT_DIR/kubernetes/vmis.txt 2>&1
kubectl get vmi -A --no-headers | awk '$4 != "Running"' |
while read NAMESPACE NAME _; do
DIR=$REPORT_DIR/kubernetes/vmi/$NAMESPACE/$NAME
mkdir -p $DIR
kubectl get vmi -n $NAMESPACE $NAME -o yaml > $DIR/vmi.yaml 2>&1
kubectl describe vmi -n $NAMESPACE $NAME > $DIR/describe.txt 2>&1
done
echo "Collecting services..."
kubectl get svc -A > $REPORT_DIR/kubernetes/services.txt 2>&1
kubectl get svc -A --no-headers | awk '$4 == "<pending>"' |
while read NAMESPACE NAME _; do
DIR=$REPORT_DIR/kubernetes/services/$NAMESPACE/$NAME
mkdir -p $DIR
kubectl get svc -n $NAMESPACE $NAME -o yaml > $DIR/service.yaml 2>&1
kubectl describe svc -n $NAMESPACE $NAME > $DIR/describe.txt 2>&1
done
echo "Collecting pvcs..."
kubectl get pvc -A > $REPORT_DIR/kubernetes/pvcs.txt 2>&1
kubectl get pvc -A --no-headers | awk '$3 != "Bound"' |
while read NAMESPACE NAME _; do
DIR=$REPORT_DIR/kubernetes/pvc/$NAMESPACE/$NAME
mkdir -p $DIR
kubectl get pvc -n $NAMESPACE $NAME -o yaml > $DIR/pvc.yaml 2>&1
kubectl describe pvc -n $NAMESPACE $NAME > $DIR/describe.txt 2>&1
done
# -- kamaji module
if kubectl get deploy -n cozy-kamaji kamaji >/dev/null 2>&1; then
echo "Collecting kamaji resources..."
DIR=$REPORT_DIR/kamaji
mkdir -p $DIR
kubectl logs -n cozy-kamaji deployment/kamaji > $DIR/kamaji-controller.log 2>&1
kubectl get kamajicontrolplanes.controlplane.cluster.x-k8s.io -A > $DIR/kamajicontrolplanes.txt 2>&1
kubectl get kamajicontrolplanes.controlplane.cluster.x-k8s.io -A -o yaml > $DIR/kamajicontrolplanes.yaml 2>&1
kubectl get tenantcontrolplanes.kamaji.clastix.io -A > $DIR/tenantcontrolplanes.txt 2>&1
kubectl get tenantcontrolplanes.kamaji.clastix.io -A -o yaml > $DIR/tenantcontrolplanes.yaml 2>&1
fi
# -- linstor module
if kubectl get deploy -n cozy-linstor linstor-controller >/dev/null 2>&1; then
echo "Collecting linstor resources..."
DIR=$REPORT_DIR/linstor
mkdir -p $DIR
kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor --no-color n l > $DIR/nodes.txt 2>&1
kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor --no-color sp l > $DIR/storage-pools.txt 2>&1
kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor --no-color r l > $DIR/resources.txt 2>&1
fi
# -- finalization
echo "Creating archive..."
tar -czf $REPORT_NAME.tgz -C $REPORT_PDIR .
echo "Report created: $REPORT_NAME.tgz"
echo "Cleaning up..."
rm -rf $REPORT_PDIR
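A usage sketch for the report tool shown above, run against the current kubeconfig context:

```bash
# Collect a support bundle with the default, timestamped name (cozyreport-<date>.tgz).
./hack/cozyreport.sh

# Or pass an explicit report name, producing my-report.tgz.
./hack/cozyreport.sh my-report
```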


@@ -1,94 +0,0 @@
#!/usr/bin/env bats
# -----------------------------------------------------------------------------
# Cozystack end-to-end provisioning test (Bats)
# -----------------------------------------------------------------------------
@test "Create tenant with isolated mode enabled" {
kubectl create -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Tenant
metadata:
name: test
namespace: tenant-root
spec:
etcd: false
host: ""
ingress: false
isolated: true
monitoring: false
resourceQuotas: {}
seaweedfs: false
EOF
kubectl wait hr/tenant-test -n tenant-root --timeout=1m --for=condition=ready
kubectl wait namespace tenant-test --timeout=20s --for=jsonpath='{.status.phase}'=Active
}
@test "Create a tenant Kubernetes control plane" {
kubectl create -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Kubernetes
metadata:
name: test
namespace: tenant-test
spec:
addons:
certManager:
enabled: false
valuesOverride: {}
cilium:
valuesOverride: {}
fluxcd:
enabled: false
valuesOverride: {}
gatewayAPI:
enabled: false
gpuOperator:
enabled: false
valuesOverride: {}
ingressNginx:
enabled: true
hosts: []
valuesOverride: {}
monitoringAgents:
enabled: false
valuesOverride: {}
verticalPodAutoscaler:
valuesOverride: {}
controlPlane:
apiServer:
resources: {}
resourcesPreset: small
controllerManager:
resources: {}
resourcesPreset: micro
konnectivity:
server:
resources: {}
resourcesPreset: micro
replicas: 2
scheduler:
resources: {}
resourcesPreset: micro
host: ""
nodeGroups:
md0:
ephemeralStorage: 20Gi
gpus: []
instanceType: u1.medium
maxReplicas: 10
minReplicas: 0
resources:
cpu: ""
memory: ""
roles:
- ingress-nginx
storageClass: replicated
EOF
kubectl wait namespace tenant-test --timeout=20s --for=jsonpath='{.status.phase}'=Active
timeout 10 sh -ec 'until kubectl get kamajicontrolplane -n tenant-test kubernetes-test; do sleep 1; done'
kubectl wait --for=condition=TenantControlPlaneCreated kamajicontrolplane -n tenant-test kubernetes-test --timeout=4m
kubectl wait tcp -n tenant-test kubernetes-test --timeout=2m --for=jsonpath='{.status.kubernetesResources.version.status}'=Ready
kubectl wait deploy --timeout=4m --for=condition=available -n tenant-test kubernetes-test kubernetes-test-cluster-autoscaler kubernetes-test-kccm kubernetes-test-kcsi-controller
kubectl wait machinedeployment kubernetes-test-md0 -n tenant-test --timeout=1m --for=jsonpath='{.status.replicas}'=2
kubectl wait machinedeployment kubernetes-test-md0 -n tenant-test --timeout=5m --for=jsonpath='{.status.v1beta2.readyReplicas}'=2
}


@@ -0,0 +1,41 @@
#!/usr/bin/env bats
@test "Create DB ClickHouse" {
name='test'
kubectl apply -f- <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: ClickHouse
metadata:
name: $name
namespace: tenant-test
spec:
size: 10Gi
logStorageSize: 2Gi
shards: 1
replicas: 2
storageClass: ""
logTTL: 15
users:
testuser:
password: xai7Wepo
backup:
enabled: false
s3Region: us-east-1
s3Bucket: s3.example.org/clickhouse-backups
schedule: "0 2 * * *"
cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
resources: {}
resourcesPreset: "nano"
EOF
sleep 5
kubectl -n tenant-test wait hr clickhouse-$name --timeout=20s --for=condition=ready
timeout 180 sh -ec "until kubectl -n tenant-test get svc chendpoint-clickhouse-$name -o jsonpath='{.spec.ports[*].port}' | grep -q '8123 9000'; do sleep 10; done"
kubectl -n tenant-test wait statefulset.apps/chi-clickhouse-$name-clickhouse-0-0 --timeout=120s --for=jsonpath='{.status.replicas}'=1
timeout 80 sh -ec "until kubectl -n tenant-test get endpoints chi-clickhouse-$name-clickhouse-0-0 -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
timeout 100 sh -ec "until kubectl -n tenant-test get svc chi-clickhouse-$name-clickhouse-0-0 -o jsonpath='{.spec.ports[*].port}' | grep -q '9000 8123 9009'; do sleep 10; done"
timeout 80 sh -ec "until kubectl -n tenant-test get sts chi-clickhouse-$name-clickhouse-0-1 ; do sleep 10; done"
kubectl -n tenant-test wait statefulset.apps/chi-clickhouse-$name-clickhouse-0-1 --timeout=140s --for=jsonpath='{.status.replicas}'=1
}

hack/e2e-apps/kafka.bats

@@ -0,0 +1,51 @@
#!/usr/bin/env bats
@test "Create Kafka" {
name='test'
kubectl apply -f- <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Kafka
metadata:
name: $name
namespace: tenant-test
spec:
external: false
kafka:
size: 10Gi
replicas: 2
storageClass: ""
resources: {}
resourcesPreset: "nano"
zookeeper:
size: 5Gi
replicas: 2
storageClass: ""
resources:
resourcesPreset: "nano"
topics:
- name: testResults
partitions: 1
replicas: 2
config:
min.insync.replicas: 2
- name: testOrders
config:
cleanup.policy: compact
segment.ms: 3600000
max.compaction.lag.ms: 5400000
min.insync.replicas: 2
partitions: 1
replicas: 2
EOF
sleep 5
kubectl -n tenant-test wait hr kafka-$name --timeout=30s --for=condition=ready
kubectl wait kafkas -n tenant-test test --timeout=60s --for=condition=ready
timeout 60 sh -ec "until kubectl -n tenant-test get pvc data-kafka-$name-zookeeper-0; do sleep 10; done"
kubectl -n tenant-test wait pvc data-kafka-$name-zookeeper-0 --timeout=50s --for=jsonpath='{.status.phase}'=Bound
timeout 40 sh -ec "until kubectl -n tenant-test get svc kafka-$name-zookeeper-client -o jsonpath='{.spec.ports[0].port}' | grep -q '2181'; do sleep 10; done"
timeout 40 sh -ec "until kubectl -n tenant-test get svc kafka-$name-zookeeper-nodes -o jsonpath='{.spec.ports[*].port}' | grep -q '2181 2888 3888'; do sleep 10; done"
timeout 80 sh -ec "until kubectl -n tenant-test get endpoints kafka-$name-zookeeper-nodes -o jsonpath='{.subsets[*].addresses[0].ip}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test delete kafka.apps.cozystack.io $name
kubectl -n tenant-test delete pvc data-kafka-$name-zookeeper-0
kubectl -n tenant-test delete pvc data-kafka-$name-zookeeper-1
}


@@ -0,0 +1,113 @@
#!/usr/bin/env bats
run_kubernetes_test() {
local version_expr="$1"
local test_name="$2"
local port="$3"
local k8s_version=$(yq "$version_expr" packages/apps/kubernetes/files/versions.yaml)
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Kubernetes
metadata:
name: "${test_name}"
namespace: tenant-test
spec:
addons:
certManager:
enabled: false
valuesOverride: {}
cilium:
valuesOverride: {}
fluxcd:
enabled: false
valuesOverride: {}
gatewayAPI:
enabled: false
gpuOperator:
enabled: false
valuesOverride: {}
ingressNginx:
enabled: true
hosts: []
valuesOverride: {}
monitoringAgents:
enabled: false
valuesOverride: {}
verticalPodAutoscaler:
valuesOverride: {}
controlPlane:
apiServer:
resources: {}
resourcesPreset: small
controllerManager:
resources: {}
resourcesPreset: micro
konnectivity:
server:
resources: {}
resourcesPreset: micro
replicas: 2
scheduler:
resources: {}
resourcesPreset: micro
host: ""
nodeGroups:
md0:
ephemeralStorage: 20Gi
gpus: []
instanceType: u1.medium
maxReplicas: 10
minReplicas: 0
resources:
cpu: ""
memory: ""
roles:
- ingress-nginx
storageClass: replicated
version: "${k8s_version}"
EOF
# Wait for the tenant-test namespace to be active
kubectl wait namespace tenant-test --timeout=20s --for=jsonpath='{.status.phase}'=Active
# Wait for the Kamaji control plane to be created (retry for up to 10 seconds)
timeout 10 sh -ec 'until kubectl get kamajicontrolplane -n tenant-test kubernetes-'"${test_name}"'; do sleep 1; done'
# Wait for the tenant control plane to be fully created (timeout after 4 minutes)
kubectl wait --for=condition=TenantControlPlaneCreated kamajicontrolplane -n tenant-test kubernetes-${test_name} --timeout=4m
# Wait for Kubernetes resources to be ready (timeout after 2 minutes)
kubectl wait tcp -n tenant-test kubernetes-${test_name} --timeout=2m --for=jsonpath='{.status.kubernetesResources.version.status}'=Ready
# Wait for all required deployments to be available (timeout after 4 minutes)
kubectl wait deploy --timeout=4m --for=condition=available -n tenant-test kubernetes-${test_name} kubernetes-${test_name}-cluster-autoscaler kubernetes-${test_name}-kccm kubernetes-${test_name}-kcsi-controller
# Wait for the machine deployment to scale to 2 replicas (timeout after 1 minute)
kubectl wait machinedeployment kubernetes-${test_name}-md0 -n tenant-test --timeout=1m --for=jsonpath='{.status.replicas}'=2
# Get the admin kubeconfig and save it to a file
kubectl get secret kubernetes-${test_name}-admin-kubeconfig -ojsonpath='{.data.super-admin\.conf}' -n tenant-test | base64 -d > tenantkubeconfig
# Update the kubeconfig to use localhost for the API server
yq -i ".clusters[0].cluster.server = \"https://localhost:${port}\"" tenantkubeconfig
# Set up port forwarding to the Kubernetes API server for a 40 second timeout
bash -c 'timeout 40s kubectl port-forward service/kubernetes-'"${test_name}"' -n tenant-test '"${port}"':6443 > /dev/null 2>&1 &'
# Verify the Kubernetes version matches what we expect (retry for up to 20 seconds)
timeout 20 sh -ec "until kubectl --kubeconfig tenantkubeconfig version 2>/dev/null | grep -Fq 'Server Version: ${k8s_version}'; do sleep 5; done"
# Wait for all machine deployment replicas to be ready (timeout after 10 minutes)
kubectl wait machinedeployment kubernetes-${test_name}-md0 -n tenant-test --timeout=10m --for=jsonpath='{.status.v1beta2.readyReplicas}'=2
# Clean up by deleting the Kubernetes resource
kubectl -n tenant-test delete kuberneteses.apps.cozystack.io $test_name
}
@test "Create a tenant Kubernetes control plane with latest version" {
run_kubernetes_test 'keys | sort_by(.) | .[-1]' 'test-latest-version' '59991'
}
@test "Create a tenant Kubernetes control plane with previous version" {
run_kubernetes_test 'keys | sort_by(.) | .[-2]' 'test-previous-version' '59992'
}

hack/e2e-apps/mysql.bats

@@ -0,0 +1,46 @@
#!/usr/bin/env bats
@test "Create DB MySQL" {
name='test'
kubectl apply -f- <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: MySQL
metadata:
name: $name
namespace: tenant-test
spec:
external: false
size: 10Gi
replicas: 2
storageClass: ""
users:
testuser:
maxUserConnections: 1000
password: xai7Wepo
databases:
testdb:
roles:
admin:
- testuser
backup:
enabled: false
s3Region: us-east-1
s3Bucket: s3.example.org/postgres-backups
schedule: "0 2 * * *"
cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
resources: {}
resourcesPreset: "nano"
EOF
sleep 5
kubectl -n tenant-test wait hr mysql-$name --timeout=30s --for=condition=ready
timeout 80 sh -ec "until kubectl -n tenant-test get svc mysql-$name -o jsonpath='{.spec.ports[0].port}' | grep -q '3306'; do sleep 10; done"
timeout 80 sh -ec "until kubectl -n tenant-test get endpoints mysql-$name -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test wait statefulset.apps/mysql-$name --timeout=110s --for=jsonpath='{.status.replicas}'=2
timeout 80 sh -ec "until kubectl -n tenant-test get svc mysql-$name-metrics -o jsonpath='{.spec.ports[0].port}' | grep -q '9104'; do sleep 10; done"
timeout 40 sh -ec "until kubectl -n tenant-test get endpoints mysql-$name-metrics -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test wait deployment.apps/mysql-$name-metrics --timeout=90s --for=jsonpath='{.status.replicas}'=1
kubectl -n tenant-test delete mysqls.apps.cozystack.io $name
}


@@ -0,0 +1,54 @@
#!/usr/bin/env bats
@test "Create DB PostgreSQL" {
name='test'
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Postgres
metadata:
name: $name
namespace: tenant-test
spec:
external: false
size: 10Gi
replicas: 2
storageClass: ""
postgresql:
parameters:
max_connections: 100
quorum:
minSyncReplicas: 0
maxSyncReplicas: 0
users:
testuser:
password: xai7Wepo
databases:
testdb:
roles:
admin:
- testuser
backup:
enabled: false
s3Region: us-east-1
s3Bucket: s3.example.org/postgres-backups
schedule: "0 2 * * *"
cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
resources: {}
resourcesPreset: "nano"
EOF
sleep 5
kubectl -n tenant-test wait hr postgres-$name --timeout=100s --for=condition=ready
kubectl -n tenant-test wait job.batch postgres-$name-init-job --timeout=50s --for=condition=Complete
timeout 40 sh -ec "until kubectl -n tenant-test get svc postgres-$name-r -o jsonpath='{.spec.ports[0].port}' | grep -q '5432'; do sleep 10; done"
timeout 40 sh -ec "until kubectl -n tenant-test get svc postgres-$name-ro -o jsonpath='{.spec.ports[0].port}' | grep -q '5432'; do sleep 10; done"
timeout 40 sh -ec "until kubectl -n tenant-test get svc postgres-$name-rw -o jsonpath='{.spec.ports[0].port}' | grep -q '5432'; do sleep 10; done"
timeout 120 sh -ec "until kubectl -n tenant-test get endpoints postgres-$name-r -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
# for some reason it takes longer for the read-only endpoint to be ready
#timeout 120 sh -ec "until kubectl -n tenant-test get endpoints postgres-$name-ro -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
timeout 120 sh -ec "until kubectl -n tenant-test get endpoints postgres-$name-rw -o jsonpath='{.subsets[*].addresses[*].ip}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test delete postgreses.apps.cozystack.io $name
kubectl -n tenant-test delete job.batch/postgres-$name-init-job
}

hack/e2e-apps/redis.bats

@@ -0,0 +1,26 @@
#!/usr/bin/env bats
@test "Create Redis" {
name='test'
kubectl apply -f- <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Redis
metadata:
name: $name
namespace: tenant-test
spec:
external: false
size: 1Gi
replicas: 2
storageClass: ""
authEnabled: true
resources: {}
resourcesPreset: "nano"
EOF
sleep 5
kubectl -n tenant-test wait hr redis-$name --timeout=20s --for=condition=ready
kubectl -n tenant-test wait pvc redisfailover-persistent-data-rfr-redis-$name-0 --timeout=50s --for=jsonpath='{.status.phase}'=Bound
kubectl -n tenant-test wait deploy rfs-redis-$name --timeout=90s --for=condition=available
kubectl -n tenant-test wait sts rfr-redis-$name --timeout=90s --for=jsonpath='{.status.replicas}'=2
kubectl -n tenant-test delete redis.apps.cozystack.io $name
}


@@ -0,0 +1,47 @@
#!/usr/bin/env bats
@test "Create a Virtual Machine" {
name='test'
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: VirtualMachine
metadata:
name: $name
namespace: tenant-test
spec:
external: false
externalMethod: PortList
externalPorts:
- 22
instanceType: "u1.medium"
instanceProfile: ubuntu
systemDisk:
image: ubuntu
storage: 5Gi
storageClass: replicated
gpus: []
resources:
cpu: ""
memory: ""
sshKeys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF test@test
cloudInit: |
#cloud-config
users:
- name: test
shell: /bin/bash
sudo: ['ALL=(ALL) NOPASSWD: ALL']
groups: sudo
ssh_authorized_keys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF test@test
cloudInitSeed: ""
EOF
sleep 5
kubectl -n tenant-test wait hr virtual-machine-$name --timeout=10s --for=condition=ready
kubectl -n tenant-test wait dv virtual-machine-$name --timeout=150s --for=condition=ready
kubectl -n tenant-test wait pvc virtual-machine-$name --timeout=100s --for=jsonpath='{.status.phase}'=Bound
kubectl -n tenant-test wait vm virtual-machine-$name --timeout=100s --for=condition=ready
timeout 120 sh -ec "until kubectl -n tenant-test get vmi virtual-machine-$name -o jsonpath='{.status.interfaces[0].ipAddress}' | grep -q '[0-9]'; do sleep 10; done"
kubectl -n tenant-test delete virtualmachines.apps.cozystack.io $name
}


@@ -0,0 +1,68 @@
#!/usr/bin/env bats
@test "Create a VM Disk" {
name='test'
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: VMDisk
metadata:
name: $name
namespace: tenant-test
spec:
source:
http:
url: https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
optical: false
storage: 5Gi
storageClass: replicated
EOF
sleep 5
kubectl -n tenant-test wait hr vm-disk-$name --timeout=5s --for=condition=ready
kubectl -n tenant-test wait dv vm-disk-$name --timeout=150s --for=condition=ready
kubectl -n tenant-test wait pvc vm-disk-$name --timeout=100s --for=jsonpath='{.status.phase}'=Bound
}
@test "Create a VM Instance" {
diskName='test'
name='test'
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: VMInstance
metadata:
name: $name
namespace: tenant-test
spec:
external: false
externalMethod: PortList
externalPorts:
- 22
running: true
instanceType: "u1.medium"
instanceProfile: ubuntu
disks:
- name: $diskName
gpus: []
resources:
cpu: ""
memory: ""
sshKeys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF test@test
cloudInit: |
#cloud-config
users:
- name: test
shell: /bin/bash
sudo: ['ALL=(ALL) NOPASSWD: ALL']
groups: sudo
ssh_authorized_keys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPht0dPk5qQ+54g1hSX7A6AUxXJW5T6n/3d7Ga2F8gTF test@test
cloudInitSeed: ""
EOF
sleep 5
timeout 20 sh -ec "until kubectl -n tenant-test get vmi vm-instance-$name -o jsonpath='{.status.interfaces[0].ipAddress}' | grep -q '[0-9]'; do sleep 5; done"
kubectl -n tenant-test wait hr vm-instance-$name --timeout=5s --for=condition=ready
kubectl -n tenant-test wait vm vm-instance-$name --timeout=20s --for=condition=ready
kubectl -n tenant-test delete vminstances.apps.cozystack.io $name
kubectl -n tenant-test delete vmdisks.apps.cozystack.io $diskName
}

hack/e2e-cluster.bats → hack/e2e-install-cozystack.bats

@@ -1,237 +1,10 @@
#!/usr/bin/env bats
# -----------------------------------------------------------------------------
# Cozystack end-to-end provisioning test (Bats)
# -----------------------------------------------------------------------------
@test "Required installer assets exist" {
if [ ! -f _out/assets/cozystack-installer.yaml ]; then
echo "Missing: _out/assets/cozystack-installer.yaml" >&2
exit 1
fi
if [ ! -f _out/assets/nocloud-amd64.raw.xz ]; then
echo "Missing: _out/assets/nocloud-amd64.raw.xz" >&2
exit 1
fi
}
@test "IPv4 forwarding is enabled" {
if [ "$(cat /proc/sys/net/ipv4/ip_forward)" != 1 ]; then
echo "IPv4 forwarding is disabled!" >&2
echo >&2
echo "Enable it with:" >&2
echo " echo 1 > /proc/sys/net/ipv4/ip_forward" >&2
exit 1
fi
}
@test "Clean previous VMs" {
kill $(cat srv1/qemu.pid srv2/qemu.pid srv3/qemu.pid 2>/dev/null) 2>/dev/null || true
rm -rf srv1 srv2 srv3
}
@test "Prepare networking and masquerading" {
ip link del cozy-br0 2>/dev/null || true
ip link add cozy-br0 type bridge
ip link set cozy-br0 up
ip address add 192.168.123.1/24 dev cozy-br0
# Make the masquerading rule idempotent (delete first, then add)
iptables -t nat -D POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE 2>/dev/null || true
iptables -t nat -A POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE
}
@test "Prepare cloudinit drive for VMs" {
mkdir -p srv1 srv2 srv3
# Generate cloudinit ISOs
for i in 1 2 3; do
echo "hostname: srv${i}" > "srv${i}/meta-data"
cat > "srv${i}/user-data" <<'EOF'
#cloud-config
EOF
cat > "srv${i}/network-config" <<EOF
version: 2
ethernets:
eth0:
dhcp4: false
addresses:
- "192.168.123.1${i}/26"
gateway4: "192.168.123.1"
nameservers:
search: [cluster.local]
addresses: [8.8.8.8]
EOF
( cd "srv${i}" && genisoimage \
-output seed.img \
-volid cidata -rational-rock -joliet \
user-data meta-data network-config )
done
}
@test "Use Talos NoCloud image from assets" {
if [ ! -f _out/assets/nocloud-amd64.raw.xz ]; then
echo "Missing _out/assets/nocloud-amd64.raw.xz" 2>&1
exit 1
fi
rm -f nocloud-amd64.raw
cp _out/assets/nocloud-amd64.raw.xz .
xz --decompress nocloud-amd64.raw.xz
}
@test "Prepare VM disks" {
for i in 1 2 3; do
cp nocloud-amd64.raw srv${i}/system.img
qemu-img resize srv${i}/system.img 50G
qemu-img create srv${i}/data.img 100G
done
}
@test "Create tap devices" {
for i in 1 2 3; do
ip link del cozy-srv${i} 2>/dev/null || true
ip tuntap add dev cozy-srv${i} mode tap
ip link set cozy-srv${i} up
ip link set cozy-srv${i} master cozy-br0
done
}
@test "Boot QEMU VMs" {
for i in 1 2 3; do
qemu-system-x86_64 -machine type=pc,accel=kvm -cpu host -smp 8 -m 16384 \
-device virtio-net,netdev=net0,mac=52:54:00:12:34:5${i} \
-netdev tap,id=net0,ifname=cozy-srv${i},script=no,downscript=no \
-drive file=srv${i}/system.img,if=virtio,format=raw \
-drive file=srv${i}/seed.img,if=virtio,format=raw \
-drive file=srv${i}/data.img,if=virtio,format=raw \
-display none -daemonize -pidfile srv${i}/qemu.pid
done
# Give qemu a few seconds to start up networking
sleep 5
}
@test "Wait until Talos API port 50000 is reachable on all machines" {
timeout 60 sh -ec 'until nc -nz 192.168.123.11 50000 && nc -nz 192.168.123.12 50000 && nc -nz 192.168.123.13 50000; do sleep 1; done'
}
@test "Generate Talos cluster configuration" {
# Cluster-wide patches
cat > patch.yaml <<'EOF'
machine:
kubelet:
nodeIP:
validSubnets:
- 192.168.123.0/24
extraConfig:
maxPods: 512
kernel:
modules:
- name: openvswitch
- name: drbd
parameters:
- usermode_helper=disabled
- name: zfs
- name: spl
registries:
mirrors:
docker.io:
endpoints:
- https://mirror.gcr.io
files:
- content: |
[plugins]
[plugins."io.containerd.cri.v1.runtime"]
device_ownership_from_security_context = true
path: /etc/cri/conf.d/20-customization.part
op: create
cluster:
apiServer:
extraArgs:
oidc-issuer-url: "https://keycloak.example.org/realms/cozy"
oidc-client-id: "kubernetes"
oidc-username-claim: "preferred_username"
oidc-groups-claim: "groups"
network:
cni:
name: none
dnsDomain: cozy.local
podSubnets:
- 10.244.0.0/16
serviceSubnets:
- 10.96.0.0/16
EOF
# Control-plane-only patches
cat > patch-controlplane.yaml <<'EOF'
machine:
nodeLabels:
node.kubernetes.io/exclude-from-external-load-balancers:
$patch: delete
network:
interfaces:
- interface: eth0
vip:
ip: 192.168.123.10
cluster:
allowSchedulingOnControlPlanes: true
controllerManager:
extraArgs:
bind-address: 0.0.0.0
scheduler:
extraArgs:
bind-address: 0.0.0.0
apiServer:
certSANs:
- 127.0.0.1
proxy:
disabled: true
discovery:
enabled: false
etcd:
advertisedSubnets:
- 192.168.123.0/24
EOF
# Generate secrets once
if [ ! -f secrets.yaml ]; then
talosctl gen secrets
fi
rm -f controlplane.yaml worker.yaml talosconfig kubeconfig
talosctl gen config --with-secrets secrets.yaml cozystack https://192.168.123.10:6443 \
--config-patch=@patch.yaml --config-patch-control-plane @patch-controlplane.yaml
}
@test "Apply Talos configuration to the node" {
# Apply the configuration to all three nodes
for node in 11 12 13; do
talosctl apply -f controlplane.yaml -n 192.168.123.${node} -e 192.168.123.${node} -i
done
# Wait for Talos services to come up again
timeout 60 sh -ec 'until nc -nz 192.168.123.11 50000 && nc -nz 192.168.123.12 50000 && nc -nz 192.168.123.13 50000; do sleep 1; done'
}
@test "Bootstrap Talos cluster" {
# Bootstrap etcd on the first node
timeout 10 sh -ec 'until talosctl bootstrap -n 192.168.123.11 -e 192.168.123.11; do sleep 1; done'
# Wait until etcd is healthy
timeout 180 sh -ec 'until talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 >/dev/null 2>&1; do sleep 1; done'
timeout 60 sh -ec 'while talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 2>&1 | grep -q "rpc error"; do sleep 1; done'
# Retrieve kubeconfig
rm -f kubeconfig
talosctl kubeconfig kubeconfig -e 192.168.123.10 -n 192.168.123.10
# Wait until all three nodes register in Kubernetes
timeout 60 sh -ec 'until [ $(kubectl get node --no-headers | wc -l) -eq 3 ]; do sleep 1; done'
}
@test "Install Cozystack" {
@@ -254,14 +27,14 @@ EOF
kubectl wait deployment/cozystack -n cozy-system --timeout=1m --for=condition=Available
# Wait until HelmReleases appear & reconcile them
timeout 60 sh -ec 'until kubectl get hr -A | grep -q cozys; do sleep 1; done'
timeout 60 sh -ec 'until kubectl get hr -A -l cozystack.io/system-app=true | grep -q cozys; do sleep 1; done'
sleep 5
kubectl get hr -A | awk 'NR>1 {print "kubectl wait --timeout=15m --for=condition=ready -n "$1" hr/"$2" &"} END {print "wait"}' | sh -ex
kubectl get hr -A -l cozystack.io/system-app=true | awk 'NR>1 {print "kubectl wait --timeout=15m --for=condition=ready -n "$1" hr/"$2" &"} END {print "wait"}' | sh -ex
# Fail the test if any HelmRelease is not Ready
if kubectl get hr -A | grep -v " True " | grep -v NAME; then
kubectl get hr -A
fail "Some HelmReleases failed to reconcile"
echo "Some HelmReleases failed to reconcile" >&2
fi
}
@@ -276,7 +49,11 @@ EOF
kubectl wait deployment/linstor-controller -n cozy-linstor --timeout=5m --for=condition=available
timeout 60 sh -ec 'until [ $(kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor node list | grep -c Online) -eq 3 ]; do sleep 1; done'
created_pools=$(kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor sp l -s data --pastable | awk '$2 == "data" {printf " " $4} END{printf " "}')
for node in srv1 srv2 srv3; do
case $created_pools in
*" $node "*) echo "Storage pool 'data' already exists on node $node"; continue;;
esac
kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor ps cdp zfs ${node} /dev/vdc --pool-name data --storage-pool data
done
@@ -311,7 +88,7 @@ parameters:
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-no-data-accessible: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-suspended-primary-outdated: force-secondary
property.linstor.csi.linbit.com/DrbdOptions/Net/rr-conflict: retry-connect
volumeBindingMode: WaitForFirstConsumer
volumeBindingMode: Immediate
allowVolumeExpansion: true
EOF
}
@@ -389,3 +166,24 @@ EOF
timeout 120 sh -ec 'until kubectl get hr -n cozy-keycloak keycloak keycloak-configure keycloak-operator >/dev/null 2>&1; do sleep 1; done'
kubectl wait hr/keycloak hr/keycloak-configure hr/keycloak-operator -n cozy-keycloak --timeout=10m --for=condition=ready
}
@test "Create tenant with isolated mode enabled" {
kubectl -n tenant-root get tenants.apps.cozystack.io test ||
kubectl apply -f - <<EOF
apiVersion: apps.cozystack.io/v1alpha1
kind: Tenant
metadata:
name: test
namespace: tenant-root
spec:
etcd: false
host: ""
ingress: false
isolated: true
monitoring: false
resourceQuotas: {}
seaweedfs: false
EOF
kubectl wait hr/tenant-test -n tenant-root --timeout=1m --for=condition=ready
kubectl wait namespace tenant-test --timeout=20s --for=jsonpath='{.status.phase}'=Active
}


@@ -0,0 +1,248 @@
#!/usr/bin/env bats
# -----------------------------------------------------------------------------
# Cozystack end-to-end provisioning test (Bats)
# -----------------------------------------------------------------------------
@test "Required installer assets exist" {
if [ ! -f _out/assets/nocloud-amd64.raw.xz ]; then
echo "Missing: _out/assets/nocloud-amd64.raw.xz" >&2
exit 1
fi
}
@test "IPv4 forwarding is enabled" {
if [ "$(cat /proc/sys/net/ipv4/ip_forward)" != 1 ]; then
echo "IPv4 forwarding is disabled!" >&2
echo >&2
echo "Enable it with:" >&2
echo " echo 1 > /proc/sys/net/ipv4/ip_forward" >&2
exit 1
fi
}
@test "Clean previous VMs" {
kill $(cat srv1/qemu.pid srv2/qemu.pid srv3/qemu.pid 2>/dev/null) 2>/dev/null || true
rm -rf srv1 srv2 srv3
}
@test "Prepare networking and masquerading" {
ip link del cozy-br0 2>/dev/null || true
ip link add cozy-br0 type bridge
ip link set cozy-br0 up
ip address add 192.168.123.1/24 dev cozy-br0
# Make the masquerading rule idempotent (delete first, then add)
iptables -t nat -D POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE 2>/dev/null || true
iptables -t nat -A POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE
}
@test "Prepare cloudinit drive for VMs" {
mkdir -p srv1 srv2 srv3
# Generate cloudinit ISOs
for i in 1 2 3; do
echo "hostname: srv${i}" > "srv${i}/meta-data"
cat > "srv${i}/user-data" <<'EOF'
#cloud-config
EOF
cat > "srv${i}/network-config" <<EOF
version: 2
ethernets:
eth0:
dhcp4: false
addresses:
- "192.168.123.1${i}/26"
gateway4: "192.168.123.1"
nameservers:
search: [cluster.local]
addresses: [8.8.8.8]
EOF
( cd "srv${i}" && genisoimage \
-output seed.img \
-volid cidata -rational-rock -joliet \
user-data meta-data network-config )
done
}
@test "Use Talos NoCloud image from assets" {
if [ ! -f _out/assets/nocloud-amd64.raw.xz ]; then
echo "Missing _out/assets/nocloud-amd64.raw.xz" 2>&1
exit 1
fi
rm -f nocloud-amd64.raw
cp _out/assets/nocloud-amd64.raw.xz .
xz --decompress nocloud-amd64.raw.xz
}
@test "Prepare VM disks" {
for i in 1 2 3; do
cp nocloud-amd64.raw srv${i}/system.img
qemu-img resize srv${i}/system.img 50G
qemu-img create srv${i}/data.img 100G
done
}
@test "Create tap devices" {
for i in 1 2 3; do
ip link del cozy-srv${i} 2>/dev/null || true
ip tuntap add dev cozy-srv${i} mode tap
ip link set cozy-srv${i} up
ip link set cozy-srv${i} master cozy-br0
done
}
@test "Boot QEMU VMs" {
for i in 1 2 3; do
qemu-system-x86_64 -machine type=pc,accel=kvm -cpu host -smp 8 -m 24576 \
-device virtio-net,netdev=net0,mac=52:54:00:12:34:5${i} \
-netdev tap,id=net0,ifname=cozy-srv${i},script=no,downscript=no \
-drive file=srv${i}/system.img,if=virtio,format=raw \
-drive file=srv${i}/seed.img,if=virtio,format=raw \
-drive file=srv${i}/data.img,if=virtio,format=raw \
-display none -daemonize -pidfile srv${i}/qemu.pid
done
# Give qemu a few seconds to start up networking
sleep 5
}
@test "Wait until Talos API port 50000 is reachable on all machines" {
timeout 60 sh -ec 'until nc -nz 192.168.123.11 50000 && nc -nz 192.168.123.12 50000 && nc -nz 192.168.123.13 50000; do sleep 1; done'
}
@test "Generate Talos cluster configuration" {
# Cluster-wide patches
cat > patch.yaml <<'EOF'
machine:
kubelet:
nodeIP:
validSubnets:
- 192.168.123.0/24
extraConfig:
maxPods: 512
kernel:
modules:
- name: openvswitch
- name: drbd
parameters:
- usermode_helper=disabled
- name: zfs
- name: spl
registries:
mirrors:
docker.io:
endpoints:
- https://dockerio.nexus.lllamnyp.su
cr.fluentbit.io:
endpoints:
- https://fluentbit.nexus.lllamnyp.su
docker-registry3.mariadb.com:
endpoints:
- https://mariadb.nexus.lllamnyp.su
gcr.io:
endpoints:
- https://gcr.nexus.lllamnyp.su
ghcr.io:
endpoints:
- https://ghcr.nexus.lllamnyp.su
quay.io:
endpoints:
- https://quay.nexus.lllamnyp.su
registry.k8s.io:
endpoints:
- https://k8s.nexus.lllamnyp.su
files:
- content: |
[plugins]
[plugins."io.containerd.cri.v1.runtime"]
device_ownership_from_security_context = true
path: /etc/cri/conf.d/20-customization.part
op: create
cluster:
apiServer:
extraArgs:
oidc-issuer-url: "https://keycloak.example.org/realms/cozy"
oidc-client-id: "kubernetes"
oidc-username-claim: "preferred_username"
oidc-groups-claim: "groups"
network:
cni:
name: none
dnsDomain: cozy.local
podSubnets:
- 10.244.0.0/16
serviceSubnets:
- 10.96.0.0/16
EOF
# Control-plane-only patches
cat > patch-controlplane.yaml <<'EOF'
machine:
nodeLabels:
node.kubernetes.io/exclude-from-external-load-balancers:
$patch: delete
network:
interfaces:
- interface: eth0
vip:
ip: 192.168.123.10
cluster:
allowSchedulingOnControlPlanes: true
controllerManager:
extraArgs:
bind-address: 0.0.0.0
scheduler:
extraArgs:
bind-address: 0.0.0.0
apiServer:
certSANs:
- 127.0.0.1
proxy:
disabled: true
discovery:
enabled: false
etcd:
advertisedSubnets:
- 192.168.123.0/24
EOF
# Generate secrets once
if [ ! -f secrets.yaml ]; then
talosctl gen secrets
fi
rm -f controlplane.yaml worker.yaml talosconfig kubeconfig
talosctl gen config --with-secrets secrets.yaml cozystack https://192.168.123.10:6443 \
--config-patch=@patch.yaml --config-patch-control-plane @patch-controlplane.yaml
}
@test "Apply Talos configuration to the node" {
# Apply the configuration to all three nodes
for node in 11 12 13; do
talosctl apply -f controlplane.yaml -n 192.168.123.${node} -e 192.168.123.${node} -i
done
# Wait for Talos services to come up again
timeout 60 sh -ec 'until nc -nz 192.168.123.11 50000 && nc -nz 192.168.123.12 50000 && nc -nz 192.168.123.13 50000; do sleep 1; done'
}
@test "Bootstrap Talos cluster" {
# Bootstrap etcd on the first node
timeout 10 sh -ec 'until talosctl bootstrap -n 192.168.123.11 -e 192.168.123.11; do sleep 1; done'
# Wait until etcd is healthy
timeout 180 sh -ec 'until talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 >/dev/null 2>&1; do sleep 1; done'
timeout 60 sh -ec 'while talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 2>&1 | grep -q "rpc error"; do sleep 1; done'
# Retrieve kubeconfig
rm -f kubeconfig
talosctl kubeconfig kubeconfig -e 192.168.123.10 -n 192.168.123.10
# Wait until all three nodes register in Kubernetes
timeout 60 sh -ec 'until [ $(kubectl get node --no-headers | wc -l) -eq 3 ]; do sleep 1; done'
}


@@ -0,0 +1,139 @@
package controller
import (
"context"
"crypto/sha256"
"encoding/hex"
"fmt"
"sort"
"time"
helmv2 "github.com/fluxcd/helm-controller/api/v2"
corev1 "k8s.io/api/core/v1"
kerrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/predicate"
)
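// CozystackConfigReconciler watches the Cozystack ConfigMaps in the cozy-system namespace and triggers
// a forced reconciliation of system HelmReleases whenever their combined contents change.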
type CozystackConfigReconciler struct {
client.Client
Scheme *runtime.Scheme
}
var configMapNames = []string{"cozystack", "cozystack-branding", "cozystack-scheduling"}
const configMapNamespace = "cozy-system"
const digestAnnotation = "cozystack.io/cozy-config-digest"
const forceReconcileKey = "reconcile.fluxcd.io/forceAt"
const requestedAt = "reconcile.fluxcd.io/requestedAt"
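// Reconcile recomputes the digest of the watched ConfigMaps and patches every system HelmRelease
// (plus tenant-root) whose stored digest differs, setting the Flux force-reconcile annotations.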
func (r *CozystackConfigReconciler) Reconcile(ctx context.Context, _ ctrl.Request) (ctrl.Result, error) {
log := log.FromContext(ctx)
digest, err := r.computeDigest(ctx)
if err != nil {
log.Error(err, "failed to compute config digest")
return ctrl.Result{}, nil
}
var helmList helmv2.HelmReleaseList
if err := r.List(ctx, &helmList); err != nil {
return ctrl.Result{}, fmt.Errorf("failed to list HelmReleases: %w", err)
}
now := time.Now().Format(time.RFC3339Nano)
updated := 0
for _, hr := range helmList.Items {
isSystemApp := hr.Labels["cozystack.io/system-app"] == "true"
isTenantRoot := hr.Namespace == "tenant-root" && hr.Name == "tenant-root"
if !isSystemApp && !isTenantRoot {
continue
}
patchTarget := hr.DeepCopy()
if patchTarget.Annotations == nil {
patchTarget.Annotations = map[string]string{}
}
if hr.Annotations[digestAnnotation] == digest {
continue
}
patchTarget.Annotations[digestAnnotation] = digest
patchTarget.Annotations[forceReconcileKey] = now
patchTarget.Annotations[requestedAt] = now
patch := client.MergeFrom(hr.DeepCopy())
if err := r.Patch(ctx, patchTarget, patch); err != nil {
log.Error(err, "failed to patch HelmRelease", "name", hr.Name, "namespace", hr.Namespace)
continue
}
updated++
log.Info("patched HelmRelease with new config digest", "name", hr.Name, "namespace", hr.Namespace)
}
log.Info("finished reconciliation", "updatedHelmReleases", updated)
return ctrl.Result{}, nil
}
func (r *CozystackConfigReconciler) computeDigest(ctx context.Context) (string, error) {
hash := sha256.New()
for _, name := range configMapNames {
var cm corev1.ConfigMap
err := r.Get(ctx, client.ObjectKey{Namespace: configMapNamespace, Name: name}, &cm)
if err != nil {
if kerrors.IsNotFound(err) {
continue // ignore missing
}
return "", err
}
// Sort keys for consistent hashing
var keys []string
for k := range cm.Data {
keys = append(keys, k)
}
sort.Strings(keys)
for _, k := range keys {
v := cm.Data[k]
fmt.Fprintf(hash, "%s:%s=%s\n", name, k, v)
}
}
return hex.EncodeToString(hash.Sum(nil)), nil
}
func (r *CozystackConfigReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
WithEventFilter(predicate.Funcs{
UpdateFunc: func(e event.UpdateEvent) bool {
cm, ok := e.ObjectNew.(*corev1.ConfigMap)
return ok && cm.Namespace == configMapNamespace && contains(configMapNames, cm.Name)
},
CreateFunc: func(e event.CreateEvent) bool {
cm, ok := e.Object.(*corev1.ConfigMap)
return ok && cm.Namespace == configMapNamespace && contains(configMapNames, cm.Name)
},
DeleteFunc: func(e event.DeleteEvent) bool {
cm, ok := e.Object.(*corev1.ConfigMap)
return ok && cm.Namespace == configMapNamespace && contains(configMapNames, cm.Name)
},
}).
For(&corev1.ConfigMap{}).
Complete(r)
}
func contains(slice []string, val string) bool {
for _, s := range slice {
if s == val {
return true
}
}
return false
}


@@ -3,6 +3,7 @@ package controller
import (
"context"
"strings"
"time"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
@@ -15,6 +16,10 @@ import (
cozyv1alpha1 "github.com/cozystack/cozystack/api/v1alpha1"
)
const (
deletionRequeueDelay = 30 * time.Second
)
// WorkloadMonitorReconciler reconciles a WorkloadMonitor object
type WorkloadReconciler struct {
client.Client
@@ -52,6 +57,9 @@ func (r *WorkloadReconciler) Reconcile(ctx context.Context, req ctrl.Request) (c
// found object, nothing to do
if err == nil {
if !t.GetDeletionTimestamp().IsZero() {
return ctrl.Result{RequeueAfter: deletionRequeueDelay}, nil
}
return ctrl.Result{}, nil
}


@@ -248,15 +248,24 @@ func (r *WorkloadMonitorReconciler) reconcilePodForMonitor(
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("pod-%s", pod.Name),
Namespace: pod.Namespace,
Labels: map[string]string{},
},
}
metaLabels := r.getWorkloadMetadata(&pod)
_, err := ctrl.CreateOrUpdate(ctx, r.Client, workload, func() error {
// Update owner references with the new monitor
updateOwnerReferences(workload.GetObjectMeta(), monitor)
// Copy labels from the Pod if needed
workload.Labels = pod.Labels
for k, v := range pod.Labels {
workload.Labels[k] = v
}
// Add workload meta to labels
for k, v := range metaLabels {
workload.Labels[k] = v
}
// Fill Workload status fields:
workload.Status.Kind = monitor.Spec.Kind
@@ -433,3 +442,12 @@ func mapObjectToMonitor[T client.Object](_ T, c client.Client) func(ctx context.
return requests
}
}
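// getWorkloadMetadata derives additional Workload labels from well-known annotations on the object,
// currently the KubeVirt cluster instance type name.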
func (r *WorkloadMonitorReconciler) getWorkloadMetadata(obj client.Object) map[string]string {
labels := make(map[string]string)
annotations := obj.GetAnnotations()
if instanceType, ok := annotations["kubevirt.io/cluster-instancetype-name"]; ok {
labels["workloads.cozystack.io/kubevirt-vmi-instance-type"] = instanceType
}
return labels
}


@@ -16,10 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
version: 0.2.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "0.1.0"
appVersion: "0.2.0"


@@ -1,4 +1,4 @@
include ../../../scripts/package.mk
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md
readme-generator-for-helm -v values.yaml -s values.schema.json -r README.md


@@ -0,0 +1 @@
../../../library/cozy-lib


@@ -18,3 +18,14 @@ rules:
resourceNames:
- {{ .Release.Name }}-ui
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -1,5 +1,5 @@
{
"properties": {},
"title": "Chart Values",
"type": "object",
"properties": {}
"type": "object"
}

View File

@@ -16,15 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.9.0
version: 0.11.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "24.9.2"
dependencies:
- name: cozy-lib
version: 0.1.0
repository: "http://cozystack.cozy-system.svc/repos/library"

View File

@@ -1,10 +1,12 @@
CLICKHOUSE_BACKUP_TAG = $(shell awk '$$1 == "version:" {print $$2}' Chart.yaml)
CLICKHOUSE_BACKUP_TAG = $(shell awk '$$0 ~ /^version:/ {print $$2}' Chart.yaml)
PRESET_ENUM := ["nano","micro","small","medium","large","xlarge","2xlarge"]
include ../../../scripts/common-envs.mk
include ../../../scripts/package.mk
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md
readme-generator-for-helm -v values.yaml -s values.schema.json -r README.md
yq -i -o json --indent 4 '.properties.resourcesPreset.enum = $(PRESET_ENUM)' values.schema.json
image:
docker buildx build images/clickhouse-backup \

View File

@@ -1,50 +1,80 @@
# Managed Clickhouse Service
# Managed ClickHouse Service
### How to restore backup:
ClickHouse is an open source high-performance and column-oriented SQL database management system (DBMS).
It is used for online analytical processing (OLAP).
find snapshot:
```
restic -r s3:s3.example.org/clickhouse-backups/table_name snapshots
```
### How to restore backup from S3
restore:
```
restic -r s3:s3.example.org/clickhouse-backups/table_name restore latest --target /tmp/
```
1. Find the snapshot:
more details:
- https://itnext.io/restic-effective-backup-from-stdin-4bc1e8f083c1
```bash
restic -r s3:s3.example.org/clickhouse-backups/table_name snapshots
```
2. Restore it:
```bash
restic -r s3:s3.example.org/clickhouse-backups/table_name restore latest --target /tmp/
```
For more details, read [Restic: Effective Backup from Stdin](https://blog.aenix.io/restic-effective-backup-from-stdin-4bc1e8f083c1).
## Parameters
### Common parameters
| Name | Description | Value |
| ---------------- | ----------------------------------- | ------ |
| `size` | Persistent Volume size | `10Gi` |
| `logStorageSize` | Persistent Volume for logs size | `2Gi` |
| `shards` | Number of Clickhouse replicas | `1` |
| `replicas` | Number of Clickhouse shards | `2` |
| `storageClass` | StorageClass used to store the data | `""` |
| `logTTL` | for query_log and query_thread_log | `15` |
| Name | Description | Value |
| ----------------- | --------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `replicas` | Number of Clickhouse replicas | `2` |
| `shards` | Number of Clickhouse shards | `1` |
| `resources` | Explicit CPU and memory configuration for each ClickHouse replica. When left empty, the preset defined in `resourcesPreset` is applied. | `{}` |
| `resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge. | `small` |
| `size` | Persistent Volume Claim size, available for application data | `10Gi` |
| `storageClass` | StorageClass used to store the application data | `""` |
### Configuration parameters
### Application-specific parameters
| Name | Description | Value |
| ------- | ------------------- | ----- |
| `users` | Users configuration | `{}` |
| Name | Description | Value |
| ---------------- | -------------------------------------------------------- | ----- |
| `logStorageSize` | Size of Persistent Volume for logs | `2Gi` |
| `logTTL` | TTL (expiration time) for query_log and query_thread_log | `15` |
| `users` | Users configuration | `{}` |
### Backup parameters
| Name | Description | Value |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled` | Enable pereiodic backups | `false` |
| `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/clickhouse-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
| `resources` | Resources | `{}` |
| `resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
| Name | Description | Value |
| ------------------------ | ---------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled` | Enable periodic backups | `false` |
| `backup.s3Region` | AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | S3 bucket used for storing backups | `s3.example.org/clickhouse-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | Retention strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | Access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | Secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | Password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
## Parameter examples and reference
### resources and resourcesPreset
`resources` sets explicit CPU and memory configurations for each replica.
When left empty, the preset defined in `resourcesPreset` is applied.
```yaml
resources:
cpu: 4000m
memory: 4Gi
```
`resourcesPreset` sets named CPU and memory configurations for each replica.
This setting is ignored if the corresponding `resources` value is set.
| Preset name | CPU | memory |
|-------------|--------|---------|
| `nano` | `250m` | `128Mi` |
| `micro` | `500m` | `256Mi` |
| `small` | `1` | `512Mi` |
| `medium` | `1` | `1Gi` |
| `large` | `2` | `2Gi` |
| `xlarge` | `4` | `4Gi` |
| `2xlarge` | `8` | `8Gi` |
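### backup
A minimal values snippet enabling periodic backups, using the chart defaults from the table above; replace the S3 keys and the Restic password with your own before use:
```yaml
backup:
  enabled: true
  s3Region: us-east-1
  s3Bucket: s3.example.org/clickhouse-backups
  schedule: "0 2 * * *"
  cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
  s3AccessKey: <your-access-key>
  s3SecretKey: <your-secret-key>
  resticPassword: <your-restic-password>
```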

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/clickhouse-backup:0.9.0@sha256:3faf7a4cebf390b9053763107482de175aa0fdb88c1e77424fd81100b1c3a205
ghcr.io/cozystack/cozystack/clickhouse-backup:0.11.1@sha256:3faf7a4cebf390b9053763107482de175aa0fdb88c1e77424fd81100b1c3a205

View File

@@ -1,3 +1,5 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $clusterDomain := (index $cozyConfig.data "cluster-domain") | default "cozy.local" }}
{{- $existingSecret := lookup "v1" "Secret" .Release.Namespace (printf "%s-credentials" .Release.Name) }}
{{- $passwords := dict }}
{{- $users := .Values.users }}
@@ -32,7 +34,7 @@ kind: "ClickHouseInstallation"
metadata:
name: "{{ .Release.Name }}"
spec:
namespaceDomainPattern: "%s.svc.cozy.local"
namespaceDomainPattern: "%s.svc.{{ $clusterDomain }}"
defaults:
templates:
dataVolumeClaimTemplate: data-volume-template
@@ -92,6 +94,9 @@ spec:
templates:
volumeClaimTemplates:
- name: data-volume-template
metadata:
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
accessModes:
- ReadWriteOnce
@@ -99,6 +104,9 @@ spec:
requests:
storage: {{ .Values.size }}
- name: log-volume-template
metadata:
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
accessModes:
- ReadWriteOnce
@@ -107,6 +115,9 @@ spec:
storage: {{ .Values.logStorageSize }}
podTemplates:
- name: clickhouse-per-host
metadata:
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
affinity:
podAntiAffinity:
@@ -121,11 +132,7 @@ spec:
containers:
- name: clickhouse
image: clickhouse/clickhouse-server:24.9.2.42
{{- if .Values.resources }}
resources: {{- include "cozy-lib.resources.sanitize" (list .Values.resources $) | nindent 16 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "cozy-lib.resources.preset" (list .Values.resourcesPreset $) | nindent 16 }}
{{- end }}
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list .Values.resourcesPreset .Values.resources $) | nindent 16 }}
volumeMounts:
- name: data-volume-template
mountPath: /var/lib/clickhouse
@@ -133,6 +140,9 @@ spec:
mountPath: /var/log/clickhouse-server
serviceTemplates:
- name: svc-template
metadata:
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
generateName: chendpoint-{chi}
spec:
ports:
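For reference, the cluster domain substituted into `namespaceDomainPattern` above is read from the `cozystack` ConfigMap in the `cozy-system` namespace, falling back to `cozy.local` when the key is absent. A sketch of the relevant data (the domain value is illustrative):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cozystack
  namespace: cozy-system
data:
  cluster-domain: example.local
```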

View File

@@ -24,3 +24,14 @@ rules:
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -9,5 +9,5 @@ spec:
kind: clickhouse
type: clickhouse
selector:
clickhouse.altinity.com/chi: {{ $.Release.Name }}
app.kubernetes.io/instance: {{ $.Release.Name }}
version: {{ $.Chart.Version }}

View File

@@ -1,91 +1,100 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"size": {
"type": "string",
"description": "Persistent Volume size",
"default": "10Gi"
},
"logStorageSize": {
"type": "string",
"description": "Persistent Volume for logs size",
"default": "2Gi"
},
"shards": {
"type": "number",
"description": "Number of Clickhouse replicas",
"default": 1
},
"replicas": {
"type": "number",
"description": "Number of Clickhouse shards",
"default": 2
},
"storageClass": {
"type": "string",
"description": "StorageClass used to store the data",
"default": ""
},
"logTTL": {
"type": "number",
"description": "for query_log and query_thread_log",
"default": 15
},
"backup": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Enable pereiodic backups",
"default": false
},
"s3Region": {
"type": "string",
"description": "The AWS S3 region where backups are stored",
"default": "us-east-1"
},
"s3Bucket": {
"type": "string",
"description": "The S3 bucket used for storing backups",
"default": "s3.example.org/clickhouse-backups"
},
"schedule": {
"type": "string",
"description": "Cron schedule for automated backups",
"default": "0 2 * * *"
},
"cleanupStrategy": {
"type": "string",
"description": "The strategy for cleaning up old backups",
"default": "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
"default": "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m",
"description": "Retention strategy for cleaning up old backups",
"type": "string"
},
"s3AccessKey": {
"type": "string",
"description": "The access key for S3, used for authentication",
"default": "oobaiRus9pah8PhohL1ThaeTa4UVa7gu"
},
"s3SecretKey": {
"type": "string",
"description": "The secret key for S3, used for authentication",
"default": "ju3eum4dekeich9ahM1te8waeGai0oog"
"enabled": {
"default": false,
"description": "Enable periodic backups",
"type": "boolean"
},
"resticPassword": {
"type": "string",
"description": "The password for Restic backup encryption",
"default": "ChaXoveekoh6eigh4siesheeda2quai0"
"default": "ChaXoveekoh6eigh4siesheeda2quai0",
"description": "Password for Restic backup encryption",
"type": "string"
},
"s3AccessKey": {
"default": "oobaiRus9pah8PhohL1ThaeTa4UVa7gu",
"description": "Access key for S3, used for authentication",
"type": "string"
},
"s3Bucket": {
"default": "s3.example.org/clickhouse-backups",
"description": "S3 bucket used for storing backups",
"type": "string"
},
"s3Region": {
"default": "us-east-1",
"description": "AWS S3 region where backups are stored",
"type": "string"
},
"s3SecretKey": {
"default": "ju3eum4dekeich9ahM1te8waeGai0oog",
"description": "Secret key for S3, used for authentication",
"type": "string"
},
"schedule": {
"default": "0 2 * * *",
"description": "Cron schedule for automated backups",
"type": "string"
}
}
},
"type": "object"
},
"logStorageSize": {
"default": "2Gi",
"description": "Size of Persistent Volume for logs",
"type": "string"
},
"logTTL": {
"default": 15,
"description": "TTL (expiration time) for query_log and query_thread_log",
"type": "number"
},
"replicas": {
"default": 2,
"description": "Number of Clickhouse replicas",
"type": "number"
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
"default": {},
"description": "Explicit CPU and memory configuration for each ClickHouse replica. When left empty, the preset defined in `resourcesPreset` is applied.",
"type": "object"
},
"resourcesPreset": {
"default": "small",
"description": "Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge.",
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
"enum": [
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
},
"shards": {
"default": 1,
"description": "Number of Clickhouse shards",
"type": "number"
},
"size": {
"default": "10Gi",
"description": "Persistent Volume Claim size, available for application data",
"type": "string"
},
"storageClass": {
"default": "",
"description": "StorageClass used to store the application data",
"type": "string"
}
}
}
},
"title": "Chart Values",
"type": "object"
}

View File

@@ -1,21 +1,29 @@
## @section Common parameters
## @param size Persistent Volume size
## @param logStorageSize Persistent Volume for logs size
## @param shards Number of Clickhouse replicas
## @param replicas Number of Clickhouse shards
## @param storageClass StorageClass used to store the data
## @param logTTL for query_log and query_thread_log
##
size: 10Gi
logStorageSize: 2Gi
shards: 1
## @param replicas Number of Clickhouse replicas
replicas: 2
## @param shards Number of Clickhouse shards
shards: 1
## @param resources Explicit CPU and memory configuration for each ClickHouse replica. When left empty, the preset defined in `resourcesPreset` is applied.
resources: {}
# resources:
# cpu: 4000m
# memory: 4Gi
## @param resourcesPreset Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge.
resourcesPreset: "small"
## @param size Persistent Volume Claim size, available for application data
size: 10Gi
## @param storageClass StorageClass used to store the application data
storageClass: ""
## @section Application-specific parameters
##
## @param logStorageSize Size of Persistent Volume for logs
logStorageSize: 2Gi
## @param logTTL TTL (expiration time) for query_log and query_thread_log
logTTL: 15
## @section Configuration parameters
## @param users [object] Users configuration
## Example:
## users:
@@ -27,16 +35,17 @@ logTTL: 15
##
users: {}
## @section Backup parameters
## @param backup.enabled Enable pereiodic backups
## @param backup.s3Region The AWS S3 region where backups are stored
## @param backup.s3Bucket The S3 bucket used for storing backups
## @param backup.enabled Enable periodic backups
## @param backup.s3Region AWS S3 region where backups are stored
## @param backup.s3Bucket S3 bucket used for storing backups
## @param backup.schedule Cron schedule for automated backups
## @param backup.cleanupStrategy The strategy for cleaning up old backups
## @param backup.s3AccessKey The access key for S3, used for authentication
## @param backup.s3SecretKey The secret key for S3, used for authentication
## @param backup.resticPassword The password for Restic backup encryption
## @param backup.cleanupStrategy Retention strategy for cleaning up old backups
## @param backup.s3AccessKey Access key for S3, used for authentication
## @param backup.s3SecretKey Secret key for S3, used for authentication
## @param backup.resticPassword Password for Restic backup encryption
backup:
enabled: false
s3Region: us-east-1
@@ -47,15 +56,3 @@ backup:
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
## @param resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "nano"

View File

@@ -16,10 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.6.0
version: 1.0.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.24.0"
appVersion: 2.4.0

View File

@@ -1,4 +1,13 @@
include ../../../scripts/package.mk
PRESET_ENUM := ["nano","micro","small","medium","large","xlarge","2xlarge"]
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md
readme-generator-for-helm -v values.yaml -s values.schema.json -r README.md
yq -i -o json --indent 4 '.properties.resourcesPreset.enum = $(PRESET_ENUM)' values.schema.json
update:
tag=$$(git ls-remote --tags --sort="v:refname" https://github.com/FerretDB/FerretDB | awk -F'[/^]' '{sub("^v", "", $$3)} END{print $$3}') && \
pgtag=$$(skopeo list-tags docker://ghcr.io/ferretdb/postgres-documentdb | jq -r --arg tag "$$tag" '.Tags[] | select(endswith("ferretdb-" + $$tag))' | sort -V | tail -n1) && \
sed -i "s|\(imageName: ghcr.io/ferretdb/postgres-documentdb:\).*|\1$$pgtag|" templates/postgres.yaml && \
sed -i "s|\(image: ghcr.io/ferretdb/ferretdb:\).*|\1$$tag|" templates/ferretdb.yaml && \
sed -i "s|\(appVersion: \).*|\1$$tag|" Chart.yaml

View File

@@ -1,37 +1,72 @@
# Managed FerretDB Service
FerretDB is an open source MongoDB alternative.
It translates MongoDB wire protocol queries to SQL and can be used as a direct replacement for MongoDB 5.0+.
Internally, the FerretDB service is backed by Postgres.
## Parameters
### Common parameters
| Name | Description | Value |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `size` | Persistent Volume size | `10Gi` |
| `replicas` | Number of Postgres replicas | `2` |
| `storageClass` | StorageClass used to store the data | `""` |
| `quorum.minSyncReplicas` | Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed. | `0` |
| `quorum.maxSyncReplicas` | Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the number of instances). | `0` |
| Name | Description | Value |
| ----------------- | ------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `replicas` | Number of replicas | `2` |
| `resources` | Explicit CPU and memory configuration for each FerretDB replica. When left empty, the preset defined in `resourcesPreset` is applied. | `{}` |
| `resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge. | `micro` |
| `size` | Persistent Volume size | `10Gi` |
| `storageClass` | StorageClass used to store the data | `""` |
| `external` | Enable external access from outside the cluster | `false` |
### Configuration parameters
### Application-specific parameters
| Name | Description | Value |
| ------- | ------------------- | ----- |
| `users` | Users configuration | `{}` |
| Name | Description | Value |
| ------------------------ | --------------------------------------------------------------------------------------------------------------------------- | ----- |
| `quorum.minSyncReplicas` | Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed | `0` |
| `quorum.maxSyncReplicas` | Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the total number of replicas) | `0` |
| `users` | Users configuration | `{}` |
### Backup parameters
| Name | Description | Value |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled` | Enable pereiodic backups | `false` |
| `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/postgres-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
| `resources` | Resources | `{}` |
| `resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
| Name | Description | Value |
| ------------------------ | ---------------------------------------------------------- | ----------------------------------- |
| `backup.enabled` | Enable regular backups | `false` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * * *` |
| `backup.retentionPolicy` | Retention policy | `30d` |
| `backup.destinationPath` | Path to store the backup (i.e. s3://bucket/path/to/folder) | `s3://bucket/path/to/folder/` |
| `backup.endpointURL` | S3 Endpoint used to upload data to the cloud | `http://minio-gateway-service:9000` |
| `backup.s3AccessKey` | Access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | Secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
### Bootstrap (recovery) parameters
| Name | Description | Value |
| ------------------------ | -------------------------------------------------------------------------------------------------------------------- | ------- |
| `bootstrap.enabled` | Restore database cluster from a backup | `false` |
| `bootstrap.recoveryTime` | Timestamp (PITR) up to which recovery will proceed, expressed in RFC 3339 format. If left empty, the latest available backup is restored | `""` |
| `bootstrap.oldName` | Name of database cluster before deleting | `""` |
## Parameter examples and reference
### resources and resourcesPreset
`resources` sets explicit CPU and memory configurations for each replica.
When left empty, the preset defined in `resourcesPreset` is applied.
```yaml
resources:
cpu: 4000m
memory: 4Gi
```
`resourcesPreset` sets named CPU and memory configurations for each replica.
This setting is ignored if the corresponding `resources` value is set.
| Preset name | CPU | memory |
|-------------|--------|---------|
| `nano` | `250m` | `128Mi` |
| `micro` | `500m` | `256Mi` |
| `small` | `1` | `512Mi` |
| `medium` | `1` | `1Gi` |
| `large` | `2` | `2Gi` |
| `xlarge` | `4` | `4Gi` |
| `2xlarge` | `8` | `8Gi` |
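### backup and bootstrap (recovery)
A sketch of values enabling the CNPG-based backups and restoring from a previous cluster. Defaults are taken from the tables above; the cluster name and PITR timestamp are illustrative (the timestamp format follows the chart's own example comment):
```yaml
backup:
  enabled: true
  retentionPolicy: 30d
  destinationPath: s3://bucket/path/to/folder/
  endpointURL: http://minio-gateway-service:9000
  schedule: "0 2 * * * *"
  s3AccessKey: <your-access-key>
  s3SecretKey: <your-secret-key>

bootstrap:
  enabled: true
  oldName: my-ferretdb-postgres                 # name of the cluster being restored from (illustrative)
  recoveryTime: "2020-11-26 15:22:00.00000+00"  # optional PITR target; leave empty to restore the latest backup
```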

View File

@@ -0,0 +1 @@
../../../library/cozy-lib

View File

@@ -1 +0,0 @@
ghcr.io/cozystack/cozystack/postgres-backup:0.12.0@sha256:10179ed56457460d95cd5708db2a00130901255fa30c4dd76c65d2ef5622b61f

View File

@@ -1,99 +0,0 @@
{{- if .Values.backup.enabled }}
{{ $image := .Files.Get "images/backup.json" | fromJson }}
apiVersion: batch/v1
kind: CronJob
metadata:
name: {{ .Release.Name }}-backup
spec:
schedule: "{{ .Values.backup.schedule }}"
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 3
jobTemplate:
spec:
backoffLimit: 2
template:
spec:
restartPolicy: OnFailure
template:
metadata:
annotations:
checksum/config: {{ include (print $.Template.BasePath "/backup-script.yaml") . | sha256sum }}
checksum/secret: {{ include (print $.Template.BasePath "/backup-secret.yaml") . | sha256sum }}
spec:
restartPolicy: Never
containers:
- name: pgdump
image: "{{ $.Files.Get "images/postgres-backup.tag" | trim }}"
command:
- /bin/sh
- /scripts/backup.sh
env:
- name: REPO_PREFIX
value: {{ required "s3Bucket is not specified!" .Values.backup.s3Bucket | quote }}
- name: CLEANUP_STRATEGY
value: {{ required "cleanupStrategy is not specified!" .Values.backup.cleanupStrategy | quote }}
- name: PGUSER
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-postgres-superuser
key: username
- name: PGPASSWORD
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-postgres-superuser
key: password
- name: PGHOST
value: {{ .Release.Name }}-postgres-rw
- name: PGPORT
value: "5432"
- name: PGDATABASE
value: postgres
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-backup
key: s3AccessKey
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-backup
key: s3SecretKey
- name: AWS_DEFAULT_REGION
value: {{ .Values.backup.s3Region }}
- name: RESTIC_PASSWORD
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-backup
key: resticPassword
volumeMounts:
- mountPath: /scripts
name: scripts
- mountPath: /tmp
name: tmp
- mountPath: /.cache
name: cache
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true
volumes:
- name: scripts
secret:
secretName: {{ .Release.Name }}-backup-script
- name: tmp
emptyDir: {}
- name: cache
emptyDir: {}
securityContext:
runAsNonRoot: true
runAsUser: 9000
runAsGroup: 9000
seccompProfile:
type: RuntimeDefault
{{- end }}

View File

@@ -1,50 +0,0 @@
{{- if .Values.backup.enabled }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-backup-script
stringData:
backup.sh: |
#!/bin/sh
set -e
set -o pipefail
JOB_ID="job-$(uuidgen|cut -f1 -d-)"
DB_LIST=$(psql -Atq -c 'SELECT datname FROM pg_catalog.pg_database;' | grep -v '^\(postgres\|app\|template.*\)$')
echo DB_LIST=$(echo "$DB_LIST" | shuf) # shuffle list
echo "Job ID: $JOB_ID"
echo "Target repo: $REPO_PREFIX"
echo "Cleanup strategy: $CLEANUP_STRATEGY"
echo "Start backup for:"
echo "$DB_LIST"
echo
echo "Backup started at `date +%Y-%m-%d\ %H:%M:%S`"
for db in $DB_LIST; do
(
set -x
restic -r "s3:${REPO_PREFIX}/$db" cat config >/dev/null 2>&1 || \
restic -r "s3:${REPO_PREFIX}/$db" init --repository-version 2
restic -r "s3:${REPO_PREFIX}/$db" unlock --remove-all >/dev/null 2>&1 || true # no locks, k8s takes care of it
pg_dump -Z0 -Ft -d "$db" | \
restic -r "s3:${REPO_PREFIX}/$db" backup --tag "$JOB_ID" --stdin --stdin-filename dump.tar
restic -r "s3:${REPO_PREFIX}/$db" tag --tag "$JOB_ID" --set "completed"
)
done
echo "Backup finished at `date +%Y-%m-%d\ %H:%M:%S`"
echo
echo "Run cleanup:"
echo
echo "Cleanup started at `date +%Y-%m-%d\ %H:%M:%S`"
for db in $DB_LIST; do
(
set -x
restic forget -r "s3:${REPO_PREFIX}/$db" --group-by=tags --keep-tag "completed" # keep completed snapshots only
restic forget -r "s3:${REPO_PREFIX}/$db" --group-by=tags $CLEANUP_STRATEGY
restic prune -r "s3:${REPO_PREFIX}/$db"
)
done
echo "Cleanup finished at `date +%Y-%m-%d\ %H:%M:%S`"
{{- end }}

View File

@@ -1,11 +0,0 @@
{{- if .Values.backup.enabled }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-backup
stringData:
s3AccessKey: {{ required "s3AccessKey is not specified!" .Values.backup.s3AccessKey }}
s3SecretKey: {{ required "s3SecretKey is not specified!" .Values.backup.s3SecretKey }}
resticPassword: {{ required "resticPassword is not specified!" .Values.backup.resticPassword }}
{{- end }}

View File

@@ -0,0 +1,12 @@
{{- if .Values.backup.enabled }}
---
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
name: {{ .Release.Name }}-postgres
spec:
schedule: {{ .Values.backup.schedule | quote }}
backupOwnerReference: self
cluster:
name: {{ .Release.Name }}-postgres
{{- end }}

View File

@@ -24,3 +24,14 @@ rules:
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -2,6 +2,8 @@ apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
type: {{ ternary "LoadBalancer" "ClusterIP" .Values.external }}
{{- if .Values.external }}

View File

@@ -12,15 +12,18 @@ spec:
metadata:
labels:
app: {{ .Release.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
containers:
- name: ferretdb
image: ghcr.io/ferretdb/ferretdb:1.24.0
image: ghcr.io/ferretdb/ferretdb:2.4.0
ports:
- containerPort: 27017
env:
- name: FERRETDB_POSTGRESQL_URL
- name: POSTGRESQL_PASSWORD
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-postgres-app
key: uri
name: {{ .Release.Name }}-postgres-superuser
key: password
- name: FERRETDB_POSTGRESQL_URL
value: "postgresql://postgres:$(POSTGRESQL_PASSWORD)@{{ .Release.Name }}-postgres-rw:5432/postgres"

View File

@@ -1,66 +0,0 @@
apiVersion: batch/v1
kind: Job
metadata:
name: {{ .Release.Name }}-init-job
annotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": before-hook-creation
spec:
template:
metadata:
name: {{ .Release.Name }}-init-job
annotations:
checksum/config: {{ include (print $.Template.BasePath "/init-script.yaml") . | sha256sum }}
spec:
restartPolicy: Never
containers:
- name: postgres
image: ghcr.io/cloudnative-pg/postgresql:15.3
command:
- bash
- /scripts/init.sh
env:
- name: PGUSER
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-postgres-superuser
key: username
- name: PGPASSWORD
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-postgres-superuser
key: password
- name: PGHOST
value: {{ .Release.Name }}-postgres-rw
- name: PGPORT
value: "5432"
- name: PGDATABASE
value: postgres
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true
volumeMounts:
- mountPath: /etc/secret
name: secret
- mountPath: /scripts
name: scripts
securityContext:
fsGroup: 26
runAsGroup: 26
runAsNonRoot: true
runAsUser: 26
seccompProfile:
type: RuntimeDefault
volumes:
- name: secret
secret:
secretName: {{ .Release.Name }}-postgres-superuser
- name: scripts
secret:
secretName: {{ .Release.Name }}-init-script

View File

@@ -1,131 +0,0 @@
{{- $existingSecret := lookup "v1" "Secret" .Release.Namespace (printf "%s-credentials" .Release.Name) }}
{{- $passwords := dict }}
{{- with (index $existingSecret "data") }}
{{- range $k, $v := . }}
{{- $_ := set $passwords $k (b64dec $v) }}
{{- end }}
{{- end }}
{{- range $user, $u := .Values.users }}
{{- if $u.password }}
{{- $_ := set $passwords $user $u.password }}
{{- else if not (index $passwords $user) }}
{{- $_ := set $passwords $user (randAlphaNum 16) }}
{{- end }}
{{- end }}
{{- if .Values.users }}
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-credentials
stringData:
{{- range $user, $u := .Values.users }}
{{ quote $user }}: {{ quote (index $passwords $user) }}
{{- end }}
{{- end }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-init-script
stringData:
init.sh: |
#!/bin/bash
set -e
until pg_isready ; do sleep 5; done
echo "== create users"
{{- if .Values.users }}
psql -v ON_ERROR_STOP=1 <<\EOT
{{- range $user, $u := .Values.users }}
SELECT 'CREATE ROLE {{ $user }} LOGIN INHERIT;'
WHERE NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = '{{ $user }}')\gexec
ALTER ROLE {{ $user }} WITH PASSWORD '{{ index $passwords $user }}' LOGIN INHERIT {{ ternary "REPLICATION" "NOREPLICATION" (default false $u.replication) }};
COMMENT ON ROLE {{ $user }} IS 'user managed by helm';
{{- end }}
EOT
{{- end }}
echo "== delete users"
MANAGED_USERS=$(echo '\du+' | psql | awk -F'|' '$4 == " user managed by helm" {print $1}' | awk NF=NF RS= OFS=' ')
DEFINED_USERS="{{ join " " (keys .Values.users) }}"
DELETE_USERS=$(for user in $MANAGED_USERS; do case " $DEFINED_USERS " in *" $user "*) :;; *) echo $user;; esac; done)
echo "users to delete: $DELETE_USERS"
for user in $DELETE_USERS; do
# https://stackoverflow.com/a/51257346/2931267
psql -v ON_ERROR_STOP=1 --echo-all <<EOT
REASSIGN OWNED BY $user TO postgres;
DROP OWNED BY $user;
DROP USER $user;
EOT
done
echo "== create roles"
psql -v ON_ERROR_STOP=1 --echo-all <<\EOT
SELECT 'CREATE ROLE app_admin NOINHERIT;'
WHERE NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = 'app_admin')\gexec
COMMENT ON ROLE app_admin IS 'role managed by helm';
EOT
echo "== grant privileges on databases to roles"
psql -v ON_ERROR_STOP=1 --echo-all -d "app" <<\EOT
ALTER DATABASE app OWNER TO app_admin;
DO $$
DECLARE
schema_record record;
BEGIN
-- Loop over all schemas
FOR schema_record IN SELECT schema_name FROM information_schema.schemata WHERE schema_name NOT IN ('pg_catalog', 'information_schema') LOOP
-- Changing Schema Ownership
EXECUTE format('ALTER SCHEMA %I OWNER TO %I', schema_record.schema_name, 'app_admin');
-- Add rights for the admin role
EXECUTE format('GRANT ALL ON SCHEMA %I TO %I', schema_record.schema_name, 'app_admin');
EXECUTE format('GRANT ALL ON ALL TABLES IN SCHEMA %I TO %I', schema_record.schema_name, 'app_admin');
EXECUTE format('GRANT ALL ON ALL SEQUENCES IN SCHEMA %I TO %I', schema_record.schema_name, 'app_admin');
EXECUTE format('GRANT ALL ON ALL FUNCTIONS IN SCHEMA %I TO %I', schema_record.schema_name, 'app_admin');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON TABLES TO %I', schema_record.schema_name, 'app_admin');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON SEQUENCES TO %I', schema_record.schema_name, 'app_admin');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON FUNCTIONS TO %I', schema_record.schema_name, 'app_admin');
END LOOP;
END$$;
EOT
echo "== setup event trigger for schema creation"
psql -v ON_ERROR_STOP=1 --echo-all -d "app" <<\EOT
CREATE OR REPLACE FUNCTION auto_grant_schema_privileges()
RETURNS event_trigger LANGUAGE plpgsql AS $$
DECLARE
obj record;
BEGIN
FOR obj IN SELECT * FROM pg_event_trigger_ddl_commands() WHERE command_tag = 'CREATE SCHEMA' LOOP
-- Set owner for schema
EXECUTE format('ALTER SCHEMA %I OWNER TO %I', obj.object_identity, 'app_admin');
-- Set privileges for admin role
EXECUTE format('GRANT ALL ON SCHEMA %I TO %I', obj.object_identity, 'app_admin');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON TABLES TO %I', obj.object_identity, 'app_admin');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON SEQUENCES TO %I', obj.object_identity, 'app_admin');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON FUNCTIONS TO %I', obj.object_identity, 'app_admin');
END LOOP;
END;
$$;
DROP EVENT TRIGGER IF EXISTS trigger_auto_grant;
CREATE EVENT TRIGGER trigger_auto_grant ON ddl_command_end
WHEN TAG IN ('CREATE SCHEMA')
EXECUTE PROCEDURE auto_grant_schema_privileges();
EOT
echo "== assign roles to users"
psql -v ON_ERROR_STOP=1 --echo-all <<\EOT
GRANT app_admin TO app;
{{- range $user, $u := $.Values.users }}
GRANT app_admin TO {{ $user }};
{{- end }}
EOT

View File

@@ -5,6 +5,50 @@ metadata:
name: {{ .Release.Name }}-postgres
spec:
instances: {{ .Values.replicas }}
{{- if .Values.backup.enabled }}
backup:
barmanObjectStore:
destinationPath: {{ .Values.backup.destinationPath }}
endpointURL: {{ .Values.backup.endpointURL }}
s3Credentials:
accessKeyId:
name: {{ .Release.Name }}-s3-creds
key: AWS_ACCESS_KEY_ID
secretAccessKey:
name: {{ .Release.Name }}-s3-creds
key: AWS_SECRET_ACCESS_KEY
retentionPolicy: {{ .Values.backup.retentionPolicy }}
{{- end }}
bootstrap:
initdb:
postInitSQL:
- 'CREATE EXTENSION IF NOT EXISTS documentdb CASCADE;'
{{- if .Values.bootstrap.enabled }}
recovery:
source: {{ .Values.bootstrap.oldName }}
{{- if .Values.bootstrap.recoveryTime }}
recoveryTarget:
targetTime: {{ .Values.bootstrap.recoveryTime }}
{{- end }}
{{- end }}
{{- if .Values.bootstrap.enabled }}
externalClusters:
- name: {{ .Values.bootstrap.oldName }}
barmanObjectStore:
destinationPath: {{ .Values.backup.destinationPath }}
endpointURL: {{ .Values.backup.endpointURL }}
s3Credentials:
accessKeyId:
name: {{ .Release.Name }}-s3-creds
key: AWS_ACCESS_KEY_ID
secretAccessKey:
name: {{ .Release.Name }}-s3-creds
key: AWS_SECRET_ACCESS_KEY
{{- end }}
imageName: ghcr.io/ferretdb/postgres-documentdb:17-0.105.0-ferretdb-2.4.0
postgresUID: 999
postgresGID: 999
enableSuperuserAccess: true
{{- $configMap := lookup "v1" "ConfigMap" "cozy-system" "cozystack-scheduling" }}
{{- if $configMap }}
@@ -18,14 +62,21 @@ spec:
{{- end }}
minSyncReplicas: {{ .Values.quorum.minSyncReplicas }}
maxSyncReplicas: {{ .Values.quorum.maxSyncReplicas }}
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 4 }}
{{- else if ne .Values.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.resourcesPreset "Release" .Release) | nindent 4 }}
{{- end }}
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list .Values.resourcesPreset .Values.resources $) | nindent 4 }}
monitoring:
enablePodMonitor: true
postgresql:
shared_preload_libraries:
- pg_cron
- pg_documentdb_core
- pg_documentdb
parameters:
cron.database_name: 'postgres'
pg_hba:
- host postgres postgres 127.0.0.1/32 trust
- host postgres postgres ::1/128 trust
storage:
size: {{ required ".Values.size is required" .Values.size }}
{{- with .Values.storageClass }}
@@ -35,6 +86,7 @@ spec:
inheritedMetadata:
labels:
policy.cozystack.io/allow-to-apiserver: "true"
app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.users }}
managed:
@@ -45,8 +97,6 @@ spec:
passwordSecret:
name: {{ printf "%s-user-%s" $.Release.Name $user }}
login: true
inRoles:
- app
{{- end }}
{{- end }}

View File

@@ -9,5 +9,5 @@ spec:
kind: ferretdb
type: ferretdb
selector:
app: {{ $.Release.Name }}
app.kubernetes.io/instance: {{ $.Release.Name }}
version: {{ $.Chart.Version }}

View File

@@ -1,96 +1,120 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"external": {
"type": "boolean",
"description": "Enable external access from outside the cluster",
"default": false
},
"size": {
"type": "string",
"description": "Persistent Volume size",
"default": "10Gi"
},
"replicas": {
"type": "number",
"description": "Number of Postgres replicas",
"default": 2
},
"storageClass": {
"type": "string",
"description": "StorageClass used to store the data",
"default": ""
},
"quorum": {
"type": "object",
"properties": {
"minSyncReplicas": {
"type": "number",
"description": "Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed.",
"default": 0
},
"maxSyncReplicas": {
"type": "number",
"description": "Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the number of instances).",
"default": 0
}
}
},
"backup": {
"type": "object",
"properties": {
"destinationPath": {
"default": "s3://bucket/path/to/folder/",
"description": "Path to store the backup (i.e. s3://bucket/path/to/folder)",
"type": "string"
},
"enabled": {
"type": "boolean",
"description": "Enable pereiodic backups",
"default": false
"default": false,
"description": "Enable regular backups",
"type": "boolean"
},
"s3Region": {
"type": "string",
"description": "The AWS S3 region where backups are stored",
"default": "us-east-1"
"endpointURL": {
"default": "http://minio-gateway-service:9000",
"description": "S3 Endpoint used to upload data to the cloud",
"type": "string"
},
"s3Bucket": {
"type": "string",
"description": "The S3 bucket used for storing backups",
"default": "s3.example.org/postgres-backups"
},
"schedule": {
"type": "string",
"description": "Cron schedule for automated backups",
"default": "0 2 * * *"
},
"cleanupStrategy": {
"type": "string",
"description": "The strategy for cleaning up old backups",
"default": "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
"retentionPolicy": {
"default": "30d",
"description": "Retention policy",
"type": "string"
},
"s3AccessKey": {
"type": "string",
"description": "The access key for S3, used for authentication",
"default": "oobaiRus9pah8PhohL1ThaeTa4UVa7gu"
"default": "oobaiRus9pah8PhohL1ThaeTa4UVa7gu",
"description": "Access key for S3, used for authentication",
"type": "string"
},
"s3SecretKey": {
"type": "string",
"description": "The secret key for S3, used for authentication",
"default": "ju3eum4dekeich9ahM1te8waeGai0oog"
"default": "ju3eum4dekeich9ahM1te8waeGai0oog",
"description": "Secret key for S3, used for authentication",
"type": "string"
},
"resticPassword": {
"type": "string",
"description": "The password for Restic backup encryption",
"default": "ChaXoveekoh6eigh4siesheeda2quai0"
"schedule": {
"default": "0 2 * * * *",
"description": "Cron schedule for automated backups",
"type": "string"
}
}
},
"type": "object"
},
"bootstrap": {
"properties": {
"enabled": {
"default": false,
"description": "Restore database cluster from a backup",
"type": "boolean"
},
"oldName": {
"default": "",
"description": "Name of database cluster before deleting",
"type": "string"
},
"recoveryTime": {
"default": "",
"description": "Timestamp (PITR) up to which recovery will proceed, expressed in RFC 3339 format. If left empty, will restore latest",
"type": "string"
}
},
"type": "object"
},
"external": {
"default": false,
"description": "Enable external access from outside the cluster",
"type": "boolean"
},
"quorum": {
"properties": {
"maxSyncReplicas": {
"default": 0,
"description": "Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the total number of replicas)",
"type": "number"
},
"minSyncReplicas": {
"default": 0,
"description": "Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed",
"type": "number"
}
},
"type": "object"
},
"replicas": {
"default": 2,
"description": "Number of replicas",
"type": "number"
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
"default": {},
"description": "Explicit CPU and memory configuration for each FerretDB replica. When left empty, the preset defined in `resourcesPreset` is applied.",
"type": "object"
},
"resourcesPreset": {
"default": "micro",
"description": "Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.",
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
"enum": [
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
},
"size": {
"default": "10Gi",
"description": "Persistent Volume size",
"type": "string"
},
"storageClass": {
"default": "",
"description": "StorageClass used to store the data",
"type": "string"
}
}
}
},
"title": "Chart Values",
"type": "object"
}

View File

@@ -1,24 +1,30 @@
## @section Common parameters
## @param external Enable external access from outside the cluster
## @param size Persistent Volume size
## @param replicas Number of Postgres replicas
## @param storageClass StorageClass used to store the data
##
external: false
size: 10Gi
## @param replicas Number of replicas
replicas: 2
## @param resources Explicit CPU and memory configuration for each FerretDB replica. When left empty, the preset defined in `resourcesPreset` is applied.
resources: {}
# resources:
# cpu: 4000m
# memory: 4Gi
## @param resourcesPreset Default sizing preset used when `resources` is omitted. Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.
resourcesPreset: "micro"
## @param size Persistent Volume size
size: 10Gi
## @param storageClass StorageClass used to store the data
storageClass: ""
## @param external Enable external access from outside the cluster
external: false
## @section Application-specific parameters
##
## Configuration for the quorum-based synchronous replication
## @param quorum.minSyncReplicas Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed.
## @param quorum.maxSyncReplicas Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the number of instances).
## @param quorum.minSyncReplicas Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed
## @param quorum.maxSyncReplicas Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the total number of replicas)
quorum:
minSyncReplicas: 0
maxSyncReplicas: 0
## @section Configuration parameters
## @param users [object] Users configuration
## Example:
## users:
@@ -29,35 +35,36 @@ quorum:
##
users: {}
## @section Backup parameters
## @param backup.enabled Enable pereiodic backups
## @param backup.s3Region The AWS S3 region where backups are stored
## @param backup.s3Bucket The S3 bucket used for storing backups
## @section Backup parameters
##
## @param backup.enabled Enable regular backups
## @param backup.schedule Cron schedule for automated backups
## @param backup.cleanupStrategy The strategy for cleaning up old backups
## @param backup.s3AccessKey The access key for S3, used for authentication
## @param backup.s3SecretKey The secret key for S3, used for authentication
## @param backup.resticPassword The password for Restic backup encryption
## @param backup.retentionPolicy Retention policy
## @param backup.destinationPath Path to store the backup (i.e. s3://bucket/path/to/folder)
## @param backup.endpointURL S3 Endpoint used to upload data to the cloud
## @param backup.s3AccessKey Access key for S3, used for authentication
## @param backup.s3SecretKey Secret key for S3, used for authentication
backup:
enabled: false
s3Region: us-east-1
s3Bucket: s3.example.org/postgres-backups
schedule: "0 2 * * *"
cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
retentionPolicy: 30d
destinationPath: s3://bucket/path/to/folder/
endpointURL: http://minio-gateway-service:9000
schedule: "0 2 * * * *"
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0
## @param resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "nano"
## @section Bootstrap (recovery) parameters
##
## @param bootstrap.enabled Restore database cluster from a backup
## @param bootstrap.recoveryTime Timestamp (PITR) up to which recovery will proceed, expressed in RFC 3339 format. If left empty, the latest available backup is restored
## @param bootstrap.oldName Name of database cluster before deleting
##
bootstrap:
enabled: false
# example: 2020-11-26 15:22:00.00000+00
recoveryTime: ""
oldName: ""

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.5.0
version: 0.6.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -1,4 +1,5 @@
NGINX_CACHE_TAG = $(shell awk '$$1 == "version:" {print $$2}' Chart.yaml)
PRESET_ENUM := ["nano","micro","small","medium","large","xlarge","2xlarge"]
include ../../../scripts/common-envs.mk
include ../../../scripts/package.mk
@@ -22,7 +23,9 @@ image-nginx:
rm -f images/nginx-cache.json
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md
readme-generator-for-helm -v values.yaml -s values.schema.json -r README.md
yq -i -o json --indent 4 '.properties.haproxy.properties.resourcesPreset.enum = $(PRESET_ENUM)' values.schema.json
yq -i -o json --indent 4 '.properties.nginx.properties.resourcesPreset.enum = $(PRESET_ENUM)' values.schema.json
update:
tag=$$(git ls-remote --tags --sort="v:refname" https://github.com/chrislim2888/IP2Location-C-Library | awk -F'[/^]' 'END{print $$3}') && \

View File

@@ -1,8 +1,9 @@
# Managed Nginx Caching Service
# Managed Nginx-based HTTP Cache Service
The Nginx Caching Service is designed to optimize web traffic and enhance web application performance. This service combines custom-built Nginx instances with HAproxy for efficient caching and load balancing.
The Nginx-based HTTP caching service is designed to optimize web traffic and enhance web application performance.
This service combines custom-built Nginx instances with HAProxy for efficient caching and load balancing.
## Deployment infromation
## Deployment information
The Nginx instances include the following modules and features:
@@ -53,27 +54,77 @@ The deployment architecture is illustrated in the diagram below:
## Known issues
VTS module shows wrong upstream resonse time
- https://github.com/vozlt/nginx-module-vts/issues/198
- VTS module shows wrong upstream response time, [github.com/vozlt/nginx-module-vts#198](https://github.com/vozlt/nginx-module-vts/issues/198)
## Parameters
### Common parameters
| Name | Description | Value |
| ------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `size` | Persistent Volume size | `10Gi` |
| `storageClass` | StorageClass used to store the data | `""` |
| `haproxy.replicas` | Number of HAProxy replicas | `2` |
| `nginx.replicas` | Number of Nginx replicas | `2` |
| `haproxy.resources` | Resources | `{}` |
| `haproxy.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
| `nginx.resources` | Resources | `{}` |
| `nginx.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
| Name | Description | Value |
| -------------- | ----------------------------------------------- | ------- |
| `size` | Persistent Volume size | `10Gi` |
| `storageClass` | StorageClass used to store the data | `""` |
| `external` | Enable external access from outside the cluster | `false` |
### Configuration parameters
### Application-specific parameters
| Name | Description | Value |
| ----------- | ----------------------- | ----- |
| `endpoints` | Endpoints configuration | `[]` |
### HAProxy parameters
| Name | Description | Value |
| ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ | ------ |
| `haproxy.replicas` | Number of HAProxy replicas | `2` |
| `haproxy.resources` | Explicit CPU and memory configuration for each HAProxy replica. When left empty, the preset defined in `resourcesPreset` is applied. | `{}` |
| `haproxy.resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge. | `nano` |
### Nginx parameters
| Name | Description | Value |
| ----------------------- | ---------------------------------------------------------------------------------------------------------------------------------- | ------ |
| `nginx.replicas` | Number of Nginx replicas | `2` |
| `nginx.resources` | Explicit CPU and memory configuration for each nginx replica. When left empty, the preset defined in `resourcesPreset` is applied. | `{}` |
| `nginx.resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge. | `nano` |
## Parameter examples and reference
### resources and resourcesPreset
`resources` sets explicit CPU and memory configurations for each replica.
When left empty, the preset defined in `resourcesPreset` is applied.
```yaml
resources:
cpu: 4000m
memory: 4Gi
```
`resourcesPreset` sets named CPU and memory configurations for each replica.
This setting is ignored if the corresponding `resources` value is set.
| Preset name | CPU | memory |
|-------------|--------|---------|
| `nano` | `250m` | `128Mi` |
| `micro` | `500m` | `256Mi` |
| `small` | `1` | `512Mi` |
| `medium` | `1` | `1Gi` |
| `large` | `2` | `2Gi` |
| `xlarge` | `4` | `4Gi` |
| `2xlarge` | `8` | `8Gi` |
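Note that in this chart the sizing values are set per component, under `haproxy` and `nginx`, for example:
```yaml
haproxy:
  replicas: 2
  resourcesPreset: small
nginx:
  replicas: 2
  resources:
    cpu: 2000m
    memory: 2Gi
```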
### endpoints
`endpoints` is a flat list of IP addresses:
```yaml
endpoints:
- 10.100.3.1:80
- 10.100.3.11:80
- 10.100.3.2:80
- 10.100.3.12:80
- 10.100.3.3:80
- 10.100.3.13:80
```

View File

@@ -0,0 +1 @@
../../../library/cozy-lib

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/nginx-cache:0.5.0@sha256:99cd04f09f80eb0c60cc0b2f6bc8180ada7ada00cb594606447674953dfa1b67
ghcr.io/cozystack/cozystack/nginx-cache:0.6.1@sha256:e0a07082bb6fc6aeaae2315f335386f1705a646c72f9e0af512aebbca5cb2b15

View File

@@ -33,11 +33,7 @@ spec:
containers:
- image: haproxy:latest
name: haproxy
{{- if .Values.haproxy.resources }}
resources: {{- toYaml .Values.haproxy.resources | nindent 10 }}
{{- else if ne .Values.haproxy.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.haproxy.resourcesPreset "Release" .Release) | nindent 10 }}
{{- end }}
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list .Values.haproxy.resourcesPreset .Values.haproxy.resources $) | nindent 10 }}
ports:
- containerPort: 8080
name: http

View File

@@ -52,11 +52,7 @@ spec:
shareProcessNamespace: true
containers:
- name: nginx
{{- if $.Values.nginx.resources }}
resources: {{- toYaml $.Values.nginx.resources | nindent 10 }}
{{- else if ne $.Values.nginx.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" $.Values.nginx.resourcesPreset "Release" $.Release) | nindent 10 }}
{{- end }}
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list $.Values.nginx.resourcesPreset $.Values.nginx.resources $) | nindent 10 }}
image: "{{ $.Files.Get "images/nginx-cache.tag" | trim }}"
readinessProbe:
httpGet:

View File

@@ -0,0 +1,39 @@
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}-haproxy
spec:
replicas: {{ .Values.haproxy.replicas }}
minReplicas: 1
kind: http-cache
type: http-cache
selector:
app: {{ $.Release.Name }}-haproxy
version: {{ $.Chart.Version }}
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}-nginx
spec:
replicas: {{ .Values.nginx.replicas }}
minReplicas: 1
kind: http-cache
type: http-cache
selector:
app: {{ $.Release.Name }}-nginx-cache
version: {{ $.Chart.Version }}
---
apiVersion: cozystack.io/v1alpha1
kind: WorkloadMonitor
metadata:
name: {{ $.Release.Name }}
spec:
replicas: {{ .Values.replicas }}
minReplicas: 1
kind: http-cache
type: http-cache
selector:
app.kubernetes.io/instance: {{ $.Release.Name }}
version: {{ $.Chart.Version }}

View File

@@ -1,67 +1,85 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"endpoints": {
"default": [],
"description": "Endpoints configuration",
"items": {},
"type": "array"
},
"external": {
"type": "boolean",
"default": false,
"description": "Enable external access from outside the cluster",
"default": false
},
"size": {
"type": "string",
"description": "Persistent Volume size",
"default": "10Gi"
},
"storageClass": {
"type": "string",
"description": "StorageClass used to store the data",
"default": ""
"type": "boolean"
},
"haproxy": {
"type": "object",
"properties": {
"replicas": {
"type": "number",
"default": 2,
"description": "Number of HAProxy replicas",
"default": 2
"type": "number"
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
"default": {},
"description": "Explicit CPU and memory configuration for each HAProxy replica. When left empty, the preset defined in `resourcesPreset` is applied.",
"type": "object"
},
"resourcesPreset": {
"default": "nano",
"description": "Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge.",
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
"enum": [
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
}
}
},
"type": "object"
},
"nginx": {
"type": "object",
"properties": {
"replicas": {
"type": "number",
"default": 2,
"description": "Number of Nginx replicas",
"default": 2
"type": "number"
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
"default": {},
"description": "Explicit CPU and memory configuration for each nginx replica. When left empty, the preset defined in `resourcesPreset` is applied.",
"type": "object"
},
"resourcesPreset": {
"default": "nano",
"description": "Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge.",
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
"enum": [
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
}
}
},
"type": "object"
},
"endpoints": {
"type": "array",
"description": "Endpoints configuration",
"default": [],
"items": {}
"size": {
"default": "10Gi",
"description": "Persistent Volume size",
"type": "string"
},
"storageClass": {
"default": "",
"description": "StorageClass used to store the data",
"type": "string"
}
}
}
},
"title": "Chart Values",
"type": "object"
}

View File

@@ -1,45 +1,12 @@
## @section Common parameters
## @param external Enable external access from outside the cluster
## @param size Persistent Volume size
## @param storageClass StorageClass used to store the data
## @param haproxy.replicas Number of HAProxy replicas
## @param nginx.replicas Number of Nginx replicas
##
external: false
## @param size Persistent Volume size
size: 10Gi
## @param storageClass StorageClass used to store the data
storageClass: ""
haproxy:
replicas: 2
## @param haproxy.resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param haproxy.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "nano"
nginx:
replicas: 2
## @param nginx.resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param nginx.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "nano"
## @section Configuration parameters
## @param external Enable external access from outside the cluster
external: false
## @section Application-specific parameters
## @param endpoints Endpoints configuration
## Example:
@@ -52,3 +19,29 @@ nginx:
## - 10.100.3.13:80
##
endpoints: []
## @section HAProxy parameters
haproxy:
## @param haproxy.replicas Number of HAProxy replicas
replicas: 2
## @param haproxy.resources Explicit CPU and memory configuration for each HAProxy replica. When left empty, the preset defined in `resourcesPreset` is applied.
resources: {}
# resources:
# cpu: 4000m
# memory: 4Gi
## @param haproxy.resourcesPreset Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge.
resourcesPreset: "nano"
## @section Nginx parameters
nginx:
## @param nginx.replicas Number of Nginx replicas
replicas: 2
## @param nginx.resources Explicit CPU and memory configuration for each nginx replica. When left empty, the preset defined in `resourcesPreset` is applied.
resources: {}
# resources:
# cpu: 4000m
# memory: 4Gi
## @param nginx.resourcesPreset Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge.
resourcesPreset: "nano"

View File

@@ -16,7 +16,7 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.6.0
version: 0.8.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to

View File

@@ -1,4 +1,7 @@
include ../../../scripts/package.mk
PRESET_ENUM := ["nano","micro","small","medium","large","xlarge","2xlarge"]
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md
readme-generator-for-helm -v values.yaml -s values.schema.json -r README.md
yq -i -o json --indent 4 '.properties.kafka.properties.resourcesPreset.enum = $(PRESET_ENUM)' values.schema.json
yq -i -o json --indent 4 '.properties.zookeeper.properties.resourcesPreset.enum = $(PRESET_ENUM)' values.schema.json

View File

@@ -4,22 +4,77 @@
### Common parameters
| Name | Description | Value |
| --------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `kafka.size` | Persistent Volume size for Kafka | `10Gi` |
| `kafka.replicas` | Number of Kafka replicas | `3` |
| `kafka.storageClass` | StorageClass used to store the Kafka data | `""` |
| `zookeeper.size` | Persistent Volume size for ZooKeeper | `5Gi` |
| `zookeeper.replicas` | Number of ZooKeeper replicas | `3` |
| `zookeeper.storageClass` | StorageClass used to store the ZooKeeper data | `""` |
| `kafka.resources` | Resources | `{}` |
| `kafka.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
| `zookeeper.resources` | Resources | `{}` |
| `zookeeper.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | `nano` |
| Name | Description | Value |
| ---------- | ----------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
### Configuration parameters
### Application-specific parameters
| Name | Description | Value |
| -------- | -------------------- | ----- |
| `topics` | Topics configuration | `[]` |
| Name | Description | Value |
| -------- | ---------------------------------- | ----- |
| `topics` | Topics configuration (see example) | `[]` |
### Kafka configuration
| Name | Description | Value |
| ----------------------- | ---------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `kafka.replicas` | Number of Kafka replicas | `3` |
| `kafka.resources` | Explicit CPU and memory configuration for each Kafka replica. When left empty, the preset defined in `resourcesPreset` is applied. | `{}` |
| `kafka.resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge. | `small` |
| `kafka.size` | Persistent Volume size for Kafka | `10Gi` |
| `kafka.storageClass` | StorageClass used to store the Kafka data | `""` |
### Zookeeper configuration
| Name | Description | Value |
| --------------------------- | -------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `zookeeper.replicas` | Number of ZooKeeper replicas | `3` |
| `zookeeper.resources` | Explicit CPU and memory configuration for each Zookeeper replica. When left empty, the preset defined in `resourcesPreset` is applied. | `{}` |
| `zookeeper.resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge. | `small` |
| `zookeeper.size` | Persistent Volume size for ZooKeeper | `5Gi` |
| `zookeeper.storageClass` | StorageClass used to store the ZooKeeper data | `""` |
## Parameter examples and reference
### resources and resourcesPreset
`resources` sets explicit CPU and memory configurations for each replica.
When left empty, the preset defined in `resourcesPreset` is applied.
```yaml
resources:
  cpu: 4000m
  memory: 4Gi
```
`resourcesPreset` sets named CPU and memory configurations for each replica.
This setting is ignored if the corresponding `resources` value is set.
| Preset name | CPU | memory |
|-------------|--------|---------|
| `nano` | `250m` | `128Mi` |
| `micro` | `500m` | `256Mi` |
| `small` | `1` | `512Mi` |
| `medium` | `1` | `1Gi` |
| `large` | `2` | `2Gi` |
| `xlarge` | `4` | `4Gi` |
| `2xlarge` | `8` | `8Gi` |
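As an illustrative sketch using the parameters documented above (the figures are placeholders, not sizing advice), Kafka can be given explicit resources while ZooKeeper stays on its preset:
```yaml
kafka:
  replicas: 3
  resources:                # explicit values override the preset
    cpu: 2000m
    memory: 4Gi
zookeeper:
  replicas: 3
  resourcesPreset: small    # applied because resources is left empty
```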
### topics
```yaml
topics:
  - name: Results
    partitions: 1
    replicas: 3
    config:
      min.insync.replicas: 2
  - name: Orders
    config:
      cleanup.policy: compact
      segment.ms: 3600000
      max.compaction.lag.ms: 5400000
      min.insync.replicas: 2
    partitions: 1
    replicas: 3
```

View File

@@ -0,0 +1 @@
../../../library/cozy-lib

View File

@@ -25,3 +25,14 @@ rules:
- {{ .Release.Name }}
- {{ $.Release.Name }}-zookeeper
verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Release.Name }}-dashboard-resources
subjects:
{{ include "cozy-lib.rbac.subjectsForTenantAndAccessLevel" (list "use" .Release.Namespace) }}
roleRef:
kind: Role
name: {{ .Release.Name }}-dashboard-resources
apiGroup: rbac.authorization.k8s.io

View File

@@ -8,11 +8,7 @@ metadata:
spec:
kafka:
replicas: {{ .Values.kafka.replicas }}
{{- if .Values.kafka.resources }}
resources: {{- toYaml .Values.kafka.resources | nindent 6 }}
{{- else if ne .Values.kafka.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.kafka.resourcesPreset "Release" .Release) | nindent 6 }}
{{- end }}
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list .Values.kafka.resourcesPreset .Values.kafka.resources $) | nindent 6 }}
listeners:
- name: plain
port: 9092
@@ -70,11 +66,7 @@ spec:
key: kafka-metrics-config.yml
zookeeper:
replicas: {{ .Values.zookeeper.replicas }}
{{- if .Values.zookeeper.resources }}
resources: {{- toYaml .Values.zookeeper.resources | nindent 6 }}
{{- else if ne .Values.zookeeper.resourcesPreset "none" }}
resources: {{- include "resources.preset" (dict "type" .Values.zookeeper.resourcesPreset "Release" .Release) | nindent 6 }}
{{- end }}
resources: {{- include "cozy-lib.resources.defaultingSanitize" (list .Values.zookeeper.resourcesPreset .Values.zookeeper.resources $) | nindent 6 }}
storage:
type: persistent-claim
{{- with .Values.zookeeper.size }}

View File

@@ -1,77 +1,95 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"external": {
"type": "boolean",
"default": false,
"description": "Enable external access from outside the cluster",
"default": false
"type": "boolean"
},
"kafka": {
"type": "object",
"properties": {
"size": {
"type": "string",
"description": "Persistent Volume size for Kafka",
"default": "10Gi"
},
"replicas": {
"type": "number",
"default": 3,
"description": "Number of Kafka replicas",
"default": 3
},
"storageClass": {
"type": "string",
"description": "StorageClass used to store the Kafka data",
"default": ""
"type": "number"
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
"default": {},
"description": "Explicit CPU and memory configuration for each Kafka replica. When left empty, the preset defined in `resourcesPreset` is applied.",
"type": "object"
},
"resourcesPreset": {
"default": "small",
"description": "Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge.",
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
}
}
},
"zookeeper": {
"type": "object",
"properties": {
"enum": [
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
},
"size": {
"type": "string",
"description": "Persistent Volume size for ZooKeeper",
"default": "5Gi"
},
"replicas": {
"type": "number",
"description": "Number of ZooKeeper replicas",
"default": 3
"default": "10Gi",
"description": "Persistent Volume size for Kafka",
"type": "string"
},
"storageClass": {
"type": "string",
"description": "StorageClass used to store the ZooKeeper data",
"default": ""
},
"resources": {
"type": "object",
"description": "Resources",
"default": {}
},
"resourcesPreset": {
"type": "string",
"description": "Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).",
"default": "nano"
"default": "",
"description": "StorageClass used to store the Kafka data",
"type": "string"
}
}
},
"type": "object"
},
"topics": {
"type": "array",
"description": "Topics configuration",
"default": [],
"items": {}
"description": "Topics configuration (see example)",
"items": {},
"type": "array"
},
"zookeeper": {
"properties": {
"replicas": {
"default": 3,
"description": "Number of ZooKeeper replicas",
"type": "number"
},
"resources": {
"default": {},
"description": "Explicit CPU and memory configuration for each Zookeeper replica. When left empty, the preset defined in `resourcesPreset` is applied.",
"type": "object"
},
"resourcesPreset": {
"default": "small",
"description": "Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge.",
"type": "string",
"enum": [
"nano",
"micro",
"small",
"medium",
"large",
"xlarge",
"2xlarge"
]
},
"size": {
"default": "5Gi",
"description": "Persistent Volume size for ZooKeeper",
"type": "string"
},
"storageClass": {
"default": "",
"description": "StorageClass used to store the ZooKeeper data",
"type": "string"
}
},
"type": "object"
}
}
}
},
"title": "Chart Values",
"type": "object"
}

View File

@@ -1,52 +1,12 @@
## @section Common parameters
## @param external Enable external access from outside the cluster
## @param kafka.size Persistent Volume size for Kafka
## @param kafka.replicas Number of Kafka replicas
## @param kafka.storageClass StorageClass used to store the Kafka data
## @param zookeeper.size Persistent Volume size for ZooKeeper
## @param zookeeper.replicas Number of ZooKeeper replicas
## @param zookeeper.storageClass StorageClass used to store the ZooKeeper data
##
## @param external Enable external access from outside the cluster
external: false
kafka:
size: 10Gi
replicas: 3
storageClass: ""
## @param kafka.resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param kafka.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "nano"
zookeeper:
size: 5Gi
replicas: 3
storageClass: ""
## @param zookeeper.resources Resources
resources: {}
# resources:
# limits:
# cpu: 4000m
# memory: 4Gi
# requests:
# cpu: 100m
# memory: 512Mi
## @param zookeeper.resourcesPreset Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production).
resourcesPreset: "nano"
## @section Configuration parameters
## @param topics Topics configuration
## @section Application-specific parameters
##
## @param topics Topics configuration (see example)
## Example:
## topics:
## - name: Results
@@ -64,3 +24,41 @@ zookeeper:
## replicas: 3
##
topics: []
## @section Kafka configuration
##
kafka:
## @param kafka.replicas Number of Kafka replicas
replicas: 3
## @param kafka.resources Explicit CPU and memory configuration for each Kafka replica. When left empty, the preset defined in `resourcesPreset` is applied.
resources: {}
# resources:
# cpu: 4000m
# memory: 4Gi
## @param kafka.resourcesPreset Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge.
resourcesPreset: "small"
## @param kafka.size Persistent Volume size for Kafka
size: 10Gi
## @param kafka.storageClass StorageClass used to store the Kafka data
storageClass: ""
## @section Zookeeper configuration
##
zookeeper:
## @param zookeeper.replicas Number of ZooKeeper replicas
replicas: 3
## @param zookeeper.resources Explicit CPU and memory configuration for each Zookeeper replica. When left empty, the preset defined in `resourcesPreset` is applied.
resources: {}
# resources:
# cpu: 4000m
# memory: 4Gi
## @param zookeeper.resourcesPreset Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge.
resourcesPreset: "small"
## @param zookeeper.size Persistent Volume size for ZooKeeper
size: 5Gi
## @param zookeeper.storageClass StorageClass used to store the ZooKeeper data
storageClass: ""

View File

@@ -16,10 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.21.0
version: 0.26.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: 1.32.4
appVersion: 1.32.6

View File

@@ -1,15 +1,18 @@
KUBERNETES_VERSION = v1.32
KUBERNETES_PKG_TAG = $(shell awk '$$1 == "version:" {print $$2}' Chart.yaml)
PRESET_ENUM := ["nano","micro","small","medium","large","xlarge","2xlarge"]
include ../../../scripts/common-envs.mk
include ../../../scripts/package.mk
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md
yq -o json -i '.properties.controlPlane.properties.apiServer.properties.resourcesPreset.enum = ["none","nano","micro","small","medium","large","xlarge","2xlarge"]' values.schema.json
yq -o json -i '.properties.controlPlane.properties.controllerManager.properties.resourcesPreset.enum = ["none","nano","micro","small","medium","large","xlarge","2xlarge"]' values.schema.json
yq -o json -i '.properties.controlPlane.properties.scheduler.properties.resourcesPreset.enum = ["none","nano","micro","small","medium","large","xlarge","2xlarge"]' values.schema.json
yq -o json -i '.properties.controlPlane.properties.konnectivity.properties.server.properties.resourcesPreset.enum = ["none","nano","micro","small","medium","large","xlarge","2xlarge"]' values.schema.json
readme-generator-for-helm -v values.yaml -s values.schema.json -r README.md
yq -o=json -i '.properties.version.enum = (load("files/versions.yaml") | keys)' values.schema.json
yq -o json -i '.properties.addons.properties.ingressNginx.properties.exposeMethod.enum = ["Proxied","LoadBalancer"]' values.schema.json
yq -o json -i '.properties.controlPlane.properties.apiServer.properties.resourcesPreset.enum = $(PRESET_ENUM)' values.schema.json
yq -o json -i '.properties.controlPlane.properties.controllerManager.properties.resourcesPreset.enum = $(PRESET_ENUM)' values.schema.json
yq -o json -i '.properties.controlPlane.properties.scheduler.properties.resourcesPreset.enum = $(PRESET_ENUM)' values.schema.json
yq -o json -i '.properties.controlPlane.properties.konnectivity.properties.server.properties.resourcesPreset.enum = $(PRESET_ENUM)' values.schema.json
image: image-ubuntu-container-disk image-kubevirt-cloud-provider image-kubevirt-csi-driver image-cluster-autoscaler
@@ -63,6 +66,8 @@ image-kubevirt-csi-driver:
--load=$(LOAD)
echo "$(REGISTRY)/kubevirt-csi-driver:$(call settag,$(KUBERNETES_PKG_TAG))@$$(yq e '."containerimage.digest"' images/kubevirt-csi-driver.json -o json -r)" \
> images/kubevirt-csi-driver.tag
IMAGE=$$(cat images/kubevirt-csi-driver.tag) \
yq -i '.csiDriver.image = strenv(IMAGE)' ../../system/kubevirt-csi-node/values.yaml
rm -f images/kubevirt-csi-driver.json

View File

@@ -11,6 +11,9 @@ Tenant clusters are fully separated from the management cluster and are intended
Within a tenant cluster, users can take advantage of LoadBalancer services and easily provision physical volumes as needed.
The control-plane operates within containers, while the worker nodes are deployed as virtual machines, all seamlessly managed by the application.
The Kubernetes version in tenant clusters is independent of the version running in the management cluster.
Users can select any minor version from 1.28 to 1.33; the latest patch release of the chosen version is deployed.
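For example, the minor version is selected through the chart's `version` parameter (documented in the tables below); this snippet is a minimal sketch with illustrative values:
```yaml
version: v1.31    # vMAJOR.MINOR; the latest patch release of this minor version is deployed
controlPlane:
  replicas: 2     # number of control-plane replicas
```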
## Why Use a Managed Kubernetes Cluster?
Kubernetes has emerged as the industry standard, providing a unified and accessible API, primarily utilizing YAML for configuration.
@@ -81,62 +84,79 @@ See the reference for components utilized in this service:
### Common Parameters
| Name | Description | Value |
| ----------------------- | ----------------------------------------------------------------------------------------------------------------- | ------------ |
| `host` | Hostname used to access the Kubernetes cluster externally. Defaults to `<cluster-name>.<tenant-host>` when empty. | `""` |
| `controlPlane.replicas` | Number of replicas for Kubernetes control-plane components. | `2` |
| `storageClass` | StorageClass used to store user data. | `replicated` |
| `nodeGroups` | nodeGroups configuration | `{}` |
| Name | Description | Value |
| -------------- | ------------------------------------- | ------------ |
| `storageClass` | StorageClass used to store user data. | `replicated` |
### Application-specific parameters
| Name | Description | Value |
| ------------ | ----------------------------------------------------------------------------------------------------------------- | ------- |
| `version`    | Kubernetes version given as vMAJOR.MINOR. Versions from 1.28 to 1.33 are available.                                | `v1.32` |
| `host` | Hostname used to access the Kubernetes cluster externally. Defaults to `<cluster-name>.<tenant-host>` when empty. | `""` |
| `nodeGroups` | Worker nodes configuration (see example) | `{}` |
### Cluster Addons
| Name | Description | Value |
| --------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `addons.certManager.enabled` | Enable cert-manager, which automatically creates and manages SSL/TLS certificates. | `false` |
| `addons.certManager.valuesOverride` | Custom values to override | `{}` |
| `addons.cilium.valuesOverride` | Custom values to override | `{}` |
| `addons.gatewayAPI.enabled` | Enable the Gateway API | `false` |
| `addons.ingressNginx.enabled` | Enable the Ingress-NGINX controller (requires nodes labeled with the 'ingress-nginx' role). | `false` |
| `addons.ingressNginx.valuesOverride` | Custom values to override | `{}` |
| `addons.ingressNginx.hosts` | List of domain names that the parent cluster should route to this tenant cluster. | `[]` |
| `addons.gpuOperator.enabled` | Enable the GPU-operator | `false` |
| `addons.gpuOperator.valuesOverride` | Custom values to override | `{}` |
| `addons.fluxcd.enabled` | Enable FluxCD | `false` |
| `addons.fluxcd.valuesOverride` | Custom values to override | `{}` |
| `addons.monitoringAgents.enabled` | Enable monitoring agents (Fluent Bit and VMAgents) to send logs and metrics. If tenant monitoring is enabled, data is sent to tenant storage; otherwise, it goes to root storage. | `false` |
| `addons.monitoringAgents.valuesOverride` | Custom values to override | `{}` |
| `addons.verticalPodAutoscaler.valuesOverride` | Custom values to override | `{}` |
| Name | Description | Value |
| --------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------- |
| `addons.certManager.enabled` | Enable cert-manager, which automatically creates and manages SSL/TLS certificates. | `false` |
| `addons.certManager.valuesOverride` | Custom values to override | `{}` |
| `addons.cilium.valuesOverride` | Custom values to override | `{}` |
| `addons.gatewayAPI.enabled` | Enable the Gateway API | `false` |
| `addons.ingressNginx.enabled` | Enable the Ingress-NGINX controller (requires nodes labeled with the 'ingress-nginx' role). | `false` |
| `addons.ingressNginx.exposeMethod` | Method to expose the Ingress-NGINX controller. (allowed values: Proxied, LoadBalancer) | `Proxied` |
| `addons.ingressNginx.hosts` | List of domain names that the parent cluster should route to this tenant cluster. Taken into account only when `exposeMethod` is set to `Proxied`. | `[]` |
| `addons.ingressNginx.valuesOverride` | Custom values to override | `{}` |
| `addons.gpuOperator.enabled` | Enable the GPU-operator | `false` |
| `addons.gpuOperator.valuesOverride` | Custom values to override | `{}` |
| `addons.fluxcd.enabled` | Enable FluxCD | `false` |
| `addons.fluxcd.valuesOverride` | Custom values to override | `{}` |
| `addons.monitoringAgents.enabled` | Enable monitoring agents (Fluent Bit and VMAgents) to send logs and metrics. If tenant monitoring is enabled, data is sent to tenant storage; otherwise, it goes to root storage. | `false` |
| `addons.monitoringAgents.valuesOverride` | Custom values to override | `{}` |
| `addons.verticalPodAutoscaler.valuesOverride` | Custom values to override | `{}` |
| `addons.velero.enabled`                       | Enable Velero for backing up and restoring the Kubernetes cluster.                                                                                                                  | `false`   |
| `addons.velero.valuesOverride` | Custom values to override | `{}` |
### Kubernetes Control Plane Configuration
| Name | Description | Value |
| -------------------------------------------------- | ---------------------------------------------------------------------------- | ------- |
| `controlPlane.apiServer.resources` | Explicit CPU/memory resource requests and limits for the API server. | `{}` |
| `controlPlane.apiServer.resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `small` |
| `controlPlane.controllerManager.resources` | Explicit CPU/memory resource requests and limits for the controller manager. | `{}` |
| `controlPlane.controllerManager.resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `micro` |
| `controlPlane.scheduler.resources` | Explicit CPU/memory resource requests and limits for the scheduler. | `{}` |
| `controlPlane.scheduler.resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `micro` |
| `controlPlane.konnectivity.server.resources` | Explicit CPU/memory resource requests and limits for the Konnectivity. | `{}` |
| `controlPlane.konnectivity.server.resourcesPreset` | Use a common resources preset when `resources` is not set explicitly. | `micro` |
| Name | Description | Value |
| -------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------- | -------- |
| `controlPlane.replicas` | Number of replicas for Kubernetes control-plane components. | `2` |
| `controlPlane.apiServer.resources` | Explicit CPU and memory configuration for the API Server. When left empty, the preset defined in `resourcesPreset` is applied. | `{}` |
| `controlPlane.apiServer.resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge. | `medium` |
| `controlPlane.controllerManager.resources` | Explicit CPU and memory configuration for the Controller Manager. When left empty, the preset defined in `resourcesPreset` is applied. | `{}` |
| `controlPlane.controllerManager.resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge. | `micro` |
| `controlPlane.scheduler.resources` | Explicit CPU and memory configuration for the Scheduler. When left empty, the preset defined in `resourcesPreset` is applied. | `{}` |
| `controlPlane.scheduler.resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge. | `micro` |
| `controlPlane.konnectivity.server.resources` | Explicit CPU and memory configuration for Konnectivity. When left empty, the preset defined in `resourcesPreset` is applied. | `{}` |
| `controlPlane.konnectivity.server.resourcesPreset` | Default sizing preset used when `resources` is omitted. Allowed values: nano, micro, small, medium, large, xlarge, 2xlarge. | `micro` |
In production environments, it's recommended to set `resources` explicitly.
Example of `controlPlane.*.resources`:
## Parameter examples and reference
### resources and resourcesPreset
`resources` sets explicit CPU and memory configurations for each replica.
When left empty, the preset defined in `resourcesPreset` is applied.
```yaml
resources:
  limits:
    cpu: 4000m
    memory: 4Gi
  requests:
    cpu: 100m
    memory: 512Mi
  cpu: 4000m
  memory: 4Gi
```
Allowed values for `controlPlane.*.resourcesPreset` are `none`, `nano`, `micro`, `small`, `medium`, `large`, `xlarge`, `2xlarge`.
This value is ignored if the corresponding `resources` value is set.
`resourcesPreset` sets named CPU and memory configurations for each replica.
This setting is ignored if the corresponding `resources` value is set.
## Resources Reference
| Preset name | CPU | memory |
|-------------|--------|---------|
| `nano` | `250m` | `128Mi` |
| `micro` | `500m` | `256Mi` |
| `small` | `1` | `512Mi` |
| `medium` | `1` | `1Gi` |
| `large` | `2` | `2Gi` |
| `xlarge` | `4` | `4Gi` |
| `2xlarge` | `8` | `8Gi` |
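The same pattern applies to every control-plane component. As a minimal sketch based on the parameters above (the sizes are illustrative, not recommendations), the API server can be sized explicitly while the other components keep their presets:
```yaml
controlPlane:
  replicas: 2
  apiServer:
    resources:              # explicit values take precedence over resourcesPreset
      cpu: 2000m
      memory: 4Gi
  controllerManager:
    resourcesPreset: micro  # applied because resources is left empty
  scheduler:
    resourcesPreset: micro
```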
### instanceType Resources
@@ -300,4 +320,3 @@ Specific characteristics of this series are:
workload.
- *vCPU-To-Memory Ratio (1:4)* - A vCPU-to-Memory ratio of 1:4 starting from
the medium size.

View File

@@ -0,0 +1 @@
../../../library/cozy-lib

View File

@@ -0,0 +1,6 @@
"v1.28": "v1.28.15"
"v1.29": "v1.29.15"
"v1.30": "v1.30.14"
"v1.31": "v1.31.10"
"v1.32": "v1.32.6"
"v1.33": "v1.33.0"

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/cluster-autoscaler:0.21.0@sha256:7315850634728a5864a3de3150c12f0e1454f3f1ce33cdf21a278f57611dd5e9
ghcr.io/cozystack/cozystack/cluster-autoscaler:0.26.0@sha256:3a8170433e1632e5cc2b6d9db34d0605e8e6c63c158282c38450415e700e932e

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/kubevirt-cloud-provider:0.21.0@sha256:6962bdf51ab2ff40b420b9cff7c850aeea02187da2a65a67f10e0471744649d7
ghcr.io/cozystack/cozystack/kubevirt-cloud-provider:0.26.0@sha256:71f9afa218693a890f827cb5cda98ba327302bd9f58afde767740557538e07d9

View File

@@ -8,7 +8,7 @@ ENV GOARCH=$TARGETARCH
RUN git clone https://github.com/kubevirt/cloud-provider-kubevirt /go/src/kubevirt.io/cloud-provider-kubevirt \
&& cd /go/src/kubevirt.io/cloud-provider-kubevirt \
&& git checkout 443a1fe
&& git checkout a0acf33
WORKDIR /go/src/kubevirt.io/cloud-provider-kubevirt

View File

@@ -37,7 +37,7 @@ index 74166b5d9..4e744f8de 100644
klog.Infof("Initializing kubevirtEPSController")
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller.go b/pkg/controller/kubevirteps/kubevirteps_controller.go
index 6f6e3d322..b56882c12 100644
index 53388eb8e..b56882c12 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller.go
@@ -54,10 +54,10 @@ type Controller struct {
@@ -286,22 +286,6 @@ index 6f6e3d322..b56882c12 100644
for _, eps := range slicesToDelete {
err := c.infraClient.DiscoveryV1().EndpointSlices(eps.Namespace).Delete(context.TODO(), eps.Name, metav1.DeleteOptions{})
if err != nil {
@@ -474,11 +538,11 @@ func (c *Controller) reconcileByAddressType(service *v1.Service, tenantSlices []
// Create the desired port configuration
var desiredPorts []discovery.EndpointPort
- for _, port := range service.Spec.Ports {
+ for i := range service.Spec.Ports {
desiredPorts = append(desiredPorts, discovery.EndpointPort{
- Port: &port.TargetPort.IntVal,
- Protocol: &port.Protocol,
- Name: &port.Name,
+ Port: &service.Spec.Ports[i].TargetPort.IntVal,
+ Protocol: &service.Spec.Ports[i].Protocol,
+ Name: &service.Spec.Ports[i].Name,
})
}
@@ -588,55 +652,114 @@ func ownedBy(endpointSlice *discovery.EndpointSlice, svc *v1.Service) bool {
return false
}
@@ -437,18 +421,10 @@ index 6f6e3d322..b56882c12 100644
return nil
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller_test.go b/pkg/controller/kubevirteps/kubevirteps_controller_test.go
index 1fb86e25f..14d92d340 100644
index 1c97035b4..14d92d340 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller_test.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller_test.go
@@ -13,6 +13,7 @@ import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/util/intstr"
+ "k8s.io/apimachinery/pkg/util/sets"
dfake "k8s.io/client-go/dynamic/fake"
"k8s.io/client-go/kubernetes/fake"
"k8s.io/client-go/testing"
@@ -189,7 +190,7 @@ func setupTestKubevirtEPSController() *testKubevirtEPSController {
@@ -190,7 +190,7 @@ func setupTestKubevirtEPSController() *testKubevirtEPSController {
}: "VirtualMachineInstanceList",
})
@@ -457,83 +433,87 @@ index 1fb86e25f..14d92d340 100644
err := controller.Init()
if err != nil {
@@ -686,5 +687,229 @@ var _ = g.Describe("KubevirtEPSController", g.Ordered, func() {
return false, err
}).Should(BeTrue(), "EndpointSlice in infra cluster should be recreated by the controller after deletion")
})
+
+ g.It("Should correctly handle multiple unique ports in EndpointSlice", func() {
+ // Create a VMI in the infra cluster
+ createAndAssertVMI("worker-0-test", "ip-10-32-5-13", "123.45.67.89")
+
+ // Create an EndpointSlice in the tenant cluster
+ createAndAssertTenantSlice("test-epslice", "tenant-service-name", discoveryv1.AddressTypeIPv4,
+ *createPort("http", 80, v1.ProtocolTCP),
+ []discoveryv1.Endpoint{*createEndpoint("123.45.67.89", "worker-0-test", true, true, false)})
+
@@ -697,51 +697,43 @@ var _ = g.Describe("KubevirtEPSController", g.Ordered, func() {
*createPort("http", 80, v1.ProtocolTCP),
[]discoveryv1.Endpoint{*createEndpoint("123.45.67.89", "worker-0-test", true, true, false)})
- // Define several unique ports for the Service
+ // Define multiple ports for the Service
+ servicePorts := []v1.ServicePort{
+ {
servicePorts := []v1.ServicePort{
{
- Name: "client",
- Protocol: v1.ProtocolTCP,
- Port: 10001,
- TargetPort: intstr.FromInt(30396),
- NodePort: 30396,
- AppProtocol: nil,
+ Name: "client",
+ Protocol: v1.ProtocolTCP,
+ Port: 10001,
+ TargetPort: intstr.FromInt(30396),
+ NodePort: 30396,
+ },
+ {
},
{
- Name: "dashboard",
- Protocol: v1.ProtocolTCP,
- Port: 8265,
- TargetPort: intstr.FromInt(31003),
- NodePort: 31003,
- AppProtocol: nil,
+ Name: "dashboard",
+ Protocol: v1.ProtocolTCP,
+ Port: 8265,
+ TargetPort: intstr.FromInt(31003),
+ NodePort: 31003,
+ },
+ {
},
{
- Name: "metrics",
- Protocol: v1.ProtocolTCP,
- Port: 8080,
- TargetPort: intstr.FromInt(30452),
- NodePort: 30452,
- AppProtocol: nil,
+ Name: "metrics",
+ Protocol: v1.ProtocolTCP,
+ Port: 8080,
+ TargetPort: intstr.FromInt(30452),
+ NodePort: 30452,
+ },
+ }
+
+ createAndAssertInfraServiceLB("infra-multiport-service", "tenant-service-name", "test-cluster",
},
}
- // Create a Service with the first port
createAndAssertInfraServiceLB("infra-multiport-service", "tenant-service-name", "test-cluster",
- servicePorts[0],
- v1.ServiceExternalTrafficPolicyLocal)
+ servicePorts[0], v1.ServiceExternalTrafficPolicyLocal)
+
+ svc, err := testVals.infraClient.CoreV1().Services(infraNamespace).Get(context.TODO(), "infra-multiport-service", metav1.GetOptions{})
+ Expect(err).To(BeNil())
+
+ svc.Spec.Ports = servicePorts
+ _, err = testVals.infraClient.CoreV1().Services(infraNamespace).Update(context.TODO(), svc, metav1.UpdateOptions{})
+ Expect(err).To(BeNil())
+
+ var epsListMultiPort *discoveryv1.EndpointSliceList
+
+ Eventually(func() (bool, error) {
+ epsListMultiPort, err = testVals.infraClient.DiscoveryV1().EndpointSlices(infraNamespace).List(context.TODO(), metav1.ListOptions{})
+ if len(epsListMultiPort.Items) != 1 {
+ return false, err
+ }
+
+ createdSlice := epsListMultiPort.Items[0]
+ expectedPortNames := []string{"client", "dashboard", "metrics"}
+ foundPortNames := []string{}
+
+ for _, port := range createdSlice.Ports {
+ if port.Name != nil {
+ foundPortNames = append(foundPortNames, *port.Name)
+ }
+ }
+
+ if len(foundPortNames) != len(expectedPortNames) {
+ return false, err
+ }
+
+ portSet := sets.NewString(foundPortNames...)
+ expectedPortSet := sets.NewString(expectedPortNames...)
+ return portSet.Equal(expectedPortSet), err
+ }).Should(BeTrue(), "EndpointSlice should contain all unique ports from the Service without duplicates")
+ })
+
- // Update the Service by adding the remaining ports
svc, err := testVals.infraClient.CoreV1().Services(infraNamespace).Get(context.TODO(), "infra-multiport-service", metav1.GetOptions{})
Expect(err).To(BeNil())
svc.Spec.Ports = servicePorts
-
_, err = testVals.infraClient.CoreV1().Services(infraNamespace).Update(context.TODO(), svc, metav1.UpdateOptions{})
Expect(err).To(BeNil())
var epsListMultiPort *discoveryv1.EndpointSliceList
- // Verify that the EndpointSlice is created with correct unique ports
Eventually(func() (bool, error) {
epsListMultiPort, err = testVals.infraClient.DiscoveryV1().EndpointSlices(infraNamespace).List(context.TODO(), metav1.ListOptions{})
if len(epsListMultiPort.Items) != 1 {
@@ -758,7 +750,6 @@ var _ = g.Describe("KubevirtEPSController", g.Ordered, func() {
}
}
- // Verify that all expected ports are present and without duplicates
if len(foundPortNames) != len(expectedPortNames) {
return false, err
}
@@ -769,5 +760,156 @@ var _ = g.Describe("KubevirtEPSController", g.Ordered, func() {
}).Should(BeTrue(), "EndpointSlice should contain all unique ports from the Service without duplicates")
})
+ g.It("Should not panic when Service changes to have a non-nil selector, causing EndpointSlice deletion with no new slices to create", func() {
+ createAndAssertVMI("worker-0-test", "ip-10-32-5-13", "123.45.67.89")
+ createAndAssertTenantSlice("test-epslice", "tenant-service-name", discoveryv1.AddressTypeIPv4,

View File

@@ -0,0 +1,142 @@
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller.go b/pkg/controller/kubevirteps/kubevirteps_controller.go
index 53388eb8e..28644236f 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller.go
@@ -12,7 +12,6 @@ import (
apiequality "k8s.io/apimachinery/pkg/api/equality"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
- "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
@@ -669,35 +668,50 @@ func (c *Controller) getDesiredEndpoints(service *v1.Service, tenantSlices []*di
for _, slice := range tenantSlices {
for _, endpoint := range slice.Endpoints {
// find all unique nodes that correspond to an endpoint in a tenant slice
+ if endpoint.NodeName == nil {
+ klog.Warningf("Skipping endpoint without NodeName in slice %s/%s", slice.Namespace, slice.Name)
+ continue
+ }
nodeSet.Insert(*endpoint.NodeName)
}
}
- klog.Infof("Desired nodes for service %s in namespace %s: %v", service.Name, service.Namespace, sets.List(nodeSet))
+ klog.Infof("Desired nodes for service %s/%s: %v", service.Namespace, service.Name, sets.List(nodeSet))
for _, node := range sets.List(nodeSet) {
// find vmi for node name
- obj := &unstructured.Unstructured{}
- vmi := &kubevirtv1.VirtualMachineInstance{}
-
- obj, err := c.infraDynamic.Resource(kubevirtv1.VirtualMachineInstanceGroupVersionKind.GroupVersion().WithResource("virtualmachineinstances")).Namespace(c.infraNamespace).Get(context.TODO(), node, metav1.GetOptions{})
+ obj, err := c.infraDynamic.
+ Resource(kubevirtv1.VirtualMachineInstanceGroupVersionKind.GroupVersion().WithResource("virtualmachineinstances")).
+ Namespace(c.infraNamespace).
+ Get(context.TODO(), node, metav1.GetOptions{})
if err != nil {
- klog.Errorf("Failed to get VirtualMachineInstance %s in namespace %s:%v", node, c.infraNamespace, err)
+ klog.Errorf("Failed to get VMI %q in namespace %q: %v", node, c.infraNamespace, err)
continue
}
+ vmi := &kubevirtv1.VirtualMachineInstance{}
err = runtime.DefaultUnstructuredConverter.FromUnstructured(obj.Object, vmi)
if err != nil {
klog.Errorf("Failed to convert Unstructured to VirtualMachineInstance: %v", err)
- klog.Fatal(err)
+ continue
}
+ if vmi.Status.NodeName == "" {
+ klog.Warningf("Skipping VMI %s/%s: NodeName is empty", vmi.Namespace, vmi.Name)
+ continue
+ }
+ nodeNamePtr := &vmi.Status.NodeName
+
ready := vmi.Status.Phase == kubevirtv1.Running
serving := vmi.Status.Phase == kubevirtv1.Running
terminating := vmi.Status.Phase == kubevirtv1.Failed || vmi.Status.Phase == kubevirtv1.Succeeded
for _, i := range vmi.Status.Interfaces {
if i.Name == "default" {
+ if i.IP == "" {
+ klog.Warningf("VMI %s/%s interface %q has no IP, skipping", vmi.Namespace, vmi.Name, i.Name)
+ continue
+ }
desiredEndpoints = append(desiredEndpoints, &discovery.Endpoint{
Addresses: []string{i.IP},
Conditions: discovery.EndpointConditions{
@@ -705,9 +719,9 @@ func (c *Controller) getDesiredEndpoints(service *v1.Service, tenantSlices []*di
Serving: &serving,
Terminating: &terminating,
},
- NodeName: &vmi.Status.NodeName,
+ NodeName: nodeNamePtr,
})
- continue
+ break
}
}
}
diff --git a/pkg/controller/kubevirteps/kubevirteps_controller_test.go b/pkg/controller/kubevirteps/kubevirteps_controller_test.go
index 1c97035b4..d205d0bed 100644
--- a/pkg/controller/kubevirteps/kubevirteps_controller_test.go
+++ b/pkg/controller/kubevirteps/kubevirteps_controller_test.go
@@ -771,3 +771,55 @@ var _ = g.Describe("KubevirtEPSController", g.Ordered, func() {
})
})
+
+var _ = g.Describe("getDesiredEndpoints", func() {
+ g.It("should skip endpoints without NodeName and VMIs without NodeName or IP", func() {
+ // Setup controller
+ ctrl := setupTestKubevirtEPSController().controller
+
+ // Manually inject dynamic client content (1 VMI with missing NodeName)
+ vmi := createUnstructuredVMINode("vmi-without-node", "", "10.0.0.1") // empty NodeName
+ _, err := ctrl.infraDynamic.
+ Resource(kubevirtv1.VirtualMachineInstanceGroupVersionKind.GroupVersion().WithResource("virtualmachineinstances")).
+ Namespace(infraNamespace).
+ Create(context.TODO(), vmi, metav1.CreateOptions{})
+ Expect(err).To(BeNil())
+
+ // Create service
+ svc := createInfraServiceLB("test-svc", "test-svc", "test-cluster",
+ v1.ServicePort{
+ Name: "http",
+ Port: 80,
+ TargetPort: intstr.FromInt(8080),
+ Protocol: v1.ProtocolTCP,
+ },
+ v1.ServiceExternalTrafficPolicyLocal,
+ )
+
+ // One endpoint has nil NodeName, another is valid
+ nodeName := "vmi-without-node"
+ tenantSlice := &discoveryv1.EndpointSlice{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "slice",
+ Namespace: tenantNamespace,
+ Labels: map[string]string{
+ discoveryv1.LabelServiceName: "test-svc",
+ },
+ },
+ AddressType: discoveryv1.AddressTypeIPv4,
+ Endpoints: []discoveryv1.Endpoint{
+ { // should be skipped due to nil NodeName
+ Addresses: []string{"10.1.1.1"},
+ NodeName: nil,
+ },
+ { // will hit VMI without NodeName and also be skipped
+ Addresses: []string{"10.1.1.2"},
+ NodeName: &nodeName,
+ },
+ },
+ }
+
+ endpoints := ctrl.getDesiredEndpoints(svc, []*discoveryv1.EndpointSlice{tenantSlice})
+ Expect(endpoints).To(HaveLen(0), "Expected no endpoints due to missing NodeName or IP")
+ })
+})

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.21.0@sha256:b1525163cd21938ac934bb1b860f2f3151464fa463b82880ab058167aeaf3e29
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.26.0@sha256:445c2727b04ac68595b43c988ff17b3d69a7b22b0644fde3b10c65b47a7bc036

View File

@@ -3,7 +3,7 @@ ARG builder_image=docker.io/library/golang:1.22.5
FROM ${builder_image} AS builder
RUN git clone https://github.com/kubevirt/csi-driver /src/kubevirt-csi-driver \
&& cd /src/kubevirt-csi-driver \
&& git checkout 35836e0c8b68d9916d29a838ea60cdd3fc6199cf
&& git checkout a8d6605bc9997bcfda3fb9f1f82ba6445b4984cc
ARG TARGETOS
ARG TARGETARCH
@@ -11,6 +11,7 @@ ENV GOOS=$TARGETOS
ENV GOARCH=$TARGETARCH
WORKDIR /src/kubevirt-csi-driver
RUN make build
FROM quay.io/centos/centos:stream9

View File

@@ -1 +1 @@
ghcr.io/cozystack/cozystack/ubuntu-container-disk:v1.32@sha256:bfe568db4b768a4b6c67a8d562892bbba766d0245e140d431754589b347f0b41
ghcr.io/cozystack/cozystack/ubuntu-container-disk:v1.32@sha256:e53f2394c7aa76ad10818ffb945e40006cd77406999e47e036d41b8b0bf094cc

Some files were not shown because too many files have changed in this diff.