* Simplify test discovery logic in workflow.
* Delete ClickHouse after a successful test.
* Split the two k8s tests into separate jobs.
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
<!-- This is an auto-generated comment: release notes by coderabbit.ai -->
## Summary by CodeRabbit
- **New Features**
  - Added a process to list images used in the environment before deletion during cleanup operations.
- **Chores**
  - Enhanced environment cleanup workflow with improved visibility into used images.
  - Introduced a shared writable directory between host and container for better file management during testing.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
<!-- This is an auto-generated comment: release notes by coderabbit.ai -->
## Summary by CodeRabbit
* **Chores**
  * Improved reliability of automated testing workflows by adding retry logic to key setup and test steps.
  * Simplified resource management in end-to-end tests by switching to a consistent apply command for creating or updating Kubernetes resources.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
This patch separates the Test job of the PR workflow into several smaller jobs: 1) create a testing sandbox and deploy Talos, 2) install Cozystack and configure it, 3) install managed applications and run e2e tests. This lets developers shorten the feedback loop when tests are merely flaky rather than genuinely broken. It's not the right fix, but it's the 80/20 one.
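As a rough illustration of the split described above (the job names, runner labels, checkout action version, and make targets here are hypothetical, not taken from the actual workflow):

```yaml
jobs:
  sandbox:            # 1) create the testing sandbox and deploy Talos
    runs-on: [self-hosted]
    steps:
      - uses: actions/checkout@v4
      - run: make sandbox talos-up        # hypothetical make targets
  cozystack:          # 2) install Cozystack and configure it
    needs: sandbox
    runs-on: [self-hosted]
    steps:
      - run: make install-cozystack       # hypothetical make target
  e2e-apps:           # 3) install managed applications and run e2e tests
    needs: cozystack
    runs-on: [self-hosted]
    steps:
      - run: make test-e2e                # hypothetical make target
```

With the stages chained via `needs`, a flaky e2e run can be retried on its own, assuming the sandbox built by the earlier jobs is still available to the runner.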
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
Add 'Apps' tests for:
* Virtual Machine Disk
* Virtual Machine Instance
* Virtual Machine
* PostgreSQL
* MySQL
* ClickHouse
Signed-off-by: Ahmad Murzahmatov <gwynbleidd2106@yandex.com>
This patch introduces reusable library charts that provide backward compatibility for users who specify their resources as explicit requests and limits. That input is processed so that limits are set equal to requests for every resource except CPU, which only gets a request. Users can now embrace the new form by specifying resources directly at the first level of nesting (e.g. resources.cpu=100m instead of resources.requests.cpu=100m). The order of precedence is top-level, then requests, then limits, ensuring that nothing breaks in terms of scheduling; however, workloads that specified limits much higher than requests might take a performance hit, now that they cannot use the excess capacity. This should only affect memory-hungry workloads in low-contention environments.
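An illustrative sketch of that behaviour for a chart consuming cozy-lib; the concrete values and resulting container spec depend on the chart's own templates:

```yaml
# Old form: explicit requests and limits (still accepted).
resources:
  requests:
    cpu: 100m
    memory: 512Mi
  limits:
    cpu: 500m
    memory: 2Gi
# Sanitized result: requests are kept as-is, the memory limit is forced
# equal to the memory request (512Mi), and the CPU limit is dropped.
---
# New form: flat values at the first level of nesting.
resources:
  cpu: 100m
  memory: 512Mi
# Equivalent to requests.cpu=100m, requests.memory=512Mi, limits.memory=512Mi.
```

Following the stated precedence, a flat top-level value wins over a nested request, and a request wins over a limit when both are present.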
<!-- This is an auto-generated comment: release notes by coderabbit.ai -->
## Summary by CodeRabbit
- **New Features**
  - Introduced a reusable Helm library chart, "cozy-lib", providing common templates and resource helpers for other charts.
  - Added resource preset and sanitization templates to standardize Kubernetes resource configurations.
  - ClickHouse chart now depends on "cozy-lib" for improved resource handling.
  - Added a new packaging script and streamlined Helm chart packaging processes across multiple packages.
- **Bug Fixes**
  - Resource configuration logic in the ClickHouse deployment was updated to use the new library templates, ensuring more consistent resource definitions.
- **Chores**
  - Added new Makefiles and version mapping for streamlined Helm chart packaging and validation.
  - Updated ClickHouse chart version to 0.9.0 and reflected this in version mapping files.
  - Refactored Makefile targets to consolidate packaging logic and improve maintainability.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
In our CI, wget spams thousands of progress-bar lines into the output, making it hard to read. It turns out there is no option to suppress just the progress bar, but explicitly directing wget's log to stdout and invoking --show-progress sends the bar to stderr, which we redirect to /dev/null. The downloaded size is still reported at regular intervals, but --progress=dot:giga shortens that to one line per 32M, which is manageable.
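A minimal sketch of the resulting invocation as a workflow step; the step name, variable, and download URL are placeholders rather than the actual workflow contents:

```yaml
- name: Download image without progress-bar spam
  run: |
    # Log messages go to stdout; the interactive bar forced by
    # --show-progress goes to stderr and is discarded, leaving the
    # dot:giga log at roughly one line per 32M downloaded.
    wget --output-file=/dev/stdout \
         --progress=dot:giga \
         --show-progress \
         "$IMAGE_URL" 2>/dev/null
```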
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>