Compare commits

...

198 Commits

Author SHA1 Message Date
Andrei Kvapil
7931b061ce cilium: disable hostLegacyRouting 2024-08-29 13:20:35 +02:00
Andrei Kvapil
5b631a6def Update FerretDB v1.24.0 (#307)
This release includes a fix for the C# library

https://github.com/FerretDB/FerretDB/issues/4475#issuecomment-2315663589

as well as many other improvements

https://github.com/FerretDB/FerretDB/releases/tag/v1.24.0

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

- **New Features**
- Updated the application to version 1.24.0, bringing enhancements and
improvements.
- Upgraded the container image to version 1.24.0 for the `ferretdb`
application, ensuring access to the latest features and fixes.

- **Chores**
- Incremented the chart version from 0.2.0 to 0.3.0 to reflect the new
release.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-08-29 12:47:00 +02:00
Andrei Kvapil
adaf603bc2 Add fluent-bit and VictoriaLogs (#305)
![Screenshot 2024-08-28 at 15-10-20 Explore - vlog-generic -
Grafana](https://github.com/user-attachments/assets/4ba926d3-fb56-411b-88d5-a00d5d17b3dc)

---------

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-08-29 12:46:46 +02:00
Andrei Kvapil
6c5cf5bf52 Prepare release v0.12.0 (#302)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-08-21 13:14:29 +02:00
Andrei Kvapil
9357ad4754 Prepare release v0.12.0 (#301)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-08-21 13:06:07 +02:00
Andrei Kvapil
fcccfd4f52 Update cilium v1.16.1 (#300) 2024-08-21 12:06:07 +02:00
Andrei Kvapil
710605100f Add ability to override values for tenant Kubernetes clusters (#297)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-08-21 11:55:46 +02:00
Timur Tukaev
14d54bc2d8 Update README.md (#298)
Links to community meetings and TG group have been added
2024-08-20 22:46:11 +02:00
Andrei Kvapil
c07c4bbdab Introduce storageClass option for all applications (#290)
Provide the option to specify storageClass in applications

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-08-20 17:19:10 +02:00
Andrei Kvapil
5ca8823071 Fix e2e tests (#296)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-08-20 17:17:28 +02:00
Andrei Kvapil
9be774ad30 Add e2e testing sandbox (#295)
This PR introduces new functionality for running e2e-tests in
k8s-cluster.

`make test` from the repository root deploys a new sandbox for testing
Cozystack.

From `packages/core/testing`:

`make test` - runs the end-to-end tests.
`make exec` - opens an interactive shell in the sandbox container.
`make login` - downloads the kubeconfig into a temporary directory and
runs a shell with the sandbox environment; mirrord must be installed.
`make proxy` - enables a SOCKS5 proxy; mirrord and gost must be
installed.

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-08-20 09:07:43 +02:00
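A minimal sketch of the workflow described in the commit above; the targets come from the list in the message, everything else (working directories, ordering) is illustrative:

```
# deploy the testing sandbox and run the e2e suite from the repository root
make test

# or work with the sandbox directly
cd packages/core/testing
make exec    # interactive shell in the sandbox container
make login   # fetch kubeconfig and open a shell (requires mirrord)
```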
Artem Rootman
3b67f1fb27 Update Virtual Machine Chart Configuration and Documentation (#292)
- Refactored `values.yaml` to move disk size under `resources` and added
`service.ports` configuration.
- Updated `README.md` to include detailed parameter descriptions and
example configuration.
- Modified `service.yaml` to use dynamic port configuration from
`values.yaml`.
- Corrected `vm.yaml` to reference disk size from `resources` and
updated base image URL for Fedora.
- Revised `values.schema.json` to align with changes in `values.yaml`,
including added parameters and descriptions.

Enhancements include:
- Improved clarity of default values and parameter settings.
- Added flexibility for service port customization.
- Corrected and updated URLs and default values for better accuracy.
2024-08-19 15:02:16 +02:00
Andrei Kvapil
b3d4c9c6a2 fix CSI label for tenant Kubernetes clusters (#291) 2024-08-19 10:12:12 +02:00
Andrei Kvapil
4471b4ba2a Fix vmrules to process memory metrics (#289)
This PR fixes the memory charts.

Fixes https://github.com/aenix-io/cozystack/issues/285


![image](https://github.com/user-attachments/assets/3ceb8a4d-6fdf-49d3-80be-ff83567ba61c)

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-08-16 10:26:23 +02:00
Andrei Kvapil
a120ce726e DX: Use generic Makefile for packages (#288)
This change is aimed at improving the development experience.

- The option `make delete` has been added.
- Added a check for the `NAME` and `NAMESPACE` variables.
- Any package (not just system ones) can now use targets such as `make show`,
`make diff`, and `make apply`.
- Applications from packages/extra require explicit specification of
`NAMESPACE`.
- Applications from packages/apps require explicit specification of both
`NAME` and `NAMESPACE`.

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-08-16 10:26:13 +02:00
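A hedged usage sketch of the generic targets mentioned above; the chart paths, release name, and namespaces are illustrative, not taken from the PR:

```
# system packages need no extra variables
make -C packages/system/cilium diff

# apps require NAME and NAMESPACE to be set explicitly
make -C packages/apps/postgres apply NAME=db1 NAMESPACE=tenant-example

# extra packages require only NAMESPACE
make -C packages/extra/monitoring show NAMESPACE=tenant-root
```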
Andrei Kvapil
a2bcf1006f Update VPN (#287)
Add new options: `host` and `externalIPs`.
Automatic password generation.
Provide a resource-view role to the dashboard for getting connection URLs.

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-08-16 10:26:02 +02:00
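A hedged sketch of how the new options might appear in the application's values; only the option names come from the commit above, the layout and values are illustrative:

```
host: vpn.example.org
externalIPs:
  - 203.0.113.10
```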
Andrei Kvapil
71514249c4 Prepare release v0.11.0 (#280)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-08-12 20:37:20 +02:00
Andrei Kvapil
dd1d9121f2 Update Talos Linux v1.7.6 (#279) 2024-08-12 20:07:27 +02:00
Andrei Kvapil
bbdec9bc84 Update Cilium v1.16 (#277)
The new Cilium already includes our patch
https://github.com/cilium/cilium/pull/32730. It is better to
update than to keep it in-tree.

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-08-12 15:38:40 +02:00
Andrei Kvapil
40fd96dc3b Update dashboard icons (#274)
![image_2024-08-12_12-53-02
(2)](https://github.com/user-attachments/assets/8348e2ea-c89a-45aa-9ad3-de7c83f4ad1a)


![image_2024-08-12_12-53-02](https://github.com/user-attachments/assets/4b28228e-fcbe-4c03-b02a-d3c6d59f6b0a)


![image_2024-08-12_12-56-29](https://github.com/user-attachments/assets/d6852b43-1391-4bab-afc4-859433311ead)

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
Co-authored-by: Viktoriia Kvapil <159528100+kvapsova@users.noreply.github.com>
2024-08-12 14:47:11 +02:00
Andrei Kvapil
94c688f74c SeaweedFS (#131)
This PR adds SeaweedFS

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-08-12 14:33:48 +02:00
Andrei Kvapil
2f0373d26b Update LINSTOR v1.28 (#276) 2024-08-12 14:33:31 +02:00
Andrei Kvapil
c56e576906 fix network-policies (#272)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-08-12 10:10:18 +02:00
Andrei Kvapil
3dcc9ca6d0 Fix hardcoded values in ingress resource (#269) 2024-08-08 20:56:00 +02:00
Andrei Kvapil
00f7c3647b Upd dashboard and handle ResourceView (#262)
- Patch Dashboard to use specific role for resourceview
- Update kubeapps v2.11.0

partially fixes https://github.com/aenix-io/cozystack/issues/259

---------

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-08-07 12:35:45 +02:00
Andrei Kvapil
cdb60f0cb1 Remove build artifacts from repository (#266)
Let's use the approach suggested by @nbykov0 in
https://github.com/aenix-io/cozystack/pull/175

We will only update values.yaml and will not store the built JSON artifact.

The rest of the charts include this change in:
- https://github.com/aenix-io/cozystack/pull/262
- https://github.com/aenix-io/cozystack/pull/263
- https://github.com/aenix-io/cozystack/pull/264
- https://github.com/aenix-io/cozystack/pull/265

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-08-07 12:35:24 +02:00
Andrei Kvapil
e249914865 Update kube-ovn manifests to 9e928d6 (#265)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-08-07 12:35:12 +02:00
Andrei Kvapil
0bdbce7991 Update Cilium v1.15.7 (#264)
Update Cilium v1.15.7
2024-08-07 12:35:00 +02:00
Andrei Kvapil
72711dfefc fix kamaji garbage collection (#263)
upstream issue https://github.com/clastix/kamaji/issues/508 

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-08-07 12:34:47 +02:00
Andrei Kvapil
13c9ec1626 add: objectstorage-controller (#244)
As part of
- https://github.com/aenix-io/cozystack/pull/131
- https://github.com/seaweedfs/seaweedfs/issues/5838

This controller will be used to provision S3 buckets in SeaweedFS.

Upstream projects:

- https://github.com/kubernetes-sigs/container-object-storage-interface-api
- https://github.com/kubernetes-sigs/container-object-storage-interface-controller

Docs:

- https://container-object-storage-interface.github.io/
2024-08-07 12:34:33 +02:00
Andrei Kvapil
fc3a6180c7 Add: CODEOWNERS file (#267) 2024-08-07 12:31:31 +02:00
klinch0
96f96a798a fix doc url (#257) 2024-08-05 23:37:42 +03:00
Andrei Kvapil
2ecaf24313 fix: kubeovn building (#253)
While an update isn't possible for now, let's use a workaround and hardcode
the older OVN version.

- details: https://github.com/aenix-io/cozystack/pull/252
2024-08-05 21:28:43 +02:00
Karabass-OFF
9db42ca7d7 Update ADOPTERS.md (#251) 2024-08-05 12:07:45 +02:00
Mr Khachaturov
fde10000de Update ADOPTERS.md (#247)
Added Bootstack to adopters
2024-08-03 01:02:12 +02:00
Evgeniy Kozhuhovskiy
6e31bec55a Update ADOPTERS.md (#245) 2024-08-02 09:02:05 +02:00
Andrei Kvapil
e54608d8dd Fix ingress forward both 80 and 443 ports to tenant clusters (#243) 2024-07-30 19:09:41 +02:00
Andrei Kvapil
4f6d33aaa8 remove kubeovn dependency from distro-full bundle (#240) 2024-07-26 18:31:01 +02:00
Mr Khachaturov
a17c622b00 Add snapshot-controller (#237)
Added snapshot-controller to system packages. 

It is included in the paas-full bundle.
Also added a new cluster issuer, `selfsigned-cluster-issuer`.
2024-07-26 18:27:34 +02:00
Andrei Kvapil
ac11056e0a Prepare release v0.10.1 (#238)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-07-26 13:58:08 +02:00
Andrei Kvapil
32f22adb26 ingress forward both 80 and 443 ports to tenant clusters (#235)
We need to separate HTTP and HTTPS traffic and send them into tenant
clusters.
Previously, traffic was sent only to the HTTPS port; this PR enables HTTP
traffic forwarding.

Nginx ingress does not support selecting the correct upstream according to
the type of traffic (HTTP or HTTPS).
There is a set of related issues upstream:

- https://github.com/kubernetes/ingress-nginx/issues/1655
- https://github.com/kubernetes/ingress-nginx/issues/9061
- https://github.com/kubernetes/ingress-nginx/issues/11334

Fortunately, we found a reliable workaround.

fixes:
https://github.com/aenix-io/cozystack/issues/209#issuecomment-2215021489
2024-07-26 12:01:28 +02:00
Andrei Kvapil
4c5a37d75b Kubernetes: fix node-role labels propagation (#234)
fixes https://github.com/aenix-io/cozystack/issues/209
2024-07-26 12:01:13 +02:00
Andrei Kvapil
7ad3725dad Fix kubelet garbage collection and introduce ephemeralStorage parameter (#239)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-07-26 12:00:42 +02:00
Marian Koreniuk
9f61510543 Merge pull request #236 from aenix-io/upd-nginx-ingress
Update ingress-nginx-controller v1.11
2024-07-26 12:19:29 +03:00
Andrei Kvapil
757caee765 Update ingress-nginx v1.11
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-07-26 10:07:36 +02:00
Andrei Kvapil
e97160918f Prepare release v0.10.0 (#230)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-07-23 18:58:08 +02:00
Andrei Kvapil
95b11a1082 Update etcd-operator v0.4 (#232)
This update enables resize operation for etcd clusters

https://github.com/aenix-io/etcd-operator/pull/254
2024-07-23 17:53:49 +02:00
Andrei Kvapil
d0758692d1 Fix Kafka topics creation (#231)
This PR fixes the following error:
```spec.replicas: Invalid value: "string": spec.replicas in body must be of type integer```

---------

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-07-23 12:15:16 +02:00
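A common cause of this class of error is a quoted number in a Helm template; a hedged illustration of the bug class (the field path comes from the error message, the template lines are illustrative, not the actual chart code):

```
# Broken (renders a string, rejected by the CRD schema):
#   replicas: "{{ .Values.kafka.replicas }}"    ->  spec.replicas: "3"
# Fixed (renders an integer):
replicas: {{ .Values.kafka.replicas }}          #  ->  spec.replicas: 3
```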
Andrei Kvapil
bad59ec444 Add option to enable dashboard in ingress-nginx (#229)
Add option to enable dashboard in ingress

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-07-22 23:35:16 +02:00
Andrei Kvapil
ceefae03e9 Add network policies to enforce tenant isolation (#228)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-07-22 23:32:54 +02:00
Andrei Kvapil
5b39ced0a1 Add NATS (#224)
Very basic NATS application

![Screenshot 2024-07-19 at 14 33
54](https://github.com/user-attachments/assets/3e4e1df3-b548-434e-aaca-a09fb2642284)

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-07-22 23:31:56 +02:00
Andrei Kvapil
ec283c33a4 postgres: automatically set schema permissions (#216)
This PR refactors the postgres configuration script:
- Added an event trigger on creation of new schemas to automatically set the
owner
- Refactored the logic for fixing permissions for all objects in all schemas

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-07-22 23:31:32 +02:00
Mr Khachaturov
8319a00193 Nginx whitelist and clouflareProxy (#211)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
Co-authored-by: Andrei Kvapil <kvapss@gmail.com>
2024-07-22 12:43:32 +02:00
Marian Koreniuk
c6e1e4e4b8 Merge pull request #223 from aenix-io/cozy-rename
Rename system releases to have -system suffix
2024-07-19 13:32:31 +02:00
Andrei Kvapil
af75a32430 fix kubevirt infrastructure-provider version (#225)
Fix wrong version for KubeVirt CAPI provider

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-07-19 13:30:23 +02:00
Andrei Kvapil
c9e0d63b77 Rename system releases to have -system suffix
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-07-19 12:26:17 +02:00
Andrei Kvapil
7c77a6594a Unsuspend system helmreleases on cozystack restart (#219)
Developers often forget to unsuspend Helm releases after local
development (I do!).
This change ensures that all system Helm charts get
reconciled by Flux after the cozystack container restarts.

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-07-18 20:55:24 +03:00
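For reference, a suspended HelmRelease can also be resumed manually with a patch that mirrors the suspend command shown later in this diff; the release name and namespace here are illustrative:

```
kubectl patch hr -n cozy-fluxcd fluxcd-operator -p '{"spec": {"suspend": false}}' --type=merge
```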
Marian Koreniuk
9bbdb11aab Merge pull request #218 from aenix-io/logos
Ship all logos with Cozystack
2024-07-18 19:53:20 +02:00
Andrei Kvapil
bbd2ca81a3 fix: ferretdb set schema owner (#220)
2024-07-17 12:48:43 +02:00
Andrei Kvapil
e265e8bc43 Ship all logos with Cozystack
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-07-16 17:31:52 +02:00
Marian Koreniuk
5261145b2d Merge pull request #217 from aenix-io/ferretdb
FerretDB
2024-07-16 12:52:37 +02:00
Andrei Kvapil
4ffa861534 add ferretdb
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-07-16 10:23:27 +02:00
Andrei Kvapil
07d666c0be fix: scraping ingress-nginx metrics (#212)
Now the Grafana dashboards for the ingress-nginx controller work completely!

![pic](https://github.com/user-attachments/assets/c2414cc7-9e0c-441e-9668-bf78ea3ef0c6)

![pic](https://github.com/user-attachments/assets/8ebe2488-0c53-4fc8-9e26-fc37e0047ebe)

![pic](https://github.com/user-attachments/assets/675a47b8-0304-4c58-9379-75e23c2db90f)
2024-07-16 08:06:16 +02:00
Andrei Kvapil
5bbc488e9c Prepare release 0.9.0 (#207) 2024-07-10 20:25:29 +02:00
Andrei Kvapil
4cbc8a2c33 Upgrade tenant Kubernetes v1.30.1 (#206)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-07-08 22:51:50 +02:00
Andrei Kvapil
9709059fb7 kubernetes: Allow upgrading existing node groups (#205)
This PR introduces a change to allow upgrading existing node groups for a
tenant Kubernetes cluster.

This fixes the error:
```
Status: Failed (UpgradeFailed: Helm upgrade failed for release tenant-test0/kubernetes-test0 with chart kubernetes@0.3.0: cannot patch "kubernetes-test0-md0" with kind KubevirtMachineTemplate: admission webhook "validation.kubevirtmachinetemplate.infrastructure.cluster.x-k8s.io" denied the request: KubevirtMachineTemplateSpec is immutable)
```

This is done by generating unique names for each KubevirtMachineTemplate
based on a hash of its spec. Old KubevirtMachineTemplates remain in
the cluster as long as some MachineSet is still using them.

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-07-08 22:49:35 +02:00
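A hedged sketch of the naming approach described above, assuming a Helm helper that renders the template spec; the helper name, suffix, and API version are illustrative, not taken from the chart:

```
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: KubevirtMachineTemplate
metadata:
  # the spec hash in the name makes every spec change produce a new object,
  # so the immutable template is replaced instead of patched in place
  name: {{ .Release.Name }}-md0-{{ include "machinetemplate.spec" . | sha256sum | trunc 8 }}
spec:
  {{- include "machinetemplate.spec" . | nindent 2 }}
```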
Andrei Kvapil
4ec770996e Update Piraeus v2.5.1 (#204) 2024-07-08 22:47:10 +02:00
Andrei Kvapil
4972906e7a Update Cluster API and hardcode versions (#203)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-07-08 22:44:49 +02:00
Andrei Kvapil
2ea5e8b1a6 Update Kamaji v1.0.0 (#202)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-07-08 20:16:23 +02:00
Andrei Kvapil
db1d5cdf4f Update KubeVirt v1.2.2 (#201)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-07-08 20:16:12 +02:00
Kingdon Barrett
8664d5748e Fix nginx error related to passthrough TLS (#208)
I don't understand why a "true" value isn't accepted here, but I have seen
this before. The `--enable-ssl-passthrough` parameter is not supposed to
accept any value; it's a stand-alone argument.

With this change my traffic is appropriately passed through to the
backend ingress on a KubeVirt cluster that has TLS enabled.

Without it (the change is made on the addon ingress, which is very
strange, because that one isn't even configured to use a passthrough
annotation... the root ingress controller doesn't seem to care) I get
this error:

> 400 Bad Request
> The plain HTTP request was sent to HTTPS port
> ---
> nginx

Signed-off-by: Kingdon Barrett <kingdon+github@tuesdaystudios.com>
2024-07-08 18:46:40 +02:00
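A hedged sketch of the argument form the commit above describes, as it might appear among the controller's container args; the container name and surrounding fields are illustrative:

```
containers:
  - name: controller
    args:
      - /nginx-ingress-controller
      # passed as a stand-alone switch; per the report above, an explicit
      # value such as --enable-ssl-passthrough=true was not accepted
      - --enable-ssl-passthrough
```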
Kingdon Barrett
7a3e9f574c Fix nginx config error parsing configmap (#200)
The error manifests as:

W0705 16:07:35.694677 7 configmap.go:431] unexpected error merging
defaults: 2 error(s) decoding:

* cannot parse 'proxy-connect-timeout' as int: strconv.ParseInt: parsing
"10s": invalid syntax
* cannot parse 'proxy-read-timeout' as int: strconv.ParseInt: parsing
"10s": invalid syntax

I came across this while trying to understand why my nginx ingress addon
config isn't working (this didn't help, but at least the warning is
gone now).

I'll continue to debug, but I think this can merge any time.

Signed-off-by: Kingdon Barrett <kingdon+github@tuesdaystudios.com>
2024-07-08 18:02:56 +02:00
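A hedged sketch of the ConfigMap keys named in the warning above: ingress-nginx expects these as plain numbers of seconds, so a duration like "10s" fails strconv.ParseInt. The ConfigMap name is illustrative:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
data:
  proxy-connect-timeout: "10"   # seconds, numeric only; "10s" is rejected
  proxy-read-timeout: "10"
```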
Andrei Kvapil
dfbc210bbd hotfix: handle missing flux-operator release during upgrade (#198)
image to test:

```
ghcr.io/aenix-io/cozystack/cozystack:v0.8.0@sha256:48e9f676f4eca5f7036648a56767c31beb0aca8fdc6d6798bd65de74886ed1ef
```


This PR should fix a problem with upgrading from an older Cozystack version.

```
make: Leaving directory '/cozystack/packages/core/platform'
deployment.apps/source-controller condition met
deployment.apps/helm-controller condition met
Error from server (NotFound): helmreleases.helm.toolkit.fluxcd.io "fluxcd" not found
NAME                                        CREATED AT
helmreleases.helm.toolkit.fluxcd.io         2024-05-29T11:00:16Z
helmrepositories.source.toolkit.fluxcd.io   2024-05-29T11:00:17Z
make: Entering directory '/cozystack/packages/system/fluxcd-operator'
kubectl patch hr -n cozy-fluxcd fluxcd-operator -p '{"spec": {"suspend": true}}' --type=merge --field-manager=flux-client-side-apply
Error from server (NotFound): helmreleases.helm.toolkit.fluxcd.io "fluxcd-operator" not found
make: *** [../../../scripts/package-system.mk:20: suspend] Error 1
make: Leaving directory '/cozystack/packages/system/fluxcd-operator'
time="2024-07-04T12:50:05Z" level=fatal msg="failed to run" err="exit status 2"
```
2024-07-04 16:18:02 +03:00
Andrei Kvapil
3ac170184e Fix: kafka replicas and partitions (#192)
Fix the Kafka app to unhardcode the number of partitions.
Fixes the problem of being unable to specify the number of partitions and
replicas for them.

Also possibly fixes https://github.com/aenix-io/cozystack/issues/163

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-07-04 14:16:23 +02:00
Andrei Kvapil
15478a8807 Prepare release v0.8.0 (#194)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-07-04 10:32:26 +02:00
Andrei Kvapil
b23ad47f51 Update etcd-operator v0.3.1 (#197) 2024-07-04 10:25:58 +02:00
Kingdon Barrett
2ab9a386cd Fine-tuning Flux configuration (#196)
Fix #195

Don't set the `interval` so short on HelmReleases, with this many
HelmReleases that really hamstrings the control plane.

Also, copy the install/upgrade remediation config from system packages
to the Kubernetes templates for addon packages (cilium, flux, ingress) -
in my testing the ingress-nginx chart fails every time the first time.
Maybe that should be filed as a separate issue, I haven't looked into
detail, it is some issue related to a secret not being created, I think
it said something related to an admission controller.

Looks as though it's a conflict with being installed at the same time as
the cert-manager addon.

Signed-off-by: Kingdon Barrett <kingdon+github@tuesdaystudios.com>
2024-07-04 02:28:57 +02:00
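A hedged sketch of the two knobs discussed above, assuming the Flux v2 HelmRelease API; the interval, retry values, and release name are illustrative, not the ones chosen in the PR:

```
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: ingress-nginx
spec:
  interval: 5m           # a longer reconcile interval eases load on the control plane
  install:
    remediation:
      retries: -1        # retry a failed first install instead of giving up
  upgrade:
    remediation:
      retries: -1
```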
Marian Koreniuk
7072ed98be Merge pull request #193 from aenix-io/upd-etcd-operator
Update etcd-operator v0.3.0
2024-07-03 16:36:04 +02:00
Andrei Kvapil
a798afc7e8 Update etcd-operator v0.3.0
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-07-03 16:28:24 +02:00
Marian Koreniuk
60c608cb00 Merge pull request #186 from aenix-io/tenant-nginx-ingress
Update Tenant Kubernetes Addons
2024-06-28 09:04:06 +02:00
Kingdon Barrett
07384c40f8 Tenant nginx ingress (fixes) (#191)
I am testing the install with PR #183 and I had some issues; these changes
should help.

---------

Signed-off-by: Kingdon Barrett <kingdon+github@tuesdaystudios.com>
2024-06-28 09:02:41 +02:00
Andrei Kvapil
7462be79be add fluxcd addon 2024-06-26 03:12:21 +02:00
Andrei Kvapil
c01604fb7f fix typo in cert-manager addon 2024-06-26 03:10:09 +02:00
Andrei Kvapil
c22a6792c2 add tenant nginx-ingress
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-06-26 03:09:35 +02:00
Andrei Kvapil
a2cc83ddc4 move fluxcd and operator back to system (#188)
Separate and move fluxcd and fluxcd-operator from `core` to `system`.

Self-update should no longer be a problem, since we correctly set the
dependsOn option; it ensures an ordered update of the Flux instance right after
flux-operator.

As part of https://github.com/aenix-io/cozystack/issues/184 and
https://github.com/aenix-io/cozystack/issues/185
fixes https://github.com/aenix-io/cozystack/issues/169

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-06-26 02:49:14 +02:00
Andrei Kvapil
cf1d9fabf4 add fluxcd labels post processor (#180)
This PR introduces a new fluxcd-kustomize.sh script that can be used as a
post-processor for Helm to add the common fluxcd labels.

This is very useful for `make diff`, which will no longer show diffs for
these labels.
Also for debugging specific kustomize cases, eg:
- https://github.com/fluxcd/helm-controller/issues/283
- https://github.com/fluxcd/flux2/issues/4368

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-06-25 19:23:20 +02:00
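A hedged sketch of how such a post-processor can be chained after rendering a chart; the script location and chart path are illustrative, only the script name comes from the commit above:

```
helm template cilium packages/system/cilium | ./hack/fluxcd-kustomize.sh
```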
Andrei Kvapil
91a1f4917c fix: ingress-nginx duplicate template (#182)
in addition to https://github.com/aenix-io/cozystack/pull/181
2024-06-25 17:33:28 +02:00
Marian Koreniuk
18579abdcd Merge pull request #183 from aenix-io/tenant-nginx-ingress
Managed tenant nginx ingress controller
2024-06-25 17:32:31 +02:00
Andrei Kvapil
6bd2d45531 add tenant nginx-ingress
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-06-25 15:50:43 +02:00
Andrei Kvapil
2145f41c7f Use patch with --no-backup-if-mismatch (#181)
Add the option `--no-backup-if-mismatch` to every patch command, so it no
longer creates .orig and .diff files.
2024-06-25 14:33:07 +02:00
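For reference, a hedged example of the flag described above in a patch invocation; the patch file name is illustrative:

```
patch --no-backup-if-mismatch -p1 < patches/some-fix.diff
```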
Kingdon Barrett
d841a20635 Fix typo (#179)
Signed-off-by: Kingdon Barrett <kingdon+github@tuesdaystudios.com>
2024-06-25 11:53:31 +02:00
Andrei Kvapil
246b44945e add certManager addon
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-06-25 11:08:00 +02:00
Andrei Kvapil
352920ea7e Merge pull request #170 from aenix-io/upd-flux
This cumulative PR includes the following changes:

- Migrate from fluxcd-community charts to Flux-Operator #166
- Upgrade to Flux 2.3.x #167
- Refactor Flux 2.3 update #172
- Update flux plugin for dashboard #171
- Flux Operator 0.6 #178
2024-06-24 15:33:27 +02:00
Kingdon Barrett
73b6f7f962 Flux Operator 0.6 (#178)
This PR upgrades to Flux-Operator 0.6, released this morning, and also includes:

* #170
which is an aggregate PR, so #171 #172 etc. I think this PR now basically subsumes #170 and can replace it.

I have at least 80% confidence there are no errors in this PR. It also restores the networkPolicy default and the deleted cozy-dashboard network policy, which we will see fixed (restored to install NetworkPolicy resources by default) in the next `flux-operator` release.

Ref: https://github.com/controlplaneio-fluxcd/flux-operator/pull/52
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-06-24 13:35:26 +02:00
Andrei Kvapil
b8e5309fc4 Refactor fluxcd 2.3 update (#172)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-06-24 13:14:11 +02:00
Andrei Kvapil
97bd1634a7 Merge branch 'main' into upd-flux 2024-06-24 13:13:54 +02:00
Marian Koreniuk
33a9cb7358 Merge pull request #176 from aenix-io/initial-arm
Add initial ARM support
2024-06-21 14:51:09 +02:00
Marian Koreniuk
e6d60886b4 Merge pull request #177 from aenix-io/postgres-quorum
postgres: option to enable quorum-based replication
2024-06-21 11:25:30 +02:00
Andrei Kvapil
995dea6f5c postgres: option to enable quorum-based replication
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-06-21 10:12:32 +02:00
Andrei Kvapil
f12e2c300a add initial arm support
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-06-20 13:51:56 +02:00
Marian Koreniuk
1519f40767 Merge pull request #171 from aenix-io/flux-plugin-for-dashboard
Update flux plugin for dashboard
2024-06-19 16:57:46 +02:00
Andrei Kvapil
02a41e126b fix kubeovn and cilium tags (#174)
* fix: kube-ovn tag

* fix: cilium tag
2024-06-19 16:55:16 +02:00
Marian Koreniuk
2d40c8507b Merge pull request #165 from aenix-io/e2e
Add e2e tests
2024-06-17 19:14:42 +02:00
Marian Koreniuk
bcd1ee1b4f Add masquerade 2024-06-17 19:13:54 +02:00
Andrei Kvapil
2dd2b079b2 Update flux-plugin for dashboard
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-06-17 19:00:30 +02:00
Andrei Kvapil
3a0bad04b9 add check for forwarding and masquerading
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-06-17 18:17:08 +02:00
Kingdon Barrett
931e39fb5c Upgrade to Flux 2.3.x (#167)
Signed-off-by: Kingdon Barrett <kingdon+github@tuesdaystudios.com>
Co-authored-by: Andrei Kvapil <kvapss@gmail.com>
2024-06-17 16:02:32 +02:00
Kingdon Barrett
54017b6e3e Migrate from fluxcd-community charts to Flux-Operator (#166)
Signed-off-by: Kingdon Barrett <kingdon+github@tuesdaystudios.com>
2024-06-17 15:58:13 +02:00
Andrei Kvapil
838bee5d25 Allow specify externalIPs for nginx-ingress (#164) 2024-06-14 15:28:10 +02:00
Andrei Kvapil
eedc4ebce1 Add e2e tests
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-06-12 19:47:36 +02:00
Andrei Kvapil
b30a9a6fcf fix: dependsOn kubeovn and cilium in -hosted bundles (#161) 2024-05-30 23:54:39 +03:00
Andrei Kvapil
8019256dfc Fix: clickhouse user login (#160) 2024-05-29 17:57:03 +02:00
Andrei Kvapil
d7cfa53cd4 Prepare release v0.7.0 (#156) 2024-05-29 10:04:22 +02:00
Andrei Kvapil
d7147c7fe1 kube-ovn: disable cozystack image tag (#153)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-05-27 22:47:12 +02:00
Andrei Kvapil
6211f9d876 cilium: enforce device detection and enable image building (#151)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-05-27 19:40:57 +02:00
Marian Koreniuk
b5f8006f3c Merge pull request #150 from aenix-io/upd-cilium
Update Cilium v1.15.5
2024-05-27 08:27:35 +02:00
Andrei Kvapil
e89926cca6 Update kube-ovn v1.13.0-ge1310e17 and enable image building (#149)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-05-26 18:11:36 +02:00
Andrei Kvapil
3254cc784e Update Cilium v1.15.5
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-05-24 17:43:54 +02:00
Marian Koreniuk
48df98230f change hardcode for talos registry (#148)
Without this fix, the project can't be built locally.
2024-05-24 12:44:56 +02:00
Andrei Kvapil
5f01f30fe7 kubernetes: specify correct dns address (#147) 2024-05-22 08:32:06 +02:00
Andrei Kvapil
2cf23364b4 kamaji: unhardcode cluster.local domain (#145)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-05-21 11:23:10 +02:00
Andrei Kvapil
f30f7be6cc Unhardcode cluster.local domain (#142)
Allow using other domains for the cluster

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-05-21 11:22:54 +02:00
Andrei Kvapil
6cae6ce8ce kubernetes: enable bpf masquerade and tunnel routing (#144) 2024-05-21 11:22:37 +02:00
Andrei Kvapil
4a97e297d4 postgres: fix users and roles (#138)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-05-21 02:29:49 +02:00
Marian Koreniuk
6abaf7c0fa switched place of -maxdepth in Makefiles (#140) 2024-05-21 02:29:34 +02:00
Andrei Kvapil
2b00fcf8f9 etcd: enable autocompact and defrag (#137)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-05-20 20:53:19 +02:00
Andrei Kvapil
007d414f0e Prepare release v0.6.0 (#135) 2024-05-16 16:11:37 +02:00
Andrei Kvapil
6fc1cc7d5d etcd: Add quota-backend-bytes calculations (#133) 2024-05-16 14:04:10 +02:00
Andrei Kvapil
7caccec11d upd kubernetes (#134)
* Allow root login without password

* add ephemeral volumes for containerd and kubelet

* update kubernetes application
2024-05-16 14:04:00 +02:00
Andrei Kvapil
c0685f4318 Prepare release v0.5.0 (#126)
* Prepare release v0.5.0

* fix mariadb
2024-05-10 12:52:57 +02:00
Andrei Kvapil
a9c42c8ef0 Update mariadb-operator v0.28.1 (#124) 2024-05-09 11:18:40 +02:00
Andrei Kvapil
0ea9ef3ae3 Update Cilium v1.14.10 (#125) 2024-05-09 11:18:27 +02:00
Andrei Kvapil
4da8ac3b77 Add schema generation and remove default values (#110)
* Add schema generation and remove default values

* fix monitoring schema generation

* fix default values


Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-05-09 10:45:57 +02:00
Nikita
781a531f62 Installer rollout strategy tuned to allow downtime (#123) 2024-05-09 10:44:43 +02:00
Andrei Kvapil
9c5318641d Fix assets building (#121) 2024-05-08 20:44:32 +02:00
Andrei Kvapil
53f2365e79 Fix: kubernetes and etcd-operator issues (#119)
* Fix datastore creation depends on created secrets

* Add basic topologySpreadConstraints

* Fix kubernetes chart post-rendering

* Update release images
2024-05-06 13:59:43 +02:00
Marian Koreniuk
9145be14c1 Merge pull request #117 from aenix-io/release-0.1.0v2
Prepare release v0.4.0
2024-05-06 09:25:39 +02:00
Andrei Kvapil
fca349c641 Update Talos v1.7.1 2024-05-04 07:32:08 +02:00
Andrei Kvapil
0b38599394 Prepare release v0.4.0
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-05-03 23:12:35 +02:00
Andrei Kvapil
0a33950a40 Prepare release v0.4.0 (#115) 2024-05-03 23:02:41 +02:00
Andrei Kvapil
e3376a223e Fix tolerations in Kubernetes chart (#116) 2024-05-03 13:26:02 +02:00
Marian Koreniuk
dee190ad4f Merge pull request #95 from aenix-io/etcd-operator
Replace kamaji-etcd with aenix-io/etcd-operator
2024-05-02 22:42:52 +02:00
Marian Koreniuk
66f963bfd0 Merge pull request #104 from aenix-io/replicas
Introduce replicas options
2024-04-26 16:03:09 +02:00
Andrei Kvapil
7cd7de73ee Introduce replicas options
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-26 15:19:25 +02:00
Andrei Kvapil
4f2757731a Fix: dashboard colors for dark mode (#108)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-26 12:12:00 +02:00
Andrei Kvapil
372c3cbd17 Update Kamaji v0.5.0 (#99)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-26 11:00:06 +02:00
Andrei Kvapil
ff9ab5ba85 Fix older versions in dashboard (#102)
Workaround for https://github.com/vmware-tanzu/kubeapps/issues/7740

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-26 10:41:05 +02:00
Andrei Kvapil
c7568d2312 Update kubeapps-15.0.2 (#103)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-26 10:18:22 +02:00
Marian Koreniuk
f4778abb3f Merge pull request #105 from aenix-io/upd-linstor
Update LINSTOR v1.27.1
2024-04-25 20:49:14 +02:00
Andrei Kvapil
68a7cc52c3 Update LINSTOR v1.27.1
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-25 18:29:23 +02:00
Marian Koreniuk
be508fd107 Fix etcd-operator Makefile 2024-04-24 16:21:06 +03:00
Andrei Kvapil
a6d0f7cfd4 Add etcd-operator
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-24 12:29:05 +02:00
Andrei Kvapil
a95671391f fix: Flux does not tolerate kubectl edits (#101)
https://fluxcd.io/flux/faq/#why-are-kubectl-edits-rolled-back-by-flux

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-24 11:31:32 +02:00
Andrei Kvapil
20fcd25d64 Calculate tags and version automatically (#100)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-24 11:31:22 +02:00
Andrei Kvapil
ca79f725a3 Prepare release v0.3.1 (#97)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-23 12:55:45 +03:00
Marian Koreniuk
be0603f139 Merge pull request #96 from aenix-io/missing-makefile
fix: missing package-system.mk
2024-04-23 12:53:00 +03:00
Andrei Kvapil
f8b87197d0 fix: flux dependency
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-23 09:43:04 +02:00
Andrei Kvapil
5d58e5ce7d fix: missing package-system.mk 2024-04-23 09:32:32 +02:00
Andrei Kvapil
a1340c1839 fix: clickhouse-operator watch namespaces (#93) 2024-04-23 08:50:45 +02:00
Marian Koreniuk
b838ee5729 Merge pull request #91 from artarik/main
remove duplicated entry for creating sa
2024-04-18 11:54:57 +03:00
Artem Starik
2baf532e1f HOTFIX: bump chart version 2024-04-18 11:10:14 +03:00
Artem Starik
7713e7de6b HOTFIX: remove duplicated sa from template 2024-04-18 11:07:21 +03:00
Artem Starik
aef38b6dec Merge pull request #1 from artarik/artarik-patch-1
HOTFIX: remove duplicated entry for sa
2024-04-18 11:02:12 +03:00
Artem Starik
b02c608d6c HOTFIX: remove duplicated entry for sa 2024-04-18 11:00:06 +03:00
Andrei Kvapil
f7eaab0aaa Prepare release v0.3.0 (#90) 2024-04-18 09:00:22 +02:00
Marian Koreniuk
05813c06dd Fix incorrect path to include in Makefiles (#89)
fix regression introduced by https://github.com/aenix-io/cozystack/pull/86

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-17 23:58:56 +02:00
Andrei Kvapil
038b3c08f4 fix: remove plus in kamaji-etcd image tag (#87)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-17 22:59:15 +02:00
Andrei Kvapil
5dd8d41907 fix: clickhouse-operator watch namespaces (#88)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-17 22:59:05 +02:00
Andrei Kvapil
2d21ed6ac9 fix: grafana ingress class (#85)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-17 22:51:54 +02:00
Marian Koreniuk
fe5d607cad Merge pull request #86 from aenix-io/83-refactor-makefiles
Refactor Makefiles #83
2024-04-17 22:39:14 +02:00
Marian Koreniuk
12b70d8f26 Fix victoria-metrics-operator Makefile 2024-04-17 23:30:19 +03:00
Marian Koreniuk
bc414d648d Fix redis-operator Makefile 2024-04-17 23:29:40 +03:00
Marian Koreniuk
9d4aacc832 Fix metallb Makefile 2024-04-17 23:28:40 +03:00
Marian Koreniuk
23ce7480c2 Fix mariadb-operator Makefile 2024-04-17 23:27:47 +03:00
Marian Koreniuk
994b5d97bd Fix kubevirt-operator Makefile 2024-04-17 23:26:48 +03:00
Marian Koreniuk
871f053e00 Fix kamaji Makefile 2024-04-17 23:25:37 +03:00
Marian Koreniuk
d3485eb0a3 Fix ingress-nginx Makefile 2024-04-17 23:25:06 +03:00
Marian Koreniuk
f3f65e9f9c Fix dashboard Makefile 2024-04-17 23:24:02 +03:00
Marian Koreniuk
1ef7d219de Fix cilium Makefile 2024-04-17 23:22:00 +03:00
Marian Koreniuk
3d0f65ff98 Fix cert-manager Makefile 2024-04-17 23:20:41 +03:00
Marian Koreniuk
451e124c56 Update hack/package-system.mk
Co-authored-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-17 23:02:27 +03:00
Marian Koreniuk
d86c1269eb Update hack/package-system.mk
Co-authored-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-17 23:02:14 +03:00
Marian Koreniuk
f4cf1af349 Update hack/package-system.mk
Co-authored-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-17 23:01:47 +03:00
Marian Koreniuk
758079520c fix case tabs in package-system.mk 2024-04-17 22:36:13 +03:00
Marian Koreniuk
fcebfdff24 Refactor Makefiles #83 2024-04-17 22:24:59 +03:00
Andrei Kvapil
8a2ad90882 Update clickhouse app (#82)
* Add users management
* Remove logs volume

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-17 20:09:16 +02:00
Andrei Kvapil
760f86d2ce Add application for Kafka (#78)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-17 14:23:56 +02:00
Andrei Kvapil
ad7d65f471 Add application for Clickhouse (#81) 2024-04-17 11:21:51 +02:00
Andrei Kvapil
c42dbcafc3 Add NoCloud asset for Hetzner installation (#80) 2024-04-16 21:52:50 +02:00
Andrei Kvapil
238061efbc Add clickhouse-operator (#75)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-13 08:57:49 +02:00
Andrei Kvapil
83bdc3f537 Add kafka-operator (#74)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-13 08:56:07 +02:00
Andrei Kvapil
c24a103fda Update mysql helm chart (#67)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-04 16:47:36 +02:00
Andrei Kvapil
8b975ff0cc Fix mysql app (#66)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-04 16:23:53 +02:00
Andrei Kvapil
e245d541b2 release v0.2.0 (#54)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-04 15:55:58 +02:00
Andrei Kvapil
f03f083c1a Rename bundles (#65)
- paas-full
- paas-hosted
- distro-full
- distro-hosted

Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-04 15:54:14 +02:00
Andrei Kvapil
d68c6c68f6 Enable versioning for cozy-* charts (#62)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-04 12:33:54 +02:00
Andrei Kvapil
d5eb4dd62e Move flux to core package and avoid Helm installation (#61)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-04-04 12:31:42 +02:00
Andrei Kvapil
97cf386fc6 Merge pull request #59 from aenix-io/fix-cilium
fix cilium installation
2024-04-04 12:31:05 +02:00
1171 changed files with 187584 additions and 88852 deletions

.DS_Store vendored Normal file (binary file not shown)

.github/CODEOWNERS vendored Normal file (1 line)

@@ -0,0 +1 @@
* @kvaps


@@ -26,3 +26,6 @@ This list is sorted in chronological order, based on the submission date.
| Organization | Contact | Date | Description of Use |
| ------------ | ------- | ---- | ------------------ |
| [Ænix](https://aenix.io/) | @kvaps | 2024-02-14 | Ænix provides consulting services for cloud providers and uses Cozystack as the main tool for organizing managed services for them. |
| [Mediatech](https://mediatech.dev/) | @ugenk | 2024-05-01 | We're developing and hosting software for our own and our customers' services. We're using Cozystack as a Kubernetes distribution for that. |
| [Bootstack](https://bootstack.app/) | @mrkhachaturov | 2024-08-01| At Bootstack, we utilize a Kubernetes operator specifically designed to simplify and streamline cloud infrastructure creation.|
| [gohost](https://gohost.kz/) | @karabass_off | 2024-02-01 | Our company has been working in the market of Kazakhstan for more than 15 years, providing clients with a standard set of services: VPS/VDC, IaaS, shared hosting, etc. Now we are expanding the lineup by introducing a Bare Metal Kubernetes cluster under Cozystack management. |


@@ -3,7 +3,11 @@
build:
make -C packages/apps/http-cache image
make -C packages/apps/kubernetes image
make -C packages/system/cilium image
make -C packages/system/kubeovn image
make -C packages/system/dashboard image
make -C packages/system/kamaji image
make -C packages/core/testing image
make -C packages/core/installer image
make manifests
@@ -18,6 +22,13 @@ repos:
make -C packages/system repo
make -C packages/apps repo
make -C packages/extra repo
mkdir -p _out/logos
cp ./packages/apps/*/logos/*.svg ./packages/extra/*/logos/*.svg _out/logos/
assets:
make -C packages/core/talos/ assets
make -C packages/core/installer/ assets
test:
make -C packages/core/testing apply
make -C packages/core/testing test
make -C packages/core/testing delete


@@ -58,6 +58,8 @@ Commits are used to generate the changelog, and their author will be referenced
In case of **Feature Requests** please use the [Discussion's Feature Request section](https://github.com/aenix-io/cozystack/discussions/categories/feature-requests).
You can join our weekly community meetings (just add these events to your [Google Calendar](https://calendar.google.com/calendar?cid=ZTQzZDIxZTVjOWI0NWE5NWYyOGM1ZDY0OWMyY2IxZTFmNDMzZTJlNjUzYjU2ZGJiZGE3NGNhMzA2ZjBkMGY2OEBncm91cC5jYWxlbmRhci5nb29nbGUuY29t) or [iCal](https://calendar.google.com/calendar/ical/e43d21e5c9b45a95f28c5d649c2cb1e1f433e2e653b56dbbda74ca306f0d0f68%40group.calendar.google.com/public/basic.ics)) or [Telegram group](https://t.me/cozystack).
## License
Cozystack is licensed under Apache 2.0.

hack/e2e.sh Executable file (323 lines)

@@ -0,0 +1,323 @@
#!/bin/bash
if [ "$COZYSTACK_INSTALLER_YAML" = "" ]; then
echo 'COZYSTACK_INSTALLER_YAML variable is not set!' >&2
echo 'please set it with following command:' >&2
echo >&2
echo 'export COZYSTACK_INSTALLER_YAML=$(helm template -n cozy-system installer packages/core/installer)' >&2
echo >&2
exit 1
fi
if [ "$(cat /proc/sys/net/ipv4/ip_forward)" != 1 ]; then
echo "IPv4 forwarding is not enabled!" >&2
echo 'please enable forwarding with the following command:' >&2
echo >&2
echo 'echo 1 > /proc/sys/net/ipv4/ip_forward' >&2
echo >&2
exit 1
fi
set -x
set -e
kill `cat srv1/qemu.pid srv2/qemu.pid srv3/qemu.pid` || true
ip link del cozy-br0 || true
ip link add cozy-br0 type bridge
ip link set cozy-br0 up
ip addr add 192.168.123.1/24 dev cozy-br0
# Enable masquerading
iptables -t nat -D POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE 2>/dev/null || true
iptables -t nat -A POSTROUTING -s 192.168.123.0/24 ! -d 192.168.123.0/24 -j MASQUERADE
rm -rf srv1 srv2 srv3
mkdir -p srv1 srv2 srv3
# Prepare cloud-init
for i in 1 2 3; do
echo "local-hostname: srv$i" > "srv$i/meta-data"
echo '#cloud-config' > "srv$i/user-data"
cat > "srv$i/network-config" <<EOT
version: 2
ethernets:
eth0:
dhcp4: false
addresses:
- "192.168.123.1$i/26"
gateway4: "192.168.123.1"
nameservers:
search: [cluster.local]
addresses: [8.8.8.8]
EOT
( cd srv$i && genisoimage \
-output seed.img \
-volid cidata -rational-rock -joliet \
user-data meta-data network-config
)
done
# Prepare system drive
if [ ! -f nocloud-amd64.raw ]; then
wget https://github.com/aenix-io/cozystack/releases/latest/download/nocloud-amd64.raw.xz -O nocloud-amd64.raw.xz
rm -f nocloud-amd64.raw
xz --decompress nocloud-amd64.raw.xz
fi
for i in 1 2 3; do
cp nocloud-amd64.raw srv$i/system.img
qemu-img resize srv$i/system.img 20G
done
# Prepare data drives
for i in 1 2 3; do
qemu-img create srv$i/data.img 100G
done
# Prepare networking
for i in 1 2 3; do
ip link del cozy-srv$i || true
ip tuntap add dev cozy-srv$i mode tap
ip link set cozy-srv$i up
ip link set cozy-srv$i master cozy-br0
done
# Start VMs
for i in 1 2 3; do
qemu-system-x86_64 -machine type=pc,accel=kvm -cpu host -smp 4 -m 8192 \
-device virtio-net,netdev=net0,mac=52:54:00:12:34:5$i -netdev tap,id=net0,ifname=cozy-srv$i,script=no,downscript=no \
-drive file=srv$i/system.img,if=virtio,format=raw \
-drive file=srv$i/seed.img,if=virtio,format=raw \
-drive file=srv$i/data.img,if=virtio,format=raw \
-display none -daemonize -pidfile srv$i/qemu.pid
done
sleep 5
# Wait for VM to start up
timeout 60 sh -c 'until nc -nzv 192.168.123.11 50000 && nc -nzv 192.168.123.12 50000 && nc -nzv 192.168.123.13 50000; do sleep 1; done'
cat > patch.yaml <<\EOT
machine:
kubelet:
nodeIP:
validSubnets:
- 192.168.123.0/24
extraConfig:
maxPods: 512
kernel:
modules:
- name: openvswitch
- name: drbd
parameters:
- usermode_helper=disabled
- name: zfs
- name: spl
install:
image: ghcr.io/aenix-io/cozystack/talos:v1.7.1
files:
- content: |
[plugins]
[plugins."io.containerd.grpc.v1.cri"]
device_ownership_from_security_context = true
path: /etc/cri/conf.d/20-customization.part
op: create
cluster:
network:
cni:
name: none
dnsDomain: cozy.local
podSubnets:
- 10.244.0.0/16
serviceSubnets:
- 10.96.0.0/16
EOT
cat > patch-controlplane.yaml <<\EOT
machine:
network:
interfaces:
- interface: eth0
vip:
ip: 192.168.123.10
cluster:
allowSchedulingOnControlPlanes: true
controllerManager:
extraArgs:
bind-address: 0.0.0.0
scheduler:
extraArgs:
bind-address: 0.0.0.0
apiServer:
certSANs:
- 127.0.0.1
proxy:
disabled: true
discovery:
enabled: false
etcd:
advertisedSubnets:
- 192.168.123.0/24
EOT
# Gen configuration
if [ ! -f secrets.yaml ]; then
talosctl gen secrets
fi
rm -f controlplane.yaml worker.yaml talosconfig kubeconfig
talosctl gen config --with-secrets secrets.yaml cozystack https://192.168.123.10:6443 --config-patch=@patch.yaml --config-patch-control-plane @patch-controlplane.yaml
export TALOSCONFIG=$PWD/talosconfig
# Apply configuration
talosctl apply -f controlplane.yaml -n 192.168.123.11 -e 192.168.123.11 -i
talosctl apply -f controlplane.yaml -n 192.168.123.12 -e 192.168.123.12 -i
talosctl apply -f controlplane.yaml -n 192.168.123.13 -e 192.168.123.13 -i
# Wait for VM to be configured
timeout 60 sh -c 'until nc -nzv 192.168.123.11 50000 && nc -nzv 192.168.123.12 50000 && nc -nzv 192.168.123.13 50000; do sleep 1; done'
# Bootstrap
talosctl bootstrap -n 192.168.123.11 -e 192.168.123.11
# Wait for etcd
timeout 120 sh -c 'while talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 2>&1 | grep "rpc error"; do sleep 1; done'
rm -f kubeconfig
talosctl kubeconfig kubeconfig -e 192.168.123.10 -n 192.168.123.10
export KUBECONFIG=$PWD/kubeconfig
# Wait for Kubernetes nodes to appear
timeout 60 sh -c 'until [ $(kubectl get node -o name | wc -l) = 3 ]; do sleep 1; done'
kubectl create ns cozy-system
kubectl create -f - <<\EOT
apiVersion: v1
kind: ConfigMap
metadata:
name: cozystack
namespace: cozy-system
data:
bundle-name: "paas-full"
ipv4-pod-cidr: "10.244.0.0/16"
ipv4-pod-gateway: "10.244.0.1"
ipv4-svc-cidr: "10.96.0.0/16"
ipv4-join-cidr: "100.64.0.0/16"
EOT
#
echo "$COZYSTACK_INSTALLER_YAML" | kubectl apply -f -
# wait for cozystack pod to start
kubectl wait deploy --timeout=1m --for=condition=available -n cozy-system cozystack
# wait for helmreleases appear
timeout 60 sh -c 'until kubectl get hr -A | grep cozy; do sleep 1; done'
sleep 5
kubectl get hr -A | awk 'NR>1 {print "kubectl wait --timeout=15m --for=condition=ready -n " $1 " hr/" $2 " &"} END{print "wait"}' | sh -x
# Wait for linstor controller
kubectl wait deploy --timeout=5m --for=condition=available -n cozy-linstor linstor-controller
# Wait for all linstor nodes become Online
timeout 60 sh -c 'until [ $(kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor node list | grep -c Online) = 3 ]; do sleep 1; done'
kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor ps cdp zfs srv1 /dev/vdc --pool-name data --storage-pool data
kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor ps cdp zfs srv2 /dev/vdc --pool-name data --storage-pool data
kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor ps cdp zfs srv3 /dev/vdc --pool-name data --storage-pool data
kubectl create -f- <<EOT
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: linstor.csi.linbit.com
parameters:
linstor.csi.linbit.com/storagePool: "data"
linstor.csi.linbit.com/layerList: "storage"
linstor.csi.linbit.com/allowRemoteVolumeAccess: "false"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: replicated
provisioner: linstor.csi.linbit.com
parameters:
linstor.csi.linbit.com/storagePool: "data"
linstor.csi.linbit.com/autoPlace: "3"
linstor.csi.linbit.com/layerList: "drbd storage"
linstor.csi.linbit.com/allowRemoteVolumeAccess: "true"
property.linstor.csi.linbit.com/DrbdOptions/auto-quorum: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-no-data-accessible: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-suspended-primary-outdated: force-secondary
property.linstor.csi.linbit.com/DrbdOptions/Net/rr-conflict: retry-connect
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOT
kubectl create -f- <<EOT
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: cozystack
namespace: cozy-metallb
spec:
ipAddressPools:
- cozystack
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: cozystack
namespace: cozy-metallb
spec:
addresses:
- 192.168.123.200-192.168.123.250
autoAssign: true
avoidBuggyIPs: false
EOT
kubectl patch -n tenant-root hr/tenant-root --type=merge -p '{"spec":{ "values":{
"host": "example.org",
"ingress": true,
"monitoring": true,
"etcd": true,
"isolated": true
}}}'
# Wait for HelmRelease be created
timeout 60 sh -c 'until kubectl get hr -n tenant-root etcd ingress monitoring tenant-root; do sleep 1; done'
# Wait for HelmReleases be installed
kubectl wait --timeout=2m --for=condition=ready -n tenant-root hr etcd ingress monitoring tenant-root
kubectl patch -n tenant-root hr/ingress --type=merge -p '{"spec":{ "values":{
"dashboard": true
}}}'
# Wait for nginx-ingress-controller
timeout 60 sh -c 'until kubectl get deploy -n tenant-root root-ingress-controller; do sleep 1; done'
kubectl wait --timeout=5m --for=condition=available -n tenant-root deploy root-ingress-controller
# Wait for etcd
kubectl wait --timeout=5m --for=jsonpath=.status.readyReplicas=3 -n tenant-root sts etcd
# Wait for Victoria metrics
kubectl wait --timeout=5m --for=condition=available deploy -n tenant-root vmalert-vmalert-longterm vmalert-vmalert-shortterm vminsert-longterm vminsert-shortterm
kubectl wait --timeout=5m --for=jsonpath=.status.readyReplicas=2 -n tenant-root sts vmalertmanager-alertmanager vmselect-longterm vmselect-shortterm vmstorage-longterm vmstorage-shortterm
# Wait for grafana
kubectl wait --timeout=5m --for=condition=ready -n tenant-root clusters.postgresql.cnpg.io grafana-db
kubectl wait --timeout=5m --for=condition=available -n tenant-root deploy grafana-deployment
# Get IP of nginx-ingress
ip=$(kubectl get svc -n tenant-root root-ingress-controller -o jsonpath='{.status.loadBalancer.ingress..ip}')
# Check Grafana
curl -sS -k "https://$ip" -H 'Host: grafana.example.org' | grep Found


@@ -20,9 +20,28 @@ miss_map=$(echo "$new_map" | awk 'NR==FNR { new_map[$1 " " $2] = $3; next } { if
resolved_miss_map=$(
echo "$miss_map" | while read chart version commit; do
if [ "$commit" = HEAD ]; then
line=$(git show HEAD:"./$chart/Chart.yaml" | awk '/^version:/ {print NR; exit}')
change_commit=$(git --no-pager blame -L"$line",+1 HEAD -- "$chart/Chart.yaml" | awk '{print $1}')
commit=$(git describe --always "$change_commit~1")
line=$(awk '/^version:/ {print NR; exit}' "./$chart/Chart.yaml")
change_commit=$(git --no-pager blame -L"$line",+1 -- "$chart/Chart.yaml" | awk '{print $1}')
if [ "$change_commit" = "00000000" ]; then
# Not committed yet, use previous commit
line=$(git show HEAD:"./$chart/Chart.yaml" | awk '/^version:/ {print NR; exit}')
commit=$(git --no-pager blame -L"$line",+1 HEAD -- "$chart/Chart.yaml" | awk '{print $1}')
if [ $(echo $commit | cut -c1) = "^" ]; then
# Previous commit does not exist
commit=$(echo $commit | cut -c2-)
fi
else
# Committed, but version_map wasn't updated
line=$(git show HEAD:"./$chart/Chart.yaml" | awk '/^version:/ {print NR; exit}')
change_commit=$(git --no-pager blame -L"$line",+1 HEAD -- "$chart/Chart.yaml" | awk '{print $1}')
if [ $(echo $change_commit | cut -c1) = "^" ]; then
# Previous commit does not exist
commit=$(echo $change_commit | cut -c2-)
else
commit=$(git describe --always "$change_commit~1")
fi
fi
fi
echo "$chart $version $commit"
done


@@ -1,19 +0,0 @@
#!/bin/sh
set -e
if [ -e $1 ]; then
echo "Please pass version in the first argument"
echo "Example: $0 v0.0.2"
exit 1
fi
version=$1
talos_version=$(awk '/^version:/ {print $2}' packages/core/installer/images/talos/profiles/installer.yaml)
set -x
sed -i "/^TAG / s|=.*|= ${version}|" \
packages/apps/http-cache/Makefile \
packages/apps/kubernetes/Makefile \
packages/core/installer/Makefile \
packages/system/dashboard/Makefile


@@ -15,13 +15,6 @@ metadata:
namespace: cozy-system
---
# Source: cozy-installer/templates/cozystack.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: cozystack
namespace: cozy-system
---
# Source: cozy-installer/templates/cozystack.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
@@ -61,6 +54,11 @@ spec:
selector:
matchLabels:
app: cozystack
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
template:
metadata:
labels:
@@ -70,7 +68,7 @@ spec:
serviceAccountName: cozystack
containers:
- name: cozystack
image: "ghcr.io/aenix-io/cozystack/cozystack:v0.1.0"
image: "ghcr.io/aenix-io/cozystack/cozystack:v0.12.0"
env:
- name: KUBERNETES_SERVICE_HOST
value: localhost
@@ -89,7 +87,7 @@ spec:
fieldRef:
fieldPath: metadata.name
- name: darkhttpd
image: "ghcr.io/aenix-io/cozystack/cozystack:v0.1.0"
image: "ghcr.io/aenix-io/cozystack/cozystack:v0.12.0"
command:
- /usr/bin/darkhttpd
- /cozystack/assets


@@ -7,11 +7,11 @@ repo:
awk '$$3 != "HEAD" {print "mkdir -p $(TMP)/" $$1 "-" $$2}' versions_map | sh -ex
awk '$$3 != "HEAD" {print "git archive " $$3 " " $$1 " | tar -xf- --strip-components=1 -C $(TMP)/" $$1 "-" $$2 }' versions_map | sh -ex
helm package -d "$(OUT)" $$(find . $(TMP) -mindepth 2 -maxdepth 2 -name Chart.yaml | awk 'sub("/Chart.yaml", "")' | sort -V)
cd "$(OUT)" && helm repo index .
cd "$(OUT)" && helm repo index . --url http://cozystack.cozy-system.svc/repos/apps
rm -rf "$(TMP)"
fix-chartnames:
find . -name Chart.yaml -maxdepth 2 | awk -F/ '{print $$2}' | while read i; do sed -i "s/^name: .*/name: $$i/" "$$i/Chart.yaml"; done
find . -maxdepth 2 -name Chart.yaml | awk -F/ '{print $$2}' | while read i; do sed -i "s/^name: .*/name: $$i/" "$$i/Chart.yaml"; done
gen-versions-map: fix-chartnames
../../hack/gen_versions_map.sh


@@ -0,0 +1,25 @@
apiVersion: v2
name: bucket
description: S3 compatible storage
icon: /logos/bucket.svg
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "0.1.0"


@@ -0,0 +1,4 @@
include ../../../scripts/package.mk
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md


@@ -0,0 +1,12 @@
<svg width="144" height="144" viewBox="0 0 144 144" fill="none" xmlns="http://www.w3.org/2000/svg">
<rect width="144" height="144" rx="24" fill="url(#paint0_linear_683_3091)"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M72 30.1641L117.983 36.7789V40.6739C117.983 46.4653 97.3862 51.1332 71.9827 51.1332C46.5792 51.1332 26 46.4653 26 40.6739V36.4431L72 30.1641ZM72 58.2678C91.2084 58.2678 107.658 55.5986 114.547 51.8048L116.803 48.111L117.723 44.753V48.9171L102.679 111.033C102.679 114.895 88.9533 118 72.0172 118C55.0812 118 41.3743 114.895 41.3743 111.033L26.33 48.9171V44.8369L29.8007 51.9382C36.7065 55.6653 52.9997 58.2678 72 58.2678Z" fill="#8C3123"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M72.0003 26C97.4038 26 118 30.6839 118 36.442C118 42.2 97.3866 46.8507 72.0003 46.8507C46.6141 46.8507 26.0176 42.2345 26.0176 36.442C26.0176 30.6494 46.5968 26 72.0003 26ZM72.0003 54.1037C95.6857 54.1037 115.172 50.058 117.706 44.8197L102.662 106.937C102.662 110.799 88.9364 113.905 72.0003 113.905C55.0643 113.905 41.339 110.816 41.339 106.954L26.2959 44.837C28.8466 50.058 48.3333 54.1037 72.0003 54.1037Z" fill="#E05243"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M61.1725 60.0293H81.0928V79.1676H61.1725V60.0293ZM45.3301 95.3688C45.3301 90.142 49.7104 85.9342 55.1511 85.9342C60.5917 85.9342 64.9721 90.142 64.9721 95.3688C64.9721 100.596 60.5917 104.803 55.1511 104.803C49.7104 104.803 45.3301 100.596 45.3301 95.3688ZM96.4487 104.368H76.7722L86.6105 86.7737L96.4487 104.368Z" fill="white"/>
<defs>
<linearGradient id="paint0_linear_683_3091" x1="0" y1="0" x2="151" y2="180" gradientUnits="userSpaceOnUse">
<stop stop-color="#FFF0EE"/>
<stop offset="1" stop-color="#EC887D"/>
</linearGradient>
</defs>
</svg>



@@ -0,0 +1,20 @@
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $seaweedfs := index $myNS.metadata.annotations "namespace.cozystack.io/seaweedfs" }}
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClaim
metadata:
name: {{ .Release.Name }}
spec:
bucketClassName: {{ $seaweedfs }}
protocols:
- s3
---
apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketAccess
metadata:
name: {{ .Release.Name }}
spec:
bucketAccessClassName: {{ $seaweedfs }}
bucketClaimName: {{ .Release.Name }}
credentialsSecretName: {{ .Release.Name }}
protocol: s3

View File

@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ .Release.Name }}-dashboard-resources
rules:
- apiGroups:
- ""
resources:
- secrets
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]

View File

@@ -0,0 +1,3 @@
.helmignore
/logos
/Makefile

View File

@@ -0,0 +1,25 @@
apiVersion: v2
name: clickhouse
description: Managed ClickHouse service
icon: /logos/clickhouse.svg
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.3.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "24.3.0"

View File

@@ -0,0 +1,4 @@
include ../../../scripts/package.mk
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md

View File

@@ -0,0 +1,18 @@
# Managed ClickHouse Service
## Parameters
### Common parameters
| Name | Description | Value |
| -------------- | ----------------------------------- | ------ |
| `size` | Persistent Volume size | `10Gi` |
| `shards`       | Number of ClickHouse shards         | `1`    |
| `replicas`     | Number of ClickHouse replicas       | `2`    |
| `storageClass` | StorageClass used to store the data | `""` |
### Configuration parameters
| Name | Description | Value |
| ------- | ------------------- | ----- |
| `users` | Users configuration | `{}` |
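For illustration, a minimal values file combining these parameters could look like the sketch below; the user name, password, and StorageClass are placeholders, not defaults shipped with the chart.

```yaml
size: 10Gi
shards: 1
replicas: 2
storageClass: ""          # leave empty to use the cluster default StorageClass
users:
  analyst:
    readonly: true        # maps the user to the read-only profile
    password: change-me   # placeholder; set a strong password
```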

View File

@@ -0,0 +1,11 @@
<svg width="144" height="144" viewBox="0 0 144 144" fill="none" xmlns="http://www.w3.org/2000/svg">
<rect width="144" height="144" rx="24" fill="url(#paint0_linear_683_3202)"/>
<path d="M23 105H34V116H23V105Z" fill="#FF0000"/>
<path d="M23 28H34V105H23V28ZM45 28H55.9999V116H45V28ZM66.9999 28H77.9999V116H66.9999V28ZM88.9999 28H99.9999V116H88.9999V28ZM111 63.7499H122V80.2499H111V63.7499Z" fill="white"/>
<defs>
<linearGradient id="paint0_linear_683_3202" x1="-0.499998" y1="1.5" x2="153.5" y2="162" gradientUnits="userSpaceOnUse">
<stop stop-color="#FFCC00"/>
<stop offset="1" stop-color="#FF7A00"/>
</linearGradient>
</defs>
</svg>


View File

@@ -0,0 +1,40 @@
apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
name: "{{ .Release.Name }}"
spec:
{{- with .Values.size }}
defaults:
templates:
dataVolumeClaimTemplate: data-volume-template
{{- end }}
configuration:
{{- with .Values.users }}
users:
{{- range $name, $u := . }}
{{ $name }}/password_sha256_hex: {{ sha256sum $u.password }}
{{ $name }}/profile: {{ ternary "readonly" "default" (index $u "readonly" | default false) }}
{{ $name }}/networks/ip: ["::/0"]
{{- end }}
{{- end }}
profiles:
readonly/readonly: "1"
clusters:
- name: "clickhouse"
layout:
shardsCount: {{ .Values.shards }}
replicasCount: {{ .Values.replicas }}
{{- with .Values.size }}
templates:
volumeClaimTemplates:
- name: data-volume-template
spec:
accessModes:
- ReadWriteOnce
{{- with .Values.storageClass }}
storageClassName: {{ . }}
{{- end }}
resources:
requests:
storage: {{ . }}
{{- end }}

View File

@@ -0,0 +1,26 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"size": {
"type": "string",
"description": "Persistent Volume size",
"default": "10Gi"
},
"shards": {
"type": "number",
"description": "Number of Clickhouse replicas",
"default": 1
},
"replicas": {
"type": "number",
"description": "Number of Clickhouse shards",
"default": 2
},
"storageClass": {
"type": "string",
"description": "StorageClass used to store the data",
"default": ""
}
}
}

View File

@@ -0,0 +1,24 @@
## @section Common parameters
## @param size Persistent Volume size
## @param shards Number of ClickHouse shards
## @param replicas Number of ClickHouse replicas
## @param storageClass StorageClass used to store the data
##
size: 10Gi
shards: 1
replicas: 2
storageClass: ""
## @section Configuration parameters
## @param users [object] Users configuration
## Example:
## users:
## user1:
## password: strongpassword
## user2:
## readonly: true
## password: hackme
##
users: {}

View File

@@ -0,0 +1,3 @@
.helmignore
/logos
/Makefile

View File

@@ -0,0 +1,25 @@
apiVersion: v2
name: ferretdb
description: Managed FerretDB service
icon: /logos/ferretdb.svg
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.3.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.24.0"

View File

@@ -0,0 +1,4 @@
include ../../../scripts/package.mk
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md

View File

@@ -0,0 +1,35 @@
# Managed FerretDB Service
## Parameters
### Common parameters
| Name | Description | Value |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `size` | Persistent Volume size | `10Gi` |
| `replicas` | Number of Postgres replicas | `2` |
| `storageClass` | StorageClass used to store the data | `""` |
| `quorum.minSyncReplicas` | Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed. | `0` |
| `quorum.maxSyncReplicas` | Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the number of instances). | `0` |
### Configuration parameters
| Name | Description | Value |
| ------- | ------------------- | ----- |
| `users` | Users configuration | `{}` |
### Backup parameters
| Name | Description | Value |
| ------------------------ | ---------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled`         | Enable periodic backups                        | `false`                                                 |
| `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/postgres-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
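As a sketch, enabling scheduled backups might look like this; the bucket, keys, and passwords below are placeholders and must be replaced with real credentials.

```yaml
external: false
size: 10Gi
replicas: 2
storageClass: ""
quorum:
  minSyncReplicas: 1
  maxSyncReplicas: 1      # must stay below the number of instances (replicas)
users:
  app1:
    password: change-me   # placeholder
backup:
  enabled: true
  s3Region: us-east-1
  s3Bucket: s3.example.org/ferretdb-backups
  schedule: "0 2 * * *"
  cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
  s3AccessKey: <your-access-key>          # placeholder
  s3SecretKey: <your-secret-key>          # placeholder
  resticPassword: <your-restic-password>  # placeholder
```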

View File

@@ -0,0 +1,12 @@
<svg width="144" height="144" viewBox="0 0 144 144" fill="none" xmlns="http://www.w3.org/2000/svg">
<rect x="-0.00195312" width="144" height="144" rx="24" fill="url(#paint0_linear_683_2952)"/>
<path d="M69.5923 22.131C58.2662 23.6787 46.9037 30.8714 40.3302 40.6679C39.274 42.2521 37.4531 45.548 37.4531 45.8757C37.4531 45.9122 38.3272 45.3841 39.3833 44.6921C52.3847 36.1156 67.8989 34.5314 80.5178 40.4858C83.2674 41.7787 84.9973 43.0351 87.4555 45.4933C91.589 49.645 94.6117 55.1988 96.7058 62.5007C97.7983 66.2518 98.7088 71.3686 98.9455 74.8465C99.0001 75.7934 99.1458 76.631 99.2369 76.6856C99.7467 76.9952 102.041 73.6629 103.662 70.276C106.229 64.8861 107.431 59.5872 107.413 53.7057C107.395 45.3841 104.518 38.3917 98.727 32.5648C93.592 27.3934 87.1095 23.8426 80.3175 22.4587C78.7333 22.1492 77.5679 22.0581 74.5999 22.0035C72.5422 21.9853 70.3025 22.0399 69.5923 22.131Z" fill="white"/>
<path d="M45.52 46.4402C44.3364 47.0229 42.3516 48.8438 40.6035 50.9379C39.8205 51.8666 38.6369 53.0137 37.7629 53.6693C35.7234 55.1989 32.2455 58.604 30.4792 60.8073C21.2654 72.2244 18.6979 85.244 23.0863 98.3182C26.6917 109.025 35.0315 116.127 47.8508 119.35C52.8401 120.624 60.324 121.335 63.456 120.843L64.2572 120.715L63.019 119.987C56.1906 116.018 51.4198 109.317 50.0905 101.869C49.6899 99.611 49.6717 95.605 50.0723 93.4017C50.9645 88.4488 53.4592 83.8965 56.8461 81.0559C58.4303 79.7266 61.1981 78.3609 63.4014 77.8329C66.7155 77.0317 68.7367 76.1212 70.8307 74.4642C72.1782 73.408 73.3618 71.8056 74.3451 69.7298C75.1827 67.9635 76.9672 62.3551 76.9672 61.4628C76.9672 60.8437 76.3299 60.0061 75.4195 59.4416C74.946 59.1502 74.1994 58.9864 72.2875 58.7861C64.0569 57.9302 59.9599 56.4371 55.007 52.5221C54.2968 51.9576 53.441 51.3203 53.095 51.1018C52.749 50.9015 52.0571 50.1367 51.5836 49.4265C50.1451 47.3325 48.3606 45.985 46.9949 45.9668C46.7036 45.9668 46.0298 46.1853 45.52 46.4402ZM54.4607 54.8711C55.0798 55.1806 55.7535 55.5812 55.972 55.7451L56.3727 56.0729L55.7353 58.6222C55.1891 60.8437 55.098 61.4082 55.1526 62.9924C55.2073 64.5584 55.2619 64.9043 55.6261 65.4142C56.227 66.2336 57.2649 66.7253 58.4303 66.7253C60.0873 66.7253 61.3802 65.7784 63.5289 62.956C64.148 62.1548 64.6396 61.7177 65.368 61.3718C66.497 60.8073 67.2982 60.7527 69.811 60.9712L71.4863 61.135V62.1183C71.4863 63.6661 72.3057 64.5584 73.9809 64.8133L74.7821 64.9226L74.4908 65.5963C73.2161 68.6736 69.9385 72.1516 66.8611 73.6994C66.3695 73.9361 65.2587 74.3731 64.4029 74.6645C63.0008 75.1197 62.6184 75.1743 60.2148 75.1743C57.8294 75.1743 57.4288 75.1197 56.1177 74.6827C52.1663 73.3716 49.2347 70.4581 47.9054 66.5432C47.4319 65.1593 47.4137 61.135 47.8872 59.4598C48.5245 57.1472 49.6535 55.2353 50.8371 54.4887C51.6018 53.997 53.0222 54.1609 54.4607 54.8711Z" fill="white"/>
<path d="M113.022 61.7361C113.022 62.5555 112.111 66.3431 111.347 68.7102C108.47 77.5781 103.262 85.5355 96.4697 91.3443C91.6989 95.4413 88.3119 97.244 82.9402 98.5733C79.4805 99.4291 77.2226 99.7023 72.8341 99.8115C67.3532 99.9572 61.9451 99.4655 57.1014 98.4094C56.1727 98.2091 55.3898 98.0816 55.3351 98.1363C55.1166 98.3366 55.9542 101.123 56.6826 102.598C58.0119 105.329 59.5232 107.368 62.2182 110.063C65.0588 112.904 67.1711 114.47 70.4487 116.163C78.57 120.351 87.8931 120.916 97.453 117.766C107.541 114.47 114.952 108.516 118.94 100.503C121.598 95.1864 122.691 89.5051 122.29 83.0227C121.799 75.0288 118.849 67.1989 114.57 62.5738C113.896 61.8454 113.277 61.2627 113.186 61.2627C113.095 61.2627 113.022 61.4812 113.022 61.7361Z" fill="white"/>
<defs>
<linearGradient id="paint0_linear_683_2952" x1="5.5" y1="11" x2="141" y2="124.5" gradientUnits="userSpaceOnUse">
<stop stop-color="#45ADC6"/>
<stop offset="1" stop-color="#216778"/>
</linearGradient>
</defs>
</svg>


View File

@@ -0,0 +1,99 @@
{{- if .Values.backup.enabled }}
{{ $image := .Files.Get "images/backup.json" | fromJson }}
apiVersion: batch/v1
kind: CronJob
metadata:
name: {{ .Release.Name }}-backup
spec:
schedule: "{{ .Values.backup.schedule }}"
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 3
jobTemplate:
spec:
backoffLimit: 2
template:
metadata:
annotations:
checksum/config: {{ include (print $.Template.BasePath "/backup-script.yaml") . | sha256sum }}
checksum/secret: {{ include (print $.Template.BasePath "/backup-secret.yaml") . | sha256sum }}
spec:
restartPolicy: Never
containers:
- name: backup
image: "{{ index $image "image.name" }}@{{ index $image "containerimage.digest" }}"
command:
- /bin/sh
- /scripts/backup.sh
env:
- name: REPO_PREFIX
value: {{ required "s3Bucket is not specified!" .Values.backup.s3Bucket | quote }}
- name: CLEANUP_STRATEGY
value: {{ required "cleanupStrategy is not specified!" .Values.backup.cleanupStrategy | quote }}
- name: PGUSER
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-postgres-superuser
key: username
- name: PGPASSWORD
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-postgres-superuser
key: password
- name: PGHOST
value: {{ .Release.Name }}-postgres-rw
- name: PGPORT
value: "5432"
- name: PGDATABASE
value: postgres
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-backup
key: s3AccessKey
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-backup
key: s3SecretKey
- name: AWS_DEFAULT_REGION
value: {{ .Values.backup.s3Region }}
- name: RESTIC_PASSWORD
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-backup
key: resticPassword
volumeMounts:
- mountPath: /scripts
name: scripts
- mountPath: /tmp
name: tmp
- mountPath: /.cache
name: cache
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true
volumes:
- name: scripts
secret:
secretName: {{ .Release.Name }}-backup-script
- name: tmp
emptyDir: {}
- name: cache
emptyDir: {}
securityContext:
runAsNonRoot: true
runAsUser: 9000
runAsGroup: 9000
seccompProfile:
type: RuntimeDefault
{{- end }}

View File

@@ -0,0 +1,50 @@
{{- if .Values.backup.enabled }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-backup-script
stringData:
backup.sh: |
#!/bin/sh
set -e
set -o pipefail
JOB_ID="job-$(uuidgen|cut -f1 -d-)"
DB_LIST=$(psql -Atq -c 'SELECT datname FROM pg_catalog.pg_database;' | grep -v '^\(postgres\|app\|template.*\)$')
DB_LIST=$(echo "$DB_LIST" | shuf) # shuffle list
echo "Job ID: $JOB_ID"
echo "Target repo: $REPO_PREFIX"
echo "Cleanup strategy: $CLEANUP_STRATEGY"
echo "Start backup for:"
echo "$DB_LIST"
echo
echo "Backup started at `date +%Y-%m-%d\ %H:%M:%S`"
for db in $DB_LIST; do
(
set -x
restic -r "s3:${REPO_PREFIX}/$db" cat config >/dev/null 2>&1 || \
restic -r "s3:${REPO_PREFIX}/$db" init --repository-version 2
restic -r "s3:${REPO_PREFIX}/$db" unlock --remove-all >/dev/null 2>&1 || true # no locks, k8s takes care of it
pg_dump -Z0 -Ft -d "$db" | \
restic -r "s3:${REPO_PREFIX}/$db" backup --tag "$JOB_ID" --stdin --stdin-filename dump.tar
restic -r "s3:${REPO_PREFIX}/$db" tag --tag "$JOB_ID" --set "completed"
)
done
echo "Backup finished at `date +%Y-%m-%d\ %H:%M:%S`"
echo
echo "Run cleanup:"
echo
echo "Cleanup started at `date +%Y-%m-%d\ %H:%M:%S`"
for db in $DB_LIST; do
(
set -x
restic forget -r "s3:${REPO_PREFIX}/$db" --group-by=tags --keep-tag "completed" # keep completed snapshots only
restic forget -r "s3:${REPO_PREFIX}/$db" --group-by=tags $CLEANUP_STRATEGY
restic prune -r "s3:${REPO_PREFIX}/$db"
)
done
echo "Cleanup finished at `date +%Y-%m-%d\ %H:%M:%S`"
{{- end }}

View File

@@ -0,0 +1,11 @@
{{- if .Values.backup.enabled }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-backup
stringData:
s3AccessKey: {{ required "s3AccessKey is not specified!" .Values.backup.s3AccessKey }}
s3SecretKey: {{ required "s3SecretKey is not specified!" .Values.backup.s3SecretKey }}
resticPassword: {{ required "resticPassword is not specified!" .Values.backup.resticPassword }}
{{- end }}

View File

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}
spec:
type: {{ ternary "LoadBalancer" "ClusterIP" .Values.external }}
{{- if .Values.external }}
externalTrafficPolicy: Local
allocateLoadBalancerNodePorts: false
{{- end }}
ports:
- name: ferretdb
port: 27017
selector:
app: {{ .Release.Name }}

View File

@@ -0,0 +1,26 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}
spec:
replicas: {{ .Values.replicas }}
selector:
matchLabels:
app: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ .Release.Name }}
spec:
containers:
- name: ferretdb
image: ghcr.io/ferretdb/ferretdb:1.24.0
ports:
- containerPort: 27017
env:
- name: FERRETDB_POSTGRESQL_URL
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-postgres-app
key: uri

View File

@@ -0,0 +1,66 @@
apiVersion: batch/v1
kind: Job
metadata:
name: {{ .Release.Name }}-init-job
annotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": before-hook-creation
spec:
template:
metadata:
name: {{ .Release.Name }}-init-job
annotations:
checksum/config: {{ include (print $.Template.BasePath "/init-script.yaml") . | sha256sum }}
spec:
restartPolicy: Never
containers:
- name: postgres
image: ghcr.io/cloudnative-pg/postgresql:15.3
command:
- bash
- /scripts/init.sh
env:
- name: PGUSER
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-postgres-superuser
key: username
- name: PGPASSWORD
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-postgres-superuser
key: password
- name: PGHOST
value: {{ .Release.Name }}-postgres-rw
- name: PGPORT
value: "5432"
- name: PGDATABASE
value: postgres
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true
volumeMounts:
- mountPath: /etc/secret
name: secret
- mountPath: /scripts
name: scripts
securityContext:
fsGroup: 26
runAsGroup: 26
runAsNonRoot: true
runAsUser: 26
seccompProfile:
type: RuntimeDefault
volumes:
- name: secret
secret:
secretName: {{ .Release.Name }}-postgres-superuser
- name: scripts
secret:
secretName: {{ .Release.Name }}-init-script

View File

@@ -0,0 +1,101 @@
---
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-init-script
stringData:
init.sh: |
#!/bin/bash
set -e
echo "== create users"
{{- if .Values.users }}
psql -v ON_ERROR_STOP=1 <<\EOT
{{- range $user, $u := .Values.users }}
SELECT 'CREATE ROLE {{ $user }} LOGIN INHERIT;'
WHERE NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = '{{ $user }}')\gexec
ALTER ROLE {{ $user }} WITH PASSWORD '{{ $u.password }}' LOGIN INHERIT {{ ternary "REPLICATION" "NOREPLICATION" (default false $u.replication) }};
COMMENT ON ROLE {{ $user }} IS 'user managed by helm';
{{- end }}
EOT
{{- end }}
echo "== delete users"
MANAGED_USERS=$(echo '\du+' | psql | awk -F'|' '$4 == " user managed by helm" {print $1}' | awk NF=NF RS= OFS=' ')
DEFINED_USERS="{{ join " " (keys .Values.users) }}"
DELETE_USERS=$(for user in $MANAGED_USERS; do case " $DEFINED_USERS " in *" $user "*) :;; *) echo $user;; esac; done)
echo "users to delete: $DELETE_USERS"
for user in $DELETE_USERS; do
# https://stackoverflow.com/a/51257346/2931267
psql -v ON_ERROR_STOP=1 --echo-all <<EOT
REASSIGN OWNED BY $user TO postgres;
DROP OWNED BY $user;
DROP USER $user;
EOT
done
echo "== create roles"
psql -v ON_ERROR_STOP=1 --echo-all <<\EOT
SELECT 'CREATE ROLE app_admin NOINHERIT;'
WHERE NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = 'app_admin')\gexec
COMMENT ON ROLE app_admin IS 'role managed by helm';
EOT
echo "== grant privileges on databases to roles"
psql -v ON_ERROR_STOP=1 --echo-all -d "app" <<\EOT
ALTER DATABASE app OWNER TO app_admin;
DO $$
DECLARE
schema_record record;
BEGIN
-- Loop over all schemas
FOR schema_record IN SELECT schema_name FROM information_schema.schemata WHERE schema_name NOT IN ('pg_catalog', 'information_schema') LOOP
-- Changing Schema Ownership
EXECUTE format('ALTER SCHEMA %I OWNER TO %I', schema_record.schema_name, 'app_admin');
-- Add rights for the admin role
EXECUTE format('GRANT ALL ON SCHEMA %I TO %I', schema_record.schema_name, 'app_admin');
EXECUTE format('GRANT ALL ON ALL TABLES IN SCHEMA %I TO %I', schema_record.schema_name, 'app_admin');
EXECUTE format('GRANT ALL ON ALL SEQUENCES IN SCHEMA %I TO %I', schema_record.schema_name, 'app_admin');
EXECUTE format('GRANT ALL ON ALL FUNCTIONS IN SCHEMA %I TO %I', schema_record.schema_name, 'app_admin');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON TABLES TO %I', schema_record.schema_name, 'app_admin');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON SEQUENCES TO %I', schema_record.schema_name, 'app_admin');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON FUNCTIONS TO %I', schema_record.schema_name, 'app_admin');
END LOOP;
END$$;
EOT
echo "== setup event trigger for schema creation"
psql -v ON_ERROR_STOP=1 --echo-all -d "app" <<\EOT
CREATE OR REPLACE FUNCTION auto_grant_schema_privileges()
RETURNS event_trigger LANGUAGE plpgsql AS $$
DECLARE
obj record;
BEGIN
FOR obj IN SELECT * FROM pg_event_trigger_ddl_commands() WHERE command_tag = 'CREATE SCHEMA' LOOP
-- Set owner for schema
EXECUTE format('ALTER SCHEMA %I OWNER TO %I', obj.object_identity, 'app_admin');
-- Set privileges for admin role
EXECUTE format('GRANT ALL ON SCHEMA %I TO %I', obj.object_identity, 'app_admin');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON TABLES TO %I', obj.object_identity, 'app_admin');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON SEQUENCES TO %I', obj.object_identity, 'app_admin');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON FUNCTIONS TO %I', obj.object_identity, 'app_admin');
END LOOP;
END;
$$;
DROP EVENT TRIGGER IF EXISTS trigger_auto_grant;
CREATE EVENT TRIGGER trigger_auto_grant ON ddl_command_end
WHEN TAG IN ('CREATE SCHEMA')
EXECUTE PROCEDURE auto_grant_schema_privileges();
EOT
echo "== assign roles to users"
psql -v ON_ERROR_STOP=1 --echo-all <<\EOT
GRANT app_admin TO app;
{{- range $user, $u := $.Values.users }}
GRANT app_admin TO {{ $user }};
{{- end }}
EOT

View File

@@ -0,0 +1,52 @@
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: {{ .Release.Name }}-postgres
spec:
instances: {{ .Values.replicas }}
enableSuperuserAccess: true
minSyncReplicas: {{ .Values.quorum.minSyncReplicas }}
maxSyncReplicas: {{ .Values.quorum.maxSyncReplicas }}
monitoring:
enablePodMonitor: true
storage:
size: {{ required ".Values.size is required" .Values.size }}
{{- with .Values.storageClass }}
storageClass: {{ . }}
{{- end }}
inheritedMetadata:
labels:
policy.cozystack.io/allow-to-apiserver: "true"
{{- if .Values.users }}
managed:
roles:
{{- range $user, $config := .Values.users }}
- name: {{ $user }}
ensure: present
passwordSecret:
name: {{ printf "%s-user-%s" $.Release.Name $user }}
login: true
inRoles:
- app
{{- end }}
{{- end }}
{{- range $user, $config := .Values.users }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ printf "%s-user-%s" $.Release.Name $user }}
labels:
cnpg.io/reload: "true"
type: kubernetes.io/basic-auth
data:
username: {{ $user | b64enc }}
password: {{ $config.password | b64enc }}
{{- end }}

View File

@@ -0,0 +1,86 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"external": {
"type": "boolean",
"description": "Enable external access from outside the cluster",
"default": false
},
"size": {
"type": "string",
"description": "Persistent Volume size",
"default": "10Gi"
},
"replicas": {
"type": "number",
"description": "Number of Postgres replicas",
"default": 2
},
"storageClass": {
"type": "string",
"description": "StorageClass used to store the data",
"default": ""
},
"quorum": {
"type": "object",
"properties": {
"minSyncReplicas": {
"type": "number",
"description": "Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed.",
"default": 0
},
"maxSyncReplicas": {
"type": "number",
"description": "Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the number of instances).",
"default": 0
}
}
},
"backup": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Enable pereiodic backups",
"default": false
},
"s3Region": {
"type": "string",
"description": "The AWS S3 region where backups are stored",
"default": "us-east-1"
},
"s3Bucket": {
"type": "string",
"description": "The S3 bucket used for storing backups",
"default": "s3.example.org/postgres-backups"
},
"schedule": {
"type": "string",
"description": "Cron schedule for automated backups",
"default": "0 2 * * *"
},
"cleanupStrategy": {
"type": "string",
"description": "The strategy for cleaning up old backups",
"default": "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
},
"s3AccessKey": {
"type": "string",
"description": "The access key for S3, used for authentication",
"default": "oobaiRus9pah8PhohL1ThaeTa4UVa7gu"
},
"s3SecretKey": {
"type": "string",
"description": "The secret key for S3, used for authentication",
"default": "ju3eum4dekeich9ahM1te8waeGai0oog"
},
"resticPassword": {
"type": "string",
"description": "The password for Restic backup encryption",
"default": "ChaXoveekoh6eigh4siesheeda2quai0"
}
}
}
}
}

View File

@@ -0,0 +1,50 @@
## @section Common parameters
## @param external Enable external access from outside the cluster
## @param size Persistent Volume size
## @param replicas Number of Postgres replicas
## @param storageClass StorageClass used to store the data
##
external: false
size: 10Gi
replicas: 2
storageClass: ""
## Configuration for the quorum-based synchronous replication
## @param quorum.minSyncReplicas Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed.
## @param quorum.maxSyncReplicas Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the number of instances).
quorum:
minSyncReplicas: 0
maxSyncReplicas: 0
## @section Configuration parameters
## @param users [object] Users configuration
## Example:
## users:
## user1:
## password: strongpassword
## user2:
## password: hackme
##
users: {}
## @section Backup parameters
## @param backup.enabled Enable periodic backups
## @param backup.s3Region The AWS S3 region where backups are stored
## @param backup.s3Bucket The S3 bucket used for storing backups
## @param backup.schedule Cron schedule for automated backups
## @param backup.cleanupStrategy The strategy for cleaning up old backups
## @param backup.s3AccessKey The access key for S3, used for authentication
## @param backup.s3SecretKey The secret key for S3, used for authentication
## @param backup.resticPassword The password for Restic backup encryption
backup:
enabled: false
s3Region: us-east-1
s3Bucket: s3.example.org/postgres-backups
schedule: "0 2 * * *"
cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0

View File

@@ -0,0 +1,56 @@
## @section Common parameters
## @param external Enable external access from outside the cluster
## @param size Persistent Volume size
## @param replicas Number of Postgres replicas
##
external: false
size: 10Gi
replicas: 1
## Configuration for the quorum-based synchronous replication
## @param quorum.minSyncReplicas Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed.
## @param quorum.maxSyncReplicas Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the number of instances).
quorum:
minSyncReplicas: 0
maxSyncReplicas: 0
## @section Configuration parameters
## @param users [object] Users configuration
## Example:
## users:
## user1:
## password: strongpassword
## user2:
## password: hackme
##
users:
foo:
password: asd
bar:
password: asd
baz:
password: asd
boo:
password: asd
## @section Backup parameters
## @param backup.enabled Enable periodic backups
## @param backup.s3Region The AWS S3 region where backups are stored
## @param backup.s3Bucket The S3 bucket used for storing backups
## @param backup.schedule Cron schedule for automated backups
## @param backup.cleanupStrategy The strategy for cleaning up old backups
## @param backup.s3AccessKey The access key for S3, used for authentication
## @param backup.s3SecretKey The secret key for S3, used for authentication
## @param backup.resticPassword The password for Restic backup encryption
backup:
enabled: false
s3Region: us-east-1
s3Bucket: s3.example.org/postgres-backups
schedule: "0 2 * * *"
cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0

View File

@@ -1,23 +1,3 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
.helmignore
/logos
/Makefile

View File

@@ -1,7 +1,7 @@
apiVersion: v2
name: http-cache
description: Layer 7 load balancer and caching service
icon: https://www.svgrepo.com/show/373924/nginx.svg
icon: /logos/nginx.svg
# A chart can be either an 'application' or a 'library' chart.
#
@@ -16,10 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
version: 0.3.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"
appVersion: "1.25.3"

View File

@@ -1,22 +1,26 @@
PUSH := 1
LOAD := 0
REGISTRY := ghcr.io/aenix-io/cozystack
NGINX_CACHE_TAG = v0.1.0
TAG := v0.2.0
include ../../../scripts/common-envs.mk
include ../../../scripts/package.mk
image: image-nginx
image-nginx:
docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/nginx-cache \
--provenance false \
--tag $(REGISTRY)/nginx-cache:$(NGINX_CACHE_TAG) \
--tag $(REGISTRY)/nginx-cache:$(NGINX_CACHE_TAG)-$(TAG) \
--cache-from type=registry,ref=$(REGISTRY)/nginx-cache:$(NGINX_CACHE_TAG) \
--tag $(REGISTRY)/nginx-cache:$(call settag,$(NGINX_CACHE_TAG)) \
--tag $(REGISTRY)/nginx-cache:$(call settag,$(NGINX_CACHE_TAG)-$(TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/nginx-cache:latest \
--cache-to type=inline \
--metadata-file images/nginx-cache.json \
--push=$(PUSH) \
--load=$(LOAD)
echo "$(REGISTRY)/nginx-cache:$(NGINX_CACHE_TAG)" > images/nginx-cache.tag
echo "$(REGISTRY)/nginx-cache:$(call settag,$(NGINX_CACHE_TAG))@$$(yq e '."containerimage.digest"' images/nginx-cache.json -o json -r)" \
> images/nginx-cache.tag
rm -f images/nginx-cache.json
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md
update:
tag=$$(git ls-remote --tags --sort="v:refname" https://github.com/chrislim2888/IP2Location-C-Library | awk -F'[/^]' 'END{print $$3}') && \

View File

@@ -55,3 +55,21 @@ The deployment architecture is illustrated in the diagram below:
VTS module shows wrong upstream response time
- https://github.com/vozlt/nginx-module-vts/issues/198
## Parameters
### Common parameters
| Name | Description | Value |
| ------------------ | ----------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `size` | Persistent Volume size | `10Gi` |
| `storageClass` | StorageClass used to store the data | `""` |
| `haproxy.replicas` | Number of HAProxy replicas | `2` |
| `nginx.replicas` | Number of Nginx replicas | `2` |
### Configuration parameters
| Name | Description | Value |
| ----------- | ----------------------- | ----- |
| `endpoints` | Endpoints configuration | `[]` |
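A short example of these values, assuming the backend addresses listed under `endpoints` are reachable HTTP origins in your network (the addresses shown here are illustrative):

```yaml
external: true
size: 10Gi
storageClass: ""
haproxy:
  replicas: 2
nginx:
  replicas: 2
endpoints:
  - 10.100.3.1:80
  - 10.100.3.2:80
```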

View File

@@ -1,4 +0,0 @@
{
"containerimage.config.digest": "sha256:318fd8d0d6f6127387042f6ad150e87023d1961c7c5059dd5324188a54b0ab4e",
"containerimage.digest": "sha256:e3cf145238e6e45f7f13b9acaea445c94ff29f76a34ba9fa50828401a5a3cc68"
}

View File

@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/nginx-cache:v0.1.0
ghcr.io/aenix-io/cozystack/nginx-cache:v0.1.0@sha256:556bc8d29ee9e90b3d64d0481dcfc66483d055803315bba3d9ece17c0d97f32b

View File

@@ -1,4 +0,0 @@
{
"containerimage.config.digest": "sha256:b1916dbacb372ed89ea3f920f08ee68730be2edc016f2caa373a7bbfbad25845",
"containerimage.digest": "sha256:f77d5b63f1ed9dfda4725696d9170130939219a2465260b6ba941947460de2da"
}

View File

@@ -0,0 +1,10 @@
<svg width="144" height="144" viewBox="0 0 144 144" fill="none" xmlns="http://www.w3.org/2000/svg">
<rect width="144" height="144" rx="24" fill="url(#paint0_linear_681_2825)"/>
<path d="M26.0026 37.8588C26.0026 60.919 26.0026 83.9814 26.0026 107.046C25.973 108.323 26.1996 109.593 26.6692 110.783C27.1387 111.972 27.8418 113.056 28.7374 113.972C30.4539 115.659 32.7 116.709 35.1009 116.948C37.5019 117.187 39.9126 116.6 41.931 115.284C43.282 114.371 44.3881 113.143 45.1527 111.707C45.9174 110.271 46.3175 108.671 46.3181 107.046C46.3181 90.3528 46.2861 73.6597 46.3181 56.9666C61.6168 75.1889 76.9474 93.3856 92.31 111.557C94.4444 113.708 97.0875 115.291 99.997 116.162C102.906 117.032 105.989 117.162 108.962 116.539C111.061 116.128 112.973 115.057 114.415 113.485C115.857 111.913 116.754 109.921 116.974 107.804C117.009 84.2681 117.009 60.7343 116.974 37.2025C116.754 34.6907 115.595 32.3522 113.726 30.6486C111.858 28.945 109.415 28 106.881 28C104.346 28 101.903 28.945 100.035 30.6486C98.1663 32.3522 97.0074 34.6907 96.7869 37.2025C96.7869 54.1632 96.6844 71.1048 96.7869 88.0591C81.7616 70.4358 66.9219 52.6596 51.9543 34.9725C49.981 32.4554 47.3685 30.5073 44.3863 29.3291C41.4041 28.1509 38.1599 27.7852 34.9883 28.2698C32.5857 28.5359 30.3583 29.6493 28.7099 31.4084C27.0615 33.1675 26.101 35.4559 26.0026 37.8588Z" fill="white"/>
<defs>
<linearGradient id="paint0_linear_681_2825" x1="10" y1="15.5" x2="144" y2="131.5" gradientUnits="userSpaceOnUse">
<stop stop-color="#00C54A"/>
<stop offset="1" stop-color="#019639"/>
</linearGradient>
</defs>
</svg>


View File

@@ -74,7 +74,7 @@ data:
option redispatch 1
default-server observe layer7 error-limit 10 on-error mark-down
{{- range $i, $e := until (int $.Values.replicas) }}
{{- range $i, $e := until (int $.Values.nginx.replicas) }}
server cache{{ $i }} {{ $.Release.Name }}-nginx-cache-{{ $i }}:80 check
{{- end }}
{{- range $i, $e := $.Values.endpoints }}

View File

@@ -7,7 +7,7 @@ metadata:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: 2
replicas: {{ .Values.haproxy.replicas }}
selector:
matchLabels:
app: {{ .Release.Name }}-haproxy

View File

@@ -11,7 +11,7 @@ spec:
selector:
matchLabels:
app: {{ $.Release.Name }}-nginx-cache
{{- range $i := until 3 }}
{{- range $i := until (int $.Values.nginx.replicas) }}
---
apiVersion: apps/v1
kind: Deployment
@@ -52,7 +52,7 @@ spec:
shareProcessNamespace: true
containers:
- name: nginx
image: "{{ $.Files.Get "images/nginx-cache.tag" | trim }}@{{ index ($.Files.Get "images/nginx-cache.json" | fromJson) "containerimage.digest" }}"
image: "{{ $.Files.Get "images/nginx-cache.tag" | trim }}"
readinessProbe:
httpGet:
path: /healthz
@@ -81,7 +81,7 @@ spec:
- mountPath: /run
name: run
- name: reloader
image: "{{ $.Files.Get "images/nginx-cache.tag" | trim }}@{{ index ($.Files.Get "images/nginx-cache.json" | fromJson) "containerimage.digest" }}"
image: "{{ $.Files.Get "images/nginx-cache.tag" | trim }}"
command: ["/usr/bin/nginx-reloader.sh"]
#command: ["sleep", "infinity"]
volumeMounts:
@@ -114,6 +114,9 @@ spec:
resources:
requests:
storage: "{{ $.Values.size }}"
{{- with $.Values.storageClass }}
storageClassName: {{ . }}
{{- end }}
---
apiVersion: v1
kind: Service

View File

@@ -0,0 +1,47 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"external": {
"type": "boolean",
"description": "Enable external access from outside the cluster",
"default": false
},
"size": {
"type": "string",
"description": "Persistent Volume size",
"default": "10Gi"
},
"storageClass": {
"type": "string",
"description": "StorageClass used to store the data",
"default": ""
},
"haproxy": {
"type": "object",
"properties": {
"replicas": {
"type": "number",
"description": "Number of HAProxy replicas",
"default": 2
}
}
},
"nginx": {
"type": "object",
"properties": {
"replicas": {
"type": "number",
"description": "Number of Nginx replicas",
"default": 2
}
}
},
"endpoints": {
"type": "array",
"description": "Endpoints configuration",
"default": [],
"items": {}
}
}
}

View File

@@ -1,9 +1,30 @@
## @section Common parameters
## @param external Enable external access from outside the cluster
## @param size Persistent Volume size
## @param storageClass StorageClass used to store the data
## @param haproxy.replicas Number of HAProxy replicas
## @param nginx.replicas Number of Nginx replicas
##
external: false
size: 10Gi
endpoints:
- 10.100.3.1:80
- 10.100.3.11:80
- 10.100.3.2:80
- 10.100.3.12:80
- 10.100.3.3:80
- 10.100.3.13:80
storageClass: ""
haproxy:
replicas: 2
nginx:
replicas: 2
## @section Configuration parameters
## @param endpoints Endpoints configuration
## Example:
## endpoints:
## - 10.100.3.1:80
## - 10.100.3.11:80
## - 10.100.3.2:80
## - 10.100.3.12:80
## - 10.100.3.3:80
## - 10.100.3.13:80
##
endpoints: []

View File

@@ -0,0 +1,3 @@
.helmignore
/logos
/Makefile

View File

@@ -0,0 +1,25 @@
apiVersion: v2
name: kafka
description: Managed Kafka service
icon: /logos/kafka.svg
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.3.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "3.7.0"

View File

@@ -0,0 +1,4 @@
include ../../../scripts/package.mk
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md

View File

@@ -0,0 +1,21 @@
# Managed Kafka Service
## Parameters
### Common parameters
| Name | Description | Value |
| ------------------------ | ----------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `kafka.size` | Persistent Volume size for Kafka | `10Gi` |
| `kafka.replicas` | Number of Kafka replicas | `3` |
| `kafka.storageClass` | StorageClass used to store the Kafka data | `""` |
| `zookeeper.size` | Persistent Volume size for ZooKeeper | `5Gi` |
| `zookeeper.replicas` | Number of ZooKeeper replicas | `3` |
| `zookeeper.storageClass` | StorageClass used to store the ZooKeeper data | `""` |
### Configuration parameters
| Name | Description | Value |
| -------- | -------------------- | ----- |
| `topics` | Topics configuration | `[]` |
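For illustration, the sketch below provisions a three-broker cluster with one explicitly configured topic; the topic name and settings are examples only.

```yaml
external: false
kafka:
  size: 10Gi
  replicas: 3
  storageClass: ""
zookeeper:
  size: 5Gi
  replicas: 3
  storageClass: ""
topics:
  - name: Orders
    partitions: 3
    replicas: 3
    config:
      min.insync.replicas: 2
```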

View File

@@ -0,0 +1,10 @@
<svg width="144" height="144" viewBox="0 0 144 144" fill="none" xmlns="http://www.w3.org/2000/svg">
<rect width="144" height="144" rx="24" fill="url(#paint0_linear_681_2820)"/>
<path d="M91.0307 77.8185C86.8577 77.8185 83.1166 79.6818 80.5547 82.6154L73.9901 77.9315C74.6869 75.9978 75.087 73.9215 75.087 71.7482C75.087 69.6126 74.7008 67.5711 74.0269 65.666L80.5769 61.0318C83.1385 63.9505 86.8699 65.8037 91.0307 65.8037C98.7328 65.8037 105 59.4884 105 51.7247C105 43.961 98.7328 37.6457 91.0307 37.6457C83.3285 37.6457 77.0614 43.961 77.0614 51.7247C77.0614 53.1143 77.2697 54.4543 77.6435 55.7233L71.0891 60.3598C68.3512 56.9365 64.409 54.5463 59.9174 53.8166V45.8553C66.2451 44.5158 71.0128 38.8495 71.0128 32.079C71.0128 24.3153 64.7457 18 57.0435 18C49.3414 18 43.0742 24.3153 43.0742 32.079C43.0742 38.7589 47.7184 44.3552 53.9196 45.7903V53.8551C45.4567 55.3523 39 62.7961 39 71.7482C39 80.744 45.5206 88.2151 54.0446 89.6613V98.1772C47.7801 99.565 43.0742 105.196 43.0742 111.921C43.0742 119.685 49.3414 126 57.0435 126C64.7457 126 71.0128 119.685 71.0128 111.921C71.0128 105.196 66.307 99.565 60.0424 98.1772V89.6611C64.3569 88.9286 68.2601 86.6407 71.0252 83.2234L77.6337 87.9376C77.2669 89.1952 77.0614 90.5219 77.0614 91.8975C77.0614 99.6612 83.3285 105.976 91.0307 105.976C98.7328 105.976 105 99.6612 105 91.8975C105 84.1338 98.7328 77.8185 91.0307 77.8185ZM91.0307 44.8985C94.7656 44.8985 97.8034 47.9615 97.8034 51.7247C97.8034 55.4879 94.7656 58.5506 91.0307 58.5506C87.2958 58.5506 84.258 55.4879 84.258 51.7247C84.258 47.9615 87.2958 44.8985 91.0307 44.8985ZM50.2705 32.079C50.2705 28.3158 53.3086 25.2531 57.0435 25.2531C60.7785 25.2531 63.8163 28.3158 63.8163 32.079C63.8163 35.8422 60.7785 38.9049 57.0435 38.9049C53.3086 38.9049 50.2705 35.8422 50.2705 32.079ZM63.8163 111.921C63.8163 115.684 60.7785 118.747 57.0435 118.747C53.3086 118.747 50.2705 115.684 50.2705 111.921C50.2705 108.158 53.3086 105.095 57.0435 105.095C60.7785 105.095 63.8163 108.158 63.8163 111.921ZM57.043 81.2681C51.8339 81.2681 47.5962 76.998 47.5962 71.7482C47.5962 66.4982 51.8339 62.2273 57.043 62.2273C62.2519 62.2273 66.4895 66.4982 66.4895 71.7482C66.4895 76.998 62.2519 81.2681 57.043 81.2681ZM91.0307 98.7237C87.2958 98.7237 84.258 95.6607 84.258 91.8975C84.258 88.1343 87.2958 85.0716 91.0307 85.0716C94.7656 85.0716 97.8034 88.1343 97.8034 91.8975C97.8034 95.6607 94.7656 98.7237 91.0307 98.7237Z" fill="white"/>
<defs>
<linearGradient id="paint0_linear_681_2820" x1="140" y1="130.5" x2="4" y2="9.49999" gradientUnits="userSpaceOnUse">
<stop/>
<stop offset="1" stop-color="#434141"/>
</linearGradient>
</defs>
</svg>


View File

@@ -0,0 +1,78 @@
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
name: {{ .Release.Name }}
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
kafka:
replicas: {{ .Values.kafka.replicas }}
listeners:
- name: plain
port: 9092
type: internal
tls: false
- name: tls
port: 9093
type: internal
tls: true
- name: external
port: 9094
{{- if .Values.external }}
type: loadbalancer
{{- else }}
type: internal
{{- end }}
tls: false
config:
{{- if eq (int .Values.kafka.replicas) 1 }}
offsets.topic.replication.factor: 1
transaction.state.log.replication.factor: 1
transaction.state.log.min.isr: 1
default.replication.factor: 1
min.insync.replicas: 1
{{- else if eq (int .Values.kafka.replicas) 2 }}
offsets.topic.replication.factor: 2
transaction.state.log.replication.factor: 2
transaction.state.log.min.isr: 2
default.replication.factor: 2
min.insync.replicas: 2
{{- else }}
offsets.topic.replication.factor: 3
transaction.state.log.replication.factor: 3
transaction.state.log.min.isr: 2
default.replication.factor: 3
min.insync.replicas: 2
{{- end }}
storage:
type: jbod
volumes:
- id: 0
type: persistent-claim
{{- with .Values.kafka.size }}
size: {{ . }}
{{- end }}
{{- with .Values.kafka.storageClass }}
class: {{ . }}
{{- end }}
deleteClaim: true
zookeeper:
replicas: {{ .Values.zookeeper.replicas }}
storage:
type: persistent-claim
{{- with .Values.zookeeper.size }}
size: {{ . }}
{{- end }}
{{- with .Values.zookeeper.storageClass }}
class: {{ . }}
{{- end }}
deleteClaim: false
entityOperator:
topicOperator: {}
userOperator: {}
template:
pod:
metadata:
labels:
policy.cozystack.io/allow-to-apiserver: "true"

View File

@@ -0,0 +1,21 @@
{{- range $topic := .Values.topics }}
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
name: "{{ $.Release.Name }}-{{ kebabcase $topic.name }}"
labels:
strimzi.io/cluster: "{{ $.Release.Name }}"
spec:
topicName: "{{ $topic.name }}"
{{- with $topic.partitions }}
partitions: {{ . }}
{{- end }}
{{- with $topic.replicas }}
replicas: {{ . }}
{{- end }}
{{- with $topic.config }}
config:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,57 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"external": {
"type": "boolean",
"description": "Enable external access from outside the cluster",
"default": false
},
"kafka": {
"type": "object",
"properties": {
"size": {
"type": "string",
"description": "Persistent Volume size for Kafka",
"default": "10Gi"
},
"replicas": {
"type": "number",
"description": "Number of Kafka replicas",
"default": 3
},
"storageClass": {
"type": "string",
"description": "StorageClass used to store the Kafka data",
"default": ""
}
}
},
"zookeeper": {
"type": "object",
"properties": {
"size": {
"type": "string",
"description": "Persistent Volume size for ZooKeeper",
"default": "5Gi"
},
"replicas": {
"type": "number",
"description": "Number of ZooKeeper replicas",
"default": 3
},
"storageClass": {
"type": "string",
"description": "StorageClass used to store the ZooKeeper data",
"default": ""
}
}
},
"topics": {
"type": "array",
"description": "Topics configuration",
"default": [],
"items": {}
}
}
}

View File

@@ -0,0 +1,41 @@
## @section Common parameters
## @param external Enable external access from outside the cluster
## @param kafka.size Persistent Volume size for Kafka
## @param kafka.replicas Number of Kafka replicas
## @param kafka.storageClass StorageClass used to store the Kafka data
## @param zookeeper.size Persistent Volume size for ZooKeeper
## @param zookeeper.replicas Number of ZooKeeper replicas
## @param zookeeper.storageClass StorageClass used to store the ZooKeeper data
##
external: false
kafka:
size: 10Gi
replicas: 3
storageClass: ""
zookeeper:
size: 5Gi
replicas: 3
storageClass: ""
## @section Configuration parameters
## @param topics Topics configuration
## Example:
## topics:
## - name: Results
## partitions: 1
## replicas: 3
## config:
## min.insync.replicas: 2
## - name: Orders
## config:
## cleanup.policy: compact
## segment.ms: 3600000
## max.compaction.lag.ms: 5400000
## min.insync.replicas: 2
## partitions: 1
## replicas: 3
##
topics: []

View File

@@ -1,23 +1,3 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
.helmignore
/logos
/Makefile

View File

@@ -1,7 +1,7 @@
apiVersion: v2
name: kubernetes
description: Managed Kubernetes service
icon: https://upload.wikimedia.org/wikipedia/commons/thumb/3/39/Kubernetes_logo_without_workmark.svg/723px-Kubernetes_logo_without_workmark.svg.png
icon: /logos/kubernetes.svg
# A chart can be either an 'application' or a 'library' chart.
#
@@ -16,10 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
version: 0.9.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"
appVersion: "1.30.1"

View File

@@ -1,19 +1,23 @@
PUSH := 1
LOAD := 0
REGISTRY := ghcr.io/aenix-io/cozystack
TAG := v0.2.0
UBUNTU_CONTAINER_DISK_TAG = v1.29.1
UBUNTU_CONTAINER_DISK_TAG = v1.30.1
include ../../../scripts/common-envs.mk
include ../../../scripts/package.mk
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md
image: image-ubuntu-container-disk
image-ubuntu-container-disk:
docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/ubuntu-container-disk \
--provenance false \
--tag $(REGISTRY)/ubuntu-container-disk:$(UBUNTU_CONTAINER_DISK_TAG) \
--tag $(REGISTRY)/ubuntu-container-disk:$(UBUNTU_CONTAINER_DISK_TAG)-$(TAG) \
--cache-from type=registry,ref=$(REGISTRY)/ubuntu-container-disk:$(UBUNTU_CONTAINER_DISK_TAG) \
--tag $(REGISTRY)/ubuntu-container-disk:$(call settag,$(UBUNTU_CONTAINER_DISK_TAG)) \
--tag $(REGISTRY)/ubuntu-container-disk:$(call settag,$(UBUNTU_CONTAINER_DISK_TAG)-$(TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/ubuntu-container-disk:latest \
--cache-to type=inline \
--metadata-file images/ubuntu-container-disk.json \
--push=$(PUSH) \
--load=$(LOAD)
echo "$(REGISTRY)/ubuntu-container-disk:$(UBUNTU_CONTAINER_DISK_TAG)" > images/ubuntu-container-disk.tag
echo "$(REGISTRY)/ubuntu-container-disk:$(call settag,$(UBUNTU_CONTAINER_DISK_TAG))@$$(yq e '."containerimage.digest"' images/ubuntu-container-disk.json -o json -r)" \
> images/ubuntu-container-disk.tag
rm -f images/ubuntu-container-disk.json

View File

@@ -26,3 +26,27 @@ How to access to deployed cluster:
```
kubectl get secret -n <namespace> kubernetes-<clusterName>-admin-kubeconfig -o go-template='{{ printf "%s\n" (index .data "super-admin.conf" | base64decode) }}' > test
```
## Parameters
### Common parameters
| Name | Description | Value |
| ----------------------- | -------------------------------------------------------------------------------------------------------------------------------------- | ------------ |
| `host` | The hostname used to access the Kubernetes cluster externally (defaults to using the cluster name as a subdomain for the tenant host). | `""` |
| `controlPlane.replicas` | Number of replicas for Kubernetes control-plane components                                                                              | `2`          |
| `storageClass` | StorageClass used to store user data | `replicated` |
| `nodeGroups` | nodeGroups configuration | `{}` |
### Cluster Addons
| Name | Description | Value |
| ------------------------------------ | ---------------------------------------------------------------------------------- | ------- |
| `addons.certManager.enabled` | Enables the cert-manager | `false` |
| `addons.certManager.valuesOverride` | Custom values to override | `{}` |
| `addons.ingressNginx.enabled`        | Enable Ingress-NGINX controller (expects nodes with the 'ingress-nginx' role)       | `false` |
| `addons.ingressNginx.valuesOverride` | Custom values to override | `{}` |
| `addons.ingressNginx.hosts` | List of domain names that should be passed through to the cluster by upper cluster | `[]` |
| `addons.fluxcd.enabled` | Enables Flux CD | `false` |
| `addons.fluxcd.valuesOverride` | Custom values to override | `{}` |
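The sketch below shows how these parameters fit together; the node-group name, sizes, and hostname are illustrative, and the node-group fields (`minReplicas`, `maxReplicas`, `resources`, `ephemeralStorage`, `roles`) follow the chart template further down in this diff.

```yaml
host: ""                  # empty: defaults to <release-name>.<tenant-host>
controlPlane:
  replicas: 2
storageClass: replicated
nodeGroups:
  md0:                    # illustrative group name
    minReplicas: 0
    maxReplicas: 2
    resources:
      cpu: 2
      memory: 1024Mi
    ephemeralStorage: 20Gi
    roles:
      - ingress-nginx
addons:
  certManager:
    enabled: true
    valuesOverride: {}
  ingressNginx:
    enabled: true
    hosts:
      - example.org       # illustrative domain passed through by the upper cluster
    valuesOverride: {}
  fluxcd:
    enabled: false
    valuesOverride: {}
```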

View File

@@ -1,4 +0,0 @@
{
"containerimage.config.digest": "sha256:ee8968be63c7c45621ec45f3687211e0875acb24e8d9784e8d2ebcbf46a3538c",
"containerimage.digest": "sha256:16c3c07e74212585786dc1f1ae31d3ab90a575014806193e8e37d1d7751cb084"
}

View File

@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/ubuntu-container-disk:v1.29.1
ghcr.io/aenix-io/cozystack/ubuntu-container-disk:v1.30.1@sha256:5ce80a453073c4f44347409133fc7b15f1d2f37a564d189871a4082fc552ff0f

View File

@@ -26,8 +26,8 @@ RUN qemu-img resize image.img 5G \
&& guestfish --remote sh "curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg" \
&& guestfish --remote sh 'echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list' \
# kubernetes repo
&& guestfish --remote sh "curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg" \
&& guestfish --remote sh "echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list" \
&& guestfish --remote sh "curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg" \
&& guestfish --remote sh "echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list" \
# install containerd
&& guestfish --remote command "apt-get update -y" \
&& guestfish --remote command "apt-get install -y containerd.io" \

File diff suppressed because one or more lines are too long


View File

@@ -14,7 +14,14 @@ spec:
metadata:
labels:
app: {{ .Release.Name }}-cluster-autoscaler
policy.cozystack.io/allow-to-apiserver: "true"
spec:
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: "NoSchedule"
containers:
- image: ghcr.io/kvaps/test:cluster-autoscaller
name: cluster-autoscaler

View File

@@ -2,6 +2,54 @@
{{- $etcd := index $myNS.metadata.annotations "namespace.cozystack.io/etcd" }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}
{{- $kubevirtmachinetemplateNames := list }}
{{- define "kubevirtmachinetemplate" -}}
spec:
virtualMachineBootstrapCheck:
checkStrategy: ssh
virtualMachineTemplate:
metadata:
namespace: {{ $.Release.Namespace }}
labels:
{{- range .group.roles }}
node-role.kubernetes.io/{{ . }}: ""
{{- end }}
spec:
runStrategy: Always
template:
metadata:
labels:
{{- range .group.roles }}
node-role.kubernetes.io/{{ . }}: ""
{{- end }}
spec:
domain:
cpu:
threads: 1
cores: {{ .group.resources.cpu }}
sockets: 1
devices:
disks:
- name: system
disk:
bus: virtio
pciAddress: 0000:07:00.0
- name: ephemeral
disk:
bus: virtio
pciAddress: 0000:08:00.0
networkInterfaceMultiqueue: true
memory:
guest: {{ .group.resources.memory }}
evictionStrategy: External
volumes:
- name: system
containerDisk:
image: "{{ $.Files.Get "images/ubuntu-container-disk.tag" | trim }}"
- name: ephemeral
emptyDisk:
capacity: {{ .group.ephemeralStorage | default "20Gi" }}
{{- end }}
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
@@ -39,7 +87,9 @@ metadata:
spec:
dataStoreName: "{{ $etcd }}"
addons:
coreDNS: {}
coreDNS:
dnsServiceIPs:
- 10.95.0.10
konnectivity: {}
kubelet:
cgroupfs: systemd
@@ -54,8 +104,11 @@ spec:
hostname: {{ .Values.host | default (printf "%s.%s" .Release.Name $host) }}:443
className: "{{ $ingress }}"
deployment:
podAdditionalMetadata:
labels:
policy.cozystack.io/allow-to-etcd: "true"
replicas: 2
version: 1.29.0
version: 1.30.1
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: KubevirtCluster
@@ -64,87 +117,120 @@ metadata:
cluster.x-k8s.io/managed-by: kamaji
name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
{{- range $groupName, $group := .Values.nodeGroups }}
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
name: {{ .Release.Name }}-md-0
namespace: {{ .Release.Namespace }}
name: {{ $.Release.Name }}-{{ $groupName }}
namespace: {{ $.Release.Namespace }}
spec:
template:
spec:
diskSetup:
filesystems:
- device: /dev/vdb
filesystem: xfs
label: ephemeral
partition: "none"
mounts:
- ["LABEL=ephemeral", "/ephemeral"]
- ["/ephemeral/kubelet", "/var/lib/kubelet", "none", "bind,nofail"]
- ["/ephemeral/containerd", "/var/lib/containerd", "none", "bind,nofail"]
preKubeadmCommands:
- sed -i 's|root:x:|root::|' /etc/passwd
- systemctl stop containerd.service
- mkdir -p /ephemeral/kubelet /ephemeral/containerd
- mount -o bind /ephemeral/kubelet /var/lib/kubelet
- mount -o bind /ephemeral/containerd /var/lib/containerd
- systemctl start containerd.service
joinConfiguration:
nodeRegistration:
kubeletExtraArgs: {}
discovery:
bootstrapToken:
apiServerEndpoint: {{ .Release.Name }}.{{ .Release.Namespace }}.svc:6443
apiServerEndpoint: {{ $.Release.Name }}.{{ $.Release.Namespace }}.svc:6443
initConfiguration:
skipPhases:
- addon/kube-proxy
---
{{- $context := deepCopy $ }}
{{- $_ := set $context "group" $group }}
{{- $kubevirtmachinetemplate := include "kubevirtmachinetemplate" $context }}
{{- $kubevirtmachinetemplateHash := $kubevirtmachinetemplate | sha256sum | trunc 6 }}
{{- $kubevirtmachinetemplateName := printf "%s-%s-%s" $.Release.Name $groupName $kubevirtmachinetemplateHash }}
{{- $kubevirtmachinetemplateNames = append $kubevirtmachinetemplateNames $kubevirtmachinetemplateName }}
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: KubevirtMachineTemplate
metadata:
name: {{ .Release.Name }}-md-0
namespace: {{ .Release.Namespace }}
name: {{ $.Release.Name }}-{{ $groupName }}-{{ $kubevirtmachinetemplateHash }}
namespace: {{ $.Release.Namespace }}
spec:
template:
spec:
virtualMachineBootstrapCheck:
checkStrategy: ssh
virtualMachineTemplate:
metadata:
namespace: {{ .Release.Namespace }}
spec:
runStrategy: Always
template:
spec:
domain:
cpu:
threads: 1
cores: 2
sockets: 1
devices:
disks:
- disk:
bus: virtio
name: containervolume
networkInterfaceMultiqueue: true
memory:
guest: 1024Mi
evictionStrategy: External
volumes:
- containerDisk:
image: "{{ $.Files.Get "images/ubuntu-container-disk.tag" | trim }}@{{ index ($.Files.Get "images/ubuntu-container-disk.json" | fromJson) "containerimage.digest" }}"
name: containervolume
{{- $kubevirtmachinetemplate | nindent 4 }}
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
name: {{ .Release.Name }}-md-0
namespace: {{ .Release.Namespace }}
name: {{ $.Release.Name }}-{{ $groupName }}
namespace: {{ $.Release.Namespace }}
annotations:
cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "2"
cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "0"
capacity.cluster-autoscaler.kubernetes.io/memory: "1024Mi"
capacity.cluster-autoscaler.kubernetes.io/cpu: "2"
cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "{{ $group.minReplicas }}"
cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "{{ $group.maxReplicas }}"
capacity.cluster-autoscaler.kubernetes.io/memory: "{{ $group.resources.memory }}"
capacity.cluster-autoscaler.kubernetes.io/cpu: "{{ $group.resources.cpu }}"
spec:
clusterName: {{ .Release.Name }}
selector:
matchLabels: null
clusterName: {{ $.Release.Name }}
template:
metadata:
labels:
cluster.x-k8s.io/cluster-name: {{ $.Release.Name }}
cluster.x-k8s.io/deployment-name: {{ $.Release.Name }}-{{ $groupName }}
{{- range $group.roles }}
node-role.kubernetes.io/{{ . }}: ""
{{- end }}
spec:
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
name: {{ .Release.Name }}-md-0
namespace: default
clusterName: {{ .Release.Name }}
name: {{ $.Release.Name }}-{{ $groupName }}
namespace: {{ $.Release.Namespace }}
clusterName: {{ $.Release.Name }}
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: KubevirtMachineTemplate
name: {{ .Release.Name }}-md-0
name: {{ $.Release.Name }}-{{ $groupName }}-{{ $kubevirtmachinetemplateHash }}
namespace: default
version: v1.23.10
version: v1.30.1
{{- end }}
---
{{- /*
We must preserve all previous KubevirtMachineTemplates for as long as a MachineSet still references them.
*/ -}}
{{- $mss := (lookup "cluster.x-k8s.io/v1beta1" "MachineSet" $.Release.Namespace "").items }}
{{- $oldKubevirtmachinetemplates := dict }}
{{- range $kmt := (lookup "infrastructure.cluster.x-k8s.io/v1alpha1" "KubevirtMachineTemplate" .Release.Namespace "").items }}
{{- range $or := $kmt.metadata.ownerReferences }}
{{- if and (eq $or.kind "Cluster") (eq $or.name $.Release.Name) }}
{{- range $ms := $mss }}
{{- if and (eq $ms.spec.template.spec.infrastructureRef.kind "KubevirtMachineTemplate") (eq $ms.spec.template.spec.infrastructureRef.name $kmt.metadata.name) }}
{{- if not (has $kmt.metadata.name $kubevirtmachinetemplateNames) }}
{{- $oldKubevirtmachinetemplates = merge $oldKubevirtmachinetemplates (dict $kmt.metadata.name $kmt) }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- range $oldKubevirtmachinetemplates }}
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: KubevirtMachineTemplate
metadata:
name: {{ .metadata.name }}
namespace: {{ .metadata.namespace }}
spec:
{{- .spec | toYaml | nindent 2 }}
{{- end }}

View File

@@ -13,15 +13,14 @@ spec:
metadata:
labels:
app: {{ .Release.Name }}-kcsi-driver
policy.cozystack.io/allow-to-apiserver: "true"
spec:
serviceAccountName: {{ .Release.Name }}-kcsi
priorityClassName: system-cluster-critical
nodeSelector:
node-role.kubernetes.io/control-plane: ""
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: "NoSchedule"
containers:
@@ -49,7 +48,7 @@ spec:
fieldRef:
fieldPath: metadata.namespace
- name: INFRACLUSTER_LABELS
value: "csi-driver/cluster=test"
value: "cluster.x-k8s.io/cluster-name={{ .Release.Name }}"
- name: INFRA_STORAGE_CLASS_ENFORCEMENT
valueFrom:
configMapKeyRef:

View File

@@ -0,0 +1,26 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ .Release.Name }}-dashboard-resources
rules:
- apiGroups:
- networking.k8s.io
resources:
- ingresses
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]
- apiGroups:
- ""
resources:
- secrets
resourceNames:
- {{ .Release.Name }}-admin-kubeconfig
verbs: ["get", "list", "watch"]
- apiGroups:
- ""
resources:
- services
resourceNames:
- {{ .Release.Name }}
verbs: ["get", "list", "watch"]

View File

@@ -0,0 +1,56 @@
{{- if .Values.addons.certManager.enabled }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: {{ .Release.Name }}-cert-manager
labels:
cozystack.io/repository: system
cozystack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
releaseName: cert-manager
chart:
spec:
chart: cozy-cert-manager
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
kubeConfig:
secretRef:
name: {{ .Release.Name }}-kubeconfig
targetNamespace: cozy-cert-manager
storageNamespace: cozy-cert-manager
install:
createNamespace: true
remediation:
retries: -1
upgrade:
remediation:
retries: -1
{{- if .Values.addons.certManager.valuesOverride }}
valuesFrom:
- kind: Secret
name: {{ .Release.Name }}-cert-manager-values-override
valuesKey: values
{{- end }}
dependsOn:
{{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" .Release.Namespace .Release.Name }}
- name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
{{- end }}
- name: {{ .Release.Name }}-cilium
namespace: {{ .Release.Namespace }}
{{- end }}
{{- if .Values.addons.certManager.valuesOverride }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-cert-manager-values-override
stringData:
values: |
{{- toYaml .Values.addons.certManager.valuesOverride | nindent 4 }}
{{- end }}

View File

@@ -1,4 +1,4 @@
apiVersion: helm.toolkit.fluxcd.io/v2beta1
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: {{ .Release.Name }}-cilium
@@ -6,7 +6,7 @@ metadata:
cozystack.io/repository: system
cozystack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 1m
interval: 5m
releaseName: cilium
chart:
spec:
@@ -23,10 +23,17 @@ spec:
storageNamespace: cozy-cilium
install:
createNamespace: true
remediation:
retries: -1
upgrade:
remediation:
retries: -1
values:
cilium:
tunnel: disabled
autoDirectNodeRoutes: true
autoDirectNodeRoutes: false
bpf:
masquerade: true
cgroup:
autoMount:
enabled: true
@@ -38,9 +45,11 @@ spec:
chainingMode: ~
customConf: false
configMap: ""
routingMode: native
routingMode: tunnel
enableIPv4Masquerade: true
ipv4NativeRoutingCIDR: "10.244.0.0/16"
ipv4NativeRoutingCIDR: ""
dependsOn:
{{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" .Release.Namespace .Release.Name }}
- name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@@ -1,4 +1,4 @@
apiVersion: helm.toolkit.fluxcd.io/v2beta1
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: {{ .Release.Name }}-csi
@@ -6,7 +6,7 @@ metadata:
cozystack.io/repository: system
cozystack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 1m
interval: 5m
releaseName: csi
chart:
spec:
@@ -23,6 +23,17 @@ spec:
storageNamespace: cozy-csi
install:
createNamespace: true
remediation:
retries: -1
upgrade:
remediation:
retries: -1
{{- with .Values.storageClass }}
values:
storageClass: "{{ . }}"
{{- end }}
dependsOn:
{{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" .Release.Namespace .Release.Name }}
- name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@@ -12,19 +12,31 @@ spec:
spec:
serviceAccountName: {{ .Release.Name }}-flux-teardown
restartPolicy: Never
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: "NoSchedule"
containers:
- name: kubectl
image: docker.io/clastix/kubectl:v1.29.1
image: docker.io/clastix/kubectl:v1.30.1
command:
- kubectl
- --namespace={{ .Release.Namespace }}
- patch
- helmrelease
- {{ .Release.Name }}-cilium
- {{ .Release.Name }}-csi
- -p
- '{"spec": {"suspend": true}}'
- --type=merge
- /bin/sh
- -c
- |
kubectl
--namespace={{ .Release.Namespace }}
patch
helmrelease
{{ .Release.Name }}-cilium
{{ .Release.Name }}-csi
{{ .Release.Name }}-cert-manager
{{ .Release.Name }}-ingress-nginx
{{ .Release.Name }}-fluxcd-operator
{{ .Release.Name }}-fluxcd
-p '{"spec": {"suspend": true}}'
--type=merge --field-manager=flux-client-side-apply || true
---
apiVersion: v1
kind: ServiceAccount
@@ -54,6 +66,10 @@ rules:
resourceNames:
- {{ .Release.Name }}-cilium
- {{ .Release.Name }}-csi
- {{ .Release.Name }}-cert-manager
- {{ .Release.Name }}-ingress-nginx
- {{ .Release.Name }}-fluxcd-operator
- {{ .Release.Name }}-fluxcd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding

View File

@@ -0,0 +1,101 @@
{{- if .Values.addons.fluxcd.enabled }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: {{ .Release.Name }}-fluxcd-operator
labels:
cozystack.io/repository: system
cozystack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
releaseName: fluxcd-operator
chart:
spec:
chart: cozy-fluxcd-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
kubeConfig:
secretRef:
name: {{ .Release.Name }}-kubeconfig
targetNamespace: cozy-fluxcd
storageNamespace: cozy-fluxcd
install:
createNamespace: true
remediation:
retries: -1
upgrade:
remediation:
retries: -1
values:
flux-operator:
fullnameOverride: flux-operator
tolerations: []
hostNetwork: false
dependsOn:
{{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" .Release.Namespace .Release.Name }}
- name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
{{- end }}
- name: {{ .Release.Name }}-cilium
namespace: {{ .Release.Namespace }}
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: {{ .Release.Name }}-fluxcd
labels:
cozystack.io/repository: system
cozystack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
releaseName: fluxcd
chart:
spec:
chart: cozy-fluxcd
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
kubeConfig:
secretRef:
name: {{ .Release.Name }}-kubeconfig
targetNamespace: cozy-fluxcd
storageNamespace: cozy-fluxcd
install:
createNamespace: true
remediation:
retries: -1
upgrade:
remediation:
retries: -1
{{- if .Values.addons.fluxcd.valuesOverride }}
valuesFrom:
- kind: Secret
name: {{ .Release.Name }}-fluxcd-values-override
valuesKey: values
{{- end }}
dependsOn:
{{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" .Release.Namespace .Release.Name }}
- name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
{{- end }}
- name: {{ .Release.Name }}-cilium
namespace: {{ .Release.Namespace }}
- name: {{ .Release.Name }}-fluxcd-operator
namespace: {{ .Release.Namespace }}
{{- end }}
{{- if .Values.addons.fluxcd.valuesOverride }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-fluxcd-values-override
stringData:
values: |
{{- toYaml .Values.addons.fluxcd.valuesOverride | nindent 4 }}
{{- end }}

View File

@@ -0,0 +1,66 @@
{{- if .Values.addons.ingressNginx.enabled }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: {{ .Release.Name }}-ingress-nginx
labels:
cozystack.io/repository: system
cozystack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
releaseName: ingress-nginx
chart:
spec:
chart: cozy-ingress-nginx
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
kubeConfig:
secretRef:
name: {{ .Release.Name }}-kubeconfig
targetNamespace: cozy-ingress-nginx
storageNamespace: cozy-ingress-nginx
install:
createNamespace: true
remediation:
retries: -1
upgrade:
remediation:
retries: -1
values:
ingress-nginx:
fullnameOverride: ingress-nginx
controller:
kind: DaemonSet
hostNetwork: true
service:
enabled: false
nodeSelector:
node-role.kubernetes.io/ingress-nginx: ""
{{- if .Values.addons.ingressNginx.valuesOverride }}
valuesFrom:
- kind: Secret
name: {{ .Release.Name }}-ingress-nginx-values-override
valuesKey: values
{{- end }}
dependsOn:
{{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" .Release.Namespace .Release.Name }}
- name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
{{- end }}
- name: {{ .Release.Name }}-cilium
namespace: {{ .Release.Namespace }}
{{- end }}
{{- if .Values.addons.ingressNginx.valuesOverride }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-ingress-nginx-values-override
stringData:
values: |
{{- toYaml .Values.addons.ingressNginx.valuesOverride | nindent 4 }}
{{- end }}

View File

@@ -0,0 +1,58 @@
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- if .Values.addons.ingressNginx.hosts }}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Release.Name }}-ingress-nginx
annotations:
nginx.ingress.kubernetes.io/backend-protocol: AUTO_HTTP
nginx.ingress.kubernetes.io/configuration-snippet: |
if ($scheme = http) {
set $proxy_upstream_name "{{ .Release.Namespace }}-{{ .Release.Name }}-ingress-nginx-80";
set $proxy_host $proxy_upstream_name;
}
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
ingressClassName: "{{ $ingress }}"
rules:
{{- range .Values.addons.ingressNginx.hosts }}
- host: {{ . | quote }}
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: {{ $.Release.Name }}-ingress-nginx
port:
number: 443
- path: /
pathType: ImplementationSpecific
backend:
service:
name: {{ $.Release.Name }}-ingress-nginx
port:
number: 80
{{- end }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}-ingress-nginx
spec:
ports:
- appProtocol: http
name: http
port: 80
targetPort: 80
- appProtocol: https
name: https
port: 443
targetPort: 443
selector:
cluster.x-k8s.io/cluster-name: {{ .Release.Name }}
node-role.kubernetes.io/ingress-nginx: ""
{{- end }}

View File

@@ -13,7 +13,14 @@ spec:
metadata:
labels:
k8s-app: {{ .Release.Name }}-kccm
policy.cozystack.io/allow-to-apiserver: "true"
spec:
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: "NoSchedule"
containers:
- name: kubevirt-cloud-controller-manager
args:
@@ -44,6 +51,4 @@ spec:
- secret:
secretName: {{ .Release.Name }}-admin-kubeconfig
name: kubeconfig
tolerations:
- operator: Exists
serviceAccountName: {{ .Release.Name }}-kccm

View File

@@ -1,11 +1,82 @@
{
"$schema": "http://json-schema.org/schema#",
"type": "object",
"properties": {
"host": {
"type": "string",
"title": "Domain name for this kubernetes cluster",
"description": "This host will be used for all apps deployed in this tenant"
"title": "Chart Values",
"type": "object",
"properties": {
"host": {
"type": "string",
"description": "The hostname used to access the Kubernetes cluster externally (defaults to using the cluster name as a subdomain for the tenant host).",
"default": ""
},
"controlPlane": {
"type": "object",
"properties": {
"replicas": {
"type": "number",
"description": "Number of replicas for Kubernetes contorl-plane components",
"default": 2
}
}
},
"storageClass": {
"type": "string",
"description": "StorageClass used to store user data",
"default": "replicated"
},
"addons": {
"type": "object",
"properties": {
"certManager": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Enables the cert-manager",
"default": false
},
"valuesOverride": {
"type": "object",
"description": "Custom values to override",
"default": {}
}
}
},
"ingressNginx": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Enable Ingress-NGINX controller (expect nodes with 'ingress-nginx' role)",
"default": false
},
"valuesOverride": {
"type": "object",
"description": "Custom values to override",
"default": {}
},
"hosts": {
"type": "array",
"description": "List of domain names that should be passed through to the cluster by upper cluster",
"default": [],
"items": {}
}
}
},
"fluxcd": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Enables Flux CD",
"default": false
},
"valuesOverride": {
"type": "object",
"description": "Custom values to override",
"default": {}
}
}
}
}
}
}
}
}
}

View File

@@ -1 +1,60 @@
## @section Common parameters
## @param host The hostname used to access the Kubernetes cluster externally (defaults to using the cluster name as a subdomain for the tenant host).
## @param controlPlane.replicas Number of replicas for Kubernetes control-plane components
## @param storageClass StorageClass used to store user data
##
host: ""
controlPlane:
replicas: 2
storageClass: replicated
## @param nodeGroups [object] nodeGroups configuration
##
nodeGroups:
md0:
minReplicas: 0
maxReplicas: 10
resources:
cpu: 2
memory: 1024Mi
ephemeralStorage: 20Gi
roles:
- ingress-nginx
## @section Cluster Addons
##
addons:
## Cert-manager: automatically creates and manages SSL/TLS certificate
##
certManager:
## @param addons.certManager.enabled Enables the cert-manager
## @param addons.certManager.valuesOverride Custom values to override
enabled: false
valuesOverride: {}
## Ingress-NGINX Controller
##
ingressNginx:
## @param addons.ingressNginx.enabled Enable Ingress-NGINX controller (expects nodes with the 'ingress-nginx' role)
## @param addons.ingressNginx.valuesOverride Custom values to override
##
enabled: false
## @param addons.ingressNginx.hosts List of domain names that the parent cluster should pass through to this cluster
## e.g:
## hosts:
## - example.org
## - foo.example.net
##
hosts: []
valuesOverride: {}
## Flux CD
##
fluxcd:
## @param addons.fluxcd.enabled Enables Flux CD
## @param addons.fluxcd.valuesOverride Custom values to override
##
enabled: false
valuesOverride: {}
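Taken together, these defaults can be overridden per tenant cluster. Below is a minimal sketch of a values file that exercises the options above; the host name, group names, and sizes are purely illustrative and are not defaults shipped with the chart:

```yaml
# Illustrative tenant cluster values (hypothetical names and sizes).
host: "k8s.example.org"        # external hostname for the cluster
controlPlane:
  replicas: 2                  # control-plane component replicas
storageClass: replicated       # StorageClass for user data
nodeGroups:
  md0:                         # general-purpose workers
    minReplicas: 1
    maxReplicas: 5
    resources:
      cpu: 4
      memory: 8Gi
    ephemeralStorage: 40Gi
    roles: []
  ingress:                     # dedicated ingress-nginx nodes
    minReplicas: 2
    maxReplicas: 2
    resources:
      cpu: 2
      memory: 2Gi
    roles:
      - ingress-nginx
addons:
  certManager:
    enabled: true
    valuesOverride: {}
  ingressNginx:
    enabled: true
    hosts:
      - example.org
      - foo.example.net
    valuesOverride: {}
  fluxcd:
    enabled: true
    valuesOverride: {}
```

Each entry under `nodeGroups` in such a file is rendered into its own KubeadmConfigTemplate, KubevirtMachineTemplate, and MachineDeployment, as in the templates shown earlier.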

View File

@@ -1,23 +1,3 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
.helmignore
/logos
/Makefile

View File

@@ -1,7 +1,7 @@
apiVersion: v2
name: mysql
description: Managed MariaDB service
icon: https://static-00.iconduck.com/assets.00/mariadb-icon-512x340-txozryr2.png
icon: /logos/mariadb.svg
# A chart can be either an 'application' or a 'library' chart.
#
@@ -16,10 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
version: 0.4.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"
appVersion: "11.0.2"

View File

@@ -0,0 +1,4 @@
include ../../../scripts/package.mk
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md

View File

@@ -62,3 +62,35 @@ more details:
mysqldump -h <slave> -P 3306 -u<user> -p<password> --column-statistics=0 <database> <table> ~/tmp/fix-table.sql
mysql -h <master> -P 3306 -u<user> -p<password> <database> < ~/tmp/fix-table.sql
```
## Parameters
### Common parameters
| Name | Description | Value |
| -------------- | ----------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `size` | Persistent Volume size | `10Gi` |
| `replicas` | Number of MariaDB replicas | `2` |
| `storageClass` | StorageClass used to store the data | `""` |
### Configuration parameters
| Name | Description | Value |
| ----------- | ----------------------- | ----- |
| `users` | Users configuration | `{}` |
| `databases` | Databases configuration | `[]` |
### Backup parameters
| Name | Description | Value |
| ------------------------ | ---------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled`         | Enable periodic backups                         | `false`                                                 |
| `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/postgres-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |

View File

@@ -0,0 +1,12 @@
<svg width="144" height="144" viewBox="0 0 144 144" fill="none" xmlns="http://www.w3.org/2000/svg">
<rect x="-0.00195312" width="144" height="144" rx="24" fill="url(#paint0_linear_683_2930)"/>
<path d="M133.191 29.0022C131.213 29.0654 131.839 29.6354 127.564 30.6873C123.248 31.7496 117.975 31.4239 113.327 33.3733C99.4516 39.1924 96.6676 59.0813 84.0535 66.2059C74.6247 71.5318 65.112 71.9565 56.5594 74.6365C50.9389 76.399 44.7906 80.0135 39.6982 84.4018C35.7455 87.8093 35.6423 90.8054 31.5123 95.0791C27.0947 99.6504 13.9551 95.1564 8 102.153C9.91835 104.093 10.7594 104.636 14.5398 104.133C13.7571 105.616 9.14332 106.866 10.0465 109.049C10.9968 111.345 22.1511 112.901 32.2908 106.78C37.0131 103.929 40.7743 99.8193 48.1288 98.8384C57.6459 97.5699 68.6093 99.652 79.6268 101.241C77.9932 106.112 74.7133 109.351 72.086 113.231C71.2724 114.107 73.7202 114.205 76.5126 113.675C81.5359 112.433 85.1561 111.433 88.9472 109.227C93.6047 106.515 94.3104 99.5639 100.025 98.0599C103.209 102.954 111.869 104.11 117.242 100.195C112.527 98.8607 111.224 88.8244 112.815 84.4018C114.323 80.2156 115.813 73.5192 117.331 67.9855C118.961 62.0425 119.562 54.5519 121.535 51.5247C124.503 46.9701 127.783 45.406 130.63 42.8377C133.477 40.2695 136.083 37.7694 135.998 31.8927C135.97 29.9998 134.992 28.9447 133.191 29.0022Z" fill="#04244E"/>
<path d="M128.953 32.4844C129.427 34.1004 130.168 34.8421 133.375 35.1387C132.906 39.2041 130.195 41.4276 127.154 43.5611C124.479 45.4376 121.547 47.2445 119.664 50.1757C117.734 53.1785 116.509 63.4554 113.517 73.6044C110.931 82.3738 107.025 91.0445 100.204 94.8437C99.4919 93.0502 100.295 89.74 98.878 88.652C97.9611 91.2674 96.9242 93.7627 95.7098 96.0821C91.7077 103.732 85.7822 109.459 75.8801 111.208C80.5785 104.85 85.071 98.2844 85.1683 87.3262C81.8617 88.0417 81.9319 95.8522 78.5345 97.9402C76.3563 98.1772 74.1498 98.1758 71.9291 98.0424C62.8091 97.4959 53.4535 94.7549 44.9219 97.4923C39.1128 99.3568 34.3619 103.755 29.4428 105.888C23.6614 108.396 19.2831 109.427 12.0836 108.396C11.1695 107.164 17.3526 105.575 16.9829 102.902C14.1653 102.59 12.5293 103.273 10.0801 102.16C10.3505 101.662 10.7479 101.247 11.2483 100.901C15.7373 97.794 28.4882 100.167 31.9006 96.8167C34.007 94.75 35.3889 92.5867 36.8197 90.4838C38.2072 88.4434 39.6415 86.4597 41.8268 84.6719C42.6337 84.0118 43.511 83.3596 44.4421 82.723C48.166 80.1743 52.7729 77.8628 57.3066 76.2694C63.4826 74.0984 69.741 73.9195 76.3237 71.4043C80.3904 69.85 84.8127 67.9302 88.4174 65.2439C89.2733 64.6051 90.0831 63.9245 90.8319 63.1949C101.125 53.1608 103.165 35.461 119.224 33.8116C121.166 33.6121 122.756 33.6767 124.203 33.6327C125.871 33.583 127.347 33.3893 128.953 32.4844ZM109.375 89.1339C109.567 92.2013 111.348 98.2872 112.92 99.7663C109.841 100.515 104.537 99.278 103.177 97.1062C103.876 93.9707 107.514 91.1041 109.375 89.1339Z" fill="white"/>
<path d="M130.109 35.9187C129.49 37.2169 128.305 38.8908 128.305 42.1956C128.3 42.763 127.875 43.1517 127.867 42.277C127.899 39.047 128.754 37.6507 129.662 35.8155C130.085 35.0636 130.339 35.3738 130.109 35.9187ZM129.486 35.4297C128.756 36.6684 126.998 38.9278 126.707 42.2203C126.653 42.7848 126.194 43.1343 126.264 42.2618C126.581 39.0477 127.986 37.036 129.052 35.2873C129.536 34.5761 129.763 34.9074 129.486 35.4297ZM128.918 34.7817C128.086 35.9543 125.38 38.6678 124.814 41.9247C124.712 42.4819 124.225 42.7928 124.368 41.929C124.954 38.752 127.287 36.255 128.496 34.6037C129.038 33.9346 129.237 34.284 128.918 34.7817ZM128.411 34.0588L128.137 34.3499C126.927 35.6472 124.116 38.8114 123.179 41.7074C122.999 42.245 122.474 42.4848 122.737 41.6493C123.763 38.5864 126.589 35.2873 128.018 33.8227C128.65 33.2364 128.796 33.6106 128.411 34.0588ZM113.886 40.6162C114.513 37.9231 116.607 36.696 120.223 36.9953C121.096 41.0151 116.213 42.6366 113.886 40.6162Z" fill="#04244E"/>
<defs>
<linearGradient id="paint0_linear_683_2930" x1="140.5" y1="141" x2="5.99999" y2="-5.50228e-06" gradientUnits="userSpaceOnUse">
<stop stop-color="#C49A6C"/>
<stop offset="1" stop-color="#E7BF93"/>
</linearGradient>
</defs>
</svg>

View File

@@ -1,7 +1,7 @@
{{- range $name := .Values.databases }}
{{ $dnsName := replace "_" "-" $name }}
---
apiVersion: mariadb.mmontes.io/v1alpha1
apiVersion: k8s.mariadb.com/v1alpha1
kind: Database
metadata:
name: {{ $.Release.Name }}-{{ $dnsName }}

View File

@@ -1,18 +1,20 @@
---
apiVersion: mariadb.mmontes.io/v1alpha1
apiVersion: k8s.mariadb.com/v1alpha1
kind: MariaDB
metadata:
name: {{ .Release.Name }}
spec:
{{- if (and .Values.users.root .Values.users.root.password) }}
rootPasswordSecretKeyRef:
name: {{ .Release.Name }}
key: root-password
{{- end }}
image: "mariadb:11.0.2"
port: 3306
replicas: 2
replicas: {{ .Values.replicas }}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
@@ -28,15 +30,18 @@ spec:
- {{ .Release.Name }}
topologyKey: "kubernetes.io/hostname"
{{- if gt (int .Values.replicas) 1 }}
replication:
enabled: true
#primary:
# podIndex: 0
# automaticFailover: true
{{- end }}
metrics:
enabled: true
exporter:
image: prom/mysqld-exporter:v0.14.0
image: prom/mysqld-exporter:v0.15.1
resources:
requests:
cpu: 50m
@@ -53,14 +58,13 @@ spec:
name: {{ .Release.Name }}-my-cnf
key: config
volumeClaimTemplate:
resources:
requests:
storage: {{ .Values.size }}
accessModes:
- ReadWriteOnce
storage:
size: {{ .Values.size }}
resizeInUseVolumes: true
waitForVolumeResize: true
{{- with .Values.storageClass }}
storageClassName: {{ . }}
{{- end }}
{{- if .Values.external }}
primaryService:

View File

@@ -2,7 +2,7 @@
{{ if not (eq $name "root") }}
{{ $dnsName := replace "_" "-" $name }}
---
apiVersion: mariadb.mmontes.io/v1alpha1
apiVersion: k8s.mariadb.com/v1alpha1
kind: User
metadata:
name: {{ $.Release.Name }}-{{ $dnsName }}
@@ -15,7 +15,7 @@ spec:
key: {{ $name }}-password
maxUserConnections: {{ $u.maxUserConnections }}
---
apiVersion: mariadb.mmontes.io/v1alpha1
apiVersion: k8s.mariadb.com/v1alpha1
kind: Grant
metadata:
name: {{ $.Release.Name }}-{{ $dnsName }}

View File

@@ -0,0 +1,77 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"external": {
"type": "boolean",
"description": "Enable external access from outside the cluster",
"default": false
},
"size": {
"type": "string",
"description": "Persistent Volume size",
"default": "10Gi"
},
"replicas": {
"type": "number",
"description": "Number of MariaDB replicas",
"default": 2
},
"storageClass": {
"type": "string",
"description": "StorageClass used to store the data",
"default": ""
},
"databases": {
"type": "array",
"description": "Databases configuration",
"default": [],
"items": {}
},
"backup": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Enable pereiodic backups",
"default": false
},
"s3Region": {
"type": "string",
"description": "The AWS S3 region where backups are stored",
"default": "us-east-1"
},
"s3Bucket": {
"type": "string",
"description": "The S3 bucket used for storing backups",
"default": "s3.example.org/postgres-backups"
},
"schedule": {
"type": "string",
"description": "Cron schedule for automated backups",
"default": "0 2 * * *"
},
"cleanupStrategy": {
"type": "string",
"description": "The strategy for cleaning up old backups",
"default": "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
},
"s3AccessKey": {
"type": "string",
"description": "The access key for S3, used for authentication",
"default": "oobaiRus9pah8PhohL1ThaeTa4UVa7gu"
},
"s3SecretKey": {
"type": "string",
"description": "The secret key for S3, used for authentication",
"default": "ju3eum4dekeich9ahM1te8waeGai0oog"
},
"resticPassword": {
"type": "string",
"description": "The password for Restic backup encryption",
"default": "ChaXoveekoh6eigh4siesheeda2quai0"
}
}
}
}
}

View File

@@ -1,24 +1,52 @@
## @section Common parameters
## @param external Enable external access from outside the cluster
## @param size Persistent Volume size
## @param replicas Number of MariaDB replicas
## @param storageClass StorageClass used to store the data
##
external: false
size: 10Gi
replicas: 2
storageClass: ""
users:
root:
password: strongpassword
user1:
privileges: ['ALL']
maxUserConnections: 1000
password: hackme
user2:
privileges: ['SELECT']
maxUserConnections: 1000
password: hackme
## @section Configuration parameters
databases:
- wordpress1
- wordpress2
- wordpress3
- wordpress4
## @param users [object] Users configuration
## Example:
## users:
## root:
## password: strongpassword
## user1:
## privileges: ['ALL']
## maxUserConnections: 1000
## password: hackme
## user2:
## privileges: ['SELECT']
## maxUserConnections: 1000
## password: hackme
##
users: {}
## @param databases Databases configuration
## Example:
## databases:
## - wordpress1
## - wordpress2
## - wordpress3
## - wordpress4
databases: []
## @section Backup parameters
## @param backup.enabled Enable periodic backups
## @param backup.s3Region The AWS S3 region where backups are stored
## @param backup.s3Bucket The S3 bucket used for storing backups
## @param backup.schedule Cron schedule for automated backups
## @param backup.cleanupStrategy The strategy for cleaning up old backups
## @param backup.s3AccessKey The access key for S3, used for authentication
## @param backup.s3SecretKey The secret key for S3, used for authentication
## @param backup.resticPassword The password for Restic backup encryption
backup:
enabled: false
s3Region: us-east-1

View File

@@ -0,0 +1,3 @@
.helmignore
/logos
/Makefile

View File

@@ -0,0 +1,25 @@
apiVersion: v2
name: nats
description: Managed NATS service
icon: /logos/nats.svg
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.2.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.4.1"

View File

@@ -0,0 +1,4 @@
include ../../../scripts/package.mk
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md

View File

@@ -0,0 +1,12 @@
# Managed NATS Service
## Parameters
### Common parameters
| Name | Description | Value |
| -------------- | ----------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `replicas`     | Number of NATS replicas                          | `2`     |
| `storageClass` | StorageClass used to store the data | `""` |

View File

@@ -0,0 +1,12 @@
<svg width="144" height="144" viewBox="0 0 144 144" fill="none" xmlns="http://www.w3.org/2000/svg">
<rect width="144" height="144" rx="24" fill="url(#paint0_linear_681_2821)"/>
<rect width="144" height="144" rx="24" fill="black" fill-opacity="0.3"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M117.48 25H27V98.9693H66.0689L87.7075 119V98.9693H117.48V25Z" fill="white"/>
<path d="M92.1352 72.4552V42.625H102.773V81.1999H86.6519L54.114 50.8414V81.2322H43.4443V42.625H60.1262L92.1352 72.4552Z" fill="black"/>
<defs>
<linearGradient id="paint0_linear_681_2821" x1="10" y1="15.5" x2="144" y2="131.5" gradientUnits="userSpaceOnUse">
<stop stop-color="#385C93"/>
<stop offset="1" stop-color="#32A574"/>
</linearGradient>
</defs>
</svg>

View File

@@ -0,0 +1,45 @@
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: {{ .Release.Name }}-system
spec:
chart:
spec:
chart: cozy-nats
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
version: '*'
interval: 1m0s
timeout: 5m0s
values:
nats:
fullnameOverride: {{ .Release.Name }}
config:
cluster:
enabled: true
replicas: {{ .Values.replicas }}
monitor:
enabled: true
jetstream:
enabled: true
fileStore:
enabled: true
pvc:
enabled: true
size: 10Gi
{{- with .Values.storageClass }}
storageClassName: {{ . }}
{{- end }}
promExporter:
enabled: true
podMonitor:
enabled: true
{{- if .Values.external }}
service:
merge:
spec:
type: LoadBalancer
{{- end }}
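For comparison, a hypothetical values file for this chart maps directly onto the HelmRelease above; all three keys are documented in the README, and the storage class name is illustrative:

```yaml
# Illustrative values for the managed NATS chart (sketch, not shipped defaults).
external: true            # renders the LoadBalancer service merge shown above
replicas: 3               # becomes nats.config.cluster.replicas
storageClass: replicated  # becomes nats.config.jetstream.fileStore.pvc.storageClassName
```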

Some files were not shown because too many files have changed in this diff.