Compare commits


1 Commit

Author SHA1 Message Date
Andrei Kvapil
236f3ce92f Add Adopters, Code of Conduct, Contributing, and Maintainers guides
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-02-14 13:41:43 +01:00
959 changed files with 43737 additions and 162270 deletions

.gitignore (2 changes)

@@ -1,3 +1 @@
_out
.git
.idea


@@ -3,8 +3,6 @@
build:
make -C packages/apps/http-cache image
make -C packages/apps/kubernetes image
make -C packages/system/cilium image
make -C packages/system/kubeovn image
make -C packages/system/dashboard image
make -C packages/core/installer image
make manifests
@@ -20,8 +18,6 @@ repos:
make -C packages/system repo
make -C packages/apps repo
make -C packages/extra repo
mkdir -p _out/logos
cp ./packages/apps/*/logos/*.svg ./packages/extra/*/logos/*.svg _out/logos/
assets:
make -C packages/core/installer/ assets
make -C packages/core/talos/ assets

README.md (553 changes)

@@ -10,7 +10,7 @@
# Cozystack
**Cozystack** is a free PaaS platform and framework for building clouds.
**Cozystack** is an open-source **PaaS platform** for cloud providers.
With Cozystack, you can transform a bunch of servers into an intelligent system with a simple REST API for spawning Kubernetes clusters, Database-as-a-Service, virtual machines, load balancers, HTTP caching services, and other services with ease.
@@ -18,55 +18,548 @@ You can use Cozystack to build your own cloud or to provide a cost-effective dev
## Use-Cases
* [**Using Cozystack to build public cloud**](https://cozystack.io/docs/use-cases/public-cloud/)
You can use Cozystack as a backend for a public cloud
### As a backend for a public cloud
* [**Using Cozystack to build private cloud**](https://cozystack.io/docs/use-cases/private-cloud/)
You can use Cozystack as a platform to build a private cloud powered by the Infrastructure-as-Code approach
Cozystack positions itself as a framework for building public clouds, and the key word here is framework: Cozystack is made for cloud providers, not for end users.
* [**Using Cozystack as Kubernetes distribution**](https://cozystack.io/docs/use-cases/kubernetes-distribution/)
You can use Cozystack as a Kubernetes distribution for Bare Metal
Despite having a graphical interface, the current security model does not allow public user access to your management cluster.
Instead, end users get access to their own Kubernetes clusters, from which they can order LoadBalancers and additional services, but they have no access to, and know nothing about, your Cozystack-powered management cluster.
Thus, to integrate with your billing system, it's enough to teach it to apply a YAML manifest describing the requested service to the management Kubernetes cluster. Cozystack will do the rest of the work for you.
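As a minimal sketch of what such an integration call could look like — assuming tenant services are managed through Flux HelmRelease resources, as in the Quick Start below, with illustrative names — a billing system could simply merge the ordered services into the tenant's values:
```bash
# Hypothetical example: enable extra services for an existing tenant by
# merging values into its HelmRelease in the management cluster.
kubectl patch -n tenant-root hr/tenant-root --type=merge -p '{"spec":{"values":{
  "monitoring": true,
  "ingress": true
}}}'
```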
![](https://aenix.io/wp-content/uploads/2024/02/Wireframe-1.png)
### As a private cloud for Infrastructure-as-Code
One of the use cases is a self-service portal for users within your company, where they can order the services they're interested in, such as a managed database.
You can implement best GitOps practices, where users will launch their own Kubernetes clusters and databases for their needs with a simple commit of configuration into your infrastructure Git repository.
Thanks to the standardization of the approach to deploying applications, you can expand the platform's capabilities using the functionality of standard Helm charts.
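As a hedged sketch of this flow — assuming Flux's `HelmRelease` API and a `HelmRepository` that points at the platform's apps repo; the names `tenant-foo`, `clickhouse-foo`, and `cozystack-apps` are illustrative — ordering a managed ClickHouse could be a single commit:
```bash
# Illustrative only — a minimal sketch, not the platform's documented API:
# order a managed ClickHouse by committing a Flux HelmRelease manifest.
cat > tenants/foo/clickhouse.yaml <<\EOT
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: clickhouse-foo
  namespace: tenant-foo
spec:
  interval: 5m
  chart:
    spec:
      chart: clickhouse
      sourceRef:
        kind: HelmRepository
        name: cozystack-apps
  values:
    size: 10Gi
    shards: 1
    replicas: 2
EOT
git add tenants/foo/clickhouse.yaml
git commit -m "Order a managed ClickHouse for tenant foo"
```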
### As a Kubernetes distribution for Bare Metal
We created Cozystack primarily for our own needs, drawing on vast experience in building reliable systems on bare-metal infrastructure. This experience grew into a separate boxed product aimed at standardizing infrastructure management and providing a ready-to-use tool.
Cozystack already solves a huge scope of infrastructure tasks out of the box: provisioning bare-metal servers, a ready-made monitoring system, fast and reliable storage, a network fabric that can interconnect with your existing infrastructure, the ability to run virtual machines, databases, and much more.
All this makes Cozystack a convenient platform for delivering and launching your applications on Bare Metal.
## Screenshot
![Cozystack screenshot](https://cozystack.io/img/screenshot.png)
![](https://aenix.io/wp-content/uploads/2023/12/cozystack1-1.png)
## Documentation
## Core values
The documentation is located on the official [cozystack.io](https://cozystack.io) website.
### Standardization and unification
All components of the platform are based on open source tools and technologies which are widely known in the industry.
Read [Get Started](https://cozystack.io/docs/get-started/) section for a quick start.
### Collaborate, not compete
If a feature being developed for the platform could be useful to an upstream project, it should be contributed to that upstream project rather than implemented within the platform.
If you encounter any difficulties, start with the [troubleshooting guide](https://cozystack.io/docs/troubleshooting/), and work your way through the process that we've outlined.
### API-first
Cozystack is based on Kubernetes and involves close interaction with its API. We don't aim to hide all the elements behind a pretty UI or any sort of customization; instead, we provide a standard interface and teach users how to work with basic primitives. The web interface is used solely for deploying applications and quickly diving into the basic concepts of the platform.
## Versioning
## Quick Start
Versioning adheres to the [Semantic Versioning](http://semver.org/) principles.
A full list of releases is available in the GitHub repository's [Releases](https://github.com/aenix-io/cozystack/releases) section.
### Prepare infrastructure
- [Roadmap](https://github.com/orgs/aenix-io/projects/2)
## Contributions
![](https://aenix.io/wp-content/uploads/2024/02/Wireframe-2.png)
Contributions are highly appreciated and very welcome!
You need 3 physical servers or VMs with nested virtualisation:
In case of bugs, please check whether the issue has already been reported in the [GitHub Issues](https://github.com/aenix-io/cozystack/issues) section.
In case it isn't, you can open a new one: a detailed report will help us to replicate it, assess it, and work on a fix.
```
CPU: 4 cores
CPU model: host
RAM: 8-16 GB
HDD1: 32 GB
HDD2: 100GB (raw)
```
You can express your intention to work on the fix on your own.
Commits are used to generate the changelog, and their author will be referenced in it.
And one management VM or physical server connected to the same network, with any Linux system installed (e.g. Ubuntu should be enough).
In case of **Feature Requests** please use the [Discussion's Feature Request section](https://github.com/aenix-io/cozystack/discussions/categories/feature-requests).
**Note:** The VMs should support the `x86-64-v2` microarchitecture level; most likely you can achieve this by setting the CPU model to `host`
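One hedged way to check the feature level from inside a running Linux VM (the dynamic loader path varies by distro; requires glibc ≥ 2.33):
```bash
# Prints the supported x86-64 microarchitecture levels,
# e.g. "x86-64-v2 (supported, searched)"
/lib64/ld-linux-x86-64.so.2 --help | grep x86-64
```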
## License
#### Install dependencies:
Cozystack is licensed under Apache 2.0.
The code is provided as-is with no warranties.
- `docker`
- `talosctl`
- `dialog`
- `nmap`
- `make`
- `yq`
- `kubectl`
- `helm`
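A possible way to install these on an Ubuntu management host (package names and download commands below are assumptions for this sketch; adjust for your distribution):
```bash
sudo apt-get update
sudo apt-get install -y docker.io dialog nmap make
# talosctl via the official installer script
curl -sL https://talos.dev/install | sh
# kubectl from the upstream stable channel
curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -m 0755 kubectl /usr/local/bin/kubectl && rm kubectl
# helm via the official installer script
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# yq as a static binary from GitHub releases
sudo curl -L https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -o /usr/local/bin/yq
sudo chmod +x /usr/local/bin/yq
```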
## Commercial Support
### Netboot server
[**Ænix**](https://aenix.io) offers enterprise-grade support, available 24/7.
Start Matchbox with a prebuilt Talos image for Cozystack:
We provide all types of assistance, including consultations, development of missing features, design, assistance with installation, and integration.
```bash
sudo docker run --name=matchbox -d --net=host ghcr.io/aenix-io/cozystack/matchbox:v0.0.2 \
-address=:8080 \
-log-level=debug
```
[Contact us](https://aenix.io/contact/)
Start the DHCP server:
```bash
sudo docker run --name=dnsmasq -d --cap-add=NET_ADMIN --net=host quay.io/poseidon/dnsmasq \
-d -q -p0 \
--dhcp-range=192.168.100.3,192.168.100.254 \
--dhcp-option=option:router,192.168.100.1 \
--enable-tftp \
--tftp-root=/var/lib/tftpboot \
--dhcp-match=set:bios,option:client-arch,0 \
--dhcp-boot=tag:bios,undionly.kpxe \
--dhcp-match=set:efi32,option:client-arch,6 \
--dhcp-boot=tag:efi32,ipxe.efi \
--dhcp-match=set:efibc,option:client-arch,7 \
--dhcp-boot=tag:efibc,ipxe.efi \
--dhcp-match=set:efi64,option:client-arch,9 \
--dhcp-boot=tag:efi64,ipxe.efi \
--dhcp-userclass=set:ipxe,iPXE \
--dhcp-boot=tag:ipxe,http://192.168.100.254:8080/boot.ipxe \
--log-queries \
--log-dhcp
```
Where:
- `192.168.100.3,192.168.100.254` is the range to allocate IPs from
- `192.168.100.1` is your gateway
- `192.168.100.254` is the address of your management server
Check the status of the containers:
```bash
docker ps
```
example output:
```console
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
22044f26f74d quay.io/poseidon/dnsmasq "/usr/sbin/dnsmasq -…" 6 seconds ago Up 5 seconds dnsmasq
231ad81ff9e0 ghcr.io/aenix-io/cozystack/matchbox:v0.0.2 "/matchbox -address=…" 58 seconds ago Up 57 seconds matchbox
```
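You can also verify that Matchbox serves the iPXE boot script that the dnsmasq configuration above points clients to:
```bash
# 192.168.100.254 is the management server address used in the dnsmasq config
curl http://192.168.100.254:8080/boot.ipxe
```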
### Bootstrap cluster
Write configuration for Cozystack:
```yaml
cat > patch.yaml <<\EOT
machine:
kubelet:
nodeIP:
validSubnets:
- 192.168.100.0/24
kernel:
modules:
- name: openvswitch
- name: drbd
parameters:
- usermode_helper=disabled
- name: zfs
install:
image: ghcr.io/aenix-io/cozystack/talos:v1.6.4
files:
- content: |
[plugins]
[plugins."io.containerd.grpc.v1.cri"]
device_ownership_from_security_context = true
path: /etc/cri/conf.d/20-customization.part
op: create
cluster:
network:
cni:
name: none
podSubnets:
- 10.244.0.0/16
serviceSubnets:
- 10.96.0.0/16
EOT
cat > patch-controlplane.yaml <<\EOT
cluster:
allowSchedulingOnControlPlanes: true
controllerManager:
extraArgs:
bind-address: 0.0.0.0
scheduler:
extraArgs:
bind-address: 0.0.0.0
apiServer:
certSANs:
- 127.0.0.1
proxy:
disabled: true
discovery:
enabled: false
etcd:
advertisedSubnets:
- 192.168.100.0/24
EOT
```
Run [talos-bootstrap](https://github.com/aenix-io/talos-bootstrap/) to deploy the cluster:
```bash
talos-bootstrap install
```
Save admin kubeconfig to access your Kubernetes cluster:
```bash
cp -i kubeconfig ~/.kube/config
```
Check connection:
```bash
kubectl get ns
```
example output:
```console
NAME STATUS AGE
default Active 7m56s
kube-node-lease Active 7m56s
kube-public Active 7m56s
kube-system Active 7m56s
```
**Note:** All nodes will currently show as "Not Ready"; don't worry, this is because you disabled the default CNI plugin in the previous step. Cozystack will install its own CNI plugin in the next step.
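You can confirm this at any time; all three nodes are expected to report `NotReady` until then:
```bash
kubectl get node   # STATUS switches to Ready once Cozystack installs its CNI
```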
### Install Cozystack
Write a config for Cozystack:
**Note:** Please make sure it contains the same settings as specified in the `patch.yaml` and `patch-controlplane.yaml` files.
```yaml
cat > cozystack-config.yaml <<\EOT
apiVersion: v1
kind: ConfigMap
metadata:
name: cozystack
namespace: cozy-system
data:
cluster-name: "cozystack"
ipv4-pod-cidr: "10.244.0.0/16"
ipv4-pod-gateway: "10.244.0.1"
ipv4-svc-cidr: "10.96.0.0/16"
ipv4-join-cidr: "100.64.0.0/16"
EOT
```
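Before applying anything, an optional sanity check using the `yq` dependency installed earlier (yq v4 syntax):
```bash
# Both pairs must match: the pod CIDR and service CIDR in patch.yaml vs the ConfigMap
yq '.cluster.network.podSubnets[0]' patch.yaml        # expect 10.244.0.0/16
yq '.data.ipv4-pod-cidr' cozystack-config.yaml        # expect 10.244.0.0/16
```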
Create the namespace and install the Cozystack system components:
```bash
kubectl create ns cozy-system
kubectl apply -f cozystack-config.yaml
kubectl apply -f manifests/cozystack-installer.yaml
```
(Optional) You can track the logs of the installer:
```bash
kubectl logs -n cozy-system deploy/cozystack -f
```
Wait for a while, then check the status of the installation:
```bash
kubectl get hr -A
```
Wait until all releases reach the `Ready` state:
```console
NAMESPACE NAME AGE READY STATUS
cozy-cert-manager cert-manager 4m1s True Release reconciliation succeeded
cozy-cert-manager cert-manager-issuers 4m1s True Release reconciliation succeeded
cozy-cilium cilium 4m1s True Release reconciliation succeeded
cozy-cluster-api capi-operator 4m1s True Release reconciliation succeeded
cozy-cluster-api capi-providers 4m1s True Release reconciliation succeeded
cozy-dashboard dashboard 4m1s True Release reconciliation succeeded
cozy-fluxcd cozy-fluxcd 4m1s True Release reconciliation succeeded
cozy-grafana-operator grafana-operator 4m1s True Release reconciliation succeeded
cozy-kamaji kamaji 4m1s True Release reconciliation succeeded
cozy-kubeovn kubeovn 4m1s True Release reconciliation succeeded
cozy-kubevirt-cdi kubevirt-cdi 4m1s True Release reconciliation succeeded
cozy-kubevirt-cdi kubevirt-cdi-operator 4m1s True Release reconciliation succeeded
cozy-kubevirt kubevirt 4m1s True Release reconciliation succeeded
cozy-kubevirt kubevirt-operator 4m1s True Release reconciliation succeeded
cozy-linstor linstor 4m1s True Release reconciliation succeeded
cozy-linstor piraeus-operator 4m1s True Release reconciliation succeeded
cozy-mariadb-operator mariadb-operator 4m1s True Release reconciliation succeeded
cozy-metallb metallb 4m1s True Release reconciliation succeeded
cozy-monitoring monitoring 4m1s True Release reconciliation succeeded
cozy-postgres-operator postgres-operator 4m1s True Release reconciliation succeeded
cozy-rabbitmq-operator rabbitmq-operator 4m1s True Release reconciliation succeeded
cozy-redis-operator redis-operator 4m1s True Release reconciliation succeeded
cozy-telepresence telepresence 4m1s True Release reconciliation succeeded
cozy-victoria-metrics-operator victoria-metrics-operator 4m1s True Release reconciliation succeeded
tenant-root tenant-root 4m1s True Release reconciliation succeeded
```
#### Configure Storage
Set up an alias to access LINSTOR:
```bash
alias linstor='kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor'
```
List your nodes:
```bash
linstor node list
```
example output:
```console
+-------------------------------------------------------+
| Node | NodeType | Addresses | State |
|=======================================================|
| srv1 | SATELLITE | 192.168.100.11:3367 (SSL) | Online |
| srv2 | SATELLITE | 192.168.100.12:3367 (SSL) | Online |
| srv3 | SATELLITE | 192.168.100.13:3367 (SSL) | Online |
+-------------------------------------------------------+
```
List empty devices:
```bash
linstor physical-storage list
```
example output:
```console
+--------------------------------------------+
| Size | Rotational | Nodes |
|============================================|
| 107374182400 | True | srv3[/dev/sdb] |
| | | srv1[/dev/sdb] |
| | | srv2[/dev/sdb] |
+--------------------------------------------+
```
Create storage pools (`ps cdp` is short for `physical-storage create-device-pool`):
```bash
linstor ps cdp lvm srv1 /dev/sdb --pool-name data --storage-pool data
linstor ps cdp lvm srv2 /dev/sdb --pool-name data --storage-pool data
linstor ps cdp lvm srv3 /dev/sdb --pool-name data --storage-pool data
```
List the storage pools:
```bash
linstor sp l
```
example output:
```console
+-------------------------------------------------------------------------------------------------------------------------------------+
| StoragePool | Node | Driver | PoolName | FreeCapacity | TotalCapacity | CanSnapshots | State | SharedName |
|=====================================================================================================================================|
| DfltDisklessStorPool | srv1 | DISKLESS | | | | False | Ok | srv1;DfltDisklessStorPool |
| DfltDisklessStorPool | srv2 | DISKLESS | | | | False | Ok | srv2;DfltDisklessStorPool |
| DfltDisklessStorPool | srv3 | DISKLESS | | | | False | Ok | srv3;DfltDisklessStorPool |
| data | srv1 | LVM | data | 100.00 GiB | 100.00 GiB | False | Ok | srv1;data |
| data | srv2 | LVM | data | 100.00 GiB | 100.00 GiB | False | Ok | srv2;data |
| data | srv3 | LVM | data | 100.00 GiB | 100.00 GiB | False | Ok | srv3;data |
+-------------------------------------------------------------------------------------------------------------------------------------+
```
Create default storage classes:
```yaml
kubectl create -f- <<EOT
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: linstor.csi.linbit.com
parameters:
linstor.csi.linbit.com/storagePool: "data"
linstor.csi.linbit.com/layerList: "storage"
linstor.csi.linbit.com/allowRemoteVolumeAccess: "false"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: replicated
provisioner: linstor.csi.linbit.com
parameters:
linstor.csi.linbit.com/storagePool: "data"
linstor.csi.linbit.com/autoPlace: "3"
linstor.csi.linbit.com/layerList: "drbd storage"
linstor.csi.linbit.com/allowRemoteVolumeAccess: "true"
property.linstor.csi.linbit.com/DrbdOptions/auto-quorum: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-no-data-accessible: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-suspended-primary-outdated: force-secondary
property.linstor.csi.linbit.com/DrbdOptions/Net/rr-conflict: retry-connect
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOT
```
List the storage classes:
```bash
kubectl get storageclasses
```
example output:
```console
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local (default) linstor.csi.linbit.com Delete WaitForFirstConsumer true 11m
replicated linstor.csi.linbit.com Delete WaitForFirstConsumer true 11m
```
#### Configure Networking
To access your services, select a range of unused IPs, e.g. `192.168.100.200-192.168.100.250`.
**Note:** These IPs should be from the same network as the nodes, or all the necessary routes to them must be configured.
Configure MetalLB to use and announce this range:
```yaml
kubectl create -f- <<EOT
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: cozystack
namespace: cozy-metallb
spec:
ipAddressPools:
- cozystack
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: cozystack
namespace: cozy-metallb
spec:
addresses:
- 192.168.100.200-192.168.100.250
autoAssign: true
avoidBuggyIPs: false
EOT
```
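A quick way to confirm that the pool and advertisement were created (standard MetalLB CRD names):
```bash
kubectl get ipaddresspools.metallb.io -n cozy-metallb
kubectl get l2advertisements.metallb.io -n cozy-metallb
```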
#### Set up basic applications
Get the token from `tenant-root`:
```bash
kubectl get secret -n tenant-root tenant-root -o go-template='{{ printf "%s\n" (index .data "token" | base64decode) }}'
```
Enable port forward to cozy-dashboard:
```bash
kubectl port-forward -n cozy-dashboard svc/dashboard 8080:80
```
Open: http://localhost:8080/
- Select `tenant-root`
- Click the `Upgrade` button
- In `host`, write the domain you wish to use as the parent domain for all deployed applications
**Note:**
- If you have no domain yet, you can use `192.168.100.200.nip.io`, where `192.168.100.200` is the first IP address in your range.
- Alternatively, you can leave the default value; however, you'll then need to modify your `/etc/hosts` every time you want to access a specific application.
- Set `etcd`, `monitoring`, and `ingress` to the enabled position
- Click Deploy
Check that persistent volumes are provisioned:
```bash
kubectl get pvc -n tenant-root
```
example output:
```console
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
data-etcd-0 Bound pvc-4cbd29cc-a29f-453d-b412-451647cd04bf 10Gi RWO local <unset> 2m10s
data-etcd-1 Bound pvc-1579f95a-a69d-4a26-bcc2-b15ccdbede0d 10Gi RWO local <unset> 115s
data-etcd-2 Bound pvc-907009e5-88bf-4d18-91e7-b56b0dbfb97e 10Gi RWO local <unset> 91s
grafana-db-1 Bound pvc-7b3f4e23-228a-46fd-b820-d033ef4679af 10Gi RWO local <unset> 2m41s
grafana-db-2 Bound pvc-ac9b72a4-f40e-47e8-ad24-f50d843b55e4 10Gi RWO local <unset> 113s
vmselect-cachedir-vmselect-longterm-0 Bound pvc-622fa398-2104-459f-8744-565eee0a13f1 2Gi RWO local <unset> 2m21s
vmselect-cachedir-vmselect-longterm-1 Bound pvc-fc9349f5-02b2-4e25-8bef-6cbc5cc6d690 2Gi RWO local <unset> 2m21s
vmselect-cachedir-vmselect-shortterm-0 Bound pvc-7acc7ff6-6b9b-4676-bd1f-6867ea7165e2 2Gi RWO local <unset> 2m41s
vmselect-cachedir-vmselect-shortterm-1 Bound pvc-e514f12b-f1f6-40ff-9838-a6bda3580eb7 2Gi RWO local <unset> 2m40s
vmstorage-db-vmstorage-longterm-0 Bound pvc-e8ac7fc3-df0d-4692-aebf-9f66f72f9fef 10Gi RWO local <unset> 2m21s
vmstorage-db-vmstorage-longterm-1 Bound pvc-68b5ceaf-3ed1-4e5a-9568-6b95911c7c3a 10Gi RWO local <unset> 2m21s
vmstorage-db-vmstorage-shortterm-0 Bound pvc-cee3a2a4-5680-4880-bc2a-85c14dba9380 10Gi RWO local <unset> 2m41s
vmstorage-db-vmstorage-shortterm-1 Bound pvc-d55c235d-cada-4c4a-8299-e5fc3f161789 10Gi RWO local <unset> 2m41s
```
Check that all pods are running:
```bash
kubectl get pod -n tenant-root
```
example output:
```console
NAME READY STATUS RESTARTS AGE
etcd-0 1/1 Running 0 2m1s
etcd-1 1/1 Running 0 106s
etcd-2 1/1 Running 0 82s
grafana-db-1 1/1 Running 0 119s
grafana-db-2 1/1 Running 0 13s
grafana-deployment-74b5656d6-5dcvn 1/1 Running 0 90s
grafana-deployment-74b5656d6-q5589 1/1 Running 1 (105s ago) 111s
root-ingress-controller-6ccf55bc6d-pg79l 2/2 Running 0 2m27s
root-ingress-controller-6ccf55bc6d-xbs6x 2/2 Running 0 2m29s
root-ingress-defaultbackend-686bcbbd6c-5zbvp 1/1 Running 0 2m29s
vmalert-vmalert-644986d5c-7hvwk 2/2 Running 0 2m30s
vmalertmanager-alertmanager-0 2/2 Running 0 2m32s
vmalertmanager-alertmanager-1 2/2 Running 0 2m31s
vminsert-longterm-75789465f-hc6cz 1/1 Running 0 2m10s
vminsert-longterm-75789465f-m2v4t 1/1 Running 0 2m12s
vminsert-shortterm-78456f8fd9-wlwww 1/1 Running 0 2m29s
vminsert-shortterm-78456f8fd9-xg7cw 1/1 Running 0 2m28s
vmselect-longterm-0 1/1 Running 0 2m12s
vmselect-longterm-1 1/1 Running 0 2m12s
vmselect-shortterm-0 1/1 Running 0 2m31s
vmselect-shortterm-1 1/1 Running 0 2m30s
vmstorage-longterm-0 1/1 Running 0 2m12s
vmstorage-longterm-1 1/1 Running 0 2m12s
vmstorage-shortterm-0 1/1 Running 0 2m32s
vmstorage-shortterm-1 1/1 Running 0 2m31s
```
Now you can get the public IP of the ingress controller:
```bash
kubectl get svc -n tenant-root root-ingress-controller
```
example output:
```console
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
root-ingress-controller LoadBalancer 10.96.16.141 192.168.100.200 80:31632/TCP,443:30113/TCP 3m33s
```
Use `grafana.example.org` (resolving to 192.168.100.200) to access system monitoring, where `example.org` is the domain you specified for `tenant-root`:
- login: `admin`
- password:
```bash
kubectl get secret -n tenant-root grafana-admin-password -o go-template='{{ printf "%s\n" (index .data "password" | base64decode) }}'
```
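If DNS for your domain isn't set up yet, you can still verify that Grafana responds through the ingress by sending the Host header straight to the LoadBalancer IP (replace `example.org` with your configured host):
```bash
ip=$(kubectl get svc -n tenant-root root-ingress-controller -o jsonpath='{.status.loadBalancer.ingress..ip}')
curl -sS -k "https://$ip" -H 'Host: grafana.example.org' | grep Found
```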


@@ -1,318 +0,0 @@
#!/bin/bash
if [ "$COZYSTACK_INSTALLER_YAML" = "" ]; then
echo 'COZYSTACK_INSTALLER_YAML variable is not set!' >&2
echo 'please set it with following command:' >&2
echo >&2
echo 'export COZYSTACK_INSTALLER_YAML=$(helm template -n cozy-system installer packages/core/installer)' >&2
echo >&2
exit 1
fi
if [ "$(cat /proc/sys/net/ipv4/ip_forward)" != 1 ]; then
echo "IPv4 forwarding is not enabled!" >&2
echo 'please enable forwarding with the following command:' >&2
echo >&2
echo 'echo 1 > /proc/sys/net/ipv4/ip_forward' >&2
echo >&2
exit 1
fi
set -x
set -e
kill `cat srv1/qemu.pid srv2/qemu.pid srv3/qemu.pid` || true
ip link del cozy-br0 || true
ip link add cozy-br0 type bridge
ip link set cozy-br0 up
ip addr add 192.168.123.1/24 dev cozy-br0
# Enable forward & masquerading
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 192.168.123.0/24 -j MASQUERADE
rm -rf srv1 srv2 srv3
mkdir -p srv1 srv2 srv3
# Prepare cloud-init
for i in 1 2 3; do
echo "local-hostname: srv$i" > "srv$i/meta-data"
echo '#cloud-config' > "srv$i/user-data"
cat > "srv$i/network-config" <<EOT
version: 2
ethernets:
eth0:
dhcp4: false
addresses:
- "192.168.123.1$i/26"
gateway4: "192.168.123.1"
nameservers:
search: [cluster.local]
addresses: [8.8.8.8]
EOT
( cd srv$i && genisoimage \
-output seed.img \
-volid cidata -rational-rock -joliet \
user-data meta-data network-config
)
done
# Prepare system drive
if [ ! -f nocloud-amd64.raw ]; then
wget https://github.com/aenix-io/cozystack/releases/latest/download/nocloud-amd64.raw.xz -O nocloud-amd64.raw.xz
rm -f nocloud-amd64.raw
xz --decompress nocloud-amd64.raw.xz
fi
for i in 1 2 3; do
cp nocloud-amd64.raw srv$i/system.img
qemu-img resize srv$i/system.img 20G
done
# Prepare data drives
for i in 1 2 3; do
qemu-img create srv$i/data.img 100G
done
# Prepare networking
for i in 1 2 3; do
ip link del cozy-srv$i || true
ip tuntap add dev cozy-srv$i mode tap
ip link set cozy-srv$i up
ip link set cozy-srv$i master cozy-br0
done
# Start VMs
for i in 1 2 3; do
qemu-system-x86_64 -machine type=pc,accel=kvm -cpu host -smp 4 -m 8192 \
-device virtio-net,netdev=net0,mac=52:54:00:12:34:5$i -netdev tap,id=net0,ifname=cozy-srv$i,script=no,downscript=no \
-drive file=srv$i/system.img,if=virtio,format=raw \
-drive file=srv$i/seed.img,if=virtio,format=raw \
-drive file=srv$i/data.img,if=virtio,format=raw \
-display none -daemonize -pidfile srv$i/qemu.pid
done
sleep 5
# Wait for the VMs to start up
timeout 60 sh -c 'until nc -nzv 192.168.123.11 50000 && nc -nzv 192.168.123.12 50000 && nc -nzv 192.168.123.13 50000; do sleep 1; done'
cat > patch.yaml <<\EOT
machine:
kubelet:
nodeIP:
validSubnets:
- 192.168.123.0/24
extraConfig:
maxPods: 512
kernel:
modules:
- name: openvswitch
- name: drbd
parameters:
- usermode_helper=disabled
- name: zfs
- name: spl
install:
image: ghcr.io/aenix-io/cozystack/talos:v1.7.1
files:
- content: |
[plugins]
[plugins."io.containerd.grpc.v1.cri"]
device_ownership_from_security_context = true
path: /etc/cri/conf.d/20-customization.part
op: create
cluster:
network:
cni:
name: none
dnsDomain: cozy.local
podSubnets:
- 10.244.0.0/16
serviceSubnets:
- 10.96.0.0/16
EOT
cat > patch-controlplane.yaml <<\EOT
machine:
network:
interfaces:
- interface: eth0
vip:
ip: 192.168.123.10
cluster:
allowSchedulingOnControlPlanes: true
controllerManager:
extraArgs:
bind-address: 0.0.0.0
scheduler:
extraArgs:
bind-address: 0.0.0.0
apiServer:
certSANs:
- 127.0.0.1
proxy:
disabled: true
discovery:
enabled: false
etcd:
advertisedSubnets:
- 192.168.123.0/24
EOT
# Gen configuration
if [ ! -f secrets.yaml ]; then
talosctl gen secrets
fi
rm -f controlplane.yaml worker.yaml talosconfig kubeconfig
talosctl gen config --with-secrets secrets.yaml cozystack https://192.168.123.10:6443 --config-patch=@patch.yaml --config-patch-control-plane @patch-controlplane.yaml
export TALOSCONFIG=$PWD/talosconfig
# Apply configuration
talosctl apply -f controlplane.yaml -n 192.168.123.11 -e 192.168.123.11 -i
talosctl apply -f controlplane.yaml -n 192.168.123.12 -e 192.168.123.12 -i
talosctl apply -f controlplane.yaml -n 192.168.123.13 -e 192.168.123.13 -i
# Wait for the VMs to be configured
timeout 60 sh -c 'until nc -nzv 192.168.123.11 50000 && nc -nzv 192.168.123.12 50000 && nc -nzv 192.168.123.13 50000; do sleep 1; done'
# Bootstrap
talosctl bootstrap -n 192.168.123.11 -e 192.168.123.11
# Wait for etcd
timeout 120 sh -c 'while talosctl etcd members -n 192.168.123.11,192.168.123.12,192.168.123.13 -e 192.168.123.10 2>&1 | grep "rpc error"; do sleep 1; done'
rm -f kubeconfig
talosctl kubeconfig kubeconfig -e 192.168.123.10 -n 192.168.123.10
export KUBECONFIG=$PWD/kubeconfig
# Wait for Kubernetes nodes to appear
timeout 60 sh -c 'until [ $(kubectl get node -o name | wc -l) = 3 ]; do sleep 1; done'
kubectl create ns cozy-system
kubectl create -f - <<\EOT
apiVersion: v1
kind: ConfigMap
metadata:
name: cozystack
namespace: cozy-system
data:
bundle-name: "paas-full"
ipv4-pod-cidr: "10.244.0.0/16"
ipv4-pod-gateway: "10.244.0.1"
ipv4-svc-cidr: "10.96.0.0/16"
ipv4-join-cidr: "100.64.0.0/16"
EOT
#
echo "$COZYSTACK_INSTALLER_YAML" | kubectl apply -f -
# wait for cozystack pod to start
kubectl wait deploy --timeout=1m --for=condition=available -n cozy-system cozystack
# wait for helmreleases to appear
timeout 60 sh -c 'until kubectl get hr -A | grep cozy; do sleep 1; done'
sleep 5
kubectl get hr -A | awk 'NR>1 {print "kubectl wait --timeout=15m --for=condition=ready -n " $1 " hr/" $2 " &"} END{print "wait"}' | sh -x
# Wait for linstor controller
kubectl wait deploy --timeout=5m --for=condition=available -n cozy-linstor linstor-controller
# Wait for all linstor nodes to become Online
timeout 60 sh -c 'until [ $(kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor node list | grep -c Online) = 3 ]; do sleep 1; done'
kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor ps cdp zfs srv1 /dev/vdc --pool-name data --storage-pool data
kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor ps cdp zfs srv2 /dev/vdc --pool-name data --storage-pool data
kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor ps cdp zfs srv3 /dev/vdc --pool-name data --storage-pool data
kubectl create -f- <<EOT
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: linstor.csi.linbit.com
parameters:
linstor.csi.linbit.com/storagePool: "data"
linstor.csi.linbit.com/layerList: "storage"
linstor.csi.linbit.com/allowRemoteVolumeAccess: "false"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: replicated
provisioner: linstor.csi.linbit.com
parameters:
linstor.csi.linbit.com/storagePool: "data"
linstor.csi.linbit.com/autoPlace: "3"
linstor.csi.linbit.com/layerList: "drbd storage"
linstor.csi.linbit.com/allowRemoteVolumeAccess: "true"
property.linstor.csi.linbit.com/DrbdOptions/auto-quorum: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-no-data-accessible: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-suspended-primary-outdated: force-secondary
property.linstor.csi.linbit.com/DrbdOptions/Net/rr-conflict: retry-connect
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOT
kubectl create -f- <<EOT
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: cozystack
namespace: cozy-metallb
spec:
ipAddressPools:
- cozystack
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: cozystack
namespace: cozy-metallb
spec:
addresses:
- 192.168.123.200-192.168.123.250
autoAssign: true
avoidBuggyIPs: false
EOT
kubectl patch -n tenant-root hr/tenant-root --type=merge -p '{"spec":{ "values":{
"host": "example.org",
"ingress": true,
"monitoring": true,
"etcd": true
}}}'
# Wait for the HelmReleases to be created
timeout 60 sh -c 'until kubectl get hr -n tenant-root etcd ingress monitoring tenant-root; do sleep 1; done'
# Wait for the HelmReleases to be installed
kubectl wait --timeout=2m --for=condition=ready -n tenant-root hr etcd ingress monitoring tenant-root
# Wait for nginx-ingress-controller
timeout 60 sh -c 'until kubectl get deploy -n tenant-root root-ingress-controller; do sleep 1; done'
kubectl wait --timeout=5m --for=condition=available -n tenant-root deploy root-ingress-controller
# Wait for etcd
kubectl wait --timeout=5m --for=jsonpath=.status.readyReplicas=3 -n tenant-root sts etcd
# Wait for Victoria metrics
kubectl wait --timeout=5m --for=condition=available deploy -n tenant-root vmalert-vmalert vminsert-longterm vminsert-shortterm
kubectl wait --timeout=5m --for=jsonpath=.status.readyReplicas=2 -n tenant-root sts vmalertmanager-alertmanager vmselect-longterm vmselect-shortterm vmstorage-longterm vmstorage-shortterm
# Wait for grafana
kubectl wait --timeout=5m --for=condition=ready -n tenant-root clusters.postgresql.cnpg.io grafana-db
kubectl wait --timeout=5m --for=condition=available -n tenant-root deploy grafana-deployment
# Get IP of nginx-ingress
ip=$(kubectl get svc -n tenant-root root-ingress-controller -o jsonpath='{.status.loadBalancer.ingress..ip}')
# Check Grafana
curl -sS -k "https://$ip" -H 'Host: grafana.example.org' | grep Found


@@ -20,28 +20,9 @@ miss_map=$(echo "$new_map" | awk 'NR==FNR { new_map[$1 " " $2] = $3; next } { if
resolved_miss_map=$(
echo "$miss_map" | while read chart version commit; do
if [ "$commit" = HEAD ]; then
line=$(awk '/^version:/ {print NR; exit}' "./$chart/Chart.yaml")
change_commit=$(git --no-pager blame -L"$line",+1 -- "$chart/Chart.yaml" | awk '{print $1}')
if [ "$change_commit" = "00000000" ]; then
# Not committed yet, use previous commit
line=$(git show HEAD:"./$chart/Chart.yaml" | awk '/^version:/ {print NR; exit}')
commit=$(git --no-pager blame -L"$line",+1 HEAD -- "$chart/Chart.yaml" | awk '{print $1}')
if [ $(echo $commit | cut -c1) = "^" ]; then
# Previous commit does not exist
commit=$(echo $commit | cut -c2-)
fi
else
# Committed, but version_map wasn't updated
line=$(git show HEAD:"./$chart/Chart.yaml" | awk '/^version:/ {print NR; exit}')
change_commit=$(git --no-pager blame -L"$line",+1 HEAD -- "$chart/Chart.yaml" | awk '{print $1}')
if [ $(echo $change_commit | cut -c1) = "^" ]; then
# Previous commit does not exist
commit=$(echo $change_commit | cut -c2-)
else
commit=$(git describe --always "$change_commit~1")
fi
fi
line=$(git show HEAD:"./$chart/Chart.yaml" | awk '/^version:/ {print NR; exit}')
change_commit=$(git --no-pager blame -L"$line",+1 HEAD -- "$chart/Chart.yaml" | awk '{print $1}')
commit=$(git describe --always "$change_commit~1")
fi
echo "$chart $version $commit"
done

hack/prepare_release.sh (new executable file, 22 lines)

@@ -0,0 +1,22 @@
#!/bin/sh
set -e
if [ -z "$1" ]; then
echo "Please pass version in the first argument"
echo "Example: $0 v0.0.2"
exit 1
fi
version=$1
talos_version=$(awk '/^version:/ {print $2}' packages/core/installer/images/talos/profiles/installer.yaml)
set -x
sed -i "s|\(ghcr.io/aenix-io/cozystack/matchbox:\)v[^ ]\+|\1${version}|g" README.md
sed -i "s|\(ghcr.io/aenix-io/cozystack/talos:\)v[^ ]\+|\1${talos_version}|g" README.md
sed -i "/^TAG / s|=.*|= ${version}|" \
packages/apps/http-cache/Makefile \
packages/apps/kubernetes/Makefile \
packages/core/installer/Makefile \
packages/system/dashboard/Makefile


@@ -15,6 +15,13 @@ metadata:
namespace: cozy-system
---
# Source: cozy-installer/templates/cozystack.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: cozystack
namespace: cozy-system
---
# Source: cozy-installer/templates/cozystack.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
@@ -55,10 +62,7 @@ spec:
matchLabels:
app: cozystack
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
type: Recreate
template:
metadata:
labels:
@@ -68,26 +72,14 @@ spec:
serviceAccountName: cozystack
containers:
- name: cozystack
image: "ghcr.io/aenix-io/cozystack/cozystack:v0.10.1"
image: "ghcr.io/aenix-io/cozystack/installer:v0.0.2"
env:
- name: KUBERNETES_SERVICE_HOST
value: localhost
- name: KUBERNETES_SERVICE_PORT
value: "7445"
- name: K8S_AWAIT_ELECTION_ENABLED
value: "1"
- name: K8S_AWAIT_ELECTION_NAME
value: cozystack
- name: K8S_AWAIT_ELECTION_LOCK_NAME
value: cozystack
- name: K8S_AWAIT_ELECTION_LOCK_NAMESPACE
value: cozy-system
- name: K8S_AWAIT_ELECTION_IDENTITY
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: darkhttpd
image: "ghcr.io/aenix-io/cozystack/cozystack:v0.10.1"
image: "ghcr.io/aenix-io/cozystack/installer:v0.0.2"
command:
- /usr/bin/darkhttpd
- /cozystack/assets
@@ -100,6 +92,3 @@ spec:
- key: "node.kubernetes.io/not-ready"
operator: "Exists"
effect: "NoSchedule"
- key: "node.cilium.io/agent-not-ready"
operator: "Exists"
effect: "NoSchedule"


@@ -7,11 +7,11 @@ repo:
awk '$$3 != "HEAD" {print "mkdir -p $(TMP)/" $$1 "-" $$2}' versions_map | sh -ex
awk '$$3 != "HEAD" {print "git archive " $$3 " " $$1 " | tar -xf- --strip-components=1 -C $(TMP)/" $$1 "-" $$2 }' versions_map | sh -ex
helm package -d "$(OUT)" $$(find . $(TMP) -mindepth 2 -maxdepth 2 -name Chart.yaml | awk 'sub("/Chart.yaml", "")' | sort -V)
cd "$(OUT)" && helm repo index . --url http://cozystack.cozy-system.svc/repos/apps
cd "$(OUT)" && helm repo index .
rm -rf "$(TMP)"
fix-chartnames:
find . -maxdepth 2 -name Chart.yaml | awk -F/ '{print $$2}' | while read i; do sed -i "s/^name: .*/name: $$i/" "$$i/Chart.yaml"; done
find . -name Chart.yaml -maxdepth 2 | awk -F/ '{print $$2}' | while read i; do sed -i "s/^name: .*/name: $$i/" "$$i/Chart.yaml"; done
gen-versions-map: fix-chartnames
../../hack/gen_versions_map.sh


@@ -1,3 +0,0 @@
.helmignore
/logos
/Makefile


@@ -1,25 +0,0 @@
apiVersion: v2
name: clickhouse
description: Managed ClickHouse service
icon: /logos/clickhouse.svg
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.2.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "24.3.0"


@@ -1,2 +0,0 @@
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md


@@ -1,17 +0,0 @@
# Managed ClickHouse Service
## Parameters
### Common parameters
| Name | Description | Value |
| ---------- | ----------------------------- | ------ |
| `size` | Persistent Volume size | `10Gi` |
| `shards`   | Number of ClickHouse shards   | `1`    |
| `replicas` | Number of ClickHouse replicas | `2`    |
### Configuration parameters
| Name | Description | Value |
| ------- | ------------------- | ----- |
| `users` | Users configuration | `{}` |


@@ -1 +0,0 @@
<svg height="2222" viewBox="0 0 9 8" width="2500" xmlns="http://www.w3.org/2000/svg"><path d="m0 7h1v1h-1z" fill="#f00"/><path d="m0 0h1v7h-1zm2 0h1v8h-1zm2 0h1v8h-1zm2 0h1v8h-1zm2 3.25h1v1.5h-1z" fill="#fc0"/></svg>



@@ -1,37 +0,0 @@
apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
name: "{{ .Release.Name }}"
spec:
{{- with .Values.size }}
defaults:
templates:
dataVolumeClaimTemplate: data-volume-template
{{- end }}
configuration:
{{- with .Values.users }}
users:
{{- range $name, $u := . }}
{{ $name }}/password_sha256_hex: {{ sha256sum $u.password }}
{{ $name }}/profile: {{ ternary "readonly" "default" (index $u "readonly" | default false) }}
{{ $name }}/networks/ip: ["::/0"]
{{- end }}
{{- end }}
profiles:
readonly/readonly: "1"
clusters:
- name: "clickhouse"
layout:
shardsCount: {{ .Values.shards }}
replicasCount: {{ .Values.replicas }}
{{- with .Values.size }}
templates:
volumeClaimTemplates:
- name: data-volume-template
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ . }}
{{- end }}


@@ -1,21 +0,0 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"size": {
"type": "string",
"description": "Persistent Volume size",
"default": "10Gi"
},
"shards": {
"type": "number",
"description": "Number of Clickhouse replicas",
"default": 1
},
"replicas": {
"type": "number",
"description": "Number of Clickhouse shards",
"default": 2
}
}
}


@@ -1,22 +0,0 @@
## @section Common parameters
## @param size Persistent Volume size
## @param shards Number of ClickHouse shards
## @param replicas Number of ClickHouse replicas
##
size: 10Gi
shards: 1
replicas: 2
## @section Configuration parameters
## @param users [object] Users configuration
## Example:
## users:
## user1:
## password: strongpassword
## user2:
## readonly: true
## password: hackme
##
users: {}


@@ -1,3 +0,0 @@
.helmignore
/logos
/Makefile


@@ -1,25 +0,0 @@
apiVersion: v2
name: ferretdb
description: Managed FerretDB service
icon: /logos/ferretdb.svg
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.22.0"


@@ -1,2 +0,0 @@
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md


@@ -1,34 +0,0 @@
# Managed FerretDB Service
## Parameters
### Common parameters
| Name | Description | Value |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `size` | Persistent Volume size | `10Gi` |
| `replicas` | Number of Postgres replicas | `2` |
| `quorum.minSyncReplicas` | Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed. | `0` |
| `quorum.maxSyncReplicas` | Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the number of instances). | `0` |
### Configuration parameters
| Name | Description | Value |
| ------- | ------------------- | ----- |
| `users` | Users configuration | `{}` |
### Backup parameters
| Name | Description | Value |
| ------------------------ | ---------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled`         | Enable periodic backups                        | `false`                                                 |
| `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/postgres-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |


@@ -1,54 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
width="200mm"
height="195.323mm"
viewBox="0 0 200 195.323"
version="1.1"
id="svg948"
inkscape:version="1.1.1 (c3084ef, 2021-09-22)"
sodipodi:docname="ferretdb.svg"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg">
<sodipodi:namedview
id="namedview950"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageshadow="2"
inkscape:pageopacity="0.0"
inkscape:pagecheckerboard="0"
inkscape:document-units="mm"
showgrid="false"
inkscape:zoom="0.64052329"
inkscape:cx="-69.474445"
inkscape:cy="579.99452"
inkscape:window-width="3440"
inkscape:window-height="1387"
inkscape:window-x="0"
inkscape:window-y="25"
inkscape:window-maximized="1"
inkscape:current-layer="layer1" />
<defs
id="defs945" />
<g
inkscape:label="Layer 1"
inkscape:groupmode="layer"
id="layer1">
<path
d="M 95.871302,0.25836635 C 73.52529,3.312081 51.107429,17.502874 38.138123,36.831094 c -2.083712,3.125567 -5.676318,9.628178 -5.676318,10.274847 0,0.0719 1.724451,-0.970003 3.808162,-2.335187 25.651206,-16.921175 56.260205,-20.046742 81.156963,-8.298921 5.42484,2.550751 8.83781,5.029648 13.68783,9.879665 8.15521,8.191137 14.11894,19.148592 18.25044,33.554942 2.15556,7.400765 3.95187,17.495992 4.4189,24.35786 0.10778,1.86816 0.39518,3.52075 0.57482,3.62853 1.00593,0.61075 5.53261,-5.96372 8.73003,-12.645965 5.06558,-10.634111 7.43669,-21.0886 7.40077,-32.692714 -0.036,-16.418213 -5.71224,-30.213814 -17.13674,-41.710153 C 143.22184,10.640997 130.43216,3.6354156 117.03174,0.90503536 113.90617,0.29429263 111.6069,0.11466224 105.75097,0.00688441 101.69132,-0.02904391 97.272414,0.07873086 95.871302,0.25836635 Z"
id="path824"
style="fill:#216778;stroke-width:0.0359261" />
<path
d="m 48.377049,48.219658 c -2.335194,1.149625 -6.251134,4.742233 -9.700036,8.873735 -1.54482,1.832222 -3.880014,4.095564 -5.604464,5.388902 -4.02372,3.017795 -10.885597,9.735963 -14.370424,14.083015 -18.1785821,22.525641 -23.2441594,48.21277 -14.585984,74.00768 7.113359,21.12453 23.567499,35.13569 48.859444,41.4946 9.843739,2.51482 24.60935,3.91593 30.788632,2.94593 l 1.580747,-0.25148 -2.442972,-1.43704 C 69.42972,185.49312 60.017093,172.27233 57.39449,157.57857 c -0.790373,-4.45483 -0.826299,-12.35856 -0.03593,-16.70562 1.760377,-9.77189 6.682247,-18.7534 13.364494,-24.35786 3.125567,-2.6226 8.586328,-5.31706 12.933381,-6.35891 6.538543,-1.58075 10.526335,-3.37705 14.657827,-6.64633 2.658538,-2.0837 4.993728,-5.2452 6.933738,-9.340763 1.65259,-3.484834 5.17335,-14.550063 5.17335,-16.310439 0,-1.221482 -1.25742,-2.874082 -3.05372,-3.987789 -0.93408,-0.574812 -2.40705,-0.898147 -6.17927,-1.293338 C 84.949773,70.888992 76.866409,67.943063 67.094521,60.218953 65.693406,59.105246 64.00488,57.847837 63.322285,57.416727 62.639691,57.021536 61.2745,55.512639 60.340423,54.111526 c -2.838159,-4.131492 -6.358912,-6.790025 -9.053367,-6.825953 -0.574817,0 -1.904081,0.431119 -2.910011,0.934085 z m 17.639695,16.633763 c 1.221486,0.610741 2.55075,1.401113 2.981863,1.724447 l 0.790373,0.646669 -1.257411,5.029649 c -1.077783,4.38298 -1.257413,5.496687 -1.149634,8.622257 0.107777,3.089642 0.215555,3.77223 0.934077,4.778161 1.18556,1.616673 3.233345,2.586676 5.532613,2.586676 3.269271,0 5.820021,-1.86815 10.059296,-7.436693 1.221486,-1.580744 2.19149,-2.442973 3.628532,-3.125571 2.227415,-1.113706 3.808162,-1.221481 8.765958,-0.790372 l 3.305202,0.323335 v 1.940007 c 0,3.053724 1.616677,4.814099 4.921857,5.317065 l 1.58075,0.21555 -0.57481,1.329266 c -2.51483,6.071499 -8.981521,12.93338 -15.05302,15.987093 -0.970004,0.46703 -3.161494,1.32926 -4.850018,1.90408 -2.766306,0.89815 -3.520754,1.00593 -8.262994,1.00593 -4.706313,0 -5.496687,-0.10778 -8.083363,-0.97001 -7.795954,-2.58667 -13.58005,-8.334832 -16.202652,-16.058942 -0.934077,-2.73038 -0.970004,-10.670039 -0.03593,-13.975231 1.257413,-4.562611 3.484828,-8.33485 5.820023,-9.80782 1.508893,-0.970003 4.311126,-0.646669 7.149285,0.754454 z"
id="path826"
style="fill:#216778;stroke-width:0.0359261" />
<path
d="m 181.55494,78.397542 c 0,1.616673 -1.7963,9.089295 -3.30519,13.759681 -5.67632,17.495987 -15.95117,33.195677 -29.35159,44.656087 -9.41263,8.08336 -16.09488,11.64004 -26.69306,14.26265 -6.82596,1.68852 -11.28078,2.22741 -19.93897,2.44297 -10.813737,0.2874 -21.483776,-0.6826 -31.040108,-2.76631 -1.832229,-0.39519 -3.377049,-0.64667 -3.484828,-0.53889 -0.431112,0.39519 1.221487,5.89187 2.658529,8.80189 2.622602,5.38891 5.604466,9.41262 10.921522,14.72968 5.604465,5.60446 9.771888,8.6941 16.238576,12.03522 16.023019,8.263 34.417169,9.37671 53.278339,3.1615 19.90304,-6.50262 34.52495,-18.25043 42.39275,-34.05791 5.24521,-10.4904 7.40077,-21.69934 6.6104,-34.489 -0.97001,-15.77155 -6.79003,-31.219754 -15.23265,-40.344967 -1.32926,-1.437041 -2.55075,-2.586676 -2.73038,-2.586676 -0.17963,0 -0.32334,0.431109 -0.32334,0.934075 z"
id="path828"
style="fill:#216778;stroke-width:0.0359261" />
</g>
</svg>



@@ -1,99 +0,0 @@
{{- if .Values.backup.enabled }}
{{ $image := .Files.Get "images/backup.json" | fromJson }}
apiVersion: batch/v1
kind: CronJob
metadata:
name: {{ .Release.Name }}-backup
spec:
schedule: "{{ .Values.backup.schedule }}"
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 3
jobTemplate:
spec:
backoffLimit: 2
template:
spec:
restartPolicy: OnFailure
template:
metadata:
annotations:
checksum/config: {{ include (print $.Template.BasePath "/backup-script.yaml") . | sha256sum }}
checksum/secret: {{ include (print $.Template.BasePath "/backup-secret.yaml") . | sha256sum }}
spec:
restartPolicy: Never
containers:
- name: mysqldump
image: "{{ index $image "image.name" }}@{{ index $image "containerimage.digest" }}"
command:
- /bin/sh
- /scripts/backup.sh
env:
- name: REPO_PREFIX
value: {{ required "s3Bucket is not specified!" .Values.backup.s3Bucket | quote }}
- name: CLEANUP_STRATEGY
value: {{ required "cleanupStrategy is not specified!" .Values.backup.cleanupStrategy | quote }}
- name: PGUSER
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-postgres-superuser
key: username
- name: PGPASSWORD
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-postgres-superuser
key: password
- name: PGHOST
value: {{ .Release.Name }}-postgres-rw
- name: PGPORT
value: "5432"
- name: PGDATABASE
value: postgres
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-backup
key: s3AccessKey
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-backup
key: s3SecretKey
- name: AWS_DEFAULT_REGION
value: {{ .Values.backup.s3Region }}
- name: RESTIC_PASSWORD
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-backup
key: resticPassword
volumeMounts:
- mountPath: /scripts
name: scripts
- mountPath: /tmp
name: tmp
- mountPath: /.cache
name: cache
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true
volumes:
- name: scripts
secret:
secretName: {{ .Release.Name }}-backup-script
- name: tmp
emptyDir: {}
- name: cache
emptyDir: {}
securityContext:
runAsNonRoot: true
runAsUser: 9000
runAsGroup: 9000
seccompProfile:
type: RuntimeDefault
{{- end }}


@@ -1,50 +0,0 @@
{{- if .Values.backup.enabled }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-backup-script
stringData:
backup.sh: |
#!/bin/sh
set -e
set -o pipefail
JOB_ID="job-$(uuidgen|cut -f1 -d-)"
DB_LIST=$(psql -Atq -c 'SELECT datname FROM pg_catalog.pg_database;' | grep -v '^\(postgres\|app\|template.*\)$')
DB_LIST=$(echo "$DB_LIST" | shuf) # shuffle list
echo "Job ID: $JOB_ID"
echo "Target repo: $REPO_PREFIX"
echo "Cleanup strategy: $CLEANUP_STRATEGY"
echo "Start backup for:"
echo "$DB_LIST"
echo
echo "Backup started at `date +%Y-%m-%d\ %H:%M:%S`"
for db in $DB_LIST; do
(
set -x
restic -r "s3:${REPO_PREFIX}/$db" cat config >/dev/null 2>&1 || \
restic -r "s3:${REPO_PREFIX}/$db" init --repository-version 2
restic -r "s3:${REPO_PREFIX}/$db" unlock --remove-all >/dev/null 2>&1 || true # no locks, k8s takes care of it
pg_dump -Z0 -Ft -d "$db" | \
restic -r "s3:${REPO_PREFIX}/$db" backup --tag "$JOB_ID" --stdin --stdin-filename dump.tar
restic -r "s3:${REPO_PREFIX}/$db" tag --tag "$JOB_ID" --set "completed"
)
done
echo "Backup finished at `date +%Y-%m-%d\ %H:%M:%S`"
echo
echo "Run cleanup:"
echo
echo "Cleanup started at `date +%Y-%m-%d\ %H:%M:%S`"
for db in $DB_LIST; do
(
set -x
restic forget -r "s3:${REPO_PREFIX}/$db" --group-by=tags --keep-tag "completed" # keep completed snapshots only
restic forget -r "s3:${REPO_PREFIX}/$db" --group-by=tags $CLEANUP_STRATEGY
restic prune -r "s3:${REPO_PREFIX}/$db"
)
done
echo "Cleanup finished at `date +%Y-%m-%d\ %H:%M:%S`"
{{- end }}


@@ -1,11 +0,0 @@
{{- if .Values.backup.enabled }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-backup
stringData:
s3AccessKey: {{ required "s3AccessKey is not specified!" .Values.backup.s3AccessKey }}
s3SecretKey: {{ required "s3SecretKey is not specified!" .Values.backup.s3SecretKey }}
resticPassword: {{ required "resticPassword is not specified!" .Values.backup.resticPassword }}
{{- end }}


@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}
spec:
type: {{ ternary "LoadBalancer" "ClusterIP" .Values.external }}
{{- if .Values.external }}
externalTrafficPolicy: Local
allocateLoadBalancerNodePorts: false
{{- end }}
ports:
- name: ferretdb
port: 27017
selector:
app: {{ .Release.Name }}


@@ -1,26 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}
spec:
replicas: {{ .Values.replicas }}
selector:
matchLabels:
app: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ .Release.Name }}
spec:
containers:
- name: ferretdb
image: ghcr.io/ferretdb/ferretdb:1.22.0
ports:
- containerPort: 27017
env:
- name: FERRETDB_POSTGRESQL_URL
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-postgres-app
key: uri


@@ -1,66 +0,0 @@
apiVersion: batch/v1
kind: Job
metadata:
name: {{ .Release.Name }}-init-job
annotations:
"helm.sh/hook": post-install,post-upgrade
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": before-hook-creation
spec:
template:
metadata:
name: {{ .Release.Name }}-init-job
annotations:
checksum/config: {{ include (print $.Template.BasePath "/init-script.yaml") . | sha256sum }}
spec:
restartPolicy: Never
containers:
- name: postgres
image: ghcr.io/cloudnative-pg/postgresql:15.3
command:
- bash
- /scripts/init.sh
env:
- name: PGUSER
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-postgres-superuser
key: username
- name: PGPASSWORD
valueFrom:
secretKeyRef:
name: {{ .Release.Name }}-postgres-superuser
key: password
- name: PGHOST
value: {{ .Release.Name }}-postgres-rw
- name: PGPORT
value: "5432"
- name: PGDATABASE
value: postgres
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true
volumeMounts:
- mountPath: /etc/secret
name: secret
- mountPath: /scripts
name: scripts
securityContext:
fsGroup: 26
runAsGroup: 26
runAsNonRoot: true
runAsUser: 26
seccompProfile:
type: RuntimeDefault
volumes:
- name: secret
secret:
secretName: {{ .Release.Name }}-postgres-superuser
- name: scripts
secret:
secretName: {{ .Release.Name }}-init-script


@@ -1,101 +0,0 @@
---
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-init-script
stringData:
init.sh: |
#!/bin/bash
set -e
echo "== create users"
{{- if .Values.users }}
psql -v ON_ERROR_STOP=1 <<\EOT
{{- range $user, $u := .Values.users }}
SELECT 'CREATE ROLE {{ $user }} LOGIN INHERIT;'
WHERE NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = '{{ $user }}')\gexec
ALTER ROLE {{ $user }} WITH PASSWORD '{{ $u.password }}' LOGIN INHERIT {{ ternary "REPLICATION" "NOREPLICATION" (default false $u.replication) }};
COMMENT ON ROLE {{ $user }} IS 'user managed by helm';
{{- end }}
EOT
{{- end }}
echo "== delete users"
MANAGED_USERS=$(echo '\du+' | psql | awk -F'|' '$4 == " user managed by helm" {print $1}' | awk NF=NF RS= OFS=' ')
DEFINED_USERS="{{ join " " (keys .Values.users) }}"
DELETE_USERS=$(for user in $MANAGED_USERS; do case " $DEFINED_USERS " in *" $user "*) :;; *) echo $user;; esac; done)
echo "users to delete: $DELETE_USERS"
for user in $DELETE_USERS; do
# https://stackoverflow.com/a/51257346/2931267
psql -v ON_ERROR_STOP=1 --echo-all <<EOT
REASSIGN OWNED BY $user TO postgres;
DROP OWNED BY $user;
DROP USER $user;
EOT
done
echo "== create roles"
psql -v ON_ERROR_STOP=1 --echo-all <<\EOT
SELECT 'CREATE ROLE app_admin NOINHERIT;'
WHERE NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = 'app_admin')\gexec
COMMENT ON ROLE app_admin IS 'role managed by helm';
EOT
echo "== grant privileges on databases to roles"
psql -v ON_ERROR_STOP=1 --echo-all -d "app" <<\EOT
ALTER DATABASE app OWNER TO app_admin;
DO $$
DECLARE
schema_record record;
BEGIN
-- Loop over all schemas
FOR schema_record IN SELECT schema_name FROM information_schema.schemata WHERE schema_name NOT IN ('pg_catalog', 'information_schema') LOOP
-- Changing Schema Ownership
EXECUTE format('ALTER SCHEMA %I OWNER TO %I', schema_record.schema_name, 'app_admin');
-- Add rights for the admin role
EXECUTE format('GRANT ALL ON SCHEMA %I TO %I', schema_record.schema_name, 'app_admin');
EXECUTE format('GRANT ALL ON ALL TABLES IN SCHEMA %I TO %I', schema_record.schema_name, 'app_admin');
EXECUTE format('GRANT ALL ON ALL SEQUENCES IN SCHEMA %I TO %I', schema_record.schema_name, 'app_admin');
EXECUTE format('GRANT ALL ON ALL FUNCTIONS IN SCHEMA %I TO %I', schema_record.schema_name, 'app_admin');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON TABLES TO %I', schema_record.schema_name, 'app_admin');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON SEQUENCES TO %I', schema_record.schema_name, 'app_admin');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON FUNCTIONS TO %I', schema_record.schema_name, 'app_admin');
END LOOP;
END$$;
EOT
echo "== setup event trigger for schema creation"
psql -v ON_ERROR_STOP=1 --echo-all -d "app" <<\EOT
CREATE OR REPLACE FUNCTION auto_grant_schema_privileges()
RETURNS event_trigger LANGUAGE plpgsql AS $$
DECLARE
obj record;
BEGIN
FOR obj IN SELECT * FROM pg_event_trigger_ddl_commands() WHERE command_tag = 'CREATE SCHEMA' LOOP
-- Set owner for schema
EXECUTE format('ALTER SCHEMA %I OWNER TO %I', obj.object_identity, 'app_admin');
-- Set privileges for admin role
EXECUTE format('GRANT ALL ON SCHEMA %I TO %I', obj.object_identity, 'app_admin');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON TABLES TO %I', obj.object_identity, 'app_admin');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON SEQUENCES TO %I', obj.object_identity, 'app_admin');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON FUNCTIONS TO %I', obj.object_identity, 'app_admin');
END LOOP;
END;
$$;
DROP EVENT TRIGGER IF EXISTS trigger_auto_grant;
CREATE EVENT TRIGGER trigger_auto_grant ON ddl_command_end
WHEN TAG IN ('CREATE SCHEMA')
EXECUTE PROCEDURE auto_grant_schema_privileges();
EOT
echo "== assign roles to users"
psql -v ON_ERROR_STOP=1 --echo-all <<\EOT
GRANT app_admin TO app;
{{- range $user, $u := $.Values.users }}
GRANT app_admin TO {{ $user }};
{{- end }}
EOT
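For illustration, here is a minimal sketch of what the user-creation heredoc above renders to, assuming a hypothetical values entry `users: {alice: {password: s3cret}}` (both the name and the password are made up for this example):

```
SELECT 'CREATE ROLE alice LOGIN INHERIT;'
WHERE NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = 'alice')\gexec
-- The password is reapplied on every run, so changes in values propagate
ALTER ROLE alice WITH PASSWORD 's3cret' LOGIN INHERIT NOREPLICATION;
-- This comment marks the role as managed, which the deletion pass above relies on
COMMENT ON ROLE alice IS 'user managed by helm';
```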

View File

@@ -1,45 +0,0 @@
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: {{ .Release.Name }}-postgres
spec:
instances: {{ .Values.replicas }}
enableSuperuserAccess: true
minSyncReplicas: {{ .Values.quorum.minSyncReplicas }}
maxSyncReplicas: {{ .Values.quorum.maxSyncReplicas }}
monitoring:
enablePodMonitor: true
storage:
size: {{ required ".Values.size is required" .Values.size }}
{{- if .Values.users }}
managed:
roles:
{{- range $user, $config := .Values.users }}
- name: {{ $user }}
ensure: present
passwordSecret:
name: {{ printf "%s-user-%s" $.Release.Name $user }}
login: true
inRoles:
- app
{{- end }}
{{- end }}
{{- range $user, $config := .Values.users }}
---
apiVersion: v1
kind: Secret
metadata:
name: {{ printf "%s-user-%s" $.Release.Name $user }}
labels:
cnpg.io/reload: "true"
type: kubernetes.io/basic-auth
data:
username: {{ $user | b64enc }}
password: {{ $config.password | b64enc }}
{{- end }}

View File

@@ -1,81 +0,0 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"external": {
"type": "boolean",
"description": "Enable external access from outside the cluster",
"default": false
},
"size": {
"type": "string",
"description": "Persistent Volume size",
"default": "10Gi"
},
"replicas": {
"type": "number",
"description": "Number of Postgres replicas",
"default": 2
},
"quorum": {
"type": "object",
"properties": {
"minSyncReplicas": {
"type": "number",
"description": "Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed.",
"default": 0
},
"maxSyncReplicas": {
"type": "number",
"description": "Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the number of instances).",
"default": 0
}
}
},
"backup": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Enable pereiodic backups",
"default": false
},
"s3Region": {
"type": "string",
"description": "The AWS S3 region where backups are stored",
"default": "us-east-1"
},
"s3Bucket": {
"type": "string",
"description": "The S3 bucket used for storing backups",
"default": "s3.example.org/postgres-backups"
},
"schedule": {
"type": "string",
"description": "Cron schedule for automated backups",
"default": "0 2 * * *"
},
"cleanupStrategy": {
"type": "string",
"description": "The strategy for cleaning up old backups",
"default": "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
},
"s3AccessKey": {
"type": "string",
"description": "The access key for S3, used for authentication",
"default": "oobaiRus9pah8PhohL1ThaeTa4UVa7gu"
},
"s3SecretKey": {
"type": "string",
"description": "The secret key for S3, used for authentication",
"default": "ju3eum4dekeich9ahM1te8waeGai0oog"
},
"resticPassword": {
"type": "string",
"description": "The password for Restic backup encryption",
"default": "ChaXoveekoh6eigh4siesheeda2quai0"
}
}
}
}
}

View File

@@ -1,48 +0,0 @@
## @section Common parameters
## @param external Enable external access from outside the cluster
## @param size Persistent Volume size
## @param replicas Number of Postgres replicas
##
external: false
size: 10Gi
replicas: 2
## Configuration for the quorum-based synchronous replication
## @param quorum.minSyncReplicas Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed.
## @param quorum.maxSyncReplicas Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the number of instances).
quorum:
minSyncReplicas: 0
maxSyncReplicas: 0
## @section Configuration parameters
## @param users [object] Users configuration
## Example:
## users:
## user1:
## password: strongpassword
## user2:
## password: hackme
##
users: {}
## @section Backup parameters
## @param backup.enabled Enable periodic backups
## @param backup.s3Region The AWS S3 region where backups are stored
## @param backup.s3Bucket The S3 bucket used for storing backups
## @param backup.schedule Cron schedule for automated backups
## @param backup.cleanupStrategy The strategy for cleaning up old backups
## @param backup.s3AccessKey The access key for S3, used for authentication
## @param backup.s3SecretKey The secret key for S3, used for authentication
## @param backup.resticPassword The password for Restic backup encryption
backup:
enabled: false
s3Region: us-east-1
s3Bucket: s3.example.org/postgres-backups
schedule: "0 2 * * *"
cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0

View File

@@ -1,56 +0,0 @@
## @section Common parameters
## @param external Enable external access from outside the cluster
## @param size Persistent Volume size
## @param replicas Number of Postgres replicas
##
external: false
size: 10Gi
replicas: 1
## Configuration for the quorum-based synchronous replication
## @param quorum.minSyncReplicas Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed.
## @param quorum.maxSyncReplicas Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the number of instances).
quorum:
minSyncReplicas: 0
maxSyncReplicas: 0
## @section Configuration parameters
## @param users [object] Users configuration
## Example:
## users:
## user1:
## password: strongpassword
## user2:
## password: hackme
##
users:
foo:
password: asd
bar:
password: asd
baz:
password: asd
boo:
password: asd
## @section Backup parameters
## @param backup.enabled Enable periodic backups
## @param backup.s3Region The AWS S3 region where backups are stored
## @param backup.s3Bucket The S3 bucket used for storing backups
## @param backup.schedule Cron schedule for automated backups
## @param backup.cleanupStrategy The strategy for cleaning up old backups
## @param backup.s3AccessKey The access key for S3, used for authentication
## @param backup.s3SecretKey The secret key for S3, used for authentication
## @param backup.resticPassword The password for Restic backup encryption
backup:
enabled: false
s3Region: us-east-1
s3Bucket: s3.example.org/postgres-backups
schedule: "0 2 * * *"
cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
resticPassword: ChaXoveekoh6eigh4siesheeda2quai0

View File

@@ -1,3 +1,23 @@
.helmignore
/logos
/Makefile
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -1,7 +1,7 @@
apiVersion: v2
name: http-cache
description: Layer 7 load balancer and caching service
icon: /logos/nginx.svg
icon: https://www.svgrepo.com/show/373924/nginx.svg
# A chart can be either an 'application' or a 'library' chart.
#
@@ -16,10 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.2.0
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.25.3"
appVersion: "1.16.0"

View File

@@ -1,23 +1,22 @@
PUSH := 1
LOAD := 0
REGISTRY := ghcr.io/aenix-io/cozystack
NGINX_CACHE_TAG = v0.1.0
include ../../../scripts/common-envs.mk
TAG := v0.0.2
image: image-nginx
image-nginx:
docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/nginx-cache \
--provenance false \
--tag $(REGISTRY)/nginx-cache:$(call settag,$(NGINX_CACHE_TAG)) \
--tag $(REGISTRY)/nginx-cache:$(call settag,$(NGINX_CACHE_TAG)-$(TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/nginx-cache:latest \
--tag $(REGISTRY)/nginx-cache:$(NGINX_CACHE_TAG) \
--tag $(REGISTRY)/nginx-cache:$(NGINX_CACHE_TAG)-$(TAG) \
--cache-from type=registry,ref=$(REGISTRY)/nginx-cache:$(NGINX_CACHE_TAG) \
--cache-to type=inline \
--metadata-file images/nginx-cache.json \
--push=$(PUSH) \
--load=$(LOAD)
echo "$(REGISTRY)/nginx-cache:$(call settag,$(NGINX_CACHE_TAG))" > images/nginx-cache.tag
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md
echo "$(REGISTRY)/nginx-cache:$(NGINX_CACHE_TAG)" > images/nginx-cache.tag
update:
tag=$$(git ls-remote --tags --sort="v:refname" https://github.com/chrislim2888/IP2Location-C-Library | awk -F'[/^]' 'END{print $$3}') && \

View File

@@ -55,20 +55,3 @@ The deployment architecture is illustrated in the diagram below:
VTS module shows wrong upstream response time
- https://github.com/vozlt/nginx-module-vts/issues/198
## Parameters
### Common parameters
| Name | Description | Value |
| ------------------ | ----------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `size` | Persistent Volume size | `10Gi` |
| `haproxy.replicas` | Number of HAProxy replicas | `2` |
| `nginx.replicas` | Number of Nginx replicas | `2` |
### Configuration parameters
| Name | Description | Value |
| ----------- | ----------------------- | ----- |
| `endpoints` | Endpoints configuration | `[]` |

View File

@@ -1,48 +1,14 @@
{
"buildx.build.provenance": {
"buildType": "https://mobyproject.org/buildkit@v1",
"materials": [
{
"uri": "pkg:docker/ubuntu@22.04?platform=linux%2Famd64",
"digest": {
"sha256": "340d9b015b194dc6e2a13938944e0d016e57b9679963fdeb9ce021daac430221"
}
}
],
"invocation": {
"configSource": {
"entryPoint": "Dockerfile"
},
"parameters": {
"frontend": "dockerfile.v0",
"args": {
"build-arg:ARCH": "amd64"
},
"locals": [
{
"name": "context"
},
{
"name": "dockerfile"
}
]
},
"environment": {
"platform": "linux/amd64"
}
}
},
"buildx.build.ref": "cozystack/cozystack0/7j4plhjjn8onm0o8q0omik63x",
"containerimage.config.digest": "sha256:f30f57d817c596f7a7d0ecfe734b7b41994eca9d36d43307206314ee37bdb286",
"containerimage.config.digest": "sha256:f4ad0559a74749de0d11b1835823bf9c95332962b0909450251d849113f22c19",
"containerimage.descriptor": {
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"digest": "sha256:f7d86b1a72a12b60434a12a604e9ddd3779d9fa605205c7968fe9495e764c94c",
"size": 1094,
"digest": "sha256:3a0e8d791e0ccf681711766387ea9278e7d39f1956509cead2f72aa0001797ef",
"size": 1093,
"platform": {
"architecture": "amd64",
"os": "linux"
}
},
"containerimage.digest": "sha256:f7d86b1a72a12b60434a12a604e9ddd3779d9fa605205c7968fe9495e764c94c",
"image.name": "ghcr.io/aenix-io/cozystack/nginx-cache:v0.1.0,ghcr.io/aenix-io/cozystack/nginx-cache:v0.1.0-v0.10.1"
"containerimage.digest": "sha256:3a0e8d791e0ccf681711766387ea9278e7d39f1956509cead2f72aa0001797ef",
"image.name": "ghcr.io/aenix-io/cozystack/nginx-cache:v0.1.0,ghcr.io/aenix-io/cozystack/nginx-cache:v0.1.0-v0.0.2"
}

View File

@@ -1,2 +0,0 @@
<?xml version="1.0" encoding="utf-8"?><!-- Uploaded to: SVG Repo, www.svgrepo.com, Generator: SVG Repo Mixer Tools -->
<svg width="800px" height="800px" viewBox="0 0 32 32" xmlns="http://www.w3.org/2000/svg"><title>file_type_nginx</title><path d="M15.948,2h.065a10.418,10.418,0,0,1,.972.528Q22.414,5.65,27.843,8.774a.792.792,0,0,1,.414.788c-.008,4.389,0,8.777-.005,13.164a.813.813,0,0,1-.356.507q-5.773,3.324-11.547,6.644a.587.587,0,0,1-.657.037Q9.912,26.6,4.143,23.274a.7.7,0,0,1-.4-.666q0-6.582,0-13.163a.693.693,0,0,1,.387-.67Q9.552,5.657,14.974,2.535c.322-.184.638-.379.974-.535" style="fill:#019639"/><path d="M8.767,10.538q0,5.429,0,10.859a1.509,1.509,0,0,0,.427,1.087,1.647,1.647,0,0,0,2.06.206,1.564,1.564,0,0,0,.685-1.293c0-2.62-.005-5.24,0-7.86q3.583,4.29,7.181,8.568a2.833,2.833,0,0,0,2.6.782,1.561,1.561,0,0,0,1.251-1.371q.008-5.541,0-11.081a1.582,1.582,0,0,0-3.152,0c0,2.662-.016,5.321,0,7.982-2.346-2.766-4.663-5.556-7-8.332A2.817,2.817,0,0,0,10.17,9.033,1.579,1.579,0,0,0,8.767,10.538Z" style="fill:#fff"/></svg>


View File

@@ -74,7 +74,7 @@ data:
option redispatch 1
default-server observe layer7 error-limit 10 on-error mark-down
{{- range $i, $e := until (int $.Values.nginx.replicas) }}
{{- range $i, $e := until (int $.Values.replicas) }}
server cache{{ $i }} {{ $.Release.Name }}-nginx-cache-{{ $i }}:80 check
{{- end }}
{{- range $i, $e := $.Values.endpoints }}

View File

@@ -7,7 +7,7 @@ metadata:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: {{ .Values.haproxy.replicas }}
replicas: 2
selector:
matchLabels:
app: {{ .Release.Name }}-haproxy

View File

@@ -11,7 +11,7 @@ spec:
selector:
matchLabels:
app: {{ $.Release.Name }}-nginx-cache
{{- range $i := until (int $.Values.nginx.replicas) }}
{{- range $i := until 3 }}
---
apiVersion: apps/v1
kind: Deployment

View File

@@ -1,42 +0,0 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"external": {
"type": "boolean",
"description": "Enable external access from outside the cluster",
"default": false
},
"size": {
"type": "string",
"description": "Persistent Volume size",
"default": "10Gi"
},
"haproxy": {
"type": "object",
"properties": {
"replicas": {
"type": "number",
"description": "Number of HAProxy replicas",
"default": 2
}
}
},
"nginx": {
"type": "object",
"properties": {
"replicas": {
"type": "number",
"description": "Number of Nginx replicas",
"default": 2
}
}
},
"endpoints": {
"type": "array",
"description": "Endpoints configuration",
"default": [],
"items": {}
}
}
}

View File

@@ -1,28 +1,9 @@
## @section Common parameters
## @param external Enable external access from outside the cluster
## @param size Persistent Volume size
## @param haproxy.replicas Number of HAProxy replicas
## @param nginx.replicas Number of Nginx replicas
##
external: false
size: 10Gi
haproxy:
replicas: 2
nginx:
replicas: 2
## @section Configuration parameters
## @param endpoints Endpoints configuration
## Example:
## endpoints:
## - 10.100.3.1:80
## - 10.100.3.11:80
## - 10.100.3.2:80
## - 10.100.3.12:80
## - 10.100.3.3:80
## - 10.100.3.13:80
##
endpoints: []
endpoints:
- 10.100.3.1:80
- 10.100.3.11:80
- 10.100.3.2:80
- 10.100.3.12:80
- 10.100.3.3:80
- 10.100.3.13:80

View File

@@ -1,3 +0,0 @@
.helmignore
/logos
/Makefile

View File

@@ -1,25 +0,0 @@
apiVersion: v2
name: kafka
description: Managed Kafka service
icon: /logos/kafka.svg
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.2.2
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "3.7.0"

View File

@@ -1,2 +0,0 @@
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md

View File

@@ -1,19 +0,0 @@
# Managed Kafka Service
## Parameters
### Common parameters
| Name | Description | Value |
| -------------------- | ----------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `kafka.size` | Persistent Volume size for Kafka | `10Gi` |
| `kafka.replicas` | Number of Kafka replicas | `3` |
| `zookeeper.size` | Persistent Volume size for ZooKeeper | `5Gi` |
| `zookeeper.replicas` | Number of ZooKeeper replicas | `3` |
### Configuration parameters
| Name | Description | Value |
| -------- | -------------------- | ----- |
| `topics` | Topics configuration | `[]` |

View File

@@ -1 +0,0 @@
<svg width="154" height="250" viewBox="0 0 256 416" xmlns="http://www.w3.org/2000/svg" preserveAspectRatio="xMidYMid"><path d="M201.816 230.216c-16.186 0-30.697 7.171-40.634 18.461l-25.463-18.026c2.703-7.442 4.255-15.433 4.255-23.797 0-8.219-1.498-16.076-4.112-23.408l25.406-17.835c9.936 11.233 24.409 18.365 40.548 18.365 29.875 0 54.184-24.305 54.184-54.184 0-29.879-24.309-54.184-54.184-54.184-29.875 0-54.184 24.305-54.184 54.184 0 5.348.808 10.505 2.258 15.389l-25.423 17.844c-10.62-13.175-25.911-22.374-43.333-25.182v-30.64c24.544-5.155 43.037-26.962 43.037-53.019C124.171 24.305 99.862 0 69.987 0 40.112 0 15.803 24.305 15.803 54.184c0 25.708 18.014 47.246 42.067 52.769v31.038C25.044 143.753 0 172.401 0 206.854c0 34.621 25.292 63.374 58.355 68.94v32.774c-24.299 5.341-42.552 27.011-42.552 52.894 0 29.879 24.309 54.184 54.184 54.184 29.875 0 54.184-24.305 54.184-54.184 0-25.883-18.253-47.553-42.552-52.894v-32.775a69.965 69.965 0 0 0 42.6-24.776l25.633 18.143c-1.423 4.84-2.22 9.946-2.22 15.24 0 29.879 24.309 54.184 54.184 54.184 29.875 0 54.184-24.305 54.184-54.184 0-29.879-24.309-54.184-54.184-54.184zm0-126.695c14.487 0 26.27 11.788 26.27 26.271s-11.783 26.27-26.27 26.27-26.27-11.787-26.27-26.27c0-14.483 11.783-26.271 26.27-26.271zm-158.1-49.337c0-14.483 11.784-26.27 26.271-26.27s26.27 11.787 26.27 26.27c0 14.483-11.783 26.27-26.27 26.27s-26.271-11.787-26.271-26.27zm52.541 307.278c0 14.483-11.783 26.27-26.27 26.27s-26.271-11.787-26.271-26.27c0-14.483 11.784-26.27 26.271-26.27s26.27 11.787 26.27 26.27zm-26.272-117.97c-20.205 0-36.642-16.434-36.642-36.638 0-20.205 16.437-36.642 36.642-36.642 20.204 0 36.641 16.437 36.641 36.642 0 20.204-16.437 36.638-36.641 36.638zm131.831 67.179c-14.487 0-26.27-11.788-26.27-26.271s11.783-26.27 26.27-26.27 26.27 11.787 26.27 26.27c0 14.483-11.783 26.271-26.27 26.271z" style="fill:#231f20"/></svg>


View File

@@ -1,67 +0,0 @@
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
name: {{ .Release.Name }}
labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
kafka:
replicas: {{ .Values.kafka.replicas }}
listeners:
- name: plain
port: 9092
type: internal
tls: false
- name: tls
port: 9093
type: internal
tls: true
- name: external
port: 9094
{{- if .Values.external }}
type: loadbalancer
{{- else }}
type: internal
{{- end }}
tls: false
config:
{{- if eq (int .Values.kafka.replicas) 1 }}
offsets.topic.replication.factor: 1
transaction.state.log.replication.factor: 1
transaction.state.log.min.isr: 1
default.replication.factor: 1
min.insync.replicas: 1
{{- else if eq (int .Values.kafka.replicas) 2 }}
offsets.topic.replication.factor: 2
transaction.state.log.replication.factor: 2
transaction.state.log.min.isr: 2
default.replication.factor: 2
min.insync.replicas: 2
{{- else }}
offsets.topic.replication.factor: 3
transaction.state.log.replication.factor: 3
transaction.state.log.min.isr: 2
default.replication.factor: 3
min.insync.replicas: 2
{{- end }}
storage:
type: jbod
volumes:
- id: 0
type: persistent-claim
{{- with .Values.kafka.size }}
size: {{ . }}
{{- end }}
deleteClaim: true
zookeeper:
replicas: {{ .Values.zookeeper.replicas }}
storage:
type: persistent-claim
{{- with .Values.zookeeper.size }}
size: {{ . }}
{{- end }}
deleteClaim: false
entityOperator:
topicOperator: {}
userOperator: {}

View File

@@ -1,21 +0,0 @@
{{- range $topic := .Values.topics }}
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
name: "{{ $.Release.Name }}-{{ kebabcase $topic.name }}"
labels:
strimzi.io/cluster: "{{ $.Release.Name }}"
spec:
topicName: "{{ $topic.name }}"
{{- with $topic.partitions }}
partitions: {{ . }}
{{- end }}
{{- with $topic.replicas }}
replicas: {{ . }}
{{- end }}
{{- with $topic.config }}
config:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
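As a sketch of the rendered output: with a hypothetical release named `my-kafka` and the `Results` topic from the values example further below, the template above would produce roughly the following resource:

```
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: "my-kafka-results"   # "<release name>-<kebab-cased topic name>"
  labels:
    strimzi.io/cluster: "my-kafka"
spec:
  topicName: "Results"
  partitions: 1
  replicas: 3
  config:
    min.insync.replicas: 2
```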

View File

@@ -1,47 +0,0 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"external": {
"type": "boolean",
"description": "Enable external access from outside the cluster",
"default": false
},
"kafka": {
"type": "object",
"properties": {
"size": {
"type": "string",
"description": "Persistent Volume size for Kafka",
"default": "10Gi"
},
"replicas": {
"type": "number",
"description": "Number of Kafka replicas",
"default": 3
}
}
},
"zookeeper": {
"type": "object",
"properties": {
"size": {
"type": "string",
"description": "Persistent Volume size for ZooKeeper",
"default": "5Gi"
},
"replicas": {
"type": "number",
"description": "Number of ZooKeeper replicas",
"default": 3
}
}
},
"topics": {
"type": "array",
"description": "Topics configuration",
"default": [],
"items": {}
}
}
}

View File

@@ -1,37 +0,0 @@
## @section Common parameters
## @param external Enable external access from outside the cluster
## @param kafka.size Persistent Volume size for Kafka
## @param kafka.replicas Number of Kafka replicas
## @param zookeeper.size Persistent Volume size for ZooKeeper
## @param zookeeper.replicas Number of ZooKeeper replicas
##
external: false
kafka:
size: 10Gi
replicas: 3
zookeeper:
size: 5Gi
replicas: 3
## @section Configuration parameters
## @param topics Topics configuration
## Example:
## topics:
## - name: Results
## partitions: 1
## replicas: 3
## config:
## min.insync.replicas: 2
## - name: Orders
## config:
## cleanup.policy: compact
## segment.ms: 3600000
## max.compaction.lag.ms: 5400000
## min.insync.replicas: 2
## partitions: 1
## replicas: 3
##
topics: []

View File

@@ -1,3 +1,23 @@
.helmignore
/logos
/Makefile
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -1,7 +1,7 @@
apiVersion: v2
name: kubernetes
description: Managed Kubernetes service
icon: /logos/kubernetes.svg
icon: https://upload.wikimedia.org/wikipedia/commons/thumb/3/39/Kubernetes_logo_without_workmark.svg/723px-Kubernetes_logo_without_workmark.svg.png
# A chart can be either an 'application' or a 'library' chart.
#
@@ -16,10 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.8.1
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.30.1"
appVersion: "1.16.0"

View File

@@ -1,20 +1,19 @@
UBUNTU_CONTAINER_DISK_TAG = v1.30.1
include ../../../scripts/common-envs.mk
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md
PUSH := 1
LOAD := 0
REGISTRY := ghcr.io/aenix-io/cozystack
TAG := v0.0.2
UBUNTU_CONTAINER_DISK_TAG = v1.29.1
image: image-ubuntu-container-disk
image-ubuntu-container-disk:
docker buildx build --platform linux/amd64 --build-arg ARCH=amd64 images/ubuntu-container-disk \
--provenance false \
--tag $(REGISTRY)/ubuntu-container-disk:$(call settag,$(UBUNTU_CONTAINER_DISK_TAG)) \
--tag $(REGISTRY)/ubuntu-container-disk:$(call settag,$(UBUNTU_CONTAINER_DISK_TAG)-$(TAG)) \
--cache-from type=registry,ref=$(REGISTRY)/ubuntu-container-disk:latest \
--tag $(REGISTRY)/ubuntu-container-disk:$(UBUNTU_CONTAINER_DISK_TAG) \
--tag $(REGISTRY)/ubuntu-container-disk:$(UBUNTU_CONTAINER_DISK_TAG)-$(TAG) \
--cache-from type=registry,ref=$(REGISTRY)/ubuntu-container-disk:$(UBUNTU_CONTAINER_DISK_TAG) \
--cache-to type=inline \
--metadata-file images/ubuntu-container-disk.json \
--push=$(PUSH) \
--load=$(LOAD)
echo "$(REGISTRY)/ubuntu-container-disk:$(call settag,$(UBUNTU_CONTAINER_DISK_TAG))" > images/ubuntu-container-disk.tag
echo "$(REGISTRY)/ubuntu-container-disk:$(UBUNTU_CONTAINER_DISK_TAG)" > images/ubuntu-container-disk.tag

View File

@@ -26,23 +26,3 @@ How to access to deployed cluster:
```
kubectl get secret -n <namespace> kubernetes-<clusterName>-admin-kubeconfig -o go-template='{{ printf "%s\n" (index .data "super-admin.conf" | base64decode) }}' > test
```
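Once extracted, the kubeconfig can be used directly with `kubectl`. A quick sanity check, assuming the file was saved as `test` as in the command above:

```
# Verify the tenant cluster answers using the extracted credentials
kubectl --kubeconfig=test get nodes
```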
## Parameters
### Common parameters
| Name | Description | Value |
| ----------------------- | -------------------------------------------------------------------------------------------------------------------------------------- | ----- |
| `host` | The hostname used to access the Kubernetes cluster externally (defaults to using the cluster name as a subdomain for the tenant host). | `""` |
| `controlPlane.replicas` | Number of replicas for Kubernetes control-plane components | `2` |
| `nodeGroups` | nodeGroups configuration | `{}` |
### Cluster Addons
| Name | Description | Value |
| ----------------------------- | ---------------------------------------------------------------------------------- | ------- |
| `addons.certManager.enabled` | Enables the cert-manager | `false` |
| `addons.ingressNginx.enabled` | Enable Ingress-NGINX controller (expects nodes with the 'ingress-nginx' role) | `false` |
| `addons.ingressNginx.hosts` | List of domain names that should be passed through to the cluster by the upper cluster | `[]` |
| `addons.fluxcd.enabled` | Enables Flux CD | `false` |

View File

@@ -1,48 +1,4 @@
{
"buildx.build.provenance": {
"buildType": "https://mobyproject.org/buildkit@v1",
"materials": [
{
"uri": "pkg:docker/ubuntu@22.04?platform=linux%2Famd64",
"digest": {
"sha256": "340d9b015b194dc6e2a13938944e0d016e57b9679963fdeb9ce021daac430221"
}
}
],
"invocation": {
"configSource": {
"entryPoint": "Dockerfile"
},
"parameters": {
"frontend": "dockerfile.v0",
"args": {
"build-arg:ARCH": "amd64"
},
"locals": [
{
"name": "context"
},
{
"name": "dockerfile"
}
]
},
"environment": {
"platform": "linux/amd64"
}
}
},
"buildx.build.ref": "cozystack/cozystack0/xkanpm0dojuj7v0lo951qocfb",
"containerimage.config.digest": "sha256:c144c5f12a47af7880ee5f056b14177c07b585b8ab1e68b7e7900e1c923083cf",
"containerimage.descriptor": {
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"digest": "sha256:81caf89efe252ae2ca1990d08a3a314552d70ff36bcd4022b173c7150fbec805",
"size": 506,
"platform": {
"architecture": "amd64",
"os": "linux"
}
},
"containerimage.digest": "sha256:81caf89efe252ae2ca1990d08a3a314552d70ff36bcd4022b173c7150fbec805",
"image.name": "ghcr.io/aenix-io/cozystack/ubuntu-container-disk:v1.30.1,ghcr.io/aenix-io/cozystack/ubuntu-container-disk:v1.30.1-v0.10.1"
"containerimage.config.digest": "sha256:e982cfa2320d3139ed311ae44bcc5ea18db7e4e76d2746e0af04c516288ff0f1",
"containerimage.digest": "sha256:34f6aba5b5a2afbb46bbb891ef4ddc0855c2ffe4f9e5a99e8e553286ddd2c070"
}

View File

@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/ubuntu-container-disk:v1.30.1
ghcr.io/aenix-io/cozystack/ubuntu-container-disk:v1.29.1

View File

@@ -26,8 +26,8 @@ RUN qemu-img resize image.img 5G \
&& guestfish --remote sh "curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg" \
&& guestfish --remote sh 'echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list' \
# kubernetes repo
&& guestfish --remote sh "curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg" \
&& guestfish --remote sh "echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list" \
&& guestfish --remote sh "curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg" \
&& guestfish --remote sh "echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list" \
# install containerd
&& guestfish --remote command "apt-get update -y" \
&& guestfish --remote command "apt-get install -y containerd.io" \

File diff suppressed because one or more lines are too long


View File

@@ -14,14 +14,7 @@ spec:
metadata:
labels:
app: {{ .Release.Name }}-cluster-autoscaler
policy.cozystack.io/allow-to-apiserver: "true"
spec:
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: "NoSchedule"
containers:
- image: ghcr.io/kvaps/test:cluster-autoscaller
name: cluster-autoscaler

View File

@@ -2,54 +2,6 @@
{{- $etcd := index $myNS.metadata.annotations "namespace.cozystack.io/etcd" }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- $host := index $myNS.metadata.annotations "namespace.cozystack.io/host" }}
{{- $kubevirtmachinetemplateNames := list }}
{{- define "kubevirtmachinetemplate" -}}
spec:
virtualMachineBootstrapCheck:
checkStrategy: ssh
virtualMachineTemplate:
metadata:
namespace: {{ $.Release.Namespace }}
labels:
{{- range .group.roles }}
node-role.kubernetes.io/{{ . }}: ""
{{- end }}
spec:
runStrategy: Always
template:
metadata:
labels:
{{- range .group.roles }}
node-role.kubernetes.io/{{ . }}: ""
{{- end }}
spec:
domain:
cpu:
threads: 1
cores: {{ .group.resources.cpu }}
sockets: 1
devices:
disks:
- name: system
disk:
bus: virtio
pciAddress: 0000:07:00.0
- name: ephemeral
disk:
bus: virtio
pciAddress: 0000:08:00.0
networkInterfaceMultiqueue: true
memory:
guest: {{ .group.resources.memory }}
evictionStrategy: External
volumes:
- name: system
containerDisk:
image: "{{ $.Files.Get "images/ubuntu-container-disk.tag" | trim }}@{{ index ($.Files.Get "images/ubuntu-container-disk.json" | fromJson) "containerimage.digest" }}"
- name: ephemeral
emptyDisk:
capacity: {{ .group.ephemeralStorage | default "20Gi" }}
{{- end }}
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
@@ -87,9 +39,7 @@ metadata:
spec:
dataStoreName: "{{ $etcd }}"
addons:
coreDNS:
dnsServiceIPs:
- 10.95.0.10
coreDNS: {}
konnectivity: {}
kubelet:
cgroupfs: systemd
@@ -104,11 +54,8 @@ spec:
hostname: {{ .Values.host | default (printf "%s.%s" .Release.Name $host) }}:443
className: "{{ $ingress }}"
deployment:
podAdditionalMetadata:
labels:
policy.cozystack.io/allow-to-etcd: "true"
replicas: 2
version: 1.30.1
version: 1.29.0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: KubevirtCluster
@@ -117,120 +64,87 @@ metadata:
cluster.x-k8s.io/managed-by: kamaji
name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
{{- range $groupName, $group := .Values.nodeGroups }}
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
name: {{ $.Release.Name }}-{{ $groupName }}
namespace: {{ $.Release.Namespace }}
name: {{ .Release.Name }}-md-0
namespace: {{ .Release.Namespace }}
spec:
template:
spec:
diskSetup:
filesystems:
- device: /dev/vdb
filesystem: xfs
label: ephemeral
partition: "none"
mounts:
- ["LABEL=ephemeral", "/ephemeral"]
- ["/ephemeral/kubelet", "/var/lib/kubelet", "none", "bind,nofail"]
- ["/ephemeral/containerd", "/var/lib/containerd", "none", "bind,nofail"]
preKubeadmCommands:
- sed -i 's|root:x:|root::|' /etc/passwd
- systemctl stop containerd.service
- mkdir -p /ephemeral/kubelet /ephemeral/containerd
- mount -o bind /ephemeral/kubelet /var/lib/kubelet
- mount -o bind /ephemeral/containerd /var/lib/containerd
- systemctl start containerd.service
joinConfiguration:
nodeRegistration:
kubeletExtraArgs: {}
discovery:
bootstrapToken:
apiServerEndpoint: {{ $.Release.Name }}.{{ $.Release.Namespace }}.svc:6443
apiServerEndpoint: {{ .Release.Name }}.{{ .Release.Namespace }}.svc:6443
initConfiguration:
skipPhases:
- addon/kube-proxy
---
{{- $context := deepCopy $ }}
{{- $_ := set $context "group" $group }}
{{- $kubevirtmachinetemplate := include "kubevirtmachinetemplate" $context }}
{{- $kubevirtmachinetemplateHash := $kubevirtmachinetemplate | sha256sum | trunc 6 }}
{{- $kubevirtmachinetemplateName := printf "%s-%s-%s" $.Release.Name $groupName $kubevirtmachinetemplateHash }}
{{- $kubevirtmachinetemplateNames = append $kubevirtmachinetemplateNames $kubevirtmachinetemplateName }}
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: KubevirtMachineTemplate
metadata:
name: {{ $.Release.Name }}-{{ $groupName }}-{{ $kubevirtmachinetemplateHash }}
namespace: {{ $.Release.Namespace }}
name: {{ .Release.Name }}-md-0
namespace: {{ .Release.Namespace }}
spec:
template:
{{- $kubevirtmachinetemplate | nindent 4 }}
spec:
virtualMachineBootstrapCheck:
checkStrategy: ssh
virtualMachineTemplate:
metadata:
namespace: {{ .Release.Namespace }}
spec:
runStrategy: Always
template:
spec:
domain:
cpu:
threads: 1
cores: 2
sockets: 1
devices:
disks:
- disk:
bus: virtio
name: containervolume
networkInterfaceMultiqueue: true
memory:
guest: 1024Mi
evictionStrategy: External
volumes:
- containerDisk:
image: "{{ $.Files.Get "images/ubuntu-container-disk.tag" | trim }}@{{ index ($.Files.Get "images/ubuntu-container-disk.json" | fromJson) "containerimage.digest" }}"
name: containervolume
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
name: {{ $.Release.Name }}-{{ $groupName }}
namespace: {{ $.Release.Namespace }}
name: {{ .Release.Name }}-md-0
namespace: {{ .Release.Namespace }}
annotations:
cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "{{ $group.minReplicas }}"
cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "{{ $group.maxReplicas }}"
capacity.cluster-autoscaler.kubernetes.io/memory: "{{ $group.resources.memory }}"
capacity.cluster-autoscaler.kubernetes.io/cpu: "{{ $group.resources.cpu }}"
cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "2"
cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "0"
capacity.cluster-autoscaler.kubernetes.io/memory: "1024Mi"
capacity.cluster-autoscaler.kubernetes.io/cpu: "2"
spec:
clusterName: {{ $.Release.Name }}
clusterName: {{ .Release.Name }}
selector:
matchLabels: null
template:
metadata:
labels:
cluster.x-k8s.io/cluster-name: {{ $.Release.Name }}
cluster.x-k8s.io/deployment-name: {{ $.Release.Name }}-{{ $groupName }}
{{- range $group.roles }}
node-role.kubernetes.io/{{ . }}: ""
{{- end }}
spec:
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
name: {{ $.Release.Name }}-{{ $groupName }}
namespace: {{ $.Release.Namespace }}
clusterName: {{ $.Release.Name }}
name: {{ .Release.Name }}-md-0
namespace: default
clusterName: {{ .Release.Name }}
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: KubevirtMachineTemplate
name: {{ $.Release.Name }}-{{ $groupName }}-{{ $kubevirtmachinetemplateHash }}
name: {{ .Release.Name }}-md-0
namespace: default
version: v1.30.1
{{- end }}
---
{{- /*
We must preserve all previous KubevirtMachineTemplates until a MachineSet references them.
*/ -}}
{{- $mss := (lookup "cluster.x-k8s.io/v1beta1" "MachineSet" $.Release.Namespace "").items }}
{{- $oldKubevirtmachinetemplates := dict }}
{{- range $kmt := (lookup "infrastructure.cluster.x-k8s.io/v1alpha1" "KubevirtMachineTemplate" .Release.Namespace "").items }}
{{- range $or := $kmt.metadata.ownerReferences }}
{{- if and (eq $or.kind "Cluster") (eq $or.name $.Release.Name) }}
{{- range $ms := $mss }}
{{- if and (eq $ms.spec.template.spec.infrastructureRef.kind "KubevirtMachineTemplate") (eq $ms.spec.template.spec.infrastructureRef.name $kmt.metadata.name) }}
{{- if not (has $kmt.metadata.name $kubevirtmachinetemplateNames) }}
{{- $oldKubevirtmachinetemplates = merge $oldKubevirtmachinetemplates (dict $kmt.metadata.name $kmt) }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- range $oldKubevirtmachinetemplates }}
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: KubevirtMachineTemplate
metadata:
name: {{ .metadata.name }}
namespace: {{ .metadata.namespace }}
spec:
{{- .spec | toYaml | nindent 2 }}
{{- end }}
version: v1.23.10

View File

@@ -13,14 +13,15 @@ spec:
metadata:
labels:
app: {{ .Release.Name }}-kcsi-driver
policy.cozystack.io/allow-to-apiserver: "true"
spec:
serviceAccountName: {{ .Release.Name }}-kcsi
priorityClassName: system-cluster-critical
nodeSelector:
node-role.kubernetes.io/control-plane: ""
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/control-plane
- key: node-role.kubernetes.io/master
operator: Exists
effect: "NoSchedule"
containers:

View File

@@ -1,39 +0,0 @@
{{- if .Values.addons.certManager.enabled }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: {{ .Release.Name }}-cert-manager
labels:
cozystack.io/repository: system
coztstack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
releaseName: cert-manager
chart:
spec:
chart: cozy-cert-manager
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
kubeConfig:
secretRef:
name: {{ .Release.Name }}-kubeconfig
targetNamespace: cozy-cert-manager
storageNamespace: cozy-cert-manager
install:
createNamespace: true
remediation:
retries: -1
upgrade:
remediation:
retries: -1
dependsOn:
{{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" .Release.Namespace .Release.Name }}
- name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
{{- end }}
- name: {{ .Release.Name }}-cilium
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@@ -1,4 +1,4 @@
apiVersion: helm.toolkit.fluxcd.io/v2
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: {{ .Release.Name }}-cilium
@@ -6,7 +6,7 @@ metadata:
cozystack.io/repository: system
coztstack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
interval: 1m
releaseName: cilium
chart:
spec:
@@ -23,17 +23,10 @@ spec:
storageNamespace: cozy-cilium
install:
createNamespace: true
remediation:
retries: -1
upgrade:
remediation:
retries: -1
values:
cilium:
tunnel: disabled
autoDirectNodeRoutes: false
bpf:
masquerade: true
autoDirectNodeRoutes: true
cgroup:
autoMount:
enabled: true
@@ -45,11 +38,9 @@ spec:
chainingMode: ~
customConf: false
configMap: ""
routingMode: tunnel
routingMode: native
enableIPv4Masquerade: true
ipv4NativeRoutingCIDR: ""
ipv4NativeRoutingCIDR: "10.244.0.0/16"
dependsOn:
{{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" .Release.Namespace .Release.Name }}
- name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@@ -1,4 +1,4 @@
apiVersion: helm.toolkit.fluxcd.io/v2
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: {{ .Release.Name }}-csi
@@ -6,7 +6,7 @@ metadata:
cozystack.io/repository: system
coztstack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
interval: 1m
releaseName: csi
chart:
spec:
@@ -23,13 +23,6 @@ spec:
storageNamespace: cozy-csi
install:
createNamespace: true
remediation:
retries: -1
upgrade:
remediation:
retries: -1
dependsOn:
{{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" .Release.Namespace .Release.Name }}
- name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@@ -12,31 +12,19 @@ spec:
spec:
serviceAccountName: {{ .Release.Name }}-flux-teardown
restartPolicy: Never
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: "NoSchedule"
containers:
- name: kubectl
image: docker.io/clastix/kubectl:v1.30.1
image: docker.io/clastix/kubectl:v1.29.1
command:
- /bin/sh
- -c
- |
kubectl
--namespace={{ .Release.Namespace }}
patch
helmrelease
{{ .Release.Name }}-cilium
{{ .Release.Name }}-csi
{{ .Release.Name }}-cert-manager
{{ .Release.Name }}-ingress-nginx
{{ .Release.Name }}-fluxcd-operator
{{ .Release.Name }}-fluxcd
-p '{"spec": {"suspend": true}}'
--type=merge --field-manager=flux-client-side-apply || true
- kubectl
- --namespace={{ .Release.Namespace }}
- patch
- helmrelease
- {{ .Release.Name }}-cilium
- {{ .Release.Name }}-csi
- -p
- '{"spec": {"suspend": true}}'
- --type=merge
---
apiVersion: v1
kind: ServiceAccount
@@ -66,10 +54,6 @@ rules:
resourceNames:
- {{ .Release.Name }}-cilium
- {{ .Release.Name }}-csi
- {{ .Release.Name }}-cert-manager
- {{ .Release.Name }}-ingress-nginx
- {{ .Release.Name }}-fluxcd-operator
- {{ .Release.Name }}-fluxcd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding

View File

@@ -1,84 +0,0 @@
{{- if .Values.addons.fluxcd.enabled }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: {{ .Release.Name }}-fluxcd-operator
labels:
cozystack.io/repository: system
coztstack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
releaseName: fluxcd-operator
chart:
spec:
chart: cozy-fluxcd-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
kubeConfig:
secretRef:
name: {{ .Release.Name }}-kubeconfig
targetNamespace: cozy-fluxcd
storageNamespace: cozy-fluxcd
install:
createNamespace: true
remediation:
retries: -1
upgrade:
remediation:
retries: -1
values:
flux-operator:
fullnameOverride: flux-operator
tolerations: []
hostNetwork: false
dependsOn:
{{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" .Release.Namespace .Release.Name }}
- name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
{{- end }}
- name: {{ .Release.Name }}-cilium
namespace: {{ .Release.Namespace }}
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: {{ .Release.Name }}-fluxcd
labels:
cozystack.io/repository: system
coztstack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
releaseName: fluxcd
chart:
spec:
chart: cozy-fluxcd
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
kubeConfig:
secretRef:
name: {{ .Release.Name }}-kubeconfig
targetNamespace: cozy-fluxcd
storageNamespace: cozy-fluxcd
install:
createNamespace: true
remediation:
retries: -1
upgrade:
remediation:
retries: -1
dependsOn:
{{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" .Release.Namespace .Release.Name }}
- name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
{{- end }}
- name: {{ .Release.Name }}-cilium
namespace: {{ .Release.Namespace }}
- name: {{ .Release.Name }}-fluxcd-operator
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@@ -1,49 +0,0 @@
{{- if .Values.addons.ingressNginx.enabled }}
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: {{ .Release.Name }}-ingress-nginx
labels:
cozystack.io/repository: system
coztstack.io/target-cluster-name: {{ .Release.Name }}
spec:
interval: 5m
releaseName: ingress-nginx
chart:
spec:
chart: cozy-ingress-nginx
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
kubeConfig:
secretRef:
name: {{ .Release.Name }}-kubeconfig
targetNamespace: cozy-ingress-nginx
storageNamespace: cozy-ingress-nginx
install:
createNamespace: true
remediation:
retries: -1
upgrade:
remediation:
retries: -1
values:
ingress-nginx:
fullnameOverride: ingress-nginx
controller:
kind: DaemonSet
hostNetwork: true
service:
enabled: false
nodeSelector:
node-role.kubernetes.io/ingress-nginx: ""
dependsOn:
{{- if lookup "helm.toolkit.fluxcd.io/v2" "HelmRelease" .Release.Namespace .Release.Name }}
- name: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
{{- end }}
- name: {{ .Release.Name }}-cilium
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@@ -1,59 +0,0 @@
{{- $myNS := lookup "v1" "Namespace" "" .Release.Namespace }}
{{- $ingress := index $myNS.metadata.annotations "namespace.cozystack.io/ingress" }}
{{- if .Values.addons.ingressNginx.hosts }}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Release.Name }}-ingress-nginx
annotations:
nginx.ingress.kubernetes.io/backend-protocol: AUTO_HTTP
nginx.ingress.kubernetes.io/configuration-snippet: |
if ($scheme = http) {
set $proxy_upstream_name "tenant-root-kubernetes-infra-ingress-nginx-80";
set $proxy_host $proxy_upstream_name;
set $service_port 80;
}
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
ingressClassName: "{{ $ingress }}"
rules:
{{- range .Values.addons.ingressNginx.hosts }}
- host: {{ . | quote }}
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: {{ $.Release.Name }}-ingress-nginx
port:
number: 443
- path: /
pathType: ImplementationSpecific
backend:
service:
name: {{ $.Release.Name }}-ingress-nginx
port:
number: 80
{{- end }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}-ingress-nginx
spec:
ports:
- appProtocol: http
name: http
port: 80
targetPort: 80
- appProtocol: https
name: https
port: 443
targetPort: 443
selector:
cluster.x-k8s.io/cluster-name: {{ .Release.Name }}
node-role.kubernetes.io/ingress-nginx: ""
{{- end }}

View File

@@ -13,14 +13,7 @@ spec:
metadata:
labels:
k8s-app: {{ .Release.Name }}-kccm
policy.cozystack.io/allow-to-apiserver: "true"
spec:
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: "NoSchedule"
containers:
- name: kubevirt-cloud-controller-manager
args:
@@ -51,4 +44,6 @@ spec:
- secret:
secretName: {{ .Release.Name }}-admin-kubeconfig
name: kubeconfig
tolerations:
- operator: Exists
serviceAccountName: {{ .Release.Name }}-kccm

View File

@@ -1,62 +1,11 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"host": {
"type": "string",
"description": "The hostname used to access the Kubernetes cluster externally (defaults to using the cluster name as a subdomain for the tenant host).",
"default": ""
},
"controlPlane": {
"type": "object",
"properties": {
"replicas": {
"type": "number",
"description": "Number of replicas for Kubernetes contorl-plane components",
"default": 2
}
}
},
"addons": {
"type": "object",
"properties": {
"certManager": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Enables the cert-manager",
"default": false
}
}
},
"ingressNginx": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Enable Ingress-NGINX controller (expect nodes with 'ingress-nginx' role)",
"default": false
},
"hosts": {
"type": "array",
"description": "List of domain names that should be passed through to the cluster by upper cluster",
"default": [],
"items": {}
}
}
},
"fluxcd": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Enables Flux CD",
"default": false
}
}
}
}
}
"$schema": "http://json-schema.org/schema#",
"type": "object",
"properties": {
"host": {
"type": "string",
"title": "Domain name for this kubernetes cluster",
"description": "This host will be used for all apps deployed in this tenant"
}
}
}
}

View File

@@ -1,52 +1 @@
## @section Common parameters
## @param host The hostname used to access the Kubernetes cluster externally (defaults to using the cluster name as a subdomain for the tenant host).
## @param controlPlane.replicas Number of replicas for Kubernetes control-plane components
##
host: ""
controlPlane:
replicas: 2
## @param nodeGroups [object] nodeGroups configuration
##
nodeGroups:
md0:
minReplicas: 0
maxReplicas: 10
resources:
cpu: 2
memory: 1024Mi
ephemeralStorage: 20Gi
roles:
- ingress-nginx
## @section Cluster Addons
##
addons:
## Cert-manager: automatically creates and manages SSL/TLS certificate
##
certManager:
## @param addons.certManager.enabled Enables the cert-manager
enabled: false
## Ingress-NGINX Controller
##
ingressNginx:
## @param addons.ingressNginx.enabled Enable Ingress-NGINX controller (expects nodes with the 'ingress-nginx' role)
##
enabled: false
## @param addons.ingressNginx.hosts List of domain names that should be passed through to the cluster by the upper cluster
## e.g:
## hosts:
## - example.org
## - foo.example.net
##
hosts: []
## Flux CD
##
fluxcd:
## @param addons.fluxcd.enabled Enables Flux CD
##
enabled: false

View File

@@ -1,3 +1,23 @@
.helmignore
/logos
/Makefile
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -1,7 +1,7 @@
apiVersion: v2
name: mysql
description: Managed MariaDB service
icon: /logos/mariadb.svg
icon: https://static-00.iconduck.com/assets.00/mariadb-icon-512x340-txozryr2.png
# A chart can be either an 'application' or a 'library' chart.
#
@@ -16,10 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.3.0
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "11.0.2"
appVersion: "1.16.0"

View File

@@ -1,2 +0,0 @@
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md

View File

@@ -62,34 +62,3 @@ more details:
mysqldump -h <slave> -P 3306 -u<user> -p<password> --column-statistics=0 <database> <table> > ~/tmp/fix-table.sql
mysql -h <master> -P 3306 -u<user> -p<password> <database> < ~/tmp/fix-table.sql
```
## Parameters
### Common parameters
| Name | Description | Value |
| ---------- | ----------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `size` | Persistent Volume size | `10Gi` |
| `replicas` | Number of MariaDB replicas | `2` |
### Configuration parameters
| Name | Description | Value |
| ----------- | ----------------------- | ----- |
| `users` | Users configuration | `{}` |
| `databases` | Databases configuration | `[]` |
### Backup parameters
| Name | Description | Value |
| ------------------------ | ---------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled` | Enable periodic backups | `false` |
| `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/postgres-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |

View File

@@ -1,12 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!-- Uploaded to: SVG Repo, www.svgrepo.com, Generator: SVG Repo Mixer Tools -->
<svg width="800px" height="800px" viewBox="0 -43 256 256" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" preserveAspectRatio="xMidYMid">
<g>
<path d="M250.382523,0.00447241672 C246.426131,0.130891567 247.677353,1.27087056 239.128415,3.37469592 C230.495553,5.49917829 219.950359,4.84773528 210.654095,8.74649903 C182.903099,20.3847485 177.335232,60.1626339 152.106938,74.4118517 C133.249415,85.0635193 114.223916,85.9130759 97.1188786,91.2730771 C85.8778244,94.7980074 73.5811418,102.026905 63.3964279,110.803626 C55.49096,117.618586 55.2845466,123.610697 47.0245784,132.158212 C38.1894743,141.300822 11.9101646,132.312705 0,146.305625 C3.83670733,150.185042 5.51875114,151.271649 13.0796841,150.265122 C11.5142932,153.232113 2.28663486,155.732479 4.09296236,160.097129 C5.99360595,164.689675 28.3022154,167.802917 48.5816837,155.559279 C58.0261053,149.857249 65.5486285,141.638595 80.2576532,139.676806 C99.2917078,137.139881 121.218611,141.30404 143.253683,144.481588 C139.986431,154.22355 133.426672,160.702176 128.172006,168.461009 C126.544787,170.213508 131.440311,170.409956 137.025262,169.350783 C147.071883,166.866533 154.312169,164.86632 161.894457,160.453039 C171.209327,155.030397 172.62088,141.127864 184.04984,138.119701 C190.417778,147.907219 207.737102,150.219223 218.48411,142.390618 C209.053925,139.721295 206.447626,119.648695 209.630855,110.803626 C212.646122,102.431204 215.625486,89.0383196 218.662065,77.9709494 C221.922199,66.0849867 223.124932,51.1038191 227.070434,45.0492956 C233.00651,35.9401552 239.565643,32.81205 245.260156,27.675489 C250.954656,22.538928 256.166954,17.538894 255.995904,5.78538669 C255.940809,1.99964564 253.983391,-0.11060033 250.382523,0.00447241672 L250.382523,0.00447241672 Z" fill="#002B64">
</path>
<path d="M241.905484,6.96809574 C242.853676,10.2001831 244.337002,11.6835082 250.750076,12.2768382 C249.813239,20.407447 244.389521,24.8545834 238.308598,29.1214497 C232.957272,32.8744751 227.094944,36.4883945 223.327724,42.3507224 C219.46824,48.3564147 217.01827,68.9100487 211.033869,89.2081817 C205.861394,106.746904 198.050161,124.088323 184.409248,131.686638 C182.98412,128.099688 184.590937,121.479374 181.756296,119.303358 C179.922367,124.53403 177.848551,129.524816 175.419872,134.163578 C167.415594,149.462409 155.564607,160.917369 135.760443,164.414894 C145.157201,151.699462 154.142319,138.568131 154.336783,116.651825 C147.723566,118.082631 147.864092,133.703676 141.069185,137.879698 C136.712894,138.353794 132.299824,138.350955 127.858366,138.084099 C109.618435,136.991122 90.9072468,131.509207 73.84404,136.984025 C62.2258429,140.71292 52.7240456,149.509251 42.8858386,153.776117 C31.323,158.791033 22.5664139,160.853494 8.16751449,158.791033 C6.33926307,156.328288 18.7055102,153.150139 17.9659769,147.803072 C12.3307609,147.179933 9.058929,148.545444 4.16040754,146.319747 C4.70121793,145.323293 5.49610985,144.492915 6.49682201,143.801643 C15.4748424,137.587291 40.9766785,142.333932 47.8013935,135.632709 C52.0143206,131.499271 54.7779895,127.172788 57.6396004,122.966958 C60.4146249,118.886039 63.2833331,114.918677 67.6538192,111.343083 C69.2677337,110.022994 71.0221737,108.71852 72.8844919,107.445273 C80.3323453,102.348029 89.5459944,97.7248808 98.6134401,94.5382159 C110.965493,90.1961188 123.482202,89.8384174 136.647599,84.8078871 C144.781047,81.6992919 153.625639,77.8596801 160.835025,72.4870623 C162.546881,71.2095575 164.166473,69.8483051 165.663993,68.3891106 C186.250274,48.3209285 190.331193,12.9212684 222.449085,9.62246697 C226.3327,9.22360156 229.512267,9.3527715 232.406525,9.26476561 C235.742233,9.16540412 238.694688,8.77789431 241.905484,6.96809574 Z M202.75118,120.267107 C203.134432,126.40197 206.695831,138.573752 209.839913,141.531886 C203.682339,143.029405 193.074791,140.555304 190.353705,136.211788 C191.751863,129.940658 199.027963,124.2075 202.75118,120.267107 Z" fill="#C49A6C" fill-rule="nonzero">
</path>
<path d="M244.218787,13.8370641 C242.980829,16.4335799 240.610981,19.7812981 240.610981,26.3910072 C240.60081,27.5258023 239.749351,28.3031588 239.734821,26.5537435 C239.798753,20.0936937 241.508937,17.3010225 243.32519,13.6307377 C244.169385,12.12688 244.677936,12.7473121 244.218787,13.8370641 Z M242.972111,12.8591933 C241.511843,15.3365629 237.995576,19.8554012 237.414375,26.4404093 C237.306853,27.5693924 236.388555,28.2682867 236.528044,26.5232305 C237.161553,20.0951467 239.97166,16.0717822 242.104668,12.5744048 C243.072368,11.1519152 243.527158,11.8144844 242.972111,12.8591933 Z M241.835862,11.5631149 C240.172174,13.9082613 234.759739,19.3352263 233.62785,25.8490372 C233.42443,26.9634903 232.450918,27.5853754 232.73716,25.8577553 C233.90828,19.5037746 238.573871,14.5098044 240.993121,11.2071293 C242.077061,9.86891382 242.473731,10.5678081 241.835862,11.5631149 Z M240.821667,10.1173773 L240.274318,10.6995682 C237.854262,13.2941372 232.232203,19.6224619 230.358594,25.4145894 C229.99825,26.4898114 228.947729,26.9693023 229.475169,25.2983492 C231.526809,19.17249 237.177536,12.5744048 240.037045,9.64515141 C241.299704,8.47257825 241.593211,9.22087463 240.821667,10.1173773 Z M211.771784,23.2321794 C213.025725,17.8458985 217.214732,15.391777 224.446326,15.9904141 C226.191383,24.0298779 216.425752,27.2729799 211.771784,23.2321794 Z" fill="#002B64">


View File

@@ -1,7 +1,7 @@
{{- range $name := .Values.databases }}
{{ $dnsName := replace "_" "-" $name }}
---
apiVersion: k8s.mariadb.com/v1alpha1
apiVersion: mariadb.mmontes.io/v1alpha1
kind: Database
metadata:
name: {{ $.Release.Name }}-{{ $dnsName }}
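
For illustration: assuming a release named "mydb" and databases: [wordpress_1], the loop above renders one Database object per entry, with underscores mapped to DNS-safe dashes. The API group differs on the two sides of this diff (mariadb-operator renamed it from mariadb.mmontes.io to k8s.mariadb.com), so the sketch below is hypothetical:

---
# Hypothetical rendered output for release "mydb" and databases: [wordpress_1]
apiVersion: k8s.mariadb.com/v1alpha1  # mariadb.mmontes.io/v1alpha1 on the older operator
kind: Database
metadata:
  name: mydb-wordpress-1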

View File

@@ -1,20 +1,18 @@
---
apiVersion: k8s.mariadb.com/v1alpha1
apiVersion: mariadb.mmontes.io/v1alpha1
kind: MariaDB
metadata:
name: {{ .Release.Name }}
spec:
{{- if (and .Values.users.root .Values.users.root.password) }}
rootPasswordSecretKeyRef:
name: {{ .Release.Name }}
key: root-password
{{- end }}
image: "mariadb:11.0.2"
port: 3306
replicas: {{ .Values.replicas }}
replicas: 2
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
@@ -30,18 +28,15 @@ spec:
- {{ .Release.Name }}
topologyKey: "kubernetes.io/hostname"
{{- if gt (int .Values.replicas) 1 }}
replication:
enabled: true
#primary:
# podIndex: 0
# automaticFailover: true
{{- end }}
metrics:
enabled: true
exporter:
image: prom/mysqld-exporter:v0.15.1
image: prom/mysqld-exporter:v0.14.0
resources:
requests:
cpu: 50m
@@ -58,10 +53,14 @@ spec:
name: {{ .Release.Name }}-my-cnf
key: config
storage:
size: {{ .Values.size }}
resizeInUseVolumes: true
waitForVolumeResize: true
volumeClaimTemplate:
resources:
requests:
storage: {{ .Values.size }}
accessModes:
- ReadWriteOnce
{{- if .Values.external }}
primaryService:
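
Read together with the replication block above, a hypothetical values override for a highly available instance might look like the following sketch (field names follow the values.schema.json shown later in this diff):

external: true   # expose the primary service outside the cluster
size: 20Gi       # persistent volume size
replicas: 3      # any value greater than 1 enables the replication block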

View File

@@ -2,7 +2,7 @@
{{ if not (eq $name "root") }}
{{ $dnsName := replace "_" "-" $name }}
---
apiVersion: k8s.mariadb.com/v1alpha1
apiVersion: mariadb.mmontes.io/v1alpha1
kind: User
metadata:
name: {{ $.Release.Name }}-{{ $dnsName }}
@@ -15,7 +15,7 @@ spec:
key: {{ $name }}-password
maxUserConnections: {{ $u.maxUserConnections }}
---
apiVersion: k8s.mariadb.com/v1alpha1
apiVersion: mariadb.mmontes.io/v1alpha1
kind: Grant
metadata:
name: {{ $.Release.Name }}-{{ $dnsName }}
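
As a sketch, for a release named "mydb" and a non-root user "user1", this template yields a paired User and Grant object (again, the API group depends on which mariadb-operator version is in use):

---
# Hypothetical rendered output for release "mydb" and user "user1"
apiVersion: k8s.mariadb.com/v1alpha1
kind: User
metadata:
  name: mydb-user1
---
apiVersion: k8s.mariadb.com/v1alpha1
kind: Grant
metadata:
  name: mydb-user1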

View File

@@ -1,72 +0,0 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"external": {
"type": "boolean",
"description": "Enable external access from outside the cluster",
"default": false
},
"size": {
"type": "string",
"description": "Persistent Volume size",
"default": "10Gi"
},
"replicas": {
"type": "number",
"description": "Number of MariaDB replicas",
"default": 2
},
"databases": {
"type": "array",
"description": "Databases configuration",
"default": [],
"items": {}
},
"backup": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Enable pereiodic backups",
"default": false
},
"s3Region": {
"type": "string",
"description": "The AWS S3 region where backups are stored",
"default": "us-east-1"
},
"s3Bucket": {
"type": "string",
"description": "The S3 bucket used for storing backups",
"default": "s3.example.org/postgres-backups"
},
"schedule": {
"type": "string",
"description": "Cron schedule for automated backups",
"default": "0 2 * * *"
},
"cleanupStrategy": {
"type": "string",
"description": "The strategy for cleaning up old backups",
"default": "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
},
"s3AccessKey": {
"type": "string",
"description": "The access key for S3, used for authentication",
"default": "oobaiRus9pah8PhohL1ThaeTa4UVa7gu"
},
"s3SecretKey": {
"type": "string",
"description": "The secret key for S3, used for authentication",
"default": "ju3eum4dekeich9ahM1te8waeGai0oog"
},
"resticPassword": {
"type": "string",
"description": "The password for Restic backup encryption",
"default": "ChaXoveekoh6eigh4siesheeda2quai0"
}
}
}
}
}
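
The backup section of this schema maps one-to-one onto chart values. A hypothetical override is sketched below; the bucket and credentials are placeholders, and the literal defaults baked into the schema should never be reused in production:

backup:
  enabled: true
  s3Region: us-east-1
  s3Bucket: s3.example.org/mariadb-backups  # placeholder bucket
  schedule: "0 2 * * *"
  cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
  s3AccessKey: "<access-key>"
  s3SecretKey: "<secret-key>"
  resticPassword: "<restic-password>"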

View File

@@ -1,50 +1,24 @@
## @section Common parameters
## @param external Enable external access from outside the cluster
## @param size Persistent Volume size
## @param replicas Number of MariaDB replicas
##
external: false
size: 10Gi
replicas: 2
## @section Configuration parameters
users:
root:
password: strongpassword
user1:
privileges: ['ALL']
maxUserConnections: 1000
password: hackme
user2:
privileges: ['SELECT']
maxUserConnections: 1000
password: hackme
## @param users [object] Users configuration
## Example:
## users:
## root:
## password: strongpassword
## user1:
## privileges: ['ALL']
## maxUserConnections: 1000
## password: hackme
## user2:
## privileges: ['SELECT']
## maxUserConnections: 1000
## password: hackme
##
users: {}
databases:
- wordpress1
- wordpress2
- wordpress3
- wordpress4
## @param databases Databases configuration
## Example:
## databases:
## - wordpress1
## - wordpress2
## - wordpress3
## - wordpress4
databases: []
## @section Backup parameters
## @param backup.enabled Enable periodic backups
## @param backup.s3Region The AWS S3 region where backups are stored
## @param backup.s3Bucket The S3 bucket used for storing backups
## @param backup.schedule Cron schedule for automated backups
## @param backup.cleanupStrategy The strategy for cleaning up old backups
## @param backup.s3AccessKey The access key for S3, used for authentication
## @param backup.s3SecretKey The secret key for S3, used for authentication
## @param backup.resticPassword The password for Restic backup encryption
backup:
enabled: false
s3Region: us-east-1

View File

@@ -1,3 +0,0 @@
.helmignore
/logos
/Makefile

View File

@@ -1,25 +0,0 @@
apiVersion: v2
name: nats
description: Managed NATS service
icon: /logos/nats.svg
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.4.1"

View File

@@ -1,2 +0,0 @@
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md

View File

@@ -1,11 +0,0 @@
# Managed NATS Service
## Parameters
### Common parameters
| Name | Description | Value |
| ---------- | ----------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `replicas` | Number of NATS replicas                          | `3`     |

View File

@@ -1,76 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Generator: Adobe Illustrator 24.3.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<svg
version="1.0"
id="katman_1"
x="0px"
y="0px"
viewBox="0 0 440.79001 456.32996"
xml:space="preserve"
sodipodi:docname="NATS.io.svg"
width="440.79001"
height="456.32999"
inkscape:version="1.1.1 (c3084ef, 2021-09-22)"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg"><defs
id="defs843" /><sodipodi:namedview
id="namedview841"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageshadow="2"
inkscape:pageopacity="0.0"
inkscape:pagecheckerboard="0"
showgrid="false"
width="440.79px"
height="456.32999px"
inkscape:zoom="0.27371294"
inkscape:cx="524.2719"
inkscape:cy="823.85584"
inkscape:window-width="1312"
inkscape:window-height="969"
inkscape:window-x="0"
inkscape:window-y="25"
inkscape:window-maximized="0"
inkscape:current-layer="katman_1" />
<style
type="text/css"
id="style824">
.st0{fill:#32A574;}
.st1{fill:#2AAAE1;}
.st2{fill:#8EC044;}
.st3{fill:#385C93;}
.st4{fill:#FFFFFF;}
</style>
<path
class="st0"
d="M 220.4,0 H 440.79 V 178.67 H 220.4 Z"
id="path826" />
<path
class="st1"
d="M 0,0 H 220.39 V 178.67 H 0 Z"
id="path828" />
<path
class="st2"
d="M 220.4,178.83 H 440.79 V 357.5 H 220.4 Z"
id="path830" />
<path
class="st3"
d="M 0,178.83 H 220.39 V 357.5 H 0 Z"
id="path832" />
<path
class="st2"
d="m 188,356.52 107.82,99.81 v -99.81 z"
id="path834" />
<path
class="st3"
d="m 220.4,356.52 1.15,31.41 -34.52,-32.23 z"
id="path836" />
<path
class="st4"
d="M 311.7,231.03 V 83.12 h 52.69 V 274.39 H 284.54 L 123.37,123.86 V 274.55 H 70.52 V 83.12 h 82.63 z"
id="path838" />
</svg>


View File

@@ -1,43 +0,0 @@
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: {{ .Release.Name }}-system
spec:
chart:
spec:
chart: cozy-nats
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
version: '*'
interval: 1m0s
timeout: 5m0s
values:
nats:
fullnameOverride: {{ .Release.Name }}
config:
cluster:
enabled: true
replicas: {{ .Values.replicas }}
monitor:
enabled: true
jetstream:
enabled: true
fileStore:
enabled: true
pvc:
enabled: true
size: 10Gi
storageClassName: local
promExporter:
enabled: true
podMonitor:
enabled: true
{{- if .Values.external }}
service:
merge:
spec:
type: LoadBalancer
{{- end }}
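
This template wraps the upstream cozy-nats chart in a Flux HelmRelease and forwards only two tenant-facing values. A minimal sketch of the values a tenant might supply, matching the values.schema.json that follows:

# Hypothetical tenant-side values for the nats app
external: true   # adds the LoadBalancer service merge above
replicas: 3      # becomes nats.config.cluster.replicas in the HelmRelease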

View File

@@ -1,16 +0,0 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"external": {
"type": "boolean",
"description": "Enable external access from outside the cluster",
"default": false
},
"replicas": {
"type": "number",
"description": "Persistent Volume size for NATS",
"default": 3
}
}
}

View File

@@ -1,8 +0,0 @@
## @section Common parameters
## @param external Enable external access from outside the cluster
## @param replicas Number of NATS replicas
##
external: false
replicas: 2

View File

@@ -1,3 +1,23 @@
.helmignore
/logos
/Makefile
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@@ -1,7 +1,7 @@
apiVersion: v2
name: postgres
description: Managed PostgreSQL service
icon: /logos/postgres.svg
icon: https://cdn-icons-png.flaticon.com/512/5968/5968342.png
# A chart can be either an 'application' or a 'library' chart.
#
@@ -16,10 +16,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.4.0
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "16.2"
appVersion: "1.16.0"

View File

@@ -1,2 +0,0 @@
generate:
readme-generator -v values.yaml -s values.schema.json -r README.md

View File

@@ -30,37 +30,3 @@ restic -r s3:s3.example.org/postgres-backups/database_name restore latest --targ
more details:
- https://itnext.io/restic-effective-backup-from-stdin-4bc1e8f083c1
## Parameters
### Common parameters
| Name | Description | Value |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------- | ------- |
| `external` | Enable external access from outside the cluster | `false` |
| `size` | Persistent Volume size | `10Gi` |
| `replicas` | Number of Postgres replicas | `2` |
| `quorum.minSyncReplicas` | Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed. | `0` |
| `quorum.maxSyncReplicas` | Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the number of instances). | `0` |
### Configuration parameters
| Name | Description | Value |
| ----------- | ----------------------- | ----- |
| `users` | Users configuration | `{}` |
| `databases` | Databases configuration | `{}` |
### Backup parameters
| Name | Description | Value |
| ------------------------ | ---------------------------------------------- | ------------------------------------------------------ |
| `backup.enabled` | Enable periodic backups | `false` |
| `backup.s3Region` | The AWS S3 region where backups are stored | `us-east-1` |
| `backup.s3Bucket` | The S3 bucket used for storing backups | `s3.example.org/postgres-backups` |
| `backup.schedule` | Cron schedule for automated backups | `0 2 * * *` |
| `backup.cleanupStrategy` | The strategy for cleaning up old backups | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
| `backup.s3AccessKey` | The access key for S3, used for authentication | `oobaiRus9pah8PhohL1ThaeTa4UVa7gu` |
| `backup.s3SecretKey` | The secret key for S3, used for authentication | `ju3eum4dekeich9ahM1te8waeGai0oog` |
| `backup.resticPassword` | The password for Restic backup encryption | `ChaXoveekoh6eigh4siesheeda2quai0` |
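
The quorum parameters above map onto PostgreSQL synchronous replication. As a hypothetical example, a three-instance cluster that requires at least one synchronous acknowledgement per transaction could be configured as:

replicas: 3
quorum:
  minSyncReplicas: 1
  maxSyncReplicas: 2   # must stay below the number of instances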

View File

@@ -1,22 +0,0 @@
<?xml version="1.0"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg width="432.071pt" height="445.383pt" viewBox="0 0 432.071 445.383" xml:space="preserve" xmlns="http://www.w3.org/2000/svg">
<g id="orginal" style="fill-rule:nonzero;clip-rule:nonzero;stroke:#000000;stroke-miterlimit:4;">
</g>
<g id="Layer_x0020_3" style="fill-rule:nonzero;clip-rule:nonzero;fill:none;stroke:#FFFFFF;stroke-width:12.4651;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;">
<path style="fill:#000000;stroke:#000000;stroke-width:37.3953;stroke-linecap:butt;stroke-linejoin:miter;" d="M323.205,324.227c2.833-23.601,1.984-27.062,19.563-23.239l4.463,0.392c13.517,0.615,31.199-2.174,41.587-7c22.362-10.376,35.622-27.7,13.572-23.148c-50.297,10.376-53.755-6.655-53.755-6.655c53.111-78.803,75.313-178.836,56.149-203.322 C352.514-5.534,262.036,26.049,260.522,26.869l-0.482,0.089c-9.938-2.062-21.06-3.294-33.554-3.496c-22.761-0.374-40.032,5.967-53.133,15.904c0,0-161.408-66.498-153.899,83.628c1.597,31.936,45.777,241.655,98.47,178.31 c19.259-23.163,37.871-42.748,37.871-42.748c9.242,6.14,20.307,9.272,31.912,8.147l0.897-0.765c-0.281,2.876-0.157,5.689,0.359,9.019c-13.572,15.167-9.584,17.83-36.723,23.416c-27.457,5.659-11.326,15.734-0.797,18.367c12.768,3.193,42.305,7.716,62.268-20.224 l-0.795,3.188c5.325,4.26,4.965,30.619,5.72,49.452c0.756,18.834,2.017,36.409,5.856,46.771c3.839,10.36,8.369,37.05,44.036,29.406c29.809-6.388,52.6-15.582,54.677-101.107"/>
<path style="fill:#336791;stroke:none;" d="M402.395,271.23c-50.302,10.376-53.76-6.655-53.76-6.655c53.111-78.808,75.313-178.843,56.153-203.326c-52.27-66.785-142.752-35.2-144.262-34.38l-0.486,0.087c-9.938-2.063-21.06-3.292-33.56-3.496c-22.761-0.373-40.026,5.967-53.127,15.902 c0,0-161.411-66.495-153.904,83.63c1.597,31.938,45.776,241.657,98.471,178.312c19.26-23.163,37.869-42.748,37.869-42.748c9.243,6.14,20.308,9.272,31.908,8.147l0.901-0.765c-0.28,2.876-0.152,5.689,0.361,9.019c-13.575,15.167-9.586,17.83-36.723,23.416 c-27.459,5.659-11.328,15.734-0.796,18.367c12.768,3.193,42.307,7.716,62.266-20.224l-0.796,3.188c5.319,4.26,9.054,27.711,8.428,48.969c-0.626,21.259-1.044,35.854,3.147,47.254c4.191,11.4,8.368,37.05,44.042,29.406c29.809-6.388,45.256-22.942,47.405-50.555 c1.525-19.631,4.976-16.729,5.194-34.28l2.768-8.309c3.192-26.611,0.507-35.196,18.872-31.203l4.463,0.392c13.517,0.615,31.208-2.174,41.591-7c22.358-10.376,35.618-27.7,13.573-23.148z"/>
<path d="M215.866,286.484c-1.385,49.516,0.348,99.377,5.193,111.495c4.848,12.118,15.223,35.688,50.9,28.045c29.806-6.39,40.651-18.756,45.357-46.051c3.466-20.082,10.148-75.854,11.005-87.281"/>
<path d="M173.104,38.256c0,0-161.521-66.016-154.012,84.109c1.597,31.938,45.779,241.664,98.473,178.316c19.256-23.166,36.671-41.335,36.671-41.335"/>
<path d="M260.349,26.207c-5.591,1.753,89.848-34.889,144.087,34.417c19.159,24.484-3.043,124.519-56.153,203.329"/>
<path style="stroke-linejoin:bevel;" d="M348.282,263.953c0,0,3.461,17.036,53.764,6.653c22.04-4.552,8.776,12.774-13.577,23.155c-18.345,8.514-59.474,10.696-60.146-1.069c-1.729-30.355,21.647-21.133,19.96-28.739c-1.525-6.85-11.979-13.573-18.894-30.338 c-6.037-14.633-82.796-126.849,21.287-110.183c3.813-0.789-27.146-99.002-124.553-100.599c-97.385-1.597-94.19,119.762-94.19,119.762"/>
<path d="M188.604,274.334c-13.577,15.166-9.584,17.829-36.723,23.417c-27.459,5.66-11.326,15.733-0.797,18.365c12.768,3.195,42.307,7.718,62.266-20.229c6.078-8.509-0.036-22.086-8.385-25.547c-4.034-1.671-9.428-3.765-16.361,3.994z"/>
<path d="M187.715,274.069c-1.368-8.917,2.93-19.528,7.536-31.942c6.922-18.626,22.893-37.255,10.117-96.339c-9.523-44.029-73.396-9.163-73.436-3.193c-0.039,5.968,2.889,30.26-1.067,58.548c-5.162,36.913,23.488,68.132,56.479,64.938"/>
<path style="fill:#FFFFFF;stroke-width:4.155;stroke-linecap:butt;stroke-linejoin:miter;" d="M172.517,141.7c-0.288,2.039,3.733,7.48,8.976,8.207c5.234,0.73,9.714-3.522,9.998-5.559c0.284-2.039-3.732-4.285-8.977-5.015c-5.237-0.731-9.719,0.333-9.996,2.367z"/>
<path style="fill:#FFFFFF;stroke-width:2.0775;stroke-linecap:butt;stroke-linejoin:miter;" d="M331.941,137.543c0.284,2.039-3.732,7.48-8.976,8.207c-5.238,0.73-9.718-3.522-10.005-5.559c-0.277-2.039,3.74-4.285,8.979-5.015c5.239-0.73,9.718,0.333,10.002,2.368z"/>
<path d="M350.676,123.432c0.863,15.994-3.445,26.888-3.988,43.914c-0.804,24.748,11.799,53.074-7.191,81.435"/>
<path style="stroke-width:3;" d="M0,60.232"/>
</g>
</svg>


View File

@@ -4,16 +4,13 @@ kind: Cluster
metadata:
name: {{ .Release.Name }}
spec:
instances: {{ .Values.replicas }}
instances: 2
enableSuperuserAccess: true
postgresql:
parameters:
max_wal_senders: "30"
minSyncReplicas: {{ .Values.quorum.minSyncReplicas }}
maxSyncReplicas: {{ .Values.quorum.maxSyncReplicas }}
monitoring:
enablePodMonitor: true

View File

@@ -53,93 +53,60 @@ stringData:
echo "== grant privileges on databases to roles"
{{- range $database, $d := .Values.databases }}
psql -v ON_ERROR_STOP=1 --echo-all -d "{{ $database }}" <<\EOT
ALTER DATABASE {{ $database }} OWNER TO {{ $database }}_admin;
GRANT CONNECT ON DATABASE {{ $database }} TO {{ $database }}_readonly;
DO $$
# admin
psql -v ON_ERROR_STOP=1 --echo-all -d "{{ $database }}" <<\EOT
DO $$DECLARE r record;
DECLARE
schema_record record;
v_schema varchar := 'public';
v_new_owner varchar := '{{ $database }}_admin';
BEGIN
-- Loop over all schemas
FOR schema_record IN SELECT schema_name FROM information_schema.schemata WHERE schema_name NOT IN ('pg_catalog', 'information_schema') LOOP
-- Changing Schema Ownership
EXECUTE format('ALTER SCHEMA %I OWNER TO %I', schema_record.schema_name, '{{ $database }}_admin');
-- Add rights for the admin role
EXECUTE format('GRANT ALL ON SCHEMA %I TO %I', schema_record.schema_name, '{{ $database }}_admin');
EXECUTE format('GRANT ALL ON ALL TABLES IN SCHEMA %I TO %I', schema_record.schema_name, '{{ $database }}_admin');
EXECUTE format('GRANT ALL ON ALL SEQUENCES IN SCHEMA %I TO %I', schema_record.schema_name, '{{ $database }}_admin');
EXECUTE format('GRANT ALL ON ALL FUNCTIONS IN SCHEMA %I TO %I', schema_record.schema_name, '{{ $database }}_admin');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON TABLES TO %I', schema_record.schema_name, '{{ $database }}_admin');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON SEQUENCES TO %I', schema_record.schema_name, '{{ $database }}_admin');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON FUNCTIONS TO %I', schema_record.schema_name, '{{ $database }}_admin');
-- Add rights for the readonly role
EXECUTE format('GRANT USAGE ON SCHEMA %I TO %I', schema_record.schema_name, '{{ $database }}_readonly');
EXECUTE format('GRANT SELECT ON ALL TABLES IN SCHEMA %I TO %I', schema_record.schema_name, '{{ $database }}_readonly');
EXECUTE format('GRANT USAGE ON ALL SEQUENCES IN SCHEMA %I TO %I', schema_record.schema_name, '{{ $database }}_readonly');
EXECUTE format('GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA %I TO %I', schema_record.schema_name, '{{ $database }}_readonly');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT SELECT ON TABLES TO %I', schema_record.schema_name, '{{ $database }}_readonly');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT USAGE ON SEQUENCES TO %I', schema_record.schema_name, '{{ $database }}_readonly');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT EXECUTE ON FUNCTIONS TO %I', schema_record.schema_name, '{{ $database }}_readonly');
FOR r IN
select 'ALTER TABLE "' || table_schema || '"."' || table_name || '" OWNER TO ' || v_new_owner || ';' as a from information_schema.tables where table_schema = v_schema
union all
select 'ALTER TABLE "' || sequence_schema || '"."' || sequence_name || '" OWNER TO ' || v_new_owner || ';' as a from information_schema.sequences where sequence_schema = v_schema
union all
select 'ALTER TABLE "' || table_schema || '"."' || table_name || '" OWNER TO ' || v_new_owner || ';' as a from information_schema.views where table_schema = v_schema
union all
select 'ALTER FUNCTION "'||nsp.nspname||'"."'||p.proname||'"('||pg_get_function_identity_arguments(p.oid)||') OWNER TO ' || v_new_owner || ';' as a from pg_proc p join pg_namespace nsp ON p.pronamespace = nsp.oid where nsp.nspname = v_schema
LOOP
EXECUTE r.a;
END LOOP;
END$$;
ALTER DATABASE {{ $database }} OWNER TO {{ $database }}_admin;
ALTER SCHEMA public OWNER TO {{ $database }}_admin;
GRANT ALL ON SCHEMA public TO {{ $database }}_admin;
GRANT ALL ON ALL TABLES IN SCHEMA public TO {{ $database }}_admin;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO {{ $database }}_admin;
GRANT ALL ON ALL FUNCTIONS IN SCHEMA public TO {{ $database }}_admin;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO {{ $database }}_admin;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON SEQUENCES TO {{ $database }}_admin;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON FUNCTIONS TO {{ $database }}_admin;
EOT
echo "== setup event trigger for schema creation"
# readonly
psql -v ON_ERROR_STOP=1 --echo-all -d "{{ $database }}" <<\EOT
CREATE OR REPLACE FUNCTION auto_grant_schema_privileges()
RETURNS event_trigger LANGUAGE plpgsql AS $$
DECLARE
obj record;
BEGIN
FOR obj IN SELECT * FROM pg_event_trigger_ddl_commands() WHERE command_tag = 'CREATE SCHEMA' LOOP
EXECUTE format('ALTER SCHEMA %I OWNER TO %I', obj.object_identity, '{{ $database }}_admin');
EXECUTE format('GRANT ALL ON SCHEMA %I TO %I', obj.object_identity, '{{ $database }}_admin');
EXECUTE format('GRANT USAGE ON SCHEMA %I TO %I', obj.object_identity, '{{ $database }}_readonly');
EXECUTE format('GRANT SELECT ON ALL TABLES IN SCHEMA %I TO %I', obj.object_identity, '{{ $database }}_readonly');
EXECUTE format('GRANT USAGE ON ALL SEQUENCES IN SCHEMA %I TO %I', obj.object_identity, '{{ $database }}_readonly');
EXECUTE format('GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA %I TO %I', obj.object_identity, '{{ $database }}_readonly');
-- Set owner for schema
EXECUTE format('ALTER SCHEMA %I OWNER TO %I', obj.object_identity, '{{ $database }}_admin');
-- Set privileges for admin role
EXECUTE format('GRANT ALL ON SCHEMA %I TO %I', obj.object_identity, '{{ $database }}_admin');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON TABLES TO %I', obj.object_identity, '{{ $database }}_admin');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON SEQUENCES TO %I', obj.object_identity, '{{ $database }}_admin');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT ALL ON FUNCTIONS TO %I', obj.object_identity, '{{ $database }}_admin');
-- Set privileges for readonly role
EXECUTE format('GRANT USAGE ON SCHEMA %I TO %I', obj.object_identity, '{{ $database }}_readonly');
EXECUTE format('GRANT SELECT ON ALL TABLES IN SCHEMA %I TO %I', obj.object_identity, '{{ $database }}_readonly');
EXECUTE format('GRANT USAGE ON ALL SEQUENCES IN SCHEMA %I TO %I', obj.object_identity, '{{ $database }}_readonly');
EXECUTE format('GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA %I TO %I', obj.object_identity, '{{ $database }}_readonly');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT SELECT ON TABLES TO %I', obj.object_identity, '{{ $database }}_readonly');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT USAGE ON SEQUENCES TO %I', obj.object_identity, '{{ $database }}_readonly');
EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT EXECUTE ON FUNCTIONS TO %I', obj.object_identity, '{{ $database }}_readonly');
END LOOP;
END;
$$;
DROP EVENT TRIGGER IF EXISTS trigger_auto_grant;
CREATE EVENT TRIGGER trigger_auto_grant ON ddl_command_end
WHEN TAG IN ('CREATE SCHEMA')
EXECUTE PROCEDURE auto_grant_schema_privileges();
GRANT CONNECT ON DATABASE {{ $database }} TO {{ $database }}_readonly;
GRANT USAGE ON SCHEMA public TO {{ $database }}_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO {{ $database }}_readonly;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO {{ $database }}_readonly;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO {{ $database }}_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO {{ $database }}_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT USAGE ON SEQUENCES TO {{ $database }}_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT EXECUTE ON FUNCTIONS TO {{ $database }}_readonly;
EOT
{{- end }}
echo "== assign roles to users"
psql -v ON_ERROR_STOP=1 --echo-all <<\EOT
{{- range $database, $d := .Values.databases }}
{{- range $user, $u := $.Values.users }}
{{- if has $user $d.roles.admin }}
{{- range $user, $u := $.Values.roles }}
{{- if has $user $d.users.admin }}
GRANT {{ $database }}_admin TO {{ $user }};
{{- else }}
REVOKE {{ $database }}_admin FROM {{ $user }};
{{- end }}
{{- if has $user $d.roles.readonly }}
{{- if has $user $d.users.readonly }}
GRANT {{ $database }}_readonly TO {{ $user }};
{{- else }}
REVOKE {{ $database }}_readonly FROM {{ $user }};
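
The loop above grants or revokes the per-database admin and readonly roles based on chart values; the interleaved lines reflect a rename between .Values.users/$d.roles and .Values.roles/$d.users on the two sides of this diff. A hypothetical values layout matching the users-plus-roles variant:

users:
  user1:
    password: hackme
  user2:
    password: hackme
databases:
  myapp:
    roles:
      admin:
        - user1
      readonly:
        - user2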

View File

@@ -1,86 +0,0 @@
{
"title": "Chart Values",
"type": "object",
"properties": {
"external": {
"type": "boolean",
"description": "Enable external access from outside the cluster",
"default": false
},
"size": {
"type": "string",
"description": "Persistent Volume size",
"default": "10Gi"
},
"replicas": {
"type": "number",
"description": "Number of Postgres replicas",
"default": 2
},
"quorum": {
"type": "object",
"properties": {
"minSyncReplicas": {
"type": "number",
"description": "Minimum number of synchronous replicas that must acknowledge a transaction before it is considered committed.",
"default": 0
},
"maxSyncReplicas": {
"type": "number",
"description": "Maximum number of synchronous replicas that can acknowledge a transaction (must be lower than the number of instances).",
"default": 0
}
}
},
"databases": {
"type": "object",
"description": "Databases configuration",
"default": {}
},
"backup": {
"type": "object",
"properties": {
"enabled": {
"type": "boolean",
"description": "Enable pereiodic backups",
"default": false
},
"s3Region": {
"type": "string",
"description": "The AWS S3 region where backups are stored",
"default": "us-east-1"
},
"s3Bucket": {
"type": "string",
"description": "The S3 bucket used for storing backups",
"default": "s3.example.org/postgres-backups"
},
"schedule": {
"type": "string",
"description": "Cron schedule for automated backups",
"default": "0 2 * * *"
},
"cleanupStrategy": {
"type": "string",
"description": "The strategy for cleaning up old backups",
"default": "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
},
"s3AccessKey": {
"type": "string",
"description": "The access key for S3, used for authentication",
"default": "oobaiRus9pah8PhohL1ThaeTa4UVa7gu"
},
"s3SecretKey": {
"type": "string",
"description": "The secret key for S3, used for authentication",
"default": "ju3eum4dekeich9ahM1te8waeGai0oog"
},
"resticPassword": {
"type": "string",
"description": "The password for Restic backup encryption",
"default": "ChaXoveekoh6eigh4siesheeda2quai0"
}
}
}
}
}

Some files were not shown because too many files have changed in this diff.