Compare commits

...

13 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Andrei Kvapil | 5310bd3591 | add extra bundles | 2024-04-01 14:41:13 +02:00 |
| Andrei Kvapil | 88d2dcc3e1 | match bundle-name from cozystack-config | 2024-04-01 11:48:10 +02:00 |
| Andrei Kvapil | b53e264c5e | Allow overriding values by prividng `values-<release>: <json\|yaml>` in cozystack-config | 2024-04-01 10:36:17 +02:00 |
| Andrei Kvapil | 0c92d25669 | bundles | 2024-04-01 10:13:26 +02:00 |
| Andrei Kvapil | e17dcaa65e | Update CNPG to 1.22.2 (#46) (Signed-off-by: Andrei Kvapil <kvapss@gmail.com>) | 2024-03-15 21:15:36 +01:00 |
| Andrei Kvapil | 85d4ed251d | Update piraeus-operator and LINSTOR v2.4.1 (#45) | 2024-03-15 21:15:27 +01:00 |
| Andrei Kvapil | f1c01a0fe8 | Add link to roadmap (#41) (Signed-off-by: Andrei Kvapil <kvapss@gmail.com>) | 2024-03-15 21:15:17 +01:00 |
| Andrei Kvapil | 2cff181279 | Preapre release v0.2.0 (#38) (Signed-off-by: Andrei Kvapil <kvapss@gmail.com>) | 2024-03-15 21:15:06 +01:00 |
| Andrei Kvapil | 2e3555600d | Positioning Cozystack as framework for building clouds (#31) (Signed-off-by: Andrei Kvapil <kvapss@gmail.com>) | 2024-03-05 11:05:40 +01:00 |
| George Gaál | 98f488fcac | Fix gitignore (#26) (Signed-off-by: George Gaál <gb12335@gmail.com>) | 2024-02-21 12:33:52 +01:00 |
| Andrei Kvapil | 1c6de1ccf5 | Preapre release v0.1.0 (#25) | 2024-02-20 12:24:39 +01:00 |
| Andrei Kvapil | 235a2fcf47 | Workaround: The declarative way to flush redis for our dashboard (#24) | 2024-02-19 20:13:55 +01:00 |
| Andrei Kvapil | 24151b09f3 | Update README.md (#21) | 2024-02-19 15:16:01 +01:00 |
78 changed files with 18997 additions and 6932 deletions

.gitignore (vendored) · 2 lines changed

@@ -1 +1,3 @@
_out
.git
.idea

README.md · 553 lines changed

@@ -10,7 +10,7 @@
# Cozystack
**Cozystack** is an open-source **PaaS platform** for cloud providers.
**Cozystack** is a free PaaS platform and framework for building clouds.
With Cozystack, you can transform a bunch of servers into an intelligent system with a simple REST API for spawning Kubernetes clusters, Database-as-a-Service, virtual machines, load balancers, HTTP caching services, and other services with ease.
@@ -18,548 +18,55 @@ You can use Cozystack to build your own cloud or to provide a cost-effective dev
## Use-Cases
### As a backend for a public cloud
* [**Using Cozystack to build public cloud**](https://cozystack.io/docs/use-cases/public-cloud/)
You can use Cozystack as a backend for a public cloud
Cozystack positions itself as a kind of framework for building public clouds. The key word here is framework: Cozystack is made for cloud providers, not for end users.
* [**Using Cozystack to build private cloud**](https://cozystack.io/docs/use-cases/private-cloud/)
You can use Cozystack as a platform to build a private cloud powered by an Infrastructure-as-Code approach
Despite having a graphical interface, the current security model does not assume public user access to your management cluster.
Instead, end users get access to their own Kubernetes clusters, from which they can order LoadBalancers and additional services, but they have no access to, and know nothing about, your management cluster powered by Cozystack.
Thus, to integrate with your billing system, it is enough to teach it to talk to the management Kubernetes cluster and apply a YAML manifest describing the service you are interested in. Cozystack will do the rest of the work for you.
![](https://aenix.io/wp-content/uploads/2024/02/Wireframe-1.png)
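For illustration, here is a minimal sketch of such a manifest, modeled on the Flux HelmRelease objects used for the system components later in this change set; the application chart, repository name, tenant namespace, and values below are hypothetical and will differ in a real installation.
```yaml
# Hypothetical manifest that a billing system could apply to the management
# cluster to order a managed Kubernetes cluster for one of its customers.
# The structure mirrors the HelmRelease objects used elsewhere in this diff;
# chart, repository, namespace, and values are illustrative assumptions.
apiVersion: helm.toolkit.fluxcd.io/v2beta2
kind: HelmRelease
metadata:
  name: kubernetes-cluster1
  namespace: tenant-team1          # hypothetical tenant namespace
spec:
  interval: 1m
  releaseName: kubernetes-cluster1
  chart:
    spec:
      chart: kubernetes            # hypothetical application chart name
      sourceRef:
        kind: HelmRepository
        name: cozystack-apps       # hypothetical repository name
        namespace: cozy-public
  values:
    replicas: 2                    # hypothetical values schema
```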
### As a private cloud for Infrastructure-as-Code
One of the use cases is a self-service portal for users within your company, where they can order the service they're interested in, such as a managed database.
You can implement GitOps best practices, where users launch their own Kubernetes clusters and databases for their needs with a simple commit of configuration to your infrastructure Git repository.
Thanks to the standardized approach to deploying applications, you can extend the platform's capabilities using standard Helm charts.
### As a Kubernetes distribution for Bare Metal
We created Cozystack primarily for our own needs, drawing on extensive experience in building reliable systems on bare metal infrastructure. This experience led to a separate boxed product aimed at standardizing infrastructure management and providing a ready-to-use tool for it.
Cozystack already solves a huge scope of infrastructure tasks out of the box: provisioning bare metal servers, a ready-made monitoring system, fast and reliable storage, a network fabric that can interconnect with your existing infrastructure, the ability to run virtual machines, databases, and much more.
All this makes Cozystack a convenient platform for delivering and launching your application on Bare Metal.
* [**Using Cozystack as Kubernetes distribution**](https://cozystack.io/docs/use-cases/kubernetes-distribution/)
You can use Cozystack as a Kubernetes distribution for Bare Metal
## Screenshot
![](https://aenix.io/wp-content/uploads/2023/12/cozystack1-1.png)
![Cozystack screenshot](https://cozystack.io/img/screenshot.png)
## Core values
## Documentation
### Standardization and unification
All components of the platform are based on open source tools and technologies which are widely known in the industry.
The documentation is located on the official [cozystack.io](https://cozystack.io) website.
### Collaborate, not compete
If a feature being developed for the platform could be useful to an upstream project, it should be contributed upstream rather than implemented within the platform.
Read the [Get Started](https://cozystack.io/docs/get-started/) section for a quick start.
### API-first
Cozystack is based on Kubernetes and involves close interaction with its API. We don't aim to completely hide all the elements behind a pretty UI or any sort of customizations; instead, we provide a standard interface and teach users how to work with basic primitives. The web interface is used solely for deploying applications and quickly diving into the basic concepts of the platform.
If you encounter any difficulties, start with the [troubleshooting guide](https://cozystack.io/docs/troubleshooting/), and work your way through the process that we've outlined.
## Quick Start
## Versioning
### Prepare infrastructure
Versioning adheres to the [Semantic Versioning](http://semver.org/) principles.
A full list of the available releases can be found in the GitHub repository's [Releases](https://github.com/aenix-io/cozystack/releases) section.
- [Roadmap](https://github.com/orgs/aenix-io/projects/2)
![](https://aenix.io/wp-content/uploads/2024/02/Wireframe-2.png)
## Contributions
You need 3 physical servers or VMs with nested virtualisation:
Contributions are highly appreciated and very welcome!
```
CPU: 4 cores
CPU model: host
RAM: 8-16 GB
HDD1: 32 GB
HDD2: 100GB (raw)
```
In case of bugs, please check whether the issue has already been opened in the [GitHub Issues](https://github.com/aenix-io/cozystack/issues) section.
If it hasn't, you can open a new one: a detailed report will help us replicate it, assess it, and work on a fix.
You also need one management VM or physical server connected to the same network.
Any Linux system installed on it will do (e.g. Ubuntu is enough).
You can express your intention to work on the fix yourself.
Commits are used to generate the changelog, and their author will be referenced in it.
**Note:** The VMs should support the `x86-64-v2` architecture; most likely you can achieve this by setting the CPU model to `host`.
In case of **Feature Requests** please use the [Discussion's Feature Request section](https://github.com/aenix-io/cozystack/discussions/categories/feature-requests).
#### Install dependencies:
## License
- `docker`
- `talosctl`
- `dialog`
- `nmap`
- `make`
- `yq`
- `kubectl`
- `helm`
Cozystack is licensed under Apache 2.0.
The code is provided as-is with no warranties.
### Netboot server
## Commercial Support
Start matchbox with a prebuilt Talos image for Cozystack:
[**Ænix**](https://aenix.io) offers enterprise-grade support, available 24/7.
```bash
sudo docker run --name=matchbox -d --net=host ghcr.io/aenix-io/cozystack/matchbox:v1.6.4 \
-address=:8080 \
-log-level=debug
```
We provide all types of assistance, including consultations, development of missing features, design, assistance with installation, and integration.
Start the DHCP server:
```bash
sudo docker run --name=dnsmasq -d --cap-add=NET_ADMIN --net=host quay.io/poseidon/dnsmasq \
-d -q -p0 \
--dhcp-range=192.168.100.3,192.168.100.254 \
--dhcp-option=option:router,192.168.100.1 \
--enable-tftp \
--tftp-root=/var/lib/tftpboot \
--dhcp-match=set:bios,option:client-arch,0 \
--dhcp-boot=tag:bios,undionly.kpxe \
--dhcp-match=set:efi32,option:client-arch,6 \
--dhcp-boot=tag:efi32,ipxe.efi \
--dhcp-match=set:efibc,option:client-arch,7 \
--dhcp-boot=tag:efibc,ipxe.efi \
--dhcp-match=set:efi64,option:client-arch,9 \
--dhcp-boot=tag:efi64,ipxe.efi \
--dhcp-userclass=set:ipxe,iPXE \
--dhcp-boot=tag:ipxe,http://192.168.100.254:8080/boot.ipxe \
--log-queries \
--log-dhcp
```
Where:
- `192.168.100.3,192.168.100.254` is the range to allocate IPs from
- `192.168.100.1` is your gateway
- `192.168.100.254` is the address of your management server
Check the status of the containers:
```bash
docker ps
```
example output:
```console
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
22044f26f74d quay.io/poseidon/dnsmasq "/usr/sbin/dnsmasq -…" 6 seconds ago Up 5 seconds dnsmasq
231ad81ff9e0 ghcr.io/aenix-io/cozystack/matchbox:v0.0.2 "/matchbox -address=…" 58 seconds ago Up 57 seconds matchbox
```
### Bootstrap cluster
Write configuration for Cozystack:
```yaml
cat > patch.yaml <<\EOT
machine:
kubelet:
nodeIP:
validSubnets:
- 192.168.100.0/24
kernel:
modules:
- name: openvswitch
- name: drbd
parameters:
- usermode_helper=disabled
- name: zfs
install:
image: ghcr.io/aenix-io/cozystack/talos:v1.6.4
files:
- content: |
[plugins]
[plugins."io.containerd.grpc.v1.cri"]
device_ownership_from_security_context = true
path: /etc/cri/conf.d/20-customization.part
op: create
cluster:
network:
cni:
name: none
podSubnets:
- 10.244.0.0/16
serviceSubnets:
- 10.96.0.0/16
EOT
cat > patch-controlplane.yaml <<\EOT
cluster:
allowSchedulingOnControlPlanes: true
controllerManager:
extraArgs:
bind-address: 0.0.0.0
scheduler:
extraArgs:
bind-address: 0.0.0.0
apiServer:
certSANs:
- 127.0.0.1
proxy:
disabled: true
discovery:
enabled: false
etcd:
advertisedSubnets:
- 192.168.100.0/24
EOT
```
Run [talos-bootstrap](https://github.com/aenix-io/talos-bootstrap/) to deploy the cluster:
```bash
talos-bootstrap install
```
Save admin kubeconfig to access your Kubernetes cluster:
```bash
cp -i kubeconfig ~/.kube/config
```
Check connection:
```bash
kubectl get ns
```
example output:
```console
NAME STATUS AGE
default Active 7m56s
kube-node-lease Active 7m56s
kube-public Active 7m56s
kube-system Active 7m56s
```
**Note:** All nodes will currently show as "Not Ready". Don't worry about that: this is because you disabled the default CNI plugin in the previous step. Cozystack will install its own CNI plugin in the next step.
### Install Cozystack
Write the config for Cozystack:
**Note:** please make sure that you specify the same settings used in the `patch.yaml` and `patch-controlplane.yaml` files.
```yaml
cat > cozystack-config.yaml <<\EOT
apiVersion: v1
kind: ConfigMap
metadata:
name: cozystack
namespace: cozy-system
data:
cluster-name: "cozystack"
ipv4-pod-cidr: "10.244.0.0/16"
ipv4-pod-gateway: "10.244.0.1"
ipv4-svc-cidr: "10.96.0.0/16"
ipv4-join-cidr: "100.64.0.0/16"
EOT
```
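The commits in this comparison also teach the installer to read a few optional keys from this same ConfigMap: `bundle-name` selects which bundle of system releases to install, `bundle-disable` lists components to skip, and `values-<release>` overrides the values of a single release (the key names come from the templates shown further down in this diff). Below is a hedged sketch of such a ConfigMap; the bundle name and the override values themselves are hypothetical.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cozystack
  namespace: cozy-system
data:
  bundle-name: "paas-full"          # hypothetical bundle name
  bundle-disable: "telepresence"    # comma-separated list of releases to skip
  # values-<release>: YAML (or JSON) overrides for a single release
  values-dashboard: |
    kubeapps:
      frontend:
        replicaCount: 1             # hypothetical override
  cluster-name: "cozystack"
  ipv4-pod-cidr: "10.244.0.0/16"
  ipv4-pod-gateway: "10.244.0.1"
  ipv4-svc-cidr: "10.96.0.0/16"
  ipv4-join-cidr: "100.64.0.0/16"
```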
Create the namespace and install the Cozystack system components:
```bash
kubectl create ns cozy-system
kubectl apply -f cozystack-config.yaml
kubectl apply -f manifests/cozystack-installer.yaml
```
(optional) You can follow the logs of the installer:
```bash
kubectl logs -n cozy-system deploy/cozystack -f
```
Wait for a while, then check the status of the installation:
```bash
kubectl get hr -A
```
Wait until all releases reach the `Ready` state:
```console
NAMESPACE NAME AGE READY STATUS
cozy-cert-manager cert-manager 4m1s True Release reconciliation succeeded
cozy-cert-manager cert-manager-issuers 4m1s True Release reconciliation succeeded
cozy-cilium cilium 4m1s True Release reconciliation succeeded
cozy-cluster-api capi-operator 4m1s True Release reconciliation succeeded
cozy-cluster-api capi-providers 4m1s True Release reconciliation succeeded
cozy-dashboard dashboard 4m1s True Release reconciliation succeeded
cozy-fluxcd cozy-fluxcd 4m1s True Release reconciliation succeeded
cozy-grafana-operator grafana-operator 4m1s True Release reconciliation succeeded
cozy-kamaji kamaji 4m1s True Release reconciliation succeeded
cozy-kubeovn kubeovn 4m1s True Release reconciliation succeeded
cozy-kubevirt-cdi kubevirt-cdi 4m1s True Release reconciliation succeeded
cozy-kubevirt-cdi kubevirt-cdi-operator 4m1s True Release reconciliation succeeded
cozy-kubevirt kubevirt 4m1s True Release reconciliation succeeded
cozy-kubevirt kubevirt-operator 4m1s True Release reconciliation succeeded
cozy-linstor linstor 4m1s True Release reconciliation succeeded
cozy-linstor piraeus-operator 4m1s True Release reconciliation succeeded
cozy-mariadb-operator mariadb-operator 4m1s True Release reconciliation succeeded
cozy-metallb metallb 4m1s True Release reconciliation succeeded
cozy-monitoring monitoring 4m1s True Release reconciliation succeeded
cozy-postgres-operator postgres-operator 4m1s True Release reconciliation succeeded
cozy-rabbitmq-operator rabbitmq-operator 4m1s True Release reconciliation succeeded
cozy-redis-operator redis-operator 4m1s True Release reconciliation succeeded
cozy-telepresence telepresence 4m1s True Release reconciliation succeeded
cozy-victoria-metrics-operator victoria-metrics-operator 4m1s True Release reconciliation succeeded
tenant-root tenant-root 4m1s True Release reconciliation succeeded
```
#### Configure Storage
Set up an alias to access LINSTOR:
```bash
alias linstor='kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor'
```
List your nodes:
```bash
linstor node list
```
example output:
```console
+-------------------------------------------------------+
| Node | NodeType | Addresses | State |
|=======================================================|
| srv1 | SATELLITE | 192.168.100.11:3367 (SSL) | Online |
| srv2 | SATELLITE | 192.168.100.12:3367 (SSL) | Online |
| srv3 | SATELLITE | 192.168.100.13:3367 (SSL) | Online |
+-------------------------------------------------------+
```
List empty devices:
```bash
linstor physical-storage list
```
example output:
```console
+--------------------------------------------+
| Size | Rotational | Nodes |
|============================================|
| 107374182400 | True | srv3[/dev/sdb] |
| | | srv1[/dev/sdb] |
| | | srv2[/dev/sdb] |
+--------------------------------------------+
```
Create storage pools:
```bash
linstor ps cdp lvm srv1 /dev/sdb --pool-name data --storage-pool data
linstor ps cdp lvm srv2 /dev/sdb --pool-name data --storage-pool data
linstor ps cdp lvm srv3 /dev/sdb --pool-name data --storage-pool data
```
List storage pools:
```bash
linstor sp l
```
example output:
```console
+-------------------------------------------------------------------------------------------------------------------------------------+
| StoragePool | Node | Driver | PoolName | FreeCapacity | TotalCapacity | CanSnapshots | State | SharedName |
|=====================================================================================================================================|
| DfltDisklessStorPool | srv1 | DISKLESS | | | | False | Ok | srv1;DfltDisklessStorPool |
| DfltDisklessStorPool | srv2 | DISKLESS | | | | False | Ok | srv2;DfltDisklessStorPool |
| DfltDisklessStorPool | srv3 | DISKLESS | | | | False | Ok | srv3;DfltDisklessStorPool |
| data | srv1 | LVM | data | 100.00 GiB | 100.00 GiB | False | Ok | srv1;data |
| data | srv2 | LVM | data | 100.00 GiB | 100.00 GiB | False | Ok | srv2;data |
| data | srv3 | LVM | data | 100.00 GiB | 100.00 GiB | False | Ok | srv3;data |
+-------------------------------------------------------------------------------------------------------------------------------------+
```
Create default storage classes:
```yaml
kubectl create -f- <<EOT
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: linstor.csi.linbit.com
parameters:
linstor.csi.linbit.com/storagePool: "data"
linstor.csi.linbit.com/layerList: "storage"
linstor.csi.linbit.com/allowRemoteVolumeAccess: "false"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: replicated
provisioner: linstor.csi.linbit.com
parameters:
linstor.csi.linbit.com/storagePool: "data"
linstor.csi.linbit.com/autoPlace: "3"
linstor.csi.linbit.com/layerList: "drbd storage"
linstor.csi.linbit.com/allowRemoteVolumeAccess: "true"
property.linstor.csi.linbit.com/DrbdOptions/auto-quorum: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-no-data-accessible: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-suspended-primary-outdated: force-secondary
property.linstor.csi.linbit.com/DrbdOptions/Net/rr-conflict: retry-connect
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOT
```
List the storage classes:
```bash
kubectl get storageclasses
```
example output:
```console
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local (default) linstor.csi.linbit.com Delete WaitForFirstConsumer true 11m
replicated linstor.csi.linbit.com Delete WaitForFirstConsumer true 11m
```
#### Configure Networking interconnection
To access your services, select a range of unused IPs, e.g. `192.168.100.200-192.168.100.250`.
**Note:** These IPs should be in the same network as the nodes, or all the necessary routes to them should be configured.
Configure MetalLB to use and announce this range:
```yaml
kubectl create -f- <<EOT
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: cozystack
namespace: cozy-metallb
spec:
ipAddressPools:
- cozystack
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: cozystack
namespace: cozy-metallb
spec:
addresses:
- 192.168.100.200-192.168.100.250
autoAssign: true
avoidBuggyIPs: false
EOT
```
#### Set up basic applications
Get the token from `tenant-root` (it is used to sign in to the dashboard):
```bash
kubectl get secret -n tenant-root tenant-root -o go-template='{{ printf "%s\n" (index .data "token" | base64decode) }}'
```
Enable port forwarding to cozy-dashboard:
```bash
kubectl port-forward -n cozy-dashboard svc/dashboard 8080:80
```
Open: http://localhost:8080/
- Select `tenant-root`
- Click `Upgrade` button
- Write the domain you wish to use as the parent domain for all deployed applications into `host`
**Note:**
- if you have no domain yet, you can use `192.168.100.200.nip.io`, where `192.168.100.200` is the first IP address in your range of network addresses.
- alternatively, you can leave the default value; however, you'll need to modify your `/etc/hosts` every time you want to access a specific application.
- Set `etcd`, `monitoring`, and `ingress` to the enabled position (a hedged sketch of the corresponding values follows this list)
- Click Deploy
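Clicking `Deploy` presumably updates the values of the `tenant-root` release that appeared earlier in the `kubectl get hr -A` output. The snippet below is a hedged approximation of the values those form fields correspond to; the key names follow the fields above, but the exact schema is an assumption and should be checked against the actual tenant chart.
```yaml
# Hedged approximation of tenant-root values matching the dashboard form above.
# Key names follow the form fields; the exact schema is an assumption.
host: example.org   # parent domain for all deployed applications
etcd: true          # the "etcd" toggle
monitoring: true    # the "monitoring" toggle
ingress: true       # the "ingress" toggle
```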
Check that persistent volumes have been provisioned:
```bash
kubectl get pvc -n tenant-root
```
example output:
```console
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
data-etcd-0 Bound pvc-4cbd29cc-a29f-453d-b412-451647cd04bf 10Gi RWO local <unset> 2m10s
data-etcd-1 Bound pvc-1579f95a-a69d-4a26-bcc2-b15ccdbede0d 10Gi RWO local <unset> 115s
data-etcd-2 Bound pvc-907009e5-88bf-4d18-91e7-b56b0dbfb97e 10Gi RWO local <unset> 91s
grafana-db-1 Bound pvc-7b3f4e23-228a-46fd-b820-d033ef4679af 10Gi RWO local <unset> 2m41s
grafana-db-2 Bound pvc-ac9b72a4-f40e-47e8-ad24-f50d843b55e4 10Gi RWO local <unset> 113s
vmselect-cachedir-vmselect-longterm-0 Bound pvc-622fa398-2104-459f-8744-565eee0a13f1 2Gi RWO local <unset> 2m21s
vmselect-cachedir-vmselect-longterm-1 Bound pvc-fc9349f5-02b2-4e25-8bef-6cbc5cc6d690 2Gi RWO local <unset> 2m21s
vmselect-cachedir-vmselect-shortterm-0 Bound pvc-7acc7ff6-6b9b-4676-bd1f-6867ea7165e2 2Gi RWO local <unset> 2m41s
vmselect-cachedir-vmselect-shortterm-1 Bound pvc-e514f12b-f1f6-40ff-9838-a6bda3580eb7 2Gi RWO local <unset> 2m40s
vmstorage-db-vmstorage-longterm-0 Bound pvc-e8ac7fc3-df0d-4692-aebf-9f66f72f9fef 10Gi RWO local <unset> 2m21s
vmstorage-db-vmstorage-longterm-1 Bound pvc-68b5ceaf-3ed1-4e5a-9568-6b95911c7c3a 10Gi RWO local <unset> 2m21s
vmstorage-db-vmstorage-shortterm-0 Bound pvc-cee3a2a4-5680-4880-bc2a-85c14dba9380 10Gi RWO local <unset> 2m41s
vmstorage-db-vmstorage-shortterm-1 Bound pvc-d55c235d-cada-4c4a-8299-e5fc3f161789 10Gi RWO local <unset> 2m41s
```
Check that all pods are running:
```bash
kubectl get pod -n tenant-root
```
example output:
```console
NAME READY STATUS RESTARTS AGE
etcd-0 1/1 Running 0 2m1s
etcd-1 1/1 Running 0 106s
etcd-2 1/1 Running 0 82s
grafana-db-1 1/1 Running 0 119s
grafana-db-2 1/1 Running 0 13s
grafana-deployment-74b5656d6-5dcvn 1/1 Running 0 90s
grafana-deployment-74b5656d6-q5589 1/1 Running 1 (105s ago) 111s
root-ingress-controller-6ccf55bc6d-pg79l 2/2 Running 0 2m27s
root-ingress-controller-6ccf55bc6d-xbs6x 2/2 Running 0 2m29s
root-ingress-defaultbackend-686bcbbd6c-5zbvp 1/1 Running 0 2m29s
vmalert-vmalert-644986d5c-7hvwk 2/2 Running 0 2m30s
vmalertmanager-alertmanager-0 2/2 Running 0 2m32s
vmalertmanager-alertmanager-1 2/2 Running 0 2m31s
vminsert-longterm-75789465f-hc6cz 1/1 Running 0 2m10s
vminsert-longterm-75789465f-m2v4t 1/1 Running 0 2m12s
vminsert-shortterm-78456f8fd9-wlwww 1/1 Running 0 2m29s
vminsert-shortterm-78456f8fd9-xg7cw 1/1 Running 0 2m28s
vmselect-longterm-0 1/1 Running 0 2m12s
vmselect-longterm-1 1/1 Running 0 2m12s
vmselect-shortterm-0 1/1 Running 0 2m31s
vmselect-shortterm-1 1/1 Running 0 2m30s
vmstorage-longterm-0 1/1 Running 0 2m12s
vmstorage-longterm-1 1/1 Running 0 2m12s
vmstorage-shortterm-0 1/1 Running 0 2m32s
vmstorage-shortterm-1 1/1 Running 0 2m31s
```
Now you can get the public IP of the ingress controller:
```bash
kubectl get svc -n tenant-root root-ingress-controller
```
example output:
```console
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
root-ingress-controller LoadBalancer 10.96.16.141 192.168.100.200 80:31632/TCP,443:30113/TCP 3m33s
```
Use `grafana.example.org` (resolving to 192.168.100.200) to access system monitoring, where `example.org` is the domain you specified for `tenant-root`:
- login: `admin`
- password:
```bash
kubectl get secret -n tenant-root grafana-admin-password -o go-template='{{ printf "%s\n" (index .data "password" | base64decode) }}'
```
[Contact us](https://aenix.io/contact/)

View File

@@ -12,9 +12,6 @@ talos_version=$(awk '/^version:/ {print $2}' packages/core/installer/images/talo
set -x
sed -i "s|\(ghcr.io/aenix-io/cozystack/matchbox:\)v[^ ]\+|\1${talos_version}|g" README.md
sed -i "s|\(ghcr.io/aenix-io/cozystack/talos:\)v[^ ]\+|\1${talos_version}|g" README.md
sed -i "/^TAG / s|=.*|= ${version}|" \
packages/apps/http-cache/Makefile \
packages/apps/kubernetes/Makefile \

View File

@@ -61,8 +61,6 @@ spec:
selector:
matchLabels:
app: cozystack
strategy:
type: Recreate
template:
metadata:
labels:
@@ -72,14 +70,26 @@ spec:
serviceAccountName: cozystack
containers:
- name: cozystack
image: "ghcr.io/aenix-io/cozystack/installer:v0.0.2"
image: "ghcr.io/aenix-io/cozystack/cozystack:v0.1.0"
env:
- name: KUBERNETES_SERVICE_HOST
value: localhost
- name: KUBERNETES_SERVICE_PORT
value: "7445"
- name: K8S_AWAIT_ELECTION_ENABLED
value: "1"
- name: K8S_AWAIT_ELECTION_NAME
value: cozystack
- name: K8S_AWAIT_ELECTION_LOCK_NAME
value: cozystack
- name: K8S_AWAIT_ELECTION_LOCK_NAMESPACE
value: cozy-system
- name: K8S_AWAIT_ELECTION_IDENTITY
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: darkhttpd
image: "ghcr.io/aenix-io/cozystack/installer:v0.0.2"
image: "ghcr.io/aenix-io/cozystack/cozystack:v0.1.0"
command:
- /usr/bin/darkhttpd
- /cozystack/assets

View File

@@ -2,7 +2,7 @@ PUSH := 1
LOAD := 0
REGISTRY := ghcr.io/aenix-io/cozystack
NGINX_CACHE_TAG = v0.1.0
TAG := v0.0.2
TAG := v0.2.0
image: image-nginx

View File

@@ -1,14 +1,4 @@
{
"containerimage.config.digest": "sha256:f4ad0559a74749de0d11b1835823bf9c95332962b0909450251d849113f22c19",
"containerimage.descriptor": {
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"digest": "sha256:3a0e8d791e0ccf681711766387ea9278e7d39f1956509cead2f72aa0001797ef",
"size": 1093,
"platform": {
"architecture": "amd64",
"os": "linux"
}
},
"containerimage.digest": "sha256:3a0e8d791e0ccf681711766387ea9278e7d39f1956509cead2f72aa0001797ef",
"image.name": "ghcr.io/aenix-io/cozystack/nginx-cache:v0.1.0,ghcr.io/aenix-io/cozystack/nginx-cache:v0.1.0-v0.0.2"
"containerimage.config.digest": "sha256:318fd8d0d6f6127387042f6ad150e87023d1961c7c5059dd5324188a54b0ab4e",
"containerimage.digest": "sha256:e3cf145238e6e45f7f13b9acaea445c94ff29f76a34ba9fa50828401a5a3cc68"
}

View File

@@ -1,7 +1,7 @@
PUSH := 1
LOAD := 0
REGISTRY := ghcr.io/aenix-io/cozystack
TAG := v0.0.2
TAG := v0.2.0
UBUNTU_CONTAINER_DISK_TAG = v1.29.1
image: image-ubuntu-container-disk

View File

@@ -1,4 +1,4 @@
{
"containerimage.config.digest": "sha256:e982cfa2320d3139ed311ae44bcc5ea18db7e4e76d2746e0af04c516288ff0f1",
"containerimage.digest": "sha256:34f6aba5b5a2afbb46bbb891ef4ddc0855c2ffe4f9e5a99e8e553286ddd2c070"
"containerimage.config.digest": "sha256:ee8968be63c7c45621ec45f3687211e0875acb24e8d9784e8d2ebcbf46a3538c",
"containerimage.digest": "sha256:16c3c07e74212585786dc1f1ae31d3ab90a575014806193e8e37d1d7751cb084"
}

View File

@@ -3,7 +3,7 @@ NAME=installer
PUSH := 1
LOAD := 0
REGISTRY := ghcr.io/aenix-io/cozystack
TAG := v0.0.2
TAG := v0.2.0
TALOS_VERSION=$(shell awk '/^version:/ {print $$2}' images/talos/profiles/installer.yaml)
show:
@@ -18,19 +18,18 @@ diff:
update:
hack/gen-profiles.sh
image: image-installer image-talos image-matchbox
image: image-cozystack image-talos image-matchbox
image-installer:
docker buildx build -f images/installer/Dockerfile ../../.. \
image-cozystack:
docker buildx build -f images/cozystack/Dockerfile ../../.. \
--provenance false \
--tag $(REGISTRY)/installer:$(TAG) \
--tag $(REGISTRY)/installer:$(TALOS_VERSION)-$(TAG) \
--cache-from type=registry,ref=$(REGISTRY)/installer:$(TALOS_VERSION) \
--tag $(REGISTRY)/cozystack:$(TAG) \
--cache-from type=registry,ref=$(REGISTRY)/cozystack:$(TAG) \
--cache-to type=inline \
--metadata-file images/installer.json \
--metadata-file images/cozystack.json \
--push=$(PUSH) \
--load=$(LOAD)
echo "$(REGISTRY)/installer:$(TALOS_VERSION)" > images/installer.tag
echo "$(REGISTRY)/cozystack:$(TAG)" > images/cozystack.tag
image-talos:
test -f ../../../_out/assets/installer-amd64.tar || make talos-installer
@@ -55,4 +54,7 @@ image-matchbox:
assets: talos-iso
talos-initramfs talos-kernel talos-installer talos-iso:
cat images/talos/profiles/$(subst talos-,,$@).yaml | docker run --rm -i -v $${PWD}/../../../_out/assets:/out -v /dev:/dev --privileged "ghcr.io/siderolabs/imager:$(TALOS_VERSION)" -
mkdir -p ../../../_out/assets
cat images/talos/profiles/$(subst talos-,,$@).yaml | \
docker run --rm -i -v /dev:/dev --privileged "ghcr.io/siderolabs/imager:$(TALOS_VERSION)" --tar-to-stdout - | \
tar -C ../../../_out/assets -xzf-

View File

@@ -0,0 +1,4 @@
{
"containerimage.config.digest": "sha256:ec8a4983a663f06a1503507482667a206e83e0d8d3663dff60ced9221855d6b0",
"containerimage.digest": "sha256:abb7b2fbc1f143c922f2a35afc4423a74b2b63c0bddfe620750613ed835aa861"
}

View File

@@ -0,0 +1 @@
ghcr.io/aenix-io/cozystack/cozystack:v0.1.0

View File

@@ -1,14 +0,0 @@
{
"containerimage.config.digest": "sha256:5c7f51a9cbc945c13d52157035eba6ba4b6f3b68b76280f8e64b4f6ba239db1a",
"containerimage.descriptor": {
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"digest": "sha256:7cda3480faf0539ed4a3dd252aacc7a997645d3a390ece377c36cf55f9e57e11",
"size": 2074,
"platform": {
"architecture": "amd64",
"os": "linux"
}
},
"containerimage.digest": "sha256:7cda3480faf0539ed4a3dd252aacc7a997645d3a390ece377c36cf55f9e57e11",
"image.name": "ghcr.io/aenix-io/cozystack/installer:v0.0.2"
}

View File

@@ -1 +0,0 @@
ghcr.io/aenix-io/cozystack/installer:v0.0.2

View File

@@ -1,14 +1,4 @@
{
"containerimage.config.digest": "sha256:cb8cb211017e51f6eb55604287c45cbf6ed8add5df482aaebff3d493a11b5a76",
"containerimage.descriptor": {
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"digest": "sha256:3be72cdce2f4ab4886a70fb7b66e4518a1fe4ba0771319c96fa19a0d6f409602",
"size": 1488,
"platform": {
"architecture": "amd64",
"os": "linux"
}
},
"containerimage.digest": "sha256:3be72cdce2f4ab4886a70fb7b66e4518a1fe4ba0771319c96fa19a0d6f409602",
"image.name": "ghcr.io/aenix-io/cozystack/matchbox:v0.0.2"
"containerimage.config.digest": "sha256:b869a6324f9c0e6d1dd48eee67cbe3842ee14efd59bdde477736ad2f90568ff7",
"containerimage.digest": "sha256:c30b237c5fa4fbbe47e1aba56e8f99569fe865620aa1953f31fc373794123cd7"
}

View File

@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/matchbox:v0.0.2
ghcr.io/aenix-io/cozystack/matchbox:v1.6.4

View File

@@ -50,7 +50,7 @@ spec:
serviceAccountName: cozystack
containers:
- name: cozystack
image: "{{ .Files.Get "images/installer.tag" | trim }}@{{ index (.Files.Get "images/installer.json" | fromJson) "containerimage.digest" }}"
image: "{{ .Files.Get "images/cozystack.tag" | trim }}@{{ index (.Files.Get "images/cozystack.json" | fromJson) "containerimage.digest" }}"
env:
- name: KUBERNETES_SERVICE_HOST
value: localhost
@@ -69,7 +69,7 @@ spec:
fieldRef:
fieldPath: metadata.name
- name: darkhttpd
image: "{{ .Files.Get "images/installer.tag" | trim }}@{{ index (.Files.Get "images/installer.json" | fromJson) "containerimage.digest" }}"
image: "{{ .Files.Get "images/cozystack.tag" | trim }}@{{ index (.Files.Get "images/cozystack.json" | fromJson) "containerimage.digest" }}"
command:
- /usr/bin/darkhttpd
- /cozystack/assets

View File

@@ -16,4 +16,4 @@ namespaces-apply:
helm template -n $(NAMESPACE) $(NAME) . --dry-run=server $(API_VERSIONS_FLAGS) -s templates/namespaces.yaml | kubectl apply -f-
diff:
helm template -n $(NAMESPACE) $(NAME) . --dry-run=server $(API_VERSIONS_FLAGS) -s templates/namespaces.yaml | kubectl diff -f-
helm template -n $(NAMESPACE) $(NAME) . --dry-run=server $(API_VERSIONS_FLAGS) | kubectl diff -f-

View File

@@ -0,0 +1,96 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
releases:
- name: cilium
releaseName: cilium
chart: cozy-cilium
namespace: cozy-cilium
privileged: true
dependsOn: []
- name: fluxcd
releaseName: fluxcd
chart: cozy-fluxcd
namespace: cozy-fluxcd
dependsOn: [cilium]
- name: cert-manager
releaseName: cert-manager
chart: cozy-cert-manager
namespace: cozy-cert-manager
dependsOn: [cilium]
- name: cert-manager-issuers
releaseName: cert-manager-issuers
chart: cozy-cert-manager-issuers
namespace: cozy-cert-manager
dependsOn: [cilium,cert-manager]
- name: victoria-metrics-operator
releaseName: victoria-metrics-operator
chart: cozy-victoria-metrics-operator
namespace: cozy-victoria-metrics-operator
dependsOn: [cilium,cert-manager]
- name: monitoring
releaseName: monitoring
chart: cozy-monitoring
namespace: cozy-monitoring
privileged: true
dependsOn: [cilium,victoria-metrics-operator]
- name: metallb
releaseName: metallb
chart: cozy-metallb
namespace: cozy-metallb
privileged: true
dependsOn: [cilium]
- name: grafana-operator
releaseName: grafana-operator
chart: cozy-grafana-operator
namespace: cozy-grafana-operator
dependsOn: [cilium]
- name: mariadb-operator
releaseName: mariadb-operator
chart: cozy-mariadb-operator
namespace: cozy-mariadb-operator
dependsOn: [cilium,cert-manager,victoria-metrics-operator]
- name: postgres-operator
releaseName: postgres-operator
chart: cozy-postgres-operator
namespace: cozy-postgres-operator
dependsOn: [cilium,cert-manager]
- name: rabbitmq-operator
releaseName: rabbitmq-operator
chart: cozy-rabbitmq-operator
namespace: cozy-rabbitmq-operator
dependsOn: [cilium]
- name: redis-operator
releaseName: redis-operator
chart: cozy-redis-operator
namespace: cozy-redis-operator
dependsOn: [cilium]
- name: piraeus-operator
releaseName: piraeus-operator
chart: cozy-piraeus-operator
namespace: cozy-linstor
dependsOn: [cilium,cert-manager]
- name: linstor
releaseName: linstor
chart: cozy-linstor
namespace: cozy-linstor
privileged: true
dependsOn: [piraeus-operator,cilium,cert-manager]
- name: telepresence
releaseName: traffic-manager
chart: cozy-telepresence
namespace: cozy-telepresence
dependsOn: [kubeovn]

View File

@@ -0,0 +1,177 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
releases:
- name: cilium
releaseName: cilium
chart: cozy-cilium
namespace: cozy-cilium
privileged: true
dependsOn: []
- name: kubeovn
releaseName: kubeovn
chart: cozy-kubeovn
namespace: cozy-kubeovn
privileged: true
dependsOn: [cilium]
values:
cozystack:
nodesHash: {{ include "cozystack.master-node-ips" . | sha256sum }}
kube-ovn:
ipv4:
POD_CIDR: "{{ index $cozyConfig.data "ipv4-pod-cidr" }}"
POD_GATEWAY: "{{ index $cozyConfig.data "ipv4-pod-gateway" }}"
SVC_CIDR: "{{ index $cozyConfig.data "ipv4-svc-cidr" }}"
JOIN_CIDR: "{{ index $cozyConfig.data "ipv4-join-cidr" }}"
- name: fluxcd
releaseName: fluxcd
chart: cozy-fluxcd
namespace: cozy-fluxcd
dependsOn: [cilium,kubeovn]
- name: cert-manager
releaseName: cert-manager
chart: cozy-cert-manager
namespace: cozy-cert-manager
dependsOn: [cilium,kubeovn]
- name: cert-manager-issuers
releaseName: cert-manager-issuers
chart: cozy-cert-manager-issuers
namespace: cozy-cert-manager
dependsOn: [cilium,kubeovn,cert-manager]
- name: victoria-metrics-operator
releaseName: victoria-metrics-operator
chart: cozy-victoria-metrics-operator
namespace: cozy-victoria-metrics-operator
dependsOn: [cilium,kubeovn,cert-manager]
- name: monitoring
releaseName: monitoring
chart: cozy-monitoring
namespace: cozy-monitoring
privileged: true
dependsOn: [cilium,kubeovn,victoria-metrics-operator]
- name: kubevirt-operator
releaseName: kubevirt-operator
chart: cozy-kubevirt-operator
namespace: cozy-kubevirt
dependsOn: [cilium,kubeovn]
- name: kubevirt
releaseName: kubevirt
chart: cozy-kubevirt
namespace: cozy-kubevirt
privileged: true
dependsOn: [cilium,kubeovn,kubevirt-operator]
- name: kubevirt-cdi-operator
releaseName: kubevirt-cdi-operator
chart: cozy-kubevirt-cdi-operator
namespace: cozy-kubevirt-cdi
dependsOn: [cilium,kubeovn]
- name: kubevirt-cdi
releaseName: kubevirt-cdi
chart: cozy-kubevirt-cdi
namespace: cozy-kubevirt-cdi
dependsOn: [cilium,kubeovn,kubevirt-cdi-operator]
- name: metallb
releaseName: metallb
chart: cozy-metallb
namespace: cozy-metallb
privileged: true
dependsOn: [cilium,kubeovn]
- name: grafana-operator
releaseName: grafana-operator
chart: cozy-grafana-operator
namespace: cozy-grafana-operator
dependsOn: [cilium,kubeovn]
- name: mariadb-operator
releaseName: mariadb-operator
chart: cozy-mariadb-operator
namespace: cozy-mariadb-operator
dependsOn: [cilium,kubeovn,cert-manager,victoria-metrics-operator]
- name: postgres-operator
releaseName: postgres-operator
chart: cozy-postgres-operator
namespace: cozy-postgres-operator
dependsOn: [cilium,kubeovn,cert-manager]
- name: rabbitmq-operator
releaseName: rabbitmq-operator
chart: cozy-rabbitmq-operator
namespace: cozy-rabbitmq-operator
dependsOn: [cilium,kubeovn]
- name: redis-operator
releaseName: redis-operator
chart: cozy-redis-operator
namespace: cozy-redis-operator
dependsOn: [cilium,kubeovn]
- name: piraeus-operator
releaseName: piraeus-operator
chart: cozy-piraeus-operator
namespace: cozy-linstor
dependsOn: [cilium,kubeovn,cert-manager]
- name: linstor
releaseName: linstor
chart: cozy-linstor
namespace: cozy-linstor
privileged: true
dependsOn: [piraeus-operator,cilium,kubeovn,cert-manager]
- name: telepresence
releaseName: traffic-manager
chart: cozy-telepresence
namespace: cozy-telepresence
dependsOn: [cilium,kubeovn]
- name: dashboard
releaseName: dashboard
chart: cozy-dashboard
namespace: cozy-dashboard
dependsOn: [cilium,kubeovn]
{{- if .Capabilities.APIVersions.Has "source.toolkit.fluxcd.io/v1beta2" }}
{{- with (lookup "source.toolkit.fluxcd.io/v1beta2" "HelmRepository" "cozy-public" "").items }}
values:
kubeapps:
redis:
master:
podAnnotations:
{{- range $index, $repo := . }}
{{- with (($repo.status).artifact).revision }}
repository.cozystack.io/{{ $repo.metadata.name }}: {{ quote . }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
- name: kamaji
releaseName: kamaji
chart: cozy-kamaji
namespace: cozy-kamaji
dependsOn: [cilium,kubeovn,cert-manager]
- name: capi-operator
releaseName: capi-operator
chart: cozy-capi-operator
namespace: cozy-cluster-api
privileged: true
dependsOn: [cilium,kubeovn,cert-manager]
- name: capi-providers
releaseName: capi-providers
chart: cozy-capi-providers
namespace: cozy-cluster-api
privileged: true
dependsOn: [cilium,kubeovn,capi-operator]

View File

@@ -0,0 +1,69 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
releases:
- name: fluxcd
releaseName: fluxcd
chart: cozy-fluxcd
namespace: cozy-fluxcd
dependsOn: []
- name: cert-manager
releaseName: cert-manager
chart: cozy-cert-manager
namespace: cozy-cert-manager
dependsOn: []
- name: cert-manager-issuers
releaseName: cert-manager-issuers
chart: cozy-cert-manager-issuers
namespace: cozy-cert-manager
dependsOn: [cert-manager]
- name: victoria-metrics-operator
releaseName: victoria-metrics-operator
chart: cozy-victoria-metrics-operator
namespace: cozy-victoria-metrics-operator
dependsOn: [cert-manager]
- name: monitoring
releaseName: monitoring
chart: cozy-monitoring
namespace: cozy-monitoring
privileged: true
dependsOn: [victoria-metrics-operator]
- name: grafana-operator
releaseName: grafana-operator
chart: cozy-grafana-operator
namespace: cozy-grafana-operator
dependsOn: []
- name: mariadb-operator
releaseName: mariadb-operator
chart: cozy-mariadb-operator
namespace: cozy-mariadb-operator
dependsOn: [victoria-metrics-operator]
- name: postgres-operator
releaseName: postgres-operator
chart: cozy-postgres-operator
namespace: cozy-postgres-operator
dependsOn: [cert-manager]
- name: rabbitmq-operator
releaseName: rabbitmq-operator
chart: cozy-rabbitmq-operator
namespace: cozy-rabbitmq-operator
dependsOn: []
- name: redis-operator
releaseName: redis-operator
chart: cozy-redis-operator
namespace: cozy-redis-operator
dependsOn: []
- name: telepresence
releaseName: traffic-manager
chart: cozy-telepresence
namespace: cozy-telepresence
dependsOn: []

View File

@@ -0,0 +1,95 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
releases:
- name: fluxcd
releaseName: fluxcd
chart: cozy-fluxcd
namespace: cozy-fluxcd
dependsOn: []
- name: cert-manager
releaseName: cert-manager
chart: cozy-cert-manager
namespace: cozy-cert-manager
dependsOn: []
- name: cert-manager-issuers
releaseName: cert-manager-issuers
chart: cozy-cert-manager-issuers
namespace: cozy-cert-manager
dependsOn: [cert-manager]
- name: victoria-metrics-operator
releaseName: victoria-metrics-operator
chart: cozy-victoria-metrics-operator
namespace: cozy-victoria-metrics-operator
dependsOn: [cert-manager]
- name: monitoring
releaseName: monitoring
chart: cozy-monitoring
namespace: cozy-monitoring
privileged: true
dependsOn: [victoria-metrics-operator]
- name: grafana-operator
releaseName: grafana-operator
chart: cozy-grafana-operator
namespace: cozy-grafana-operator
dependsOn: []
- name: mariadb-operator
releaseName: mariadb-operator
chart: cozy-mariadb-operator
namespace: cozy-mariadb-operator
dependsOn: [cert-manager,victoria-metrics-operator]
- name: postgres-operator
releaseName: postgres-operator
chart: cozy-postgres-operator
namespace: cozy-postgres-operator
dependsOn: [cert-manager]
- name: rabbitmq-operator
releaseName: rabbitmq-operator
chart: cozy-rabbitmq-operator
namespace: cozy-rabbitmq-operator
dependsOn: []
- name: redis-operator
releaseName: redis-operator
chart: cozy-redis-operator
namespace: cozy-redis-operator
dependsOn: []
- name: piraeus-operator
releaseName: piraeus-operator
chart: cozy-piraeus-operator
namespace: cozy-linstor
dependsOn: [cert-manager]
- name: telepresence
releaseName: traffic-manager
chart: cozy-telepresence
namespace: cozy-telepresence
dependsOn: []
- name: dashboard
releaseName: dashboard
chart: cozy-dashboard
namespace: cozy-dashboard
dependsOn: []
{{- if .Capabilities.APIVersions.Has "source.toolkit.fluxcd.io/v1beta2" }}
{{- with (lookup "source.toolkit.fluxcd.io/v1beta2" "HelmRepository" "cozy-public" "").items }}
values:
kubeapps:
redis:
master:
podAnnotations:
{{- range $index, $repo := . }}
{{- with (($repo.status).artifact).revision }}
repository.cozystack.io/{{ $repo.metadata.name }}: {{ quote . }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -1,7 +1,7 @@
{{/*
Get IP-addresses of master nodes
*/}}
{{- define "master.nodeIPs" -}}
{{- define "cozystack.master-node-ips" -}}
{{- $nodes := lookup "v1" "Node" "" "" -}}
{{- $ips := list -}}
{{- range $node := $nodes.items -}}

View File

@@ -1,38 +1,27 @@
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: cilium
namespace: cozy-cilium
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: cilium
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-cilium
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $bundleName := index $cozyConfig.data "bundle-name" }}
{{- $bundle := tpl (.Files.Get (printf "bundles/%s.yaml" $bundleName)) . | fromYaml }}
{{- $dependencyNamespaces := dict }}
{{- $disabledComponents := splitList "," ((index $cozyConfig.data "bundle-disable") | default "") }}
{{/* collect dependency namespaces from releases */}}
{{- range $x := $bundle.releases }}
{{- $_ := set $dependencyNamespaces $x.name $x.namespace }}
{{- end }}
{{- range $x := $bundle.releases }}
{{- if not (has $x.name $disabledComponents) }}
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
apiVersion: helm.toolkit.fluxcd.io/v2beta2
kind: HelmRelease
metadata:
name: kubeovn
namespace: cozy-kubeovn
name: {{ $x.name }}
namespace: {{ $x.namespace }}
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: kubeovn
releaseName: {{ $x.releaseName | default $x.name }}
install:
remediation:
retries: -1
@@ -41,704 +30,31 @@ spec:
retries: -1
chart:
spec:
chart: cozy-kubeovn
chart: {{ $x.chart }}
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
{{- $values := dict }}
{{- with $x.values }}
{{- $values = merge . $values }}
{{- end }}
{{- with index $cozyConfig.data (printf "values-%s" $x.name) }}
{{- $values = merge (fromYaml .) $values }}
{{- end }}
{{- with $values }}
values:
cozystack:
configHash: {{ index (lookup "v1" "ConfigMap" "cozy-system" "cozystack") "data" | toJson | sha256sum }}
nodesHash: {{ include "master.nodeIPs" . | sha256sum }}
{{- toYaml . | nindent 4}}
{{- end }}
{{- with $x.dependsOn }}
dependsOn:
- name: cilium
namespace: cozy-cilium
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: cozy-fluxcd
namespace: cozy-fluxcd
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: fluxcd
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-fluxcd
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: cert-manager
namespace: cozy-cert-manager
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: cert-manager
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-cert-manager
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: cert-manager-issuers
namespace: cozy-cert-manager
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: cert-manager-issuers
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-cert-manager-issuers
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: cert-manager
namespace: cozy-cert-manager
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: victoria-metrics-operator
namespace: cozy-victoria-metrics-operator
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: victoria-metrics-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-victoria-metrics-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: cert-manager
namespace: cozy-cert-manager
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: monitoring
namespace: cozy-monitoring
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: monitoring
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-monitoring
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: victoria-metrics-operator
namespace: cozy-victoria-metrics-operator
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: kubevirt-operator
namespace: cozy-kubevirt
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: kubevirt-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-kubevirt-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: kubevirt
namespace: cozy-kubevirt
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: kubevirt
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-kubevirt
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: kubevirt-operator
namespace: cozy-kubevirt
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: kubevirt-cdi-operator
namespace: cozy-kubevirt-cdi
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: kubevirt-cdi-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-kubevirt-cdi-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: kubevirt-cdi
namespace: cozy-kubevirt-cdi
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: kubevirt-cdi
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-kubevirt-cdi
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: kubevirt-cdi-operator
namespace: cozy-kubevirt-cdi
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: metallb
namespace: cozy-metallb
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: metallb
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-metallb
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: grafana-operator
namespace: cozy-grafana-operator
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: grafana-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-grafana-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: mariadb-operator
namespace: cozy-mariadb-operator
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: mariadb-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-mariadb-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: cert-manager
namespace: cozy-cert-manager
- name: victoria-metrics-operator
namespace: cozy-victoria-metrics-operator
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: postgres-operator
namespace: cozy-postgres-operator
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: postgres-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-postgres-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: cert-manager
namespace: cozy-cert-manager
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: rabbitmq-operator
namespace: cozy-rabbitmq-operator
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: rabbitmq-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-rabbitmq-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: redis-operator
namespace: cozy-redis-operator
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: redis-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-redis-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: piraeus-operator
namespace: cozy-linstor
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: piraeus-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-piraeus-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: cert-manager
namespace: cozy-cert-manager
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: linstor
namespace: cozy-linstor
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: linstor
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-linstor
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: piraeus-operator
namespace: cozy-linstor
- name: cert-manager
namespace: cozy-cert-manager
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: telepresence
namespace: cozy-telepresence
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: traffic-manager
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-telepresence
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: dashboard
namespace: cozy-dashboard
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: dashboard
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-dashboard
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: kamaji
namespace: cozy-kamaji
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: kamaji
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-kamaji
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: cert-manager
namespace: cozy-cert-manager
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: capi-operator
namespace: cozy-cluster-api
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: capi-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-capi-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: cert-manager
namespace: cozy-cert-manager
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: capi-providers
namespace: cozy-cluster-api
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: capi-providers
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-capi-providers
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: capi-operator
namespace: cozy-cluster-api
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
{{- range $dep := . }}
{{- if not (has $dep $disabledComponents) }}
- name: {{ $dep }}
namespace: {{ index $dependencyNamespaces $dep }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -1,13 +1,29 @@
{{- range $ns := .Values.namespaces }}
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $bundleName := index $cozyConfig.data "bundle-name" }}
{{- $bundle := tpl (.Files.Get (printf "bundles/%s.yaml" $bundleName)) . | fromYaml }}
{{- $namespaces := dict }}
{{/* collect namespaces from releases */}}
{{- range $x := $bundle.releases }}
{{- if not (hasKey $namespaces $x.namespace) }}
{{- $_ := set $namespaces $x.namespace false }}
{{- end }}
{{/* if at least one release requires a privileged namespace, then it should be privileged */}}
{{- if or $x.privileged (index $namespaces $x.namespace) }}
{{- $_ := set $namespaces $x.namespace true }}
{{- end }}
{{- end }}
{{- range $namespace, $privileged := $namespaces }}
---
apiVersion: v1
kind: Namespace
metadata:
annotations:
"helm.sh/resource-policy": keep
{{- if $ns.privileged }}
{{- if $privileged }}
labels:
pod-security.kubernetes.io/enforce: privileged
{{- end }}
name: {{ $ns.name }}
name: {{ $namespace }}
{{- end }}

View File

@@ -1,30 +0,0 @@
namespaces:
- name: cozy-public
- name: cozy-system
privileged: true
- name: cozy-cert-manager
- name: cozy-cilium
privileged: true
- name: cozy-fluxcd
- name: cozy-grafana-operator
- name: cozy-kamaji
- name: cozy-cluster-api
privileged: true # for capk only
- name: cozy-dashboard
- name: cozy-kubeovn
privileged: true
- name: cozy-kubevirt
privileged: true
- name: cozy-kubevirt-cdi
- name: cozy-linstor
privileged: true
- name: cozy-mariadb-operator
- name: cozy-metallb
privileged: true
- name: cozy-monitoring
privileged: true
- name: cozy-postgres-operator
- name: cozy-rabbitmq-operator
- name: cozy-redis-operator
- name: cozy-telepresence
- name: cozy-victoria-metrics-operator

View File

@@ -3,7 +3,7 @@ NAMESPACE=cozy-dashboard
PUSH := 1
LOAD := 0
REPOSITORY := ghcr.io/aenix-io/cozystack
TAG := v0.0.2
TAG := v0.2.0
show:
helm template --dry-run=server -n $(NAMESPACE) $(NAME) .

View File

@@ -1,4 +1,4 @@
{
"containerimage.config.digest": "sha256:f5a26c90226016af3a23d5fdb8b23d36b0aca97b1b2a8a1de2e37cc9d5dfeb04",
"containerimage.digest": "sha256:c71ed8953381d8fba2aebd79d538bfc0a6933910ed626afc850c4dbef4a15182"
"containerimage.config.digest": "sha256:51a28848a801e102b3383e6d980ac2459fa29cfd9cbc381d03c561672e94139d",
"containerimage.digest": "sha256:4b1b4ffc7c797b8fb4ab9561e6fa0a68c00d5b0d945fe47e42ecc6e43e9af0d3"
}

View File

@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/dashboard:v0.0.2
ghcr.io/aenix-io/cozystack/dashboard:v0.1.0

View File

@@ -1,4 +1,4 @@
{
"containerimage.config.digest": "sha256:e87d3b19a59f31b70ae9e67ab9422bf2acf8fe5c6c8c585883be16db39068023",
"containerimage.digest": "sha256:692cd506a2eadf1cf09e94eee60986a379da7a604c4810f89c8f6c5c405c2c73"
"containerimage.config.digest": "sha256:e522ba90c58c3dab629739fe240e42037a50bfc19442d018e957ef54f05aaa77",
"containerimage.digest": "sha256:ea80daaedd7e782bb42641fe25b2c91fc24260b81f8e576637f3d251c9c7d087"
}

View File

@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/kubeapps-apis:v0.0.2
ghcr.io/aenix-io/cozystack/kubeapps-apis:v0.1.0

View File

@@ -14,4 +14,3 @@ update:
rm -rf charts && mkdir -p charts/kube-ovn
curl -sSL https://github.com/kubeovn/kube-ovn/archive/refs/heads/master.tar.gz | \
tar -C charts/kube-ovn -xzvf - --strip 2 kube-ovn-master/charts
patch -p4 < patches/cozyconfig.diff

View File

@@ -0,0 +1,24 @@
apiVersion: v2
name: kube-ovn
description: Helm chart for Kube-OVN
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 1.13.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.13.0"

View File

@@ -0,0 +1,42 @@
# Kube-OVN-helm
Currently supported version: 1.9
Installation:
```bash
$ kubectl label node -lbeta.kubernetes.io/os=linux kubernetes.io/os=linux --overwrite
$ kubectl label node -lnode-role.kubernetes.io/control-plane kube-ovn/role=master --overwrite
$ kubectl label node -lovn.kubernetes.io/ovs_dp_type!=userspace ovn.kubernetes.io/ovs_dp_type=kernel --overwrite
# standard install
$ helm install --debug kubeovn ./charts/kube-ovn --set MASTER_NODES=${Node0}
# high availability install
$ helm install --debug kubeovn ./charts/kube-ovn --set MASTER_NODES=${Node0},${Node1},${Node2}
# upgrade to this version
$ helm upgrade --debug kubeovn ./charts/kube-ovn --set MASTER_NODES=${Node0},${Node1},${Node2}
```
If `MASTER_NODES` is left unspecified, Helm will use the internal IPs of the nodes carrying the `kube-ovn/role=master` label.
### Talos Linux
To install Kube-OVN on Talos Linux, declare the openvswitch kernel module in the machine config:
```
machine:
kernel:
modules:
- name: openvswitch
```
and use the following options when installing this Helm chart:
```
--set cni_conf.MOUNT_LOCAL_BIN_DIR=false
--set OPENVSWITCH_DIR=/var/lib/openvswitch
--set OVN_DIR=/var/lib/ovn
--set DISABLE_MODULES_MANAGEMENT=true
```
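
Put together, a complete install on Talos might look like the following (a sketch only; the `kubeovn` release name and the `${Node0}` placeholder follow the standard install example above):

```bash
helm install --debug kubeovn ./charts/kube-ovn \
  --set MASTER_NODES=${Node0} \
  --set cni_conf.MOUNT_LOCAL_BIN_DIR=false \
  --set OPENVSWITCH_DIR=/var/lib/openvswitch \
  --set OVN_DIR=/var/lib/ovn \
  --set DISABLE_MODULES_MANAGEMENT=true
```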

View File

@@ -0,0 +1,54 @@
{{/*
Get IP-addresses of master nodes
*/}}
{{- define "kubeovn.nodeIPs" -}}
{{- $nodes := lookup "v1" "Node" "" "" -}}
{{- $ips := list -}}
{{- range $node := $nodes.items -}}
{{- $label := splitList "=" $.Values.MASTER_NODES_LABEL }}
{{- $key := index $label 0 }}
{{- $val := "" }}
{{- if eq (len $label) 2 }}
{{- $val = index $label 1 }}
{{- end }}
{{- if eq (index $node.metadata.labels $key) $val -}}
{{- range $address := $node.status.addresses -}}
{{- if eq $address.type "InternalIP" -}}
{{- $ips = append $ips $address.address -}}
{{- break -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{ join "," $ips }}
{{- end -}}
{{/*
Number of master nodes
*/}}
{{- define "kubeovn.nodeCount" -}}
{{- len (split "," (.Values.MASTER_NODES | default (include "kubeovn.nodeIPs" .))) }}
{{- end -}}
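{{/*
Pick the update strategy for the ovs-ovn DaemonSet: fresh installs, DaemonSets
already using RollingUpdate, and DaemonSets running a kube-ovn image >= 1.12.0
get RollingUpdate; older or unparseable image versions fall back to OnDelete.
*/}}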
{{- define "kubeovn.ovs-ovn.updateStrategy" -}}
{{- $ds := lookup "apps/v1" "DaemonSet" $.Values.namespace "ovs-ovn" -}}
{{- if $ds -}}
{{- if eq $ds.spec.updateStrategy.type "RollingUpdate" -}}
RollingUpdate
{{- else -}}
{{- $imageVersion := (index $ds.spec.template.spec.containers 0).image | splitList ":" | last | trimPrefix "v" -}}
{{- $versionRegex := `^(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)` -}}
{{- if regexMatch $versionRegex $imageVersion -}}
{{- if regexFind $versionRegex $imageVersion | semverCompare ">= 1.12.0" -}}
RollingUpdate
{{- else -}}
OnDelete
{{- end -}}
{{- else -}}
OnDelete
{{- end -}}
{{- end -}}
{{- else -}}
RollingUpdate
{{- end -}}
{{- end -}}

View File

@@ -0,0 +1,161 @@
kind: Deployment
apiVersion: apps/v1
metadata:
name: ovn-central
namespace: {{ .Values.namespace }}
annotations:
kubernetes.io/description: |
OVN components: northd, nb and sb.
spec:
replicas: {{ include "kubeovn.nodeCount" . }}
strategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
type: RollingUpdate
selector:
matchLabels:
app: ovn-central
template:
metadata:
labels:
app: ovn-central
component: network
type: infra
spec:
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
- key: CriticalAddonsOnly
operator: Exists
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: ovn-central
topologyKey: kubernetes.io/hostname
priorityClassName: system-cluster-critical
serviceAccountName: ovn-ovs
hostNetwork: true
containers:
- name: ovn-central
image: {{ .Values.global.registry.address }}/{{ .Values.global.images.kubeovn.repository }}:{{ .Values.global.images.kubeovn.tag }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- /kube-ovn/start-db.sh
securityContext:
capabilities:
add: ["SYS_NICE"]
env:
- name: ENABLE_SSL
value: "{{ .Values.networking.ENABLE_SSL }}"
- name: NODE_IPS
value: "{{ .Values.MASTER_NODES | default (include "kubeovn.nodeIPs" .) }}"
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IPS
valueFrom:
fieldRef:
fieldPath: status.podIPs
- name: ENABLE_BIND_LOCAL_IP
value: "{{- .Values.func.ENABLE_BIND_LOCAL_IP }}"
- name: PROBE_INTERVAL
value: "{{ .Values.networking.PROBE_INTERVAL }}"
- name: OVN_NORTHD_PROBE_INTERVAL
value: "{{ .Values.networking.OVN_NORTHD_PROBE_INTERVAL}}"
- name: OVN_LEADER_PROBE_INTERVAL
value: "{{ .Values.networking.OVN_LEADER_PROBE_INTERVAL }}"
- name: OVN_NORTHD_N_THREADS
value: "{{ .Values.networking.OVN_NORTHD_N_THREADS }}"
- name: ENABLE_COMPACT
value: "{{ .Values.networking.ENABLE_COMPACT }}"
{{- if include "kubeovn.ovs-ovn.updateStrategy" . | eq "OnDelete" }}
- name: OVN_VERSION_COMPATIBILITY
value: "21.06"
{{- end }}
resources:
requests:
cpu: {{ index .Values "ovn-central" "requests" "cpu" }}
memory: {{ index .Values "ovn-central" "requests" "memory" }}
limits:
cpu: {{ index .Values "ovn-central" "limits" "cpu" }}
memory: {{ index .Values "ovn-central" "limits" "memory" }}
volumeMounts:
- mountPath: /var/run/openvswitch
name: host-run-ovs
- mountPath: /var/run/ovn
name: host-run-ovn
- mountPath: /etc/openvswitch
name: host-config-openvswitch
- mountPath: /etc/ovn
name: host-config-ovn
- mountPath: /var/log/openvswitch
name: host-log-ovs
- mountPath: /var/log/ovn
name: host-log-ovn
- mountPath: /etc/localtime
name: localtime
readOnly: true
- mountPath: /var/run/tls
name: kube-ovn-tls
readinessProbe:
exec:
command:
- bash
- /kube-ovn/ovn-healthcheck.sh
periodSeconds: 15
timeoutSeconds: 45
livenessProbe:
exec:
command:
- bash
- /kube-ovn/ovn-healthcheck.sh
initialDelaySeconds: 30
periodSeconds: 15
failureThreshold: 5
timeoutSeconds: 45
nodeSelector:
kubernetes.io/os: "linux"
{{- with splitList "=" .Values.MASTER_NODES_LABEL }}
{{ index . 0 }}: "{{ if eq (len .) 2 }}{{ index . 1 }}{{ end }}"
{{- end }}
volumes:
- name: host-run-ovs
hostPath:
path: /run/openvswitch
- name: host-run-ovn
hostPath:
path: /run/ovn
- name: host-config-openvswitch
hostPath:
path: {{ .Values.OPENVSWITCH_DIR }}
- name: host-config-ovn
hostPath:
path: {{ .Values.OVN_DIR }}
- name: host-log-ovs
hostPath:
path: {{ .Values.log_conf.LOG_DIR }}/openvswitch
- name: host-log-ovn
hostPath:
path: {{ .Values.log_conf.LOG_DIR }}/ovn
- name: localtime
hostPath:
path: /etc/localtime
- name: kube-ovn-tls
secret:
optional: true
secretName: kube-ovn-tls

View File

@@ -0,0 +1,190 @@
kind: Deployment
apiVersion: apps/v1
metadata:
name: kube-ovn-controller
namespace: {{ .Values.namespace }}
annotations:
kubernetes.io/description: |
kube-ovn controller
spec:
replicas: {{ include "kubeovn.nodeCount" . }}
selector:
matchLabels:
app: kube-ovn-controller
strategy:
rollingUpdate:
maxSurge: 0%
maxUnavailable: 100%
type: RollingUpdate
template:
metadata:
labels:
app: kube-ovn-controller
component: network
type: infra
spec:
tolerations:
- effect: NoSchedule
operator: Exists
- key: CriticalAddonsOnly
operator: Exists
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- preference:
matchExpressions:
- key: "ovn.kubernetes.io/ic-gw"
operator: NotIn
values:
- "true"
weight: 100
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: kube-ovn-controller
topologyKey: kubernetes.io/hostname
priorityClassName: system-cluster-critical
serviceAccountName: ovn
hostNetwork: true
containers:
- name: kube-ovn-controller
image: {{ .Values.global.registry.address }}/{{ .Values.global.images.kubeovn.repository }}:{{ .Values.global.images.kubeovn.tag }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
args:
- /kube-ovn/start-controller.sh
- --default-ls={{ .Values.networking.DEFAULT_SUBNET }}
- --default-cidr=
{{- if eq .Values.networking.NET_STACK "dual_stack" -}}
{{ .Values.dual_stack.POD_CIDR }}
{{- else if eq .Values.networking.NET_STACK "ipv4" -}}
{{ .Values.ipv4.POD_CIDR }}
{{- else if eq .Values.networking.NET_STACK "ipv6" -}}
{{ .Values.ipv6.POD_CIDR }}
{{- end }}
- --default-gateway=
{{- if eq .Values.networking.NET_STACK "dual_stack" -}}
{{ .Values.dual_stack.POD_GATEWAY }}
{{- else if eq .Values.networking.NET_STACK "ipv4" -}}
{{ .Values.ipv4.POD_GATEWAY }}
{{- else if eq .Values.networking.NET_STACK "ipv6" -}}
{{ .Values.ipv6.POD_GATEWAY }}
{{- end }}
- --default-gateway-check={{- .Values.func.CHECK_GATEWAY }}
- --default-logical-gateway={{- .Values.func.LOGICAL_GATEWAY }}
- --default-u2o-interconnection={{- .Values.func.U2O_INTERCONNECTION }}
- --default-exclude-ips={{- .Values.networking.EXCLUDE_IPS }}
- --cluster-router={{ .Values.networking.DEFAULT_VPC }}
- --node-switch={{ .Values.networking.NODE_SUBNET }}
- --node-switch-cidr=
{{- if eq .Values.networking.NET_STACK "dual_stack" -}}
{{ .Values.dual_stack.JOIN_CIDR }}
{{- else if eq .Values.networking.NET_STACK "ipv4" -}}
{{ .Values.ipv4.JOIN_CIDR }}
{{- else if eq .Values.networking.NET_STACK "ipv6" -}}
{{ .Values.ipv6.JOIN_CIDR }}
{{- end }}
- --service-cluster-ip-range=
{{- if eq .Values.networking.NET_STACK "dual_stack" -}}
{{ .Values.dual_stack.SVC_CIDR }}
{{- else if eq .Values.networking.NET_STACK "ipv4" -}}
{{ .Values.ipv4.SVC_CIDR }}
{{- else if eq .Values.networking.NET_STACK "ipv6" -}}
{{ .Values.ipv6.SVC_CIDR }}
{{- end }}
- --network-type={{- .Values.networking.NETWORK_TYPE }}
- --default-provider-name={{ .Values.networking.vlan.PROVIDER_NAME }}
- --default-interface-name={{- .Values.networking.vlan.VLAN_INTERFACE_NAME }}
- --default-exchange-link-name={{- .Values.networking.EXCHANGE_LINK_NAME }}
- --default-vlan-name={{- .Values.networking.vlan.VLAN_NAME }}
- --default-vlan-id={{- .Values.networking.vlan.VLAN_ID }}
- --ls-dnat-mod-dl-dst={{- .Values.func.LS_DNAT_MOD_DL_DST }}
- --ls-ct-skip-dst-lport-ips={{- .Values.func.LS_CT_SKIP_DST_LPORT_IPS }}
- --pod-nic-type={{- .Values.networking.POD_NIC_TYPE }}
- --enable-lb={{- .Values.func.ENABLE_LB }}
- --enable-np={{- .Values.func.ENABLE_NP }}
- --enable-eip-snat={{- .Values.networking.ENABLE_EIP_SNAT }}
- --enable-external-vpc={{- .Values.func.ENABLE_EXTERNAL_VPC }}
- --enable-ecmp={{- .Values.networking.ENABLE_ECMP }}
- --logtostderr=false
- --alsologtostderr=true
- --gc-interval={{- .Values.performance.GC_INTERVAL }}
- --inspect-interval={{- .Values.performance.INSPECT_INTERVAL }}
- --log_file=/var/log/kube-ovn/kube-ovn-controller.log
- --log_file_max_size=0
- --enable-lb-svc={{- .Values.func.ENABLE_LB_SVC }}
- --keep-vm-ip={{- .Values.func.ENABLE_KEEP_VM_IP }}
- --enable-metrics={{- .Values.networking.ENABLE_METRICS }}
- --node-local-dns-ip={{- .Values.networking.NODE_LOCAL_DNS_IP }}
env:
- name: ENABLE_SSL
value: "{{ .Values.networking.ENABLE_SSL }}"
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: KUBE_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: OVN_DB_IPS
value: "{{ .Values.MASTER_NODES | default (include "kubeovn.nodeIPs" .) }}"
- name: POD_IPS
valueFrom:
fieldRef:
fieldPath: status.podIPs
- name: ENABLE_BIND_LOCAL_IP
value: "{{- .Values.func.ENABLE_BIND_LOCAL_IP }}"
volumeMounts:
- mountPath: /etc/localtime
name: localtime
readOnly: true
- mountPath: /var/log/kube-ovn
name: kube-ovn-log
# ovn-ic log directory
- mountPath: /var/log/ovn
name: ovn-log
- mountPath: /var/run/tls
name: kube-ovn-tls
readinessProbe:
exec:
command:
- /kube-ovn/kube-ovn-controller-healthcheck
periodSeconds: 3
timeoutSeconds: 45
livenessProbe:
exec:
command:
- /kube-ovn/kube-ovn-controller-healthcheck
initialDelaySeconds: 300
periodSeconds: 7
failureThreshold: 5
timeoutSeconds: 45
resources:
requests:
cpu: {{ index .Values "kube-ovn-controller" "requests" "cpu" }}
memory: {{ index .Values "kube-ovn-controller" "requests" "memory" }}
limits:
cpu: {{ index .Values "kube-ovn-controller" "limits" "cpu" }}
memory: {{ index .Values "kube-ovn-controller" "limits" "memory" }}
nodeSelector:
kubernetes.io/os: "linux"
volumes:
- name: localtime
hostPath:
path: /etc/localtime
- name: kube-ovn-log
hostPath:
path: {{ .Values.log_conf.LOG_DIR }}/kube-ovn
- name: ovn-log
hostPath:
path: {{ .Values.log_conf.LOG_DIR }}/ovn
- name: kube-ovn-tls
secret:
optional: true
secretName: kube-ovn-tls

View File

@@ -0,0 +1,16 @@
kind: Service
apiVersion: v1
metadata:
name: kube-ovn-controller
namespace: {{ .Values.namespace }}
labels:
app: kube-ovn-controller
spec:
selector:
app: kube-ovn-controller
ports:
- port: 10660
name: metrics
{{- if eq .Values.networking.NET_STACK "dual_stack" }}
ipFamilyPolicy: PreferDualStack
{{- end }}

View File

@@ -0,0 +1,109 @@
{{- if .Values.func.ENABLE_IC }}
kind: Deployment
apiVersion: apps/v1
metadata:
name: ovn-ic-controller
namespace: kube-system
annotations:
kubernetes.io/description: |
OVN IC Client
spec:
replicas: 1
strategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
type: RollingUpdate
selector:
matchLabels:
app: ovn-ic-controller
template:
metadata:
labels:
app: ovn-ic-controller
component: network
type: infra
spec:
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
- key: CriticalAddonsOnly
operator: Exists
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: ovn-ic-controller
topologyKey: kubernetes.io/hostname
priorityClassName: system-cluster-critical
serviceAccountName: ovn
hostNetwork: true
containers:
- name: ovn-ic-controller
image: {{ .Values.global.registry.address }}/{{ .Values.global.images.kubeovn.repository }}:{{ .Values.global.images.kubeovn.tag }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
command: ["/kube-ovn/start-ic-controller.sh"]
args:
- --log_file=/var/log/kube-ovn/kube-ovn-ic-controller.log
- --log_file_max_size=0
- --logtostderr=false
- --alsologtostderr=true
securityContext:
capabilities:
add: ["SYS_NICE"]
env:
- name: ENABLE_SSL
value: "{{ .Values.networking.ENABLE_SSL }}"
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: OVN_DB_IPS
value: "{{ .Values.MASTER_NODES }}"
resources:
requests:
cpu: 300m
memory: 200Mi
limits:
cpu: 3
memory: 1Gi
volumeMounts:
- mountPath: /var/run/ovn
name: host-run-ovn
- mountPath: /etc/ovn
name: host-config-ovn
- mountPath: /var/log/ovn
name: host-log-ovn
- mountPath: /etc/localtime
name: localtime
- mountPath: /var/run/tls
name: kube-ovn-tls
- mountPath: /var/log/kube-ovn
name: kube-ovn-log
nodeSelector:
kubernetes.io/os: "linux"
kube-ovn/role: "master"
volumes:
- name: host-run-ovn
hostPath:
path: /run/ovn
- name: host-config-ovn
hostPath:
path: /etc/origin/ovn
- name: host-log-ovn
hostPath:
path: /var/log/ovn
- name: localtime
hostPath:
path: /etc/localtime
- name: kube-ovn-log
hostPath:
path: /var/log/kube-ovn
- name: kube-ovn-tls
secret:
optional: true
secretName: kube-ovn-tls
{{- end }}

File diff suppressed because it is too large

View File

@@ -0,0 +1,139 @@
kind: Deployment
apiVersion: apps/v1
metadata:
name: kube-ovn-monitor
namespace: {{ .Values.namespace }}
annotations:
kubernetes.io/description: |
Metrics for OVN components: northd, nb and sb.
spec:
replicas: 1
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
selector:
matchLabels:
app: kube-ovn-monitor
template:
metadata:
labels:
app: kube-ovn-monitor
component: network
type: infra
spec:
tolerations:
- effect: NoSchedule
operator: Exists
- key: CriticalAddonsOnly
operator: Exists
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: kube-ovn-monitor
topologyKey: kubernetes.io/hostname
priorityClassName: system-cluster-critical
serviceAccountName: kube-ovn-app
hostNetwork: true
containers:
- name: kube-ovn-monitor
image: {{ .Values.global.registry.address }}/{{ .Values.global.images.kubeovn.repository }}:{{ .Values.global.images.kubeovn.tag }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
command: ["/kube-ovn/start-ovn-monitor.sh"]
args:
- --log_file=/var/log/kube-ovn/kube-ovn-monitor.log
- --logtostderr=false
- --alsologtostderr=true
- --log_file_max_size=0
securityContext:
runAsUser: 0
privileged: false
env:
- name: ENABLE_SSL
value: "{{ .Values.networking.ENABLE_SSL }}"
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_IPS
valueFrom:
fieldRef:
fieldPath: status.podIPs
- name: ENABLE_BIND_LOCAL_IP
value: "{{- .Values.func.ENABLE_BIND_LOCAL_IP }}"
resources:
requests:
cpu: {{ index .Values "kube-ovn-monitor" "requests" "cpu" }}
memory: {{ index .Values "kube-ovn-monitor" "requests" "memory" }}
limits:
cpu: {{ index .Values "kube-ovn-monitor" "limits" "cpu" }}
memory: {{ index .Values "kube-ovn-monitor" "limits" "memory" }}
volumeMounts:
- mountPath: /var/run/openvswitch
name: host-run-ovs
- mountPath: /var/run/ovn
name: host-run-ovn
- mountPath: /etc/openvswitch
name: host-config-openvswitch
- mountPath: /etc/ovn
name: host-config-ovn
- mountPath: /var/log/ovn
name: host-log-ovn
readOnly: true
- mountPath: /etc/localtime
name: localtime
readOnly: true
- mountPath: /var/run/tls
name: kube-ovn-tls
- mountPath: /var/log/kube-ovn
name: kube-ovn-log
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 7
successThreshold: 1
tcpSocket:
port: 10661
timeoutSeconds: 3
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 7
successThreshold: 1
tcpSocket:
port: 10661
timeoutSeconds: 3
nodeSelector:
kubernetes.io/os: "linux"
{{- with splitList "=" .Values.MASTER_NODES_LABEL }}
{{ index . 0 }}: "{{ if eq (len .) 2 }}{{ index . 1 }}{{ end }}"
{{- end }}
volumes:
- name: host-run-ovs
hostPath:
path: /run/openvswitch
- name: host-run-ovn
hostPath:
path: /run/ovn
- name: host-config-openvswitch
hostPath:
path: {{ .Values.OPENVSWITCH_DIR }}
- name: host-config-ovn
hostPath:
path: {{ .Values.OVN_DIR }}
- name: host-log-ovn
hostPath:
path: {{ .Values.log_conf.LOG_DIR }}/ovn
- name: localtime
hostPath:
path: /etc/localtime
- name: kube-ovn-tls
secret:
optional: true
secretName: kube-ovn-tls
- name: kube-ovn-log
hostPath:
path: {{ .Values.log_conf.LOG_DIR }}/kube-ovn

View File

@@ -0,0 +1,18 @@
kind: Service
apiVersion: v1
metadata:
name: kube-ovn-monitor
namespace: {{ .Values.namespace }}
labels:
app: kube-ovn-monitor
spec:
ports:
- name: metrics
port: 10661
type: ClusterIP
selector:
app: kube-ovn-monitor
sessionAffinity: None
{{- if eq .Values.networking.NET_STACK "dual_stack" }}
ipFamilyPolicy: PreferDualStack
{{- end }}

View File

@@ -0,0 +1,19 @@
kind: Service
apiVersion: v1
metadata:
name: ovn-nb
namespace: {{ .Values.namespace }}
spec:
ports:
- name: ovn-nb
protocol: TCP
port: 6641
targetPort: 6641
type: ClusterIP
{{- if eq .Values.networking.NET_STACK "dual_stack" }}
ipFamilyPolicy: PreferDualStack
{{- end }}
selector:
app: ovn-central
ovn-nb-leader: "true"
sessionAffinity: None

View File

@@ -0,0 +1,19 @@
kind: Service
apiVersion: v1
metadata:
name: ovn-northd
namespace: {{ .Values.namespace }}
spec:
ports:
- name: ovn-northd
protocol: TCP
port: 6643
targetPort: 6643
type: ClusterIP
{{- if eq .Values.networking.NET_STACK "dual_stack" }}
ipFamilyPolicy: PreferDualStack
{{- end }}
selector:
app: ovn-central
ovn-northd-leader: "true"
sessionAffinity: None

View File

@@ -0,0 +1,256 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.k8s.io/system-only: "true"
name: system:ovn
rules:
- apiGroups:
- "kubeovn.io"
resources:
- vpcs
- vpcs/status
- vpc-nat-gateways
- vpc-nat-gateways/status
- subnets
- subnets/status
- ippools
- ippools/status
- ips
- vips
- vips/status
- vlans
- vlans/status
- provider-networks
- provider-networks/status
- security-groups
- security-groups/status
- iptables-eips
- iptables-fip-rules
- iptables-dnat-rules
- iptables-snat-rules
- iptables-eips/status
- iptables-fip-rules/status
- iptables-dnat-rules/status
- iptables-snat-rules/status
- ovn-eips
- ovn-fips
- ovn-snat-rules
- ovn-eips/status
- ovn-fips/status
- ovn-snat-rules/status
- ovn-dnat-rules
- ovn-dnat-rules/status
- switch-lb-rules
- switch-lb-rules/status
- vpc-dnses
- vpc-dnses/status
- qos-policies
- qos-policies/status
verbs:
- "*"
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- patch
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- pods/exec
verbs:
- create
- apiGroups:
- "k8s.cni.cncf.io"
resources:
- network-attachment-definitions
verbs:
- get
- apiGroups:
- ""
- networking.k8s.io
resources:
- networkpolicies
- configmaps
verbs:
- get
- list
- watch
- apiGroups:
- apps
resources:
- daemonsets
verbs:
- get
- apiGroups:
- ""
resources:
- services
- services/status
verbs:
- get
- list
- update
- create
- delete
- watch
- apiGroups:
- ""
resources:
- endpoints
verbs:
- create
- update
- get
- list
- watch
- apiGroups:
- apps
resources:
- statefulsets
- deployments
- deployments/scale
verbs:
- get
- list
- create
- delete
- update
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- update
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- "*"
- apiGroups:
- "kubevirt.io"
resources:
- virtualmachines
- virtualmachineinstances
verbs:
- get
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.k8s.io/system-only: "true"
name: system:ovn-ovs
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- patch
- apiGroups:
- ""
resources:
- services
- endpoints
verbs:
- get
- apiGroups:
- apps
resources:
- controllerrevisions
verbs:
- get
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.k8s.io/system-only: "true"
name: system:kube-ovn-cni
rules:
- apiGroups:
- "kubeovn.io"
- ""
resources:
- subnets
- provider-networks
- pods
verbs:
- get
- list
- watch
- apiGroups:
- ""
- "kubeovn.io"
resources:
- ovn-eips
- ovn-eips/status
- nodes
verbs:
- get
- list
- patch
- watch
- apiGroups:
- "kubeovn.io"
resources:
- ips
verbs:
- get
- update
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.k8s.io/system-only: "true"
name: system:kube-ovn-app
rules:
- apiGroups:
- ""
resources:
- pods
- nodes
verbs:
- get
- list
- apiGroups:
- apps
resources:
- daemonsets
verbs:
- get

View File

@@ -0,0 +1,54 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: ovn
roleRef:
name: system:ovn
kind: ClusterRole
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: ovn
namespace: {{ .Values.namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: ovn-ovs
roleRef:
name: system:ovn-ovs
kind: ClusterRole
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: ovn-ovs
namespace: {{ .Values.namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kube-ovn-cni
roleRef:
name: system:kube-ovn-cni
kind: ClusterRole
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: kube-ovn-cni
namespace: {{ .Values.namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kube-ovn-app
roleRef:
name: system:kube-ovn-app
kind: ClusterRole
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: kube-ovn-app
namespace: {{ .Values.namespace }}

View File

@@ -0,0 +1,164 @@
{{- if .Values.HYBRID_DPDK }}
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: ovs-ovn-dpdk
namespace: {{ .Values.namespace }}
annotations:
kubernetes.io/description: |
This daemon set launches the openvswitch daemon.
spec:
selector:
matchLabels:
app: ovs-dpdk
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
template:
metadata:
labels:
app: ovs-dpdk
component: network
type: infra
spec:
tolerations:
- operator: Exists
priorityClassName: system-node-critical
serviceAccountName: ovn-ovs
hostNetwork: true
hostPID: true
containers:
- name: openvswitch
image: {{ .Values.global.registry.address }}/{{ .Values.global.images.kubeovn.repository }}:{{ .Values.global.images.kubeovn.tag }}-dpdk
imagePullPolicy: {{ .Values.image.pullPolicy }}
command: ["/kube-ovn/start-ovs-dpdk-v2.sh"]
securityContext:
runAsUser: 0
privileged: true
env:
- name: ENABLE_SSL
value: "{{ .Values.networking.ENABLE_SSL }}"
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: HW_OFFLOAD
value: "{{- .Values.func.HW_OFFLOAD }}"
- name: TUNNEL_TYPE
value: "{{- .Values.networking.TUNNEL_TYPE }}"
- name: DPDK_TUNNEL_IFACE
value: "{{- .Values.networking.DPDK_TUNNEL_IFACE }}"
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: OVN_DB_IPS
value: "{{ .Values.MASTER_NODES | default (include "kubeovn.nodeIPs" .) }}"
- name: OVN_REMOTE_PROBE_INTERVAL
value: "{{ .Values.networking.OVN_REMOTE_PROBE_INTERVAL }}"
- name: OVN_REMOTE_OPENFLOW_INTERVAL
value: "{{ .Values.networking.OVN_REMOTE_OPENFLOW_INTERVAL }}"
volumeMounts:
- mountPath: /opt/ovs-config
name: host-config-ovs
- name: shareddir
mountPath: {{ .Values.kubelet_conf.KUBELET_DIR }}/pods
- name: hugepage
mountPath: /dev/hugepages
- mountPath: /lib/modules
name: host-modules
readOnly: true
- mountPath: /var/run/openvswitch
name: host-run-ovs
mountPropagation: HostToContainer
- mountPath: /var/run/ovn
name: host-run-ovn
- mountPath: /sys
name: host-sys
- mountPath: /etc/openvswitch
name: host-config-openvswitch
- mountPath: /etc/ovn
name: host-config-ovn
- mountPath: /var/log/openvswitch
name: host-log-ovs
- mountPath: /var/log/ovn
name: host-log-ovn
- mountPath: /etc/localtime
name: localtime
readOnly: true
- mountPath: /var/run/tls
name: kube-ovn-tls
readinessProbe:
exec:
command:
- bash
- -c
- LOG_ROTATE=true /kube-ovn/ovs-healthcheck.sh
periodSeconds: 5
timeoutSeconds: 45
livenessProbe:
exec:
command:
- bash
- /kube-ovn/ovs-healthcheck.sh
initialDelaySeconds: 60
periodSeconds: 5
failureThreshold: 5
timeoutSeconds: 45
resources:
requests:
cpu: {{ index .Values "ovs-ovn" "requests" "cpu" }}
memory: {{ index .Values "ovs-ovn" "requests" "memory" }}
limits:
cpu: {{ index .Values "ovs-ovn" "limits" "cpu" }}
{{.Values.HUGEPAGE_SIZE_TYPE}}: {{.Values.HUGEPAGES}}
memory: {{ index .Values "ovs-ovn" "limits" "memory" }}
nodeSelector:
kubernetes.io/os: "linux"
ovn.kubernetes.io/ovs_dp_type: "userspace"
volumes:
- name: host-config-ovs
hostPath:
path: /opt/ovs-config
type: DirectoryOrCreate
- name: shareddir
hostPath:
path: {{ .Values.kubelet_conf.KUBELET_DIR }}/pods
type: ''
- name: hugepage
emptyDir:
medium: HugePages
- name: host-modules
hostPath:
path: /lib/modules
- name: host-run-ovs
hostPath:
path: /run/openvswitch
- name: host-run-ovn
hostPath:
path: /run/ovn
- name: host-sys
hostPath:
path: /sys
- name: host-config-openvswitch
hostPath:
path: {{ .Values.OPENVSWITCH_DIR }}
- name: host-config-ovn
hostPath:
path: {{ .Values.OVN_DIR }}
- name: host-log-ovs
hostPath:
path: {{ .Values.log_conf.LOG_DIR }}/openvswitch
- name: host-log-ovn
hostPath:
path: {{ .Values.log_conf.LOG_DIR }}/ovn
- name: localtime
hostPath:
path: /etc/localtime
- name: kube-ovn-tls
secret:
optional: true
secretName: kube-ovn-tls
{{- end }}

View File

@@ -0,0 +1,34 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: ovn
namespace: {{ .Values.namespace }}
{{- if .Values.global.registry.imagePullSecrets }}
imagePullSecrets:
{{- range $index, $secret := .Values.global.registry.imagePullSecrets }}
{{- if $secret }}
- name: {{ $secret | quote}}
{{- end }}
{{- end }}
{{- end }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: ovn-ovs
namespace: {{ .Values.namespace }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-ovn-cni
namespace: {{ .Values.namespace }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-ovn-app
namespace: {{ .Values.namespace }}

View File

@@ -0,0 +1,23 @@
{{- if .Values.networking.ENABLE_SSL }}
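{{/*
When SSL is enabled, reuse the certificate material from an existing kube-ovn-tls
Secret if one is present, so upgrades do not regenerate certificates; otherwise
generate a self-signed CA and a server certificate valid for 3650 days.
*/}}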
{{- $cn := "ovn" -}}
{{- $ca := genCA "ovn-ca" 3650 -}}
---
apiVersion: v1
kind: Secret
metadata:
name: kube-ovn-tls
namespace: {{ .Values.namespace }}
data:
{{- $existingSecret := lookup "v1" "Secret" .Values.namespace "kube-ovn-tls" }}
{{- if $existingSecret }}
cacert: {{ index $existingSecret.data "cacert" }}
cert: {{ index $existingSecret.data "cert" }}
key: {{ index $existingSecret.data "key" }}
{{- else }}
{{- with genSignedCert $cn nil nil 3650 $ca }}
cacert: {{ b64enc $ca.Cert }}
cert: {{ b64enc .Cert }}
key: {{ b64enc .Key }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,206 @@
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: kube-ovn-cni
namespace: {{ .Values.namespace }}
annotations:
kubernetes.io/description: |
This daemon set launches the kube-ovn cni daemon.
spec:
selector:
matchLabels:
app: kube-ovn-cni
template:
metadata:
labels:
app: kube-ovn-cni
component: network
type: infra
spec:
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
- key: CriticalAddonsOnly
operator: Exists
priorityClassName: system-node-critical
serviceAccountName: kube-ovn-cni
hostNetwork: true
hostPID: true
initContainers:
- name: install-cni
image: {{ .Values.global.registry.address }}/{{ .Values.global.images.kubeovn.repository }}:{{ .Values.global.images.kubeovn.tag }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
command: ["/kube-ovn/install-cni.sh"]
securityContext:
runAsUser: 0
privileged: true
volumeMounts:
- mountPath: /opt/cni/bin
name: cni-bin
{{- if .Values.cni_conf.MOUNT_LOCAL_BIN_DIR }}
- mountPath: /usr/local/bin
name: local-bin
{{- end }}
containers:
- name: cni-server
image: {{ .Values.global.registry.address }}/{{ .Values.global.images.kubeovn.repository }}:{{ .Values.global.images.kubeovn.tag }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- bash
- /kube-ovn/start-cniserver.sh
args:
- --enable-mirror={{- .Values.debug.ENABLE_MIRROR }}
- --mirror-iface={{- .Values.debug.MIRROR_IFACE }}
- --node-switch={{ .Values.networking.NODE_SUBNET }}
- --encap-checksum=true
- --service-cluster-ip-range=
{{- if eq .Values.networking.NET_STACK "dual_stack" -}}
{{ .Values.dual_stack.SVC_CIDR }}
{{- else if eq .Values.networking.NET_STACK "ipv4" -}}
{{ .Values.ipv4.SVC_CIDR }}
{{- else if eq .Values.networking.NET_STACK "ipv6" -}}
{{ .Values.ipv6.SVC_CIDR }}
{{- end }}
{{- if eq .Values.networking.NETWORK_TYPE "vlan" }}
- --iface=
{{- else}}
- --iface={{- .Values.networking.IFACE }}
{{- end }}
- --dpdk-tunnel-iface={{- .Values.networking.DPDK_TUNNEL_IFACE }}
- --network-type={{- .Values.networking.TUNNEL_TYPE }}
- --default-interface-name={{- .Values.networking.vlan.VLAN_INTERFACE_NAME }}
- --cni-conf-dir={{ .Values.cni_conf.CNI_CONF_DIR }}
- --cni-conf-file={{ .Values.cni_conf.CNI_CONF_FILE }}
- --cni-conf-name={{- .Values.cni_conf.CNI_CONFIG_PRIORITY -}}-kube-ovn.conflist
- --logtostderr=false
- --alsologtostderr=true
- --log_file=/var/log/kube-ovn/kube-ovn-cni.log
- --log_file_max_size=0
- --enable-metrics={{- .Values.networking.ENABLE_METRICS }}
- --kubelet-dir={{ .Values.kubelet_conf.KUBELET_DIR }}
- --enable-tproxy={{ .Values.func.ENABLE_TPROXY }}
- --ovs-vsctl-concurrency={{ .Values.performance.OVS_VSCTL_CONCURRENCY }}
securityContext:
runAsUser: 0
privileged: true
env:
- name: ENABLE_SSL
value: "{{ .Values.networking.ENABLE_SSL }}"
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_IPS
valueFrom:
fieldRef:
fieldPath: status.podIPs
- name: ENABLE_BIND_LOCAL_IP
value: "{{- .Values.func.ENABLE_BIND_LOCAL_IP }}"
- name: DBUS_SYSTEM_BUS_ADDRESS
value: "unix:path=/host/var/run/dbus/system_bus_socket"
volumeMounts:
- name: host-modules
mountPath: /lib/modules
readOnly: true
- name: shared-dir
mountPath: {{ .Values.kubelet_conf.KUBELET_DIR }}/pods
- mountPath: /etc/openvswitch
name: systemid
readOnly: true
- mountPath: /etc/cni/net.d
name: cni-conf
- mountPath: /run/openvswitch
name: host-run-ovs
mountPropagation: Bidirectional
- mountPath: /run/ovn
name: host-run-ovn
- mountPath: /host/var/run/dbus
name: host-dbus
mountPropagation: HostToContainer
- mountPath: /var/run/netns
name: host-ns
mountPropagation: HostToContainer
- mountPath: /var/log/kube-ovn
name: kube-ovn-log
- mountPath: /var/log/openvswitch
name: host-log-ovs
- mountPath: /var/log/ovn
name: host-log-ovn
- mountPath: /etc/localtime
name: localtime
readOnly: true
readinessProbe:
failureThreshold: 3
periodSeconds: 7
successThreshold: 1
tcpSocket:
port: 10665
timeoutSeconds: 3
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 7
successThreshold: 1
tcpSocket:
port: 10665
timeoutSeconds: 3
resources:
requests:
cpu: {{ index .Values "kube-ovn-cni" "requests" "cpu" }}
memory: {{ index .Values "kube-ovn-cni" "requests" "memory" }}
limits:
cpu: {{ index .Values "kube-ovn-cni" "limits" "cpu" }}
memory: {{ index .Values "kube-ovn-cni" "limits" "memory" }}
nodeSelector:
kubernetes.io/os: "linux"
volumes:
- name: host-modules
hostPath:
path: /lib/modules
- name: shared-dir
hostPath:
path: {{ .Values.kubelet_conf.KUBELET_DIR }}/pods
- name: systemid
hostPath:
path: {{ .Values.OPENVSWITCH_DIR }}
- name: host-run-ovs
hostPath:
path: /run/openvswitch
- name: host-run-ovn
hostPath:
path: /run/ovn
- name: cni-conf
hostPath:
path: {{ .Values.cni_conf.CNI_CONF_DIR }}
- name: cni-bin
hostPath:
path: {{ .Values.cni_conf.CNI_BIN_DIR }}
- name: host-ns
hostPath:
path: /var/run/netns
- name: host-dbus
hostPath:
path: /var/run/dbus
- name: kube-ovn-log
hostPath:
path: {{ .Values.log_conf.LOG_DIR }}/kube-ovn
- name: localtime
hostPath:
path: /etc/localtime
- name: host-log-ovs
hostPath:
path: {{ .Values.log_conf.LOG_DIR }}/openvswitch
- name: host-log-ovn
hostPath:
path: {{ .Values.log_conf.LOG_DIR }}/ovn
{{- if .Values.cni_conf.MOUNT_LOCAL_BIN_DIR }}
- name: local-bin
hostPath:
path: {{ .Values.cni_conf.LOCAL_BIN_DIR }}
{{- end }}

View File

@@ -0,0 +1,16 @@
kind: Service
apiVersion: v1
metadata:
name: kube-ovn-cni
namespace: {{ .Values.namespace }}
labels:
app: kube-ovn-cni
spec:
selector:
app: kube-ovn-cni
ports:
- port: 10665
name: metrics
{{- if eq .Values.networking.NET_STACK "dual_stack" }}
ipFamilyPolicy: PreferDualStack
{{- end }}

View File

@@ -0,0 +1,221 @@
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: ovs-ovn
namespace: {{ .Values.namespace }}
annotations:
kubernetes.io/description: |
This daemon set launches the openvswitch daemon.
chart-version: "{{ .Chart.Name }}-{{ .Chart.Version }}"
spec:
selector:
matchLabels:
app: ovs
updateStrategy:
type: {{ include "kubeovn.ovs-ovn.updateStrategy" . }}
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
template:
metadata:
labels:
app: ovs
component: network
type: infra
annotations:
chart-version: "{{ .Chart.Name }}-{{ .Chart.Version }}"
spec:
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
- key: CriticalAddonsOnly
operator: Exists
priorityClassName: system-node-critical
serviceAccountName: ovn-ovs
hostNetwork: true
hostPID: true
containers:
- name: openvswitch
{{- if .Values.DPDK }}
image: {{ .Values.global.registry.address }}/{{ .Values.global.images.kubeovn.dpdkRepository }}:{{ .Values.DPDK_VERSION }}-{{ .Values.global.images.kubeovn.tag }}
{{- else }}
image: {{ .Values.global.registry.address }}/{{ .Values.global.images.kubeovn.repository }}:{{ .Values.global.images.kubeovn.tag }}
{{- end }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- if .Values.DPDK }}
command: ["/kube-ovn/start-ovs-dpdk.sh"]
{{- else }}
command:
{{- if .Values.DISABLE_MODULES_MANAGEMENT }}
- /bin/sh
- -ec
- |
ln -sf /bin/true /usr/sbin/modprobe
ln -sf /bin/true /usr/sbin/modinfo
ln -sf /bin/true /usr/sbin/rmmod
exec /kube-ovn/start-ovs.sh
{{- else }}
- /kube-ovn/start-ovs.sh
{{- end }}
{{- end }}
securityContext:
runAsUser: 0
privileged: true
env:
- name: ENABLE_SSL
value: "{{ .Values.networking.ENABLE_SSL }}"
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: HW_OFFLOAD
value: "{{- .Values.func.HW_OFFLOAD }}"
- name: TUNNEL_TYPE
value: "{{- .Values.networking.TUNNEL_TYPE }}"
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: OVN_DB_IPS
value: "{{ .Values.MASTER_NODES | default (include "kubeovn.nodeIPs" .) }}"
- name: OVN_REMOTE_PROBE_INTERVAL
value: "{{ .Values.networking.OVN_REMOTE_PROBE_INTERVAL }}"
- name: OVN_REMOTE_OPENFLOW_INTERVAL
value: "{{ .Values.networking.OVN_REMOTE_OPENFLOW_INTERVAL }}"
volumeMounts:
- mountPath: /var/run/netns
name: host-ns
mountPropagation: HostToContainer
- mountPath: /lib/modules
name: host-modules
readOnly: true
- mountPath: /var/run/openvswitch
name: host-run-ovs
- mountPath: /var/run/ovn
name: host-run-ovn
- mountPath: /etc/openvswitch
name: host-config-openvswitch
- mountPath: /etc/ovn
name: host-config-ovn
- mountPath: /var/log/openvswitch
name: host-log-ovs
- mountPath: /var/log/ovn
name: host-log-ovn
- mountPath: /etc/localtime
name: localtime
readOnly: true
- mountPath: /var/run/tls
name: kube-ovn-tls
- mountPath: /var/run/containerd
name: cruntime
readOnly: true
{{- if .Values.DPDK }}
- mountPath: /opt/ovs-config
name: host-config-ovs
- mountPath: /dev/hugepages
name: hugepage
{{- end }}
readinessProbe:
exec:
{{- if .Values.DPDK }}
command:
- bash
- /kube-ovn/ovs-dpdk-healthcheck.sh
{{- else }}
command:
- bash
- -c
- LOG_ROTATE=true /kube-ovn/ovs-healthcheck.sh
{{- end }}
initialDelaySeconds: 10
periodSeconds: 5
timeoutSeconds: 45
livenessProbe:
exec:
{{- if .Values.DPDK }}
command:
- bash
- /kube-ovn/ovs-dpdk-healthcheck.sh
{{- else }}
command:
- bash
- /kube-ovn/ovs-healthcheck.sh
{{- end }}
initialDelaySeconds: 60
periodSeconds: 5
failureThreshold: 5
timeoutSeconds: 45
resources:
requests:
{{- if .Values.DPDK }}
cpu: {{ .Values.DPDK_CPU }}
memory: {{ .Values.DPDK_MEMORY }}
{{- else }}
cpu: {{ index .Values "ovs-ovn" "requests" "cpu" }}
memory: {{ index .Values "ovs-ovn" "requests" "memory" }}
{{- end }}
limits:
{{- if .Values.DPDK }}
cpu: {{ .Values.DPDK_CPU }}
memory: {{ .Values.DPDK_MEMORY }}
hugepages-1Gi: 1Gi
{{- else }}
cpu: {{ index .Values "ovs-ovn" "limits" "cpu" }}
memory: {{ index .Values "ovs-ovn" "limits" "memory" }}
{{- end }}
nodeSelector:
kubernetes.io/os: "linux"
volumes:
- name: host-modules
hostPath:
path: /lib/modules
- name: host-run-ovs
hostPath:
path: /run/openvswitch
- name: host-run-ovn
hostPath:
path: /run/ovn
- name: host-config-openvswitch
hostPath:
path: {{ .Values.OPENVSWITCH_DIR }}
- name: host-config-ovn
hostPath:
path: {{ .Values.OVN_DIR }}
- name: host-log-ovs
hostPath:
path: {{ .Values.log_conf.LOG_DIR }}/openvswitch
- name: host-log-ovn
hostPath:
path: {{ .Values.log_conf.LOG_DIR }}/ovn
- name: localtime
hostPath:
path: /etc/localtime
- name: kube-ovn-tls
secret:
optional: true
secretName: kube-ovn-tls
- name: host-ns
hostPath:
path: /var/run/netns
- hostPath:
path: /var/run/containerd
name: cruntime
{{- if .Values.DPDK }}
- name: host-config-ovs
hostPath:
path: /opt/ovs-config
type: DirectoryOrCreate
- name: hugepage
emptyDir:
medium: HugePages
{{- end }}

View File

@@ -0,0 +1,137 @@
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: kube-ovn-pinger
namespace: {{ .Values.namespace }}
annotations:
kubernetes.io/description: |
This daemon set launches the openvswitch daemon.
spec:
selector:
matchLabels:
app: kube-ovn-pinger
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
app: kube-ovn-pinger
component: network
type: infra
spec:
priorityClassName: system-node-critical
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
- key: CriticalAddonsOnly
operator: Exists
serviceAccountName: kube-ovn-app
hostPID: true
containers:
- name: pinger
image: {{ .Values.global.registry.address }}/{{ .Values.global.images.kubeovn.repository }}:{{ .Values.global.images.kubeovn.tag }}
command:
- /kube-ovn/kube-ovn-pinger
args:
- --external-address=
{{- if eq .Values.networking.NET_STACK "dual_stack" -}}
{{ .Values.dual_stack.PINGER_EXTERNAL_ADDRESS }}
{{- else if eq .Values.networking.NET_STACK "ipv4" -}}
{{ .Values.ipv4.PINGER_EXTERNAL_ADDRESS }}
{{- else if eq .Values.networking.NET_STACK "ipv6" -}}
{{ .Values.ipv6.PINGER_EXTERNAL_ADDRESS }}
{{- end }}
- --external-dns=
{{- if eq .Values.networking.NET_STACK "dual_stack" -}}
{{ .Values.dual_stack.PINGER_EXTERNAL_DOMAIN }}
{{- else if eq .Values.networking.NET_STACK "ipv4" -}}
{{ .Values.ipv4.PINGER_EXTERNAL_DOMAIN }}
{{- else if eq .Values.networking.NET_STACK "ipv6" -}}
{{ .Values.ipv6.PINGER_EXTERNAL_DOMAIN }}
{{- end }}
- --ds-namespace={{ .Values.namespace }}
- --logtostderr=false
- --alsologtostderr=true
- --log_file=/var/log/kube-ovn/kube-ovn-pinger.log
- --log_file_max_size=0
- --enable-metrics={{- .Values.networking.ENABLE_METRICS }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
securityContext:
runAsUser: 0
privileged: false
env:
- name: ENABLE_SSL
value: "{{ .Values.networking.ENABLE_SSL }}"
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: HOST_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- mountPath: /var/run/openvswitch
name: host-run-ovs
- mountPath: /var/run/ovn
name: host-run-ovn
- mountPath: /etc/openvswitch
name: host-config-openvswitch
- mountPath: /var/log/openvswitch
name: host-log-ovs
readOnly: true
- mountPath: /var/log/ovn
name: host-log-ovn
readOnly: true
- mountPath: /var/log/kube-ovn
name: kube-ovn-log
- mountPath: /etc/localtime
name: localtime
readOnly: true
- mountPath: /var/run/tls
name: kube-ovn-tls
resources:
requests:
cpu: {{ index .Values "kube-ovn-pinger" "requests" "cpu" }}
memory: {{ index .Values "kube-ovn-pinger" "requests" "memory" }}
limits:
cpu: {{ index .Values "kube-ovn-pinger" "limits" "cpu" }}
memory: {{ index .Values "kube-ovn-pinger" "limits" "memory" }}
nodeSelector:
kubernetes.io/os: "linux"
volumes:
- name: host-run-ovs
hostPath:
path: /run/openvswitch
- name: host-run-ovn
hostPath:
path: /run/ovn
- name: host-config-openvswitch
hostPath:
path: {{ .Values.OPENVSWITCH_DIR }}
- name: host-log-ovs
hostPath:
path: {{ .Values.log_conf.LOG_DIR }}/openvswitch
- name: kube-ovn-log
hostPath:
path: {{ .Values.log_conf.LOG_DIR }}/kube-ovn
- name: host-log-ovn
hostPath:
path: {{ .Values.log_conf.LOG_DIR }}/ovn
- name: localtime
hostPath:
path: /etc/localtime
- name: kube-ovn-tls
secret:
optional: true
secretName: kube-ovn-tls

View File

@@ -0,0 +1,16 @@
kind: Service
apiVersion: v1
metadata:
name: kube-ovn-pinger
namespace: {{ .Values.namespace }}
labels:
app: kube-ovn-pinger
spec:
selector:
app: kube-ovn-pinger
ports:
- port: 8080
name: metrics
{{- if eq .Values.networking.NET_STACK "dual_stack" }}
ipFamilyPolicy: PreferDualStack
{{- end }}

View File

@@ -0,0 +1,123 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-ovn-pre-delete-hook
namespace: {{ .Values.namespace }}
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": pre-delete
"helm.sh/hook-weight": "1"
"helm.sh/hook-delete-policy": hook-succeeded
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.k8s.io/system-only: "true"
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": pre-delete
"helm.sh/hook-weight": "2"
"helm.sh/hook-delete-policy": hook-succeeded
name: system:kube-ovn-pre-delete-hook
rules:
- apiGroups:
- kubeovn.io
resources:
- subnets
verbs:
- get
- list
- patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kube-ovn-pre-delete-hook
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": pre-delete
"helm.sh/hook-weight": "3"
"helm.sh/hook-delete-policy": hook-succeeded
roleRef:
name: system:kube-ovn-pre-delete-hook
kind: ClusterRole
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: kube-ovn-pre-delete-hook
namespace: {{ .Values.namespace }}
---
apiVersion: batch/v1
kind: Job
metadata:
name: "{{ .Chart.Name }}-pre-delete-hook"
namespace: {{ .Values.namespace }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": pre-delete
"helm.sh/hook-weight": "4"
"helm.sh/hook-delete-policy": hook-succeeded
spec:
completions: 1
template:
metadata:
name: "{{ .Release.Name }}"
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: kube-ovn-pre-delete-hook
component: job
spec:
tolerations:
- key: ""
operator: "Exists"
effect: "NoSchedule"
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: kubernetes.io/hostname
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- kube-ovn-pre-delete-hook
- key: component
operator: In
values:
- job
restartPolicy: Never
hostNetwork: true
nodeSelector:
kubernetes.io/os: "linux"
serviceAccount: kube-ovn-pre-delete-hook
serviceAccountName: kube-ovn-pre-delete-hook
containers:
- name: remove-subnet-finalizer
image: "{{ .Values.global.registry.address}}/{{ .Values.global.images.kubeovn.repository }}:{{ .Values.global.images.kubeovn.tag }}"
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
command:
- sh
- -c
- /kube-ovn/remove-subnet-finalizer.sh 2>&1 | tee -a /var/log/kube-ovn/remove-subnet-finalizer.log
volumeMounts:
- mountPath: /var/log/kube-ovn
name: kube-ovn-log
volumes:
- name: kube-ovn-log
hostPath:
path: {{ .Values.log_conf.LOG_DIR }}/kube-ovn

View File

@@ -0,0 +1,19 @@
kind: Service
apiVersion: v1
metadata:
name: ovn-sb
namespace: {{ .Values.namespace }}
spec:
ports:
- name: ovn-sb
protocol: TCP
port: 6642
targetPort: 6642
type: ClusterIP
{{- if eq .Values.networking.NET_STACK "dual_stack" }}
ipFamilyPolicy: PreferDualStack
{{- end }}
selector:
app: ovn-central
ovn-sb-leader: "true"
sessionAffinity: None

View File

@@ -0,0 +1,163 @@
{{- if eq (include "kubeovn.ovs-ovn.updateStrategy" .) "OnDelete" }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: ovs-ovn-upgrade
namespace: {{ .Values.namespace }}
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": post-upgrade
"helm.sh/hook-weight": "1"
"helm.sh/hook-delete-policy": hook-succeeded
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.k8s.io/system-only: "true"
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": post-upgrade
"helm.sh/hook-weight": "2"
"helm.sh/hook-delete-policy": hook-succeeded
name: system:ovs-ovn-upgrade
rules:
- apiGroups:
- apps
resources:
- daemonsets
resourceNames:
- ovs-ovn
verbs:
- get
- apiGroups:
- apps
resources:
- deployments
resourceNames:
- ovn-central
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- apiGroups:
- ""
resources:
- pods
verbs:
- list
- get
- watch
- delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: ovs-ovn-upgrade
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": post-upgrade
"helm.sh/hook-weight": "3"
"helm.sh/hook-delete-policy": hook-succeeded
roleRef:
name: system:ovs-ovn-upgrade
kind: ClusterRole
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: ovs-ovn-upgrade
namespace: {{ .Values.namespace }}
---
apiVersion: batch/v1
kind: Job
metadata:
name: "{{ .Chart.Name }}-post-upgrade-hook"
namespace: {{ .Values.namespace }}
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": post-upgrade
"helm.sh/hook-weight": "4"
"helm.sh/hook-delete-policy": hook-succeeded
spec:
completions: 1
template:
metadata:
name: "{{ .Release.Name }}"
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: post-upgrade
component: job
spec:
tolerations:
- key: ""
operator: "Exists"
effect: "NoSchedule"
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: kubernetes.io/hostname
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- post-upgrade
- key: component
operator: In
values:
- job
restartPolicy: Never
hostNetwork: true
nodeSelector:
kubernetes.io/os: "linux"
serviceAccount: ovs-ovn-upgrade
serviceAccountName: ovs-ovn-upgrade
containers:
- name: ovs-ovn-upgrade
image: "{{ .Values.global.registry.address}}/{{ .Values.global.images.kubeovn.repository }}:{{ .Values.global.images.kubeovn.tag }}"
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: ENABLE_SSL
value: "{{ .Values.networking.ENABLE_SSL }}"
- name: OVN_DB_IPS
value: "{{ .Values.MASTER_NODES | default (include "kubeovn.nodeIPs" .) }}"
command:
- bash
- -eo
- pipefail
- -c
- /kube-ovn/upgrade-ovs.sh 2>&1 | tee -a /var/log/kube-ovn/upgrade-ovs.log
volumeMounts:
- mountPath: /var/log/kube-ovn
name: kube-ovn-log
- mountPath: /var/run/tls
name: kube-ovn-tls
volumes:
- name: kube-ovn-log
hostPath:
path: {{ .Values.log_conf.LOG_DIR }}/kube-ovn
- name: kube-ovn-tls
secret:
optional: true
secretName: kube-ovn-tls
{{ end }}

View File

@@ -0,0 +1,10 @@
kind: ConfigMap
apiVersion: v1
metadata:
name: ovn-vpc-nat-config
namespace: {{ .Values.namespace }}
annotations:
kubernetes.io/description: |
kube-ovn vpc-nat common config
data:
image: {{ .Values.global.registry.address }}/{{ .Values.global.images.kubeovn.vpcRepository }}:{{ .Values.global.images.kubeovn.tag }}

View File

@@ -0,0 +1,181 @@
# Default values for kubeovn.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
global:
registry:
address: docker.io/kubeovn
imagePullSecrets: []
images:
kubeovn:
repository: kube-ovn
dpdkRepository: kube-ovn-dpdk
vpcRepository: vpc-nat-gateway
tag: v1.13.0
support_arm: true
thirdparty: true
image:
pullPolicy: IfNotPresent
namespace: kube-system
MASTER_NODES: ""
MASTER_NODES_LABEL: "kube-ovn/role=master"
networking:
# NET_STACK could be dual_stack, ipv4, ipv6
NET_STACK: ipv4
ENABLE_SSL: false
# network type could be geneve or vlan
NETWORK_TYPE: geneve
# tunnel type could be geneve, vxlan or stt
TUNNEL_TYPE: geneve
IFACE: ""
DPDK_TUNNEL_IFACE: "br-phy"
EXCLUDE_IPS: ""
POD_NIC_TYPE: "veth-pair"
vlan:
PROVIDER_NAME: "provider"
VLAN_INTERFACE_NAME: ""
VLAN_NAME: "ovn-vlan"
VLAN_ID: "100"
EXCHANGE_LINK_NAME: false
ENABLE_EIP_SNAT: true
DEFAULT_SUBNET: "ovn-default"
DEFAULT_VPC: "ovn-cluster"
NODE_SUBNET: "join"
ENABLE_ECMP: false
ENABLE_METRICS: true
NODE_LOCAL_DNS_IP: ""
PROBE_INTERVAL: 180000
OVN_NORTHD_PROBE_INTERVAL: 5000
OVN_LEADER_PROBE_INTERVAL: 5
OVN_REMOTE_PROBE_INTERVAL: 10000
OVN_REMOTE_OPENFLOW_INTERVAL: 180
OVN_NORTHD_N_THREADS: 1
ENABLE_COMPACT: false
func:
ENABLE_LB: true
ENABLE_NP: true
ENABLE_EIP_SNAT: true
ENABLE_EXTERNAL_VPC: true
HW_OFFLOAD: false
ENABLE_LB_SVC: false
ENABLE_KEEP_VM_IP: true
LS_DNAT_MOD_DL_DST: true
LS_CT_SKIP_DST_LPORT_IPS: true
CHECK_GATEWAY: true
LOGICAL_GATEWAY: false
ENABLE_BIND_LOCAL_IP: true
U2O_INTERCONNECTION: false
ENABLE_TPROXY: false
ENABLE_IC: false
ipv4:
POD_CIDR: "10.16.0.0/16"
POD_GATEWAY: "10.16.0.1"
SVC_CIDR: "10.96.0.0/12"
JOIN_CIDR: "100.64.0.0/16"
PINGER_EXTERNAL_ADDRESS: "1.1.1.1"
PINGER_EXTERNAL_DOMAIN: "alauda.cn."
ipv6:
POD_CIDR: "fd00:10:16::/112"
POD_GATEWAY: "fd00:10:16::1"
SVC_CIDR: "fd00:10:96::/112"
JOIN_CIDR: "fd00:100:64::/112"
PINGER_EXTERNAL_ADDRESS: "2606:4700:4700::1111"
PINGER_EXTERNAL_DOMAIN: "google.com."
dual_stack:
POD_CIDR: "10.16.0.0/16,fd00:10:16::/112"
POD_GATEWAY: "10.16.0.1,fd00:10:16::1"
SVC_CIDR: "10.96.0.0/12,fd00:10:96::/112"
JOIN_CIDR: "100.64.0.0/16,fd00:100:64::/112"
PINGER_EXTERNAL_ADDRESS: "1.1.1.1,2606:4700:4700::1111"
PINGER_EXTERNAL_DOMAIN: "google.com."
performance:
GC_INTERVAL: 360
INSPECT_INTERVAL: 20
OVS_VSCTL_CONCURRENCY: 100
debug:
ENABLE_MIRROR: false
MIRROR_IFACE: "mirror0"
cni_conf:
CNI_CONFIG_PRIORITY: "01"
CNI_CONF_DIR: "/etc/cni/net.d"
CNI_BIN_DIR: "/opt/cni/bin"
CNI_CONF_FILE: "/kube-ovn/01-kube-ovn.conflist"
LOCAL_BIN_DIR: "/usr/local/bin"
MOUNT_LOCAL_BIN_DIR: false
kubelet_conf:
KUBELET_DIR: "/var/lib/kubelet"
log_conf:
LOG_DIR: "/var/log"
OPENVSWITCH_DIR: "/etc/origin/openvswitch"
OVN_DIR: "/etc/origin/ovn"
DISABLE_MODULES_MANAGEMENT: false
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
# hybrid dpdk
HYBRID_DPDK: false
HUGEPAGE_SIZE_TYPE: hugepages-2Mi # Default
HUGEPAGES: 1Gi
# DPDK
DPDK: false
DPDK_VERSION: "19.11"
DPDK_CPU: "1000m" # Default CPU configuration
DPDK_MEMORY: "2Gi" # Default Memory configuration
ovn-central:
requests:
cpu: "300m"
memory: "200Mi"
limits:
cpu: "3"
memory: "4Gi"
ovs-ovn:
requests:
cpu: "200m"
memory: "200Mi"
limits:
cpu: "2"
memory: "1000Mi"
kube-ovn-controller:
requests:
cpu: "200m"
memory: "200Mi"
limits:
cpu: "1000m"
memory: "1Gi"
kube-ovn-cni:
requests:
cpu: "100m"
memory: "100Mi"
limits:
cpu: "1000m"
memory: "1Gi"
kube-ovn-pinger:
requests:
cpu: "100m"
memory: "100Mi"
limits:
cpu: "200m"
memory: "400Mi"
kube-ovn-monitor:
requests:
cpu: "200m"
memory: "200Mi"
limits:
cpu: "200m"
memory: "200Mi"

View File

@@ -52,19 +52,46 @@ spec:
image: {{ .Values.global.registry.address }}/{{ .Values.global.images.kubeovn.repository }}:{{ .Values.global.images.kubeovn.tag }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
args:
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
- /kube-ovn/start-controller.sh
- --default-ls={{ .Values.networking.DEFAULT_SUBNET }}
- --default-cidr={{ index $cozyConfig.data "ipv4-pod-cidr" }}
- --default-gateway={{ index $cozyConfig.data "ipv4-pod-gateway" }}
- --default-cidr=
{{- if eq .Values.networking.NET_STACK "dual_stack" -}}
{{ .Values.dual_stack.POD_CIDR }}
{{- else if eq .Values.networking.NET_STACK "ipv4" -}}
{{ .Values.ipv4.POD_CIDR }}
{{- else if eq .Values.networking.NET_STACK "ipv6" -}}
{{ .Values.ipv6.POD_CIDR }}
{{- end }}
- --default-gateway=
{{- if eq .Values.networking.NET_STACK "dual_stack" -}}
{{ .Values.dual_stack.POD_GATEWAY }}
{{- else if eq .Values.networking.NET_STACK "ipv4" -}}
{{ .Values.ipv4.POD_GATEWAY }}
{{- else if eq .Values.networking.NET_STACK "ipv6" -}}
{{ .Values.ipv6.POD_GATEWAY }}
{{- end }}
- --default-gateway-check={{- .Values.func.CHECK_GATEWAY }}
- --default-logical-gateway={{- .Values.func.LOGICAL_GATEWAY }}
- --default-u2o-interconnection={{- .Values.func.U2O_INTERCONNECTION }}
- --default-exclude-ips={{- .Values.networking.EXCLUDE_IPS }}
- --cluster-router={{ .Values.networking.DEFAULT_VPC }}
- --node-switch={{ .Values.networking.NODE_SUBNET }}
- --node-switch-cidr={{ index $cozyConfig.data "ipv4-join-cidr" }}
- --service-cluster-ip-range={{ index $cozyConfig.data "ipv4-svc-cidr" }}
- --node-switch-cidr=
{{- if eq .Values.networking.NET_STACK "dual_stack" -}}
{{ .Values.dual_stack.JOIN_CIDR }}
{{- else if eq .Values.networking.NET_STACK "ipv4" -}}
{{ .Values.ipv4.JOIN_CIDR }}
{{- else if eq .Values.networking.NET_STACK "ipv6" -}}
{{ .Values.ipv6.JOIN_CIDR }}
{{- end }}
- --service-cluster-ip-range=
{{- if eq .Values.networking.NET_STACK "dual_stack" -}}
{{ .Values.dual_stack.SVC_CIDR }}
{{- else if eq .Values.networking.NET_STACK "ipv4" -}}
{{ .Values.ipv4.SVC_CIDR }}
{{- else if eq .Values.networking.NET_STACK "ipv6" -}}
{{ .Values.ipv6.SVC_CIDR }}
{{- end }}
- --network-type={{- .Values.networking.NETWORK_TYPE }}
- --default-provider-name={{ .Values.networking.vlan.PROVIDER_NAME }}
- --default-interface-name={{- .Values.networking.vlan.VLAN_INTERFACE_NAME }}
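
For reference, with the default `NET_STACK: ipv4` from the values file above, the conditional flags in this hunk would render roughly as below; the other variant of these lines reads the same CIDRs from the `cozystack` ConfigMap in `cozy-system` via `lookup`. This is a sketch, not generated output:

```yaml
# Approximate rendering under NET_STACK: ipv4 (illustration only):
- /kube-ovn/start-controller.sh
- --default-ls=ovn-default
- --default-cidr=10.16.0.0/16
- --default-gateway=10.16.0.1
- --node-switch-cidr=100.64.0.0/16
- --service-cluster-ip-range=10.96.0.0/12
```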

View File

@@ -51,12 +51,18 @@ spec:
- bash
- /kube-ovn/start-cniserver.sh
args:
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
- --enable-mirror={{- .Values.debug.ENABLE_MIRROR }}
- --mirror-iface={{- .Values.debug.MIRROR_IFACE }}
- --node-switch={{ .Values.networking.NODE_SUBNET }}
- --encap-checksum=true
- --service-cluster-ip-range={{ index $cozyConfig.data "ipv4-svc-cidr" }}
- --service-cluster-ip-range=
{{- if eq .Values.networking.NET_STACK "dual_stack" -}}
{{ .Values.dual_stack.SVC_CIDR }}
{{- else if eq .Values.networking.NET_STACK "ipv4" -}}
{{ .Values.ipv4.SVC_CIDR }}
{{- else if eq .Values.networking.NET_STACK "ipv6" -}}
{{ .Values.ipv6.SVC_CIDR }}
{{- end }}
{{- if eq .Values.networking.NETWORK_TYPE "vlan" }}
- --iface=
{{- else}}

View File

@@ -70,6 +70,10 @@ func:
ENABLE_TPROXY: false
ipv4:
POD_CIDR: "10.16.0.0/16"
POD_GATEWAY: "10.16.0.1"
SVC_CIDR: "10.96.0.0/12"
JOIN_CIDR: "100.64.0.0/16"
PINGER_EXTERNAL_ADDRESS: "1.1.1.1"
PINGER_EXTERNAL_DOMAIN: "alauda.cn."

View File

@@ -1,97 +0,0 @@
diff --git a/packages/system/kubeovn/charts/kube-ovn/templates/ovncni-ds.yaml b/packages/system/kubeovn/charts/kube-ovn/templates/ovncni-ds.yaml
index d9a9a67..b2e12dd 100644
--- a/packages/system/kubeovn/charts/kube-ovn/templates/ovncni-ds.yaml
+++ b/packages/system/kubeovn/charts/kube-ovn/templates/ovncni-ds.yaml
@@ -51,18 +51,12 @@ spec:
- bash
- /kube-ovn/start-cniserver.sh
args:
+ {{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
- --enable-mirror={{- .Values.debug.ENABLE_MIRROR }}
- --mirror-iface={{- .Values.debug.MIRROR_IFACE }}
- --node-switch={{ .Values.networking.NODE_SUBNET }}
- --encap-checksum=true
- - --service-cluster-ip-range=
- {{- if eq .Values.networking.NET_STACK "dual_stack" -}}
- {{ .Values.dual_stack.SVC_CIDR }}
- {{- else if eq .Values.networking.NET_STACK "ipv4" -}}
- {{ .Values.ipv4.SVC_CIDR }}
- {{- else if eq .Values.networking.NET_STACK "ipv6" -}}
- {{ .Values.ipv6.SVC_CIDR }}
- {{- end }}
+ - --service-cluster-ip-range={{ index $cozyConfig.data "ipv4-svc-cidr" }}
{{- if eq .Values.networking.NETWORK_TYPE "vlan" }}
- --iface=
{{- else}}
diff --git a/packages/system/kubeovn/charts/kube-ovn/templates/controller-deploy.yaml b/packages/system/kubeovn/charts/kube-ovn/templates/controller-deploy.yaml
index 0e69494..756eb7c 100644
--- a/packages/system/kubeovn/charts/kube-ovn/templates/controller-deploy.yaml
+++ b/packages/system/kubeovn/charts/kube-ovn/templates/controller-deploy.yaml
@@ -52,46 +52,19 @@ spec:
image: {{ .Values.global.registry.address }}/{{ .Values.global.images.kubeovn.repository }}:{{ .Values.global.images.kubeovn.tag }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
args:
+ {{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
- /kube-ovn/start-controller.sh
- --default-ls={{ .Values.networking.DEFAULT_SUBNET }}
- - --default-cidr=
- {{- if eq .Values.networking.NET_STACK "dual_stack" -}}
- {{ .Values.dual_stack.POD_CIDR }}
- {{- else if eq .Values.networking.NET_STACK "ipv4" -}}
- {{ .Values.ipv4.POD_CIDR }}
- {{- else if eq .Values.networking.NET_STACK "ipv6" -}}
- {{ .Values.ipv6.POD_CIDR }}
- {{- end }}
- - --default-gateway=
- {{- if eq .Values.networking.NET_STACK "dual_stack" -}}
- {{ .Values.dual_stack.POD_GATEWAY }}
- {{- else if eq .Values.networking.NET_STACK "ipv4" -}}
- {{ .Values.ipv4.POD_GATEWAY }}
- {{- else if eq .Values.networking.NET_STACK "ipv6" -}}
- {{ .Values.ipv6.POD_GATEWAY }}
- {{- end }}
+ - --default-cidr={{ index $cozyConfig.data "ipv4-pod-cidr" }}
+ - --default-gateway={{ index $cozyConfig.data "ipv4-pod-gateway" }}
- --default-gateway-check={{- .Values.func.CHECK_GATEWAY }}
- --default-logical-gateway={{- .Values.func.LOGICAL_GATEWAY }}
- --default-u2o-interconnection={{- .Values.func.U2O_INTERCONNECTION }}
- --default-exclude-ips={{- .Values.networking.EXCLUDE_IPS }}
- --cluster-router={{ .Values.networking.DEFAULT_VPC }}
- --node-switch={{ .Values.networking.NODE_SUBNET }}
- - --node-switch-cidr=
- {{- if eq .Values.networking.NET_STACK "dual_stack" -}}
- {{ .Values.dual_stack.JOIN_CIDR }}
- {{- else if eq .Values.networking.NET_STACK "ipv4" -}}
- {{ .Values.ipv4.JOIN_CIDR }}
- {{- else if eq .Values.networking.NET_STACK "ipv6" -}}
- {{ .Values.ipv6.JOIN_CIDR }}
- {{- end }}
- - --service-cluster-ip-range=
- {{- if eq .Values.networking.NET_STACK "dual_stack" -}}
- {{ .Values.dual_stack.SVC_CIDR }}
- {{- else if eq .Values.networking.NET_STACK "ipv4" -}}
- {{ .Values.ipv4.SVC_CIDR }}
- {{- else if eq .Values.networking.NET_STACK "ipv6" -}}
- {{ .Values.ipv6.SVC_CIDR }}
- {{- end }}
+ - --node-switch-cidr={{ index $cozyConfig.data "ipv4-join-cidr" }}
+ - --service-cluster-ip-range={{ index $cozyConfig.data "ipv4-svc-cidr" }}
- --network-type={{- .Values.networking.NETWORK_TYPE }}
- --default-provider-name={{ .Values.networking.vlan.PROVIDER_NAME }}
- --default-interface-name={{- .Values.networking.vlan.VLAN_INTERFACE_NAME }}
diff --git a/packages/system/kubeovn/charts/kube-ovn/values.yaml b/packages/system/kubeovn/charts/kube-ovn/values.yaml
index bfffc4d..b880749 100644
--- a/packages/system/kubeovn/charts/kube-ovn/values.yaml
+++ b/packages/system/kubeovn/charts/kube-ovn/values.yaml
@@ -70,10 +70,6 @@ func:
ENABLE_TPROXY: false
ipv4:
- POD_CIDR: "10.16.0.0/16"
- POD_GATEWAY: "10.16.0.1"
- SVC_CIDR: "10.96.0.0/12"
- JOIN_CIDR: "100.64.0.0/16"
PINGER_EXTERNAL_ADDRESS: "1.1.1.1"
PINGER_EXTERNAL_DOMAIN: "alauda.cn."

View File

@@ -12,6 +12,12 @@ kube-ovn:
func:
ENABLE_NP: false
ipv4:
POD_CIDR: "10.244.0.0/16"
POD_GATEWAY: "10.244.0.1"
SVC_CIDR: "10.96.0.0/16"
JOIN_CIDR: "100.64.0.0/16"
MASTER_NODES_LABEL: "node-role.kubernetes.io/control-plane"
networking:
ENABLE_SSL: true

View File

@@ -3,8 +3,8 @@ name: piraeus
description: |
The Piraeus Operator manages software defined storage clusters using LINSTOR in Kubernetes.
type: application
version: 2.3.0
appVersion: "v2.3.0"
version: 2.4.1
appVersion: "v2.4.1"
maintainers:
- name: Piraeus Datastore
url: https://piraeus.io

View File

@@ -17,19 +17,19 @@ data:
# quay.io/piraeusdatastore/piraeus-server:v1.24.2
components:
linstor-controller:
tag: v1.25.1
tag: v1.26.2
image: piraeus-server
linstor-satellite:
tag: v1.25.1
tag: v1.26.2
image: piraeus-server
linstor-csi:
tag: v1.3.0
tag: v1.4.0
image: piraeus-csi
drbd-reactor:
tag: v1.4.0
image: drbd-reactor
ha-controller:
tag: v1.1.4
tag: v1.2.0
image: piraeus-ha-controller
drbd-shutdown-guard:
tag: v1.0.0
@@ -38,7 +38,7 @@ data:
tag: v0.10
image: ktls-utils
drbd-module-loader:
tag: v9.2.6
tag: v9.2.8
# The special "match" attribute is used to select an image based on the node's reported OS.
# The operator will first check the k8s node's ".status.nodeInfo.osImage" field, and compare it against the list
# here. If one matches, that specific image name will be used instead of the fallback image.
@@ -54,12 +54,18 @@ data:
image: drbd9-almalinux8
- osImage: AlmaLinux 9
image: drbd9-almalinux9
- osImage: Rocky Linux 8
image: drbd9-almalinux8
- osImage: Rocky Linux 9
image: drbd9-almalinux9
- osImage: Ubuntu 18\.04
image: drbd9-bionic
- osImage: Ubuntu 20\.04
image: drbd9-focal
- osImage: Ubuntu 22\.04
image: drbd9-jammy
- osImage: Debian GNU/Linux 12
image: drbd9-bookworm
- osImage: Debian GNU/Linux 11
image: drbd9-bullseye
- osImage: Debian GNU/Linux 10
@@ -69,25 +75,25 @@ data:
base: registry.k8s.io/sig-storage
components:
csi-attacher:
tag: v4.4.2
tag: v4.5.0
image: csi-attacher
csi-livenessprobe:
tag: v2.11.0
tag: v2.12.0
image: livenessprobe
csi-provisioner:
tag: v3.6.2
tag: v4.0.0
image: csi-provisioner
csi-snapshotter:
tag: v6.3.2
tag: v7.0.1
image: csi-snapshotter
csi-resizer:
tag: v1.9.2
tag: v1.10.0
image: csi-resizer
csi-external-health-monitor-controller:
tag: v0.10.0
tag: v0.11.0
image: csi-external-health-monitor-controller
csi-node-driver-registrar:
tag: v2.9.1
tag: v2.10.0
image: csi-node-driver-registrar
{{- range $idx, $value := .Values.imageConfigOverride }}
{{ add $idx 1 }}_helm_override.yaml: |
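
The trailing `range` over `.Values.imageConfigOverride` appends one numbered override document per list entry (`1_helm_override.yaml`, `2_helm_override.yaml`, and so on). The value schema itself is not shown in this hunk; a hypothetical entry, mirroring the component structure above, might look like:

```yaml
# Hypothetical values entry (illustration only; field names assumed from the defaults above):
imageConfigOverride:
  - base: registry.example.com/piraeus-mirror
    components:
      linstor-satellite:
        tag: v1.26.2
        image: piraeus-server
```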

View File

@@ -152,3 +152,27 @@ webhooks:
resources:
- linstorsatelliteconfigurations
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:
service:
name: '{{ include "piraeus-operator.fullname" . }}-webhook-service'
namespace: '{{ .Release.Namespace }}'
path: /validate-storage-k8s-io-v1-storageclass
{{- if not .Values.tls.certManagerIssuerRef }}
caBundle: {{ $ca }}
{{- end }}
failurePolicy: {{ .Values.webhook.failurePolicy }}
timeoutSeconds: {{ .Values.webhook.timeoutSeconds }}
name: vstorageclass.kb.io
rules:
- apiGroups:
- storage.k8s.io
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- storageclasses
sideEffects: None

View File

@@ -1,9 +1,11 @@
apiVersion: v2
appVersion: 1.21.1
description: CloudNativePG Helm Chart
appVersion: 1.22.2
description: CloudNativePG Operator Helm Chart
home: https://cloudnative-pg.io
icon: https://raw.githubusercontent.com/cloudnative-pg/artwork/main/cloudnativepg-logo.svg
keywords:
- operator
- controller
- postgresql
- postgres
- database
@@ -14,4 +16,4 @@ name: cloudnative-pg
sources:
- https://github.com/cloudnative-pg/charts
type: application
version: 0.19.1
version: 0.20.2

File diff suppressed because one or more lines are too long

View File

@@ -31,8 +31,9 @@ spec:
{{- include "cloudnative-pg.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/config.yaml") . | sha256sum }}
{{- with .Values.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
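
Moving `checksum/config` outside the `{{- with .Values.podAnnotations }}` block means the annotation is rendered even when no pod annotations are set, so any change to `config.yaml` still triggers a rollout. With empty `podAnnotations`, the pod template metadata would render roughly as (placeholder hash, sketch only):

```yaml
metadata:
  annotations:
    checksum/config: <sha256 of the rendered config.yaml>
```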

View File

@@ -0,0 +1,12 @@
{{- if .Values.monitoring.grafanaDashboard.create -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.monitoring.grafanaDashboard.configMapName }}
namespace: {{ default .Release.Namespace .Values.monitoring.grafanaDashboard.namespace }}
labels:
{{ .Values.monitoring.grafanaDashboard.sidecarLabel }}: {{ .Values.monitoring.grafanaDashboard.sidecarLabelValue | quote }}
data:
cnp.json: |-
{{ .Files.Get "monitoring/grafana-dashboard.json" | indent 6 }}
{{- end -}}

View File

@@ -95,6 +95,26 @@
"monitoring": {
"type": "object",
"properties": {
"grafanaDashboard": {
"type": "object",
"properties": {
"configMapName": {
"type": "string"
},
"create": {
"type": "boolean"
},
"namespace": {
"type": "string"
},
"sidecarLabel": {
"type": "string"
},
"sidecarLabelValue": {
"type": "string"
}
}
},
"podMonitorEnabled": {
"type": "boolean"
}

View File

@@ -139,6 +139,16 @@ affinity: {}
monitoring:
# -- Specifies whether the monitoring should be enabled. Requires Prometheus Operator CRDs.
podMonitorEnabled: false
grafanaDashboard:
create: false
# -- Allows overriding the namespace where the ConfigMap will be created, defaulting to the same one as the Release.
namespace: ""
# -- The name of the ConfigMap containing the dashboard.
configMapName: "cnpg-grafana-dashboard"
# -- Label that ConfigMaps should have to be loaded as dashboards.
sidecarLabel: "grafana_dashboard"
# -- Label value that ConfigMaps should have to be loaded as dashboards.
sidecarLabelValue: "1"
# Default monitoring queries
monitoringQueriesConfigMap:
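
A hedged usage sketch for the new options (release name, namespace, and chart path are assumptions, not taken from this change):

```bash
# Enable the bundled Grafana dashboard ConfigMap alongside the PodMonitor:
helm upgrade --install cnpg ./charts/cloudnative-pg \
  --namespace cnpg-system --create-namespace \
  --set monitoring.podMonitorEnabled=true \
  --set monitoring.grafanaDashboard.create=true
```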

View File

@@ -35,11 +35,6 @@ kubectl annotate helmrepositories.source.toolkit.fluxcd.io -A -l cozystack.io/re
# Install platform chart
make -C packages/core/platform apply
# Flush kubeapps cache
if kubectl wait --for=condition=ready -n cozy-dashboard pod/dashboard-redis-master-0 --timeout=1s; then
kubectl exec -ti -n cozy-dashboard dashboard-redis-master-0 -- sh -c 'redis-cli -a "$REDIS_PASSWORD" flushdb'
fi
# Reconcile platform chart
trap 'exit' INT TERM
while true; do