Compare commits

..

19 Commits

Author SHA1 Message Date
Andrei Kvapil
cdac172ebf fix: Automatically build helm charts when building cozystack image 2024-04-01 21:19:29 +02:00
Andrei Kvapil
33bc23cfca Introduce bundles (#53)
* bundles

* Allow overriding values by providing values-<release>: <json|yaml> in cozystack-config

* match bundle-name from cozystack-config

* add extra bundles
2024-04-01 17:42:51 +02:00
Andrei Kvapil
c5ead1932f mariadb-operator v0.27.0 (#51) 2024-04-01 17:42:33 +02:00
Andrei Kvapil
a7d12c1430 update kubeapps and flux (#50)
* Update fluxcd 2.2.3

* Update kubeapps 14.7.2
2024-04-01 17:42:22 +02:00
Timur Tukaev
5e1380df76 Update README.md (#49)
Fix link to cozystack website
2024-03-23 22:00:44 +01:00
Andrei Kvapil
03fab7a831 Update Cilium v1.14.5 (#47) 2024-03-15 22:01:30 +01:00
Andrei Kvapil
e17dcaa65e Update CNPG to 1.22.2 (#46)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-03-15 21:15:36 +01:00
Andrei Kvapil
85d4ed251d Update piraeus-operator and LINSTOR v2.4.1 (#45) 2024-03-15 21:15:27 +01:00
Andrei Kvapil
f1c01a0fe8 Add link to roadmap (#41)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-03-15 21:15:17 +01:00
Andrei Kvapil
2cff181279 Prepare release v0.2.0 (#38)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-03-15 21:15:06 +01:00
Andrei Kvapil
2e3555600d Positioning Cozystack as framework for building clouds (#31)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-03-05 11:05:40 +01:00
George Gaál
98f488fcac Fix gitignore (#26)
Signed-off-by: George Gaál <gb12335@gmail.com>
2024-02-21 12:33:52 +01:00
Andrei Kvapil
1c6de1ccf5 Prepare release v0.1.0 (#25) 2024-02-20 12:24:39 +01:00
Andrei Kvapil
235a2fcf47 Workaround: The declarative way to flush redis for our dashboard (#24) 2024-02-19 20:13:55 +01:00
Andrei Kvapil
24151b09f3 Update README.md (#21) 2024-02-19 15:16:01 +01:00
Andrei Kvapil
b37071f05e Enable ZFS support for LINSTOR (#22) 2024-02-19 15:12:49 +01:00
Andrei Kvapil
c64c6b549b Enable leader election for Cozystack (#23) 2024-02-19 15:08:16 +01:00
Andrei Kvapil
df47d2f4a6 Change tag for matchbox image (#15)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-02-17 19:11:09 +01:00
Andrei Kvapil
c0aea5a106 Add Adopters, Code of Conduct, Contributing, and Maintainers guides (#14)
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
2024-02-17 17:53:49 +01:00
318 changed files with 62629 additions and 12448 deletions

2
.gitignore vendored
View File

@@ -1 +1,3 @@
_out
.git
.idea

553
README.md
View File

@@ -10,7 +10,7 @@
# Cozystack
**Cozystack** is an open-source **PaaS platform** for cloud providers.
**Cozystack** is a free PaaS platform and framework for building clouds.
With Cozystack, you can transform your bunch of servers into an intelligent system with a simple REST API for spawning Kubernetes clusters, Database-as-a-Service, virtual machines, load balancers, HTTP caching services, and other services with ease.
@@ -18,548 +18,55 @@ You can use Cozystack to build your own cloud or to provide a cost-effective dev
## Use-Cases
* [**Using Cozystack to build public cloud**](https://cozystack.io/docs/use-cases/public-cloud/)
  You can use Cozystack as backend for a public cloud
* [**Using Cozystack to build private cloud**](https://cozystack.io/docs/use-cases/private-cloud/)
  You can use Cozystack as platform to build a private cloud powered by Infrastructure-as-Code approach
* [**Using Cozystack as Kubernetes distribution**](https://cozystack.io/docs/use-cases/kubernetes-distribution/)
  You can use Cozystack as Kubernetes distribution for Bare Metal
## Screenshot
![Cozystack screenshot](https://cozystack.io/img/screenshot.png)
## Documentation
The documentation is located on official [cozystack.io](https://cozystack.io) website.
Read [Get Started](https://cozystack.io/docs/get-started/) section for a quick start.
If you encounter any difficulties, start with the [troubleshooting guide](https://cozystack.io/docs/troubleshooting/), and work your way through the process that we've outlined.
## Versioning
Versioning adheres to the [Semantic Versioning](http://semver.org/) principles.
A full list of the available releases is available in the GitHub repository's [Release](https://github.com/aenix-io/cozystack/releases) section.
- [Roadmap](https://github.com/orgs/aenix-io/projects/2)
## Contributions
Contributions are highly appreciated and very welcomed!
In case of bugs, please, check if the issue has been already opened by checking the [GitHub Issues](https://github.com/aenix-io/cozystack/issues) section.
In case it isn't, you can open a new one: a detailed report will help us to replicate it, assess it, and work on a fix.
You can express your intention in working on the fix on your own.
Commits are used to generate the changelog, and their author will be referenced in it.
In case of **Feature Requests** please use the [Discussion's Feature Request section](https://github.com/aenix-io/cozystack/discussions/categories/feature-requests).
## License
Cozystack is licensed under Apache 2.0.
The code is provided as-is with no warranties.
## Commercial Support
[**Ænix**](https://aenix.io) offers enterprise-grade support, available 24/7.
We provide all types of assistance, including consultations, development of missing features, design, assistance with installation, and integration.
[Contact us](https://aenix.io/contact/)
### As a backend for a public cloud
Cozystack positions itself as a kind of framework for building public clouds. The key word here is framework. In this case, it's important to understand that Cozystack is made for cloud providers, not for end users.
Despite having a graphical interface, the current security model does not imply public user access to your management cluster.
Instead, end users get access to their own Kubernetes clusters, can order LoadBalancers and additional services from it, but they have no access and know nothing about your management cluster powered by Cozystack.
Thus, to integrate with your billing system, it's enough to teach your system to go to the management Kubernetes and place a YAML file signifying the service you're interested in. Cozystack will do the rest of the work for you.
![](https://aenix.io/wp-content/uploads/2024/02/Wireframe-1.png)
### As a private cloud for Infrastructure-as-Code
One of the use cases is a self-portal for users within your company, where they can order the service they're interested in or a managed database.
You can implement best GitOps practices, where users will launch their own Kubernetes clusters and databases for their needs with a simple commit of configuration into your infrastructure Git repository.
Thanks to the standardization of the approach to deploying applications, you can expand the platform's capabilities using the functionality of standard Helm charts.
### As a Kubernetes distribution for Bare Metal
We created Cozystack primarily for our own needs, having vast experience in building reliable systems on bare metal infrastructure. This experience led to the formation of a separate boxed product, which is aimed at standardizing and providing a ready-to-use tool for managing your infrastructure.
Currently, Cozystack already solves a huge scope of infrastructure tasks: starting from provisioning bare metal servers, having a ready monitoring system, fast and reliable storage, a network fabric with the possibility of interconnect with your infrastructure, the ability to run virtual machines, databases, and much more right out of the box.
All this makes Cozystack a convenient platform for delivering and launching your application on Bare Metal.
## Screenshot
![](https://aenix.io/wp-content/uploads/2023/12/cozystack1-1.png)
## Core values
### Standardization and unification
All components of the platform are based on open source tools and technologies which are widely known in the industry.
### Collaborate, not compete
If a feature being developed for the platform could be useful to a upstream project, it should be contributed to upstream project, rather than being implemented within the platform.
### API-first
Cozystack is based on Kubernetes and involves close interaction with its API. We don't aim to completely hide the all elements behind a pretty UI or any sort of customizations; instead, we provide a standard interface and teach users how to work with basic primitives. The web interface is used solely for deploying applications and quickly diving into basic concepts of platform.
## Quick Start
### Prepare infrastructure
![](https://aenix.io/wp-content/uploads/2024/02/Wireframe-2.png)
You need 3 physical servers or VMs with nested virtualisation:
```
CPU: 4 cores
CPU model: host
RAM: 8-16 GB
HDD1: 32 GB
HDD2: 100GB (raw)
```
And one management VM or physical server connected to the same network.
Any Linux system installed on it (eg. Ubuntu should be enough)
**Note:** The VM should support `x86-64-v2` architecture, the most probably you can achieve this by setting cpu model to `host`
#### Install dependencies:
- `docker`
- `talosctl`
- `dialog`
- `nmap`
- `make`
- `yq`
- `kubectl`
- `helm`
### Netboot server
Start matchbox with prebuilt Talos image for Cozystack:
```bash
sudo docker run --name=matchbox -d --net=host ghcr.io/aenix-io/cozystack/matchbox:v0.0.2 \
  -address=:8080 \
  -log-level=debug
```
Start DHCP-Server:
```bash
sudo docker run --name=dnsmasq -d --cap-add=NET_ADMIN --net=host quay.io/poseidon/dnsmasq \
-d -q -p0 \
--dhcp-range=192.168.100.3,192.168.100.254 \
--dhcp-option=option:router,192.168.100.1 \
--enable-tftp \
--tftp-root=/var/lib/tftpboot \
--dhcp-match=set:bios,option:client-arch,0 \
--dhcp-boot=tag:bios,undionly.kpxe \
--dhcp-match=set:efi32,option:client-arch,6 \
--dhcp-boot=tag:efi32,ipxe.efi \
--dhcp-match=set:efibc,option:client-arch,7 \
--dhcp-boot=tag:efibc,ipxe.efi \
--dhcp-match=set:efi64,option:client-arch,9 \
--dhcp-boot=tag:efi64,ipxe.efi \
--dhcp-userclass=set:ipxe,iPXE \
--dhcp-boot=tag:ipxe,http://192.168.100.254:8080/boot.ipxe \
--log-queries \
--log-dhcp
```
Where:
- `192.168.100.3,192.168.100.254` range to allocate IPs from
- `192.168.100.1` your gateway
- `192.168.100.254` is address of your management server
Check status of containers:
```
docker ps
```
example output:
```console
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
22044f26f74d quay.io/poseidon/dnsmasq "/usr/sbin/dnsmasq -…" 6 seconds ago Up 5 seconds dnsmasq
231ad81ff9e0 ghcr.io/aenix-io/cozystack/matchbox:v0.0.2 "/matchbox -address=…" 58 seconds ago Up 57 seconds matchbox
```
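If you want a quick sanity check that the netboot endpoint is reachable before powering on the servers, you can request the iPXE script that matchbox serves over HTTP (a minimal check, assuming `192.168.100.254` is your management server as in the dnsmasq flags above):
```bash
# Should print a small iPXE script; a connection error means matchbox is not listening on :8080
curl http://192.168.100.254:8080/boot.ipxe
```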
### Bootstrap cluster
Write configuration for Cozystack:
```yaml
cat > patch.yaml <<\EOT
machine:
kubelet:
nodeIP:
validSubnets:
- 192.168.100.0/24
kernel:
modules:
- name: openvswitch
- name: drbd
parameters:
- usermode_helper=disabled
- name: zfs
install:
image: ghcr.io/aenix-io/cozystack/talos:v1.6.4
files:
- content: |
[plugins]
[plugins."io.containerd.grpc.v1.cri"]
device_ownership_from_security_context = true
path: /etc/cri/conf.d/20-customization.part
op: create
cluster:
network:
cni:
name: none
podSubnets:
- 10.244.0.0/16
serviceSubnets:
- 10.96.0.0/16
EOT
cat > patch-controlplane.yaml <<\EOT
cluster:
allowSchedulingOnControlPlanes: true
controllerManager:
extraArgs:
bind-address: 0.0.0.0
scheduler:
extraArgs:
bind-address: 0.0.0.0
apiServer:
certSANs:
- 127.0.0.1
proxy:
disabled: true
discovery:
enabled: false
etcd:
advertisedSubnets:
- 192.168.100.0/24
EOT
```
Run [talos-bootstrap](https://github.com/aenix-io/talos-bootstrap/) to deploy cluster:
```bash
talos-bootstrap install
```
Save admin kubeconfig to access your Kubernetes cluster:
```bash
cp -i kubeconfig ~/.kube/config
```
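If you prefer not to touch `~/.kube/config`, pointing the `KUBECONFIG` environment variable at the generated file is an equivalent alternative (assuming you run the following commands from the same directory):
```bash
# Use the freshly generated kubeconfig only for the current shell session
export KUBECONFIG="$PWD/kubeconfig"
```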
Check connection:
```bash
kubectl get ns
```
example output:
```console
NAME STATUS AGE
default Active 7m56s
kube-node-lease Active 7m56s
kube-public Active 7m56s
kube-system Active 7m56s
```
**Note:** All nodes will currently show as `NotReady`. Don't worry about that: it happens because you disabled the default CNI plugin in the previous step, and Cozystack will install its own CNI plugin in the next step.
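You can see this for yourself by listing the nodes; the `NotReady` status is expected at this stage:
```bash
kubectl get nodes
```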
### Install Cozystack
Write the config for Cozystack:
**Note:** make sure that you specify the same settings used in the `patch.yaml` and `patch-controlplane.yaml` files.
```yaml
cat > cozystack-config.yaml <<\EOT
apiVersion: v1
kind: ConfigMap
metadata:
name: cozystack
namespace: cozy-system
data:
cluster-name: "cozystack"
ipv4-pod-cidr: "10.244.0.0/16"
ipv4-pod-gateway: "10.244.0.1"
ipv4-svc-cidr: "10.96.0.0/16"
ipv4-join-cidr: "100.64.0.0/16"
EOT
```
Create the namespace and install the Cozystack system components:
```bash
kubectl create ns cozy-system
kubectl apply -f cozystack-config.yaml
kubectl apply -f manifests/cozystack-installer.yaml
```
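Before moving on, you can confirm that the installer Deployment has actually started (a small optional check; the Deployment name `cozystack` matches the log command below):
```bash
# Wait for the installer Deployment to become available, then list its pods
kubectl -n cozy-system wait deploy/cozystack --for=condition=Available --timeout=5m
kubectl -n cozy-system get pod
```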
(Optional) You can follow the installer logs:
```bash
kubectl logs -n cozy-system deploy/cozystack -f
```
Wait for a while, then check the status of installation:
```bash
kubectl get hr -A
```
Wait until all releases reach the `Ready` state:
```console
NAMESPACE NAME AGE READY STATUS
cozy-cert-manager cert-manager 4m1s True Release reconciliation succeeded
cozy-cert-manager cert-manager-issuers 4m1s True Release reconciliation succeeded
cozy-cilium cilium 4m1s True Release reconciliation succeeded
cozy-cluster-api capi-operator 4m1s True Release reconciliation succeeded
cozy-cluster-api capi-providers 4m1s True Release reconciliation succeeded
cozy-dashboard dashboard 4m1s True Release reconciliation succeeded
cozy-fluxcd cozy-fluxcd 4m1s True Release reconciliation succeeded
cozy-grafana-operator grafana-operator 4m1s True Release reconciliation succeeded
cozy-kamaji kamaji 4m1s True Release reconciliation succeeded
cozy-kubeovn kubeovn 4m1s True Release reconciliation succeeded
cozy-kubevirt-cdi kubevirt-cdi 4m1s True Release reconciliation succeeded
cozy-kubevirt-cdi kubevirt-cdi-operator 4m1s True Release reconciliation succeeded
cozy-kubevirt kubevirt 4m1s True Release reconciliation succeeded
cozy-kubevirt kubevirt-operator 4m1s True Release reconciliation succeeded
cozy-linstor linstor 4m1s True Release reconciliation succeeded
cozy-linstor piraeus-operator 4m1s True Release reconciliation succeeded
cozy-mariadb-operator mariadb-operator 4m1s True Release reconciliation succeeded
cozy-metallb metallb 4m1s True Release reconciliation succeeded
cozy-monitoring monitoring 4m1s True Release reconciliation succeeded
cozy-postgres-operator postgres-operator 4m1s True Release reconciliation succeeded
cozy-rabbitmq-operator rabbitmq-operator 4m1s True Release reconciliation succeeded
cozy-redis-operator redis-operator 4m1s True Release reconciliation succeeded
cozy-telepresence telepresence 4m1s True Release reconciliation succeeded
cozy-victoria-metrics-operator victoria-metrics-operator 4m1s True Release reconciliation succeeded
tenant-root tenant-root 4m1s True Release reconciliation succeeded
```
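Instead of re-running `kubectl get hr -A` by hand, you can block until every HelmRelease reports `Ready` (one possible way to wait, using plain kubectl):
```bash
# Waits up to 15 minutes for all HelmReleases in all namespaces
kubectl wait hr --all -A --for=condition=Ready --timeout=15m
```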
#### Configure Storage
Set up an alias to access LINSTOR:
```bash
alias linstor='kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor'
```
list your nodes
```bash
linstor node list
```
example output:
```console
+-------------------------------------------------------+
| Node | NodeType | Addresses | State |
|=======================================================|
| srv1 | SATELLITE | 192.168.100.11:3367 (SSL) | Online |
| srv2 | SATELLITE | 192.168.100.12:3367 (SSL) | Online |
| srv3 | SATELLITE | 192.168.100.13:3367 (SSL) | Online |
+-------------------------------------------------------+
```
list empty devices:
```bash
linstor physical-storage list
```
example output:
```console
+--------------------------------------------+
| Size | Rotational | Nodes |
|============================================|
| 107374182400 | True | srv3[/dev/sdb] |
| | | srv1[/dev/sdb] |
| | | srv2[/dev/sdb] |
+--------------------------------------------+
```
create storage pools:
```bash
linstor ps cdp lvm srv1 /dev/sdb --pool-name data --storage-pool data
linstor ps cdp lvm srv2 /dev/sdb --pool-name data --storage-pool data
linstor ps cdp lvm srv3 /dev/sdb --pool-name data --storage-pool data
```
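The example above prepares LVM-backed pools. LINSTOR's `physical-storage create-device-pool` command also supports other providers; for instance, a ZFS-backed pool (the Talos patch above already loads the `zfs` kernel module) could be created as in the sketch below. Adjust the node and device names to your environment:
```bash
# Same pool layout as above, but backed by ZFS instead of LVM
linstor physical-storage create-device-pool zfs srv1 /dev/sdb --pool-name data --storage-pool data
```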
list storage pools:
```bash
linstor sp l
```
example output:
```console
+-------------------------------------------------------------------------------------------------------------------------------------+
| StoragePool | Node | Driver | PoolName | FreeCapacity | TotalCapacity | CanSnapshots | State | SharedName |
|=====================================================================================================================================|
| DfltDisklessStorPool | srv1 | DISKLESS | | | | False | Ok | srv1;DfltDisklessStorPool |
| DfltDisklessStorPool | srv2 | DISKLESS | | | | False | Ok | srv2;DfltDisklessStorPool |
| DfltDisklessStorPool | srv3 | DISKLESS | | | | False | Ok | srv3;DfltDisklessStorPool |
| data | srv1 | LVM | data | 100.00 GiB | 100.00 GiB | False | Ok | srv1;data |
| data | srv2 | LVM | data | 100.00 GiB | 100.00 GiB | False | Ok | srv2;data |
| data | srv3 | LVM | data | 100.00 GiB | 100.00 GiB | False | Ok | srv3;data |
+-------------------------------------------------------------------------------------------------------------------------------------+
```
Create default storage classes:
```yaml
kubectl create -f- <<EOT
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: linstor.csi.linbit.com
parameters:
linstor.csi.linbit.com/storagePool: "data"
linstor.csi.linbit.com/layerList: "storage"
linstor.csi.linbit.com/allowRemoteVolumeAccess: "false"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: replicated
provisioner: linstor.csi.linbit.com
parameters:
linstor.csi.linbit.com/storagePool: "data"
linstor.csi.linbit.com/autoPlace: "3"
linstor.csi.linbit.com/layerList: "drbd storage"
linstor.csi.linbit.com/allowRemoteVolumeAccess: "true"
property.linstor.csi.linbit.com/DrbdOptions/auto-quorum: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-no-data-accessible: suspend-io
property.linstor.csi.linbit.com/DrbdOptions/Resource/on-suspended-primary-outdated: force-secondary
property.linstor.csi.linbit.com/DrbdOptions/Net/rr-conflict: retry-connect
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOT
```
list storageclasses:
```bash
kubectl get storageclasses
```
example output:
```console
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local (default) linstor.csi.linbit.com Delete WaitForFirstConsumer true 11m
replicated linstor.csi.linbit.com Delete WaitForFirstConsumer true 11m
```
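To make sure provisioning works end to end, you can create a small throwaway PVC (an optional smoke test; the name is arbitrary). Because both classes use `WaitForFirstConsumer`, the claim will stay `Pending` until some pod references it, which is expected:
```yaml
kubectl create -f- <<EOT
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: replicated
  resources:
    requests:
      storage: 1Gi
EOT
```
Delete it afterwards with `kubectl delete pvc test-pvc`.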
#### Configure Networking interconnection
To access your services, select a range of unused IPs, e.g. `192.168.100.200-192.168.100.250`.
**Note:** These IPs should be in the same network as the nodes, or the nodes should have the routes needed to reach them.
Configure MetalLB to use and announce this range:
```yaml
kubectl create -f- <<EOT
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: cozystack
namespace: cozy-metallb
spec:
ipAddressPools:
- cozystack
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: cozystack
namespace: cozy-metallb
spec:
addresses:
- 192.168.100.200-192.168.100.250
autoAssign: true
avoidBuggyIPs: false
EOT
```
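You can verify that the pool and the advertisement were accepted before deploying anything that requests a LoadBalancer:
```bash
kubectl get ipaddresspools.metallb.io,l2advertisements.metallb.io -n cozy-metallb
```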
#### Setup basic applications
Get token from `tenant-root`:
```bash
kubectl get secret -n tenant-root tenant-root -o go-template='{{ printf "%s\n" (index .data "token" | base64decode) }}'
```
Enable port forward to cozy-dashboard:
```bash
kubectl port-forward -n cozy-dashboard svc/dashboard 8080:80
```
Open: http://localhost:8080/
- Select `tenant-root`
- Click `Upgrade` button
- Write into `host` the domain you wish to use as the parent domain for all deployed applications
**Note:**
- if you have no domain yet, you can use `192.168.100.200.nip.io`, where `192.168.100.200` is the first IP address in your allocated range.
- alternatively, you can leave the default value, but then you'll need to modify your `/etc/hosts` every time you want to access a specific application.
- Set `etcd`, `monitoring` and `ingress` to the enabled position (a declarative alternative is sketched right after this list)
- Click Deploy
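The form above effectively edits the values of the `tenant-root` HelmRelease, so the same change can be made declaratively. The sketch below assumes the tenant chart exposes `host`, `etcd`, `monitoring` and `ingress` values exactly as the dashboard form names them; verify the value names against the chart before relying on it:
```bash
# Hypothetical declarative equivalent of the dashboard form; the value names are an assumption
kubectl patch hr tenant-root -n tenant-root --type merge -p '{
  "spec": {"values": {"host": "example.org", "etcd": true, "monitoring": true, "ingress": true}}
}'
```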
Check persistent volumes provisioned:
```bash
kubectl get pvc -n tenant-root
```
example output:
```console
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
data-etcd-0 Bound pvc-4cbd29cc-a29f-453d-b412-451647cd04bf 10Gi RWO local <unset> 2m10s
data-etcd-1 Bound pvc-1579f95a-a69d-4a26-bcc2-b15ccdbede0d 10Gi RWO local <unset> 115s
data-etcd-2 Bound pvc-907009e5-88bf-4d18-91e7-b56b0dbfb97e 10Gi RWO local <unset> 91s
grafana-db-1 Bound pvc-7b3f4e23-228a-46fd-b820-d033ef4679af 10Gi RWO local <unset> 2m41s
grafana-db-2 Bound pvc-ac9b72a4-f40e-47e8-ad24-f50d843b55e4 10Gi RWO local <unset> 113s
vmselect-cachedir-vmselect-longterm-0 Bound pvc-622fa398-2104-459f-8744-565eee0a13f1 2Gi RWO local <unset> 2m21s
vmselect-cachedir-vmselect-longterm-1 Bound pvc-fc9349f5-02b2-4e25-8bef-6cbc5cc6d690 2Gi RWO local <unset> 2m21s
vmselect-cachedir-vmselect-shortterm-0 Bound pvc-7acc7ff6-6b9b-4676-bd1f-6867ea7165e2 2Gi RWO local <unset> 2m41s
vmselect-cachedir-vmselect-shortterm-1 Bound pvc-e514f12b-f1f6-40ff-9838-a6bda3580eb7 2Gi RWO local <unset> 2m40s
vmstorage-db-vmstorage-longterm-0 Bound pvc-e8ac7fc3-df0d-4692-aebf-9f66f72f9fef 10Gi RWO local <unset> 2m21s
vmstorage-db-vmstorage-longterm-1 Bound pvc-68b5ceaf-3ed1-4e5a-9568-6b95911c7c3a 10Gi RWO local <unset> 2m21s
vmstorage-db-vmstorage-shortterm-0 Bound pvc-cee3a2a4-5680-4880-bc2a-85c14dba9380 10Gi RWO local <unset> 2m41s
vmstorage-db-vmstorage-shortterm-1 Bound pvc-d55c235d-cada-4c4a-8299-e5fc3f161789 10Gi RWO local <unset> 2m41s
```
Check all pods are running:
```bash
kubectl get pod -n tenant-root
```
example output:
```console
NAME READY STATUS RESTARTS AGE
etcd-0 1/1 Running 0 2m1s
etcd-1 1/1 Running 0 106s
etcd-2 1/1 Running 0 82s
grafana-db-1 1/1 Running 0 119s
grafana-db-2 1/1 Running 0 13s
grafana-deployment-74b5656d6-5dcvn 1/1 Running 0 90s
grafana-deployment-74b5656d6-q5589 1/1 Running 1 (105s ago) 111s
root-ingress-controller-6ccf55bc6d-pg79l 2/2 Running 0 2m27s
root-ingress-controller-6ccf55bc6d-xbs6x 2/2 Running 0 2m29s
root-ingress-defaultbackend-686bcbbd6c-5zbvp 1/1 Running 0 2m29s
vmalert-vmalert-644986d5c-7hvwk 2/2 Running 0 2m30s
vmalertmanager-alertmanager-0 2/2 Running 0 2m32s
vmalertmanager-alertmanager-1 2/2 Running 0 2m31s
vminsert-longterm-75789465f-hc6cz 1/1 Running 0 2m10s
vminsert-longterm-75789465f-m2v4t 1/1 Running 0 2m12s
vminsert-shortterm-78456f8fd9-wlwww 1/1 Running 0 2m29s
vminsert-shortterm-78456f8fd9-xg7cw 1/1 Running 0 2m28s
vmselect-longterm-0 1/1 Running 0 2m12s
vmselect-longterm-1 1/1 Running 0 2m12s
vmselect-shortterm-0 1/1 Running 0 2m31s
vmselect-shortterm-1 1/1 Running 0 2m30s
vmstorage-longterm-0 1/1 Running 0 2m12s
vmstorage-longterm-1 1/1 Running 0 2m12s
vmstorage-shortterm-0 1/1 Running 0 2m32s
vmstorage-shortterm-1 1/1 Running 0 2m31s
```
Now you can get the public IP of the ingress controller:
```
kubectl get svc -n tenant-root root-ingress-controller
```
example output:
```console
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
root-ingress-controller LoadBalancer 10.96.16.141 192.168.100.200 80:31632/TCP,443:30113/TCP 3m33s
```
Use `grafana.example.org` (resolving to 192.168.100.200) to access system monitoring, where `example.org` is the domain you specified for `tenant-root`:
- login: `admin`
- password:
```bash
kubectl get secret -n tenant-root grafana-admin-password -o go-template='{{ printf "%s\n" (index .data "password" | base64decode) }}'
```
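If DNS for your parent domain is not set up yet, you can still reach the ingress by IP; for example, curl's `--resolve` option lets you test a virtual host without touching `/etc/hosts` (replace the domain and IP with your own):
```bash
curl -k --resolve grafana.example.org:443:192.168.100.200 https://grafana.example.org/
```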

View File

@@ -12,9 +12,6 @@ talos_version=$(awk '/^version:/ {print $2}' packages/core/installer/images/talo
set -x
sed -i "s|\(ghcr.io/aenix-io/cozystack/matchbox:\)v[^ ]\+|\1${version}|g" README.md
sed -i "s|\(ghcr.io/aenix-io/cozystack/talos:\)v[^ ]\+|\1${talos_version}|g" README.md
sed -i "/^TAG / s|=.*|= ${version}|" \
packages/apps/http-cache/Makefile \
packages/apps/kubernetes/Makefile \

View File

@@ -61,8 +61,6 @@ spec:
selector:
matchLabels:
app: cozystack
strategy:
type: Recreate
template:
metadata:
labels:
@@ -72,14 +70,26 @@
serviceAccountName: cozystack
containers:
- name: cozystack
image: "ghcr.io/aenix-io/cozystack/installer:v0.0.2"
image: "ghcr.io/aenix-io/cozystack/cozystack:v0.1.0"
env:
- name: KUBERNETES_SERVICE_HOST
value: localhost
- name: KUBERNETES_SERVICE_PORT
value: "7445"
- name: K8S_AWAIT_ELECTION_ENABLED
value: "1"
- name: K8S_AWAIT_ELECTION_NAME
value: cozystack
- name: K8S_AWAIT_ELECTION_LOCK_NAME
value: cozystack
- name: K8S_AWAIT_ELECTION_LOCK_NAMESPACE
value: cozy-system
- name: K8S_AWAIT_ELECTION_IDENTITY
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: darkhttpd
image: "ghcr.io/aenix-io/cozystack/installer:v0.0.2"
image: "ghcr.io/aenix-io/cozystack/cozystack:v0.1.0"
command:
- /usr/bin/darkhttpd
- /cozystack/assets

View File

@@ -2,7 +2,7 @@ PUSH := 1
LOAD := 0
REGISTRY := ghcr.io/aenix-io/cozystack
NGINX_CACHE_TAG = v0.1.0
TAG := v0.0.2
TAG := v0.2.0
image: image-nginx

View File

@@ -1,14 +1,4 @@
{
"containerimage.config.digest": "sha256:f4ad0559a74749de0d11b1835823bf9c95332962b0909450251d849113f22c19",
"containerimage.descriptor": {
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"digest": "sha256:3a0e8d791e0ccf681711766387ea9278e7d39f1956509cead2f72aa0001797ef",
"size": 1093,
"platform": {
"architecture": "amd64",
"os": "linux"
}
},
"containerimage.digest": "sha256:3a0e8d791e0ccf681711766387ea9278e7d39f1956509cead2f72aa0001797ef",
"image.name": "ghcr.io/aenix-io/cozystack/nginx-cache:v0.1.0,ghcr.io/aenix-io/cozystack/nginx-cache:v0.1.0-v0.0.2"
}
{
"containerimage.config.digest": "sha256:318fd8d0d6f6127387042f6ad150e87023d1961c7c5059dd5324188a54b0ab4e",
"containerimage.digest": "sha256:e3cf145238e6e45f7f13b9acaea445c94ff29f76a34ba9fa50828401a5a3cc68"
}

View File

@@ -1,7 +1,7 @@
PUSH := 1
LOAD := 0
REGISTRY := ghcr.io/aenix-io/cozystack
TAG := v0.0.2
TAG := v0.2.0
UBUNTU_CONTAINER_DISK_TAG = v1.29.1
image: image-ubuntu-container-disk

View File

@@ -1,4 +1,4 @@
{
"containerimage.config.digest": "sha256:e982cfa2320d3139ed311ae44bcc5ea18db7e4e76d2746e0af04c516288ff0f1",
"containerimage.digest": "sha256:34f6aba5b5a2afbb46bbb891ef4ddc0855c2ffe4f9e5a99e8e553286ddd2c070"
}
{
"containerimage.config.digest": "sha256:ee8968be63c7c45621ec45f3687211e0875acb24e8d9784e8d2ebcbf46a3538c",
"containerimage.digest": "sha256:16c3c07e74212585786dc1f1ae31d3ab90a575014806193e8e37d1d7751cb084"
}

View File

@@ -3,7 +3,7 @@ NAME=installer
PUSH := 1
LOAD := 0
REGISTRY := ghcr.io/aenix-io/cozystack
TAG := v0.0.2
TAG := v0.2.0
TALOS_VERSION=$(shell awk '/^version:/ {print $$2}' images/talos/profiles/installer.yaml)
show:
@@ -18,18 +18,19 @@ diff:
update:
hack/gen-profiles.sh
image: image-installer image-talos image-matchbox
image: image-cozystack image-talos image-matchbox
image-installer:
image-cozystack:
docker buildx build -f images/installer/Dockerfile ../../.. \
make -C ../../.. repos
docker buildx build -f images/cozystack/Dockerfile ../../.. \
--provenance false \
--tag $(REGISTRY)/installer:$(TAG) \
--tag $(REGISTRY)/cozystack:$(TAG) \
--cache-from type=registry,ref=$(REGISTRY)/installer:$(TAG) \
--cache-from type=registry,ref=$(REGISTRY)/cozystack:$(TAG) \
--cache-to type=inline \
--metadata-file images/installer.json \
--metadata-file images/cozystack.json \
--push=$(PUSH) \
--load=$(LOAD)
echo "$(REGISTRY)/installer:$(TAG)" > images/installer.tag
echo "$(REGISTRY)/cozystack:$(TAG)" > images/cozystack.tag
image-talos:
test -f ../../../_out/assets/installer-amd64.tar || make talos-installer
@@ -43,14 +44,18 @@ image-matchbox:
docker buildx build -f images/matchbox/Dockerfile ../../.. \
--provenance false \
--tag $(REGISTRY)/matchbox:$(TAG) \
--cache-from type=registry,ref=$(REGISTRY)/matchbox:$(TAG) \
--tag $(REGISTRY)/matchbox:$(TALOS_VERSION)-$(TAG) \
--cache-from type=registry,ref=$(REGISTRY)/matchbox:$(TALOS_VERSION) \
--cache-to type=inline \
--metadata-file images/matchbox.json \
--push=$(PUSH) \
--load=$(LOAD)
echo "$(REGISTRY)/matchbox:$(TAG)" > images/matchbox.tag
echo "$(REGISTRY)/matchbox:$(TALOS_VERSION)" > images/matchbox.tag
assets: talos-iso
talos-initramfs talos-kernel talos-installer talos-iso:
cat images/talos/profiles/$(subst talos-,,$@).yaml | docker run --rm -i -v $${PWD}/../../../_out/assets:/out -v /dev:/dev --privileged "ghcr.io/siderolabs/imager:$(TALOS_VERSION)" -
mkdir -p ../../../_out/assets
cat images/talos/profiles/$(subst talos-,,$@).yaml | \
docker run --rm -i -v /dev:/dev --privileged "ghcr.io/siderolabs/imager:$(TALOS_VERSION)" --tar-to-stdout - | \
tar -C ../../../_out/assets -xzf-

View File

@@ -0,0 +1,4 @@
{
"containerimage.config.digest": "sha256:ec8a4983a663f06a1503507482667a206e83e0d8d3663dff60ced9221855d6b0",
"containerimage.digest": "sha256:abb7b2fbc1f143c922f2a35afc4423a74b2b63c0bddfe620750613ed835aa861"
}

View File

@@ -0,0 +1 @@
ghcr.io/aenix-io/cozystack/cozystack:v0.1.0

View File

@@ -1,3 +1,15 @@
FROM golang:alpine3.19 as k8s-await-election-builder
ARG K8S_AWAIT_ELECTION_GITREPO=https://github.com/LINBIT/k8s-await-election
ARG K8S_AWAIT_ELECTION_VERSION=0.4.1
RUN apk add --no-cache git make
RUN git clone ${K8S_AWAIT_ELECTION_GITREPO} /usr/local/go/k8s-await-election/ \
&& cd /usr/local/go/k8s-await-election \
&& git reset --hard v${K8S_AWAIT_ELECTION_VERSION} \
&& make \
&& mv ./out/k8s-await-election-amd64 /k8s-await-election
FROM alpine:3.19 AS builder
RUN apk add --no-cache make git
@@ -18,7 +30,8 @@ COPY scripts /cozystack/scripts
COPY --from=builder /src/packages/core /cozystack/packages/core
COPY --from=builder /src/packages/system /cozystack/packages/system
COPY --from=builder /src/_out/repos /cozystack/assets/repos
COPY --from=k8s-await-election-builder /k8s-await-election /usr/bin/k8s-await-election
COPY dashboards /cozystack/assets/dashboards
WORKDIR /cozystack
ENTRYPOINT [ "/cozystack/scripts/installer.sh" ]
ENTRYPOINT ["/usr/bin/k8s-await-election", "/cozystack/scripts/installer.sh" ]

View File

@@ -1,14 +0,0 @@
{
"containerimage.config.digest": "sha256:5c7f51a9cbc945c13d52157035eba6ba4b6f3b68b76280f8e64b4f6ba239db1a",
"containerimage.descriptor": {
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"digest": "sha256:7cda3480faf0539ed4a3dd252aacc7a997645d3a390ece377c36cf55f9e57e11",
"size": 2074,
"platform": {
"architecture": "amd64",
"os": "linux"
}
},
"containerimage.digest": "sha256:7cda3480faf0539ed4a3dd252aacc7a997645d3a390ece377c36cf55f9e57e11",
"image.name": "ghcr.io/aenix-io/cozystack/installer:v0.0.2"
}

View File

@@ -1 +0,0 @@
ghcr.io/aenix-io/cozystack/installer:v0.0.2

View File

@@ -1,14 +1,4 @@
{
"containerimage.config.digest": "sha256:cb8cb211017e51f6eb55604287c45cbf6ed8add5df482aaebff3d493a11b5a76",
"containerimage.descriptor": {
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"digest": "sha256:3be72cdce2f4ab4886a70fb7b66e4518a1fe4ba0771319c96fa19a0d6f409602",
"size": 1488,
"platform": {
"architecture": "amd64",
"os": "linux"
}
},
"containerimage.digest": "sha256:3be72cdce2f4ab4886a70fb7b66e4518a1fe4ba0771319c96fa19a0d6f409602",
"image.name": "ghcr.io/aenix-io/cozystack/matchbox:v0.0.2"
}
{
"containerimage.config.digest": "sha256:b869a6324f9c0e6d1dd48eee67cbe3842ee14efd59bdde477736ad2f90568ff7",
"containerimage.digest": "sha256:c30b237c5fa4fbbe47e1aba56e8f99569fe865620aa1953f31fc373794123cd7"
}

View File

@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/matchbox:v0.0.2
ghcr.io/aenix-io/cozystack/matchbox:v1.6.4

View File

@@ -41,8 +41,6 @@ spec:
selector:
matchLabels:
app: cozystack
strategy:
type: Recreate
template:
metadata:
labels:
@@ -52,14 +50,26 @@
serviceAccountName: cozystack
containers:
- name: cozystack
image: "{{ .Files.Get "images/installer.tag" | trim }}@{{ index (.Files.Get "images/installer.json" | fromJson) "containerimage.digest" }}"
image: "{{ .Files.Get "images/cozystack.tag" | trim }}@{{ index (.Files.Get "images/cozystack.json" | fromJson) "containerimage.digest" }}"
env:
- name: KUBERNETES_SERVICE_HOST
value: localhost
- name: KUBERNETES_SERVICE_PORT
value: "7445"
- name: K8S_AWAIT_ELECTION_ENABLED
value: "1"
- name: K8S_AWAIT_ELECTION_NAME
value: cozystack
- name: K8S_AWAIT_ELECTION_LOCK_NAME
value: cozystack
- name: K8S_AWAIT_ELECTION_LOCK_NAMESPACE
value: cozy-system
- name: K8S_AWAIT_ELECTION_IDENTITY
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: darkhttpd
image: "{{ .Files.Get "images/installer.tag" | trim }}@{{ index (.Files.Get "images/installer.json" | fromJson) "containerimage.digest" }}"
image: "{{ .Files.Get "images/cozystack.tag" | trim }}@{{ index (.Files.Get "images/cozystack.json" | fromJson) "containerimage.digest" }}"
command:
- /usr/bin/darkhttpd
- /cozystack/assets

View File

@@ -16,4 +16,4 @@ namespaces-apply:
helm template -n $(NAMESPACE) $(NAME) . --dry-run=server $(API_VERSIONS_FLAGS) -s templates/namespaces.yaml | kubectl apply -f-
diff:
helm template -n $(NAMESPACE) $(NAME) . --dry-run=server $(API_VERSIONS_FLAGS) -s templates/namespaces.yaml | kubectl diff -f-
helm template -n $(NAMESPACE) $(NAME) . --dry-run=server $(API_VERSIONS_FLAGS) | kubectl diff -f-

View File

@@ -0,0 +1,96 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
releases:
- name: cilium
releaseName: cilium
chart: cozy-cilium
namespace: cozy-cilium
privileged: true
dependsOn: []
- name: fluxcd
releaseName: fluxcd
chart: cozy-fluxcd
namespace: cozy-fluxcd
dependsOn: [cilium]
- name: cert-manager
releaseName: cert-manager
chart: cozy-cert-manager
namespace: cozy-cert-manager
dependsOn: [cilium]
- name: cert-manager-issuers
releaseName: cert-manager-issuers
chart: cozy-cert-manager-issuers
namespace: cozy-cert-manager
dependsOn: [cilium,cert-manager]
- name: victoria-metrics-operator
releaseName: victoria-metrics-operator
chart: cozy-victoria-metrics-operator
namespace: cozy-victoria-metrics-operator
dependsOn: [cilium,cert-manager]
- name: monitoring
releaseName: monitoring
chart: cozy-monitoring
namespace: cozy-monitoring
privileged: true
dependsOn: [cilium,victoria-metrics-operator]
- name: metallb
releaseName: metallb
chart: cozy-metallb
namespace: cozy-metallb
privileged: true
dependsOn: [cilium]
- name: grafana-operator
releaseName: grafana-operator
chart: cozy-grafana-operator
namespace: cozy-grafana-operator
dependsOn: [cilium]
- name: mariadb-operator
releaseName: mariadb-operator
chart: cozy-mariadb-operator
namespace: cozy-mariadb-operator
dependsOn: [cilium,cert-manager,victoria-metrics-operator]
- name: postgres-operator
releaseName: postgres-operator
chart: cozy-postgres-operator
namespace: cozy-postgres-operator
dependsOn: [cilium,cert-manager]
- name: rabbitmq-operator
releaseName: rabbitmq-operator
chart: cozy-rabbitmq-operator
namespace: cozy-rabbitmq-operator
dependsOn: [cilium]
- name: redis-operator
releaseName: redis-operator
chart: cozy-redis-operator
namespace: cozy-redis-operator
dependsOn: [cilium]
- name: piraeus-operator
releaseName: piraeus-operator
chart: cozy-piraeus-operator
namespace: cozy-linstor
dependsOn: [cilium,cert-manager]
- name: linstor
releaseName: linstor
chart: cozy-linstor
namespace: cozy-linstor
privileged: true
dependsOn: [piraeus-operator,cilium,cert-manager]
- name: telepresence
releaseName: traffic-manager
chart: cozy-telepresence
namespace: cozy-telepresence
dependsOn: [kubeovn]
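The bundle that gets rendered is selected with the `bundle-name` key of the `cozystack` ConfigMap, individual releases can be switched off with `bundle-disable`, and per-release values can be overridden with `values-<release>` keys (these keys are read by the helmreleases template further below). A hypothetical ConfigMap combining these options might look like the sketch below; the bundle name and the override payload are illustrative only:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cozystack
  namespace: cozy-system
data:
  bundle-name: "paas-full"        # assumption: use the name of one of the bundle files (without .yaml)
  bundle-disable: "telepresence"  # comma-separated list of releases to skip
  values-monitoring: |            # illustrative override merged into the monitoring release values
    exampleKey: exampleValue
  cluster-name: "cozystack"
  ipv4-pod-cidr: "10.244.0.0/16"
  ipv4-pod-gateway: "10.244.0.1"
  ipv4-svc-cidr: "10.96.0.0/16"
  ipv4-join-cidr: "100.64.0.0/16"
```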

View File

@@ -0,0 +1,177 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
releases:
- name: cilium
releaseName: cilium
chart: cozy-cilium
namespace: cozy-cilium
privileged: true
dependsOn: []
- name: kubeovn
releaseName: kubeovn
chart: cozy-kubeovn
namespace: cozy-kubeovn
privileged: true
dependsOn: [cilium]
values:
cozystack:
nodesHash: {{ include "cozystack.master-node-ips" . | sha256sum }}
kube-ovn:
ipv4:
POD_CIDR: "{{ index $cozyConfig.data "ipv4-pod-cidr" }}"
POD_GATEWAY: "{{ index $cozyConfig.data "ipv4-pod-gateway" }}"
SVC_CIDR: "{{ index $cozyConfig.data "ipv4-svc-cidr" }}"
JOIN_CIDR: "{{ index $cozyConfig.data "ipv4-join-cidr" }}"
- name: fluxcd
releaseName: fluxcd
chart: cozy-fluxcd
namespace: cozy-fluxcd
dependsOn: [cilium,kubeovn]
- name: cert-manager
releaseName: cert-manager
chart: cozy-cert-manager
namespace: cozy-cert-manager
dependsOn: [cilium,kubeovn]
- name: cert-manager-issuers
releaseName: cert-manager-issuers
chart: cozy-cert-manager-issuers
namespace: cozy-cert-manager
dependsOn: [cilium,kubeovn,cert-manager]
- name: victoria-metrics-operator
releaseName: victoria-metrics-operator
chart: cozy-victoria-metrics-operator
namespace: cozy-victoria-metrics-operator
dependsOn: [cilium,kubeovn,cert-manager]
- name: monitoring
releaseName: monitoring
chart: cozy-monitoring
namespace: cozy-monitoring
privileged: true
dependsOn: [cilium,kubeovn,victoria-metrics-operator]
- name: kubevirt-operator
releaseName: kubevirt-operator
chart: cozy-kubevirt-operator
namespace: cozy-kubevirt
dependsOn: [cilium,kubeovn]
- name: kubevirt
releaseName: kubevirt
chart: cozy-kubevirt
namespace: cozy-kubevirt
privileged: true
dependsOn: [cilium,kubeovn,kubevirt-operator]
- name: kubevirt-cdi-operator
releaseName: kubevirt-cdi-operator
chart: cozy-kubevirt-cdi-operator
namespace: cozy-kubevirt-cdi
dependsOn: [cilium,kubeovn]
- name: kubevirt-cdi
releaseName: kubevirt-cdi
chart: cozy-kubevirt-cdi
namespace: cozy-kubevirt-cdi
dependsOn: [cilium,kubeovn,kubevirt-cdi-operator]
- name: metallb
releaseName: metallb
chart: cozy-metallb
namespace: cozy-metallb
privileged: true
dependsOn: [cilium,kubeovn]
- name: grafana-operator
releaseName: grafana-operator
chart: cozy-grafana-operator
namespace: cozy-grafana-operator
dependsOn: [cilium,kubeovn]
- name: mariadb-operator
releaseName: mariadb-operator
chart: cozy-mariadb-operator
namespace: cozy-mariadb-operator
dependsOn: [cilium,kubeovn,cert-manager,victoria-metrics-operator]
- name: postgres-operator
releaseName: postgres-operator
chart: cozy-postgres-operator
namespace: cozy-postgres-operator
dependsOn: [cilium,kubeovn,cert-manager]
- name: rabbitmq-operator
releaseName: rabbitmq-operator
chart: cozy-rabbitmq-operator
namespace: cozy-rabbitmq-operator
dependsOn: [cilium,kubeovn]
- name: redis-operator
releaseName: redis-operator
chart: cozy-redis-operator
namespace: cozy-redis-operator
dependsOn: [cilium,kubeovn]
- name: piraeus-operator
releaseName: piraeus-operator
chart: cozy-piraeus-operator
namespace: cozy-linstor
dependsOn: [cilium,kubeovn,cert-manager]
- name: linstor
releaseName: linstor
chart: cozy-linstor
namespace: cozy-linstor
privileged: true
dependsOn: [piraeus-operator,cilium,kubeovn,cert-manager]
- name: telepresence
releaseName: traffic-manager
chart: cozy-telepresence
namespace: cozy-telepresence
dependsOn: [cilium,kubeovn]
- name: dashboard
releaseName: dashboard
chart: cozy-dashboard
namespace: cozy-dashboard
dependsOn: [cilium,kubeovn]
{{- if .Capabilities.APIVersions.Has "source.toolkit.fluxcd.io/v1beta2" }}
{{- with (lookup "source.toolkit.fluxcd.io/v1beta2" "HelmRepository" "cozy-public" "").items }}
values:
kubeapps:
redis:
master:
podAnnotations:
{{- range $index, $repo := . }}
{{- with (($repo.status).artifact).revision }}
repository.cozystack.io/{{ $repo.metadata.name }}: {{ quote . }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
- name: kamaji
releaseName: kamaji
chart: cozy-kamaji
namespace: cozy-kamaji
dependsOn: [cilium,kubeovn,cert-manager]
- name: capi-operator
releaseName: capi-operator
chart: cozy-capi-operator
namespace: cozy-cluster-api
privileged: true
dependsOn: [cilium,kubeovn,cert-manager]
- name: capi-providers
releaseName: capi-providers
chart: cozy-capi-providers
namespace: cozy-cluster-api
privileged: true
dependsOn: [cilium,kubeovn,capi-operator]

View File

@@ -0,0 +1,69 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
releases:
- name: fluxcd
releaseName: fluxcd
chart: cozy-fluxcd
namespace: cozy-fluxcd
dependsOn: []
- name: cert-manager
releaseName: cert-manager
chart: cozy-cert-manager
namespace: cozy-cert-manager
dependsOn: []
- name: cert-manager-issuers
releaseName: cert-manager-issuers
chart: cozy-cert-manager-issuers
namespace: cozy-cert-manager
dependsOn: [cert-manager]
- name: victoria-metrics-operator
releaseName: victoria-metrics-operator
chart: cozy-victoria-metrics-operator
namespace: cozy-victoria-metrics-operator
dependsOn: [cert-manager]
- name: monitoring
releaseName: monitoring
chart: cozy-monitoring
namespace: cozy-monitoring
privileged: true
dependsOn: [victoria-metrics-operator]
- name: grafana-operator
releaseName: grafana-operator
chart: cozy-grafana-operator
namespace: cozy-grafana-operator
dependsOn: []
- name: mariadb-operator
releaseName: mariadb-operator
chart: cozy-mariadb-operator
namespace: cozy-mariadb-operator
dependsOn: [victoria-metrics-operator]
- name: postgres-operator
releaseName: postgres-operator
chart: cozy-postgres-operator
namespace: cozy-postgres-operator
dependsOn: [cert-manager]
- name: rabbitmq-operator
releaseName: rabbitmq-operator
chart: cozy-rabbitmq-operator
namespace: cozy-rabbitmq-operator
dependsOn: []
- name: redis-operator
releaseName: redis-operator
chart: cozy-redis-operator
namespace: cozy-redis-operator
dependsOn: []
- name: telepresence
releaseName: traffic-manager
chart: cozy-telepresence
namespace: cozy-telepresence
dependsOn: []

View File

@@ -0,0 +1,95 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
releases:
- name: fluxcd
releaseName: fluxcd
chart: cozy-fluxcd
namespace: cozy-fluxcd
dependsOn: []
- name: cert-manager
releaseName: cert-manager
chart: cozy-cert-manager
namespace: cozy-cert-manager
dependsOn: []
- name: cert-manager-issuers
releaseName: cert-manager-issuers
chart: cozy-cert-manager-issuers
namespace: cozy-cert-manager
dependsOn: [cert-manager]
- name: victoria-metrics-operator
releaseName: victoria-metrics-operator
chart: cozy-victoria-metrics-operator
namespace: cozy-victoria-metrics-operator
dependsOn: [cert-manager]
- name: monitoring
releaseName: monitoring
chart: cozy-monitoring
namespace: cozy-monitoring
privileged: true
dependsOn: [victoria-metrics-operator]
- name: grafana-operator
releaseName: grafana-operator
chart: cozy-grafana-operator
namespace: cozy-grafana-operator
dependsOn: []
- name: mariadb-operator
releaseName: mariadb-operator
chart: cozy-mariadb-operator
namespace: cozy-mariadb-operator
dependsOn: [cert-manager,victoria-metrics-operator]
- name: postgres-operator
releaseName: postgres-operator
chart: cozy-postgres-operator
namespace: cozy-postgres-operator
dependsOn: [cert-manager]
- name: rabbitmq-operator
releaseName: rabbitmq-operator
chart: cozy-rabbitmq-operator
namespace: cozy-rabbitmq-operator
dependsOn: []
- name: redis-operator
releaseName: redis-operator
chart: cozy-redis-operator
namespace: cozy-redis-operator
dependsOn: []
- name: piraeus-operator
releaseName: piraeus-operator
chart: cozy-piraeus-operator
namespace: cozy-linstor
dependsOn: [cert-manager]
- name: telepresence
releaseName: traffic-manager
chart: cozy-telepresence
namespace: cozy-telepresence
dependsOn: []
- name: dashboard
releaseName: dashboard
chart: cozy-dashboard
namespace: cozy-dashboard
dependsOn: []
{{- if .Capabilities.APIVersions.Has "source.toolkit.fluxcd.io/v1beta2" }}
{{- with (lookup "source.toolkit.fluxcd.io/v1beta2" "HelmRepository" "cozy-public" "").items }}
values:
kubeapps:
redis:
master:
podAnnotations:
{{- range $index, $repo := . }}
{{- with (($repo.status).artifact).revision }}
repository.cozystack.io/{{ $repo.metadata.name }}: {{ quote . }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -1,7 +1,7 @@
{{/*
Get IP-addresses of master nodes
*/}}
{{- define "master.nodeIPs" -}}
{{- define "cozystack.master-node-ips" -}}
{{- $nodes := lookup "v1" "Node" "" "" -}}
{{- $ips := list -}}
{{- range $node := $nodes.items -}}

View File

@@ -1,38 +1,27 @@
{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
{{- $bundleName := index $cozyConfig.data "bundle-name" }}
{{- $bundle := tpl (.Files.Get (printf "bundles/%s.yaml" $bundleName)) . | fromYaml }}
{{- $dependencyNamespaces := dict }}
{{- $disabledComponents := splitList "," ((index $cozyConfig.data "bundle-disable") | default "") }}

{{/* collect dependency namespaces from releases */}}
{{- range $x := $bundle.releases }}
{{- $_ := set $dependencyNamespaces $x.name $x.namespace }}
{{- end }}

{{- range $x := $bundle.releases }}
{{- if not (has $x.name $disabledComponents) }}
---
apiVersion: helm.toolkit.fluxcd.io/v2beta2
kind: HelmRelease
metadata:
name: {{ $x.name }}
namespace: {{ $x.namespace }}
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: {{ $x.releaseName | default $x.name }}
install:
remediation:
retries: -1
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: cilium
namespace: cozy-cilium
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: cilium
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-cilium
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: kubeovn
namespace: cozy-kubeovn
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: kubeovn
install:
remediation:
retries: -1
@@ -41,704 +30,31 @@ spec:
retries: -1
chart:
spec:
chart: {{ $x.chart }}
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
{{- $values := dict }}
{{- with $x.values }}
{{- $values = merge . $values }}
{{- end }}
{{- with index $cozyConfig.data (printf "values-%s" $x.name) }}
{{- $values = merge (fromYaml .) $values }}
{{- end }}
{{- with $values }}
values:
{{- toYaml . | nindent 4}}
{{- end }}
{{- with $x.dependsOn }}
dependsOn:
{{- range $dep := . }}
{{- if not (has $dep $disabledComponents) }}
- name: {{ $dep }}
namespace: {{ index $dependencyNamespaces $dep }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
retries: -1
chart:
spec:
chart: cozy-kubeovn
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
values:
cozystack:
configHash: {{ index (lookup "v1" "ConfigMap" "cozy-system" "cozystack") "data" | toJson | sha256sum }}
nodesHash: {{ include "master.nodeIPs" . | sha256sum }}
dependsOn:
- name: cilium
namespace: cozy-cilium
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: cozy-fluxcd
namespace: cozy-fluxcd
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: fluxcd
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-fluxcd
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: cert-manager
namespace: cozy-cert-manager
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: cert-manager
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-cert-manager
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: cert-manager-issuers
namespace: cozy-cert-manager
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: cert-manager-issuers
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-cert-manager-issuers
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: cert-manager
namespace: cozy-cert-manager
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: victoria-metrics-operator
namespace: cozy-victoria-metrics-operator
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: victoria-metrics-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-victoria-metrics-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: cert-manager
namespace: cozy-cert-manager
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: monitoring
namespace: cozy-monitoring
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: monitoring
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-monitoring
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: victoria-metrics-operator
namespace: cozy-victoria-metrics-operator
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: kubevirt-operator
namespace: cozy-kubevirt
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: kubevirt-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-kubevirt-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: kubevirt
namespace: cozy-kubevirt
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: kubevirt
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-kubevirt
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: kubevirt-operator
namespace: cozy-kubevirt
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: kubevirt-cdi-operator
namespace: cozy-kubevirt-cdi
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: kubevirt-cdi-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-kubevirt-cdi-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: kubevirt-cdi
namespace: cozy-kubevirt-cdi
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: kubevirt-cdi
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-kubevirt-cdi
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: kubevirt-cdi-operator
namespace: cozy-kubevirt-cdi
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: metallb
namespace: cozy-metallb
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: metallb
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-metallb
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: grafana-operator
namespace: cozy-grafana-operator
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: grafana-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-grafana-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: mariadb-operator
namespace: cozy-mariadb-operator
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: mariadb-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-mariadb-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: cert-manager
namespace: cozy-cert-manager
- name: victoria-metrics-operator
namespace: cozy-victoria-metrics-operator
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: postgres-operator
namespace: cozy-postgres-operator
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: postgres-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-postgres-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: cert-manager
namespace: cozy-cert-manager
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: rabbitmq-operator
namespace: cozy-rabbitmq-operator
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: rabbitmq-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-rabbitmq-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: redis-operator
namespace: cozy-redis-operator
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: redis-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-redis-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: piraeus-operator
namespace: cozy-linstor
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: piraeus-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-piraeus-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: cert-manager
namespace: cozy-cert-manager
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: linstor
namespace: cozy-linstor
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: linstor
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-linstor
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: piraeus-operator
namespace: cozy-linstor
- name: cert-manager
namespace: cozy-cert-manager
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: telepresence
namespace: cozy-telepresence
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: traffic-manager
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-telepresence
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: dashboard
namespace: cozy-dashboard
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: dashboard
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-dashboard
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: kamaji
namespace: cozy-kamaji
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: kamaji
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-kamaji
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: cert-manager
namespace: cozy-cert-manager
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: capi-operator
namespace: cozy-cluster-api
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: capi-operator
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-capi-operator
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
- name: cert-manager
namespace: cozy-cert-manager
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: capi-providers
namespace: cozy-cluster-api
labels:
cozystack.io/repository: system
spec:
interval: 1m
releaseName: capi-providers
install:
remediation:
retries: -1
upgrade:
remediation:
retries: -1
chart:
spec:
chart: cozy-capi-providers
reconcileStrategy: Revision
sourceRef:
kind: HelmRepository
name: cozystack-system
namespace: cozy-system
dependsOn:
- name: capi-operator
namespace: cozy-cluster-api
- name: cilium
namespace: cozy-cilium
- name: kubeovn
namespace: cozy-kubeovn
@@ -1,13 +1,29 @@
-{{- range $ns := .Values.namespaces }}
+{{- $cozyConfig := lookup "v1" "ConfigMap" "cozy-system" "cozystack" }}
+{{- $bundleName := index $cozyConfig.data "bundle-name" }}
+{{- $bundle := tpl (.Files.Get (printf "bundles/%s.yaml" $bundleName)) . | fromYaml }}
+{{- $namespaces := dict }}
+{{/* collect namespaces from releases */}}
+{{- range $x := $bundle.releases }}
+{{- if not (hasKey $namespaces $x.namespace) }}
+{{- $_ := set $namespaces $x.namespace false }}
+{{- end }}
+{{/* if at least one release requires a privileged namespace, then it should be privileged */}}
+{{- if or $x.privileged (index $namespaces $x.namespace) }}
+{{- $_ := set $namespaces $x.namespace true }}
+{{- end }}
+{{- end }}
+{{- range $namespace, $privileged := $namespaces }}
 ---
 apiVersion: v1
 kind: Namespace
 metadata:
   annotations:
     "helm.sh/resource-policy": keep
-  {{- if $ns.privileged }}
+  {{- if $privileged }}
   labels:
     pod-security.kubernetes.io/enforce: privileged
   {{- end }}
-  name: {{ $ns.name }}
+  name: {{ $namespace }}
 {{- end }}
@@ -1,30 +0,0 @@
namespaces:
- name: cozy-public
- name: cozy-system
  privileged: true
- name: cozy-cert-manager
- name: cozy-cilium
  privileged: true
- name: cozy-fluxcd
- name: cozy-grafana-operator
- name: cozy-kamaji
- name: cozy-cluster-api
  privileged: true # for capk only
- name: cozy-dashboard
- name: cozy-kubeovn
  privileged: true
- name: cozy-kubevirt
  privileged: true
- name: cozy-kubevirt-cdi
- name: cozy-linstor
  privileged: true
- name: cozy-mariadb-operator
- name: cozy-metallb
  privileged: true
- name: cozy-monitoring
  privileged: true
- name: cozy-postgres-operator
- name: cozy-rabbitmq-operator
- name: cozy-redis-operator
- name: cozy-telepresence
- name: cozy-victoria-metrics-operator
@@ -1,131 +1,88 @@
 annotations:
-  artifacthub.io/crds: |
-    - kind: CiliumNetworkPolicy
-      version: v2
-      name: ciliumnetworkpolicies.cilium.io
-      displayName: Cilium Network Policy
-      description: |
-        Cilium Network Policies provide additional functionality beyond what
-        is provided by standard Kubernetes NetworkPolicy such as the ability
-        to allow traffic based on FQDNs, or to filter at Layer 7.
-    - kind: CiliumClusterwideNetworkPolicy
-      version: v2
-      name: ciliumclusterwidenetworkpolicies.cilium.io
-      displayName: Cilium Clusterwide Network Policy
-      description: |
-        Cilium Clusterwide Network Policies support configuring network traffic
-        policiies across the entire cluster, including applying node firewalls.
-    - kind: CiliumExternalWorkload
-      version: v2
-      name: ciliumexternalworkloads.cilium.io
-      displayName: Cilium External Workload
-      description: |
-        Cilium External Workload supports configuring the ability for external
-        non-Kubernetes workloads to join the cluster.
-    - kind: CiliumLocalRedirectPolicy
-      version: v2
-      name: ciliumlocalredirectpolicies.cilium.io
-      displayName: Cilium Local Redirect Policy
-      description: |
-        Cilium Local Redirect Policy allows local redirects to be configured
-        within a node to support use cases like Node-Local DNS or KIAM.
-    - kind: CiliumNode
-      version: v2
-      name: ciliumnodes.cilium.io
-      displayName: Cilium Node
-      description: |
-        Cilium Node represents a node managed by Cilium. It contains a
-        specification to control various node specific configuration aspects
-        and a status section to represent the status of the node.
-    - kind: CiliumIdentity
-      version: v2
-      name: ciliumidentities.cilium.io
-      displayName: Cilium Identity
-      description: |
-        Cilium Identity allows introspection into security identities that
-        Cilium allocates which identify sets of labels that are assigned to
-        individual endpoints in the cluster.
-    - kind: CiliumEndpoint
-      version: v2
-      name: ciliumendpoints.cilium.io
-      displayName: Cilium Endpoint
-      description: |
-        Cilium Endpoint represents the status of individual pods or nodes in
-        the cluster which are managed by Cilium, including enforcement status,
-        IP addressing and whether the networking is succesfully operational.
-    - kind: CiliumEndpointSlice
-      version: v2alpha1
-      name: ciliumendpointslices.cilium.io
-      displayName: Cilium Endpoint Slice
-      description: |
-        Cilium Endpoint Slice represents the status of groups of pods or nodes
-        in the cluster which are managed by Cilium, including enforcement status,
-        IP addressing and whether the networking is succesfully operational.
-    - kind: CiliumEgressGatewayPolicy
-      version: v2
-      name: ciliumegressgatewaypolicies.cilium.io
-      displayName: Cilium Egress Gateway Policy
-      description: |
-        Cilium Egress Gateway Policy provides control over the way that traffic
-        leaves the cluster and which source addresses to use for that traffic.
-    - kind: CiliumClusterwideEnvoyConfig
-      version: v2
-      name: ciliumclusterwideenvoyconfigs.cilium.io
-      displayName: Cilium Clusterwide Envoy Config
-      description: |
-        Cilium Clusterwide Envoy Config specifies Envoy resources and K8s service mappings
-        to be provisioned into Cilium host proxy instances in cluster context.
-    - kind: CiliumEnvoyConfig
-      version: v2
-      name: ciliumenvoyconfigs.cilium.io
-      displayName: Cilium Envoy Config
-      description: |
-        Cilium Envoy Config specifies Envoy resources and K8s service mappings
-        to be provisioned into Cilium host proxy instances in namespace context.
-    - kind: CiliumBGPPeeringPolicy
-      version: v2alpha1
-      name: ciliumbgppeeringpolicies.cilium.io
-      displayName: Cilium BGP Peering Policy
-      description: |
-        Cilium BGP Peering Policy instructs Cilium to create specific BGP peering
-        configurations.
-    - kind: CiliumLoadBalancerIPPool
-      version: v2alpha1
-      name: ciliumloadbalancerippools.cilium.io
-      displayName: Cilium Load Balancer IP Pool
-      description: |
-        Defining a Cilium Load Balancer IP Pool instructs Cilium to assign IPs to LoadBalancer Services.
-    - kind: CiliumNodeConfig
-      version: v2alpha1
-      name: ciliumnodeconfigs.cilium.io
-      displayName: Cilium Node Configuration
-      description: |
-        CiliumNodeConfig is a list of configuration key-value pairs. It is applied to
-        nodes indicated by a label selector.
-    - kind: CiliumCIDRGroup
-      version: v2alpha1
-      name: ciliumcidrgroups.cilium.io
-      displayName: Cilium CIDR Group
-      description: |
-        CiliumCIDRGroup is a list of CIDRs that can be referenced as a single entity from CiliumNetworkPolicies.
-    - kind: CiliumL2AnnouncementPolicy
-      version: v2alpha1
-      name: ciliuml2announcementpolicies.cilium.io
-      displayName: Cilium L2 Announcement Policy
-      description: |
-        CiliumL2AnnouncementPolicy is a policy which determines which service IPs will be announced to
-        the local area network, by which nodes, and via which interfaces.
-    - kind: CiliumPodIPPool
-      version: v2alpha1
-      name: ciliumpodippools.cilium.io
-      displayName: Cilium Pod IP Pool
-      description: |
-        CiliumPodIPPool defines an IP pool that can be used for pooled IPAM (i.e. the multi-pool IPAM mode).
+  artifacthub.io/crds: "- kind: CiliumNetworkPolicy\n version: v2\n name: ciliumnetworkpolicies.cilium.io\n displayName: Cilium Network Policy\n description: |\n Cilium Network Policies provide additional functionality beyond what\n is provided by standard Kubernetes NetworkPolicy such as the ability\n to allow traffic based on FQDNs, or to filter at Layer 7.\n
+    - kind: CiliumClusterwideNetworkPolicy\n version: v2\n name: ciliumclusterwidenetworkpolicies.cilium.io\n displayName: Cilium Clusterwide Network Policy\n description: |\n Cilium Clusterwide Network Policies support configuring network traffic\n policiies across the entire cluster, including applying node firewalls.\n
+    - kind: CiliumExternalWorkload\n version: v2\n name: ciliumexternalworkloads.cilium.io\n displayName: Cilium External Workload\n description: |\n Cilium External Workload supports configuring the ability for external\n non-Kubernetes workloads to join the cluster.\n
+    - kind: CiliumLocalRedirectPolicy\n version: v2\n name: ciliumlocalredirectpolicies.cilium.io\n displayName: Cilium Local Redirect Policy\n description: |\n Cilium Local Redirect Policy allows local redirects to be configured\n within a node to support use cases like Node-Local DNS or KIAM.\n
+    - kind: CiliumNode\n version: v2\n name: ciliumnodes.cilium.io\n displayName: Cilium Node\n description: |\n Cilium Node represents a node managed by Cilium. It contains a\n specification to control various node specific configuration aspects\n and a status section to represent the status of the node.\n
+    - kind: CiliumIdentity\n version: v2\n name: ciliumidentities.cilium.io\n displayName: Cilium Identity\n description: |\n Cilium Identity allows introspection into security identities that\n Cilium allocates which identify sets of labels that are assigned to\n individual endpoints in the cluster.\n
+    - kind: CiliumEndpoint\n version: v2\n name: ciliumendpoints.cilium.io\n displayName: Cilium Endpoint\n description: |\n Cilium Endpoint represents the status of individual pods or nodes in\n the cluster which are managed by Cilium, including enforcement status,\n IP addressing and whether the networking is successfully operational.\n
+    - kind: CiliumEndpointSlice\n version: v2alpha1\n name: ciliumendpointslices.cilium.io\n displayName: Cilium Endpoint Slice\n description: |\n Cilium Endpoint Slice represents the status of groups of pods or nodes\n in the cluster which are managed by Cilium, including enforcement status,\n IP addressing and whether the networking is successfully operational.\n
+    - kind: CiliumEgressGatewayPolicy\n version: v2\n name: ciliumegressgatewaypolicies.cilium.io\n displayName: Cilium Egress Gateway Policy\n description: |\n Cilium Egress Gateway Policy provides control over the way that traffic\n leaves the cluster and which source addresses to use for that traffic.\n
+    - kind: CiliumClusterwideEnvoyConfig\n version: v2\n name: ciliumclusterwideenvoyconfigs.cilium.io\n displayName: Cilium Clusterwide Envoy Config\n description: |\n Cilium Clusterwide Envoy Config specifies Envoy resources and K8s service mappings\n to be provisioned into Cilium host proxy instances in cluster context.\n
+    - kind: CiliumEnvoyConfig\n version: v2\n name: ciliumenvoyconfigs.cilium.io\n displayName: Cilium Envoy Config\n description: |\n Cilium Envoy Config specifies Envoy resources and K8s service mappings\n to be provisioned into Cilium host proxy instances in namespace context.\n
+    - kind: CiliumBGPPeeringPolicy\n version: v2alpha1\n name: ciliumbgppeeringpolicies.cilium.io\n displayName: Cilium BGP Peering Policy\n description: |\n Cilium BGP Peering Policy instructs Cilium to create specific BGP peering\n configurations.\n
+    - kind: CiliumBGPClusterConfig\n version: v2alpha1\n name: ciliumbgpclusterconfigs.cilium.io\n displayName: Cilium BGP Cluster Config\n description: |\n Cilium BGP Cluster Config instructs Cilium operator to create specific BGP cluster\n configurations.\n
+    - kind: CiliumBGPPeerConfig\n version: v2alpha1\n name: ciliumbgppeerconfigs.cilium.io\n displayName: Cilium BGP Peer Config\n description: |\n CiliumBGPPeerConfig is a common set of BGP peer configurations. It can be referenced\n by multiple peers from CiliumBGPClusterConfig.\n
+    - kind: CiliumBGPAdvertisement\n version: v2alpha1\n name: ciliumbgpadvertisements.cilium.io\n displayName: Cilium BGP Advertisement\n description: |\n CiliumBGPAdvertisement is used to define source of BGP advertisement as well as BGP attributes\n to be advertised with those prefixes.\n
+    - kind: CiliumBGPNodeConfig\n version: v2alpha1\n name: ciliumbgpnodeconfigs.cilium.io\n displayName: Cilium BGP Node Config\n description: |\n CiliumBGPNodeConfig is read only node specific BGP configuration. It is constructed by Cilium operator.\n It will also contain node local BGP state information.\n
+    - kind: CiliumBGPNodeConfigOverride\n version: v2alpha1\n name: ciliumbgpnodeconfigoverrides.cilium.io\n displayName: Cilium BGP Node Config Override\n description: |\n CiliumBGPNodeConfigOverride can be used to override node specific BGP configuration.\n
+    - kind: CiliumLoadBalancerIPPool\n version: v2alpha1\n name: ciliumloadbalancerippools.cilium.io\n displayName: Cilium Load Balancer IP Pool\n description: |\n Defining a Cilium Load Balancer IP Pool instructs Cilium to assign IPs to LoadBalancer Services.\n
+    - kind: CiliumNodeConfig\n version: v2alpha1\n name: ciliumnodeconfigs.cilium.io\n displayName: Cilium Node Configuration\n description: |\n CiliumNodeConfig is a list of configuration key-value pairs. It is applied to\n nodes indicated by a label selector.\n
+    - kind: CiliumCIDRGroup\n version: v2alpha1\n name: ciliumcidrgroups.cilium.io\n displayName: Cilium CIDR Group\n description: |\n CiliumCIDRGroup is a list of CIDRs that can be referenced as a single entity from CiliumNetworkPolicies.\n
+    - kind: CiliumL2AnnouncementPolicy\n version: v2alpha1\n name: ciliuml2announcementpolicies.cilium.io\n displayName: Cilium L2 Announcement Policy\n description: |\n CiliumL2AnnouncementPolicy is a policy which determines which service IPs will be announced to\n the local area network, by which nodes, and via which interfaces.\n
+    - kind: CiliumPodIPPool\n version: v2alpha1\n name: ciliumpodippools.cilium.io\n displayName: Cilium Pod IP Pool\n description: |\n CiliumPodIPPool defines an IP pool that can be used for pooled IPAM (i.e. the multi-pool IPAM mode).\n"
 apiVersion: v2
-appVersion: 1.14.5
+appVersion: 1.15.2
 description: eBPF-based Networking, Security, and Observability
 home: https://cilium.io/
-icon: https://cdn.jsdelivr.net/gh/cilium/cilium@v1.14/Documentation/images/logo-solo.svg
+icon: https://cdn.jsdelivr.net/gh/cilium/cilium@v1.15/Documentation/images/logo-solo.svg
 keywords:
 - BPF
 - eBPF
@@ -138,4 +95,4 @@ kubeVersion: '>= 1.16.0-0'
 name: cilium
 sources:
 - https://github.com/cilium/cilium
-version: 1.14.5
+version: 1.15.2
@@ -1,6 +1,6 @@
 # cilium
-![Version: 1.14.5](https://img.shields.io/badge/Version-1.14.5-informational?style=flat-square) ![AppVersion: 1.14.5](https://img.shields.io/badge/AppVersion-1.14.5-informational?style=flat-square)
+![Version: 1.15.2](https://img.shields.io/badge/Version-1.15.2-informational?style=flat-square) ![AppVersion: 1.15.2](https://img.shields.io/badge/AppVersion-1.15.2-informational?style=flat-square)
 Cilium is open source software for providing and transparently securing
 network connectivity and loadbalancing between application workloads such as
@@ -60,24 +60,30 @@ contributors across the globe, there is almost always someone available to help.
| aksbyocni.enabled | bool | `false` | Enable AKS BYOCNI integration. Note that this is incompatible with AKS clusters not created in BYOCNI mode: use Azure integration (`azure.enabled`) instead. | | aksbyocni.enabled | bool | `false` | Enable AKS BYOCNI integration. Note that this is incompatible with AKS clusters not created in BYOCNI mode: use Azure integration (`azure.enabled`) instead. |
| alibabacloud.enabled | bool | `false` | Enable AlibabaCloud ENI integration | | alibabacloud.enabled | bool | `false` | Enable AlibabaCloud ENI integration |
| annotateK8sNode | bool | `false` | Annotate k8s node upon initialization with Cilium's metadata. | | annotateK8sNode | bool | `false` | Annotate k8s node upon initialization with Cilium's metadata. |
| annotations | object | `{}` | Annotations to be added to all top-level cilium-agent objects (resources under templates/cilium-agent) |
| apiRateLimit | string | `nil` | The api-rate-limit option can be used to overwrite individual settings of the default configuration for rate limiting calls to the Cilium Agent API |
| authentication.enabled | bool | `true` | Enable authentication processing and garbage collection. Note that if disabled, policy enforcement will still block requests that require authentication. But the resulting authentication requests for these requests will not be processed, therefore the requests not be allowed. | | authentication.enabled | bool | `true` | Enable authentication processing and garbage collection. Note that if disabled, policy enforcement will still block requests that require authentication. But the resulting authentication requests for these requests will not be processed, therefore the requests not be allowed. |
| authentication.gcInterval | string | `"5m0s"` | Interval for garbage collection of auth map entries. | | authentication.gcInterval | string | `"5m0s"` | Interval for garbage collection of auth map entries. |
| authentication.mutual.connectTimeout | string | `"5s"` | Timeout for connecting to the remote node TCP socket |
| authentication.mutual.port | int | `4250` | Port on the agent where mutual authentication handshakes between agents will be performed | | authentication.mutual.port | int | `4250` | Port on the agent where mutual authentication handshakes between agents will be performed |
| authentication.mutual.spire.adminSocketPath | string | `"/run/spire/sockets/admin.sock"` | SPIRE socket path where the SPIRE delegated api agent is listening | | authentication.mutual.spire.adminSocketPath | string | `"/run/spire/sockets/admin.sock"` | SPIRE socket path where the SPIRE delegated api agent is listening |
| authentication.mutual.spire.agentSocketPath | string | `"/run/spire/sockets/agent/agent.sock"` | SPIRE socket path where the SPIRE workload agent is listening. Applies to both the Cilium Agent and Operator | | authentication.mutual.spire.agentSocketPath | string | `"/run/spire/sockets/agent/agent.sock"` | SPIRE socket path where the SPIRE workload agent is listening. Applies to both the Cilium Agent and Operator |
| authentication.mutual.spire.annotations | object | `{}` | Annotations to be added to all top-level spire objects (resources under templates/spire) |
| authentication.mutual.spire.connectionTimeout | string | `"30s"` | SPIRE connection timeout | | authentication.mutual.spire.connectionTimeout | string | `"30s"` | SPIRE connection timeout |
| authentication.mutual.spire.enabled | bool | `false` | Enable SPIRE integration (beta) | | authentication.mutual.spire.enabled | bool | `false` | Enable SPIRE integration (beta) |
| authentication.mutual.spire.install.agent.affinity | object | `{}` | SPIRE agent affinity configuration | | authentication.mutual.spire.install.agent.affinity | object | `{}` | SPIRE agent affinity configuration |
| authentication.mutual.spire.install.agent.annotations | object | `{}` | SPIRE agent annotations | | authentication.mutual.spire.install.agent.annotations | object | `{}` | SPIRE agent annotations |
| authentication.mutual.spire.install.agent.image | string | `"ghcr.io/spiffe/spire-agent:1.6.3@sha256:8eef9857bf223181ecef10d9bbcd2f7838f3689e9bd2445bede35066a732e823"` | SPIRE agent image | | authentication.mutual.spire.install.agent.image | object | `{"digest":"sha256:99405637647968245ff9fe215f8bd2bd0ea9807be9725f8bf19fe1b21471e52b","override":null,"pullPolicy":"IfNotPresent","repository":"ghcr.io/spiffe/spire-agent","tag":"1.8.5","useDigest":true}` | SPIRE agent image |
| authentication.mutual.spire.install.agent.labels | object | `{}` | SPIRE agent labels | | authentication.mutual.spire.install.agent.labels | object | `{}` | SPIRE agent labels |
| authentication.mutual.spire.install.agent.nodeSelector | object | `{}` | SPIRE agent nodeSelector configuration ref: ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector | | authentication.mutual.spire.install.agent.nodeSelector | object | `{}` | SPIRE agent nodeSelector configuration ref: ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
| authentication.mutual.spire.install.agent.podSecurityContext | object | `{}` | Security context to be added to spire agent pods. SecurityContext holds pod-level security attributes and common container settings. ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod | | authentication.mutual.spire.install.agent.podSecurityContext | object | `{}` | Security context to be added to spire agent pods. SecurityContext holds pod-level security attributes and common container settings. ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod |
| authentication.mutual.spire.install.agent.securityContext | object | `{}` | Security context to be added to spire agent containers. SecurityContext holds pod-level security attributes and common container settings. ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container | | authentication.mutual.spire.install.agent.securityContext | object | `{}` | Security context to be added to spire agent containers. SecurityContext holds pod-level security attributes and common container settings. ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container |
| authentication.mutual.spire.install.agent.serviceAccount | object | `{"create":true,"name":"spire-agent"}` | SPIRE agent service account | | authentication.mutual.spire.install.agent.serviceAccount | object | `{"create":true,"name":"spire-agent"}` | SPIRE agent service account |
| authentication.mutual.spire.install.agent.skipKubeletVerification | bool | `true` | SPIRE Workload Attestor kubelet verification. | | authentication.mutual.spire.install.agent.skipKubeletVerification | bool | `true` | SPIRE Workload Attestor kubelet verification. |
| authentication.mutual.spire.install.agent.tolerations | list | `[]` | SPIRE agent tolerations configuration ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ | | authentication.mutual.spire.install.agent.tolerations | list | `[{"effect":"NoSchedule","key":"node.kubernetes.io/not-ready"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"},{"effect":"NoSchedule","key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true"},{"key":"CriticalAddonsOnly","operator":"Exists"}]` | SPIRE agent tolerations configuration By default it follows the same tolerations as the agent itself to allow the Cilium agent on this node to connect to SPIRE. ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
| authentication.mutual.spire.install.enabled | bool | `true` | Enable SPIRE installation. This will only take effect only if authentication.mutual.spire.enabled is true | | authentication.mutual.spire.install.enabled | bool | `true` | Enable SPIRE installation. This will only take effect only if authentication.mutual.spire.enabled is true |
| authentication.mutual.spire.install.existingNamespace | bool | `false` | SPIRE namespace already exists. Set to true if Helm should not create, manage, and import the SPIRE namespace. |
| authentication.mutual.spire.install.initImage | object | `{"digest":"sha256:223ae047b1065bd069aac01ae3ac8088b3ca4a527827e283b85112f29385fb1b","override":null,"pullPolicy":"IfNotPresent","repository":"docker.io/library/busybox","tag":"1.36.1","useDigest":true}` | init container image of SPIRE agent and server |
| authentication.mutual.spire.install.namespace | string | `"cilium-spire"` | SPIRE namespace to install into | | authentication.mutual.spire.install.namespace | string | `"cilium-spire"` | SPIRE namespace to install into |
| authentication.mutual.spire.install.server.affinity | object | `{}` | SPIRE server affinity configuration | | authentication.mutual.spire.install.server.affinity | object | `{}` | SPIRE server affinity configuration |
| authentication.mutual.spire.install.server.annotations | object | `{}` | SPIRE server annotations | | authentication.mutual.spire.install.server.annotations | object | `{}` | SPIRE server annotations |
@@ -87,10 +93,12 @@ contributors across the globe, there is almost always someone available to help.
| authentication.mutual.spire.install.server.dataStorage.enabled | bool | `true` | Enable SPIRE server data storage | | authentication.mutual.spire.install.server.dataStorage.enabled | bool | `true` | Enable SPIRE server data storage |
| authentication.mutual.spire.install.server.dataStorage.size | string | `"1Gi"` | Size of the SPIRE server data storage | | authentication.mutual.spire.install.server.dataStorage.size | string | `"1Gi"` | Size of the SPIRE server data storage |
| authentication.mutual.spire.install.server.dataStorage.storageClass | string | `nil` | StorageClass of the SPIRE server data storage | | authentication.mutual.spire.install.server.dataStorage.storageClass | string | `nil` | StorageClass of the SPIRE server data storage |
| authentication.mutual.spire.install.server.image | string | `"ghcr.io/spiffe/spire-server:1.6.3@sha256:f4bc49fb0bd1d817a6c46204cc7ce943c73fb0a5496a78e0e4dc20c9a816ad7f"` | SPIRE server image | | authentication.mutual.spire.install.server.image | object | `{"digest":"sha256:28269265882048dcf0fed32fe47663cd98613727210b8d1a55618826f9bf5428","override":null,"pullPolicy":"IfNotPresent","repository":"ghcr.io/spiffe/spire-server","tag":"1.8.5","useDigest":true}` | SPIRE server image |
| authentication.mutual.spire.install.server.initContainers | list | `[]` | SPIRE server init containers | | authentication.mutual.spire.install.server.initContainers | list | `[]` | SPIRE server init containers |
| authentication.mutual.spire.install.server.labels | object | `{}` | SPIRE server labels | | authentication.mutual.spire.install.server.labels | object | `{}` | SPIRE server labels |
| authentication.mutual.spire.install.server.nodeSelector | object | `{}` | SPIRE server nodeSelector configuration ref: ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector | | authentication.mutual.spire.install.server.nodeSelector | object | `{}` | SPIRE server nodeSelector configuration ref: ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
| authentication.mutual.spire.install.server.podSecurityContext | object | `{}` | Security context to be added to spire server pods. SecurityContext holds pod-level security attributes and common container settings. ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod |
| authentication.mutual.spire.install.server.securityContext | object | `{}` | Security context to be added to spire server containers. SecurityContext holds pod-level security attributes and common container settings. ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container |
| authentication.mutual.spire.install.server.service.annotations | object | `{}` | Annotations to be added to the SPIRE server service | | authentication.mutual.spire.install.server.service.annotations | object | `{}` | Annotations to be added to the SPIRE server service |
| authentication.mutual.spire.install.server.service.labels | object | `{}` | Labels to be added to the SPIRE server service | | authentication.mutual.spire.install.server.service.labels | object | `{}` | Labels to be added to the SPIRE server service |
| authentication.mutual.spire.install.server.service.type | string | `"ClusterIP"` | Service type for the SPIRE server service | | authentication.mutual.spire.install.server.service.type | string | `"ClusterIP"` | Service type for the SPIRE server service |
@@ -109,8 +117,11 @@ contributors across the globe, there is almost always someone available to help.
| bgp.announce.loadbalancerIP | bool | `false` | Enable allocation and announcement of service LoadBalancer IPs | | bgp.announce.loadbalancerIP | bool | `false` | Enable allocation and announcement of service LoadBalancer IPs |
| bgp.announce.podCIDR | bool | `false` | Enable announcement of node pod CIDR | | bgp.announce.podCIDR | bool | `false` | Enable announcement of node pod CIDR |
| bgp.enabled | bool | `false` | Enable BGP support inside Cilium; embeds a new ConfigMap for BGP inside cilium-agent and cilium-operator | | bgp.enabled | bool | `false` | Enable BGP support inside Cilium; embeds a new ConfigMap for BGP inside cilium-agent and cilium-operator |
| bgpControlPlane | object | `{"enabled":false}` | This feature set enables virtual BGP routers to be created via CiliumBGPPeeringPolicy CRDs. | | bgpControlPlane | object | `{"enabled":false,"secretsNamespace":{"create":false,"name":"kube-system"}}` | This feature set enables virtual BGP routers to be created via CiliumBGPPeeringPolicy CRDs. |
| bgpControlPlane.enabled | bool | `false` | Enables the BGP control plane. | | bgpControlPlane.enabled | bool | `false` | Enables the BGP control plane. |
| bgpControlPlane.secretsNamespace | object | `{"create":false,"name":"kube-system"}` | SecretsNamespace is the namespace which BGP support will retrieve secrets from. |
| bgpControlPlane.secretsNamespace.create | bool | `false` | Create secrets namespace for BGP secrets. |
| bgpControlPlane.secretsNamespace.name | string | `"kube-system"` | The name of the secret namespace to which Cilium agents are given read access |
| bpf.authMapMax | int | `524288` | Configure the maximum number of entries in auth map. | | bpf.authMapMax | int | `524288` | Configure the maximum number of entries in auth map. |
| bpf.autoMount.enabled | bool | `true` | Enable automatic mount of BPF filesystem When `autoMount` is enabled, the BPF filesystem is mounted at `bpf.root` path on the underlying host and inside the cilium agent pod. If users disable `autoMount`, it's expected that users have mounted bpffs filesystem at the specified `bpf.root` volume, and then the volume will be mounted inside the cilium agent pod at the same path. | | bpf.autoMount.enabled | bool | `true` | Enable automatic mount of BPF filesystem When `autoMount` is enabled, the BPF filesystem is mounted at `bpf.root` path on the underlying host and inside the cilium agent pod. If users disable `autoMount`, it's expected that users have mounted bpffs filesystem at the specified `bpf.root` volume, and then the volume will be mounted inside the cilium agent pod at the same path. |
| bpf.ctAnyMax | int | `262144` | Configure the maximum number of entries for the non-TCP connection tracking table. | | bpf.ctAnyMax | int | `262144` | Configure the maximum number of entries for the non-TCP connection tracking table. |
@@ -131,7 +142,8 @@ contributors across the globe, there is almost always someone available to help.
| bpf.tproxy | bool | `false` | Configure the eBPF-based TPROXY to reduce reliance on iptables rules for implementing Layer 7 policy. | | bpf.tproxy | bool | `false` | Configure the eBPF-based TPROXY to reduce reliance on iptables rules for implementing Layer 7 policy. |
| bpf.vlanBypass | list | `[]` | Configure explicitly allowed VLAN id's for bpf logic bypass. [0] will allow all VLAN id's without any filtering. | | bpf.vlanBypass | list | `[]` | Configure explicitly allowed VLAN id's for bpf logic bypass. [0] will allow all VLAN id's without any filtering. |
| bpfClockProbe | bool | `false` | Enable BPF clock source probing for more efficient tick retrieval. | | bpfClockProbe | bool | `false` | Enable BPF clock source probing for more efficient tick retrieval. |
| certgen | object | `{"annotations":{"cronJob":{},"job":{}},"extraVolumeMounts":[],"extraVolumes":[],"image":{"digest":"sha256:89a0847753686444daabde9474b48340993bd19c7bea66a46e45b2974b82041f","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/certgen","tag":"v0.1.9","useDigest":true},"podLabels":{},"tolerations":[],"ttlSecondsAfterFinished":1800}` | Configure certificate generation for Hubble integration. If hubble.tls.auto.method=cronJob, these values are used for the Kubernetes CronJob which will be scheduled regularly to (re)generate any certificates not provided manually. | | certgen | object | `{"affinity":{},"annotations":{"cronJob":{},"job":{}},"extraVolumeMounts":[],"extraVolumes":[],"image":{"digest":"sha256:89a0847753686444daabde9474b48340993bd19c7bea66a46e45b2974b82041f","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/certgen","tag":"v0.1.9","useDigest":true},"podLabels":{},"tolerations":[],"ttlSecondsAfterFinished":1800}` | Configure certificate generation for Hubble integration. If hubble.tls.auto.method=cronJob, these values are used for the Kubernetes CronJob which will be scheduled regularly to (re)generate any certificates not provided manually. |
| certgen.affinity | object | `{}` | Affinity for certgen |
| certgen.annotations | object | `{"cronJob":{},"job":{}}` | Annotations to be added to the hubble-certgen initial Job and CronJob | | certgen.annotations | object | `{"cronJob":{},"job":{}}` | Annotations to be added to the hubble-certgen initial Job and CronJob |
| certgen.extraVolumeMounts | list | `[]` | Additional certgen volumeMounts. | | certgen.extraVolumeMounts | list | `[]` | Additional certgen volumeMounts. |
| certgen.extraVolumes | list | `[]` | Additional certgen volumes. | | certgen.extraVolumes | list | `[]` | Additional certgen volumes. |
@@ -146,25 +158,29 @@ contributors across the globe, there is almost always someone available to help.
| cleanState | bool | `false` | Clean all local Cilium state from the initContainer of the cilium-agent DaemonSet. Implies cleanBpfState: true. WARNING: Use with care! | | cleanState | bool | `false` | Clean all local Cilium state from the initContainer of the cilium-agent DaemonSet. Implies cleanBpfState: true. WARNING: Use with care! |
| cluster.id | int | `0` | Unique ID of the cluster. Must be unique across all connected clusters and in the range of 1 to 255. Only required for Cluster Mesh, may be 0 if Cluster Mesh is not used. | | cluster.id | int | `0` | Unique ID of the cluster. Must be unique across all connected clusters and in the range of 1 to 255. Only required for Cluster Mesh, may be 0 if Cluster Mesh is not used. |
| cluster.name | string | `"default"` | Name of the cluster. Only required for Cluster Mesh and mutual authentication with SPIRE. | | cluster.name | string | `"default"` | Name of the cluster. Only required for Cluster Mesh and mutual authentication with SPIRE. |
| clustermesh.annotations | object | `{}` | Annotations to be added to all top-level clustermesh objects (resources under templates/clustermesh-apiserver and templates/clustermesh-config) |
| clustermesh.apiserver.affinity | object | `{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"k8s-app":"clustermesh-apiserver"}},"topologyKey":"kubernetes.io/hostname"}]}}` | Affinity for clustermesh.apiserver | | clustermesh.apiserver.affinity | object | `{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"k8s-app":"clustermesh-apiserver"}},"topologyKey":"kubernetes.io/hostname"}]}}` | Affinity for clustermesh.apiserver |
| clustermesh.apiserver.etcd.image | object | `{"digest":"sha256:795d8660c48c439a7c3764c2330ed9222ab5db5bb524d8d0607cac76f7ba82a3","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/coreos/etcd","tag":"v3.5.4","useDigest":true}` | Clustermesh API server etcd image. | | clustermesh.apiserver.etcd.init.extraArgs | list | `[]` | Additional arguments to `clustermesh-apiserver etcdinit`. |
| clustermesh.apiserver.etcd.init.extraEnv | list | `[]` | Additional environment variables to `clustermesh-apiserver etcdinit`. |
| clustermesh.apiserver.etcd.init.resources | object | `{}` | Specifies the resources for etcd init container in the apiserver | | clustermesh.apiserver.etcd.init.resources | object | `{}` | Specifies the resources for etcd init container in the apiserver |
| clustermesh.apiserver.etcd.lifecycle | object | `{}` | lifecycle setting for the etcd container |
| clustermesh.apiserver.etcd.resources | object | `{}` | Specifies the resources for etcd container in the apiserver | | clustermesh.apiserver.etcd.resources | object | `{}` | Specifies the resources for etcd container in the apiserver |
| clustermesh.apiserver.etcd.securityContext | object | `{}` | Security context to be added to clustermesh-apiserver etcd containers | | clustermesh.apiserver.etcd.securityContext | object | `{}` | Security context to be added to clustermesh-apiserver etcd containers |
| clustermesh.apiserver.extraArgs | list | `[]` | Additional clustermesh-apiserver arguments. | | clustermesh.apiserver.extraArgs | list | `[]` | Additional clustermesh-apiserver arguments. |
| clustermesh.apiserver.extraEnv | list | `[]` | Additional clustermesh-apiserver environment variables. | | clustermesh.apiserver.extraEnv | list | `[]` | Additional clustermesh-apiserver environment variables. |
| clustermesh.apiserver.extraVolumeMounts | list | `[]` | Additional clustermesh-apiserver volumeMounts. | | clustermesh.apiserver.extraVolumeMounts | list | `[]` | Additional clustermesh-apiserver volumeMounts. |
| clustermesh.apiserver.extraVolumes | list | `[]` | Additional clustermesh-apiserver volumes. | | clustermesh.apiserver.extraVolumes | list | `[]` | Additional clustermesh-apiserver volumes. |
| clustermesh.apiserver.image | object | `{"digest":"sha256:7eaa35cf5452c43b1f7d0cde0d707823ae7e49965bcb54c053e31ea4e04c3d96","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/clustermesh-apiserver","tag":"v1.14.5","useDigest":true}` | Clustermesh API server image. | | clustermesh.apiserver.image | object | `{"digest":"sha256:478c77371f34d6fe5251427ff90c3912567c69b2bdc87d72377e42a42054f1c2","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/clustermesh-apiserver","tag":"v1.15.2","useDigest":true}` | Clustermesh API server image. |
| clustermesh.apiserver.kvstoremesh.enabled | bool | `false` | Enable KVStoreMesh. KVStoreMesh caches the information retrieved from the remote clusters in the local etcd instance. | | clustermesh.apiserver.kvstoremesh.enabled | bool | `false` | Enable KVStoreMesh. KVStoreMesh caches the information retrieved from the remote clusters in the local etcd instance. |
| clustermesh.apiserver.kvstoremesh.extraArgs | list | `[]` | Additional KVStoreMesh arguments. | | clustermesh.apiserver.kvstoremesh.extraArgs | list | `[]` | Additional KVStoreMesh arguments. |
| clustermesh.apiserver.kvstoremesh.extraEnv | list | `[]` | Additional KVStoreMesh environment variables. | | clustermesh.apiserver.kvstoremesh.extraEnv | list | `[]` | Additional KVStoreMesh environment variables. |
| clustermesh.apiserver.kvstoremesh.extraVolumeMounts | list | `[]` | Additional KVStoreMesh volumeMounts. | | clustermesh.apiserver.kvstoremesh.extraVolumeMounts | list | `[]` | Additional KVStoreMesh volumeMounts. |
| clustermesh.apiserver.kvstoremesh.image | object | `{"digest":"sha256:d7137edd0efa2b1407b20088af3980a9993bb616d85bf9b55ea2891d1b99023a","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/kvstoremesh","tag":"v1.14.5","useDigest":true}` | KVStoreMesh image. | | clustermesh.apiserver.kvstoremesh.lifecycle | object | `{}` | lifecycle setting for the KVStoreMesh container |
| clustermesh.apiserver.kvstoremesh.resources | object | `{}` | Resource requests and limits for the KVStoreMesh container | | clustermesh.apiserver.kvstoremesh.resources | object | `{}` | Resource requests and limits for the KVStoreMesh container |
| clustermesh.apiserver.kvstoremesh.securityContext | object | `{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}}` | KVStoreMesh Security context | | clustermesh.apiserver.kvstoremesh.securityContext | object | `{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}}` | KVStoreMesh Security context |
| clustermesh.apiserver.lifecycle | object | `{}` | lifecycle setting for the apiserver container |
| clustermesh.apiserver.metrics.enabled | bool | `true` | Enables exporting apiserver metrics in OpenMetrics format. | | clustermesh.apiserver.metrics.enabled | bool | `true` | Enables exporting apiserver metrics in OpenMetrics format. |
| clustermesh.apiserver.metrics.etcd.enabled | bool | `false` | Enables exporting etcd metrics in OpenMetrics format. | | clustermesh.apiserver.metrics.etcd.enabled | bool | `true` | Enables exporting etcd metrics in OpenMetrics format. |
| clustermesh.apiserver.metrics.etcd.mode | string | `"basic"` | Set level of detail for etcd metrics; specify 'extensive' to include server side gRPC histogram metrics. | | clustermesh.apiserver.metrics.etcd.mode | string | `"basic"` | Set level of detail for etcd metrics; specify 'extensive' to include server side gRPC histogram metrics. |
| clustermesh.apiserver.metrics.etcd.port | int | `9963` | Configure the port the etcd metric server listens on. | | clustermesh.apiserver.metrics.etcd.port | int | `9963` | Configure the port the etcd metric server listens on. |
| clustermesh.apiserver.metrics.kvstoremesh.enabled | bool | `true` | Enables exporting KVStoreMesh metrics in OpenMetrics format. | | clustermesh.apiserver.metrics.kvstoremesh.enabled | bool | `true` | Enables exporting KVStoreMesh metrics in OpenMetrics format. |
@@ -198,15 +214,13 @@ contributors across the globe, there is almost always someone available to help.
| clustermesh.apiserver.service.internalTrafficPolicy | string | `nil` | The internalTrafficPolicy of service used for apiserver access. | | clustermesh.apiserver.service.internalTrafficPolicy | string | `nil` | The internalTrafficPolicy of service used for apiserver access. |
| clustermesh.apiserver.service.nodePort | int | `32379` | Optional port to use as the node port for apiserver access. WARNING: make sure to configure a different NodePort in each cluster if kube-proxy replacement is enabled, as Cilium is currently affected by a known bug (#24692) when NodePorts are handled by the KPR implementation. If a service with the same NodePort exists both in the local and the remote cluster, all traffic originating from inside the cluster and targeting the corresponding NodePort will be redirected to a local backend, regardless of whether the destination node belongs to the local or the remote cluster. | | clustermesh.apiserver.service.nodePort | int | `32379` | Optional port to use as the node port for apiserver access. WARNING: make sure to configure a different NodePort in each cluster if kube-proxy replacement is enabled, as Cilium is currently affected by a known bug (#24692) when NodePorts are handled by the KPR implementation. If a service with the same NodePort exists both in the local and the remote cluster, all traffic originating from inside the cluster and targeting the corresponding NodePort will be redirected to a local backend, regardless of whether the destination node belongs to the local or the remote cluster. |
| clustermesh.apiserver.service.type | string | `"NodePort"` | The type of service used for apiserver access. | | clustermesh.apiserver.service.type | string | `"NodePort"` | The type of service used for apiserver access. |
| clustermesh.apiserver.terminationGracePeriodSeconds | int | `30` | terminationGracePeriodSeconds for the clustermesh-apiserver deployment |
| clustermesh.apiserver.tls.admin | object | `{"cert":"","key":""}` | base64 encoded PEM values for the clustermesh-apiserver admin certificate and private key. Used if 'auto' is not enabled. | | clustermesh.apiserver.tls.admin | object | `{"cert":"","key":""}` | base64 encoded PEM values for the clustermesh-apiserver admin certificate and private key. Used if 'auto' is not enabled. |
| clustermesh.apiserver.tls.authMode | string | `"legacy"` | Configure the clustermesh authentication mode. Supported values: - legacy: All clusters access remote clustermesh instances with the same username (i.e., remote). The "remote" certificate must be generated with CN=remote if provided manually. - migration: Intermediate mode required to upgrade from legacy to cluster (and vice versa) with no disruption. Specifically, it enables the creation of the per-cluster usernames, while still using the common one for authentication. The "remote" certificate must be generated with CN=remote if provided manually (same as legacy). - cluster: Each cluster accesses remote etcd instances with a username depending on the local cluster name (i.e., remote-<cluster-name>). The "remote" certificate must be generated with CN=remote-<cluster-name> if provided manually. Cluster mode is meaningful only when the same CA is shared across all clusters part of the mesh. | | clustermesh.apiserver.tls.authMode | string | `"legacy"` | Configure the clustermesh authentication mode. Supported values: - legacy: All clusters access remote clustermesh instances with the same username (i.e., remote). The "remote" certificate must be generated with CN=remote if provided manually. - migration: Intermediate mode required to upgrade from legacy to cluster (and vice versa) with no disruption. Specifically, it enables the creation of the per-cluster usernames, while still using the common one for authentication. The "remote" certificate must be generated with CN=remote if provided manually (same as legacy). - cluster: Each cluster accesses remote etcd instances with a username depending on the local cluster name (i.e., remote-<cluster-name>). The "remote" certificate must be generated with CN=remote-<cluster-name> if provided manually. Cluster mode is meaningful only when the same CA is shared across all clusters part of the mesh. |
| clustermesh.apiserver.tls.auto | object | `{"certManagerIssuerRef":{},"certValidityDuration":1095,"enabled":true,"method":"helm"}` | Configure automatic TLS certificates generation. A Kubernetes CronJob is used the generate any certificates not provided by the user at installation time. | | clustermesh.apiserver.tls.auto | object | `{"certManagerIssuerRef":{},"certValidityDuration":1095,"enabled":true,"method":"helm"}` | Configure automatic TLS certificates generation. A Kubernetes CronJob is used the generate any certificates not provided by the user at installation time. |
| clustermesh.apiserver.tls.auto.certManagerIssuerRef | object | `{}` | certmanager issuer used when clustermesh.apiserver.tls.auto.method=certmanager. | | clustermesh.apiserver.tls.auto.certManagerIssuerRef | object | `{}` | certmanager issuer used when clustermesh.apiserver.tls.auto.method=certmanager. |
| clustermesh.apiserver.tls.auto.certValidityDuration | int | `1095` | Generated certificates validity duration in days. | | clustermesh.apiserver.tls.auto.certValidityDuration | int | `1095` | Generated certificates validity duration in days. |
| clustermesh.apiserver.tls.auto.enabled | bool | `true` | When set to true, automatically generate a CA and certificates to enable mTLS between clustermesh-apiserver and external workload instances. If set to false, the certs to be provided by setting appropriate values below. | | clustermesh.apiserver.tls.auto.enabled | bool | `true` | When set to true, automatically generate a CA and certificates to enable mTLS between clustermesh-apiserver and external workload instances. If set to false, the certs to be provided by setting appropriate values below. |
| clustermesh.apiserver.tls.ca | object | `{"cert":"","key":""}` | Deprecated in favor of tls.ca. To be removed in 1.15. base64 encoded PEM values for the ExternalWorkload CA certificate and private key. |
| clustermesh.apiserver.tls.ca.cert | string | `""` | Deprecated in favor of tls.ca.cert. To be removed in 1.15. Optional CA cert. If it is provided, it will be used by the 'cronJob' method to generate all other certificates. Otherwise, an ephemeral CA is generated. |
| clustermesh.apiserver.tls.ca.key | string | `""` | Deprecated in favor of tls.ca.key. To be removed in 1.15. Optional CA private key. If it is provided, it will be used by the 'cronJob' method to generate all other certificates. Otherwise, an ephemeral CA is generated. |
| clustermesh.apiserver.tls.client | object | `{"cert":"","key":""}` | base64 encoded PEM values for the clustermesh-apiserver client certificate and private key. Used if 'auto' is not enabled. | | clustermesh.apiserver.tls.client | object | `{"cert":"","key":""}` | base64 encoded PEM values for the clustermesh-apiserver client certificate and private key. Used if 'auto' is not enabled. |
| clustermesh.apiserver.tls.remote | object | `{"cert":"","key":""}` | base64 encoded PEM values for the clustermesh-apiserver remote cluster certificate and private key. Used if 'auto' is not enabled. | | clustermesh.apiserver.tls.remote | object | `{"cert":"","key":""}` | base64 encoded PEM values for the clustermesh-apiserver remote cluster certificate and private key. Used if 'auto' is not enabled. |
| clustermesh.apiserver.tls.server | object | `{"cert":"","extraDnsNames":[],"extraIpAddresses":[],"key":""}` | base64 encoded PEM values for the clustermesh-apiserver server certificate and private key. Used if 'auto' is not enabled. | | clustermesh.apiserver.tls.server | object | `{"cert":"","extraDnsNames":[],"extraIpAddresses":[],"key":""}` | base64 encoded PEM values for the clustermesh-apiserver server certificate and private key. Used if 'auto' is not enabled. |
@@ -219,6 +233,7 @@ contributors across the globe, there is almost always someone available to help.
| clustermesh.config.clusters | list | `[]` | List of clusters to be peered in the mesh. | | clustermesh.config.clusters | list | `[]` | List of clusters to be peered in the mesh. |
| clustermesh.config.domain | string | `"mesh.cilium.io"` | Default dns domain for the Clustermesh API servers This is used in the case cluster addresses are not provided and IPs are used. | | clustermesh.config.domain | string | `"mesh.cilium.io"` | Default dns domain for the Clustermesh API servers This is used in the case cluster addresses are not provided and IPs are used. |
| clustermesh.config.enabled | bool | `false` | Enable the Clustermesh explicit configuration. | | clustermesh.config.enabled | bool | `false` | Enable the Clustermesh explicit configuration. |
| clustermesh.maxConnectedClusters | int | `255` | The maximum number of clusters to support in a ClusterMesh. This value cannot be changed on running clusters, and all clusters in a ClusterMesh must be configured with the same value. Values > 255 will decrease the maximum allocatable cluster-local identities. Supported values are 255 and 511. |
| clustermesh.useAPIServer | bool | `false` | Deploy clustermesh-apiserver for clustermesh | | clustermesh.useAPIServer | bool | `false` | Deploy clustermesh-apiserver for clustermesh |
| cni.binPath | string | `"/opt/cni/bin"` | Configure the path to the CNI binary directory on the host. | | cni.binPath | string | `"/opt/cni/bin"` | Configure the path to the CNI binary directory on the host. |
| cni.chainingMode | string | `nil` | Configure chaining on top of other CNI plugins. Possible values: - none - aws-cni - flannel - generic-veth - portmap | | cni.chainingMode | string | `nil` | Configure chaining on top of other CNI plugins. Possible values: - none - aws-cni - flannel - generic-veth - portmap |
@@ -231,6 +246,7 @@ contributors across the globe, there is almost always someone available to help.
| cni.hostConfDirMountPath | string | `"/host/etc/cni/net.d"` | Configure the path to where the CNI configuration directory is mounted inside the agent pod. | | cni.hostConfDirMountPath | string | `"/host/etc/cni/net.d"` | Configure the path to where the CNI configuration directory is mounted inside the agent pod. |
| cni.install | bool | `true` | Install the CNI configuration and binary files into the filesystem. | | cni.install | bool | `true` | Install the CNI configuration and binary files into the filesystem. |
| cni.logFile | string | `"/var/run/cilium/cilium-cni.log"` | Configure the log file for CNI logging with retention policy of 7 days. Disable CNI file logging by setting this field to empty explicitly. | | cni.logFile | string | `"/var/run/cilium/cilium-cni.log"` | Configure the log file for CNI logging with retention policy of 7 days. Disable CNI file logging by setting this field to empty explicitly. |
| cni.resources | object | `{"requests":{"cpu":"100m","memory":"10Mi"}}` | Specifies the resources for the cni initContainer |
| cni.uninstall | bool | `false` | Remove the CNI configuration and binary files on agent shutdown. Enable this if you're removing Cilium from the cluster. Disable this to prevent the CNI configuration file from being removed during agent upgrade, which can cause nodes to go unmanageable. | | cni.uninstall | bool | `false` | Remove the CNI configuration and binary files on agent shutdown. Enable this if you're removing Cilium from the cluster. Disable this to prevent the CNI configuration file from being removed during agent upgrade, which can cause nodes to go unmanageable. |
| conntrackGCInterval | string | `"0s"` | Configure how frequently garbage collection should occur for the datapath connection tracking table. | | conntrackGCInterval | string | `"0s"` | Configure how frequently garbage collection should occur for the datapath connection tracking table. |
| conntrackGCMaxInterval | string | `""` | Configure the maximum frequency for the garbage collection of the connection tracking table. Only affects the automatic computation for the frequency and has no effect when 'conntrackGCInterval' is set. This can be set to more frequently clean up unused identities created from ToFQDN policies. | | conntrackGCMaxInterval | string | `""` | Configure the maximum frequency for the garbage collection of the connection tracking table. Only affects the automatic computation for the frequency and has no effect when 'conntrackGCInterval' is set. This can be set to more frequently clean up unused identities created from ToFQDN policies. |
@@ -245,7 +261,7 @@ contributors across the globe, there is almost always someone available to help.
| daemon.runPath | string | `"/var/run/cilium"` | Configure where Cilium runtime state should be stored. | | daemon.runPath | string | `"/var/run/cilium"` | Configure where Cilium runtime state should be stored. |
| dashboards | object | `{"annotations":{},"enabled":false,"label":"grafana_dashboard","labelValue":"1","namespace":null}` | Grafana dashboards for cilium-agent. Grafana can import dashboards based on the label and value. Ref: https://github.com/grafana/helm-charts/tree/main/charts/grafana#sidecar-for-dashboards | | dashboards | object | `{"annotations":{},"enabled":false,"label":"grafana_dashboard","labelValue":"1","namespace":null}` | Grafana dashboards for cilium-agent. Grafana can import dashboards based on the label and value. Ref: https://github.com/grafana/helm-charts/tree/main/charts/grafana#sidecar-for-dashboards |
| debug.enabled | bool | `false` | Enable debug logging | | debug.enabled | bool | `false` | Enable debug logging |
| debug.verbose | string | `nil` | Configure verbosity levels for debug logging. This option enables debug messages for sub-systems such as kvstore, envoy, datapath or policy, while flow enables debug messages emitted per request, message and connection. Applicable values: - flow - kvstore - envoy - datapath - policy | | debug.verbose | string | `nil` | Configure verbosity levels for debug logging. This option enables debug messages for sub-systems such as kvstore, envoy, datapath or policy, while flow enables debug messages emitted per request, message and connection. Multiple values can be set via a space-separated string (e.g. "datapath envoy"). Applicable values: - flow - kvstore - envoy - datapath - policy |
| disableEndpointCRD | bool | `false` | Disable the usage of CiliumEndpoint CRD. | | disableEndpointCRD | bool | `false` | Disable the usage of CiliumEndpoint CRD. |
| dnsPolicy | string | `""` | DNS policy for Cilium agent pods. Ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy | | dnsPolicy | string | `""` | DNS policy for Cilium agent pods. Ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy |
| dnsProxy.dnsRejectResponseCode | string | `"refused"` | DNS response code for rejecting DNS requests, available options are '[nameError refused]'. | | dnsProxy.dnsRejectResponseCode | string | `"refused"` | DNS response code for rejecting DNS requests, available options are '[nameError refused]'. |
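The debug and DNS-proxy rows above translate into a small override like the following sketch; the chosen sub-systems and the reject response code are placeholders.

```yaml
# illustrative debug / DNS-proxy override
debug:
  enabled: true
  verbose: "datapath envoy"        # space-separated sub-systems (see debug.verbose above)
dnsPolicy: ""                      # keep the Kubernetes default policy
dnsProxy:
  dnsRejectResponseCode: refused   # alternative: nameError
```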
@@ -257,18 +273,17 @@ contributors across the globe, there is almost always someone available to help.
| dnsProxy.preCache | string | `""` | DNS cache data at this path is preloaded on agent startup. | | dnsProxy.preCache | string | `""` | DNS cache data at this path is preloaded on agent startup. |
| dnsProxy.proxyPort | int | `0` | Global port on which the in-agent DNS proxy should listen. Default 0 is a OS-assigned port. | | dnsProxy.proxyPort | int | `0` | Global port on which the in-agent DNS proxy should listen. Default 0 is a OS-assigned port. |
| dnsProxy.proxyResponseMaxDelay | string | `"100ms"` | The maximum time the DNS proxy holds an allowed DNS response before sending it along. Responses are sent as soon as the datapath is updated with the new IP information. | | dnsProxy.proxyResponseMaxDelay | string | `"100ms"` | The maximum time the DNS proxy holds an allowed DNS response before sending it along. Responses are sent as soon as the datapath is updated with the new IP information. |
| egressGateway | object | `{"enabled":false,"installRoutes":false,"reconciliationTriggerInterval":"1s"}` | Enables egress gateway to redirect and SNAT the traffic that leaves the cluster. | | egressGateway.enabled | bool | `false` | Enables egress gateway to redirect and SNAT the traffic that leaves the cluster. |
| egressGateway.installRoutes | bool | `false` | Install egress gateway IP rules and routes in order to properly steer egress gateway traffic to the correct ENI interface | | egressGateway.installRoutes | bool | `false` | Deprecated without a replacement necessary. |
| egressGateway.reconciliationTriggerInterval | string | `"1s"` | Time between triggers of egress gateway state reconciliations | | egressGateway.reconciliationTriggerInterval | string | `"1s"` | Time between triggers of egress gateway state reconciliations |
| enableCiliumEndpointSlice | bool | `false` | Enable CiliumEndpointSlice feature. | | enableCiliumEndpointSlice | bool | `false` | Enable CiliumEndpointSlice feature. |
| enableCnpStatusUpdates | bool | `false` | Whether to enable CNP status updates. |
| enableCriticalPriorityClass | bool | `true` | Explicitly enable or disable priority class. .Capabilities.KubeVersion is unsettable in `helm template` calls, it depends on k8s libraries version that Helm was compiled against. This option allows to explicitly disable setting the priority class, which is useful for rendering charts for gke clusters in advance. | | enableCriticalPriorityClass | bool | `true` | Explicitly enable or disable priority class. .Capabilities.KubeVersion is unsettable in `helm template` calls, it depends on k8s libraries version that Helm was compiled against. This option allows to explicitly disable setting the priority class, which is useful for rendering charts for gke clusters in advance. |
| enableIPv4BIGTCP | bool | `false` | Enables IPv4 BIG TCP support which increases maximum IPv4 GSO/GRO limits for nodes and pods | | enableIPv4BIGTCP | bool | `false` | Enables IPv4 BIG TCP support which increases maximum IPv4 GSO/GRO limits for nodes and pods |
| enableIPv4Masquerade | bool | `true` | Enables masquerading of IPv4 traffic leaving the node from endpoints. | | enableIPv4Masquerade | bool | `true` | Enables masquerading of IPv4 traffic leaving the node from endpoints. |
| enableIPv6BIGTCP | bool | `false` | Enables IPv6 BIG TCP support which increases maximum IPv6 GSO/GRO limits for nodes and pods | | enableIPv6BIGTCP | bool | `false` | Enables IPv6 BIG TCP support which increases maximum IPv6 GSO/GRO limits for nodes and pods |
| enableIPv6Masquerade | bool | `true` | Enables masquerading of IPv6 traffic leaving the node from endpoints. | | enableIPv6Masquerade | bool | `true` | Enables masquerading of IPv6 traffic leaving the node from endpoints. |
| enableK8sEventHandover | bool | `false` | Configures the use of the KVStore to optimize Kubernetes event handling by mirroring it into the KVstore for reduced overhead in large clusters. |
| enableK8sTerminatingEndpoint | bool | `true` | Configure whether to enable auto detect of terminating state for endpoints in order to support graceful termination. | | enableK8sTerminatingEndpoint | bool | `true` | Configure whether to enable auto detect of terminating state for endpoints in order to support graceful termination. |
| enableMasqueradeRouteSource | bool | `false` | Enables masquerading to the source of the route for traffic leaving the node from endpoints. |
| enableRuntimeDeviceDetection | bool | `false` | Enables experimental support for the detection of new and removed datapath devices. When devices change the eBPF datapath is reloaded and services updated. If "devices" is set then only those devices, or devices matching a wildcard will be considered. | | enableRuntimeDeviceDetection | bool | `false` | Enables experimental support for the detection of new and removed datapath devices. When devices change the eBPF datapath is reloaded and services updated. If "devices" is set then only those devices, or devices matching a wildcard will be considered. |
| enableXTSocketFallback | bool | `true` | Enables the fallback compatibility solution for when the xt_socket kernel module is missing and it is needed for the datapath L7 redirection to work properly. See documentation for details on when this can be disabled: https://docs.cilium.io/en/stable/operations/system_requirements/#linux-kernel. | | enableXTSocketFallback | bool | `true` | Enables the fallback compatibility solution for when the xt_socket kernel module is missing and it is needed for the datapath L7 redirection to work properly. See documentation for details on when this can be disabled: https://docs.cilium.io/en/stable/operations/system_requirements/#linux-kernel. |
| encryption.enabled | bool | `false` | Enable transparent network encryption. | | encryption.enabled | bool | `false` | Enable transparent network encryption. |
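The egress-gateway and masquerading toggles a few rows above fit together as in this minimal sketch; all values are illustrative.

```yaml
# illustrative egress-gateway / masquerade override
egressGateway:
  enabled: true
  reconciliationTriggerInterval: 1s
enableIPv4Masquerade: true
enableIPv6Masquerade: true
enableIPv4BIGTCP: false
enableCiliumEndpointSlice: false
```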
@@ -283,7 +298,12 @@ contributors across the globe, there is almost always someone available to help.
| encryption.mountPath | string | `"/etc/ipsec"` | Deprecated in favor of encryption.ipsec.mountPath. To be removed in 1.15. Path to mount the secret inside the Cilium pod. This option is only effective when encryption.type is set to ipsec. | | encryption.mountPath | string | `"/etc/ipsec"` | Deprecated in favor of encryption.ipsec.mountPath. To be removed in 1.15. Path to mount the secret inside the Cilium pod. This option is only effective when encryption.type is set to ipsec. |
| encryption.nodeEncryption | bool | `false` | Enable encryption for pure node to node traffic. This option is only effective when encryption.type is set to "wireguard". | | encryption.nodeEncryption | bool | `false` | Enable encryption for pure node to node traffic. This option is only effective when encryption.type is set to "wireguard". |
| encryption.secretName | string | `"cilium-ipsec-keys"` | Deprecated in favor of encryption.ipsec.secretName. To be removed in 1.15. Name of the Kubernetes secret containing the encryption keys. This option is only effective when encryption.type is set to ipsec. | | encryption.secretName | string | `"cilium-ipsec-keys"` | Deprecated in favor of encryption.ipsec.secretName. To be removed in 1.15. Name of the Kubernetes secret containing the encryption keys. This option is only effective when encryption.type is set to ipsec. |
| encryption.strictMode | object | `{"allowRemoteNodeIdentities":false,"cidr":"","enabled":false}` | Configure the WireGuard Pod2Pod strict mode. |
| encryption.strictMode.allowRemoteNodeIdentities | bool | `false` | Allow dynamic lookup of remote node identities. This is required when tunneling is used or direct routing is used and the node CIDR and pod CIDR overlap. |
| encryption.strictMode.cidr | string | `""` | CIDR for the WireGuard Pod2Pod strict mode. |
| encryption.strictMode.enabled | bool | `false` | Enable WireGuard Pod2Pod strict mode. |
| encryption.type | string | `"ipsec"` | Encryption method. Can be either ipsec or wireguard. | | encryption.type | string | `"ipsec"` | Encryption method. Can be either ipsec or wireguard. |
| encryption.wireguard.persistentKeepalive | string | `"0s"` | Controls Wireguard PersistentKeepalive option. Set 0s to disable. |
| encryption.wireguard.userspaceFallback | bool | `false` | Enables the fallback to the user-space implementation. | | encryption.wireguard.userspaceFallback | bool | `false` | Enables the fallback to the user-space implementation. |
| endpointHealthChecking.enabled | bool | `true` | Enable connectivity health checking between virtual endpoints. | | endpointHealthChecking.enabled | bool | `true` | Enable connectivity health checking between virtual endpoints. |
| endpointRoutes.enabled | bool | `false` | Enable use of per endpoint routes instead of routing via the cilium_host interface. | | endpointRoutes.enabled | bool | `false` | Enable use of per endpoint routes instead of routing via the cilium_host interface. |
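For the encryption block documented above, a WireGuard-based override might look like the sketch below; the strict-mode CIDR is a placeholder pod CIDR, not a recommended value.

```yaml
# illustrative WireGuard transparent-encryption override
encryption:
  enabled: true
  type: wireguard
  nodeEncryption: true         # only effective with type: wireguard
  strictMode:
    enabled: true
    cidr: 10.0.0.0/8           # placeholder Pod2Pod strict-mode CIDR
    allowRemoteNodeIdentities: false
  wireguard:
    persistentKeepalive: 0s    # 0s disables PersistentKeepalive
    userspaceFallback: false
```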
@@ -301,6 +321,7 @@ contributors across the globe, there is almost always someone available to help.
| eni.subnetTagsFilter | list | `[]` | Filter via tags (k=v) which will dictate which subnets are going to be used to create new ENIs Important note: This requires that each instance has an ENI with a matching subnet attached when Cilium is deployed. If you only want to control subnets for ENIs attached by Cilium, use the CNI configuration file settings (cni.customConf) instead. | | eni.subnetTagsFilter | list | `[]` | Filter via tags (k=v) which will dictate which subnets are going to be used to create new ENIs Important note: This requires that each instance has an ENI with a matching subnet attached when Cilium is deployed. If you only want to control subnets for ENIs attached by Cilium, use the CNI configuration file settings (cni.customConf) instead. |
| eni.updateEC2AdapterLimitViaAPI | bool | `true` | Update ENI Adapter limits from the EC2 API | | eni.updateEC2AdapterLimitViaAPI | bool | `true` | Update ENI Adapter limits from the EC2 API |
| envoy.affinity | object | `{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"k8s-app":"cilium-envoy"}},"topologyKey":"kubernetes.io/hostname"}]}}` | Affinity for cilium-envoy. | | envoy.affinity | object | `{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"k8s-app":"cilium-envoy"}},"topologyKey":"kubernetes.io/hostname"}]}}` | Affinity for cilium-envoy. |
| envoy.annotations | object | `{}` | Annotations to be added to all top-level cilium-envoy objects (resources under templates/cilium-envoy) |
| envoy.connectTimeoutSeconds | int | `2` | Time in seconds after which a TCP connection attempt times out | | envoy.connectTimeoutSeconds | int | `2` | Time in seconds after which a TCP connection attempt times out |
| envoy.dnsPolicy | string | `nil` | DNS policy for Cilium envoy pods. Ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy | | envoy.dnsPolicy | string | `nil` | DNS policy for Cilium envoy pods. Ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy |
| envoy.enabled | bool | `false` | Enable Envoy Proxy in standalone DaemonSet. | | envoy.enabled | bool | `false` | Enable Envoy Proxy in standalone DaemonSet. |
@@ -312,7 +333,7 @@ contributors across the globe, there is almost always someone available to help.
| envoy.extraVolumes | list | `[]` | Additional envoy volumes. | | envoy.extraVolumes | list | `[]` | Additional envoy volumes. |
| envoy.healthPort | int | `9878` | TCP port for the health API. | | envoy.healthPort | int | `9878` | TCP port for the health API. |
| envoy.idleTimeoutDurationSeconds | int | `60` | Set Envoy upstream HTTP idle connection timeout seconds. Does not apply to connections with pending requests. Default 60s | | envoy.idleTimeoutDurationSeconds | int | `60` | Set Envoy upstream HTTP idle connection timeout seconds. Does not apply to connections with pending requests. Default 60s |
| envoy.image | object | `{"digest":"sha256:992998398dadfff7117bfa9fdb7c9474fefab7f0237263f7c8114e106c67baca","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium-envoy","tag":"v1.26.6-ad82c7c56e88989992fd25d8d67747de865c823b","useDigest":true}` | Envoy container image. | | envoy.image | object | `{"digest":"sha256:877ead12d08d4c04a9f67f86d3c6e542aeb7bf97e1e401aee74de456f496ac30","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium-envoy","tag":"v1.27.3-99c1c8f42c8de70fc8f6dd594f4a425cd38b6688","useDigest":true}` | Envoy container image. |
| envoy.livenessProbe.failureThreshold | int | `10` | failure threshold of liveness probe | | envoy.livenessProbe.failureThreshold | int | `10` | failure threshold of liveness probe |
| envoy.livenessProbe.periodSeconds | int | `30` | interval between checks of the liveness probe | | envoy.livenessProbe.periodSeconds | int | `30` | interval between checks of the liveness probe |
| envoy.log.format | string | `"[%Y-%m-%d %T.%e][%t][%l][%n] [%g:%#] %v"` | The format string to use for laying out the log message metadata of Envoy. | | envoy.log.format | string | `"[%Y-%m-%d %T.%e][%t][%l][%n] [%g:%#] %v"` | The format string to use for laying out the log message metadata of Envoy. |
@@ -324,14 +345,15 @@ contributors across the globe, there is almost always someone available to help.
| envoy.podLabels | object | `{}` | Labels to be added to envoy pods | | envoy.podLabels | object | `{}` | Labels to be added to envoy pods |
| envoy.podSecurityContext | object | `{}` | Security Context for cilium-envoy pods. | | envoy.podSecurityContext | object | `{}` | Security Context for cilium-envoy pods. |
| envoy.priorityClassName | string | `nil` | The priority class to use for cilium-envoy. | | envoy.priorityClassName | string | `nil` | The priority class to use for cilium-envoy. |
| envoy.prometheus | object | `{"enabled":true,"port":"9964","serviceMonitor":{"annotations":{},"enabled":false,"interval":"10s","labels":{},"metricRelabelings":null,"relabelings":[{"replacement":"${1}","sourceLabels":["__meta_kubernetes_pod_node_name"],"targetLabel":"node"}]}}` | Configure Cilium Envoy Prometheus options. Note that some of these apply to either cilium-agent or cilium-envoy. |
| envoy.prometheus.enabled | bool | `true` | Enable prometheus metrics for cilium-envoy | | envoy.prometheus.enabled | bool | `true` | Enable prometheus metrics for cilium-envoy |
| envoy.prometheus.port | string | `"9964"` | Serve prometheus metrics for cilium-envoy on the configured port | | envoy.prometheus.port | string | `"9964"` | Serve prometheus metrics for cilium-envoy on the configured port |
| envoy.prometheus.serviceMonitor.annotations | object | `{}` | Annotations to add to ServiceMonitor cilium-envoy | | envoy.prometheus.serviceMonitor.annotations | object | `{}` | Annotations to add to ServiceMonitor cilium-envoy |
| envoy.prometheus.serviceMonitor.enabled | bool | `false` | Enable service monitors. This requires the prometheus CRDs to be available (see https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml) | | envoy.prometheus.serviceMonitor.enabled | bool | `false` | Enable service monitors. This requires the prometheus CRDs to be available (see https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml) Note that this setting applies to both cilium-envoy _and_ cilium-agent with Envoy enabled. |
| envoy.prometheus.serviceMonitor.interval | string | `"10s"` | Interval for scrape metrics. | | envoy.prometheus.serviceMonitor.interval | string | `"10s"` | Interval for scrape metrics. |
| envoy.prometheus.serviceMonitor.labels | object | `{}` | Labels to add to ServiceMonitor cilium-envoy | | envoy.prometheus.serviceMonitor.labels | object | `{}` | Labels to add to ServiceMonitor cilium-envoy |
| envoy.prometheus.serviceMonitor.metricRelabelings | string | `nil` | Metrics relabeling configs for the ServiceMonitor cilium-envoy | | envoy.prometheus.serviceMonitor.metricRelabelings | string | `nil` | Metrics relabeling configs for the ServiceMonitor cilium-envoy or for cilium-agent with Envoy configured. |
| envoy.prometheus.serviceMonitor.relabelings | list | `[{"replacement":"${1}","sourceLabels":["__meta_kubernetes_pod_node_name"],"targetLabel":"node"}]` | Relabeling configs for the ServiceMonitor cilium-envoy | | envoy.prometheus.serviceMonitor.relabelings | list | `[{"replacement":"${1}","sourceLabels":["__meta_kubernetes_pod_node_name"],"targetLabel":"node"}]` | Relabeling configs for the ServiceMonitor cilium-envoy or for cilium-agent with Envoy configured. |
| envoy.readinessProbe.failureThreshold | int | `3` | failure threshold of readiness probe | | envoy.readinessProbe.failureThreshold | int | `3` | failure threshold of readiness probe |
| envoy.readinessProbe.periodSeconds | int | `30` | interval between checks of the readiness probe | | envoy.readinessProbe.periodSeconds | int | `30` | interval between checks of the readiness probe |
| envoy.resources | object | `{}` | Envoy resource limits & requests ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | | envoy.resources | object | `{}` | Envoy resource limits & requests ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
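The standalone Envoy DaemonSet and its Prometheus knobs above could be combined roughly as follows; the resource requests are illustrative and the ServiceMonitor requires the Prometheus Operator CRDs.

```yaml
# illustrative standalone-Envoy override with metrics scraping
envoy:
  enabled: true                # run cilium-envoy as its own DaemonSet
  connectTimeoutSeconds: 2
  prometheus:
    enabled: true
    port: "9964"
    serviceMonitor:
      enabled: true            # needs the Prometheus Operator CRDs installed
      interval: 10s
  resources:
    requests:
      cpu: 100m                # placeholder sizing
      memory: 128Mi
```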
@@ -348,6 +370,7 @@ contributors across the globe, there is almost always someone available to help.
| envoyConfig.secretsNamespace | object | `{"create":true,"name":"cilium-secrets"}` | SecretsNamespace is the namespace in which envoy SDS will retrieve secrets from. | | envoyConfig.secretsNamespace | object | `{"create":true,"name":"cilium-secrets"}` | SecretsNamespace is the namespace in which envoy SDS will retrieve secrets from. |
| envoyConfig.secretsNamespace.create | bool | `true` | Create secrets namespace for CiliumEnvoyConfig CRDs. | | envoyConfig.secretsNamespace.create | bool | `true` | Create secrets namespace for CiliumEnvoyConfig CRDs. |
| envoyConfig.secretsNamespace.name | string | `"cilium-secrets"` | The name of the secret namespace to which Cilium agents are given read access. | | envoyConfig.secretsNamespace.name | string | `"cilium-secrets"` | The name of the secret namespace to which Cilium agents are given read access. |
| etcd.annotations | object | `{}` | Annotations to be added to all top-level etcd-operator objects (resources under templates/etcd-operator) |
| etcd.clusterDomain | string | `"cluster.local"` | Cluster domain for cilium-etcd-operator. | | etcd.clusterDomain | string | `"cluster.local"` | Cluster domain for cilium-etcd-operator. |
| etcd.enabled | bool | `false` | Enable etcd mode for the agent. | | etcd.enabled | bool | `false` | Enable etcd mode for the agent. |
| etcd.endpoints | list | `["https://CHANGE-ME:2379"]` | List of etcd endpoints (not needed when using managed=true). | | etcd.endpoints | list | `["https://CHANGE-ME:2379"]` | List of etcd endpoints (not needed when using managed=true). |
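The etcd rows above correspond to an external-kvstore override along these lines; the endpoint URL is a placeholder standing in for the `CHANGE-ME` default.

```yaml
# illustrative external-etcd (kvstore) override
etcd:
  enabled: true
  clusterDomain: cluster.local
  endpoints:
    - https://etcd-0.example.internal:2379   # placeholder endpoint
```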
@@ -393,24 +416,41 @@ contributors across the globe, there is almost always someone available to help.
| hostFirewall | object | `{"enabled":false}` | Configure the host firewall. | | hostFirewall | object | `{"enabled":false}` | Configure the host firewall. |
| hostFirewall.enabled | bool | `false` | Enables the enforcement of host policies in the eBPF datapath. | | hostFirewall.enabled | bool | `false` | Enables the enforcement of host policies in the eBPF datapath. |
| hostPort.enabled | bool | `false` | Enable hostPort service support. | | hostPort.enabled | bool | `false` | Enable hostPort service support. |
| hubble.annotations | object | `{}` | Annotations to be added to all top-level hubble objects (resources under templates/hubble) |
| hubble.enabled | bool | `true` | Enable Hubble (true by default). | | hubble.enabled | bool | `true` | Enable Hubble (true by default). |
| hubble.export | object | `{"dynamic":{"config":{"configMapName":"cilium-flowlog-config","content":[{"excludeFilters":[],"fieldMask":[],"filePath":"/var/run/cilium/hubble/events.log","includeFilters":[],"name":"all"}],"createConfigMap":true},"enabled":false},"fileMaxBackups":5,"fileMaxSizeMb":10,"static":{"allowList":[],"denyList":[],"enabled":false,"fieldMask":[],"filePath":"/var/run/cilium/hubble/events.log"}}` | Hubble flows export. |
| hubble.export.dynamic | object | `{"config":{"configMapName":"cilium-flowlog-config","content":[{"excludeFilters":[],"fieldMask":[],"filePath":"/var/run/cilium/hubble/events.log","includeFilters":[],"name":"all"}],"createConfigMap":true},"enabled":false}` | Dynamic exporters configuration. Dynamic exporters may be reconfigured without the need for agent restarts. |
| hubble.export.dynamic.config.configMapName | string | `"cilium-flowlog-config"` | Name of the configmap with configuration that may be altered to reconfigure exporters within running agents. |
| hubble.export.dynamic.config.content | list | `[{"excludeFilters":[],"fieldMask":[],"filePath":"/var/run/cilium/hubble/events.log","includeFilters":[],"name":"all"}]` | Exporters configuration in YAML format. |
| hubble.export.dynamic.config.createConfigMap | bool | `true` | True if the helm installer should create the config map. Switch to false if you want to maintain the file content yourself. |
| hubble.export.fileMaxBackups | int | `5` | Defines the max number of backup/rotated files. |
| hubble.export.fileMaxSizeMb | int | `10` | Defines the max file size of the output file before it gets rotated. |
| hubble.export.static | object | `{"allowList":[],"denyList":[],"enabled":false,"fieldMask":[],"filePath":"/var/run/cilium/hubble/events.log"}` | Static exporter configuration. The static exporter is bound to the agent lifecycle. |
| hubble.listenAddress | string | `":4244"` | An additional address for Hubble to listen to. Set this field ":4244" if you are enabling Hubble Relay, as it assumes that Hubble is listening on port 4244. | | hubble.listenAddress | string | `":4244"` | An additional address for Hubble to listen to. Set this field ":4244" if you are enabling Hubble Relay, as it assumes that Hubble is listening on port 4244. |
| hubble.metrics | object | `{"dashboards":{"annotations":{},"enabled":false,"label":"grafana_dashboard","labelValue":"1","namespace":null},"enableOpenMetrics":false,"enabled":null,"port":9965,"serviceAnnotations":{},"serviceMonitor":{"annotations":{},"enabled":false,"interval":"10s","labels":{},"metricRelabelings":null,"relabelings":[{"replacement":"${1}","sourceLabels":["__meta_kubernetes_pod_node_name"],"targetLabel":"node"}]}}` | Hubble metrics configuration. See https://docs.cilium.io/en/stable/observability/metrics/#hubble-metrics for more comprehensive documentation about Hubble metrics. | | hubble.metrics | object | `{"dashboards":{"annotations":{},"enabled":false,"label":"grafana_dashboard","labelValue":"1","namespace":null},"enableOpenMetrics":false,"enabled":null,"port":9965,"serviceAnnotations":{},"serviceMonitor":{"annotations":{},"enabled":false,"interval":"10s","jobLabel":"","labels":{},"metricRelabelings":null,"relabelings":[{"replacement":"${1}","sourceLabels":["__meta_kubernetes_pod_node_name"],"targetLabel":"node"}]}}` | Hubble metrics configuration. See https://docs.cilium.io/en/stable/observability/metrics/#hubble-metrics for more comprehensive documentation about Hubble metrics. |
| hubble.metrics.dashboards | object | `{"annotations":{},"enabled":false,"label":"grafana_dashboard","labelValue":"1","namespace":null}` | Grafana dashboards for hubble. Grafana can import dashboards based on the label and value. Ref: https://github.com/grafana/helm-charts/tree/main/charts/grafana#sidecar-for-dashboards | | hubble.metrics.dashboards | object | `{"annotations":{},"enabled":false,"label":"grafana_dashboard","labelValue":"1","namespace":null}` | Grafana dashboards for hubble. Grafana can import dashboards based on the label and value. Ref: https://github.com/grafana/helm-charts/tree/main/charts/grafana#sidecar-for-dashboards |
| hubble.metrics.enableOpenMetrics | bool | `false` | Enables exporting hubble metrics in OpenMetrics format. | | hubble.metrics.enableOpenMetrics | bool | `false` | Enables exporting hubble metrics in OpenMetrics format. |
| hubble.metrics.enabled | string | `nil` | Configures the list of metrics to collect. If empty or null, metrics are disabled. Example: enabled: - dns:query;ignoreAAAA - drop - tcp - flow - icmp - http You can specify the list of metrics from the helm CLI: --set metrics.enabled="{dns:query;ignoreAAAA,drop,tcp,flow,icmp,http}" | | hubble.metrics.enabled | string | `nil` | Configures the list of metrics to collect. If empty or null, metrics are disabled. Example: enabled: - dns:query;ignoreAAAA - drop - tcp - flow - icmp - http You can specify the list of metrics from the helm CLI: --set hubble.metrics.enabled="{dns:query;ignoreAAAA,drop,tcp,flow,icmp,http}" |
| hubble.metrics.port | int | `9965` | Configure the port the hubble metric server listens on. | | hubble.metrics.port | int | `9965` | Configure the port the hubble metric server listens on. |
| hubble.metrics.serviceAnnotations | object | `{}` | Annotations to be added to hubble-metrics service. | | hubble.metrics.serviceAnnotations | object | `{}` | Annotations to be added to hubble-metrics service. |
| hubble.metrics.serviceMonitor.annotations | object | `{}` | Annotations to add to ServiceMonitor hubble | | hubble.metrics.serviceMonitor.annotations | object | `{}` | Annotations to add to ServiceMonitor hubble |
| hubble.metrics.serviceMonitor.enabled | bool | `false` | Create ServiceMonitor resources for Prometheus Operator. This requires the prometheus CRDs to be available. ref: https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml) | | hubble.metrics.serviceMonitor.enabled | bool | `false` | Create ServiceMonitor resources for Prometheus Operator. This requires the prometheus CRDs to be available. ref: https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml) |
| hubble.metrics.serviceMonitor.interval | string | `"10s"` | Interval for scrape metrics. | | hubble.metrics.serviceMonitor.interval | string | `"10s"` | Interval for scrape metrics. |
| hubble.metrics.serviceMonitor.jobLabel | string | `""` | jobLabel to add for ServiceMonitor hubble |
| hubble.metrics.serviceMonitor.labels | object | `{}` | Labels to add to ServiceMonitor hubble | | hubble.metrics.serviceMonitor.labels | object | `{}` | Labels to add to ServiceMonitor hubble |
| hubble.metrics.serviceMonitor.metricRelabelings | string | `nil` | Metrics relabeling configs for the ServiceMonitor hubble | | hubble.metrics.serviceMonitor.metricRelabelings | string | `nil` | Metrics relabeling configs for the ServiceMonitor hubble |
| hubble.metrics.serviceMonitor.relabelings | list | `[{"replacement":"${1}","sourceLabels":["__meta_kubernetes_pod_node_name"],"targetLabel":"node"}]` | Relabeling configs for the ServiceMonitor hubble | | hubble.metrics.serviceMonitor.relabelings | list | `[{"replacement":"${1}","sourceLabels":["__meta_kubernetes_pod_node_name"],"targetLabel":"node"}]` | Relabeling configs for the ServiceMonitor hubble |
| hubble.peerService.clusterDomain | string | `"cluster.local"` | The cluster domain to use to query the Hubble Peer service. It should be the local cluster. | | hubble.peerService.clusterDomain | string | `"cluster.local"` | The cluster domain to use to query the Hubble Peer service. It should be the local cluster. |
| hubble.peerService.targetPort | int | `4244` | Target Port for the Peer service, must match the hubble.listenAddress' port. | | hubble.peerService.targetPort | int | `4244` | Target Port for the Peer service, must match the hubble.listenAddress' port. |
| hubble.preferIpv6 | bool | `false` | Whether Hubble should prefer to announce IPv6 or IPv4 addresses if both are available. | | hubble.preferIpv6 | bool | `false` | Whether Hubble should prefer to announce IPv6 or IPv4 addresses if both are available. |
| hubble.redact | object | `{"enabled":false,"http":{"headers":{"allow":[],"deny":[]},"urlQuery":false,"userInfo":true},"kafka":{"apiKey":false}}` | Enables redacting sensitive information present in Layer 7 flows. |
| hubble.redact.http.headers.allow | list | `[]` | List of HTTP headers to allow: headers not matching will be redacted. Note: `allow` and `deny` lists cannot be used both at the same time, only one can be present. Example: redact: enabled: true http: headers: allow: - traceparent - tracestate - Cache-Control You can specify the options from the helm CLI: --set hubble.redact.enabled="true" --set hubble.redact.http.headers.allow="traceparent,tracestate,Cache-Control" |
| hubble.redact.http.headers.deny | list | `[]` | List of HTTP headers to deny: matching headers will be redacted. Note: `allow` and `deny` lists cannot be used both at the same time, only one can be present. Example: redact: enabled: true http: headers: deny: - Authorization - Proxy-Authorization You can specify the options from the helm CLI: --set hubble.redact.enabled="true" --set hubble.redact.http.headers.deny="Authorization,Proxy-Authorization" |
| hubble.redact.http.urlQuery | bool | `false` | Enables redacting URL query (GET) parameters. Example: redact: enabled: true http: urlQuery: true You can specify the options from the helm CLI: --set hubble.redact.enabled="true" --set hubble.redact.http.urlQuery="true" |
| hubble.redact.http.userInfo | bool | `true` | Enables redacting user info, e.g., password when basic auth is used. Example: redact: enabled: true http: userInfo: true You can specify the options from the helm CLI: --set hubble.redact.enabled="true" --set hubble.redact.http.userInfo="true" |
| hubble.redact.kafka.apiKey | bool | `false` | Enables redacting Kafka's API key. Example: redact: enabled: true kafka: apiKey: true You can specify the options from the helm CLI: --set hubble.redact.enabled="true" --set hubble.redact.kafka.apiKey="true" |
| hubble.relay.affinity | object | `{"podAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"k8s-app":"cilium"}},"topologyKey":"kubernetes.io/hostname"}]}}` | Affinity for hubble-replay | | hubble.relay.affinity | object | `{"podAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"k8s-app":"cilium"}},"topologyKey":"kubernetes.io/hostname"}]}}` | Affinity for hubble-replay |
| hubble.relay.annotations | object | `{}` | Annotations to be added to all top-level hubble-relay objects (resources under templates/hubble-relay) |
| hubble.relay.dialTimeout | string | `nil` | Dial timeout to connect to the local hubble instance to receive peer information (e.g. "30s"). | | hubble.relay.dialTimeout | string | `nil` | Dial timeout to connect to the local hubble instance to receive peer information (e.g. "30s"). |
| hubble.relay.enabled | bool | `false` | Enable Hubble Relay (requires hubble.enabled=true) | | hubble.relay.enabled | bool | `false` | Enable Hubble Relay (requires hubble.enabled=true) |
| hubble.relay.extraEnv | list | `[]` | Additional hubble-relay environment variables. | | hubble.relay.extraEnv | list | `[]` | Additional hubble-relay environment variables. |
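Several of the Hubble rows above (the metrics list, the new ServiceMonitor jobLabel, redaction, and the relay toggle) fit together as in this sketch; the metric list mirrors the example already given in the table and the remaining values are illustrative.

```yaml
# illustrative Hubble metrics / redaction override
hubble:
  enabled: true
  metrics:
    enabled:
      - dns:query;ignoreAAAA
      - drop
      - tcp
      - flow
      - icmp
      - http
    serviceMonitor:
      enabled: false
      jobLabel: ""             # jobLabel knob added in this upgrade
  redact:
    enabled: true
    http:
      urlQuery: true           # redact GET query parameters
      headers:
        deny:                  # matching headers are redacted
          - Authorization
          - Proxy-Authorization
  relay:
    enabled: true              # requires hubble.enabled=true
```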
@@ -418,7 +458,7 @@ contributors across the globe, there is almost always someone available to help.
| hubble.relay.extraVolumes | list | `[]` | Additional hubble-relay volumes. | | hubble.relay.extraVolumes | list | `[]` | Additional hubble-relay volumes. |
| hubble.relay.gops.enabled | bool | `true` | Enable gops for hubble-relay | | hubble.relay.gops.enabled | bool | `true` | Enable gops for hubble-relay |
| hubble.relay.gops.port | int | `9893` | Configure gops listen port for hubble-relay | | hubble.relay.gops.port | int | `9893` | Configure gops listen port for hubble-relay |
| hubble.relay.image | object | `{"digest":"sha256:dbef89f924a927043d02b40c18e417c1ea0e8f58b44523b80fef7e3652db24d4","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-relay","tag":"v1.14.5","useDigest":true}` | Hubble-relay container image. | | hubble.relay.image | object | `{"digest":"sha256:48480053930e884adaeb4141259ff1893a22eb59707906c6d38de2fe01916cb0","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-relay","tag":"v1.15.2","useDigest":true}` | Hubble-relay container image. |
| hubble.relay.listenHost | string | `""` | Host to listen to. Specify an empty string to bind to all the interfaces. | | hubble.relay.listenHost | string | `""` | Host to listen to. Specify an empty string to bind to all the interfaces. |
| hubble.relay.listenPort | string | `"4245"` | Port to listen to. | | hubble.relay.listenPort | string | `"4245"` | Port to listen to. |
| hubble.relay.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector | | hubble.relay.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
@@ -450,9 +490,9 @@ contributors across the globe, there is almost always someone available to help.
| hubble.relay.sortBufferDrainTimeout | string | `nil` | When the per-request flows sort buffer is not full, a flow is drained every time this timeout is reached (only affects requests in follow-mode) (e.g. "1s"). | | hubble.relay.sortBufferDrainTimeout | string | `nil` | When the per-request flows sort buffer is not full, a flow is drained every time this timeout is reached (only affects requests in follow-mode) (e.g. "1s"). |
| hubble.relay.sortBufferLenMax | string | `nil` | Max number of flows that can be buffered for sorting before being sent to the client (per request) (e.g. 100). | | hubble.relay.sortBufferLenMax | string | `nil` | Max number of flows that can be buffered for sorting before being sent to the client (per request) (e.g. 100). |
| hubble.relay.terminationGracePeriodSeconds | int | `1` | Configure termination grace period for hubble relay Deployment. | | hubble.relay.terminationGracePeriodSeconds | int | `1` | Configure termination grace period for hubble relay Deployment. |
| hubble.relay.tls | object | `{"client":{"cert":"","key":""},"server":{"cert":"","enabled":false,"extraDnsNames":[],"extraIpAddresses":[],"key":"","mtls":false}}` | TLS configuration for Hubble Relay | | hubble.relay.tls | object | `{"client":{"cert":"","key":""},"server":{"cert":"","enabled":false,"extraDnsNames":[],"extraIpAddresses":[],"key":"","mtls":false,"relayName":"ui.hubble-relay.cilium.io"}}` | TLS configuration for Hubble Relay |
| hubble.relay.tls.client | object | `{"cert":"","key":""}` | base64 encoded PEM values for the hubble-relay client certificate and private key This keypair is presented to Hubble server instances for mTLS authentication and is required when hubble.tls.enabled is true. These values need to be set manually if hubble.tls.auto.enabled is false. | | hubble.relay.tls.client | object | `{"cert":"","key":""}` | base64 encoded PEM values for the hubble-relay client certificate and private key This keypair is presented to Hubble server instances for mTLS authentication and is required when hubble.tls.enabled is true. These values need to be set manually if hubble.tls.auto.enabled is false. |
| hubble.relay.tls.server | object | `{"cert":"","enabled":false,"extraDnsNames":[],"extraIpAddresses":[],"key":"","mtls":false}` | base64 encoded PEM values for the hubble-relay server certificate and private key | | hubble.relay.tls.server | object | `{"cert":"","enabled":false,"extraDnsNames":[],"extraIpAddresses":[],"key":"","mtls":false,"relayName":"ui.hubble-relay.cilium.io"}` | base64 encoded PEM values for the hubble-relay server certificate and private key |
| hubble.relay.tls.server.extraDnsNames | list | `[]` | extra DNS names added to certificate when it's auto generated | | hubble.relay.tls.server.extraDnsNames | list | `[]` | extra DNS names added to certificate when it's auto generated |
| hubble.relay.tls.server.extraIpAddresses | list | `[]` | extra IP addresses added to certificate when it's auto generated | | hubble.relay.tls.server.extraIpAddresses | list | `[]` | extra IP addresses added to certificate when it's auto generated |
| hubble.relay.tolerations | list | `[]` | Node tolerations for pod assignment on nodes with taints ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ | | hubble.relay.tolerations | list | `[]` | Node tolerations for pod assignment on nodes with taints ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
@@ -472,10 +512,13 @@ contributors across the globe, there is almost always someone available to help.
| hubble.tls.server.extraDnsNames | list | `[]` | Extra DNS names added to certificate when it's auto generated | | hubble.tls.server.extraDnsNames | list | `[]` | Extra DNS names added to certificate when it's auto generated |
| hubble.tls.server.extraIpAddresses | list | `[]` | Extra IP addresses added to certificate when it's auto generated | | hubble.tls.server.extraIpAddresses | list | `[]` | Extra IP addresses added to certificate when it's auto generated |
| hubble.ui.affinity | object | `{}` | Affinity for hubble-ui | | hubble.ui.affinity | object | `{}` | Affinity for hubble-ui |
| hubble.ui.annotations | object | `{}` | Annotations to be added to all top-level hubble-ui objects (resources under templates/hubble-ui) |
| hubble.ui.backend.extraEnv | list | `[]` | Additional hubble-ui backend environment variables. | | hubble.ui.backend.extraEnv | list | `[]` | Additional hubble-ui backend environment variables. |
| hubble.ui.backend.extraVolumeMounts | list | `[]` | Additional hubble-ui backend volumeMounts. | | hubble.ui.backend.extraVolumeMounts | list | `[]` | Additional hubble-ui backend volumeMounts. |
| hubble.ui.backend.extraVolumes | list | `[]` | Additional hubble-ui backend volumes. | | hubble.ui.backend.extraVolumes | list | `[]` | Additional hubble-ui backend volumes. |
| hubble.ui.backend.image | object | `{"digest":"sha256:1f86f3400827a0451e6332262467f894eeb7caf0eb8779bd951e2caa9d027cbe","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-ui-backend","tag":"v0.12.1","useDigest":true}` | Hubble-ui backend image. | | hubble.ui.backend.image | object | `{"digest":"sha256:1e7657d997c5a48253bb8dc91ecee75b63018d16ff5e5797e5af367336bc8803","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-ui-backend","tag":"v0.13.0","useDigest":true}` | Hubble-ui backend image. |
| hubble.ui.backend.livenessProbe.enabled | bool | `false` | Enable liveness probe for Hubble-ui backend (requires Hubble-ui 0.12+) |
| hubble.ui.backend.readinessProbe.enabled | bool | `false` | Enable readiness probe for Hubble-ui backend (requires Hubble-ui 0.12+) |
| hubble.ui.backend.resources | object | `{}` | Resource requests and limits for the 'backend' container of the 'hubble-ui' deployment. | | hubble.ui.backend.resources | object | `{}` | Resource requests and limits for the 'backend' container of the 'hubble-ui' deployment. |
| hubble.ui.backend.securityContext | object | `{}` | Hubble-ui backend security context. | | hubble.ui.backend.securityContext | object | `{}` | Hubble-ui backend security context. |
| hubble.ui.baseUrl | string | `"/"` | Defines the base URL prefix for all hubble-ui HTTP requests. It needs to be changed if the ingress for hubble-ui is configured under some sub-path. A trailing `/` is required for a custom path, e.g. `/service-map/` | | hubble.ui.baseUrl | string | `"/"` | Defines the base URL prefix for all hubble-ui HTTP requests. It needs to be changed if the ingress for hubble-ui is configured under some sub-path. A trailing `/` is required for a custom path, e.g. `/service-map/` |
@@ -483,7 +526,7 @@ contributors across the globe, there is almost always someone available to help.
| hubble.ui.frontend.extraEnv | list | `[]` | Additional hubble-ui frontend environment variables. | | hubble.ui.frontend.extraEnv | list | `[]` | Additional hubble-ui frontend environment variables. |
| hubble.ui.frontend.extraVolumeMounts | list | `[]` | Additional hubble-ui frontend volumeMounts. | | hubble.ui.frontend.extraVolumeMounts | list | `[]` | Additional hubble-ui frontend volumeMounts. |
| hubble.ui.frontend.extraVolumes | list | `[]` | Additional hubble-ui frontend volumes. | | hubble.ui.frontend.extraVolumes | list | `[]` | Additional hubble-ui frontend volumes. |
| hubble.ui.frontend.image | object | `{"digest":"sha256:9e5f81ee747866480ea1ac4630eb6975ff9227f9782b7c93919c081c33f38267","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-ui","tag":"v0.12.1","useDigest":true}` | Hubble-ui frontend image. | | hubble.ui.frontend.image | object | `{"digest":"sha256:7d663dc16538dd6e29061abd1047013a645e6e69c115e008bee9ea9fef9a6666","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-ui","tag":"v0.13.0","useDigest":true}` | Hubble-ui frontend image. |
| hubble.ui.frontend.resources | object | `{}` | Resource requests and limits for the 'frontend' container of the 'hubble-ui' deployment. | | hubble.ui.frontend.resources | object | `{}` | Resource requests and limits for the 'frontend' container of the 'hubble-ui' deployment. |
| hubble.ui.frontend.securityContext | object | `{}` | Hubble-ui frontend security context. | | hubble.ui.frontend.securityContext | object | `{}` | Hubble-ui frontend security context. |
| hubble.ui.frontend.server.ipv6 | object | `{"enabled":true}` | Controls server listener for ipv6 | | hubble.ui.frontend.server.ipv6 | object | `{"enabled":true}` | Controls server listener for ipv6 |
@@ -510,14 +553,15 @@ contributors across the globe, there is almost always someone available to help.
| hubble.ui.updateStrategy | object | `{"rollingUpdate":{"maxUnavailable":1},"type":"RollingUpdate"}` | hubble-ui update strategy. | | hubble.ui.updateStrategy | object | `{"rollingUpdate":{"maxUnavailable":1},"type":"RollingUpdate"}` | hubble-ui update strategy. |
| identityAllocationMode | string | `"crd"` | Method to use for identity allocation (`crd` or `kvstore`). | | identityAllocationMode | string | `"crd"` | Method to use for identity allocation (`crd` or `kvstore`). |
| identityChangeGracePeriod | string | `"5s"` | Time to wait before using new identity on endpoint identity change. | | identityChangeGracePeriod | string | `"5s"` | Time to wait before using new identity on endpoint identity change. |
| image | object | `{"digest":"sha256:d3b287029755b6a47dee01420e2ea469469f1b174a2089c10af7e5e9289ef05b","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.14.5","useDigest":true}` | Agent container image. | | image | object | `{"digest":"sha256:bfeb3f1034282444ae8c498dca94044df2b9c9c8e7ac678e0b43c849f0b31746","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.15.2","useDigest":true}` | Agent container image. |
| imagePullSecrets | string | `nil` | Configure image pull secrets for pulling container images | | imagePullSecrets | string | `nil` | Configure image pull secrets for pulling container images |
| ingressController.default | bool | `false` | Set cilium ingress controller to be the default ingress controller This will let cilium ingress controller route entries without ingress class set | | ingressController.default | bool | `false` | Set cilium ingress controller to be the default ingress controller This will let cilium ingress controller route entries without ingress class set |
| ingressController.defaultSecretName | string | `nil` | Default secret name for ingresses without .spec.tls[].secretName set. | | ingressController.defaultSecretName | string | `nil` | Default secret name for ingresses without .spec.tls[].secretName set. |
| ingressController.defaultSecretNamespace | string | `nil` | Default secret namespace for ingresses without .spec.tls[].secretName set. | | ingressController.defaultSecretNamespace | string | `nil` | Default secret namespace for ingresses without .spec.tls[].secretName set. |
| ingressController.enableProxyProtocol | bool | `false` | Enable proxy protocol for all Ingress listeners. Note that _only_ Proxy protocol traffic will be accepted once this is enabled. |
| ingressController.enabled | bool | `false` | Enable cilium ingress controller This will automatically set enable-envoy-config as well. | | ingressController.enabled | bool | `false` | Enable cilium ingress controller This will automatically set enable-envoy-config as well. |
| ingressController.enforceHttps | bool | `true` | Enforce https for host having matching TLS host in Ingress. Incoming traffic to http listener will return 308 http error code with respective location in header. | | ingressController.enforceHttps | bool | `true` | Enforce https for host having matching TLS host in Ingress. Incoming traffic to http listener will return 308 http error code with respective location in header. |
| ingressController.ingressLBAnnotationPrefixes | list | `["service.beta.kubernetes.io","service.kubernetes.io","cloud.google.com"]` | IngressLBAnnotations are the annotation prefixes, which are used to filter annotations to propagate from Ingress to the Load Balancer service | | ingressController.ingressLBAnnotationPrefixes | list | `["service.beta.kubernetes.io","service.kubernetes.io","cloud.google.com"]` | IngressLBAnnotations are the annotation and label prefixes, which are used to filter annotations and/or labels to propagate from Ingress to the Load Balancer service |
| ingressController.loadbalancerMode | string | `"dedicated"` | Default ingress load balancer mode Supported values: shared, dedicated For granular control, use the following annotations on the ingress resource ingress.cilium.io/loadbalancer-mode: shared|dedicated, | | ingressController.loadbalancerMode | string | `"dedicated"` | Default ingress load balancer mode Supported values: shared, dedicated For granular control, use the following annotations on the ingress resource ingress.cilium.io/loadbalancer-mode: shared|dedicated, |
| ingressController.secretsNamespace | object | `{"create":true,"name":"cilium-secrets","sync":true}` | SecretsNamespace is the namespace in which envoy SDS will retrieve TLS secrets from. | | ingressController.secretsNamespace | object | `{"create":true,"name":"cilium-secrets","sync":true}` | SecretsNamespace is the namespace in which envoy SDS will retrieve TLS secrets from. |
| ingressController.secretsNamespace.create | bool | `true` | Create secrets namespace for Ingress. | | ingressController.secretsNamespace.create | bool | `true` | Create secrets namespace for Ingress. |
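The ingress-controller keys above can be read as the following override; whether to make Cilium the default ingress class is a deployment-specific choice and the values shown are only an example.

```yaml
# illustrative ingress-controller override
ingressController:
  enabled: true                # also turns on enable-envoy-config
  default: true                # route Ingresses without an ingress class set
  loadbalancerMode: dedicated  # or: shared
  enforceHttps: true
  enableProxyProtocol: false   # when true, only PROXY-protocol traffic is accepted
  secretsNamespace:
    create: true
    name: cilium-secrets
    sync: true
```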
@@ -550,9 +594,9 @@ contributors across the globe, there is almost always someone available to help.
| ipv6.enabled | bool | `false` | Enable IPv6 support. | | ipv6.enabled | bool | `false` | Enable IPv6 support. |
| ipv6NativeRoutingCIDR | string | `""` | Allows to explicitly specify the IPv6 CIDR for native routing. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT. Generally speaking, specifying a native routing CIDR implies that Cilium can depend on the underlying networking stack to route packets to their destination. To offer a concrete example, if Cilium is configured to use direct routing and the Kubernetes CIDR is included in the native routing CIDR, the user must configure the routes to reach pods, either manually or by setting the auto-direct-node-routes flag. | | ipv6NativeRoutingCIDR | string | `""` | Allows to explicitly specify the IPv6 CIDR for native routing. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT. Generally speaking, specifying a native routing CIDR implies that Cilium can depend on the underlying networking stack to route packets to their destination. To offer a concrete example, if Cilium is configured to use direct routing and the Kubernetes CIDR is included in the native routing CIDR, the user must configure the routes to reach pods, either manually or by setting the auto-direct-node-routes flag. |
| k8s | object | `{}` | Configure Kubernetes specific configuration | | k8s | object | `{}` | Configure Kubernetes specific configuration |
| k8sClientRateLimit | object | `{"burst":10,"qps":5}` | Configure the client side rate limit for the agent and operator If the amount of requests to the Kubernetes API server exceeds the configured rate limit, the agent and operator will start to throttle requests by delaying them until there is budget or the request times out. | | k8sClientRateLimit | object | `{"burst":null,"qps":null}` | Configure the client side rate limit for the agent and operator If the amount of requests to the Kubernetes API server exceeds the configured rate limit, the agent and operator will start to throttle requests by delaying them until there is budget or the request times out. |
| k8sClientRateLimit.burst | int | `10` | The burst request rate in requests per second. The rate limiter will allow short bursts with a higher rate. | | k8sClientRateLimit.burst | int | 10 for k8s up to 1.26. 20 for k8s version 1.27+ | The burst request rate in requests per second. The rate limiter will allow short bursts with a higher rate. |
| k8sClientRateLimit.qps | int | `5` | The sustained request rate in requests per second. | | k8sClientRateLimit.qps | int | 5 for k8s up to 1.26. 10 for k8s version 1.27+ | The sustained request rate in requests per second. |
| k8sNetworkPolicy.enabled | bool | `true` | Enable support for K8s NetworkPolicy | | k8sNetworkPolicy.enabled | bool | `true` | Enable support for K8s NetworkPolicy |
| k8sServiceHost | string | `""` | Kubernetes service host | | k8sServiceHost | string | `""` | Kubernetes service host |
| k8sServicePort | string | `""` | Kubernetes service port | | k8sServicePort | string | `""` | Kubernetes service port |
@@ -570,7 +614,8 @@ contributors across the globe, there is almost always someone available to help.
| l7Proxy | bool | `true` | Enable Layer 7 network policy. | | l7Proxy | bool | `true` | Enable Layer 7 network policy. |
| livenessProbe.failureThreshold | int | `10` | failure threshold of liveness probe | | livenessProbe.failureThreshold | int | `10` | failure threshold of liveness probe |
| livenessProbe.periodSeconds | int | `30` | interval between checks of the liveness probe | | livenessProbe.periodSeconds | int | `30` | interval between checks of the liveness probe |
| loadBalancer | object | `{"l7":{"algorithm":"round_robin","backend":"disabled","ports":[]}}` | Configure service load balancing | | loadBalancer | object | `{"acceleration":"disabled","l7":{"algorithm":"round_robin","backend":"disabled","ports":[]}}` | Configure service load balancing |
| loadBalancer.acceleration | string | `"disabled"` | acceleration is the option to accelerate service handling via XDP Applicable values can be: disabled (do not use XDP), native (XDP BPF program is run directly out of the networking driver's early receive path), or best-effort (use native mode XDP acceleration on devices that support it). |
| loadBalancer.l7 | object | `{"algorithm":"round_robin","backend":"disabled","ports":[]}` | L7 LoadBalancer | | loadBalancer.l7 | object | `{"algorithm":"round_robin","backend":"disabled","ports":[]}` | L7 LoadBalancer |
| loadBalancer.l7.algorithm | string | `"round_robin"` | Default LB algorithm The default LB algorithm to be used for services, which can be overridden by the service annotation (e.g. service.cilium.io/lb-l7-algorithm) Applicable values: round_robin, least_request, random | | loadBalancer.l7.algorithm | string | `"round_robin"` | Default LB algorithm The default LB algorithm to be used for services, which can be overridden by the service annotation (e.g. service.cilium.io/lb-l7-algorithm) Applicable values: round_robin, least_request, random |
| loadBalancer.l7.backend | string | `"disabled"` | Enable L7 service load balancing via envoy proxy. The request to a k8s service, which has specific annotation e.g. service.cilium.io/lb-l7, will be forwarded to the local backend proxy to be load balanced to the service endpoints. Please refer to docs for supported annotations for more configuration. Applicable values: - envoy: Enable L7 load balancing via envoy proxy. This will automatically set enable-envoy-config as well. - disabled: Disable L7 load balancing by way of service annotation. | | loadBalancer.l7.backend | string | `"disabled"` | Enable L7 service load balancing via envoy proxy. The request to a k8s service, which has specific annotation e.g. service.cilium.io/lb-l7, will be forwarded to the local backend proxy to be load balanced to the service endpoints. Please refer to docs for supported annotations for more configuration. Applicable values: - envoy: Enable L7 load balancing via envoy proxy. This will automatically set enable-envoy-config as well. - disabled: Disable L7 load balancing by way of service annotation. |
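The client rate-limit and load-balancer rows above combine into a sketch like this; the qps/burst numbers simply echo the new 1.27+ defaults listed in the table, and XDP acceleration is only an example choice.

```yaml
# illustrative API rate-limit / load-balancer override
k8sClientRateLimit:
  qps: 10                      # new default for k8s >= 1.27 per the table above
  burst: 20
loadBalancer:
  acceleration: native         # disabled, native or best-effort (XDP)
  l7:
    backend: envoy             # L7 LB via Envoy; sets enable-envoy-config
    algorithm: round_robin
```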
@@ -583,13 +628,15 @@ contributors across the globe, there is almost always someone available to help.
| name | string | `"cilium"` | Agent container name. | | name | string | `"cilium"` | Agent container name. |
| nat46x64Gateway | object | `{"enabled":false}` | Configure standalone NAT46/NAT64 gateway | | nat46x64Gateway | object | `{"enabled":false}` | Configure standalone NAT46/NAT64 gateway |
| nat46x64Gateway.enabled | bool | `false` | Enable RFC8215-prefixed translation | | nat46x64Gateway.enabled | bool | `false` | Enable RFC8215-prefixed translation |
| nodePort | object | `{"autoProtectPortRange":true,"bindProtection":true,"enableHealthCheck":true,"enabled":false}` | Configure N-S k8s service loadbalancing | | nodePort | object | `{"autoProtectPortRange":true,"bindProtection":true,"enableHealthCheck":true,"enableHealthCheckLoadBalancerIP":false,"enabled":false}` | Configure N-S k8s service loadbalancing |
| nodePort.autoProtectPortRange | bool | `true` | Append NodePort range to ip_local_reserved_ports if clash with ephemeral ports is detected. | | nodePort.autoProtectPortRange | bool | `true` | Append NodePort range to ip_local_reserved_ports if clash with ephemeral ports is detected. |
| nodePort.bindProtection | bool | `true` | Set to true to prevent applications binding to service ports. | | nodePort.bindProtection | bool | `true` | Set to true to prevent applications binding to service ports. |
| nodePort.enableHealthCheck | bool | `true` | Enable healthcheck nodePort server for NodePort services | | nodePort.enableHealthCheck | bool | `true` | Enable healthcheck nodePort server for NodePort services |
| nodePort.enableHealthCheckLoadBalancerIP | bool | `false` | Enable access of the healthcheck nodePort on the LoadBalancerIP. Needs EnableHealthCheck to be enabled |
| nodePort.enabled | bool | `false` | Enable the Cilium NodePort service implementation. | | nodePort.enabled | bool | `false` | Enable the Cilium NodePort service implementation. |
| nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node selector for cilium-agent. | | nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node selector for cilium-agent. |
| nodeinit.affinity | object | `{}` | Affinity for cilium-nodeinit | | nodeinit.affinity | object | `{}` | Affinity for cilium-nodeinit |
| nodeinit.annotations | object | `{}` | Annotations to be added to all top-level nodeinit objects (resources under templates/cilium-nodeinit) |
| nodeinit.bootstrapFile | string | `"/tmp/cilium-bootstrap.d/cilium-bootstrap-time"` | bootstrapFile is the location of the file where the bootstrap timestamp is written by the node-init DaemonSet | | nodeinit.bootstrapFile | string | `"/tmp/cilium-bootstrap.d/cilium-bootstrap-time"` | bootstrapFile is the location of the file where the bootstrap timestamp is written by the node-init DaemonSet |
| nodeinit.enabled | bool | `false` | Enable the node initialization DaemonSet | | nodeinit.enabled | bool | `false` | Enable the node initialization DaemonSet |
| nodeinit.extraEnv | list | `[]` | Additional nodeinit environment variables. | | nodeinit.extraEnv | list | `[]` | Additional nodeinit environment variables. |
@@ -607,6 +654,7 @@ contributors across the globe, there is almost always someone available to help.
| nodeinit.tolerations | list | `[{"operator":"Exists"}]` | Node tolerations for nodeinit scheduling to nodes with taints ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ | | nodeinit.tolerations | list | `[{"operator":"Exists"}]` | Node tolerations for nodeinit scheduling to nodes with taints ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
| nodeinit.updateStrategy | object | `{"type":"RollingUpdate"}` | node-init update strategy | | nodeinit.updateStrategy | object | `{"type":"RollingUpdate"}` | node-init update strategy |
| operator.affinity | object | `{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"io.cilium/app":"operator"}},"topologyKey":"kubernetes.io/hostname"}]}}` | Affinity for cilium-operator | | operator.affinity | object | `{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"io.cilium/app":"operator"}},"topologyKey":"kubernetes.io/hostname"}]}}` | Affinity for cilium-operator |
| operator.annotations | object | `{}` | Annotations to be added to all top-level cilium-operator objects (resources under templates/cilium-operator) |
| operator.dashboards | object | `{"annotations":{},"enabled":false,"label":"grafana_dashboard","labelValue":"1","namespace":null}` | Grafana dashboards for cilium-operator grafana can import dashboards based on the label and value ref: https://github.com/grafana/helm-charts/tree/main/charts/grafana#sidecar-for-dashboards | | operator.dashboards | object | `{"annotations":{},"enabled":false,"label":"grafana_dashboard","labelValue":"1","namespace":null}` | Grafana dashboards for cilium-operator grafana can import dashboards based on the label and value ref: https://github.com/grafana/helm-charts/tree/main/charts/grafana#sidecar-for-dashboards |
| operator.dnsPolicy | string | `""` | DNS policy for Cilium operator pods. Ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy | | operator.dnsPolicy | string | `""` | DNS policy for Cilium operator pods. Ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy |
| operator.enabled | bool | `true` | Enable the cilium-operator component (required). | | operator.enabled | bool | `true` | Enable the cilium-operator component (required). |
@@ -618,7 +666,7 @@ contributors across the globe, there is almost always someone available to help.
| operator.extraVolumes | list | `[]` | Additional cilium-operator volumes. | | operator.extraVolumes | list | `[]` | Additional cilium-operator volumes. |
| operator.identityGCInterval | string | `"15m0s"` | Interval for identity garbage collection. | | operator.identityGCInterval | string | `"15m0s"` | Interval for identity garbage collection. |
| operator.identityHeartbeatTimeout | string | `"30m0s"` | Timeout for identity heartbeats. | | operator.identityHeartbeatTimeout | string | `"30m0s"` | Timeout for identity heartbeats. |
| operator.image | object | `{"alibabacloudDigest":"sha256:e0152c498ba73c56a82eee2a706c8f400e9a6999c665af31a935bdf08e659bc3","awsDigest":"sha256:785ccf1267d0ed3ba9e4bd8166577cb4f9e4ce996af26b27c9d5c554a0d5b09a","azureDigest":"sha256:9203f5583aa34e716d7a6588ebd144e43ce3b77873f578fc12b2679e33591353","genericDigest":"sha256:303f9076bdc73b3fc32aaedee64a14f6f44c8bb08ee9e3956d443021103ebe7a","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/operator","suffix":"","tag":"v1.14.5","useDigest":true}` | cilium-operator image. | | operator.image | object | `{"alibabacloudDigest":"sha256:e2dafa4c04ab05392a28561ab003c2894ec1fcc3214a4dfe2efd6b7d58a66650","awsDigest":"sha256:3f459999b753bfd8626f8effdf66720a996b2c15c70f4e418011d00de33552eb","azureDigest":"sha256:568293cebc27c01a39a9341b1b2578ebf445228df437f8b318adbbb2c4db842a","genericDigest":"sha256:4dd8f67630f45fcaf58145eb81780b677ef62d57632d7e4442905ad3226a9088","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/operator","suffix":"","tag":"v1.15.2","useDigest":true}` | cilium-operator image. |
| operator.nodeGCInterval | string | `"5m0s"` | Interval for cilium node garbage collection. | | operator.nodeGCInterval | string | `"5m0s"` | Interval for cilium node garbage collection. |
| operator.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for cilium-operator pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector | | operator.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for cilium-operator pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
| operator.podAnnotations | object | `{}` | Annotations to be added to cilium-operator pods | | operator.podAnnotations | object | `{}` | Annotations to be added to cilium-operator pods |
@@ -631,10 +679,11 @@ contributors across the globe, there is almost always someone available to help.
| operator.pprof.enabled | bool | `false` | Enable pprof for cilium-operator | | operator.pprof.enabled | bool | `false` | Enable pprof for cilium-operator |
| operator.pprof.port | int | `6061` | Configure pprof listen port for cilium-operator | | operator.pprof.port | int | `6061` | Configure pprof listen port for cilium-operator |
| operator.priorityClassName | string | `""` | The priority class to use for cilium-operator | | operator.priorityClassName | string | `""` | The priority class to use for cilium-operator |
| operator.prometheus | object | `{"enabled":false,"port":9963,"serviceMonitor":{"annotations":{},"enabled":false,"interval":"10s","labels":{},"metricRelabelings":null,"relabelings":null}}` | Enable prometheus metrics for cilium-operator on the configured port at /metrics | | operator.prometheus | object | `{"enabled":true,"port":9963,"serviceMonitor":{"annotations":{},"enabled":false,"interval":"10s","jobLabel":"","labels":{},"metricRelabelings":null,"relabelings":null}}` | Enable prometheus metrics for cilium-operator on the configured port at /metrics |
| operator.prometheus.serviceMonitor.annotations | object | `{}` | Annotations to add to ServiceMonitor cilium-operator | | operator.prometheus.serviceMonitor.annotations | object | `{}` | Annotations to add to ServiceMonitor cilium-operator |
| operator.prometheus.serviceMonitor.enabled | bool | `false` | Enable service monitors. This requires the prometheus CRDs to be available (see https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml) | | operator.prometheus.serviceMonitor.enabled | bool | `false` | Enable service monitors. This requires the prometheus CRDs to be available (see https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml) |
| operator.prometheus.serviceMonitor.interval | string | `"10s"` | Interval for scrape metrics. | | operator.prometheus.serviceMonitor.interval | string | `"10s"` | Interval for scrape metrics. |
| operator.prometheus.serviceMonitor.jobLabel | string | `""` | jobLabel to add for ServiceMonitor cilium-operator |
| operator.prometheus.serviceMonitor.labels | object | `{}` | Labels to add to ServiceMonitor cilium-operator | | operator.prometheus.serviceMonitor.labels | object | `{}` | Labels to add to ServiceMonitor cilium-operator |
| operator.prometheus.serviceMonitor.metricRelabelings | string | `nil` | Metrics relabeling configs for the ServiceMonitor cilium-operator | | operator.prometheus.serviceMonitor.metricRelabelings | string | `nil` | Metrics relabeling configs for the ServiceMonitor cilium-operator |
| operator.prometheus.serviceMonitor.relabelings | string | `nil` | Relabeling configs for the ServiceMonitor cilium-operator | | operator.prometheus.serviceMonitor.relabelings | string | `nil` | Relabeling configs for the ServiceMonitor cilium-operator |
@@ -656,16 +705,18 @@ contributors across the globe, there is almost always someone available to help.
| podAnnotations | object | `{}` | Annotations to be added to agent pods | | podAnnotations | object | `{}` | Annotations to be added to agent pods |
| podLabels | object | `{}` | Labels to be added to agent pods | | podLabels | object | `{}` | Labels to be added to agent pods |
| podSecurityContext | object | `{}` | Security Context for cilium-agent pods. | | podSecurityContext | object | `{}` | Security Context for cilium-agent pods. |
| policyCIDRMatchMode | string | `nil` | policyCIDRMatchMode is a list of entities that may be selected by CIDR selector. The possible value is "nodes". |
| policyEnforcementMode | string | `"default"` | The agent can be put into one of the three policy enforcement modes: default, always and never. ref: https://docs.cilium.io/en/stable/security/policy/intro/#policy-enforcement-modes | | policyEnforcementMode | string | `"default"` | The agent can be put into one of the three policy enforcement modes: default, always and never. ref: https://docs.cilium.io/en/stable/security/policy/intro/#policy-enforcement-modes |
| pprof.address | string | `"localhost"` | Configure pprof listen address for cilium-agent | | pprof.address | string | `"localhost"` | Configure pprof listen address for cilium-agent |
| pprof.enabled | bool | `false` | Enable pprof for cilium-agent | | pprof.enabled | bool | `false` | Enable pprof for cilium-agent |
| pprof.port | int | `6060` | Configure pprof listen port for cilium-agent | | pprof.port | int | `6060` | Configure pprof listen port for cilium-agent |
| preflight.affinity | object | `{"podAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"k8s-app":"cilium"}},"topologyKey":"kubernetes.io/hostname"}]}}` | Affinity for cilium-preflight | | preflight.affinity | object | `{"podAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"k8s-app":"cilium"}},"topologyKey":"kubernetes.io/hostname"}]}}` | Affinity for cilium-preflight |
| preflight.annotations | object | `{}` | Annotations to be added to all top-level preflight objects (resources under templates/cilium-preflight) |
| preflight.enabled | bool | `false` | Enable Cilium pre-flight resources (required for upgrade) | | preflight.enabled | bool | `false` | Enable Cilium pre-flight resources (required for upgrade) |
| preflight.extraEnv | list | `[]` | Additional preflight environment variables. | | preflight.extraEnv | list | `[]` | Additional preflight environment variables. |
| preflight.extraVolumeMounts | list | `[]` | Additional preflight volumeMounts. | | preflight.extraVolumeMounts | list | `[]` | Additional preflight volumeMounts. |
| preflight.extraVolumes | list | `[]` | Additional preflight volumes. | | preflight.extraVolumes | list | `[]` | Additional preflight volumes. |
| preflight.image | object | `{"digest":"sha256:d3b287029755b6a47dee01420e2ea469469f1b174a2089c10af7e5e9289ef05b","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.14.5","useDigest":true}` | Cilium pre-flight image. | | preflight.image | object | `{"digest":"sha256:bfeb3f1034282444ae8c498dca94044df2b9c9c8e7ac678e0b43c849f0b31746","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.15.2","useDigest":true}` | Cilium pre-flight image. |
| preflight.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for preflight pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector | | preflight.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for preflight pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
| preflight.podAnnotations | object | `{}` | Annotations to be added to preflight pods | | preflight.podAnnotations | object | `{}` | Annotations to be added to preflight pods |
| preflight.podDisruptionBudget.enabled | bool | `false` | enable PodDisruptionBudget ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/ | | preflight.podDisruptionBudget.enabled | bool | `false` | enable PodDisruptionBudget ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/ |
@@ -682,11 +733,13 @@ contributors across the globe, there is almost always someone available to help.
| preflight.updateStrategy | object | `{"type":"RollingUpdate"}` | preflight update strategy | | preflight.updateStrategy | object | `{"type":"RollingUpdate"}` | preflight update strategy |
| preflight.validateCNPs | bool | `true` | By default we should always validate the installed CNPs before upgrading Cilium. This will make sure the user will have the policies deployed in the cluster with the right schema. | | preflight.validateCNPs | bool | `true` | By default we should always validate the installed CNPs before upgrading Cilium. This will make sure the user will have the policies deployed in the cluster with the right schema. |
| priorityClassName | string | `""` | The priority class to use for cilium-agent. | | priorityClassName | string | `""` | The priority class to use for cilium-agent. |
| prometheus | object | `{"enabled":false,"metrics":null,"port":9962,"serviceMonitor":{"annotations":{},"enabled":false,"interval":"10s","labels":{},"metricRelabelings":null,"relabelings":[{"replacement":"${1}","sourceLabels":["__meta_kubernetes_pod_node_name"],"targetLabel":"node"}],"trustCRDsExist":false}}` | Configure prometheus metrics on the configured port at /metrics | | prometheus | object | `{"controllerGroupMetrics":["write-cni-file","sync-host-ips","sync-lb-maps-with-k8s-services"],"enabled":false,"metrics":null,"port":9962,"serviceMonitor":{"annotations":{},"enabled":false,"interval":"10s","jobLabel":"","labels":{},"metricRelabelings":null,"relabelings":[{"replacement":"${1}","sourceLabels":["__meta_kubernetes_pod_node_name"],"targetLabel":"node"}],"trustCRDsExist":false}}` | Configure prometheus metrics on the configured port at /metrics |
| prometheus.controllerGroupMetrics | list | `["write-cni-file","sync-host-ips","sync-lb-maps-with-k8s-services"]` | - Enable controller group metrics for monitoring specific Cilium subsystems. The list is a list of controller group names. The special values of "all" and "none" are supported. The set of controller group names is not guaranteed to be stable between Cilium versions. |
| prometheus.metrics | string | `nil` | Metrics that should be enabled or disabled from the default metric list. The list is expected to be separated by a space. (+metric_foo to enable metric_foo , -metric_bar to disable metric_bar). ref: https://docs.cilium.io/en/stable/observability/metrics/ | | prometheus.metrics | string | `nil` | Metrics that should be enabled or disabled from the default metric list. The list is expected to be separated by a space. (+metric_foo to enable metric_foo , -metric_bar to disable metric_bar). ref: https://docs.cilium.io/en/stable/observability/metrics/ |
| prometheus.serviceMonitor.annotations | object | `{}` | Annotations to add to ServiceMonitor cilium-agent | | prometheus.serviceMonitor.annotations | object | `{}` | Annotations to add to ServiceMonitor cilium-agent |
| prometheus.serviceMonitor.enabled | bool | `false` | Enable service monitors. This requires the prometheus CRDs to be available (see https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml) | | prometheus.serviceMonitor.enabled | bool | `false` | Enable service monitors. This requires the prometheus CRDs to be available (see https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml) |
| prometheus.serviceMonitor.interval | string | `"10s"` | Interval for scrape metrics. | | prometheus.serviceMonitor.interval | string | `"10s"` | Interval for scrape metrics. |
| prometheus.serviceMonitor.jobLabel | string | `""` | jobLabel to add for ServiceMonitor cilium-agent |
| prometheus.serviceMonitor.labels | object | `{}` | Labels to add to ServiceMonitor cilium-agent | | prometheus.serviceMonitor.labels | object | `{}` | Labels to add to ServiceMonitor cilium-agent |
| prometheus.serviceMonitor.metricRelabelings | string | `nil` | Metrics relabeling configs for the ServiceMonitor cilium-agent | | prometheus.serviceMonitor.metricRelabelings | string | `nil` | Metrics relabeling configs for the ServiceMonitor cilium-agent |
| prometheus.serviceMonitor.relabelings | list | `[{"replacement":"${1}","sourceLabels":["__meta_kubernetes_pod_node_name"],"targetLabel":"node"}]` | Relabeling configs for the ServiceMonitor cilium-agent | | prometheus.serviceMonitor.relabelings | list | `[{"replacement":"${1}","sourceLabels":["__meta_kubernetes_pod_node_name"],"targetLabel":"node"}]` | Relabeling configs for the ServiceMonitor cilium-agent |
@@ -698,7 +751,7 @@ contributors across the globe, there is almost always someone available to help.
| rbac.create | bool | `true` | Enable creation of Resource-Based Access Control configuration. | | rbac.create | bool | `true` | Enable creation of Resource-Based Access Control configuration. |
| readinessProbe.failureThreshold | int | `3` | failure threshold of readiness probe | | readinessProbe.failureThreshold | int | `3` | failure threshold of readiness probe |
| readinessProbe.periodSeconds | int | `30` | interval between checks of the readiness probe | | readinessProbe.periodSeconds | int | `30` | interval between checks of the readiness probe |
| remoteNodeIdentity | bool | `true` | Enable use of the remote node identity. ref: https://docs.cilium.io/en/v1.7/install/upgrade/#configmap-remote-node-identity | | remoteNodeIdentity | bool | `true` | Enable use of the remote node identity. ref: https://docs.cilium.io/en/v1.7/install/upgrade/#configmap-remote-node-identity Deprecated without replacement in 1.15. To be removed in 1.16. |
| resourceQuotas | object | `{"cilium":{"hard":{"pods":"10k"}},"enabled":false,"operator":{"hard":{"pods":"15"}}}` | Enable resource quotas for priority classes used in the cluster. | | resourceQuotas | object | `{"cilium":{"hard":{"pods":"10k"}},"enabled":false,"operator":{"hard":{"pods":"15"}}}` | Enable resource quotas for priority classes used in the cluster. |
| resources | object | `{}` | Agent resource limits & requests ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ | | resources | object | `{}` | Agent resource limits & requests ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
| rollOutCiliumPods | bool | `false` | Roll out cilium agent pods automatically when configmap is updated. | | rollOutCiliumPods | bool | `false` | Roll out cilium agent pods automatically when configmap is updated. |
@@ -715,6 +768,7 @@ contributors across the globe, there is almost always someone available to help.
| serviceAccounts.clustermeshcertgen | object | `{"annotations":{},"automount":true,"create":true,"name":"clustermesh-apiserver-generate-certs"}` | Clustermeshcertgen is used if clustermesh.apiserver.tls.auto.method=cronJob | | serviceAccounts.clustermeshcertgen | object | `{"annotations":{},"automount":true,"create":true,"name":"clustermesh-apiserver-generate-certs"}` | Clustermeshcertgen is used if clustermesh.apiserver.tls.auto.method=cronJob |
| serviceAccounts.hubblecertgen | object | `{"annotations":{},"automount":true,"create":true,"name":"hubble-generate-certs"}` | Hubblecertgen is used if hubble.tls.auto.method=cronJob | | serviceAccounts.hubblecertgen | object | `{"annotations":{},"automount":true,"create":true,"name":"hubble-generate-certs"}` | Hubblecertgen is used if hubble.tls.auto.method=cronJob |
| serviceAccounts.nodeinit.enabled | bool | `false` | Enabled is temporary until https://github.com/cilium/cilium-cli/issues/1396 is implemented. Cilium CLI doesn't create the SAs for node-init, thus the workaround. Helm is not affected by this issue. Name and automount can be configured, if enabled is set to true. Otherwise, they are ignored. Enabled can be removed once the issue is fixed. Cilium-nodeinit DS must also be fixed. | | serviceAccounts.nodeinit.enabled | bool | `false` | Enabled is temporary until https://github.com/cilium/cilium-cli/issues/1396 is implemented. Cilium CLI doesn't create the SAs for node-init, thus the workaround. Helm is not affected by this issue. Name and automount can be configured, if enabled is set to true. Otherwise, they are ignored. Enabled can be removed once the issue is fixed. Cilium-nodeinit DS must also be fixed. |
| serviceNoBackendResponse | string | `"reject"` | Configure what the response should be to traffic for a service without backends. "reject" only works on kernels >= 5.10, on lower kernels we fallback to "drop". Possible values: - reject (default) - drop |
| sleepAfterInit | bool | `false` | Do not run Cilium agent when running with clean mode. Useful to completely uninstall Cilium as it will stop Cilium from starting and create artifacts in the node. | | sleepAfterInit | bool | `false` | Do not run Cilium agent when running with clean mode. Useful to completely uninstall Cilium as it will stop Cilium from starting and create artifacts in the node. |
| socketLB | object | `{"enabled":false}` | Configure socket LB | | socketLB | object | `{"enabled":false}` | Configure socket LB |
| socketLB.enabled | bool | `false` | Enable socket LB | | socketLB.enabled | bool | `false` | Enable socket LB |
@@ -735,7 +789,6 @@ contributors across the globe, there is almost always someone available to help.
| tls.caBundle.useSecret | bool | `false` | Use a Secret instead of a ConfigMap. | | tls.caBundle.useSecret | bool | `false` | Use a Secret instead of a ConfigMap. |
| tls.secretsBackend | string | `"local"` | This configures how the Cilium agent loads the secrets used TLS-aware CiliumNetworkPolicies (namely the secrets referenced by terminatingTLS and originatingTLS). Possible values: - local - k8s | | tls.secretsBackend | string | `"local"` | This configures how the Cilium agent loads the secrets used TLS-aware CiliumNetworkPolicies (namely the secrets referenced by terminatingTLS and originatingTLS). Possible values: - local - k8s |
| tolerations | list | `[{"operator":"Exists"}]` | Node tolerations for agent scheduling to nodes with taints ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ | | tolerations | list | `[{"operator":"Exists"}]` | Node tolerations for agent scheduling to nodes with taints ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
| tunnel | string | `"vxlan"` | Configure the encapsulation configuration for communication between nodes. Deprecated in favor of tunnelProtocol and routingMode. To be removed in 1.15. Possible values: - disabled - vxlan - geneve |
| tunnelPort | int | Port 8472 for VXLAN, Port 6081 for Geneve | Configure VXLAN and Geneve tunnel port. | | tunnelPort | int | Port 8472 for VXLAN, Port 6081 for Geneve | Configure VXLAN and Geneve tunnel port. |
| tunnelProtocol | string | `"vxlan"` | Tunneling protocol to use in tunneling mode and for ad-hoc tunnels. Possible values: - "" - vxlan - geneve | | tunnelProtocol | string | `"vxlan"` | Tunneling protocol to use in tunneling mode and for ad-hoc tunnels. Possible values: - "" - vxlan - geneve |
| updateStrategy | object | `{"rollingUpdate":{"maxUnavailable":2},"type":"RollingUpdate"}` | Cilium agent update strategy | | updateStrategy | object | `{"rollingUpdate":{"maxUnavailable":2},"type":"RollingUpdate"}` | Cilium agent update strategy |
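The table above documents the Cilium chart values that change between v1.14.5 and v1.15.2: new keys such as loadBalancer.acceleration, nodePort.enableHealthCheckLoadBalancerIP, prometheus.controllerGroupMetrics and serviceNoBackendResponse appear, while the deprecated tunnel key is removed in favour of tunnelProtocol and routingMode. A minimal sketch of overriding a few of these values on a stock Cilium 1.15.2 chart is shown below; the release name, namespace and chart repository alias are assumptions and are not taken from this repository.

```bash
# Sketch only: set some of the values documented above on an upstream
# Cilium 1.15.2 chart. Release name, namespace and repo alias are assumptions.
helm repo add cilium https://helm.cilium.io
helm repo update

# "tunnel" is removed in 1.15; tunnelProtocol/routingMode replace it.
helm upgrade --install cilium cilium/cilium \
  --version 1.15.2 \
  --namespace kube-system \
  --set tunnelProtocol=vxlan \
  --set loadBalancer.acceleration=disabled \
  --set nodePort.enabled=false \
  --set serviceNoBackendResponse=reject \
  --set operator.prometheus.enabled=true
```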

View File

@@ -11,9 +11,9 @@ set -o nounset
# dependencies on anything that is part of the startup script # dependencies on anything that is part of the startup script
# itself, and can be safely run multiple times per node (e.g. in # itself, and can be safely run multiple times per node (e.g. in
# case of a restart). # case of a restart).
if [[ "$(iptables-save | grep -c 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN')" != "0" ]]; if [[ "$(iptables-save | grep -E -c 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN')" != "0" ]];
then then
echo 'Deleting iptables rules created by the AWS CNI VPC plugin' echo 'Deleting iptables rules created by the AWS CNI VPC plugin'
iptables-save | grep -v 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN' | iptables-restore iptables-save | grep -E -v 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN' | iptables-restore
fi fi
echo 'Done!' echo 'Done!'
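The change above swaps grep for grep -E in the AWS CNI cleanup step of the node-init script. With basic regular expressions the | in 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN' is a literal character, so the check never matched the AWS chains and the filter never removed them; -E enables alternation. A small illustration, not part of the repository:

```bash
# Basic grep treats "|" literally, so the AWS chain line is not counted:
printf ':AWS-SNAT-CHAIN-0 - [0:0]\n' | grep -c 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN'
# -> 0

# Extended grep interprets "|" as alternation and counts the match:
printf ':AWS-SNAT-CHAIN-0 - [0:0]\n' | grep -E -c 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN'
# -> 1
```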

View File

@@ -27,7 +27,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -131,7 +134,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -271,7 +277,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -394,7 +403,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -511,7 +523,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -636,7 +651,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"description": "BPF memory usage in the entire system including components not managed by Cilium.", "description": "BPF memory usage in the entire system including components not managed by Cilium.",
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
@@ -759,7 +777,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"description": "Fill percentage of BPF maps, tagged by map name", "description": "Fill percentage of BPF maps, tagged by map name",
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
@@ -870,7 +891,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -971,7 +995,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -1072,7 +1099,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -1173,7 +1203,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -1274,7 +1307,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -1375,7 +1411,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -1511,7 +1550,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -1612,7 +1654,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"decimals": 2, "decimals": 2,
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
@@ -1715,7 +1760,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -1816,7 +1864,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -1915,7 +1966,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -2016,7 +2070,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -2117,7 +2174,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -2239,7 +2299,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"decimals": 2, "decimals": 2,
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
@@ -2342,7 +2405,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"decimals": 2, "decimals": 2,
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
@@ -2445,7 +2511,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -2546,7 +2615,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -2647,7 +2719,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -2767,7 +2842,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -2864,7 +2942,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -2984,7 +3065,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -3150,7 +3234,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -3316,7 +3403,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -3482,7 +3572,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -3633,7 +3726,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"decimals": null, "decimals": null,
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
@@ -3740,7 +3836,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -3837,7 +3936,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -3934,7 +4036,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -4047,7 +4152,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -4147,7 +4255,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -4270,7 +4381,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -4370,7 +4484,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -4518,7 +4635,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -4638,7 +4758,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -4740,7 +4863,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -4864,7 +4990,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -4966,7 +5095,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -5102,7 +5234,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -5218,7 +5353,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -5327,7 +5465,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -5455,7 +5596,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -5591,7 +5735,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -5676,7 +5823,7 @@
"refId": "C" "refId": "C"
}, },
{ {
"expr": "sum(cilium_policy_change_total{k8s_app=\"cilium\", pod=~\"$pod\"}, outcome=\"fail\") by (pod)", "expr": "sum(cilium_policy_change_total{k8s_app=\"cilium\", pod=~\"$pod\", outcome=\"fail\"}) by (pod)",
"format": "time_series", "format": "time_series",
"intervalFactor": 1, "intervalFactor": 1,
"legendFormat": "policy change errors", "legendFormat": "policy change errors",
@@ -5733,7 +5880,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -5841,7 +5991,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -5983,7 +6136,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"decimals": null, "decimals": null,
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
@@ -6083,7 +6239,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"decimals": null, "decimals": null,
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
@@ -6188,7 +6347,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -6298,7 +6460,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -6421,7 +6586,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -6542,7 +6710,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -6674,7 +6845,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -6775,7 +6949,10 @@
"bars": false, "bars": false,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -6876,7 +7053,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -6977,7 +7157,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -7078,7 +7261,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -7178,7 +7364,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -7277,7 +7466,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -7376,7 +7568,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -7475,7 +7670,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -7578,7 +7776,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -7681,7 +7882,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -7784,7 +7988,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -7883,7 +8090,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -7982,7 +8192,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -8081,7 +8294,10 @@
"bars": true, "bars": true,
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"custom": {} "custom": {}
@@ -8182,6 +8398,21 @@
"tags": [], "tags": [],
"templating": { "templating": {
"list": [ "list": [
{
"current": {},
"hide": 0,
"includeAll": false,
"label": "Prometheus",
"multi": false,
"name": "DS_PROMETHEUS",
"options": [],
"query": "prometheus",
"queryValue": "",
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"type": "datasource"
},
{ {
"allValue": "cilium.*", "allValue": "cilium.*",
"current": { "current": {
@@ -8189,7 +8420,10 @@
"text": "All", "text": "All",
"value": "$__all" "value": "$__all"
}, },
"datasource": "prometheus", "datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"definition": "label_values(cilium_version, pod)", "definition": "label_values(cilium_version, pod)",
"hide": 0, "hide": 0,
"includeAll": true, "includeAll": true,

View File

@@ -301,6 +301,14 @@
"resourceApiVersion": "V3" "resourceApiVersion": "V3"
} }
}, },
"bootstrapExtensions": [
{
"name": "envoy.bootstrap.internal_listener",
"typed_config": {
"@type": "type.googleapis.com/envoy.extensions.bootstrap.internal_listener.v3.InternalListener"
}
}
],
"layeredRuntime": { "layeredRuntime": {
"layers": [ "layers": [
{ {

View File

@@ -3226,7 +3226,7 @@
] ]
}, },
"timezone": "", "timezone": "",
"title": "Hubble", "title": "Hubble Metrics and Monitoring",
"uid": "5HftnJAWz", "uid": "5HftnJAWz",
"version": 24 "version": 24
} }

View File

@@ -0,0 +1,602 @@
{
"__inputs": [
{
"name": "DS_PROMETHEUS",
"label": "Prometheus",
"description": "",
"type": "datasource",
"pluginId": "prometheus",
"pluginName": "Prometheus"
}
],
"__elements": {},
"__requires": [
{
"type": "panel",
"id": "bargauge",
"name": "Bar gauge",
"version": ""
},
{
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "9.4.7"
},
{
"type": "datasource",
"id": "prometheus",
"name": "Prometheus",
"version": "1.0.0"
},
{
"type": "panel",
"id": "timeseries",
"name": "Time series",
"version": ""
}
],
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "datasource",
"uid": "grafana"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"target": {
"limit": 100,
"matchAny": false,
"tags": [],
"type": "dashboard"
},
"type": "dashboard"
}
]
},
"description": "",
"editable": true,
"fiscalYearStartMonth": 0,
"gnetId": 16612,
"graphTooltip": 0,
"id": null,
"links": [
{
"asDropdown": true,
"icon": "external link",
"includeVars": true,
"keepTime": true,
"tags": [
"cilium-overview"
],
"targetBlank": false,
"title": "Cilium Overviews",
"tooltip": "",
"type": "dashboards",
"url": ""
},
{
"asDropdown": true,
"icon": "external link",
"includeVars": false,
"keepTime": true,
"tags": [
"hubble"
],
"targetBlank": false,
"title": "Hubble",
"tooltip": "",
"type": "dashboards",
"url": ""
}
],
"liveNow": false,
"panels": [
{
"collapsed": false,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 0
},
"id": 2,
"panels": [],
"title": "DNS",
"type": "row"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"description": "",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "normal"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "reqps"
},
"overrides": []
},
"gridPos": {
"h": 9,
"w": 12,
"x": 0,
"y": 1
},
"id": 37,
"options": {
"legend": {
"calcs": [
"mean",
"lastNotNull"
],
"displayMode": "table",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "sum(rate(hubble_dns_queries_total{cluster=~\"$cluster\", source_namespace=~\"$source_namespace\", destination_namespace=~\"$destination_namespace\"}[$__rate_interval])) by (source) > 0",
"legendFormat": "{{source}}",
"range": true,
"refId": "A"
}
],
"title": "DNS queries",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"mappings": [],
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
}
]
},
"unit": "reqps"
},
"overrides": []
},
"gridPos": {
"h": 9,
"w": 12,
"x": 12,
"y": 1
},
"id": 41,
"options": {
"displayMode": "gradient",
"minVizHeight": 10,
"minVizWidth": 0,
"orientation": "horizontal",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showUnfilled": true
},
"pluginVersion": "9.4.7",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "topk(10, sum(rate(hubble_dns_queries_total{cluster=~\"$cluster\", source_namespace=~\"$source_namespace\", destination_namespace=~\"$destination_namespace\"}[$__rate_interval])*60) by (query))",
"legendFormat": "{{query}}",
"range": true,
"refId": "A"
}
],
"title": "Top 10 DNS queries",
"type": "bargauge"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "normal"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "reqps"
},
"overrides": []
},
"gridPos": {
"h": 9,
"w": 12,
"x": 0,
"y": 10
},
"id": 39,
"options": {
"legend": {
"calcs": [
"mean",
"lastNotNull"
],
"displayMode": "table",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "round(sum(rate(hubble_dns_queries_total{cluster=~\"$cluster\", source_namespace=~\"$source_namespace\", destination_namespace=~\"$destination_namespace\"}[$__rate_interval])) by (source) - sum(label_replace(sum(rate(hubble_dns_responses_total{cluster=~\"$cluster\", source_namespace=~\"$destination_namespace\", destination_namespace=~\"$source_namespace\"}[$__rate_interval])) by (destination), \"source\", \"$1\", \"destination\", \"(.*)\")) without (destination), 0.001) > 0",
"legendFormat": "{{source}}",
"range": true,
"refId": "A"
}
],
"title": "Missing DNS responses",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "normal"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "reqps"
},
"overrides": []
},
"gridPos": {
"h": 9,
"w": 12,
"x": 12,
"y": 10
},
"id": 43,
"options": {
"legend": {
"calcs": [
"mean",
"lastNotNull"
],
"displayMode": "table",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "sum(rate(hubble_dns_responses_total{cluster=~\"$cluster\", source_namespace=~\"$destination_namespace\", destination_namespace=~\"$source_namespace\", rcode!=\"No Error\"}[$__rate_interval])) by (destination, rcode) > 0",
"legendFormat": "{{destination}}: {{rcode}}",
"range": true,
"refId": "A"
}
],
"title": "DNS errors",
"type": "timeseries"
}
],
"refresh": "",
"revision": 1,
"schemaVersion": 38,
"style": "dark",
"tags": [
"kubecon-demo"
],
"templating": {
"list": [
{
"current": {
"selected": false,
"text": "default",
"value": "default"
},
"hide": 0,
"includeAll": false,
"label": "Data Source",
"multi": false,
"name": "prometheus_datasource",
"options": [],
"query": "prometheus",
"queryValue": "",
"refresh": 1,
"regex": "(?!grafanacloud-usage|grafanacloud-ml-metrics).+",
"skipUrlSync": false,
"type": "datasource"
},
{
"current": {},
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"definition": "label_values(cilium_version, cluster)",
"hide": 0,
"includeAll": true,
"multi": true,
"name": "cluster",
"options": [],
"query": {
"query": "label_values(cilium_version, cluster)",
"refId": "StandardVariableQuery"
},
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 0,
"type": "query"
},
{
"allValue": ".*",
"current": {},
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"definition": "label_values(source_namespace)",
"hide": 0,
"includeAll": true,
"label": "Source Namespace",
"multi": true,
"name": "source_namespace",
"options": [],
"query": {
"query": "label_values(source_namespace)",
"refId": "StandardVariableQuery"
},
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 0,
"type": "query"
},
{
"allValue": ".*",
"current": {},
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"definition": "label_values(destination_namespace)",
"hide": 0,
"includeAll": true,
"label": "Destination Namespace",
"multi": true,
"name": "destination_namespace",
"options": [],
"query": {
"query": "label_values(destination_namespace)",
"refId": "StandardVariableQuery"
},
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 0,
"type": "query"
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"timepicker": {
"refresh_intervals": [
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
],
"time_options": [
"5m",
"15m",
"1h",
"6h",
"12h",
"24h",
"2d",
"7d",
"30d"
]
},
"timezone": "",
"title": "Hubble / DNS Overview (Namespace)",
"uid": "_f0DUpY4k",
"version": 26,
"weekStart": ""
}
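
The dashboard above is driven entirely by Hubble's DNS metrics (hubble_dns_queries_total and hubble_dns_responses_total), so its panels stay empty unless those metric handlers are switched on in the agents. A minimal sketch of enabling them through Helm values, assuming the upstream cilium/cilium chart, a kube-system install, and a release named cilium (none of which are stated in this diff):

# Hedged sketch: turn on the Hubble metrics this dashboard queries.
cat > hubble-dns-metrics.yaml <<'EOF'
hubble:
  enabled: true
  metrics:
    enabled:
      - dns
      - drop
      - flow
EOF
helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values -f hubble-dns-metrics.yaml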

View File

@@ -100,7 +100,7 @@ then
# Since that version containerd no longer allows missing configuration for the CNI, # Since that version containerd no longer allows missing configuration for the CNI,
# not even for pods with hostNetwork set to true. Thus, we add a temporary one. # not even for pods with hostNetwork set to true. Thus, we add a temporary one.
# This will be replaced with the real config by the agent pod. # This will be replaced with the real config by the agent pod.
echo -e "{\n\t"cniVersion": "0.3.1",\n\t"name": "cilium",\n\t"type": "cilium-cni"\n}" > /etc/cni/net.d/05-cilium.conf echo -e '{\n\t"cniVersion": "0.3.1",\n\t"name": "cilium",\n\t"type": "cilium-cni"\n}' > /etc/cni/net.d/05-cilium.conf
fi fi
# Start containerd. It won't create it's CNI configuration file anymore. # Start containerd. It won't create it's CNI configuration file anymore.

View File

@@ -6,6 +6,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole kind: ClusterRole
metadata: metadata:
name: cilium name: cilium
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
rules: rules:
@@ -82,6 +86,9 @@ rules:
resources: resources:
- ciliumloadbalancerippools - ciliumloadbalancerippools
- ciliumbgppeeringpolicies - ciliumbgppeeringpolicies
- ciliumbgpnodeconfigs
- ciliumbgpadvertisements
- ciliumbgppeerconfigs
- ciliumclusterwideenvoyconfigs - ciliumclusterwideenvoyconfigs
- ciliumclusterwidenetworkpolicies - ciliumclusterwidenetworkpolicies
- ciliumegressgatewaypolicies - ciliumegressgatewaypolicies
@@ -137,6 +144,7 @@ rules:
- ciliumendpoints/status - ciliumendpoints/status
- ciliumendpoints - ciliumendpoints
- ciliuml2announcementpolicies/status - ciliuml2announcementpolicies/status
- ciliumbgpnodeconfigs/status
verbs: verbs:
- patch - patch
{{- end }} {{- end }}
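
Besides the optional annotations block, the ClusterRole above adds the BGP control-plane CRDs (ciliumbgpnodeconfigs, ciliumbgpadvertisements, ciliumbgppeerconfigs) to an existing resource list and grants patch on ciliumbgpnodeconfigs/status. A hedged check that those CRDs are actually registered after the upgrade, assuming they live in the cilium.io API group like the other Cilium CRDs:

# Hedged sketch: verify the CRDs referenced by the new RBAC rules exist.
kubectl get crd | grep -E 'ciliumbgp(nodeconfigs|advertisements|peerconfigs)\.cilium\.io'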

View File

@@ -3,6 +3,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding kind: ClusterRoleBinding
metadata: metadata:
name: cilium name: cilium
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
roleRef: roleRef:
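
The same pattern repeats across this change set: a new chart-wide .Values.annotations map that, when provided, is rendered onto the ClusterRole, ClusterRoleBinding, DaemonSet, Service, ServiceAccount and related objects. A hedged sketch of driving it from a values file (the chart path and the annotation key are illustrative assumptions):

# Hedged sketch: render the chart with a common annotation on every resource.
cat > common-annotations.yaml <<'EOF'
annotations:
  example.org/managed-by: cozystack
EOF
helm template cilium ./cilium -f common-annotations.yaml | grep -B2 'example.org/managed-by' | head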

View File

@@ -16,6 +16,10 @@ kind: DaemonSet
metadata: metadata:
name: cilium name: cilium
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
k8s-app: cilium k8s-app: cilium
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
@@ -128,6 +132,7 @@ spec:
failureThreshold: {{ .Values.startupProbe.failureThreshold }} failureThreshold: {{ .Values.startupProbe.failureThreshold }}
periodSeconds: {{ .Values.startupProbe.periodSeconds }} periodSeconds: {{ .Values.startupProbe.periodSeconds }}
successThreshold: 1 successThreshold: 1
initialDelaySeconds: 5
{{- end }} {{- end }}
livenessProbe: livenessProbe:
{{- if or .Values.keepDeprecatedProbes $defaultKeepDeprecatedProbes }} {{- if or .Values.keepDeprecatedProbes $defaultKeepDeprecatedProbes }}
@@ -196,6 +201,11 @@ spec:
fieldPath: metadata.namespace fieldPath: metadata.namespace
- name: CILIUM_CLUSTERMESH_CONFIG - name: CILIUM_CLUSTERMESH_CONFIG
value: /var/lib/cilium/clustermesh/ value: /var/lib/cilium/clustermesh/
- name: GOMEMLIMIT
valueFrom:
resourceFieldRef:
resource: limits.memory
divisor: '1'
{{- if .Values.k8sServiceHost }} {{- if .Values.k8sServiceHost }}
- name: KUBERNETES_SERVICE_HOST - name: KUBERNETES_SERVICE_HOST
value: {{ .Values.k8sServiceHost | quote }} value: {{ .Values.k8sServiceHost | quote }}
@@ -371,6 +381,11 @@ spec:
mountPropagation: {{ .mountPropagation }} mountPropagation: {{ .mountPropagation }}
{{- end }} {{- end }}
{{- end }} {{- end }}
{{- if .Values.hubble.export.dynamic.enabled }}
- name: hubble-flowlog-config
mountPath: /flowlog-config
readOnly: true
{{- end }}
{{- with .Values.extraVolumeMounts }} {{- with .Values.extraVolumeMounts }}
{{- toYaml . | nindent 8 }} {{- toYaml . | nindent 8 }}
{{- end }} {{- end }}
@@ -387,7 +402,7 @@ spec:
for i in {1..5}; do \ for i in {1..5}; do \
[ -S /var/run/cilium/monitor1_2.sock ] && break || sleep 10;\ [ -S /var/run/cilium/monitor1_2.sock ] && break || sleep 10;\
done; \ done; \
cilium monitor cilium-dbg monitor
{{- range $type := .Values.monitor.eventTypes -}} {{- range $type := .Values.monitor.eventTypes -}}
{{ " " }}--type={{ $type }} {{ " " }}--type={{ $type }}
{{- end }} {{- end }}
@@ -411,7 +426,7 @@ spec:
image: {{ include "cilium.image" .Values.image | quote }} image: {{ include "cilium.image" .Values.image | quote }}
imagePullPolicy: {{ .Values.image.pullPolicy }} imagePullPolicy: {{ .Values.image.pullPolicy }}
command: command:
- cilium - cilium-dbg
- build-config - build-config
{{- if (not (kindIs "invalid" .Values.daemon.configSources)) }} {{- if (not (kindIs "invalid" .Values.daemon.configSources)) }}
- "--source={{.Values.daemon.configSources}}" - "--source={{.Values.daemon.configSources}}"
@@ -447,6 +462,9 @@ spec:
volumeMounts: volumeMounts:
- name: tmp - name: tmp
mountPath: /tmp mountPath: /tmp
{{- with .Values.extraVolumeMounts }}
{{- toYaml . | nindent 8 }}
{{- end }}
terminationMessagePolicy: FallbackToLogsOnError terminationMessagePolicy: FallbackToLogsOnError
{{- if .Values.cgroup.autoMount.enabled }} {{- if .Values.cgroup.autoMount.enabled }}
# Required to mount cgroup2 filesystem on the underlying Kubernetes node. # Required to mount cgroup2 filesystem on the underlying Kubernetes node.
@@ -609,6 +627,12 @@ spec:
name: cilium-config name: cilium-config
key: clean-cilium-bpf-state key: clean-cilium-bpf-state
optional: true optional: true
- name: WRITE_CNI_CONF_WHEN_READY
valueFrom:
configMapKeyRef:
name: cilium-config
key: write-cni-conf-when-ready
optional: true
{{- if .Values.k8sServiceHost }} {{- if .Values.k8sServiceHost }}
- name: KUBERNETES_SERVICE_HOST - name: KUBERNETES_SERVICE_HOST
value: {{ .Values.k8sServiceHost | quote }} value: {{ .Values.k8sServiceHost | quote }}
@@ -656,7 +680,7 @@ spec:
resources: resources:
{{- toYaml . | trim | nindent 10 }} {{- toYaml . | trim | nindent 10 }}
{{- end }} {{- end }}
{{- if and .Values.waitForKubeProxy (ne $kubeProxyReplacement "strict") }} {{- if and .Values.waitForKubeProxy (and (ne $kubeProxyReplacement "strict") (ne $kubeProxyReplacement "true")) }}
- name: wait-for-kube-proxy - name: wait-for-kube-proxy
image: {{ include "cilium.image" .Values.image | quote }} image: {{ include "cilium.image" .Values.image | quote }}
imagePullPolicy: {{ .Values.image.pullPolicy }} imagePullPolicy: {{ .Values.image.pullPolicy }}
@@ -700,10 +724,10 @@ spec:
imagePullPolicy: {{ .Values.image.pullPolicy }} imagePullPolicy: {{ .Values.image.pullPolicy }}
command: command:
- "/install-plugin.sh" - "/install-plugin.sh"
{{- with .Values.cni.resources }}
resources: resources:
requests: {{- toYaml . | trim | nindent 10 }}
cpu: 100m {{- end }}
memory: 10Mi
securityContext: securityContext:
{{- if .Values.securityContext.privileged }} {{- if .Values.securityContext.privileged }}
privileged: true privileged: true
@@ -747,7 +771,7 @@ spec:
tolerations: tolerations:
{{- toYaml . | trim | nindent 8 }} {{- toYaml . | trim | nindent 8 }}
{{- end }} {{- end }}
{{- if and .Values.clustermesh.useAPIServer .Values.clustermesh.config.enabled (not .Values.clustermesh.apiserver.kvstoremesh.enabled) }} {{- if and .Values.clustermesh.config.enabled (not (and .Values.clustermesh.useAPIServer .Values.clustermesh.apiserver.kvstoremesh.enabled )) }}
hostAliases: hostAliases:
{{- range $cluster := .Values.clustermesh.config.clusters }} {{- range $cluster := .Values.clustermesh.config.clusters }}
{{- range $ip := $cluster.ips }} {{- range $ip := $cluster.ips }}
@@ -941,6 +965,12 @@ spec:
path: client-ca.crt path: client-ca.crt
{{- end }} {{- end }}
{{- end }} {{- end }}
{{- if .Values.hubble.export.dynamic.enabled }}
- name: hubble-flowlog-config
configMap:
name: {{ .Values.hubble.export.dynamic.config.configMapName }}
optional: true
{{- end }}
{{- range .Values.extraHostPathMounts }} {{- range .Values.extraHostPathMounts }}
- name: {{ .name }} - name: {{ .name }}
hostPath: hostPath:
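
Three agent-level changes stand out in the hunks above: GOMEMLIMIT is now fed from the container's memory limit through the downward API (resourceFieldRef with divisor '1'), the monitor sidecar and the config init container invoke cilium-dbg rather than cilium, and an optional hubble-flowlog-config ConfigMap can be mounted for dynamic flow-log export. A hedged spot check after a rollout, assuming the agent runs as the cilium DaemonSet in kube-system and the image ships cilium-dbg:

# Hedged sketch: confirm the GOMEMLIMIT wiring and the renamed debug CLI.
kubectl -n kube-system exec ds/cilium -c cilium-agent -- env | grep GOMEMLIMIT
kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium-dbg status --brief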

View File

@@ -0,0 +1,981 @@
{{- if and .Values.agent (not .Values.preflight.enabled) }}
{{- /* Default values with backwards compatibility */ -}}
{{- $defaultKeepDeprecatedProbes := true -}}
{{- /* Default values when 1.8 was initially deployed */ -}}
{{- if semverCompare ">=1.8" (default "1.8" .Values.upgradeCompatibility) -}}
{{- $defaultKeepDeprecatedProbes = false -}}
{{- end -}}
{{- $kubeProxyReplacement := (coalesce .Values.kubeProxyReplacement "false") -}}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: cilium
namespace: {{ .Release.Namespace }}
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels:
k8s-app: cilium
app.kubernetes.io/part-of: cilium
app.kubernetes.io/name: cilium-agent
{{- if .Values.keepDeprecatedLabels }}
kubernetes.io/cluster-service: "true"
{{- if and .Values.gke.enabled (eq .Release.Namespace "kube-system" ) }}
{{- fail "Invalid configuration: Installing Cilium on GKE with 'kubernetes.io/cluster-service' labels on 'kube-system' namespace causes Cilium DaemonSet to be removed by GKE. Either install Cilium on a different Namespace or install with '--set keepDeprecatedLabels=false'" }}
{{- end }}
{{- end }}
spec:
selector:
matchLabels:
k8s-app: cilium
{{- if .Values.keepDeprecatedLabels }}
kubernetes.io/cluster-service: "true"
{{- end }}
{{- with .Values.updateStrategy }}
updateStrategy:
{{- toYaml . | trim | nindent 4 }}
{{- end }}
template:
metadata:
annotations:
{{- if and .Values.prometheus.enabled (not .Values.prometheus.serviceMonitor.enabled) }}
prometheus.io/port: "{{ .Values.prometheus.port }}"
prometheus.io/scrape: "true"
{{- end }}
{{- if .Values.rollOutCiliumPods }}
# ensure pods roll when configmap updates
cilium.io/cilium-configmap-checksum: {{ include (print $.Template.BasePath "/cilium-configmap.yaml") . | sha256sum | quote }}
{{- end }}
{{- if not .Values.securityContext.privileged }}
# Set app AppArmor's profile to "unconfined". The value of this annotation
# can be modified as long users know which profiles they have available
# in AppArmor.
container.apparmor.security.beta.kubernetes.io/cilium-agent: "unconfined"
container.apparmor.security.beta.kubernetes.io/clean-cilium-state: "unconfined"
{{- if .Values.cgroup.autoMount.enabled }}
container.apparmor.security.beta.kubernetes.io/mount-cgroup: "unconfined"
container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites: "unconfined"
{{- end }}
{{- end }}
{{- with .Values.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
k8s-app: cilium
app.kubernetes.io/name: cilium-agent
app.kubernetes.io/part-of: cilium
{{- if .Values.keepDeprecatedLabels }}
kubernetes.io/cluster-service: "true"
{{- end }}
{{- with .Values.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.podSecurityContext }}
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- name: cilium-agent
image: {{ include "cilium.image" .Values.image | quote }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- if .Values.sleepAfterInit }}
command:
- /bin/bash
- -c
- --
args:
- |
while true; do
sleep 30;
done
livenessProbe:
exec:
command:
- "true"
readinessProbe:
exec:
command:
- "true"
{{- else }}
command:
- cilium-agent
args:
- --config-dir=/tmp/cilium/config-map
{{- with .Values.extraArgs }}
{{- toYaml . | trim | nindent 8 }}
{{- end }}
{{- if semverCompare ">=1.20-0" .Capabilities.KubeVersion.Version }}
startupProbe:
httpGet:
host: {{ .Values.ipv4.enabled | ternary "127.0.0.1" "::1" | quote }}
path: /healthz
port: {{ .Values.healthPort }}
scheme: HTTP
httpHeaders:
- name: "brief"
value: "true"
failureThreshold: {{ .Values.startupProbe.failureThreshold }}
periodSeconds: {{ .Values.startupProbe.periodSeconds }}
successThreshold: 1
initialDelaySeconds: 5
{{- end }}
livenessProbe:
{{- if or .Values.keepDeprecatedProbes $defaultKeepDeprecatedProbes }}
exec:
command:
- cilium
- status
- --brief
{{- else }}
httpGet:
host: {{ .Values.ipv4.enabled | ternary "127.0.0.1" "::1" | quote }}
path: /healthz
port: {{ .Values.healthPort }}
scheme: HTTP
httpHeaders:
- name: "brief"
value: "true"
{{- end }}
{{- if semverCompare "<1.20-0" .Capabilities.KubeVersion.Version }}
# The initial delay for the liveness probe is intentionally large to
# avoid an endless kill & restart cycle if in the event that the initial
# bootstrapping takes longer than expected.
# Starting from Kubernetes 1.20, we are using startupProbe instead
# of this field.
initialDelaySeconds: 120
{{- end }}
periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
successThreshold: 1
failureThreshold: {{ .Values.livenessProbe.failureThreshold }}
timeoutSeconds: 5
readinessProbe:
{{- if or .Values.keepDeprecatedProbes $defaultKeepDeprecatedProbes }}
exec:
command:
- cilium
- status
- --brief
{{- else }}
httpGet:
host: {{ .Values.ipv4.enabled | ternary "127.0.0.1" "::1" | quote }}
path: /healthz
port: {{ .Values.healthPort }}
scheme: HTTP
httpHeaders:
- name: "brief"
value: "true"
{{- end }}
{{- if semverCompare "<1.20-0" .Capabilities.KubeVersion.Version }}
initialDelaySeconds: 5
{{- end }}
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
successThreshold: 1
failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
timeoutSeconds: 5
{{- end }}
env:
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: CILIUM_K8S_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: CILIUM_CLUSTERMESH_CONFIG
value: /var/lib/cilium/clustermesh/
- name: GOMEMLIMIT
valueFrom:
resourceFieldRef:
resource: limits.memory
divisor: '1'
{{- if .Values.k8sServiceHost }}
- name: KUBERNETES_SERVICE_HOST
value: {{ .Values.k8sServiceHost | quote }}
{{- end }}
{{- if .Values.k8sServicePort }}
- name: KUBERNETES_SERVICE_PORT
value: {{ .Values.k8sServicePort | quote }}
{{- end }}
{{- with .Values.extraEnv }}
{{- toYaml . | trim | nindent 8 }}
{{- end }}
{{- if .Values.cni.install }}
lifecycle:
{{- if ne .Values.cni.chainingMode "aws-cni" }}
postStart:
exec:
command:
- "bash"
- "-c"
- |
{{- tpl (.Files.Get "files/agent/poststart-eni.bash") . | nindent 20 }}
{{- end }}
preStop:
exec:
command:
- /cni-uninstall.sh
{{- end }}
{{- with .Values.resources }}
resources:
{{- toYaml . | trim | nindent 10 }}
{{- end }}
{{- if or .Values.prometheus.enabled .Values.hubble.metrics.enabled }}
ports:
- name: peer-service
containerPort: {{ .Values.hubble.peerService.targetPort }}
hostPort: {{ .Values.hubble.peerService.targetPort }}
protocol: TCP
{{- if .Values.prometheus.enabled }}
- name: prometheus
containerPort: {{ .Values.prometheus.port }}
hostPort: {{ .Values.prometheus.port }}
protocol: TCP
{{- if and .Values.proxy.prometheus.enabled .Values.envoy.prometheus.enabled (not .Values.envoy.enabled) }}
- name: envoy-metrics
containerPort: {{ .Values.proxy.prometheus.port | default .Values.envoy.prometheus.port }}
hostPort: {{ .Values.proxy.prometheus.port | default .Values.envoy.prometheus.port }}
protocol: TCP
{{- end }}
{{- end }}
{{- if .Values.hubble.metrics.enabled }}
- name: hubble-metrics
containerPort: {{ .Values.hubble.metrics.port }}
hostPort: {{ .Values.hubble.metrics.port }}
protocol: TCP
{{- end }}
{{- end }}
securityContext:
{{- if .Values.securityContext.privileged }}
privileged: true
{{- else }}
seLinuxOptions:
{{- with .Values.securityContext.seLinuxOptions }}
{{- toYaml . | nindent 12 }}
{{- end }}
capabilities:
add:
{{- with .Values.securityContext.capabilities.ciliumAgent }}
{{- toYaml . | nindent 14 }}
{{- end }}
drop:
- ALL
{{- end }}
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
{{- if .Values.authentication.mutual.spire.enabled }}
- name: spire-agent-socket
mountPath: {{ dir .Values.authentication.mutual.spire.adminSocketPath }}
readOnly: false
{{- end }}
{{- if .Values.envoy.enabled }}
- name: envoy-sockets
mountPath: /var/run/cilium/envoy/sockets
readOnly: false
{{- end }}
{{- if not .Values.securityContext.privileged }}
# Unprivileged containers need to mount /proc/sys/net from the host
# to have write access
- mountPath: /host/proc/sys/net
name: host-proc-sys-net
# Unprivileged containers need to mount /proc/sys/kernel from the host
# to have write access
- mountPath: /host/proc/sys/kernel
name: host-proc-sys-kernel
{{- end}}
{{- /* CRI-O already mounts the BPF filesystem */ -}}
{{- if and .Values.bpf.autoMount.enabled (not (eq .Values.containerRuntime.integration "crio")) }}
- name: bpf-maps
mountPath: /sys/fs/bpf
{{- if .Values.securityContext.privileged }}
mountPropagation: Bidirectional
{{- else }}
# Unprivileged containers can't set mount propagation to bidirectional
# in this case we will mount the bpf fs from an init container that
# is privileged and set the mount propagation from host to container
# in Cilium.
mountPropagation: HostToContainer
{{- end}}
{{- end }}
{{- if not (contains "/run/cilium/cgroupv2" .Values.cgroup.hostRoot) }}
# Check for duplicate mounts before mounting
- name: cilium-cgroup
mountPath: {{ .Values.cgroup.hostRoot }}
{{- end}}
- name: cilium-run
mountPath: /var/run/cilium
- name: etc-cni-netd
mountPath: {{ .Values.cni.hostConfDirMountPath }}
{{- if .Values.etcd.enabled }}
- name: etcd-config-path
mountPath: /var/lib/etcd-config
readOnly: true
{{- if or .Values.etcd.ssl .Values.etcd.managed }}
- name: etcd-secrets
mountPath: /var/lib/etcd-secrets
readOnly: true
{{- end }}
{{- end }}
- name: clustermesh-secrets
mountPath: /var/lib/cilium/clustermesh
readOnly: true
{{- if .Values.ipMasqAgent.enabled }}
- name: ip-masq-agent
mountPath: /etc/config
readOnly: true
{{- end }}
{{- if .Values.cni.configMap }}
- name: cni-configuration
mountPath: {{ .Values.cni.confFileMountPath }}
readOnly: true
{{- end }}
# Needed to be able to load kernel modules
- name: lib-modules
mountPath: /lib/modules
readOnly: true
- name: xtables-lock
mountPath: /run/xtables.lock
{{- if and .Values.encryption.enabled (eq .Values.encryption.type "ipsec") }}
- name: cilium-ipsec-secrets
mountPath: {{ .Values.encryption.ipsec.mountPath | default .Values.encryption.mountPath }}
{{- end }}
{{- if .Values.kubeConfigPath }}
- name: kube-config
mountPath: {{ .Values.kubeConfigPath }}
readOnly: true
{{- end }}
{{- if .Values.bgp.enabled }}
- name: bgp-config-path
mountPath: /var/lib/cilium/bgp
readOnly: true
{{- end }}
{{- if and .Values.hubble.enabled .Values.hubble.tls.enabled (hasKey .Values.hubble "listenAddress") }}
- name: hubble-tls
mountPath: /var/lib/cilium/tls/hubble
readOnly: true
{{- end }}
- name: tmp
mountPath: /tmp
{{- range .Values.extraHostPathMounts }}
- name: {{ .name }}
mountPath: {{ .mountPath }}
readOnly: {{ .readOnly }}
{{- if .mountPropagation }}
mountPropagation: {{ .mountPropagation }}
{{- end }}
{{- end }}
{{- if .Values.hubble.export.dynamic.enabled }}
- name: hubble-flowlog-config
mountPath: /flowlog-config
readOnly: true
{{- end }}
{{- with .Values.extraVolumeMounts }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.monitor.enabled }}
- name: cilium-monitor
image: {{ include "cilium.image" .Values.image | quote }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- /bin/bash
- -c
- --
args:
- |-
for i in {1..5}; do \
[ -S /var/run/cilium/monitor1_2.sock ] && break || sleep 10;\
done; \
cilium-dbg monitor
{{- range $type := .Values.monitor.eventTypes -}}
{{ " " }}--type={{ $type }}
{{- end }}
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- name: cilium-run
mountPath: /var/run/cilium
{{- with .Values.extraVolumeMounts }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.monitor.resources }}
resources:
{{- toYaml . | trim | nindent 10 }}
{{- end }}
{{- end }}
{{- if .Values.extraContainers }}
{{- toYaml .Values.extraContainers | nindent 6 }}
{{- end }}
initContainers:
- name: config
image: {{ include "cilium.image" .Values.image | quote }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- cilium-dbg
- build-config
{{- if (not (kindIs "invalid" .Values.daemon.configSources)) }}
- "--source={{.Values.daemon.configSources}}"
{{- end }}
{{- if (not (kindIs "invalid" .Values.daemon.allowedConfigOverrides)) }}
- "--allow-config-keys={{.Values.daemon.allowedConfigOverrides}}"
{{- end }}
{{- if (not (kindIs "invalid" .Values.daemon.blockedConfigOverrides)) }}
- "--deny-config-keys={{.Values.daemon.blockedConfigOverrides}}"
{{- end }}
env:
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: CILIUM_K8S_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
{{- if .Values.k8sServiceHost }}
- name: KUBERNETES_SERVICE_HOST
value: {{ .Values.k8sServiceHost | quote }}
{{- end }}
{{- if .Values.k8sServicePort }}
- name: KUBERNETES_SERVICE_PORT
value: {{ .Values.k8sServicePort | quote }}
{{- end }}
{{- with .Values.extraEnv }}
{{- toYaml . | nindent 8 }}
{{- end }}
volumeMounts:
- name: tmp
mountPath: /tmp
{{- with .Values.extraVolumeMounts }}
{{- toYaml . | nindent 8 }}
{{- end }}
terminationMessagePolicy: FallbackToLogsOnError
{{- if .Values.cgroup.autoMount.enabled }}
# Required to mount cgroup2 filesystem on the underlying Kubernetes node.
# We use nsenter command with host's cgroup and mount namespaces enabled.
- name: mount-cgroup
image: {{ include "cilium.image" .Values.image | quote }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: CGROUP_ROOT
value: {{ .Values.cgroup.hostRoot }}
- name: BIN_PATH
value: {{ .Values.cni.binPath }}
{{- with .Values.cgroup.autoMount.resources }}
resources:
{{- toYaml . | trim | nindent 10 }}
{{- end }}
command:
- sh
- -ec
# The statically linked Go program binary is invoked to avoid any
# dependency on utilities like sh and mount that can be missing on certain
# distros installed on the underlying host. Copy the binary to the
# same directory where we install cilium cni plugin so that exec permissions
# are available.
- |
cp /usr/bin/cilium-mount /hostbin/cilium-mount;
nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
rm /hostbin/cilium-mount
volumeMounts:
- name: hostproc
mountPath: /hostproc
- name: cni-path
mountPath: /hostbin
terminationMessagePolicy: FallbackToLogsOnError
securityContext:
{{- if .Values.securityContext.privileged }}
privileged: true
{{- else }}
seLinuxOptions:
{{- with .Values.securityContext.seLinuxOptions }}
{{- toYaml . | nindent 12 }}
{{- end }}
capabilities:
add:
{{- with .Values.securityContext.capabilities.mountCgroup }}
{{- toYaml . | nindent 14 }}
{{- end }}
drop:
- ALL
{{- end}}
- name: apply-sysctl-overwrites
image: {{ include "cilium.image" .Values.image | quote }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- with .Values.initResources }}
resources:
{{- toYaml . | trim | nindent 10 }}
{{- end }}
env:
- name: BIN_PATH
value: {{ .Values.cni.binPath }}
command:
- sh
- -ec
# The statically linked Go program binary is invoked to avoid any
# dependency on utilities like sh that can be missing on certain
# distros installed on the underlying host. Copy the binary to the
# same directory where we install cilium cni plugin so that exec permissions
# are available.
- |
cp /usr/bin/cilium-sysctlfix /hostbin/cilium-sysctlfix;
nsenter --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-sysctlfix";
rm /hostbin/cilium-sysctlfix
volumeMounts:
- name: hostproc
mountPath: /hostproc
- name: cni-path
mountPath: /hostbin
terminationMessagePolicy: FallbackToLogsOnError
securityContext:
{{- if .Values.securityContext.privileged }}
privileged: true
{{- else }}
seLinuxOptions:
{{- with .Values.securityContext.seLinuxOptions }}
{{- toYaml . | nindent 12 }}
{{- end }}
capabilities:
add:
{{- with .Values.securityContext.capabilities.applySysctlOverwrites }}
{{- toYaml . | nindent 14 }}
{{- end }}
drop:
- ALL
{{- end}}
{{- end }}
{{- if and .Values.bpf.autoMount.enabled (not .Values.securityContext.privileged) }}
# Mount the bpf fs if it is not mounted. We will perform this task
# from a privileged container because the mount propagation bidirectional
# only works from privileged containers.
- name: mount-bpf-fs
image: {{ include "cilium.image" .Values.image | quote }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- with .Values.initResources }}
resources:
{{- toYaml . | trim | nindent 10 }}
{{- end }}
args:
- 'mount | grep "/sys/fs/bpf type bpf" || mount -t bpf bpf /sys/fs/bpf'
command:
- /bin/bash
- -c
- --
terminationMessagePolicy: FallbackToLogsOnError
securityContext:
privileged: true
{{- /* CRI-O already mounts the BPF filesystem */ -}}
{{- if and .Values.bpf.autoMount.enabled (not (eq .Values.containerRuntime.integration "crio")) }}
volumeMounts:
- name: bpf-maps
mountPath: /sys/fs/bpf
mountPropagation: Bidirectional
{{- end }}
{{- end }}
{{- if and .Values.nodeinit.enabled .Values.nodeinit.bootstrapFile }}
- name: wait-for-node-init
image: {{ include "cilium.image" .Values.image | quote }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- with .Values.initResources }}
resources:
{{- toYaml . | trim | nindent 10 }}
{{- end }}
command:
- sh
- -c
- |
until test -s {{ (print "/tmp/cilium-bootstrap.d/" (.Values.nodeinit.bootstrapFile | base)) | quote }}; do
echo "Waiting on node-init to run...";
sleep 1;
done
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- name: cilium-bootstrap-file-dir
mountPath: "/tmp/cilium-bootstrap.d"
{{- end }}
- name: clean-cilium-state
image: {{ include "cilium.image" .Values.image | quote }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- /init-container.sh
env:
- name: CILIUM_ALL_STATE
valueFrom:
configMapKeyRef:
name: cilium-config
key: clean-cilium-state
optional: true
- name: CILIUM_BPF_STATE
valueFrom:
configMapKeyRef:
name: cilium-config
key: clean-cilium-bpf-state
optional: true
- name: WRITE_CNI_CONF_WHEN_READY
valueFrom:
configMapKeyRef:
name: cilium-config
key: write-cni-conf-when-ready
optional: true
{{- if .Values.k8sServiceHost }}
- name: KUBERNETES_SERVICE_HOST
value: {{ .Values.k8sServiceHost | quote }}
{{- end }}
{{- if .Values.k8sServicePort }}
- name: KUBERNETES_SERVICE_PORT
value: {{ .Values.k8sServicePort | quote }}
{{- end }}
{{- with .Values.extraEnv }}
{{- toYaml . | nindent 8 }}
{{- end }}
terminationMessagePolicy: FallbackToLogsOnError
securityContext:
{{- if .Values.securityContext.privileged }}
privileged: true
{{- else }}
seLinuxOptions:
{{- with .Values.securityContext.seLinuxOptions }}
{{- toYaml . | nindent 12 }}
{{- end }}
capabilities:
add:
{{- with .Values.securityContext.capabilities.cleanCiliumState }}
{{- toYaml . | nindent 14 }}
{{- end }}
drop:
- ALL
{{- end}}
volumeMounts:
{{- /* CRI-O already mounts the BPF filesystem */ -}}
{{- if and .Values.bpf.autoMount.enabled (not (eq .Values.containerRuntime.integration "crio")) }}
- name: bpf-maps
mountPath: /sys/fs/bpf
{{- end }}
# Required to mount cgroup filesystem from the host to cilium agent pod
- name: cilium-cgroup
mountPath: {{ .Values.cgroup.hostRoot }}
mountPropagation: HostToContainer
- name: cilium-run
mountPath: /var/run/cilium
{{- with .Values.extraVolumeMounts }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.initResources }}
resources:
{{- toYaml . | trim | nindent 10 }}
{{- end }}
{{- if and .Values.waitForKubeProxy (and (ne $kubeProxyReplacement "strict") (ne $kubeProxyReplacement "true")) }}
- name: wait-for-kube-proxy
image: {{ include "cilium.image" .Values.image | quote }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- with .Values.initResources }}
resources:
{{- toYaml . | trim | nindent 10 }}
{{- end }}
securityContext:
privileged: true
command:
- bash
- -c
- |
while true
do
if iptables-nft-save -t mangle | grep -E '^:(KUBE-IPTABLES-HINT|KUBE-PROXY-CANARY)'; then
echo "Found KUBE-IPTABLES-HINT or KUBE-PROXY-CANARY iptables rule in 'iptables-nft-save -t mangle'"
exit 0
fi
if ip6tables-nft-save -t mangle | grep -E '^:(KUBE-IPTABLES-HINT|KUBE-PROXY-CANARY)'; then
echo "Found KUBE-IPTABLES-HINT or KUBE-PROXY-CANARY iptables rule in 'ip6tables-nft-save -t mangle'"
exit 0
fi
if iptables-legacy-save | grep -E '^:KUBE-PROXY-CANARY'; then
echo "Found KUBE-PROXY-CANARY iptables rule in 'iptables-legacy-save"
exit 0
fi
if ip6tables-legacy-save | grep -E '^:KUBE-PROXY-CANARY'; then
echo "KUBE-PROXY-CANARY iptables rule in 'ip6tables-legacy-save'"
exit 0
fi
echo "Waiting for kube-proxy to create iptables rules...";
sleep 1;
done
terminationMessagePolicy: FallbackToLogsOnError
{{- end }} # wait-for-kube-proxy
{{- if .Values.cni.install }}
# Install the CNI binaries in an InitContainer so we don't have a writable host mount in the agent
- name: install-cni-binaries
image: {{ include "cilium.image" .Values.image | quote }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- "/install-plugin.sh"
{{- with .Values.cni.resources }}
resources:
{{- toYaml . | trim | nindent 10 }}
{{- end }}
securityContext:
{{- if .Values.securityContext.privileged }}
privileged: true
{{- else }}
seLinuxOptions:
{{- with .Values.securityContext.seLinuxOptions }}
{{- toYaml . | nindent 12 }}
{{- end }}
{{- end }}
capabilities:
drop:
- ALL
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- name: cni-path
mountPath: /host/opt/cni/bin
{{- end }} # .Values.cni.install
restartPolicy: Always
priorityClassName: {{ include "cilium.priorityClass" (list $ .Values.priorityClassName "system-node-critical") }}
serviceAccount: {{ .Values.serviceAccounts.cilium.name | quote }}
serviceAccountName: {{ .Values.serviceAccounts.cilium.name | quote }}
automountServiceAccountToken: {{ .Values.serviceAccounts.cilium.automount }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
hostNetwork: true
{{- if and .Values.etcd.managed (not .Values.etcd.k8sService) }}
# In managed etcd mode, Cilium must be able to resolve the DNS name of
# the etcd service
dnsPolicy: ClusterFirstWithHostNet
{{- else if .Values.dnsPolicy }}
dnsPolicy: {{ .Values.dnsPolicy }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | trim | nindent 8 }}
{{- end }}
{{- if and .Values.clustermesh.config.enabled (not (and .Values.clustermesh.useAPIServer .Values.clustermesh.apiserver.kvstoremesh.enabled )) }}
hostAliases:
{{- range $cluster := .Values.clustermesh.config.clusters }}
{{- range $ip := $cluster.ips }}
- ip: {{ $ip }}
hostnames: [ "{{ $cluster.name }}.{{ $.Values.clustermesh.config.domain }}" ]
{{- end }}
{{- end }}
{{- end }}
volumes:
# For sharing configuration between the "config" initContainer and the agent
- name: tmp
emptyDir: {}
# To keep state between restarts / upgrades
- name: cilium-run
hostPath:
path: {{ .Values.daemon.runPath }}
type: DirectoryOrCreate
{{- /* CRI-O already mounts the BPF filesystem */ -}}
{{- if and .Values.bpf.autoMount.enabled (not (eq .Values.containerRuntime.integration "crio")) }}
# To keep state between restarts / upgrades for bpf maps
- name: bpf-maps
hostPath:
path: /sys/fs/bpf
type: DirectoryOrCreate
{{- end }}
{{- if .Values.cgroup.autoMount.enabled }}
# To mount cgroup2 filesystem on the host
- name: hostproc
hostPath:
path: /proc
type: Directory
{{- end }}
# To keep state between restarts / upgrades for cgroup2 filesystem
- name: cilium-cgroup
hostPath:
path: {{ .Values.cgroup.hostRoot}}
type: DirectoryOrCreate
# To install cilium cni plugin in the host
- name: cni-path
hostPath:
path: {{ .Values.cni.binPath }}
type: DirectoryOrCreate
# To install cilium cni configuration in the host
- name: etc-cni-netd
hostPath:
path: {{ .Values.cni.confPath }}
type: DirectoryOrCreate
# To be able to load kernel modules
- name: lib-modules
hostPath:
path: /lib/modules
# To access iptables concurrently with other processes (e.g. kube-proxy)
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
{{- if .Values.authentication.mutual.spire.enabled }}
- name: spire-agent-socket
hostPath:
path: {{ dir .Values.authentication.mutual.spire.adminSocketPath }}
type: DirectoryOrCreate
{{- end }}
{{- if .Values.envoy.enabled }}
# Sharing socket with Cilium Envoy on the same node by using a host path
- name: envoy-sockets
hostPath:
path: "{{ .Values.daemon.runPath }}/envoy/sockets"
type: DirectoryOrCreate
{{- end }}
{{- if .Values.kubeConfigPath }}
- name: kube-config
hostPath:
path: {{ .Values.kubeConfigPath }}
type: FileOrCreate
{{- end }}
{{- if and .Values.nodeinit.enabled .Values.nodeinit.bootstrapFile }}
- name: cilium-bootstrap-file-dir
hostPath:
path: {{ .Values.nodeinit.bootstrapFile | dir | quote }}
type: DirectoryOrCreate
{{- end }}
{{- if .Values.etcd.enabled }}
# To read the etcd config stored in config maps
- name: etcd-config-path
configMap:
name: cilium-config
# note: the leading zero means this number is in octal representation: do not remove it
defaultMode: 0400
items:
- key: etcd-config
path: etcd.config
# To read the k8s etcd secrets in case the user might want to use TLS
{{- if or .Values.etcd.ssl .Values.etcd.managed }}
- name: etcd-secrets
secret:
secretName: cilium-etcd-secrets
# note: the leading zero means this number is in octal representation: do not remove it
defaultMode: 0400
optional: true
{{- end }}
{{- end }}
# To read the clustermesh configuration
- name: clustermesh-secrets
projected:
# note: the leading zero means this number is in octal representation: do not remove it
defaultMode: 0400
sources:
- secret:
name: cilium-clustermesh
optional: true
# note: items are not explicitly listed here, since the entries of this secret
# depend on the peers configured, and that would cause a restart of all agents
# at every addition/removal. Leaving the field empty makes each secret entry
# to be automatically projected into the volume as a file whose name is the key.
- secret:
name: clustermesh-apiserver-remote-cert
optional: true
items:
- key: tls.key
path: common-etcd-client.key
- key: tls.crt
path: common-etcd-client.crt
{{- if not .Values.tls.caBundle.enabled }}
- key: ca.crt
path: common-etcd-client-ca.crt
{{- else }}
- {{ .Values.tls.caBundle.useSecret | ternary "secret" "configMap" }}:
name: {{ .Values.tls.caBundle.name }}
optional: true
items:
- key: {{ .Values.tls.caBundle.key }}
path: common-etcd-client-ca.crt
{{- end }}
{{- if and .Values.ipMasqAgent .Values.ipMasqAgent.enabled }}
- name: ip-masq-agent
configMap:
name: ip-masq-agent
optional: true
items:
- key: config
path: ip-masq-agent
{{- end }}
{{- if and .Values.encryption.enabled (eq .Values.encryption.type "ipsec") }}
- name: cilium-ipsec-secrets
secret:
secretName: {{ .Values.encryption.ipsec.secretName | default .Values.encryption.secretName }}
{{- end }}
{{- if .Values.cni.configMap }}
- name: cni-configuration
configMap:
name: {{ .Values.cni.configMap }}
{{- end }}
{{- if .Values.bgp.enabled }}
- name: bgp-config-path
configMap:
name: bgp-config
{{- end }}
{{- if not .Values.securityContext.privileged }}
- name: host-proc-sys-net
hostPath:
path: /proc/sys/net
type: Directory
- name: host-proc-sys-kernel
hostPath:
path: /proc/sys/kernel
type: Directory
{{- end }}
{{- if and .Values.hubble.enabled .Values.hubble.tls.enabled (hasKey .Values.hubble "listenAddress") }}
- name: hubble-tls
projected:
# note: the leading zero means this number is in octal representation: do not remove it
defaultMode: 0400
sources:
- secret:
name: hubble-server-certs
optional: true
items:
- key: tls.crt
path: server.crt
- key: tls.key
path: server.key
{{- if not .Values.tls.caBundle.enabled }}
- key: ca.crt
path: client-ca.crt
{{- else }}
- {{ .Values.tls.caBundle.useSecret | ternary "secret" "configMap" }}:
name: {{ .Values.tls.caBundle.name }}
optional: true
items:
- key: {{ .Values.tls.caBundle.key }}
path: client-ca.crt
{{- end }}
{{- end }}
{{- if .Values.hubble.export.dynamic.enabled }}
- name: hubble-flowlog-config
configMap:
name: {{ .Values.hubble.export.dynamic.config.configMapName }}
optional: true
{{- end }}
{{- range .Values.extraHostPathMounts }}
- name: {{ .name }}
hostPath:
path: {{ .hostPath }}
{{- if .hostPathType }}
type: {{ .hostPathType }}
{{- end }}
{{- end }}
{{- with .Values.extraVolumes }}
{{- toYaml . | nindent 6 }}
{{- end }}
{{- end }}
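
The complete template above also relaxes the wait-for-kube-proxy gate: the init container is now skipped when kubeProxyReplacement is either "strict" (the older wording) or "true" (the newer boolean-style value). When it does run, it only polls for kube-proxy's iptables canary; the equivalent manual check on a node, assuming both nft- and legacy-backed iptables tooling are installed:

# Hedged sketch: the same canary lookup the init container loops on.
iptables-nft-save -t mangle | grep -E '^:(KUBE-IPTABLES-HINT|KUBE-PROXY-CANARY)' \
  || iptables-legacy-save | grep -E '^:KUBE-PROXY-CANARY'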

View File

@@ -15,9 +15,14 @@ metadata:
{{- if $.Values.dashboards.label }} {{- if $.Values.dashboards.label }}
{{ $.Values.dashboards.label }}: {{ ternary $.Values.dashboards.labelValue "1" (not (empty $.Values.dashboards.labelValue)) | quote }} {{ $.Values.dashboards.label }}: {{ ternary $.Values.dashboards.labelValue "1" (not (empty $.Values.dashboards.labelValue)) | quote }}
{{- end }} {{- end }}
{{- with $.Values.dashboards.annotations }} {{- if or $.Values.dashboards.annotations $.Values.annotations }}
annotations: annotations:
{{- toYaml . | nindent 4 }} {{- with $.Values.dashboards.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with $.Values.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }} {{- end }}
data: data:
{{ $dashboardName }}.json: {{ $.Files.Get $path | toJson }} {{ $dashboardName }}.json: {{ $.Files.Get $path | toJson }}

View File

@@ -5,6 +5,10 @@ kind: Role
metadata: metadata:
name: cilium-config-agent name: cilium-config-agent
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
rules: rules:
@@ -26,6 +30,10 @@ kind: Role
metadata: metadata:
name: cilium-ingress-secrets name: cilium-ingress-secrets
namespace: {{ .Values.ingressController.secretsNamespace.name | quote }} namespace: {{ .Values.ingressController.secretsNamespace.name | quote }}
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
rules: rules:
@@ -46,6 +54,10 @@ kind: Role
metadata: metadata:
name: cilium-gateway-secrets name: cilium-gateway-secrets
namespace: {{ .Values.gatewayAPI.secretsNamespace.name | quote }} namespace: {{ .Values.gatewayAPI.secretsNamespace.name | quote }}
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
rules: rules:
@@ -66,6 +78,30 @@ kind: Role
metadata: metadata:
name: cilium-envoy-config-secrets name: cilium-envoy-config-secrets
namespace: {{ .Values.envoyConfig.secretsNamespace.name | quote }} namespace: {{ .Values.envoyConfig.secretsNamespace.name | quote }}
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels:
app.kubernetes.io/part-of: cilium
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- list
- watch
{{- end}}
{{- if and .Values.agent (not .Values.preflight.enabled) .Values.serviceAccounts.cilium.create .Values.bgpControlPlane.enabled .Values.bgpControlPlane.secretsNamespace.name }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: cilium-bgp-control-plane-secrets
namespace: {{ .Values.bgpControlPlane.secretsNamespace.name | quote }}
labels: labels:
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
rules: rules:

View File

@@ -5,6 +5,10 @@ kind: RoleBinding
metadata: metadata:
name: cilium-config-agent name: cilium-config-agent
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
roleRef: roleRef:
@@ -24,6 +28,10 @@ kind: RoleBinding
metadata: metadata:
name: cilium-secrets name: cilium-secrets
namespace: {{ .Values.ingressController.secretsNamespace.name | quote }} namespace: {{ .Values.ingressController.secretsNamespace.name | quote }}
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
roleRef: roleRef:
@@ -43,6 +51,10 @@ kind: RoleBinding
metadata: metadata:
name: cilium-gateway-secrets name: cilium-gateway-secrets
namespace: {{ .Values.gatewayAPI.secretsNamespace.name | quote }} namespace: {{ .Values.gatewayAPI.secretsNamespace.name | quote }}
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
roleRef: roleRef:
@@ -62,6 +74,10 @@ kind: RoleBinding
metadata: metadata:
name: cilium-envoy-config-secrets name: cilium-envoy-config-secrets
namespace: {{ .Values.envoyConfig.secretsNamespace.name | quote }} namespace: {{ .Values.envoyConfig.secretsNamespace.name | quote }}
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
roleRef: roleRef:
@@ -73,3 +89,22 @@ subjects:
name: {{ .Values.serviceAccounts.cilium.name | quote }} name: {{ .Values.serviceAccounts.cilium.name | quote }}
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- end}} {{- end}}
{{- if and .Values.agent (not .Values.preflight.enabled) .Values.serviceAccounts.cilium.create .Values.bgpControlPlane.enabled .Values.bgpControlPlane.secretsNamespace.name}}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: cilium-bgp-control-plane-secrets
namespace: {{ .Values.bgpControlPlane.secretsNamespace.name | quote }}
labels:
app.kubernetes.io/part-of: cilium
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: cilium-bgp-control-plane-secrets
subjects:
- kind: ServiceAccount
name: {{ .Values.serviceAccounts.cilium.name | quote }}
namespace: {{ .Release.Namespace }}
{{- end}}
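
The new cilium-bgp-control-plane-secrets Role and RoleBinding give the agent read access to BGP peer secrets (for example session passwords) in a dedicated namespace, and both are rendered only when bgpControlPlane is enabled and a secrets namespace name is set. A hedged values sketch that satisfies those guards (the create flag and the namespace are assumptions):

# Hedged sketch: values that make the chart emit the new Role and RoleBinding.
cat > bgp-values.yaml <<'EOF'
bgpControlPlane:
  enabled: true
  secretsNamespace:
    create: true
    name: kube-system
EOF
helm template cilium ./cilium -f bgp-values.yaml | grep -A3 'cilium-bgp-control-plane-secrets'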

View File

@@ -5,6 +5,10 @@ kind: Service
metadata: metadata:
name: cilium-agent name: cilium-agent
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
k8s-app: cilium k8s-app: cilium
app.kubernetes.io/name: cilium-agent app.kubernetes.io/name: cilium-agent

View File

@@ -4,8 +4,13 @@ kind: ServiceAccount
metadata: metadata:
name: {{ .Values.serviceAccounts.cilium.name | quote }} name: {{ .Values.serviceAccounts.cilium.name | quote }}
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- if .Values.serviceAccounts.cilium.annotations }} {{- if or .Values.serviceAccounts.cilium.annotations .Values.annotations }}
annotations: annotations:
{{- toYaml .Values.serviceAccounts.cilium.annotations | nindent 4 }} {{- with .Values.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.serviceAccounts.cilium.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }} {{- end }}
{{- end }} {{- end }}

View File

@@ -10,10 +10,15 @@ metadata:
{{- with .Values.prometheus.serviceMonitor.labels }} {{- with .Values.prometheus.serviceMonitor.labels }}
{{- toYaml . | nindent 4 }} {{- toYaml . | nindent 4 }}
{{- end }} {{- end }}
{{- if or .Values.prometheus.serviceMonitor.annotations .Values.annotations }}
annotations: annotations:
{{- with .Values.prometheus.serviceMonitor.annotations }} {{- with .Values.annotations }}
{{- toYaml . | nindent 4 }} {{- toYaml . | nindent 4 }}
{{- end }} {{- end }}
{{- with .Values.prometheus.serviceMonitor.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
spec: spec:
selector: selector:
matchLabels: matchLabels:
@@ -34,6 +39,23 @@ spec:
metricRelabelings: metricRelabelings:
{{- toYaml . | nindent 4 }} {{- toYaml . | nindent 4 }}
{{- end }} {{- end }}
{{- if .Values.envoy.prometheus.serviceMonitor.enabled }}
- port: envoy-metrics
interval: {{ .Values.envoy.prometheus.serviceMonitor.interval | quote }}
honorLabels: true
path: /metrics
{{- with .Values.envoy.prometheus.serviceMonitor.relabelings }}
relabelings:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.envoy.prometheus.serviceMonitor.metricRelabelings }}
metricRelabelings:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
targetLabels: targetLabels:
- k8s-app - k8s-app
{{- if .Values.prometheus.serviceMonitor.jobLabel }}
jobLabel: {{ .Values.prometheus.serviceMonitor.jobLabel | quote }}
{{- end }}
{{- end }} {{- end }}
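
The ServiceMonitor hunks above add an optional envoy-metrics scrape endpoint and an optional jobLabel. A hedged values sketch that would enable the extra target (the field names follow the template conditions; the interval is illustrative):

# Hedged sketch: scrape the embedded Envoy metrics through the agent ServiceMonitor.
cat > envoy-metrics-values.yaml <<'EOF'
prometheus:
  enabled: true
  serviceMonitor:
    enabled: true
envoy:
  prometheus:
    serviceMonitor:
      enabled: true
      interval: 30s
EOF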

View File

@@ -1,5 +1,5 @@
{{- if or {{- if or
(and (or .Values.externalWorkloads.enabled .Values.clustermesh.useAPIServer) .Values.clustermesh.apiserver.tls.auto.enabled (eq .Values.clustermesh.apiserver.tls.auto.method "helm") (not .Values.clustermesh.apiserver.tls.ca.cert)) (and (or .Values.externalWorkloads.enabled .Values.clustermesh.useAPIServer) .Values.clustermesh.apiserver.tls.auto.enabled (eq .Values.clustermesh.apiserver.tls.auto.method "helm"))
(and (or .Values.agent .Values.hubble.relay.enabled .Values.hubble.ui.enabled) .Values.hubble.enabled .Values.hubble.tls.enabled .Values.hubble.tls.auto.enabled (eq .Values.hubble.tls.auto.method "helm")) (and (or .Values.agent .Values.hubble.relay.enabled .Values.hubble.ui.enabled) .Values.hubble.enabled .Values.hubble.tls.enabled .Values.hubble.tls.auto.enabled (eq .Values.hubble.tls.auto.method "helm"))
(and .Values.tls.ca.key .Values.tls.ca.cert) (and .Values.tls.ca.key .Values.tls.ca.cert)
-}} -}}

View File

@@ -1,6 +1,5 @@
{{- if and (.Values.agent) (not .Values.preflight.enabled) }} {{- if and (.Values.agent) (not .Values.preflight.enabled) }}
{{- /* Default values with backwards compatibility */ -}} {{- /* Default values with backwards compatibility */ -}}
{{- $defaultEnableCnpStatusUpdates := "true" -}}
{{- $defaultBpfMapDynamicSizeRatio := 0.0 -}} {{- $defaultBpfMapDynamicSizeRatio := 0.0 -}}
{{- $defaultBpfMasquerade := "false" -}} {{- $defaultBpfMasquerade := "false" -}}
{{- $defaultBpfClockProbe := "false" -}} {{- $defaultBpfClockProbe := "false" -}}
@@ -13,10 +12,12 @@
{{- $fragmentTracking := "true" -}} {{- $fragmentTracking := "true" -}}
{{- $defaultKubeProxyReplacement := "false" -}} {{- $defaultKubeProxyReplacement := "false" -}}
{{- $azureUsePrimaryAddress := "true" -}} {{- $azureUsePrimaryAddress := "true" -}}
{{- $defaultK8sClientQPS := 5 -}}
{{- $defaultK8sClientBurst := 10 -}}
{{- $defaultDNSProxyEnableTransparentMode := "false" -}}
{{- /* Default values when 1.8 was initially deployed */ -}} {{- /* Default values when 1.8 was initially deployed */ -}}
{{- if semverCompare ">=1.8" (default "1.8" .Values.upgradeCompatibility) -}} {{- if semverCompare ">=1.8" (default "1.8" .Values.upgradeCompatibility) -}}
{{- $defaultEnableCnpStatusUpdates = "false" -}}
{{- $defaultBpfMapDynamicSizeRatio = 0.0025 -}} {{- $defaultBpfMapDynamicSizeRatio = 0.0025 -}}
{{- $defaultBpfMasquerade = "true" -}} {{- $defaultBpfMasquerade = "true" -}}
{{- $defaultBpfClockProbe = "true" -}} {{- $defaultBpfClockProbe = "true" -}}
@@ -48,6 +49,7 @@
{{- $azureUsePrimaryAddress = "false" -}} {{- $azureUsePrimaryAddress = "false" -}}
{{- end }} {{- end }}
{{- $defaultKubeProxyReplacement = "disabled" -}} {{- $defaultKubeProxyReplacement = "disabled" -}}
{{- $defaultDNSProxyEnableTransparentMode = "true" -}}
{{- end -}} {{- end -}}
{{- /* Default values when 1.14 was initially deployed */ -}} {{- /* Default values when 1.14 was initially deployed */ -}}
@@ -76,6 +78,11 @@
{{- else if (not (kindIs "invalid" .Values.cni.chainingTarget)) -}} {{- else if (not (kindIs "invalid" .Values.cni.chainingTarget)) -}}
{{- $cniChainingMode = "generic-veth" -}} {{- $cniChainingMode = "generic-veth" -}}
{{- end -}} {{- end -}}
{{- if semverCompare ">=1.27-0" .Capabilities.KubeVersion.Version -}}
{{- $defaultK8sClientQPS = 10 -}}
{{- $defaultK8sClientBurst = 20 -}}
{{- end -}}
--- ---
apiVersion: v1 apiVersion: v1
kind: ConfigMap kind: ConfigMap
@@ -189,6 +196,11 @@ data:
enable-policy: "{{ lower .Values.policyEnforcementMode }}" enable-policy: "{{ lower .Values.policyEnforcementMode }}"
{{- end }} {{- end }}
{{- if hasKey .Values "policyCIDRMatchMode" }}
policy-cidr-match-mode: {{ join " " .Values.policyCIDRMatchMode | quote }}
{{- end}}
{{- if .Values.prometheus.enabled }} {{- if .Values.prometheus.enabled }}
# If you want metrics enabled in all of your Cilium agents, set the port for # If you want metrics enabled in all of your Cilium agents, set the port for
# which the Cilium agents will have their metrics exposed. # which the Cilium agents will have their metrics exposed.
@@ -205,6 +217,13 @@ data:
{{ . }} {{ . }}
{{- end }} {{- end }}
{{- end }} {{- end }}
{{- if .Values.prometheus.controllerGroupMetrics }}
# A space-separated list of controller groups for which to enable metrics.
# The special values of "all" and "none" are supported.
controller-group-metrics: {{- range .Values.prometheus.controllerGroupMetrics }}
{{ . }}
{{- end }}
{{- end }}
{{- end }} {{- end }}
{{- if not .Values.envoy.enabled }} {{- if not .Values.envoy.enabled }}
@@ -238,6 +257,7 @@ data:
{{- if .Values.ingressController.enabled }} {{- if .Values.ingressController.enabled }}
enable-ingress-controller: "true" enable-ingress-controller: "true"
enforce-ingress-https: {{ .Values.ingressController.enforceHttps | quote }} enforce-ingress-https: {{ .Values.ingressController.enforceHttps | quote }}
enable-ingress-proxy-protocol: {{ .Values.ingressController.enableProxyProtocol | quote }}
enable-ingress-secrets-sync: {{ .Values.ingressController.secretsNamespace.sync | quote }} enable-ingress-secrets-sync: {{ .Values.ingressController.secretsNamespace.sync | quote }}
ingress-secrets-namespace: {{ .Values.ingressController.secretsNamespace.name | quote }} ingress-secrets-namespace: {{ .Values.ingressController.secretsNamespace.name | quote }}
ingress-lb-annotation-prefixes: {{ .Values.ingressController.ingressLBAnnotationPrefixes | join " " | quote }} ingress-lb-annotation-prefixes: {{ .Values.ingressController.ingressLBAnnotationPrefixes | join " " | quote }}
@@ -430,28 +450,23 @@ data:
# - vxlan (default) # - vxlan (default)
# - geneve # - geneve
{{- if .Values.gke.enabled }} {{- if .Values.gke.enabled }}
{{- if ne (.Values.routingMode | default "native") "native" }}
{{- fail (printf "RoutingMode must be set to native when gke.enabled=true" )}}
{{- end }}
routing-mode: "native" routing-mode: "native"
enable-endpoint-routes: "true" enable-endpoint-routes: "true"
enable-local-node-route: "false"
{{- else if .Values.aksbyocni.enabled }} {{- else if .Values.aksbyocni.enabled }}
{{- if ne (.Values.routingMode | default "tunnel") "tunnel" }}
{{- fail (printf "RoutingMode must be set to tunnel when aksbyocni.enabled=true" )}}
{{- end }}
routing-mode: "tunnel" routing-mode: "tunnel"
tunnel-protocol: "vxlan" tunnel-protocol: "vxlan"
{{- else if .Values.routingMode }} {{- else if .Values.routingMode }}
routing-mode: {{ .Values.routingMode | quote }} routing-mode: {{ .Values.routingMode | quote }}
{{- else }} {{- else }}
{{- if eq .Values.tunnel "disabled" }}
routing-mode: "native"
{{- else if eq .Values.tunnel "vxlan" }}
routing-mode: "tunnel"
tunnel-protocol: "vxlan"
{{- else if eq .Values.tunnel "geneve" }}
routing-mode: "tunnel"
tunnel-protocol: "geneve"
{{- else }}
# Default case # Default case
routing-mode: "tunnel" routing-mode: "tunnel"
tunnel-protocol: "vxlan" tunnel-protocol: "vxlan"
{{- end }}
{{- end }} {{- end }}
{{- if .Values.tunnelProtocol }} {{- if .Values.tunnelProtocol }}
@@ -462,6 +477,10 @@ data:
tunnel-port: {{ .Values.tunnelPort | quote }} tunnel-port: {{ .Values.tunnelPort | quote }}
{{- end }} {{- end }}
{{- if .Values.serviceNoBackendResponse }}
service-no-backend-response: "{{ .Values.serviceNoBackendResponse }}"
{{- end}}
{{- if .Values.MTU }} {{- if .Values.MTU }}
mtu: {{ .Values.MTU | quote }} mtu: {{ .Values.MTU | quote }}
{{- end }} {{- end }}
@@ -500,7 +519,6 @@ data:
{{- if .Values.azure.enabled }} {{- if .Values.azure.enabled }}
enable-endpoint-routes: "true" enable-endpoint-routes: "true"
auto-create-cilium-node-resource: "true" auto-create-cilium-node-resource: "true"
enable-local-node-route: "false"
{{- if .Values.azure.userAssignedIdentityID }} {{- if .Values.azure.userAssignedIdentityID }}
azure-user-assigned-identity-id: {{ .Values.azure.userAssignedIdentityID | quote }} azure-user-assigned-identity-id: {{ .Values.azure.userAssignedIdentityID | quote }}
{{- end }} {{- end }}
@@ -551,6 +569,7 @@ data:
{{- else if eq $defaultBpfMasquerade "true" }} {{- else if eq $defaultBpfMasquerade "true" }}
enable-bpf-masquerade: {{ $defaultBpfMasquerade | quote }} enable-bpf-masquerade: {{ $defaultBpfMasquerade | quote }}
{{- end }} {{- end }}
enable-masquerade-to-route-source: {{ .Values.enableMasqueradeRouteSource | quote }}
{{- if hasKey .Values "egressMasqueradeInterfaces" }} {{- if hasKey .Values "egressMasqueradeInterfaces" }}
egress-masquerade-interfaces: {{ .Values.egressMasqueradeInterfaces }} egress-masquerade-interfaces: {{ .Values.egressMasqueradeInterfaces }}
{{- end }} {{- end }}
@@ -583,8 +602,8 @@ data:
{{- if .Values.encryption.wireguard.userspaceFallback }} {{- if .Values.encryption.wireguard.userspaceFallback }}
enable-wireguard-userspace-fallback: {{ .Values.encryption.wireguard.userspaceFallback | quote }} enable-wireguard-userspace-fallback: {{ .Values.encryption.wireguard.userspaceFallback | quote }}
{{- end }} {{- end }}
{{- if .Values.encryption.wireguard.encapsulate }} {{- if .Values.encryption.wireguard.persistentKeepalive }}
wireguard-encapsulate: {{ .Values.encryption.wireguard.encapsulate | quote }} wireguard-persistent-keepalive: {{ .Values.encryption.wireguard.persistentKeepalive | quote }}
{{- end }} {{- end }}
{{- end }} {{- end }}
{{- if .Values.encryption.nodeEncryption }} {{- if .Values.encryption.nodeEncryption }}
@@ -592,6 +611,14 @@ data:
{{- end }} {{- end }}
{{- end }} {{- end }}
{{- if .Values.encryption.strictMode.enabled }}
enable-encryption-strict-mode: {{ .Values.encryption.strictMode.enabled | quote }}
encryption-strict-mode-cidr: {{ .Values.encryption.strictMode.cidr | quote }}
encryption-strict-mode-allow-remote-node-identities: {{ .Values.encryption.strictMode.allowRemoteNodeIdentities | quote }}
{{- end }}
enable-xt-socket-fallback: {{ .Values.enableXTSocketFallback | quote }} enable-xt-socket-fallback: {{ .Values.enableXTSocketFallback | quote }}
{{- if or (.Values.azure.enabled) (.Values.eni.enabled) (.Values.gke.enabled) (ne $cniChainingMode "none") }} {{- if or (.Values.azure.enabled) (.Values.eni.enabled) (.Values.gke.enabled) (ne $cniChainingMode "none") }}
install-no-conntrack-iptables-rules: "false" install-no-conntrack-iptables-rules: "false"
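The encryption hunks above swap encryption.wireguard.encapsulate for persistentKeepalive and introduce a WireGuard strict mode. A values sketch exercising the new keys; the enabled/type switches are the usual Cilium encryption settings rather than part of this diff, and the CIDR and keepalive interval are placeholders:

  encryption:
    enabled: true
    type: wireguard
    wireguard:
      persistentKeepalive: 30s
    strictMode:
      enabled: true
      cidr: 10.0.0.0/8
      allowRemoteNodeIdentities: true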
@@ -693,6 +720,11 @@ data:
{{- end }} {{- end }}
{{- if hasKey .Values.nodePort "enableHealthCheck" }} {{- if hasKey .Values.nodePort "enableHealthCheck" }}
enable-health-check-nodeport: {{ .Values.nodePort.enableHealthCheck | quote}} enable-health-check-nodeport: {{ .Values.nodePort.enableHealthCheck | quote}}
{{- end }}
{{- if .Values.gke.enabled }}
enable-health-check-loadbalancer-ip: "true"
{{- else if hasKey .Values.nodePort "enableHealthCheckLoadBalancerIP" }}
enable-health-check-loadbalancer-ip: {{ .Values.nodePort.enableHealthCheckLoadBalancerIP | quote}}
{{- end }} {{- end }}
node-port-bind-protection: {{ .Values.nodePort.bindProtection | quote }} node-port-bind-protection: {{ .Values.nodePort.bindProtection | quote }}
enable-auto-protect-node-port-range: {{ .Values.nodePort.autoProtectPortRange | quote }} enable-auto-protect-node-port-range: {{ .Values.nodePort.autoProtectPortRange | quote }}
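The addition above makes the NodePort health-check LoadBalancer IP behaviour configurable, and hard-codes it on for GKE. Sketch:

  nodePort:
    enabled: true
    enableHealthCheck: true
    enableHealthCheckLoadBalancerIP: true   # ignored when gke.enabled=true, which forces it to "true"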
@@ -828,7 +860,7 @@ data:
{{- if .Values.hubble.enabled }} {{- if .Values.hubble.enabled }}
# Enable Hubble gRPC service. # Enable Hubble gRPC service.
enable-hubble: {{ .Values.hubble.enabled | quote }} enable-hubble: {{ .Values.hubble.enabled | quote }}
# UNIX domain socket for Hubble server to listen to. # UNIX domain socket for Hubble server to listen to.
hubble-socket-path: {{ .Values.hubble.socketPath | quote }} hubble-socket-path: {{ .Values.hubble.socketPath | quote }}
{{- if hasKey .Values.hubble "eventQueueSize" }} {{- if hasKey .Values.hubble "eventQueueSize" }}
@@ -852,6 +884,49 @@ data:
{{- end }} {{- end }}
enable-hubble-open-metrics: {{ .Values.hubble.metrics.enableOpenMetrics | quote }} enable-hubble-open-metrics: {{ .Values.hubble.metrics.enableOpenMetrics | quote }}
{{- end }} {{- end }}
{{- if .Values.hubble.redact }}
{{- if eq .Values.hubble.redact.enabled true }}
# Enables hubble redact capabilities
hubble-redact-enabled: "true"
{{- if .Values.hubble.redact.http }}
# Enables redaction of the http URL query part in flows
hubble-redact-http-urlquery: {{ .Values.hubble.redact.http.urlQuery | quote }}
# Enables redaction of the http user info in flows
hubble-redact-http-userinfo: {{ .Values.hubble.redact.http.userInfo | quote }}
{{- if .Values.hubble.redact.http.headers }}
{{- if .Values.hubble.redact.http.headers.allow }}
# Redact all http headers that do not match this list
hubble-redact-http-headers-allow: {{- range .Values.hubble.redact.http.headers.allow }}
{{ . }}
{{- end }}
{{- end }}
{{- if .Values.hubble.redact.http.headers.deny }}
# Redact all http headers that match this list
hubble-redact-http-headers-deny: {{- range .Values.hubble.redact.http.headers.deny }}
{{ . }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.hubble.redact.kafka }}
# Enables redaction of the Kafka API key part in flows
hubble-redact-kafka-apikey: {{ .Values.hubble.redact.kafka.apiKey | quote }}
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.hubble.export }}
hubble-export-file-max-size-mb: {{ .Values.hubble.export.fileMaxSizeMb | quote }}
hubble-export-file-max-backups: {{ .Values.hubble.export.fileMaxBackups | quote }}
{{- if .Values.hubble.export.static.enabled }}
hubble-export-file-path: {{ .Values.hubble.export.static.filePath | quote }}
hubble-export-fieldmask: {{ .Values.hubble.export.static.fieldMask | join " " | quote }}
hubble-export-allowlist: {{ .Values.hubble.export.static.allowList | join "," | quote }}
hubble-export-denylist: {{ .Values.hubble.export.static.denyList | join "," | quote }}
{{- end }}
{{- if .Values.hubble.export.dynamic.enabled }}
hubble-flowlogs-config-path: /flowlog-config/flowlogs.yaml
{{- end }}
{{- end }}
{{- if hasKey .Values.hubble "listenAddress" }} {{- if hasKey .Values.hubble "listenAddress" }}
# An additional address for Hubble server to listen to (e.g. ":4244"). # An additional address for Hubble server to listen to (e.g. ":4244").
hubble-listen-address: {{ .Values.hubble.listenAddress | quote }} hubble-listen-address: {{ .Values.hubble.listenAddress | quote }}
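The block added above exposes Hubble's redaction settings through values. A corresponding sketch (the header names are examples):

  hubble:
    enabled: true
    redact:
      enabled: true
      http:
        urlQuery: true
        userInfo: true
        headers:
          deny:
            - authorization
            - cookie
      kafka:
        apiKey: true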
@@ -885,7 +960,7 @@ data:
ipam-cilium-node-update-rate: {{ include "validateDuration" .Values.ipam.ciliumNodeUpdateRate | quote }} ipam-cilium-node-update-rate: {{ include "validateDuration" .Values.ipam.ciliumNodeUpdateRate | quote }}
{{- end }} {{- end }}
{{- if or (eq $ipam "cluster-pool") (eq $ipam "cluster-pool-v2beta") }} {{- if (eq $ipam "cluster-pool") }}
{{- if .Values.ipv4.enabled }} {{- if .Values.ipv4.enabled }}
{{- if hasKey .Values.ipam.operator "clusterPoolIPv4PodCIDR" }} {{- if hasKey .Values.ipam.operator "clusterPoolIPv4PodCIDR" }}
{{- /* ipam.operator.clusterPoolIPv4PodCIDR removed in v1.14, remove this failsafe around v1.17 */ -}} {{- /* ipam.operator.clusterPoolIPv4PodCIDR removed in v1.14, remove this failsafe around v1.17 */ -}}
@@ -927,11 +1002,8 @@ data:
limit-ipam-api-qps: {{ .Values.ipam.operator.externalAPILimitQPS | quote }} limit-ipam-api-qps: {{ .Values.ipam.operator.externalAPILimitQPS | quote }}
{{- end }} {{- end }}
{{- if .Values.enableCnpStatusUpdates }} {{- if .Values.apiRateLimit }}
disable-cnp-status-updates: "false" api-rate-limit: {{ .Values.apiRateLimit | quote }}
{{- else if (eq $defaultEnableCnpStatusUpdates "false") }}
disable-cnp-status-updates: "true"
cnp-node-status-gc-interval: "0s"
{{- end }} {{- end }}
{{- if .Values.egressGateway.enabled }} {{- if .Values.egressGateway.enabled }}
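Here the deprecated CNP status-update handling is dropped and a pass-through apiRateLimit value is rendered instead. The value lands verbatim in api-rate-limit, so it must follow the agent's rate-limit string syntax; the entry below is an illustrative guess at that syntax, not something defined by this chart:

  apiRateLimit: "endpoint-create=rate-limit:10/s,rate-burst:10"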
@@ -963,10 +1035,6 @@ data:
{{- end }} {{- end }}
{{- end }} {{- end }}
{{- if .Values.enableK8sEventHandover }}
enable-k8s-event-handover: "true"
{{- end }}
{{- if .Values.crdWaitTimeout }} {{- if .Values.crdWaitTimeout }}
crd-wait-timeout: {{ include "validateDuration" .Values.crdWaitTimeout | quote }} crd-wait-timeout: {{ include "validateDuration" .Values.crdWaitTimeout | quote }}
{{- end }} {{- end }}
@@ -1018,6 +1086,7 @@ data:
{{- if .Values.bgpControlPlane.enabled }} {{- if .Values.bgpControlPlane.enabled }}
enable-bgp-control-plane: "true" enable-bgp-control-plane: "true"
bgp-secrets-namespace: {{ .Values.bgpControlPlane.secretsNamespace.name | quote }}
{{- else }} {{- else }}
enable-bgp-control-plane: "false" enable-bgp-control-plane: "false"
{{- end }} {{- end }}
@@ -1064,10 +1133,8 @@ data:
annotate-k8s-node: "true" annotate-k8s-node: "true"
{{- end }} {{- end }}
{{- if hasKey .Values "k8sClientRateLimit" }} k8s-client-qps: {{ .Values.k8sClientRateLimit.qps | default $defaultK8sClientQPS | quote}}
k8s-client-qps: {{ .Values.k8sClientRateLimit.qps | quote }} k8s-client-burst: {{ .Values.k8sClientRateLimit.burst | default $defaultK8sClientBurst | quote }}
k8s-client-burst: {{ .Values.k8sClientRateLimit.burst | quote }}
{{- end }}
{{- if and .Values.operator.setNodeTaints (not .Values.operator.removeNodeTaints) -}} {{- if and .Values.operator.setNodeTaints (not .Values.operator.removeNodeTaints) -}}
{{ fail "Cannot have operator.setNodeTaintsMaxNodes and not operator.removeNodeTaints = false" }} {{ fail "Cannot have operator.setNodeTaintsMaxNodes and not operator.removeNodeTaints = false" }}
@@ -1092,6 +1159,13 @@ data:
{{- end }} {{- end }}
{{- if .Values.dnsProxy }} {{- if .Values.dnsProxy }}
{{- if hasKey .Values.dnsProxy "enableTransparentMode" }}
# explicit setting gets precedence
dnsproxy-enable-transparent-mode: {{ .Values.dnsProxy.enableTransparentMode | quote }}
{{- else if eq $cniChainingMode "none" }}
# default DNS proxy to transparent mode in non-chaining modes
dnsproxy-enable-transparent-mode: {{ $defaultDNSProxyEnableTransparentMode | quote }}
{{- end }}
{{- if .Values.dnsProxy.dnsRejectResponseCode }} {{- if .Values.dnsProxy.dnsRejectResponseCode }}
tofqdns-dns-reject-response-code: {{ .Values.dnsProxy.dnsRejectResponseCode | quote }} tofqdns-dns-reject-response-code: {{ .Values.dnsProxy.dnsRejectResponseCode | quote }}
{{- end }} {{- end }}
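The DNS proxy gains an explicit transparent-mode switch that takes precedence over the chaining-mode-based default. Sketch:

  dnsProxy:
    enableTransparentMode: true      # explicit setting wins; otherwise defaulted only in non-chaining modes
    dnsRejectResponseCode: refused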
@@ -1121,10 +1195,6 @@ data:
{{- end }} {{- end }}
{{- end }} {{- end }}
{{- if .Values.extraConfig }}
{{ toYaml .Values.extraConfig | nindent 2 }}
{{- end }}
{{- if hasKey .Values "agentNotReadyTaintKey" }} {{- if hasKey .Values "agentNotReadyTaintKey" }}
agent-not-ready-taint-key: {{ .Values.agentNotReadyTaintKey | quote }} agent-not-ready-taint-key: {{ .Values.agentNotReadyTaintKey | quote }}
{{- end }} {{- end }}
@@ -1138,6 +1208,7 @@ data:
mesh-auth-mutual-enabled: "true" mesh-auth-mutual-enabled: "true"
mesh-auth-mutual-listener-port: {{ .Values.authentication.mutual.port | quote }} mesh-auth-mutual-listener-port: {{ .Values.authentication.mutual.port | quote }}
mesh-auth-spire-agent-socket: {{ .Values.authentication.mutual.spire.agentSocketPath | quote }} mesh-auth-spire-agent-socket: {{ .Values.authentication.mutual.spire.agentSocketPath | quote }}
mesh-auth-mutual-connect-timeout: {{ include "validateDuration" .Values.authentication.mutual.connectTimeout | quote }}
{{- if .Values.authentication.mutual.spire.serverAddress }} {{- if .Values.authentication.mutual.spire.serverAddress }}
mesh-auth-spire-server-address: {{ .Values.authentication.mutual.spire.serverAddress | quote }} mesh-auth-spire-server-address: {{ .Values.authentication.mutual.spire.serverAddress | quote }}
{{- else }} {{- else }}
@@ -1158,6 +1229,16 @@ data:
envoy-log: {{ .Values.envoy.log.path | quote }} envoy-log: {{ .Values.envoy.log.path | quote }}
{{- end }} {{- end }}
{{- if hasKey .Values.clustermesh "maxConnectedClusters" }}
max-connected-clusters: {{ .Values.clustermesh.maxConnectedClusters | quote }}
{{- end }}
# Extra config allows adding arbitrary properties to the cilium config.
# By putting it at the end of the ConfigMap, it's also possible to override existing properties.
{{- if .Values.extraConfig }}
{{ toYaml .Values.extraConfig | nindent 2 }}
{{- end }}
{{- end }} {{- end }}
--- ---
{{- if and .Values.ipMasqAgent.enabled .Values.ipMasqAgent.config }} {{- if and .Values.ipMasqAgent.enabled .Values.ipMasqAgent.config }}
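At the tail of the ConfigMap, extraConfig is now rendered last so it can override earlier keys, and clustermesh.maxConnectedClusters is surfaced. Sketch; the extra key is just an example of an arbitrary agent option:

  clustermesh:
    maxConnectedClusters: 511
  extraConfig:
    # rendered after everything above, so it may override existing keys
    enable-ipv6-ndp: "true"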

View File

@@ -6,6 +6,10 @@ kind: ConfigMap
metadata: metadata:
name: cilium-envoy-config name: cilium-envoy-config
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.envoy.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
data: data:
{{- (tpl (.Files.Glob "files/cilium-envoy/configmap/bootstrap-config.json").AsConfig .) | nindent 2 }} {{- (tpl (.Files.Glob "files/cilium-envoy/configmap/bootstrap-config.json").AsConfig .) | nindent 2 }}

View File

@@ -6,6 +6,10 @@ kind: DaemonSet
metadata: metadata:
name: cilium-envoy name: cilium-envoy
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.envoy.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
k8s-app: cilium-envoy k8s-app: cilium-envoy
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
@@ -61,11 +65,11 @@ spec:
image: {{ include "cilium.image" .Values.envoy.image | quote }} image: {{ include "cilium.image" .Values.envoy.image | quote }}
imagePullPolicy: {{ .Values.envoy.image.pullPolicy }} imagePullPolicy: {{ .Values.envoy.image.pullPolicy }}
command: command:
- /usr/bin/cilium-envoy - /usr/bin/cilium-envoy-starter
args: args:
- '-c /var/run/cilium/envoy/bootstrap-config.json' - '-c /var/run/cilium/envoy/bootstrap-config.json'
- '--base-id 0' - '--base-id 0'
{{- if and (hasKey .Values.debug "verbose") (.Values.debug.verbose) (has "envoy" ( splitList " " .Values.debug.verbose )) }} {{- if and (.Values.debug.enabled) (hasKey .Values.debug "verbose") (.Values.debug.verbose) (has "envoy" ( splitList " " .Values.debug.verbose )) }}
- '--log-level trace' - '--log-level trace'
{{- else if and (.Values.debug.enabled) (hasKey .Values.debug "verbose") (.Values.debug.verbose) (has "flow" ( splitList " " .Values.debug.verbose )) }} {{- else if and (.Values.debug.enabled) (hasKey .Values.debug "verbose") (.Values.debug.verbose) (has "flow" ( splitList " " .Values.debug.verbose )) }}
- '--log-level debug' - '--log-level debug'
@@ -82,17 +86,18 @@ spec:
{{- if semverCompare ">=1.20-0" .Capabilities.KubeVersion.Version }} {{- if semverCompare ">=1.20-0" .Capabilities.KubeVersion.Version }}
startupProbe: startupProbe:
httpGet: httpGet:
host: "localhost" host: {{ .Values.ipv4.enabled | ternary "127.0.0.1" "::1" | quote }}
path: /healthz path: /healthz
port: {{ .Values.envoy.healthPort }} port: {{ .Values.envoy.healthPort }}
scheme: HTTP scheme: HTTP
failureThreshold: {{ .Values.envoy.startupProbe.failureThreshold }} failureThreshold: {{ .Values.envoy.startupProbe.failureThreshold }}
periodSeconds: {{ .Values.envoy.startupProbe.periodSeconds }} periodSeconds: {{ .Values.envoy.startupProbe.periodSeconds }}
successThreshold: 1 successThreshold: 1
initialDelaySeconds: 5
{{- end }} {{- end }}
livenessProbe: livenessProbe:
httpGet: httpGet:
host: "localhost" host: {{ .Values.ipv4.enabled | ternary "127.0.0.1" "::1" | quote }}
path: /healthz path: /healthz
port: {{ .Values.envoy.healthPort }} port: {{ .Values.envoy.healthPort }}
scheme: HTTP scheme: HTTP
@@ -110,7 +115,7 @@ spec:
timeoutSeconds: 5 timeoutSeconds: 5
readinessProbe: readinessProbe:
httpGet: httpGet:
host: "localhost" host: {{ .Values.ipv4.enabled | ternary "127.0.0.1" "::1" | quote }}
path: /healthz path: /healthz
port: {{ .Values.envoy.healthPort }} port: {{ .Values.envoy.healthPort }}
scheme: HTTP scheme: HTTP
@@ -175,6 +180,9 @@ spec:
- name: envoy-sockets - name: envoy-sockets
mountPath: /var/run/cilium/envoy/sockets mountPath: /var/run/cilium/envoy/sockets
readOnly: false readOnly: false
- name: envoy-artifacts
mountPath: /var/run/cilium/envoy/artifacts
readOnly: true
- name: envoy-config - name: envoy-config
mountPath: /var/run/cilium/envoy/ mountPath: /var/run/cilium/envoy/
readOnly: true readOnly: true
@@ -224,6 +232,10 @@ spec:
hostPath: hostPath:
path: "{{ .Values.daemon.runPath }}/envoy/sockets" path: "{{ .Values.daemon.runPath }}/envoy/sockets"
type: DirectoryOrCreate type: DirectoryOrCreate
- name: envoy-artifacts
hostPath:
path: "{{ .Values.daemon.runPath }}/envoy/artifacts"
type: DirectoryOrCreate
- name: envoy-config - name: envoy-config
configMap: configMap:
name: cilium-envoy-config name: cilium-envoy-config

View File

@@ -4,11 +4,16 @@ kind: Service
metadata: metadata:
name: cilium-envoy name: cilium-envoy
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- if not .Values.envoy.prometheus.serviceMonitor.enabled }} {{- if or (not .Values.envoy.prometheus.serviceMonitor.enabled) .Values.envoy.annotations }}
annotations: annotations:
{{- if not .Values.envoy.prometheus.serviceMonitor.enabled }}
prometheus.io/scrape: "true" prometheus.io/scrape: "true"
prometheus.io/port: {{ .Values.proxy.prometheus.port | default .Values.envoy.prometheus.port | quote }} prometheus.io/port: {{ .Values.proxy.prometheus.port | default .Values.envoy.prometheus.port | quote }}
{{- end }} {{- end }}
{{- with .Values.envoy.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
labels: labels:
k8s-app: cilium-envoy k8s-app: cilium-envoy
app.kubernetes.io/name: cilium-envoy app.kubernetes.io/name: cilium-envoy

View File

@@ -4,8 +4,13 @@ kind: ServiceAccount
metadata: metadata:
name: {{ .Values.serviceAccounts.envoy.name | quote }} name: {{ .Values.serviceAccounts.envoy.name | quote }}
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- if .Values.serviceAccounts.envoy.annotations }} {{- if or .Values.serviceAccounts.envoy.annotations .Values.envoy.annotations }}
annotations: annotations:
{{- toYaml .Values.serviceAccounts.envoy.annotations | nindent 4 }} {{- with .Values.envoy.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.serviceAccounts.envoy.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }} {{- end }}
{{- end }} {{- end }}

View File

@@ -7,13 +7,19 @@ metadata:
namespace: {{ .Values.envoy.prometheus.serviceMonitor.namespace | default .Release.Namespace }} namespace: {{ .Values.envoy.prometheus.serviceMonitor.namespace | default .Release.Namespace }}
labels: labels:
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
app.kubernetes.io/name: cilium-envoy
{{- with .Values.envoy.prometheus.serviceMonitor.labels }} {{- with .Values.envoy.prometheus.serviceMonitor.labels }}
{{- toYaml . | nindent 4 }} {{- toYaml . | nindent 4 }}
{{- end }} {{- end }}
{{- if or .Values.envoy.prometheus.serviceMonitor.annotations .Values.envoy.annotations }}
annotations: annotations:
{{- with .Values.envoy.prometheus.serviceMonitor.annotations }} {{- with .Values.envoy.annotations }}
{{- toYaml . | nindent 4 }} {{- toYaml . | nindent 4 }}
{{- end }} {{- end }}
{{- with .Values.envoy.prometheus.serviceMonitor.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
spec: spec:
selector: selector:
matchLabels: matchLabels:
@@ -22,7 +28,7 @@ spec:
matchNames: matchNames:
- {{ .Release.Namespace }} - {{ .Release.Namespace }}
endpoints: endpoints:
- port: metrics - port: envoy-metrics
interval: {{ .Values.envoy.prometheus.serviceMonitor.interval | quote }} interval: {{ .Values.envoy.prometheus.serviceMonitor.interval | quote }}
honorLabels: true honorLabels: true
path: /metrics path: /metrics
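The cilium-envoy templates above all pick up an optional envoy.annotations map, switch to the cilium-envoy-starter entrypoint, make the probes IPv6-aware, and point the ServiceMonitor at the renamed envoy-metrics port. A values sketch for the annotation propagation (annotation key and value are examples):

  envoy:
    enabled: true
    annotations:
      example.com/team: networking   # copied onto the ConfigMap, DaemonSet, Service, ServiceAccount and ServiceMonitor
    prometheus:
      serviceMonitor:
        enabled: true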

View File

@@ -0,0 +1,12 @@
{{- if and .Values.hubble.export.dynamic.enabled .Values.hubble.export.dynamic.config.createConfigMap }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.hubble.export.dynamic.config.configMapName }}
namespace: {{ .Release.Namespace }}
data:
flowlogs.yaml: |
flowLogs:
{{ .Values.hubble.export.dynamic.config.content | toYaml | indent 4 }}
{{- end }}
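This new template materialises hubble.export.dynamic.config.content into a flowlogs.yaml ConfigMap. A values sketch; the shape of each content entry is an assumption based on Hubble's flow-log exporter, not defined by this template:

  hubble:
    export:
      fileMaxSizeMb: 10
      fileMaxBackups: 5
      dynamic:
        enabled: true
        config:
          createConfigMap: true
          configMapName: cilium-flowlog-config
          content:
            - name: all                                    # assumed entry shape
              filePath: /var/run/cilium/hubble/events.log  # assumed entry shape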

View File

@@ -1,6 +1,6 @@
{{- if .Values.gatewayAPI.enabled -}} {{- if .Values.gatewayAPI.enabled -}}
{{- if .Capabilities.APIVersions.Has "gateway.networking.k8s.io/v1beta1/GatewayClass" }} {{- if .Capabilities.APIVersions.Has "gateway.networking.k8s.io/v1/GatewayClass" }}
apiVersion: gateway.networking.k8s.io/v1beta1 apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass kind: GatewayClass
metadata: metadata:
name: cilium name: cilium
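The GatewayClass template now targets Gateway API v1 instead of v1beta1. A Gateway referencing the cilium class under the new API version would look like this (name and listener are illustrative):

  apiVersion: gateway.networking.k8s.io/v1
  kind: Gateway
  metadata:
    name: example-gateway
  spec:
    gatewayClassName: cilium
    listeners:
      - name: http
        protocol: HTTP
        port: 80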

View File

@@ -5,6 +5,10 @@ apiVersion: apps/v1
metadata: metadata:
name: cilium-node-init name: cilium-node-init
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.nodeinit.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
app: cilium-node-init app: cilium-node-init
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium

View File

@@ -4,8 +4,13 @@ kind: ServiceAccount
metadata: metadata:
name: {{ .Values.serviceAccounts.nodeinit.name | quote }} name: {{ .Values.serviceAccounts.nodeinit.name | quote }}
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- if .Values.serviceAccounts.nodeinit.annotations }} {{- if or .Values.serviceAccounts.nodeinit.annotations .Values.nodeinit.annotations }}
annotations: annotations:
{{- toYaml .Values.serviceAccounts.nodeinit.annotations | nindent 4 }} {{- with .Values.nodeinit.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.serviceAccounts.nodeinit.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }} {{- end }}
{{- end }} {{- end }}

View File

@@ -3,6 +3,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole kind: ClusterRole
metadata: metadata:
name: cilium-operator name: cilium-operator
{{- with .Values.operator.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
rules: rules:
@@ -157,6 +161,9 @@ rules:
resources: resources:
- ciliumendpointslices - ciliumendpointslices
- ciliumenvoyconfigs - ciliumenvoyconfigs
- ciliumbgppeerconfigs
- ciliumbgpadvertisements
- ciliumbgpnodeconfigs
verbs: verbs:
- create - create
- update - update
@@ -183,6 +190,11 @@ rules:
resourceNames: resourceNames:
- ciliumloadbalancerippools.cilium.io - ciliumloadbalancerippools.cilium.io
- ciliumbgppeeringpolicies.cilium.io - ciliumbgppeeringpolicies.cilium.io
- ciliumbgpclusterconfigs.cilium.io
- ciliumbgppeerconfigs.cilium.io
- ciliumbgpadvertisements.cilium.io
- ciliumbgpnodeconfigs.cilium.io
- ciliumbgpnodeconfigoverrides.cilium.io
- ciliumclusterwideenvoyconfigs.cilium.io - ciliumclusterwideenvoyconfigs.cilium.io
- ciliumclusterwidenetworkpolicies.cilium.io - ciliumclusterwidenetworkpolicies.cilium.io
- ciliumegressgatewaypolicies.cilium.io - ciliumegressgatewaypolicies.cilium.io
@@ -203,6 +215,8 @@ rules:
resources: resources:
- ciliumloadbalancerippools - ciliumloadbalancerippools
- ciliumpodippools - ciliumpodippools
- ciliumbgpclusterconfigs
- ciliumbgpnodeconfigoverrides
verbs: verbs:
- get - get
- list - list
@@ -258,6 +272,7 @@ rules:
- gateways - gateways
- tlsroutes - tlsroutes
- httproutes - httproutes
- grpcroutes
- referencegrants - referencegrants
- referencepolicies - referencepolicies
verbs: verbs:
@@ -270,6 +285,7 @@ rules:
- gatewayclasses/status - gatewayclasses/status
- gateways/status - gateways/status
- httproutes/status - httproutes/status
- grpcroutes/status
- tlsroutes/status - tlsroutes/status
verbs: verbs:
- update - update

View File

@@ -3,6 +3,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding kind: ClusterRoleBinding
metadata: metadata:
name: cilium-operator name: cilium-operator
{{- with .Values.operator.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
roleRef: roleRef:

View File

@@ -15,9 +15,14 @@ metadata:
{{- if $.Values.operator.dashboards.label }} {{- if $.Values.operator.dashboards.label }}
{{ $.Values.operator.dashboards.label }}: {{ ternary $.Values.operator.dashboards.labelValue "1" (not (empty $.Values.operator.dashboards.labelValue)) | quote }} {{ $.Values.operator.dashboards.label }}: {{ ternary $.Values.operator.dashboards.labelValue "1" (not (empty $.Values.operator.dashboards.labelValue)) | quote }}
{{- end }} {{- end }}
{{- with $.Values.operator.dashboards.annotations }} {{- if or $.Values.operator.dashboards.annotations $.Values.operator.annotations }}
annotations: annotations:
{{- toYaml . | nindent 4 }} {{- with $.Values.operator.dashboards.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with $.Values.operator.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }} {{- end }}
data: data:
{{ $dashboardName }}.json: {{ $.Files.Get $path | toJson }} {{ $dashboardName }}.json: {{ $.Files.Get $path | toJson }}

View File

@@ -5,6 +5,10 @@ kind: Deployment
metadata: metadata:
name: cilium-operator name: cilium-operator
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.operator.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
io.cilium/app: operator io.cilium/app: operator
name: cilium-operator name: cilium-operator

View File

@@ -5,6 +5,10 @@ kind: PodDisruptionBudget
metadata: metadata:
name: cilium-operator name: cilium-operator
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.operator.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
io.cilium/app: operator io.cilium/app: operator
name: cilium-operator name: cilium-operator

View File

@@ -5,6 +5,10 @@ kind: Role
metadata: metadata:
name: cilium-operator-ingress-secrets name: cilium-operator-ingress-secrets
namespace: {{ .Values.ingressController.secretsNamespace.name | quote }} namespace: {{ .Values.ingressController.secretsNamespace.name | quote }}
{{- with .Values.operator.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
rules: rules:
@@ -26,6 +30,10 @@ kind: Role
metadata: metadata:
name: cilium-operator-gateway-secrets name: cilium-operator-gateway-secrets
namespace: {{ .Values.gatewayAPI.secretsNamespace.name | quote }} namespace: {{ .Values.gatewayAPI.secretsNamespace.name | quote }}
{{- with .Values.operator.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
rules: rules:

View File

@@ -5,6 +5,10 @@ kind: RoleBinding
metadata: metadata:
name: cilium-operator-ingress-secrets name: cilium-operator-ingress-secrets
namespace: {{ .Values.ingressController.secretsNamespace.name | quote }} namespace: {{ .Values.ingressController.secretsNamespace.name | quote }}
{{- with .Values.operator.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
roleRef: roleRef:
@@ -24,6 +28,10 @@ kind: RoleBinding
metadata: metadata:
name: cilium-operator-gateway-secrets name: cilium-operator-gateway-secrets
namespace: {{ .Values.gatewayAPI.secretsNamespace.name | quote }} namespace: {{ .Values.gatewayAPI.secretsNamespace.name | quote }}
{{- with .Values.operator.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
roleRef: roleRef:

View File

@@ -5,6 +5,10 @@ kind: Secret
metadata: metadata:
name: cilium-azure name: cilium-azure
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.operator.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
type: Opaque type: Opaque
data: data:
AZURE_CLIENT_ID: {{ default "" .Values.azure.clientID | b64enc | quote }} AZURE_CLIENT_ID: {{ default "" .Values.azure.clientID | b64enc | quote }}

View File

@@ -4,6 +4,10 @@ apiVersion: v1
metadata: metadata:
name: cilium-operator name: cilium-operator
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.operator.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
io.cilium/app: operator io.cilium/app: operator
name: cilium-operator name: cilium-operator

View File

@@ -8,8 +8,13 @@ kind: ServiceAccount
metadata: metadata:
name: {{ .Values.serviceAccounts.operator.name | quote }} name: {{ .Values.serviceAccounts.operator.name | quote }}
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- if .Values.serviceAccounts.operator.annotations }} {{- if or .Values.serviceAccounts.operator.annotations .Values.operator.annotations }}
annotations: annotations:
{{- toYaml .Values.serviceAccounts.operator.annotations | nindent 4 }} {{- with .Values.operator.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.serviceAccounts.operator.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }} {{- end }}
{{- end }} {{- end }}

View File

@@ -10,10 +10,15 @@ metadata:
{{- with .Values.operator.prometheus.serviceMonitor.labels }} {{- with .Values.operator.prometheus.serviceMonitor.labels }}
{{- toYaml . | nindent 4 }} {{- toYaml . | nindent 4 }}
{{- end }} {{- end }}
{{- if or .Values.operator.prometheus.serviceMonitor.annotations .Values.operator.annotations }}
annotations: annotations:
{{- with .Values.operator.prometheus.serviceMonitor.annotations }} {{- with .Values.operator.annotations }}
{{- toYaml . | nindent 4 }} {{- toYaml . | nindent 4 }}
{{- end }} {{- end }}
{{- with .Values.operator.prometheus.serviceMonitor.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
spec: spec:
selector: selector:
matchLabels: matchLabels:
@@ -37,4 +42,7 @@ spec:
{{- end }} {{- end }}
targetLabels: targetLabels:
- io.cilium/app - io.cilium/app
{{- if .Values.operator.prometheus.serviceMonitor.jobLabel }}
jobLabel: {{ .Values.operator.prometheus.serviceMonitor.jobLabel | quote }}
{{- end }}
{{- end }} {{- end }}
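The operator templates follow the same pattern as the envoy ones: an operator.annotations map propagated to every operator-scoped resource, plus an optional jobLabel on its ServiceMonitor. Sketch (annotation and label values are examples):

  operator:
    annotations:
      example.com/owner: platform    # added to the ClusterRole(Binding), Deployment, PDB, Roles, Secret, Service, ServiceAccount, ServiceMonitor and dashboards
    prometheus:
      enabled: true
      serviceMonitor:
        enabled: true
        jobLabel: io.cilium/app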

View File

@@ -6,6 +6,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole kind: ClusterRole
metadata: metadata:
name: cilium-pre-flight name: cilium-pre-flight
{{- with .Values.preflight.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
rules: rules:
@@ -82,6 +86,9 @@ rules:
resources: resources:
- ciliumloadbalancerippools - ciliumloadbalancerippools
- ciliumbgppeeringpolicies - ciliumbgppeeringpolicies
- ciliumbgpnodeconfigs
- ciliumbgpadvertisements
- ciliumbgppeerconfigs
- ciliumclusterwideenvoyconfigs - ciliumclusterwideenvoyconfigs
- ciliumclusterwidenetworkpolicies - ciliumclusterwidenetworkpolicies
- ciliumegressgatewaypolicies - ciliumegressgatewaypolicies
@@ -137,6 +144,7 @@ rules:
- ciliumendpoints/status - ciliumendpoints/status
- ciliumendpoints - ciliumendpoints
- ciliuml2announcementpolicies/status - ciliuml2announcementpolicies/status
- ciliumbgpnodeconfigs/status
verbs: verbs:
- patch - patch
{{- end }} {{- end }}

View File

@@ -3,6 +3,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding kind: ClusterRoleBinding
metadata: metadata:
name: cilium-pre-flight name: cilium-pre-flight
{{- with .Values.preflight.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
roleRef: roleRef:

View File

@@ -4,6 +4,10 @@ kind: DaemonSet
metadata: metadata:
name: cilium-pre-flight-check name: cilium-pre-flight-check
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.preflight.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec: spec:
selector: selector:
matchLabels: matchLabels:
@@ -66,8 +70,13 @@ spec:
- /tmp/ready - /tmp/ready
initialDelaySeconds: 5 initialDelaySeconds: 5
periodSeconds: 5 periodSeconds: 5
{{- with .Values.preflight.extraEnv }}
env: env:
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
{{- with .Values.preflight.extraEnv }}
{{- toYaml . | trim | nindent 12 }} {{- toYaml . | trim | nindent 12 }}
{{- end }} {{- end }}
volumeMounts: volumeMounts:
@@ -104,7 +113,7 @@ spec:
args: args:
- -ec - -ec
- | - |
cilium preflight fqdn-poller --tofqdns-pre-cache {{ .Values.preflight.tofqdnsPreCache }}; cilium-dbg preflight fqdn-poller --tofqdns-pre-cache {{ .Values.preflight.tofqdnsPreCache }};
touch /tmp/ready-tofqdns-precache; touch /tmp/ready-tofqdns-precache;
livenessProbe: livenessProbe:
exec: exec:

View File

@@ -4,6 +4,10 @@ kind: Deployment
metadata: metadata:
name: cilium-pre-flight-check name: cilium-pre-flight-check
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.preflight.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
app.kubernetes.io/name: cilium-pre-flight-check app.kubernetes.io/name: cilium-pre-flight-check
@@ -39,7 +43,7 @@ spec:
args: args:
- -ec - -ec
- | - |
cilium preflight validate-cnp; cilium-dbg preflight validate-cnp;
touch /tmp/ready-validate-cnp; touch /tmp/ready-validate-cnp;
sleep 1h; sleep 1h;
livenessProbe: livenessProbe:

View File

@@ -5,6 +5,10 @@ kind: PodDisruptionBudget
metadata: metadata:
name: cilium-pre-flight-check name: cilium-pre-flight-check
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.preflight.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
k8s-app: cilium-pre-flight-check-deployment k8s-app: cilium-pre-flight-check-deployment
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium

View File

@@ -4,8 +4,13 @@ kind: ServiceAccount
metadata: metadata:
name: {{ .Values.serviceAccounts.preflight.name | quote }} name: {{ .Values.serviceAccounts.preflight.name | quote }}
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- if .Values.serviceAccounts.preflight.annotations }} {{- if or .Values.serviceAccounts.preflight.annotations .Values.preflight.annotations }}
annotations: annotations:
{{ toYaml .Values.serviceAccounts.preflight.annotations | nindent 4 }} {{- with .Values.preflight.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.serviceAccounts.preflight.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }} {{- end }}
{{- end }} {{- end }}

View File

@@ -1,32 +1,14 @@
{{- if and .Values.ingressController.enabled .Values.ingressController.secretsNamespace.create .Values.ingressController.secretsNamespace.name }} {{- $secretNamespaces := dict -}}
--- {{- range $cfg := tuple .Values.ingressController .Values.gatewayAPI .Values.envoyConfig .Values.bgpControlPlane -}}
apiVersion: v1 {{- if and $cfg.enabled $cfg.secretsNamespace.create $cfg.secretsNamespace.name -}}
kind: Namespace {{- $_ := set $secretNamespaces $cfg.secretsNamespace.name 1 -}}
metadata: {{- end -}}
name: {{ .Values.ingressController.secretsNamespace.name | quote }} {{- end -}}
{{- end}}
# Only create the namespace if it's different from Ingress secret namespace or Ingress is not enabled. {{- range $name, $_ := $secretNamespaces }}
{{- if and .Values.gatewayAPI.enabled .Values.gatewayAPI.secretsNamespace.create .Values.gatewayAPI.secretsNamespace.name
(or (not (and .Values.ingressController.enabled .Values.ingressController.secretsNamespace.create .Values.ingressController.secretsNamespace.name))
(ne .Values.gatewayAPI.secretsNamespace.name .Values.ingressController.secretsNamespace.name)) }}
--- ---
apiVersion: v1 apiVersion: v1
kind: Namespace kind: Namespace
metadata: metadata:
name: {{ .Values.gatewayAPI.secretsNamespace.name | quote }} name: {{ $name | quote }}
{{- end}}
# Only create the namespace if it's different from Ingress and Gateway API secret namespaces (if enabled).
{{- if and .Values.envoyConfig.enabled .Values.envoyConfig.secretsNamespace.create .Values.envoyConfig.secretsNamespace.name
(and
(or (not (and .Values.ingressController.enabled .Values.ingressController.secretsNamespace.create .Values.ingressController.secretsNamespace.name))
(ne .Values.envoyConfig.secretsNamespace.name .Values.ingressController.secretsNamespace.name))
(or (not (and .Values.gatewayAPI.enabled .Values.gatewayAPI.secretsNamespace.create .Values.gatewayAPI.secretsNamespace.name))
(ne .Values.envoyConfig.secretsNamespace.name .Values.gatewayAPI.secretsNamespace.name))) }}
---
apiVersion: v1
kind: Namespace
metadata:
name: {{ .Values.envoyConfig.secretsNamespace.name | quote }}
{{- end}} {{- end}}
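The rewritten template above collects the secrets namespaces of the ingress controller, Gateway API, Envoy config and BGP control plane into a dict keyed by name, so each distinct namespace is rendered exactly once. Values sketch (namespace names are examples):

  ingressController:
    enabled: true
    secretsNamespace: {create: true, name: cilium-secrets}
  gatewayAPI:
    enabled: true
    secretsNamespace: {create: true, name: cilium-secrets}   # same name as above: only one Namespace object is emitted
  bgpControlPlane:
    enabled: true
    secretsNamespace: {create: true, name: cilium-bgp-secrets}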

View File

@@ -5,6 +5,10 @@ metadata:
name: clustermesh-apiserver name: clustermesh-apiserver
labels: labels:
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
{{- with .Values.clustermesh.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
rules: rules:
- apiGroups: - apiGroups:
- cilium.io - cilium.io

View File

@@ -5,6 +5,10 @@ metadata:
name: clustermesh-apiserver name: clustermesh-apiserver
labels: labels:
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
{{- with .Values.clustermesh.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
roleRef: roleRef:
apiGroup: rbac.authorization.k8s.io apiGroup: rbac.authorization.k8s.io
kind: ClusterRole kind: ClusterRole

View File

@@ -7,6 +7,10 @@ kind: Deployment
metadata: metadata:
name: clustermesh-apiserver name: clustermesh-apiserver
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.clustermesh.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
k8s-app: clustermesh-apiserver k8s-app: clustermesh-apiserver
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
@@ -44,41 +48,37 @@ spec:
{{- end }} {{- end }}
initContainers: initContainers:
- name: etcd-init - name: etcd-init
image: {{ include "cilium.image" .Values.clustermesh.apiserver.etcd.image | quote }} image: {{ include "cilium.image" .Values.clustermesh.apiserver.image | quote }}
imagePullPolicy: {{ .Values.clustermesh.apiserver.etcd.image.pullPolicy }} imagePullPolicy: {{ .Values.clustermesh.apiserver.image.pullPolicy }}
command: ["/bin/sh", "-c"] command:
- /usr/bin/clustermesh-apiserver
args: args:
- | - etcdinit
rm -rf /var/run/etcd/*; {{- if .Values.debug.enabled }}
/usr/local/bin/etcd --data-dir=/var/run/etcd --name=clustermesh-apiserver --listen-client-urls=http://127.0.0.1:2379 --advertise-client-urls=http://127.0.0.1:2379 --initial-cluster-token=clustermesh-apiserver --initial-cluster-state=new --auto-compaction-retention=1 & - --debug
{{- end }}
# The following key needs to be created before that the cilium agents # These need to match the equivalent arguments to etcd in the main container.
# have the possibility of connecting to etcd. - --etcd-cluster-name=clustermesh-apiserver
etcdctl put cilium/.has-cluster-config true - --etcd-initial-cluster-token=clustermesh-apiserver
- --etcd-data-dir=/var/run/etcd
etcdctl user add root --no-password; {{- with .Values.clustermesh.apiserver.etcd.init.extraArgs }}
etcdctl user grant-role root root; {{- toYaml . | trim | nindent 8 }}
etcdctl user add admin-{{ .Values.cluster.name }} --no-password; {{- end }}
etcdctl user grant-role admin-{{ .Values.cluster.name }} root;
etcdctl user add externalworkload --no-password;
etcdctl role add externalworkload;
etcdctl role grant-permission externalworkload --from-key read '';
etcdctl role grant-permission externalworkload readwrite --prefix cilium/state/noderegister/v1/;
etcdctl role grant-permission externalworkload readwrite --prefix cilium/.initlock/;
etcdctl user grant-role externalworkload externalworkload;
etcdctl user add remote --no-password;
etcdctl role add remote;
etcdctl role grant-permission remote --from-key read '';
etcdctl user grant-role remote remote;
etcdctl auth enable;
exit
env: env:
- name: ETCDCTL_API # The Cilium cluster name (specified via the `CILIUM_CLUSTER_NAME` environment variable) and the etcd cluster
value: "3" # name (specified via the `--etcd-cluster-name` argument) are very different concepts. The Cilium cluster name
- name: HOSTNAME_IP # is the name of the overall Cilium cluster, and is used to set the admin account username. The etcd cluster
# name is a concept that's only relevant for etcd itself. The etcd cluster name must be the same for both this
# command and the actual invocation of etcd in the main containers of this Pod, but it's otherwise not
# relevant to Cilium.
- name: CILIUM_CLUSTER_NAME
valueFrom: valueFrom:
fieldRef: configMapKeyRef:
fieldPath: status.podIP name: cilium-config
key: cluster-name
{{- with .Values.clustermesh.apiserver.etcd.init.extraEnv }}
{{- toYaml . | trim | nindent 8 }}
{{- end }}
volumeMounts: volumeMounts:
- name: etcd-data-dir - name: etcd-data-dir
mountPath: /var/run/etcd mountPath: /var/run/etcd
@@ -92,10 +92,11 @@ spec:
{{- end }} {{- end }}
containers: containers:
- name: etcd - name: etcd
image: {{ include "cilium.image" .Values.clustermesh.apiserver.etcd.image | quote }} # The clustermesh-apiserver container image includes an etcd binary.
imagePullPolicy: {{ .Values.clustermesh.apiserver.etcd.image.pullPolicy }} image: {{ include "cilium.image" .Values.clustermesh.apiserver.image | quote }}
imagePullPolicy: {{ .Values.clustermesh.apiserver.image.pullPolicy }}
command: command:
- /usr/local/bin/etcd - /usr/bin/etcd
args: args:
- --data-dir=/var/run/etcd - --data-dir=/var/run/etcd
- --name=clustermesh-apiserver - --name=clustermesh-apiserver
@@ -147,12 +148,17 @@ spec:
securityContext: securityContext:
{{- toYaml . | nindent 10 }} {{- toYaml . | nindent 10 }}
{{- end }} {{- end }}
{{- with .Values.clustermesh.apiserver.etcd.lifecycle }}
lifecycle:
{{- toYaml . | nindent 10 }}
{{- end }}
- name: apiserver - name: apiserver
image: {{ include "cilium.image" .Values.clustermesh.apiserver.image | quote }} image: {{ include "cilium.image" .Values.clustermesh.apiserver.image | quote }}
imagePullPolicy: {{ .Values.clustermesh.apiserver.image.pullPolicy }} imagePullPolicy: {{ .Values.clustermesh.apiserver.image.pullPolicy }}
command: command:
- /usr/bin/clustermesh-apiserver - /usr/bin/clustermesh-apiserver
args: args:
- clustermesh
{{- if .Values.debug.enabled }} {{- if .Values.debug.enabled }}
- --debug - --debug
{{- end }} {{- end }}
@@ -160,6 +166,9 @@ spec:
- --cluster-id=$(CLUSTER_ID) - --cluster-id=$(CLUSTER_ID)
- --kvstore-opt - --kvstore-opt
- etcd.config=/var/lib/cilium/etcd-config.yaml - etcd.config=/var/lib/cilium/etcd-config.yaml
{{- if hasKey .Values.clustermesh "maxConnectedClusters" }}
- --max-connected-clusters={{ .Values.clustermesh.maxConnectedClusters }}
{{- end }}
{{- if ne .Values.clustermesh.apiserver.tls.authMode "legacy" }} {{- if ne .Values.clustermesh.apiserver.tls.authMode "legacy" }}
- --cluster-users-enabled - --cluster-users-enabled
- --cluster-users-config-path=/var/lib/cilium/etcd-config/users.yaml - --cluster-users-config-path=/var/lib/cilium/etcd-config/users.yaml
@@ -167,6 +176,7 @@ spec:
- --enable-external-workloads={{ .Values.externalWorkloads.enabled }} - --enable-external-workloads={{ .Values.externalWorkloads.enabled }}
{{- if .Values.clustermesh.apiserver.metrics.enabled }} {{- if .Values.clustermesh.apiserver.metrics.enabled }}
- --prometheus-serve-addr=:{{ .Values.clustermesh.apiserver.metrics.port }} - --prometheus-serve-addr=:{{ .Values.clustermesh.apiserver.metrics.port }}
- --controller-group-metrics=all
{{- end }} {{- end }}
{{- with .Values.clustermesh.apiserver.extraArgs }} {{- with .Values.clustermesh.apiserver.extraArgs }}
{{- toYaml . | trim | nindent 8 }} {{- toYaml . | trim | nindent 8 }}
@@ -224,13 +234,18 @@ spec:
securityContext: securityContext:
{{- toYaml . | nindent 10 }} {{- toYaml . | nindent 10 }}
{{- end }} {{- end }}
{{- with .Values.clustermesh.apiserver.lifecycle }}
lifecycle:
{{- toYaml . | nindent 10 }}
{{- end }}
{{- if .Values.clustermesh.apiserver.kvstoremesh.enabled }} {{- if .Values.clustermesh.apiserver.kvstoremesh.enabled }}
- name: kvstoremesh - name: kvstoremesh
image: {{ include "cilium.image" .Values.clustermesh.apiserver.kvstoremesh.image | quote }} image: {{ include "cilium.image" .Values.clustermesh.apiserver.image | quote }}
imagePullPolicy: {{ .Values.clustermesh.apiserver.kvstoremesh.image.pullPolicy }} imagePullPolicy: {{ .Values.clustermesh.apiserver.image.pullPolicy }}
command: command:
- /usr/bin/kvstoremesh - /usr/bin/clustermesh-apiserver
args: args:
- kvstoremesh
{{- if .Values.debug.enabled }} {{- if .Values.debug.enabled }}
- --debug - --debug
{{- end }} {{- end }}
@@ -240,8 +255,12 @@ spec:
- --kvstore-opt=etcd.qps=100 - --kvstore-opt=etcd.qps=100
- --kvstore-opt=etcd.maxInflight=10 - --kvstore-opt=etcd.maxInflight=10
- --clustermesh-config=/var/lib/cilium/clustermesh - --clustermesh-config=/var/lib/cilium/clustermesh
{{- if hasKey .Values.clustermesh "maxConnectedClusters" }}
- --max-connected-clusters={{ .Values.clustermesh.maxConnectedClusters }}
{{- end }}
{{- if .Values.clustermesh.apiserver.metrics.kvstoremesh.enabled }} {{- if .Values.clustermesh.apiserver.metrics.kvstoremesh.enabled }}
- --prometheus-serve-addr=:{{ .Values.clustermesh.apiserver.metrics.kvstoremesh.port }} - --prometheus-serve-addr=:{{ .Values.clustermesh.apiserver.metrics.kvstoremesh.port }}
- --controller-group-metrics=all
{{- end }} {{- end }}
{{- with .Values.clustermesh.apiserver.kvstoremesh.extraArgs }} {{- with .Values.clustermesh.apiserver.kvstoremesh.extraArgs }}
{{- toYaml . | trim | nindent 8 }} {{- toYaml . | trim | nindent 8 }}
@@ -285,6 +304,10 @@ spec:
securityContext: securityContext:
{{- toYaml . | nindent 10 }} {{- toYaml . | nindent 10 }}
{{- end }} {{- end }}
{{- with .Values.clustermesh.apiserver.kvstoremesh.lifecycle }}
lifecycle:
{{- toYaml . | nindent 10 }}
{{- end }}
{{- end }} {{- end }}
volumes: volumes:
- name: etcd-server-secrets - name: etcd-server-secrets
@@ -371,6 +394,7 @@ spec:
priorityClassName: {{ include "cilium.priorityClass" (list $ .Values.clustermesh.apiserver.priorityClassName "system-cluster-critical") }} priorityClassName: {{ include "cilium.priorityClass" (list $ .Values.clustermesh.apiserver.priorityClassName "system-cluster-critical") }}
serviceAccount: {{ .Values.serviceAccounts.clustermeshApiserver.name | quote }} serviceAccount: {{ .Values.serviceAccounts.clustermeshApiserver.name | quote }}
serviceAccountName: {{ .Values.serviceAccounts.clustermeshApiserver.name | quote }} serviceAccountName: {{ .Values.serviceAccounts.clustermeshApiserver.name | quote }}
terminationGracePeriodSeconds: {{ .Values.clustermesh.apiserver.terminationGracePeriodSeconds }}
automountServiceAccountToken: {{ .Values.serviceAccounts.clustermeshApiserver.automount }} automountServiceAccountToken: {{ .Values.serviceAccounts.clustermeshApiserver.automount }}
{{- with .Values.clustermesh.apiserver.affinity }} {{- with .Values.clustermesh.apiserver.affinity }}
affinity: affinity:
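The deployment changes above fold etcd bootstrap (the new etcdinit subcommand) and kvstoremesh into the single clustermesh-apiserver image, add optional lifecycle hooks, and expose maxConnectedClusters plus a termination grace period. Values sketch (numbers are examples):

  clustermesh:
    useAPIServer: true
    maxConnectedClusters: 511
    apiserver:
      terminationGracePeriodSeconds: 30
      kvstoremesh:
        enabled: true          # now runs from the clustermesh-apiserver image itself
      etcd:
        init:
          extraArgs: []
          extraEnv: []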

View File

@@ -7,6 +7,10 @@ kind: Service
metadata: metadata:
name: clustermesh-apiserver-metrics name: clustermesh-apiserver-metrics
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.clustermesh.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
k8s-app: clustermesh-apiserver k8s-app: clustermesh-apiserver
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium

View File

@@ -5,6 +5,10 @@ kind: PodDisruptionBudget
metadata: metadata:
name: clustermesh-apiserver name: clustermesh-apiserver
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.clustermesh.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
k8s-app: clustermesh-apiserver k8s-app: clustermesh-apiserver
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium

View File

@@ -8,9 +8,14 @@ metadata:
k8s-app: clustermesh-apiserver k8s-app: clustermesh-apiserver
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
app.kubernetes.io/name: clustermesh-apiserver app.kubernetes.io/name: clustermesh-apiserver
{{- with .Values.clustermesh.apiserver.service.annotations }} {{- if or .Values.clustermesh.apiserver.service.annotations .Values.clustermesh.annotations }}
annotations: annotations:
{{- toYaml . | nindent 4 }} {{- with .Values.clustermesh.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.clustermesh.apiserver.service.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }} {{- end }}
spec: spec:
type: {{ .Values.clustermesh.apiserver.service.type }} type: {{ .Values.clustermesh.apiserver.service.type }}

View File

@@ -4,8 +4,13 @@ kind: ServiceAccount
metadata: metadata:
name: {{ .Values.serviceAccounts.clustermeshApiserver.name | quote }} name: {{ .Values.serviceAccounts.clustermeshApiserver.name | quote }}
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.serviceAccounts.clustermeshApiserver.annotations }} {{- if or .Values.serviceAccounts.clustermeshApiserver.annotations .Values.clustermesh.annotations }}
annotations: annotations:
{{- toYaml . | nindent 4 }} {{- with .Values.clustermesh.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.serviceAccounts.clustermeshApiserver.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }} {{- end }}
{{- end }} {{- end }}

View File

@@ -14,10 +14,15 @@ metadata:
{{- with .Values.clustermesh.apiserver.metrics.serviceMonitor.labels }} {{- with .Values.clustermesh.apiserver.metrics.serviceMonitor.labels }}
{{- toYaml . | nindent 4 }} {{- toYaml . | nindent 4 }}
{{- end }} {{- end }}
{{- if or .Values.clustermesh.apiserver.metrics.serviceMonitor.annotations .Values.clustermesh.annotations }}
annotations: annotations:
{{- with .Values.clustermesh.apiserver.metrics.serviceMonitor.annotations }} {{- with .Values.clustermesh.annotations }}
{{- toYaml . | nindent 4 }} {{- toYaml . | nindent 4 }}
{{- end }} {{- end }}
{{- with .Values.clustermesh.apiserver.metrics.serviceMonitor.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
spec: spec:
selector: selector:
matchLabels: matchLabels:

View File

@@ -5,6 +5,10 @@ kind: Certificate
metadata: metadata:
name: clustermesh-apiserver-admin-cert name: clustermesh-apiserver-admin-cert
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.clustermesh.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec: spec:
issuerRef: issuerRef:
{{- toYaml .Values.clustermesh.apiserver.tls.auto.certManagerIssuerRef | nindent 4 }} {{- toYaml .Values.clustermesh.apiserver.tls.auto.certManagerIssuerRef | nindent 4 }}

View File

@@ -5,6 +5,10 @@ kind: Certificate
metadata: metadata:
name: clustermesh-apiserver-client-cert name: clustermesh-apiserver-client-cert
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.clustermesh.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec: spec:
issuerRef: issuerRef:
{{- toYaml .Values.clustermesh.apiserver.tls.auto.certManagerIssuerRef | nindent 4 }} {{- toYaml .Values.clustermesh.apiserver.tls.auto.certManagerIssuerRef | nindent 4 }}

View File

@@ -5,6 +5,10 @@ kind: Certificate
metadata: metadata:
name: clustermesh-apiserver-remote-cert name: clustermesh-apiserver-remote-cert
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.clustermesh.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec: spec:
issuerRef: issuerRef:
{{- toYaml .Values.clustermesh.apiserver.tls.auto.certManagerIssuerRef | nindent 4 }} {{- toYaml .Values.clustermesh.apiserver.tls.auto.certManagerIssuerRef | nindent 4 }}

View File

@@ -5,6 +5,10 @@ kind: Certificate
metadata: metadata:
name: clustermesh-apiserver-server-cert name: clustermesh-apiserver-server-cert
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.clustermesh.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec: spec:
issuerRef: issuerRef:
{{- toYaml .Values.clustermesh.apiserver.tls.auto.certManagerIssuerRef | nindent 4 }} {{- toYaml .Values.clustermesh.apiserver.tls.auto.certManagerIssuerRef | nindent 4 }}

View File

@@ -26,12 +26,8 @@ spec:
{{- end }} {{- end }}
- "--ca-generate" - "--ca-generate"
- "--ca-reuse-secret" - "--ca-reuse-secret"
{{- if .Values.clustermesh.apiserver.tls.ca.cert }} {{- if and .Values.tls.ca.cert .Values.tls.ca.key }}
- "--ca-secret-name=clustermesh-apiserver-ca-cert"
{{- else -}}
{{- if and .Values.tls.ca.cert .Values.tls.ca.key }}
- "--ca-secret-name=cilium-ca" - "--ca-secret-name=cilium-ca"
{{- end }}
{{- end }} {{- end }}
- "--clustermesh-apiserver-server-cert-generate" - "--clustermesh-apiserver-server-cert-generate"
- "--clustermesh-apiserver-server-cert-validity-duration={{ $certValiditySecondsStr }}" - "--clustermesh-apiserver-server-cert-validity-duration={{ $certValiditySecondsStr }}"
@@ -69,5 +65,9 @@ spec:
volumes: volumes:
{{- toYaml . | nindent 6 }} {{- toYaml . | nindent 6 }}
{{- end }} {{- end }}
affinity:
{{- with .Values.certgen.affinity }}
{{- toYaml . | nindent 8 }}
{{- end }}
ttlSecondsAfterFinished: {{ .Values.certgen.ttlSecondsAfterFinished }} ttlSecondsAfterFinished: {{ .Values.certgen.ttlSecondsAfterFinished }}
{{- end }} {{- end }}

View File

@@ -1,15 +0,0 @@
{{- if and (or .Values.externalWorkloads.enabled .Values.clustermesh.useAPIServer) .Values.clustermesh.apiserver.tls.auto.enabled (eq .Values.clustermesh.apiserver.tls.auto.method "cronJob") }}
{{- $crt := .Values.clustermesh.apiserver.tls.ca.cert | default .Values.tls.ca.cert -}}
{{- $key := .Values.clustermesh.apiserver.tls.ca.key | default .Values.tls.ca.key -}}
{{- if and $crt $key }}
---
apiVersion: v1
kind: Secret
metadata:
name: clustermesh-apiserver-ca-cert
namespace: {{ .Release.Namespace }}
data:
ca.crt: {{ $crt }}
ca.key: {{ $key }}
{{- end }}
{{- end }}

View File

@@ -4,6 +4,10 @@ kind: CronJob
metadata: metadata:
name: clustermesh-apiserver-generate-certs name: clustermesh-apiserver-generate-certs
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.clustermesh.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
k8s-app: clustermesh-apiserver-generate-certs k8s-app: clustermesh-apiserver-generate-certs
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium

View File

@@ -13,5 +13,8 @@ metadata:
{{- with .Values.certgen.annotations.job }} {{- with .Values.certgen.annotations.job }}
{{- toYaml . | nindent 4 }} {{- toYaml . | nindent 4 }}
{{- end }} {{- end }}
{{- with .Values.clustermesh.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{ include "clustermesh-apiserver-generate-certs.job.spec" . }} {{ include "clustermesh-apiserver-generate-certs.job.spec" . }}
{{- end }} {{- end }}

View File

@@ -4,6 +4,10 @@ kind: Role
metadata: metadata:
name: clustermesh-apiserver-generate-certs name: clustermesh-apiserver-generate-certs
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.clustermesh.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
rules: rules:
@@ -19,7 +23,6 @@ rules:
- secrets - secrets
resourceNames: resourceNames:
- cilium-ca - cilium-ca
- clustermesh-apiserver-ca-cert
verbs: verbs:
- get - get
- update - update

View File

@@ -4,6 +4,10 @@ kind: RoleBinding
metadata: metadata:
name: clustermesh-apiserver-generate-certs name: clustermesh-apiserver-generate-certs
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.clustermesh.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels: labels:
app.kubernetes.io/part-of: cilium app.kubernetes.io/part-of: cilium
roleRef: roleRef:

View File

@@ -4,8 +4,13 @@ kind: ServiceAccount
metadata: metadata:
name: {{ .Values.serviceAccounts.clustermeshcertgen.name | quote }} name: {{ .Values.serviceAccounts.clustermeshcertgen.name | quote }}
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
{{- with .Values.serviceAccounts.clustermeshcertgen.annotations }} {{- if or .Values.serviceAccounts.clustermeshcertgen.annotations .Values.clustermesh.annotations }}
annotations: annotations:
{{- toYaml . | nindent 4 }} {{- with .Values.serviceAccounts.clustermeshcertgen.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.clustermesh.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }} {{- end }}
{{- end }} {{- end }}

View File

@@ -1,37 +0,0 @@
-{{/*
-Generate TLS certificates for ClusterMesh.
-Note: Always use this template as follows:
-    {{- $_ := include "clustermesh-apiserver-generate-certs.helm.setup-ca" . -}}
-The assignment to `$_` is required because we store the generated CA in a global `cmca` variable.
-Please, don't try to "simplify" this, as without this trick, every generated
-certificate would be signed by a different CA.
-*/}}
-{{- define "clustermesh-apiserver-generate-certs.helm.setup-ca" }}
-  {{- if not .cmca }}
-    {{- $ca := "" -}}
-    {{- $crt := .Values.clustermesh.apiserver.tls.ca.cert | default .Values.tls.ca.cert -}}
-    {{- $key := .Values.clustermesh.apiserver.tls.ca.key | default .Values.tls.ca.key -}}
-    {{- if and $crt $key }}
-      {{- $ca = buildCustomCert $crt $key -}}
-    {{- else }}
-      {{- with lookup "v1" "Secret" .Release.Namespace "clustermesh-apiserver-ca-cert" }}
-        {{- $crt := index .data "ca.crt" }}
-        {{- $key := index .data "ca.key" }}
-        {{- $ca = buildCustomCert $crt $key -}}
-      {{- else }}
-        {{- $_ := include "cilium.ca.setup" . -}}
-        {{- with lookup "v1" "Secret" .Release.Namespace .commonCASecretName }}
-          {{- $crt := index .data "ca.crt" }}
-          {{- $key := index .data "ca.key" }}
-          {{- $ca = buildCustomCert $crt $key -}}
-        {{- else }}
-          {{- $ca = .commonCA -}}
-        {{- end }}
-      {{- end }}
-    {{- end }}
-    {{- $_ := set . "cmca" $ca -}}
-  {{- end }}
-{{- end }}
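The comment in this removed helper documents a general Helm trick worth keeping in mind: generate an artifact once, cache it on the root context with `set`, and make every consumer call the helper through `include`, discarding its (empty) output into `$_`. Without the cache, each template would generate its own CA and the issued certificates would not share a signer. A stripped-down sketch of the pattern, using hypothetical helper and key names (`example.setup-ca`, `.exampleCA`) that are not taken from any chart:

```yaml
{{/* _helpers.tpl (sketch): generate a CA once and cache it on the root context */}}
{{- define "example.setup-ca" -}}
{{- if not .exampleCA -}}
{{- $_ := set . "exampleCA" (genCA "example-ca" 365) -}}
{{- end -}}
{{- end -}}

{{/* any template that needs a certificate signed by that same CA */}}
{{- $_ := include "example.setup-ca" . -}}
{{- $cert := genSignedCert "svc.example.local" nil (list "svc.example.local") 365 .exampleCA -}}
tls.crt: {{ $cert.Cert | b64enc }}
tls.key: {{ $cert.Key | b64enc }}
ca.crt: {{ .exampleCA.Cert | b64enc }}
```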


@@ -1,17 +1,21 @@
 {{- if and (or .Values.externalWorkloads.enabled .Values.clustermesh.useAPIServer) .Values.clustermesh.apiserver.tls.auto.enabled (eq .Values.clustermesh.apiserver.tls.auto.method "helm") }}
-{{- $_ := include "clustermesh-apiserver-generate-certs.helm.setup-ca" . -}}
+{{- $_ := include "cilium.ca.setup" . -}}
 {{- $cn := include "clustermesh-apiserver-generate-certs.admin-common-name" . -}}
 {{- $dns := list "localhost" }}
-{{- $cert := genSignedCert $cn nil $dns (.Values.clustermesh.apiserver.tls.auto.certValidityDuration | int) .cmca -}}
+{{- $cert := genSignedCert $cn nil $dns (.Values.clustermesh.apiserver.tls.auto.certValidityDuration | int) .commonCA -}}
 ---
 apiVersion: v1
 kind: Secret
 metadata:
   name: clustermesh-apiserver-admin-cert
   namespace: {{ .Release.Namespace }}
+  {{- with .Values.clustermesh.annotations }}
+  annotations:
+    {{- toYaml . | nindent 4 }}
+  {{- end }}
 type: kubernetes.io/tls
 data:
-  ca.crt: {{ .cmca.Cert | b64enc }}
+  ca.crt: {{ .commonCA.Cert | b64enc }}
   tls.crt: {{ $cert.Cert | b64enc }}
   tls.key: {{ $cert.Key | b64enc }}
 {{- end }}


@@ -1,12 +0,0 @@
-{{- if and (or .Values.externalWorkloads.enabled .Values.clustermesh.useAPIServer) .Values.clustermesh.apiserver.tls.auto.enabled (eq .Values.clustermesh.apiserver.tls.auto.method "helm") }}
-{{- $_ := include "clustermesh-apiserver-generate-certs.helm.setup-ca" . -}}
----
-apiVersion: v1
-kind: Secret
-metadata:
-  name: clustermesh-apiserver-ca-cert
-  namespace: {{ .Release.Namespace }}
-data:
-  ca.crt: {{ .cmca.Cert | b64enc }}
-  ca.key: {{ .cmca.Key | b64enc }}
-{{- end }}


@@ -1,16 +1,20 @@
 {{- if and .Values.externalWorkloads.enabled .Values.clustermesh.apiserver.tls.auto.enabled (eq .Values.clustermesh.apiserver.tls.auto.method "helm") }}
-{{- $_ := include "clustermesh-apiserver-generate-certs.helm.setup-ca" . -}}
+{{- $_ := include "cilium.ca.setup" . -}}
 {{- $cn := "externalworkload" }}
-{{- $cert := genSignedCert $cn nil nil (.Values.clustermesh.apiserver.tls.auto.certValidityDuration | int) .cmca -}}
+{{- $cert := genSignedCert $cn nil nil (.Values.clustermesh.apiserver.tls.auto.certValidityDuration | int) .commonCA -}}
 ---
 apiVersion: v1
 kind: Secret
 metadata:
   name: clustermesh-apiserver-client-cert
   namespace: {{ .Release.Namespace }}
+  {{- with .Values.clustermesh.annotations }}
+  annotations:
+    {{- toYaml . | nindent 4 }}
+  {{- end }}
 type: kubernetes.io/tls
 data:
-  ca.crt: {{ .cmca.Cert | b64enc }}
+  ca.crt: {{ .commonCA.Cert | b64enc }}
   tls.crt: {{ $cert.Cert | b64enc }}
   tls.key: {{ $cert.Key | b64enc }}
 {{- end }}


@@ -1,16 +1,20 @@
 {{- if and .Values.clustermesh.useAPIServer .Values.clustermesh.apiserver.tls.auto.enabled (eq .Values.clustermesh.apiserver.tls.auto.method "helm") }}
-{{- $_ := include "clustermesh-apiserver-generate-certs.helm.setup-ca" . -}}
+{{- $_ := include "cilium.ca.setup" . -}}
 {{- $cn := include "clustermesh-apiserver-generate-certs.remote-common-name" . -}}
-{{- $cert := genSignedCert $cn nil nil (.Values.clustermesh.apiserver.tls.auto.certValidityDuration | int) .cmca -}}
+{{- $cert := genSignedCert $cn nil nil (.Values.clustermesh.apiserver.tls.auto.certValidityDuration | int) .commonCA -}}
 ---
 apiVersion: v1
 kind: Secret
 metadata:
   name: clustermesh-apiserver-remote-cert
   namespace: {{ .Release.Namespace }}
+  {{- with .Values.clustermesh.annotations }}
+  annotations:
+    {{- toYaml . | nindent 4 }}
+  {{- end }}
 type: kubernetes.io/tls
 data:
-  ca.crt: {{ .cmca.Cert | b64enc }}
+  ca.crt: {{ .commonCA.Cert | b64enc }}
   tls.crt: {{ $cert.Cert | b64enc }}
   tls.key: {{ $cert.Key | b64enc }}
 {{- end }}


@@ -1,18 +1,22 @@
 {{- if and (or .Values.externalWorkloads.enabled .Values.clustermesh.useAPIServer) .Values.clustermesh.apiserver.tls.auto.enabled (eq .Values.clustermesh.apiserver.tls.auto.method "helm") }}
-{{- $_ := include "clustermesh-apiserver-generate-certs.helm.setup-ca" . -}}
+{{- $_ := include "cilium.ca.setup" . -}}
 {{- $cn := "clustermesh-apiserver.cilium.io" }}
 {{- $ip := concat (list "127.0.0.1" "::1") .Values.clustermesh.apiserver.tls.server.extraIpAddresses }}
 {{- $dns := concat (list $cn "*.mesh.cilium.io" (printf "clustermesh-apiserver.%s.svc" .Release.Namespace)) .Values.clustermesh.apiserver.tls.server.extraDnsNames }}
-{{- $cert := genSignedCert $cn $ip $dns (.Values.clustermesh.apiserver.tls.auto.certValidityDuration | int) .cmca -}}
+{{- $cert := genSignedCert $cn $ip $dns (.Values.clustermesh.apiserver.tls.auto.certValidityDuration | int) .commonCA -}}
 ---
 apiVersion: v1
 kind: Secret
 metadata:
   name: clustermesh-apiserver-server-cert
   namespace: {{ .Release.Namespace }}
+  {{- with .Values.clustermesh.annotations }}
+  annotations:
+    {{- toYaml . | nindent 4 }}
+  {{- end }}
 type: kubernetes.io/tls
 data:
-  ca.crt: {{ .cmca.Cert | b64enc }}
+  ca.crt: {{ .commonCA.Cert | b64enc }}
   tls.crt: {{ $cert.Cert | b64enc }}
   tls.key: {{ $cert.Key | b64enc }}
 {{- end }}
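After these changes, all four Secrets produced by the helm method (admin, client, remote, server) embed `ca.crt` from the shared `.commonCA` set up by `cilium.ca.setup`, so every certificate is issued by the chart-wide Cilium CA rather than a ClusterMesh-specific one. The server certificate's SANs can still be extended through the values referenced above; a sketch enabling the helm method with extra SANs (hostname and IP are illustrative):

```yaml
clustermesh:
  useAPIServer: true
  apiserver:
    tls:
      auto:
        enabled: true
        method: helm
        certValidityDuration: 1095   # days; passed to genSignedCert
      server:
        extraDnsNames:
          - clustermesh.example.org
        extraIpAddresses:
          - 192.0.2.10
```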

Some files were not shown because too many files have changed in this diff.