Mirror of https://github.com/outbackdingo/cozystack.git (synced 2026-01-28 18:18:41 +00:00)

Compare commits: `project-do` ... `cilium` (10 commits)
Commits in this range:

- 8055151d32
- 2e3555600d
- 98f488fcac
- 1c6de1ccf5
- 235a2fcf47
- 24151b09f3
- b37071f05e
- c64c6b549b
- df47d2f4a6
- c0aea5a106
.gitignore (vendored) — 2 changed lines

@@ -1 +1,3 @@
_out
.git
.idea
ADOPTERS.md — new file, 28 lines

@@ -0,0 +1,28 @@
# Adopters

Below you can find a list of organizations and users who have agreed to
tell the world that they are using Cozystack in a production environment.

The goal of this list is to inspire others to do the same and to grow
this open source community and project.

Please add your organization to this list. It takes 5 minutes of your time,
but it means a lot to us.

## Updating this list

To add your organization to this list, you can either:

- [open a pull request](https://github.com/aenix-io/cozystack/pulls) to directly update this file, or
- [edit this file](https://github.com/aenix-io/cozystack/blob/main/ADOPTERS.md) directly in GitHub

Feel free to ask in the Slack chat if you have any questions and/or require
assistance with updating this list.

## Cozystack Adopters

This list is sorted in chronological order, based on the submission date.

| Organization | Contact | Date | Description of Use |
| ------------ | ------- | ---- | ------------------ |
| [Ænix](https://aenix.io/) | @kvaps | 2024-02-14 | Ænix provides consulting services for cloud providers and uses Cozystack as the main tool for organizing managed services for them. |
CODE_OF_CONDUCT.md — new file, 3 lines

@@ -0,0 +1,3 @@
# Code of Conduct

Cozystack follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
CONTRIBUTING.md — new file, 45 lines

@@ -0,0 +1,45 @@
# Contributing to Cozystack

Welcome! We are glad that you want to contribute to our Cozystack project! 💖

As you get started, you are in the best position to give us feedback on areas of our project that we need help with, including:

* Problems found while setting up the development environment
* Gaps in our documentation
* Bugs in our GitHub actions

First, though, it is important that you read the [code of conduct](CODE_OF_CONDUCT.md).

The guidelines below are a starting point. We don't want to limit your
creativity, passion, and initiative. If you think there's a better way, please
feel free to bring it up in a GitHub discussion, or open a pull request. We're
certain there are always better ways to do things; we just need to start some
constructive dialogue!

## Ways to contribute

We welcome many types of contributions, including:

* New features
* Builds, CI/CD
* Bug fixes
* [Documentation](https://github.com/aenix-io/cozystack-website/tree/main)
* Issue triage
* Answering questions on Slack or GitHub Discussions
* Web design
* Communications / social media / blog posts
* Event participation
* Release management

## Ask for Help

The best way to reach us with a question when contributing is to drop a line in
our [Telegram channel](https://t.me/cozystack), or start a new GitHub discussion.

## Raising Issues

When raising issues, please specify the following:

- A scenario where the issue occurred (with details on how to reproduce it)
- Errors and log messages that are displayed by the involved software
- Any other detail that might be useful
MAINTAINERS.md — new file, 7 lines

@@ -0,0 +1,7 @@
# The Cozystack Maintainers

| Maintainer | GitHub Username | Company |
| ---------- | --------------- | ------- |
| Andrei Kvapil | [@kvaps](https://github.com/kvaps) | Ænix |
| George Gaál | [@gecube](https://github.com/gecube) | Ænix |
| Eduard Generalov | [@egeneralov](https://github.com/egeneralov) | Ænix |
README.md — 553 changed lines

@@ -10,7 +10,7 @@

# Cozystack

**Cozystack** is an open-source **PaaS platform** for cloud providers.
**Cozystack** is a free PaaS platform and framework for building clouds.

With Cozystack, you can transform your bunch of servers into an intelligent system with a simple REST API for spawning Kubernetes clusters, Database-as-a-Service, virtual machines, load balancers, HTTP caching services, and other services with ease.

@@ -18,548 +18,53 @@ You can use Cozystack to build your own cloud or to provide a cost-effective dev

## Use-Cases

### As a backend for a public cloud
* [**Using Cozystack to build public cloud**](https://cozystack.io/docs/use-cases/public-cloud/)
  You can use Cozystack as a backend for a public cloud

Cozystack positions itself as a kind of framework for building public clouds. The key word here is framework. In this case, it's important to understand that Cozystack is made for cloud providers, not for end users.
* [**Using Cozystack to build private cloud**](https://cozystack.io/docs/use-cases/private-cloud/)
  You can use Cozystack as a platform to build a private cloud powered by the Infrastructure-as-Code approach

Despite having a graphical interface, the current security model does not imply public user access to your management cluster.

Instead, end users get access to their own Kubernetes clusters, can order LoadBalancers and additional services from them, but they have no access to and know nothing about your management cluster powered by Cozystack.

Thus, to integrate with your billing system, it's enough to teach your system to go to the management Kubernetes cluster and place a YAML file describing the service you're interested in. Cozystack will do the rest of the work for you.
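That integration can be illustrated with a minimal, hypothetical manifest: Cozystack deploys platform services as Flux `HelmRelease` objects, so "placing a YAML file" boils down to applying something like the sketch below to the management cluster (the chart name, tenant namespace, and values are illustrative assumptions, not taken from this repository):

```yaml
# Hypothetical service order: a Flux HelmRelease in a tenant namespace.
# Chart name, namespace, and values are illustrative assumptions.
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: customer-db
  namespace: tenant-customer1
spec:
  interval: 1m
  chart:
    spec:
      chart: postgres
      sourceRef:
        kind: HelmRepository
        name: cozystack-apps
        namespace: cozy-public
  values:
    replicas: 2
```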

|
||||
|
||||
### As a private cloud for Infrastructure-as-Code
|
||||
|
||||
One of the use cases is a self-portal for users within your company, where they can order the service they're interested in or a managed database.
|
||||
|
||||
You can implement best GitOps practices, where users will launch their own Kubernetes clusters and databases for their needs with a simple commit of configuration into your infrastructure Git repository.
|
||||
|
||||
Thanks to the standardization of the approach to deploying applications, you can expand the platform's capabilities using the functionality of standard Helm charts.
|
||||
|
||||
### As a Kubernetes distribution for Bare Metal
|
||||
|
||||
We created Cozystack primarily for our own needs, having vast experience in building reliable systems on bare metal infrastructure. This experience led to the formation of a separate boxed product, which is aimed at standardizing and providing a ready-to-use tool for managing your infrastructure.
|
||||
|
||||
Currently, Cozystack already solves a huge scope of infrastructure tasks: starting from provisioning bare metal servers, having a ready monitoring system, fast and reliable storage, a network fabric with the possibility of interconnect with your infrastructure, the ability to run virtual machines, databases, and much more right out of the box.
|
||||
|
||||
All this makes Cozystack a convenient platform for delivering and launching your application on Bare Metal.
|
||||
* [**Using Cozystack as Kubernetes distribution**](https://cozystack.io/docs/use-cases/kubernetes-distribution/)
|
||||
You can use Cozystack as Kubernetes distribution for Bare Metal
|
||||
|
||||
## Screenshot
|
||||
|
||||

|
||||

|
||||
|
||||
## Core values
|
||||
## Documentation
|
||||
|
||||
### Standardization and unification
|
||||
All components of the platform are based on open source tools and technologies which are widely known in the industry.
|
||||
The documentation is located on official [cozystack.io](cozystack.io) website.
|
||||
|
||||
### Collaborate, not compete
|
||||
If a feature being developed for the platform could be useful to a upstream project, it should be contributed to upstream project, rather than being implemented within the platform.
|
||||
Read [Get Started](https://cozystack.io/docs/get-started/) section for a quick start.
|
||||
|
||||
### API-first
|
||||
Cozystack is based on Kubernetes and involves close interaction with its API. We don't aim to completely hide the all elements behind a pretty UI or any sort of customizations; instead, we provide a standard interface and teach users how to work with basic primitives. The web interface is used solely for deploying applications and quickly diving into basic concepts of platform.
|
||||
If you encounter any difficulties, start with the [troubleshooting guide](https://cozystack.io/docs/troubleshooting/), and work your way through the process that we've outlined.
|
||||
|
||||
## Quick Start
|
||||
## Versioning
|
||||
|
||||
### Prepare infrastructure
|
||||
Versioning adheres to the [Semantic Versioning](http://semver.org/) principles.
|
||||
A full list of the available releases is available in the GitHub repository's [Release](https://github.com/aenix-io/cozystack/releases) section.
|
||||
|
||||
## Contributions
|
||||
|
||||

|
||||
Contributions are highly appreciated and very welcomed!
|
||||
|
||||
You need 3 physical servers or VMs with nested virtualisation:
|
||||
In case of bugs, please, check if the issue has been already opened by checking the [GitHub Issues](https://github.com/aenix-io/cozystack/issues) section.
|
||||
In case it isn't, you can open a new one: a detailed report will help us to replicate it, assess it, and work on a fix.
|
||||
|
||||
```
|
||||
CPU: 4 cores
|
||||
CPU model: host
|
||||
RAM: 8-16 GB
|
||||
HDD1: 32 GB
|
||||
HDD2: 100GB (raw)
|
||||
```
|
||||
You can express your intention in working on the fix on your own.
|
||||
Commits are used to generate the changelog, and their author will be referenced in it.
|
||||
|
||||
And one management VM or physical server connected to the same network.
|
||||
Any Linux system installed on it (eg. Ubuntu should be enough)
|
||||
In case of **Feature Requests** please use the [Discussion's Feature Request section](https://github.com/aenix-io/cozystack/discussions/categories/feature-requests).
|
||||
|
||||
**Note:** The VM should support `x86-64-v2` architecture, the most probably you can achieve this by setting cpu model to `host`
|
||||
## License
|
||||
|
||||
#### Install dependencies:
|
||||
Cozystack is licensed under Apache 2.0.
|
||||
The code is provided as-is with no warranties.
|
||||
|
||||
- `docker`
|
||||
- `talosctl`
|
||||
- `dialog`
|
||||
- `nmap`
|
||||
- `make`
|
||||
- `yq`
|
||||
- `kubectl`
|
||||
- `helm`
|
||||
## Commercial Support
|
||||
|
||||
### Netboot server
|
||||
[**Ænix**](https://aenix.io) offers enterprise-grade support, available 24/7.
|
||||
|
||||
Start matchbox with prebuilt Talos image for Cozystack:
|
||||
We provide all types of assistance, including consultations, development of missing features, design, assistance with installation, and integration.
|
||||
|
||||
```bash
|
||||
sudo docker run --name=matchbox -d --net=host ghcr.io/aenix-io/cozystack/matchbox:v0.0.2 \
|
||||
-address=:8080 \
|
||||
-log-level=debug
|
||||
```
|
||||
|
||||
Start DHCP-Server:
|
||||
```bash
|
||||
sudo docker run --name=dnsmasq -d --cap-add=NET_ADMIN --net=host quay.io/poseidon/dnsmasq \
|
||||
-d -q -p0 \
|
||||
--dhcp-range=192.168.100.3,192.168.100.254 \
|
||||
--dhcp-option=option:router,192.168.100.1 \
|
||||
--enable-tftp \
|
||||
--tftp-root=/var/lib/tftpboot \
|
||||
--dhcp-match=set:bios,option:client-arch,0 \
|
||||
--dhcp-boot=tag:bios,undionly.kpxe \
|
||||
--dhcp-match=set:efi32,option:client-arch,6 \
|
||||
--dhcp-boot=tag:efi32,ipxe.efi \
|
||||
--dhcp-match=set:efibc,option:client-arch,7 \
|
||||
--dhcp-boot=tag:efibc,ipxe.efi \
|
||||
--dhcp-match=set:efi64,option:client-arch,9 \
|
||||
--dhcp-boot=tag:efi64,ipxe.efi \
|
||||
--dhcp-userclass=set:ipxe,iPXE \
|
||||
--dhcp-boot=tag:ipxe,http://192.168.100.254:8080/boot.ipxe \
|
||||
--log-queries \
|
||||
--log-dhcp
|
||||
```
|
||||
|
||||
Where:
|
||||
- `192.168.100.3,192.168.100.254` range to allocate IPs from
|
||||
- `192.168.100.1` your gateway
|
||||
- `192.168.100.254` is address of your management server
|
||||
|
||||
Check status of containers:
|
||||
|
||||
```
|
||||
docker ps
|
||||
```
|
||||
|
||||
example output:
|
||||
|
||||
```console
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
|
||||
22044f26f74d quay.io/poseidon/dnsmasq "/usr/sbin/dnsmasq -…" 6 seconds ago Up 5 seconds dnsmasq
|
||||
231ad81ff9e0 ghcr.io/aenix-io/cozystack/matchbox:v0.0.2 "/matchbox -address=…" 58 seconds ago Up 57 seconds matchbox
|
||||
```
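To make sure the netboot endpoint is actually serving boot files, you can fetch the iPXE script that the DHCP options point clients to (a simple smoke test using the management-server address from the example above):

```bash
curl http://192.168.100.254:8080/boot.ipxe
```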

### Bootstrap cluster

Write configuration for Cozystack:

```yaml
cat > patch.yaml <<\EOT
machine:
  kubelet:
    nodeIP:
      validSubnets:
      - 192.168.100.0/24
  kernel:
    modules:
    - name: openvswitch
    - name: drbd
      parameters:
      - usermode_helper=disabled
    - name: zfs
  install:
    image: ghcr.io/aenix-io/cozystack/talos:v1.6.4
  files:
  - content: |
      [plugins]
        [plugins."io.containerd.grpc.v1.cri"]
          device_ownership_from_security_context = true
    path: /etc/cri/conf.d/20-customization.part
    op: create

cluster:
  network:
    cni:
      name: none
    podSubnets:
    - 10.244.0.0/16
    serviceSubnets:
    - 10.96.0.0/16
EOT

cat > patch-controlplane.yaml <<\EOT
cluster:
  allowSchedulingOnControlPlanes: true
  controllerManager:
    extraArgs:
      bind-address: 0.0.0.0
  scheduler:
    extraArgs:
      bind-address: 0.0.0.0
  apiServer:
    certSANs:
    - 127.0.0.1
  proxy:
    disabled: true
  discovery:
    enabled: false
  etcd:
    advertisedSubnets:
    - 192.168.100.0/24
EOT
```

Run [talos-bootstrap](https://github.com/aenix-io/talos-bootstrap/) to deploy the cluster:

```bash
talos-bootstrap install
```

Save the admin kubeconfig to access your Kubernetes cluster:
```bash
cp -i kubeconfig ~/.kube/config
```
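If you prefer not to touch `~/.kube/config`, pointing `KUBECONFIG` at the generated file works just as well (a minimal alternative, assuming the `kubeconfig` file sits in the current directory):

```bash
# Use the freshly generated kubeconfig for this shell session only
export KUBECONFIG="$PWD/kubeconfig"
```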

Check the connection:
```bash
kubectl get ns
```

example output:
```console
NAME STATUS AGE
default Active 7m56s
kube-node-lease Active 7m56s
kube-public Active 7m56s
kube-system Active 7m56s
```

**Note:** All nodes should currently show as "Not Ready"; don't worry about that, this is because you disabled the default CNI plugin in the previous step. Cozystack will install its own CNI plugin in the next step.
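To see this for yourself before moving on, list the nodes; they are expected to report `NotReady` until the CNI is installed (node names and ages will differ in your environment):

```bash
kubectl get nodes
```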

### Install Cozystack

Write a config for Cozystack:

**Note:** please make sure that you use the same settings specified in the `patch.yaml` and `patch-controlplane.yaml` files.

```yaml
cat > cozystack-config.yaml <<\EOT
apiVersion: v1
kind: ConfigMap
metadata:
  name: cozystack
  namespace: cozy-system
data:
  cluster-name: "cozystack"
  ipv4-pod-cidr: "10.244.0.0/16"
  ipv4-pod-gateway: "10.244.0.1"
  ipv4-svc-cidr: "10.96.0.0/16"
  ipv4-join-cidr: "100.64.0.0/16"
EOT
```

Create the namespace and install the Cozystack system components:

```bash
kubectl create ns cozy-system
kubectl apply -f cozystack-config.yaml
kubectl apply -f manifests/cozystack-installer.yaml
```

(Optional) You can follow the logs of the installer:
```bash
kubectl logs -n cozy-system deploy/cozystack -f
```

Wait for a while, then check the status of the installation:
```bash
kubectl get hr -A
```

Wait until all releases reach the `Ready` state:
```console
NAMESPACE NAME AGE READY STATUS
cozy-cert-manager cert-manager 4m1s True Release reconciliation succeeded
cozy-cert-manager cert-manager-issuers 4m1s True Release reconciliation succeeded
cozy-cilium cilium 4m1s True Release reconciliation succeeded
cozy-cluster-api capi-operator 4m1s True Release reconciliation succeeded
cozy-cluster-api capi-providers 4m1s True Release reconciliation succeeded
cozy-dashboard dashboard 4m1s True Release reconciliation succeeded
cozy-fluxcd cozy-fluxcd 4m1s True Release reconciliation succeeded
cozy-grafana-operator grafana-operator 4m1s True Release reconciliation succeeded
cozy-kamaji kamaji 4m1s True Release reconciliation succeeded
cozy-kubeovn kubeovn 4m1s True Release reconciliation succeeded
cozy-kubevirt-cdi kubevirt-cdi 4m1s True Release reconciliation succeeded
cozy-kubevirt-cdi kubevirt-cdi-operator 4m1s True Release reconciliation succeeded
cozy-kubevirt kubevirt 4m1s True Release reconciliation succeeded
cozy-kubevirt kubevirt-operator 4m1s True Release reconciliation succeeded
cozy-linstor linstor 4m1s True Release reconciliation succeeded
cozy-linstor piraeus-operator 4m1s True Release reconciliation succeeded
cozy-mariadb-operator mariadb-operator 4m1s True Release reconciliation succeeded
cozy-metallb metallb 4m1s True Release reconciliation succeeded
cozy-monitoring monitoring 4m1s True Release reconciliation succeeded
cozy-postgres-operator postgres-operator 4m1s True Release reconciliation succeeded
cozy-rabbitmq-operator rabbitmq-operator 4m1s True Release reconciliation succeeded
cozy-redis-operator redis-operator 4m1s True Release reconciliation succeeded
cozy-telepresence telepresence 4m1s True Release reconciliation succeeded
cozy-victoria-metrics-operator victoria-metrics-operator 4m1s True Release reconciliation succeeded
tenant-root tenant-root 4m1s True Release reconciliation succeeded
```
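Instead of re-running `kubectl get hr -A` by hand, you can block until every release reconciles (a convenience sketch; the timeout is an arbitrary assumption):

```bash
# Wait for all HelmReleases in all namespaces to report Ready
kubectl wait helmrelease --all --all-namespaces --for=condition=Ready --timeout=30m
```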

#### Configure Storage

Set up an alias to access LINSTOR:
```bash
alias linstor='kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor'
```

List your nodes:
```bash
linstor node list
```

example output:

```console
+-------------------------------------------------------+
| Node | NodeType | Addresses | State |
|=======================================================|
| srv1 | SATELLITE | 192.168.100.11:3367 (SSL) | Online |
| srv2 | SATELLITE | 192.168.100.12:3367 (SSL) | Online |
| srv3 | SATELLITE | 192.168.100.13:3367 (SSL) | Online |
+-------------------------------------------------------+
```

List empty devices:

```bash
linstor physical-storage list
```

example output:
```console
+--------------------------------------------+
| Size | Rotational | Nodes |
|============================================|
| 107374182400 | True | srv3[/dev/sdb] |
| | | srv1[/dev/sdb] |
| | | srv2[/dev/sdb] |
+--------------------------------------------+
```

Create storage pools:

```bash
linstor ps cdp lvm srv1 /dev/sdb --pool-name data --storage-pool data
linstor ps cdp lvm srv2 /dev/sdb --pool-name data --storage-pool data
linstor ps cdp lvm srv3 /dev/sdb --pool-name data --storage-pool data
```

List storage pools:

```bash
linstor sp l
```

example output:

```console
+-------------------------------------------------------------------------------------------------------------------------------------+
| StoragePool | Node | Driver | PoolName | FreeCapacity | TotalCapacity | CanSnapshots | State | SharedName |
|=====================================================================================================================================|
| DfltDisklessStorPool | srv1 | DISKLESS | | | | False | Ok | srv1;DfltDisklessStorPool |
| DfltDisklessStorPool | srv2 | DISKLESS | | | | False | Ok | srv2;DfltDisklessStorPool |
| DfltDisklessStorPool | srv3 | DISKLESS | | | | False | Ok | srv3;DfltDisklessStorPool |
| data | srv1 | LVM | data | 100.00 GiB | 100.00 GiB | False | Ok | srv1;data |
| data | srv2 | LVM | data | 100.00 GiB | 100.00 GiB | False | Ok | srv2;data |
| data | srv3 | LVM | data | 100.00 GiB | 100.00 GiB | False | Ok | srv3;data |
+-------------------------------------------------------------------------------------------------------------------------------------+
```

Create the default storage classes:
```yaml
kubectl create -f- <<EOT
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: linstor.csi.linbit.com
parameters:
  linstor.csi.linbit.com/storagePool: "data"
  linstor.csi.linbit.com/layerList: "storage"
  linstor.csi.linbit.com/allowRemoteVolumeAccess: "false"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: replicated
provisioner: linstor.csi.linbit.com
parameters:
  linstor.csi.linbit.com/storagePool: "data"
  linstor.csi.linbit.com/autoPlace: "3"
  linstor.csi.linbit.com/layerList: "drbd storage"
  linstor.csi.linbit.com/allowRemoteVolumeAccess: "true"
  property.linstor.csi.linbit.com/DrbdOptions/auto-quorum: suspend-io
  property.linstor.csi.linbit.com/DrbdOptions/Resource/on-no-data-accessible: suspend-io
  property.linstor.csi.linbit.com/DrbdOptions/Resource/on-suspended-primary-outdated: force-secondary
  property.linstor.csi.linbit.com/DrbdOptions/Net/rr-conflict: retry-connect
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOT
```

List the storage classes:

```bash
kubectl get storageclasses
```

example output:
```console
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local (default) linstor.csi.linbit.com Delete WaitForFirstConsumer true 11m
replicated linstor.csi.linbit.com Delete WaitForFirstConsumer true 11m
```
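To confirm that provisioning works end to end, you can create a throwaway PVC together with a pod that mounts it (an illustrative check, not part of the upstream steps; names are arbitrary). Because both classes use `WaitForFirstConsumer`, the PVC stays `Pending` until the pod is scheduled:

```yaml
kubectl create -f- <<EOT
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storage-smoke-test
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: replicated
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: storage-smoke-test
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: storage-smoke-test
EOT
```

Once the pod is `Running` and the PVC is `Bound`, clean up with `kubectl delete pod/storage-smoke-test pvc/storage-smoke-test`.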

#### Configure Networking interconnection

To access your services, select a range of unused IPs, e.g. `192.168.100.200-192.168.100.250`.

**Note:** These IPs should be from the same network as the nodes, or all the necessary routes to them must be in place.

Configure MetalLB to use and announce this range:
```yaml
kubectl create -f- <<EOT
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: cozystack
  namespace: cozy-metallb
spec:
  ipAddressPools:
  - cozystack
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: cozystack
  namespace: cozy-metallb
spec:
  addresses:
  - 192.168.100.200-192.168.100.250
  autoAssign: true
  avoidBuggyIPs: false
EOT
```
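A quick way to verify that MetalLB actually hands out addresses from this pool is to expose a throwaway Service of type `LoadBalancer` (illustrative only; the deployment and names are arbitrary):

```bash
kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --port=80 --type=LoadBalancer
kubectl get svc lb-test        # EXTERNAL-IP should be allocated from 192.168.100.200-250
kubectl delete svc/lb-test deployment/lb-test   # clean up
```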

#### Setup basic applications

Get the token from `tenant-root`:
```bash
kubectl get secret -n tenant-root tenant-root -o go-template='{{ printf "%s\n" (index .data "token" | base64decode) }}'
```

Enable port forwarding to cozy-dashboard:
```bash
kubectl port-forward -n cozy-dashboard svc/dashboard 8080:80
```

Open: http://localhost:8080/

- Select `tenant-root`
- Click the `Upgrade` button
- Write a domain into `host` which you wish to use as the parent domain for all deployed applications

  **Note:**
  - if you have no domain yet, you can use `192.168.100.200.nip.io`, where `192.168.100.200` is the first IP address in your network address range.
  - alternatively, you can leave the default value; however, you'll need to modify your `/etc/hosts` every time you want to access a specific application.
- Set `etcd`, `monitoring` and `ingress` to the enabled position
- Click Deploy

Check that persistent volumes are provisioned:

```bash
kubectl get pvc -n tenant-root
```

example output:
```console
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
data-etcd-0 Bound pvc-4cbd29cc-a29f-453d-b412-451647cd04bf 10Gi RWO local <unset> 2m10s
data-etcd-1 Bound pvc-1579f95a-a69d-4a26-bcc2-b15ccdbede0d 10Gi RWO local <unset> 115s
data-etcd-2 Bound pvc-907009e5-88bf-4d18-91e7-b56b0dbfb97e 10Gi RWO local <unset> 91s
grafana-db-1 Bound pvc-7b3f4e23-228a-46fd-b820-d033ef4679af 10Gi RWO local <unset> 2m41s
grafana-db-2 Bound pvc-ac9b72a4-f40e-47e8-ad24-f50d843b55e4 10Gi RWO local <unset> 113s
vmselect-cachedir-vmselect-longterm-0 Bound pvc-622fa398-2104-459f-8744-565eee0a13f1 2Gi RWO local <unset> 2m21s
vmselect-cachedir-vmselect-longterm-1 Bound pvc-fc9349f5-02b2-4e25-8bef-6cbc5cc6d690 2Gi RWO local <unset> 2m21s
vmselect-cachedir-vmselect-shortterm-0 Bound pvc-7acc7ff6-6b9b-4676-bd1f-6867ea7165e2 2Gi RWO local <unset> 2m41s
vmselect-cachedir-vmselect-shortterm-1 Bound pvc-e514f12b-f1f6-40ff-9838-a6bda3580eb7 2Gi RWO local <unset> 2m40s
vmstorage-db-vmstorage-longterm-0 Bound pvc-e8ac7fc3-df0d-4692-aebf-9f66f72f9fef 10Gi RWO local <unset> 2m21s
vmstorage-db-vmstorage-longterm-1 Bound pvc-68b5ceaf-3ed1-4e5a-9568-6b95911c7c3a 10Gi RWO local <unset> 2m21s
vmstorage-db-vmstorage-shortterm-0 Bound pvc-cee3a2a4-5680-4880-bc2a-85c14dba9380 10Gi RWO local <unset> 2m41s
vmstorage-db-vmstorage-shortterm-1 Bound pvc-d55c235d-cada-4c4a-8299-e5fc3f161789 10Gi RWO local <unset> 2m41s
```

Check that all pods are running:

```bash
kubectl get pod -n tenant-root
```

example output:
```console
NAME READY STATUS RESTARTS AGE
etcd-0 1/1 Running 0 2m1s
etcd-1 1/1 Running 0 106s
etcd-2 1/1 Running 0 82s
grafana-db-1 1/1 Running 0 119s
grafana-db-2 1/1 Running 0 13s
grafana-deployment-74b5656d6-5dcvn 1/1 Running 0 90s
grafana-deployment-74b5656d6-q5589 1/1 Running 1 (105s ago) 111s
root-ingress-controller-6ccf55bc6d-pg79l 2/2 Running 0 2m27s
root-ingress-controller-6ccf55bc6d-xbs6x 2/2 Running 0 2m29s
root-ingress-defaultbackend-686bcbbd6c-5zbvp 1/1 Running 0 2m29s
vmalert-vmalert-644986d5c-7hvwk 2/2 Running 0 2m30s
vmalertmanager-alertmanager-0 2/2 Running 0 2m32s
vmalertmanager-alertmanager-1 2/2 Running 0 2m31s
vminsert-longterm-75789465f-hc6cz 1/1 Running 0 2m10s
vminsert-longterm-75789465f-m2v4t 1/1 Running 0 2m12s
vminsert-shortterm-78456f8fd9-wlwww 1/1 Running 0 2m29s
vminsert-shortterm-78456f8fd9-xg7cw 1/1 Running 0 2m28s
vmselect-longterm-0 1/1 Running 0 2m12s
vmselect-longterm-1 1/1 Running 0 2m12s
vmselect-shortterm-0 1/1 Running 0 2m31s
vmselect-shortterm-1 1/1 Running 0 2m30s
vmstorage-longterm-0 1/1 Running 0 2m12s
vmstorage-longterm-1 1/1 Running 0 2m12s
vmstorage-shortterm-0 1/1 Running 0 2m32s
vmstorage-shortterm-1 1/1 Running 0 2m31s
```

Now you can get the public IP of the ingress controller:
```
kubectl get svc -n tenant-root root-ingress-controller
```

example output:
```console
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
root-ingress-controller LoadBalancer 10.96.16.141 192.168.100.200 80:31632/TCP,443:30113/TCP 3m33s
```
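As a smoke test, you can query that external IP directly; until applications are exposed you will typically get a `404` from the default backend, which still confirms that the ingress controller answers (the IP below is the one from the example output):

```bash
curl -k -o /dev/null -w '%{http_code}\n' https://192.168.100.200/
```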

Use `grafana.example.org` (pointing at 192.168.100.200) to access the system monitoring, where `example.org` is the domain you specified for `tenant-root`:

- login: `admin`
- password:

```bash
kubectl get secret -n tenant-root grafana-admin-password -o go-template='{{ printf "%s\n" (index .data "password" | base64decode) }}'
```

[Contact us](https://aenix.io/contact/)

@@ -12,9 +12,6 @@ talos_version=$(awk '/^version:/ {print $2}' packages/core/installer/images/talo

set -x

sed -i "s|\(ghcr.io/aenix-io/cozystack/matchbox:\)v[^ ]\+|\1${version}|g" README.md
sed -i "s|\(ghcr.io/aenix-io/cozystack/talos:\)v[^ ]\+|\1${talos_version}|g" README.md

sed -i "/^TAG / s|=.*|= ${version}|" \
  packages/apps/http-cache/Makefile \
  packages/apps/kubernetes/Makefile \
@@ -61,8 +61,6 @@ spec:
  selector:
    matchLabels:
      app: cozystack
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
@@ -72,14 +70,26 @@ spec:
      serviceAccountName: cozystack
      containers:
      - name: cozystack
        image: "ghcr.io/aenix-io/cozystack/installer:v0.0.2"
        image: "ghcr.io/aenix-io/cozystack/cozystack:v0.1.0"
        env:
        - name: KUBERNETES_SERVICE_HOST
          value: localhost
        - name: KUBERNETES_SERVICE_PORT
          value: "7445"
        - name: K8S_AWAIT_ELECTION_ENABLED
          value: "1"
        - name: K8S_AWAIT_ELECTION_NAME
          value: cozystack
        - name: K8S_AWAIT_ELECTION_LOCK_NAME
          value: cozystack
        - name: K8S_AWAIT_ELECTION_LOCK_NAMESPACE
          value: cozy-system
        - name: K8S_AWAIT_ELECTION_IDENTITY
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      - name: darkhttpd
        image: "ghcr.io/aenix-io/cozystack/installer:v0.0.2"
        image: "ghcr.io/aenix-io/cozystack/cozystack:v0.1.0"
        command:
        - /usr/bin/darkhttpd
        - /cozystack/assets
@@ -2,7 +2,7 @@ PUSH := 1
LOAD := 0
REGISTRY := ghcr.io/aenix-io/cozystack
NGINX_CACHE_TAG = v0.1.0
TAG := v0.0.2
TAG := v0.1.0

image: image-nginx

@@ -1,14 +1,4 @@
{
  "containerimage.config.digest": "sha256:f4ad0559a74749de0d11b1835823bf9c95332962b0909450251d849113f22c19",
  "containerimage.descriptor": {
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "digest": "sha256:3a0e8d791e0ccf681711766387ea9278e7d39f1956509cead2f72aa0001797ef",
    "size": 1093,
    "platform": {
      "architecture": "amd64",
      "os": "linux"
    }
  },
  "containerimage.digest": "sha256:3a0e8d791e0ccf681711766387ea9278e7d39f1956509cead2f72aa0001797ef",
  "image.name": "ghcr.io/aenix-io/cozystack/nginx-cache:v0.1.0,ghcr.io/aenix-io/cozystack/nginx-cache:v0.1.0-v0.0.2"
  "containerimage.config.digest": "sha256:318fd8d0d6f6127387042f6ad150e87023d1961c7c5059dd5324188a54b0ab4e",
  "containerimage.digest": "sha256:e3cf145238e6e45f7f13b9acaea445c94ff29f76a34ba9fa50828401a5a3cc68"
}
@@ -1,7 +1,7 @@
PUSH := 1
LOAD := 0
REGISTRY := ghcr.io/aenix-io/cozystack
TAG := v0.0.2
TAG := v0.1.0
UBUNTU_CONTAINER_DISK_TAG = v1.29.1

image: image-ubuntu-container-disk

@@ -1,4 +1,4 @@
{
  "containerimage.config.digest": "sha256:e982cfa2320d3139ed311ae44bcc5ea18db7e4e76d2746e0af04c516288ff0f1",
  "containerimage.digest": "sha256:34f6aba5b5a2afbb46bbb891ef4ddc0855c2ffe4f9e5a99e8e553286ddd2c070"
  "containerimage.config.digest": "sha256:ee8968be63c7c45621ec45f3687211e0875acb24e8d9784e8d2ebcbf46a3538c",
  "containerimage.digest": "sha256:16c3c07e74212585786dc1f1ae31d3ab90a575014806193e8e37d1d7751cb084"
}
@@ -3,7 +3,7 @@ NAME=installer
PUSH := 1
LOAD := 0
REGISTRY := ghcr.io/aenix-io/cozystack
TAG := v0.0.2
TAG := v0.1.0
TALOS_VERSION=$(shell awk '/^version:/ {print $$2}' images/talos/profiles/installer.yaml)

show:
@@ -18,18 +18,18 @@ diff:
update:
	hack/gen-profiles.sh

image: image-installer image-talos image-matchbox
image: image-cozystack image-talos image-matchbox

image-installer:
	docker buildx build -f images/installer/Dockerfile ../../.. \
image-cozystack:
	docker buildx build -f images/cozystack/Dockerfile ../../.. \
		--provenance false \
		--tag $(REGISTRY)/installer:$(TAG) \
		--cache-from type=registry,ref=$(REGISTRY)/installer:$(TAG) \
		--tag $(REGISTRY)/cozystack:$(TAG) \
		--cache-from type=registry,ref=$(REGISTRY)/cozystack:$(TAG) \
		--cache-to type=inline \
		--metadata-file images/installer.json \
		--metadata-file images/cozystack.json \
		--push=$(PUSH) \
		--load=$(LOAD)
	echo "$(REGISTRY)/installer:$(TAG)" > images/installer.tag
	echo "$(REGISTRY)/cozystack:$(TAG)" > images/cozystack.tag

image-talos:
	test -f ../../../_out/assets/installer-amd64.tar || make talos-installer
@@ -43,14 +43,18 @@ image-matchbox:
	docker buildx build -f images/matchbox/Dockerfile ../../.. \
		--provenance false \
		--tag $(REGISTRY)/matchbox:$(TAG) \
		--cache-from type=registry,ref=$(REGISTRY)/matchbox:$(TAG) \
		--tag $(REGISTRY)/matchbox:$(TALOS_VERSION)-$(TAG) \
		--cache-from type=registry,ref=$(REGISTRY)/matchbox:$(TALOS_VERSION) \
		--cache-to type=inline \
		--metadata-file images/matchbox.json \
		--push=$(PUSH) \
		--load=$(LOAD)
	echo "$(REGISTRY)/matchbox:$(TAG)" > images/matchbox.tag
	echo "$(REGISTRY)/matchbox:$(TALOS_VERSION)" > images/matchbox.tag

assets: talos-iso

talos-initramfs talos-kernel talos-installer talos-iso:
	cat images/talos/profiles/$(subst talos-,,$@).yaml | docker run --rm -i -v $${PWD}/../../../_out/assets:/out -v /dev:/dev --privileged "ghcr.io/siderolabs/imager:$(TALOS_VERSION)" -
	mkdir -p ../../../_out/assets
	cat images/talos/profiles/$(subst talos-,,$@).yaml | \
		docker run --rm -i -v /dev:/dev --privileged "ghcr.io/siderolabs/imager:$(TALOS_VERSION)" --tar-to-stdout - | \
		tar -C ../../../_out/assets -xzf-
packages/core/installer/images/cozystack.json — new file, 4 lines

@@ -0,0 +1,4 @@
{
  "containerimage.config.digest": "sha256:ec8a4983a663f06a1503507482667a206e83e0d8d3663dff60ced9221855d6b0",
  "containerimage.digest": "sha256:abb7b2fbc1f143c922f2a35afc4423a74b2b63c0bddfe620750613ed835aa861"
}

packages/core/installer/images/cozystack.tag — new file, 1 line

@@ -0,0 +1 @@
ghcr.io/aenix-io/cozystack/cozystack:v0.1.0
@@ -1,3 +1,15 @@
FROM golang:alpine3.19 as k8s-await-election-builder

ARG K8S_AWAIT_ELECTION_GITREPO=https://github.com/LINBIT/k8s-await-election
ARG K8S_AWAIT_ELECTION_VERSION=0.4.1

RUN apk add --no-cache git make
RUN git clone ${K8S_AWAIT_ELECTION_GITREPO} /usr/local/go/k8s-await-election/ \
  && cd /usr/local/go/k8s-await-election \
  && git reset --hard v${K8S_AWAIT_ELECTION_VERSION} \
  && make \
  && mv ./out/k8s-await-election-amd64 /k8s-await-election

FROM alpine:3.19 AS builder

RUN apk add --no-cache make git
@@ -18,7 +30,8 @@ COPY scripts /cozystack/scripts
COPY --from=builder /src/packages/core /cozystack/packages/core
COPY --from=builder /src/packages/system /cozystack/packages/system
COPY --from=builder /src/_out/repos /cozystack/assets/repos
COPY --from=k8s-await-election-builder /k8s-await-election /usr/bin/k8s-await-election
COPY dashboards /cozystack/assets/dashboards

WORKDIR /cozystack
ENTRYPOINT [ "/cozystack/scripts/installer.sh" ]
ENTRYPOINT ["/usr/bin/k8s-await-election", "/cozystack/scripts/installer.sh" ]
@@ -1,14 +0,0 @@
{
  "containerimage.config.digest": "sha256:5c7f51a9cbc945c13d52157035eba6ba4b6f3b68b76280f8e64b4f6ba239db1a",
  "containerimage.descriptor": {
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "digest": "sha256:7cda3480faf0539ed4a3dd252aacc7a997645d3a390ece377c36cf55f9e57e11",
    "size": 2074,
    "platform": {
      "architecture": "amd64",
      "os": "linux"
    }
  },
  "containerimage.digest": "sha256:7cda3480faf0539ed4a3dd252aacc7a997645d3a390ece377c36cf55f9e57e11",
  "image.name": "ghcr.io/aenix-io/cozystack/installer:v0.0.2"
}

@@ -1 +0,0 @@
ghcr.io/aenix-io/cozystack/installer:v0.0.2
@@ -1,14 +1,4 @@
{
  "containerimage.config.digest": "sha256:cb8cb211017e51f6eb55604287c45cbf6ed8add5df482aaebff3d493a11b5a76",
  "containerimage.descriptor": {
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "digest": "sha256:3be72cdce2f4ab4886a70fb7b66e4518a1fe4ba0771319c96fa19a0d6f409602",
    "size": 1488,
    "platform": {
      "architecture": "amd64",
      "os": "linux"
    }
  },
  "containerimage.digest": "sha256:3be72cdce2f4ab4886a70fb7b66e4518a1fe4ba0771319c96fa19a0d6f409602",
  "image.name": "ghcr.io/aenix-io/cozystack/matchbox:v0.0.2"
  "containerimage.config.digest": "sha256:b869a6324f9c0e6d1dd48eee67cbe3842ee14efd59bdde477736ad2f90568ff7",
  "containerimage.digest": "sha256:c30b237c5fa4fbbe47e1aba56e8f99569fe865620aa1953f31fc373794123cd7"
}

@@ -1 +1 @@
ghcr.io/aenix-io/cozystack/matchbox:v0.0.2
ghcr.io/aenix-io/cozystack/matchbox:v1.6.4
@@ -41,8 +41,6 @@ spec:
  selector:
    matchLabels:
      app: cozystack
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
@@ -52,14 +50,26 @@ spec:
      serviceAccountName: cozystack
      containers:
      - name: cozystack
        image: "{{ .Files.Get "images/installer.tag" | trim }}@{{ index (.Files.Get "images/installer.json" | fromJson) "containerimage.digest" }}"
        image: "{{ .Files.Get "images/cozystack.tag" | trim }}@{{ index (.Files.Get "images/cozystack.json" | fromJson) "containerimage.digest" }}"
        env:
        - name: KUBERNETES_SERVICE_HOST
          value: localhost
        - name: KUBERNETES_SERVICE_PORT
          value: "7445"
        - name: K8S_AWAIT_ELECTION_ENABLED
          value: "1"
        - name: K8S_AWAIT_ELECTION_NAME
          value: cozystack
        - name: K8S_AWAIT_ELECTION_LOCK_NAME
          value: cozystack
        - name: K8S_AWAIT_ELECTION_LOCK_NAMESPACE
          value: cozy-system
        - name: K8S_AWAIT_ELECTION_IDENTITY
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      - name: darkhttpd
        image: "{{ .Files.Get "images/installer.tag" | trim }}@{{ index (.Files.Get "images/installer.json" | fromJson) "containerimage.digest" }}"
        image: "{{ .Files.Get "images/cozystack.tag" | trim }}@{{ index (.Files.Get "images/cozystack.json" | fromJson) "containerimage.digest" }}"
        command:
        - /usr/bin/darkhttpd
        - /cozystack/assets
@@ -646,6 +646,20 @@ spec:
    namespace: cozy-cilium
  - name: kubeovn
    namespace: cozy-kubeovn
  {{- if .Capabilities.APIVersions.Has "source.toolkit.fluxcd.io/v1beta2" }}
  {{- with (lookup "source.toolkit.fluxcd.io/v1beta2" "HelmRepository" "cozy-public" "").items }}
  values:
    kubeapps:
      redis:
        master:
          podAnnotations:
          {{- range $index, $repo := . }}
          {{- with (($repo.status).artifact).revision }}
            repository.cozystack.io/{{ $repo.metadata.name }}: {{ quote . }}
          {{- end }}
          {{- end }}
  {{- end }}
  {{- end }}
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
@@ -1,131 +1,88 @@
annotations:
  artifacthub.io/crds: |
    - kind: CiliumNetworkPolicy
      version: v2
      name: ciliumnetworkpolicies.cilium.io
      displayName: Cilium Network Policy
      description: |
        Cilium Network Policies provide additional functionality beyond what
        is provided by standard Kubernetes NetworkPolicy such as the ability
        to allow traffic based on FQDNs, or to filter at Layer 7.
    - kind: CiliumClusterwideNetworkPolicy
      version: v2
      name: ciliumclusterwidenetworkpolicies.cilium.io
      displayName: Cilium Clusterwide Network Policy
      description: |
        Cilium Clusterwide Network Policies support configuring network traffic
        policiies across the entire cluster, including applying node firewalls.
    - kind: CiliumExternalWorkload
      version: v2
      name: ciliumexternalworkloads.cilium.io
      displayName: Cilium External Workload
      description: |
        Cilium External Workload supports configuring the ability for external
        non-Kubernetes workloads to join the cluster.
    - kind: CiliumLocalRedirectPolicy
      version: v2
      name: ciliumlocalredirectpolicies.cilium.io
      displayName: Cilium Local Redirect Policy
      description: |
        Cilium Local Redirect Policy allows local redirects to be configured
        within a node to support use cases like Node-Local DNS or KIAM.
    - kind: CiliumNode
      version: v2
      name: ciliumnodes.cilium.io
      displayName: Cilium Node
      description: |
        Cilium Node represents a node managed by Cilium. It contains a
        specification to control various node specific configuration aspects
        and a status section to represent the status of the node.
    - kind: CiliumIdentity
      version: v2
      name: ciliumidentities.cilium.io
      displayName: Cilium Identity
      description: |
        Cilium Identity allows introspection into security identities that
        Cilium allocates which identify sets of labels that are assigned to
        individual endpoints in the cluster.
    - kind: CiliumEndpoint
      version: v2
      name: ciliumendpoints.cilium.io
      displayName: Cilium Endpoint
      description: |
        Cilium Endpoint represents the status of individual pods or nodes in
        the cluster which are managed by Cilium, including enforcement status,
        IP addressing and whether the networking is succesfully operational.
    - kind: CiliumEndpointSlice
      version: v2alpha1
      name: ciliumendpointslices.cilium.io
      displayName: Cilium Endpoint Slice
      description: |
        Cilium Endpoint Slice represents the status of groups of pods or nodes
        in the cluster which are managed by Cilium, including enforcement status,
        IP addressing and whether the networking is succesfully operational.
    - kind: CiliumEgressGatewayPolicy
      version: v2
      name: ciliumegressgatewaypolicies.cilium.io
      displayName: Cilium Egress Gateway Policy
      description: |
        Cilium Egress Gateway Policy provides control over the way that traffic
        leaves the cluster and which source addresses to use for that traffic.
    - kind: CiliumClusterwideEnvoyConfig
      version: v2
      name: ciliumclusterwideenvoyconfigs.cilium.io
      displayName: Cilium Clusterwide Envoy Config
      description: |
        Cilium Clusterwide Envoy Config specifies Envoy resources and K8s service mappings
        to be provisioned into Cilium host proxy instances in cluster context.
    - kind: CiliumEnvoyConfig
      version: v2
      name: ciliumenvoyconfigs.cilium.io
      displayName: Cilium Envoy Config
      description: |
        Cilium Envoy Config specifies Envoy resources and K8s service mappings
        to be provisioned into Cilium host proxy instances in namespace context.
    - kind: CiliumBGPPeeringPolicy
      version: v2alpha1
      name: ciliumbgppeeringpolicies.cilium.io
      displayName: Cilium BGP Peering Policy
      description: |
        Cilium BGP Peering Policy instructs Cilium to create specific BGP peering
        configurations.
    - kind: CiliumLoadBalancerIPPool
      version: v2alpha1
      name: ciliumloadbalancerippools.cilium.io
      displayName: Cilium Load Balancer IP Pool
      description: |
        Defining a Cilium Load Balancer IP Pool instructs Cilium to assign IPs to LoadBalancer Services.
    - kind: CiliumNodeConfig
      version: v2alpha1
      name: ciliumnodeconfigs.cilium.io
      displayName: Cilium Node Configuration
      description: |
        CiliumNodeConfig is a list of configuration key-value pairs. It is applied to
        nodes indicated by a label selector.
    - kind: CiliumCIDRGroup
      version: v2alpha1
      name: ciliumcidrgroups.cilium.io
      displayName: Cilium CIDR Group
      description: |
        CiliumCIDRGroup is a list of CIDRs that can be referenced as a single entity from CiliumNetworkPolicies.
    - kind: CiliumL2AnnouncementPolicy
      version: v2alpha1
      name: ciliuml2announcementpolicies.cilium.io
      displayName: Cilium L2 Announcement Policy
      description: |
        CiliumL2AnnouncementPolicy is a policy which determines which service IPs will be announced to
        the local area network, by which nodes, and via which interfaces.
    - kind: CiliumPodIPPool
      version: v2alpha1
      name: ciliumpodippools.cilium.io
      displayName: Cilium Pod IP Pool
      description: |
        CiliumPodIPPool defines an IP pool that can be used for pooled IPAM (i.e. the multi-pool IPAM mode).
  artifacthub.io/crds: "- kind: CiliumNetworkPolicy\n version: v2\n name: ciliumnetworkpolicies.cilium.io\n
    \ displayName: Cilium Network Policy\n description: |\n Cilium Network Policies
    provide additional functionality beyond what\n is provided by standard Kubernetes
    NetworkPolicy such as the ability\n to allow traffic based on FQDNs, or to
    filter at Layer 7.\n- kind: CiliumClusterwideNetworkPolicy\n version: v2\n name:
    ciliumclusterwidenetworkpolicies.cilium.io\n displayName: Cilium Clusterwide
    Network Policy\n description: |\n Cilium Clusterwide Network Policies support
    configuring network traffic\n policiies across the entire cluster, including
    applying node firewalls.\n- kind: CiliumExternalWorkload\n version: v2\n name:
    ciliumexternalworkloads.cilium.io\n displayName: Cilium External Workload\n description:
    |\n Cilium External Workload supports configuring the ability for external\n
    \ non-Kubernetes workloads to join the cluster.\n- kind: CiliumLocalRedirectPolicy\n
    \ version: v2\n name: ciliumlocalredirectpolicies.cilium.io\n displayName: Cilium
    Local Redirect Policy\n description: |\n Cilium Local Redirect Policy allows
    local redirects to be configured\n within a node to support use cases like
    Node-Local DNS or KIAM.\n- kind: CiliumNode\n version: v2\n name: ciliumnodes.cilium.io\n
    \ displayName: Cilium Node\n description: |\n Cilium Node represents a node
    managed by Cilium. It contains a\n specification to control various node specific
    configuration aspects\n and a status section to represent the status of the
    node.\n- kind: CiliumIdentity\n version: v2\n name: ciliumidentities.cilium.io\n
    \ displayName: Cilium Identity\n description: |\n Cilium Identity allows introspection
    into security identities that\n Cilium allocates which identify sets of labels
    that are assigned to\n individual endpoints in the cluster.\n- kind: CiliumEndpoint\n
    \ version: v2\n name: ciliumendpoints.cilium.io\n displayName: Cilium Endpoint\n
    \ description: |\n Cilium Endpoint represents the status of individual pods
    or nodes in\n the cluster which are managed by Cilium, including enforcement
    status,\n IP addressing and whether the networking is successfully operational.\n-
    kind: CiliumEndpointSlice\n version: v2alpha1\n name: ciliumendpointslices.cilium.io\n
    \ displayName: Cilium Endpoint Slice\n description: |\n Cilium Endpoint Slice
    represents the status of groups of pods or nodes\n in the cluster which are
    managed by Cilium, including enforcement status,\n IP addressing and whether
    the networking is successfully operational.\n- kind: CiliumEgressGatewayPolicy\n
    \ version: v2\n name: ciliumegressgatewaypolicies.cilium.io\n displayName: Cilium
    Egress Gateway Policy\n description: |\n Cilium Egress Gateway Policy provides
    control over the way that traffic\n leaves the cluster and which source addresses
    to use for that traffic.\n- kind: CiliumClusterwideEnvoyConfig\n version: v2\n
    \ name: ciliumclusterwideenvoyconfigs.cilium.io\n displayName: Cilium Clusterwide
    Envoy Config\n description: |\n Cilium Clusterwide Envoy Config specifies
    Envoy resources and K8s service mappings\n to be provisioned into Cilium host
    proxy instances in cluster context.\n- kind: CiliumEnvoyConfig\n version: v2\n
    \ name: ciliumenvoyconfigs.cilium.io\n displayName: Cilium Envoy Config\n description:
    |\n Cilium Envoy Config specifies Envoy resources and K8s service mappings\n
    \ to be provisioned into Cilium host proxy instances in namespace context.\n-
    kind: CiliumBGPPeeringPolicy\n version: v2alpha1\n name: ciliumbgppeeringpolicies.cilium.io\n
    \ displayName: Cilium BGP Peering Policy\n description: |\n Cilium BGP Peering
    Policy instructs Cilium to create specific BGP peering\n configurations.\n-
    kind: CiliumBGPClusterConfig\n version: v2alpha1\n name: ciliumbgpclusterconfigs.cilium.io\n
    \ displayName: Cilium BGP Cluster Config\n description: |\n Cilium BGP Cluster
    Config instructs Cilium operator to create specific BGP cluster\n configurations.\n-
    kind: CiliumBGPPeerConfig\n version: v2alpha1\n name: ciliumbgppeerconfigs.cilium.io\n
    \ displayName: Cilium BGP Peer Config\n description: |\n CiliumBGPPeerConfig
    is a common set of BGP peer configurations. It can be referenced \n by multiple
    peers from CiliumBGPClusterConfig.\n- kind: CiliumBGPAdvertisement\n version:
    v2alpha1\n name: ciliumbgpadvertisements.cilium.io\n displayName: Cilium BGP
    Advertisement\n description: |\n CiliumBGPAdvertisement is used to define
    source of BGP advertisement as well as BGP attributes \n to be advertised with
    those prefixes.\n- kind: CiliumBGPNodeConfig\n version: v2alpha1\n name: ciliumbgpnodeconfigs.cilium.io\n
    \ displayName: Cilium BGP Node Config\n description: |\n CiliumBGPNodeConfig
    is read only node specific BGP configuration. It is constructed by Cilium operator.\n
    \ It will also contain node local BGP state information.\n- kind: CiliumBGPNodeConfigOverride\n
    \ version: v2alpha1\n name: ciliumbgpnodeconfigoverrides.cilium.io\n displayName:
    Cilium BGP Node Config Override\n description: |\n CiliumBGPNodeConfigOverride
    can be used to override node specific BGP configuration.\n- kind: CiliumLoadBalancerIPPool\n
    \ version: v2alpha1\n name: ciliumloadbalancerippools.cilium.io\n displayName:
    Cilium Load Balancer IP Pool\n description: |\n Defining a Cilium Load Balancer
    IP Pool instructs Cilium to assign IPs to LoadBalancer Services.\n- kind: CiliumNodeConfig\n
    \ version: v2alpha1\n name: ciliumnodeconfigs.cilium.io\n displayName: Cilium
    Node Configuration\n description: |\n CiliumNodeConfig is a list of configuration
    key-value pairs. It is applied to\n nodes indicated by a label selector.\n-
    kind: CiliumCIDRGroup\n version: v2alpha1\n name: ciliumcidrgroups.cilium.io\n
    \ displayName: Cilium CIDR Group\n description: |\n CiliumCIDRGroup is a list
    of CIDRs that can be referenced as a single entity from CiliumNetworkPolicies.\n-
    kind: CiliumL2AnnouncementPolicy\n version: v2alpha1\n name: ciliuml2announcementpolicies.cilium.io\n
    \ displayName: Cilium L2 Announcement Policy\n description: |\n CiliumL2AnnouncementPolicy
    is a policy which determines which service IPs will be announced to\n the local
    area network, by which nodes, and via which interfaces.\n- kind: CiliumPodIPPool\n
    \ version: v2alpha1\n name: ciliumpodippools.cilium.io\n displayName: Cilium
    Pod IP Pool\n description: |\n CiliumPodIPPool defines an IP pool that can
    be used for pooled IPAM (i.e. the multi-pool IPAM mode).\n"
apiVersion: v2
appVersion: 1.14.5
appVersion: 1.15.2
description: eBPF-based Networking, Security, and Observability
home: https://cilium.io/
icon: https://cdn.jsdelivr.net/gh/cilium/cilium@v1.14/Documentation/images/logo-solo.svg
icon: https://cdn.jsdelivr.net/gh/cilium/cilium@v1.15/Documentation/images/logo-solo.svg
keywords:
- BPF
- eBPF
@@ -138,4 +95,4 @@ kubeVersion: '>= 1.16.0-0'
name: cilium
sources:
- https://github.com/cilium/cilium
version: 1.14.5
version: 1.15.2
@@ -1,6 +1,6 @@
# cilium

 
 

Cilium is open source software for providing and transparently securing
network connectivity and loadbalancing between application workloads such as
@@ -60,24 +60,30 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| aksbyocni.enabled | bool | `false` | Enable AKS BYOCNI integration. Note that this is incompatible with AKS clusters not created in BYOCNI mode: use Azure integration (`azure.enabled`) instead. |
|
||||
| alibabacloud.enabled | bool | `false` | Enable AlibabaCloud ENI integration |
|
||||
| annotateK8sNode | bool | `false` | Annotate k8s node upon initialization with Cilium's metadata. |
|
||||
| annotations | object | `{}` | Annotations to be added to all top-level cilium-agent objects (resources under templates/cilium-agent) |
|
||||
| apiRateLimit | string | `nil` | The api-rate-limit option can be used to overwrite individual settings of the default configuration for rate limiting calls to the Cilium Agent API |
|
||||
| authentication.enabled | bool | `true` | Enable authentication processing and garbage collection. Note that if disabled, policy enforcement will still block requests that require authentication. But the resulting authentication requests for these requests will not be processed, therefore the requests not be allowed. |
|
||||
| authentication.gcInterval | string | `"5m0s"` | Interval for garbage collection of auth map entries. |
|
||||
| authentication.mutual.connectTimeout | string | `"5s"` | Timeout for connecting to the remote node TCP socket |
|
||||
| authentication.mutual.port | int | `4250` | Port on the agent where mutual authentication handshakes between agents will be performed |
|
||||
| authentication.mutual.spire.adminSocketPath | string | `"/run/spire/sockets/admin.sock"` | SPIRE socket path where the SPIRE delegated api agent is listening |
|
||||
| authentication.mutual.spire.agentSocketPath | string | `"/run/spire/sockets/agent/agent.sock"` | SPIRE socket path where the SPIRE workload agent is listening. Applies to both the Cilium Agent and Operator |
|
||||
| authentication.mutual.spire.annotations | object | `{}` | Annotations to be added to all top-level spire objects (resources under templates/spire) |
|
||||
| authentication.mutual.spire.connectionTimeout | string | `"30s"` | SPIRE connection timeout |
|
||||
| authentication.mutual.spire.enabled | bool | `false` | Enable SPIRE integration (beta) |
|
||||
| authentication.mutual.spire.install.agent.affinity | object | `{}` | SPIRE agent affinity configuration |
|
||||
| authentication.mutual.spire.install.agent.annotations | object | `{}` | SPIRE agent annotations |
|
||||
| authentication.mutual.spire.install.agent.image | string | `"ghcr.io/spiffe/spire-agent:1.6.3@sha256:8eef9857bf223181ecef10d9bbcd2f7838f3689e9bd2445bede35066a732e823"` | SPIRE agent image |
|
||||
| authentication.mutual.spire.install.agent.image | object | `{"digest":"sha256:99405637647968245ff9fe215f8bd2bd0ea9807be9725f8bf19fe1b21471e52b","override":null,"pullPolicy":"IfNotPresent","repository":"ghcr.io/spiffe/spire-agent","tag":"1.8.5","useDigest":true}` | SPIRE agent image |
|
||||
| authentication.mutual.spire.install.agent.labels | object | `{}` | SPIRE agent labels |
|
||||
| authentication.mutual.spire.install.agent.nodeSelector | object | `{}` | SPIRE agent nodeSelector configuration ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
|
||||
| authentication.mutual.spire.install.agent.podSecurityContext | object | `{}` | Security context to be added to spire agent pods. SecurityContext holds pod-level security attributes and common container settings. ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod |
|
||||
| authentication.mutual.spire.install.agent.securityContext | object | `{}` | Security context to be added to spire agent containers. SecurityContext holds pod-level security attributes and common container settings. ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container |
|
||||
| authentication.mutual.spire.install.agent.serviceAccount | object | `{"create":true,"name":"spire-agent"}` | SPIRE agent service account |
|
||||
| authentication.mutual.spire.install.agent.skipKubeletVerification | bool | `true` | SPIRE Workload Attestor kubelet verification. |
|
||||
| authentication.mutual.spire.install.agent.tolerations | list | `[]` | SPIRE agent tolerations configuration ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
|
||||
| authentication.mutual.spire.install.agent.tolerations | list | `[{"effect":"NoSchedule","key":"node.kubernetes.io/not-ready"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"},{"effect":"NoSchedule","key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true"},{"key":"CriticalAddonsOnly","operator":"Exists"}]` | SPIRE agent tolerations configuration By default it follows the same tolerations as the agent itself to allow the Cilium agent on this node to connect to SPIRE. ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
|
||||
| authentication.mutual.spire.install.enabled | bool | `true` | Enable SPIRE installation. This will only take effect only if authentication.mutual.spire.enabled is true |
|
||||
| authentication.mutual.spire.install.existingNamespace | bool | `false` | SPIRE namespace already exists. Set to true if Helm should not create, manage, and import the SPIRE namespace. |
|
||||
| authentication.mutual.spire.install.initImage | object | `{"digest":"sha256:223ae047b1065bd069aac01ae3ac8088b3ca4a527827e283b85112f29385fb1b","override":null,"pullPolicy":"IfNotPresent","repository":"docker.io/library/busybox","tag":"1.36.1","useDigest":true}` | init container image of SPIRE agent and server |
|
||||
| authentication.mutual.spire.install.namespace | string | `"cilium-spire"` | SPIRE namespace to install into |
|
||||
| authentication.mutual.spire.install.server.affinity | object | `{}` | SPIRE server affinity configuration |
|
||||
| authentication.mutual.spire.install.server.annotations | object | `{}` | SPIRE server annotations |
|
||||
@@ -87,10 +93,12 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| authentication.mutual.spire.install.server.dataStorage.enabled | bool | `true` | Enable SPIRE server data storage |
|
||||
| authentication.mutual.spire.install.server.dataStorage.size | string | `"1Gi"` | Size of the SPIRE server data storage |
|
||||
| authentication.mutual.spire.install.server.dataStorage.storageClass | string | `nil` | StorageClass of the SPIRE server data storage |
|
||||
| authentication.mutual.spire.install.server.image | string | `"ghcr.io/spiffe/spire-server:1.6.3@sha256:f4bc49fb0bd1d817a6c46204cc7ce943c73fb0a5496a78e0e4dc20c9a816ad7f"` | SPIRE server image |
|
||||
| authentication.mutual.spire.install.server.image | object | `{"digest":"sha256:28269265882048dcf0fed32fe47663cd98613727210b8d1a55618826f9bf5428","override":null,"pullPolicy":"IfNotPresent","repository":"ghcr.io/spiffe/spire-server","tag":"1.8.5","useDigest":true}` | SPIRE server image |
|
||||
| authentication.mutual.spire.install.server.initContainers | list | `[]` | SPIRE server init containers |
|
||||
| authentication.mutual.spire.install.server.labels | object | `{}` | SPIRE server labels |
|
||||
| authentication.mutual.spire.install.server.nodeSelector | object | `{}` | SPIRE server nodeSelector configuration ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
|
||||
| authentication.mutual.spire.install.server.podSecurityContext | object | `{}` | Security context to be added to spire server pods. SecurityContext holds pod-level security attributes and common container settings. ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod |
|
||||
| authentication.mutual.spire.install.server.securityContext | object | `{}` | Security context to be added to spire server containers. SecurityContext holds pod-level security attributes and common container settings. ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container |
|
||||
| authentication.mutual.spire.install.server.service.annotations | object | `{}` | Annotations to be added to the SPIRE server service |
|
||||
| authentication.mutual.spire.install.server.service.labels | object | `{}` | Labels to be added to the SPIRE server service |
|
||||
| authentication.mutual.spire.install.server.service.type | string | `"ClusterIP"` | Service type for the SPIRE server service |
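
Taken together, the `authentication.mutual.spire.*` keys above can be enabled as a group. The following values snippet is only an illustrative sketch (the cluster name is a placeholder), not part of the chart itself:

```yaml
# Sketch: enabling mutual authentication backed by the bundled SPIRE install.
authentication:
  enabled: true
  mutual:
    spire:
      enabled: true              # SPIRE integration is still marked beta
      install:
        enabled: true            # only takes effect when spire.enabled is true
        namespace: cilium-spire  # chart default shown above
cluster:
  name: my-cluster               # placeholder; required for mutual auth with SPIRE
```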
@@ -109,8 +117,11 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| bgp.announce.loadbalancerIP | bool | `false` | Enable allocation and announcement of service LoadBalancer IPs |
|
||||
| bgp.announce.podCIDR | bool | `false` | Enable announcement of node pod CIDR |
|
||||
| bgp.enabled | bool | `false` | Enable BGP support inside Cilium; embeds a new ConfigMap for BGP inside cilium-agent and cilium-operator |
|
||||
| bgpControlPlane | object | `{"enabled":false}` | This feature set enables virtual BGP routers to be created via CiliumBGPPeeringPolicy CRDs. |
|
||||
| bgpControlPlane | object | `{"enabled":false,"secretsNamespace":{"create":false,"name":"kube-system"}}` | This feature set enables virtual BGP routers to be created via CiliumBGPPeeringPolicy CRDs. |
|
||||
| bgpControlPlane.enabled | bool | `false` | Enables the BGP control plane. |
|
||||
| bgpControlPlane.secretsNamespace | object | `{"create":false,"name":"kube-system"}` | SecretsNamespace is the namespace which BGP support will retrieve secrets from. |
|
||||
| bgpControlPlane.secretsNamespace.create | bool | `false` | Create secrets namespace for BGP secrets. |
|
||||
| bgpControlPlane.secretsNamespace.name | string | `"kube-system"` | The name of the secret namespace to which Cilium agents are given read access |
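
For reference, the new `bgpControlPlane.secretsNamespace` settings above combine as in this sketch; BGP peering itself is still configured through `CiliumBGPPeeringPolicy` CRDs:

```yaml
# Sketch: enabling the BGP control plane with the secrets namespace keys above.
bgpControlPlane:
  enabled: true
  secretsNamespace:
    create: false
    name: kube-system   # namespace the Cilium agents are granted read access to
```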
| bpf.authMapMax | int | `524288` | Configure the maximum number of entries in auth map. |
|
||||
| bpf.autoMount.enabled | bool | `true` | Enable automatic mount of BPF filesystem When `autoMount` is enabled, the BPF filesystem is mounted at `bpf.root` path on the underlying host and inside the cilium agent pod. If users disable `autoMount`, it's expected that users have mounted bpffs filesystem at the specified `bpf.root` volume, and then the volume will be mounted inside the cilium agent pod at the same path. |
|
||||
| bpf.ctAnyMax | int | `262144` | Configure the maximum number of entries for the non-TCP connection tracking table. |
|
||||
@@ -131,7 +142,8 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| bpf.tproxy | bool | `false` | Configure the eBPF-based TPROXY to reduce reliance on iptables rules for implementing Layer 7 policy. |
|
||||
| bpf.vlanBypass | list | `[]` | Configure explicitly allowed VLAN id's for bpf logic bypass. [0] will allow all VLAN id's without any filtering. |
|
||||
| bpfClockProbe | bool | `false` | Enable BPF clock source probing for more efficient tick retrieval. |
|
||||
| certgen | object | `{"annotations":{"cronJob":{},"job":{}},"extraVolumeMounts":[],"extraVolumes":[],"image":{"digest":"sha256:89a0847753686444daabde9474b48340993bd19c7bea66a46e45b2974b82041f","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/certgen","tag":"v0.1.9","useDigest":true},"podLabels":{},"tolerations":[],"ttlSecondsAfterFinished":1800}` | Configure certificate generation for Hubble integration. If hubble.tls.auto.method=cronJob, these values are used for the Kubernetes CronJob which will be scheduled regularly to (re)generate any certificates not provided manually. |
|
||||
| certgen | object | `{"affinity":{},"annotations":{"cronJob":{},"job":{}},"extraVolumeMounts":[],"extraVolumes":[],"image":{"digest":"sha256:89a0847753686444daabde9474b48340993bd19c7bea66a46e45b2974b82041f","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/certgen","tag":"v0.1.9","useDigest":true},"podLabels":{},"tolerations":[],"ttlSecondsAfterFinished":1800}` | Configure certificate generation for Hubble integration. If hubble.tls.auto.method=cronJob, these values are used for the Kubernetes CronJob which will be scheduled regularly to (re)generate any certificates not provided manually. |
|
||||
| certgen.affinity | object | `{}` | Affinity for certgen |
|
||||
| certgen.annotations | object | `{"cronJob":{},"job":{}}` | Annotations to be added to the hubble-certgen initial Job and CronJob |
|
||||
| certgen.extraVolumeMounts | list | `[]` | Additional certgen volumeMounts. |
|
||||
| certgen.extraVolumes | list | `[]` | Additional certgen volumes. |
|
||||
@@ -146,25 +158,29 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| cleanState | bool | `false` | Clean all local Cilium state from the initContainer of the cilium-agent DaemonSet. Implies cleanBpfState: true. WARNING: Use with care! |
|
||||
| cluster.id | int | `0` | Unique ID of the cluster. Must be unique across all connected clusters and in the range of 1 to 255. Only required for Cluster Mesh, may be 0 if Cluster Mesh is not used. |
|
||||
| cluster.name | string | `"default"` | Name of the cluster. Only required for Cluster Mesh and mutual authentication with SPIRE. |
|
||||
| clustermesh.annotations | object | `{}` | Annotations to be added to all top-level clustermesh objects (resources under templates/clustermesh-apiserver and templates/clustermesh-config) |
|
||||
| clustermesh.apiserver.affinity | object | `{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"k8s-app":"clustermesh-apiserver"}},"topologyKey":"kubernetes.io/hostname"}]}}` | Affinity for clustermesh.apiserver |
|
||||
| clustermesh.apiserver.etcd.image | object | `{"digest":"sha256:795d8660c48c439a7c3764c2330ed9222ab5db5bb524d8d0607cac76f7ba82a3","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/coreos/etcd","tag":"v3.5.4","useDigest":true}` | Clustermesh API server etcd image. |
|
||||
| clustermesh.apiserver.etcd.init.extraArgs | list | `[]` | Additional arguments to `clustermesh-apiserver etcdinit`. |
|
||||
| clustermesh.apiserver.etcd.init.extraEnv | list | `[]` | Additional environment variables to `clustermesh-apiserver etcdinit`. |
|
||||
| clustermesh.apiserver.etcd.init.resources | object | `{}` | Specifies the resources for etcd init container in the apiserver |
|
||||
| clustermesh.apiserver.etcd.lifecycle | object | `{}` | lifecycle setting for the etcd container |
|
||||
| clustermesh.apiserver.etcd.resources | object | `{}` | Specifies the resources for etcd container in the apiserver |
|
||||
| clustermesh.apiserver.etcd.securityContext | object | `{}` | Security context to be added to clustermesh-apiserver etcd containers |
|
||||
| clustermesh.apiserver.extraArgs | list | `[]` | Additional clustermesh-apiserver arguments. |
|
||||
| clustermesh.apiserver.extraEnv | list | `[]` | Additional clustermesh-apiserver environment variables. |
|
||||
| clustermesh.apiserver.extraVolumeMounts | list | `[]` | Additional clustermesh-apiserver volumeMounts. |
|
||||
| clustermesh.apiserver.extraVolumes | list | `[]` | Additional clustermesh-apiserver volumes. |
|
||||
| clustermesh.apiserver.image | object | `{"digest":"sha256:7eaa35cf5452c43b1f7d0cde0d707823ae7e49965bcb54c053e31ea4e04c3d96","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/clustermesh-apiserver","tag":"v1.14.5","useDigest":true}` | Clustermesh API server image. |
|
||||
| clustermesh.apiserver.image | object | `{"digest":"sha256:478c77371f34d6fe5251427ff90c3912567c69b2bdc87d72377e42a42054f1c2","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/clustermesh-apiserver","tag":"v1.15.2","useDigest":true}` | Clustermesh API server image. |
|
||||
| clustermesh.apiserver.kvstoremesh.enabled | bool | `false` | Enable KVStoreMesh. KVStoreMesh caches the information retrieved from the remote clusters in the local etcd instance. |
|
||||
| clustermesh.apiserver.kvstoremesh.extraArgs | list | `[]` | Additional KVStoreMesh arguments. |
|
||||
| clustermesh.apiserver.kvstoremesh.extraEnv | list | `[]` | Additional KVStoreMesh environment variables. |
|
||||
| clustermesh.apiserver.kvstoremesh.extraVolumeMounts | list | `[]` | Additional KVStoreMesh volumeMounts. |
|
||||
| clustermesh.apiserver.kvstoremesh.image | object | `{"digest":"sha256:d7137edd0efa2b1407b20088af3980a9993bb616d85bf9b55ea2891d1b99023a","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/kvstoremesh","tag":"v1.14.5","useDigest":true}` | KVStoreMesh image. |
|
||||
| clustermesh.apiserver.kvstoremesh.lifecycle | object | `{}` | lifecycle setting for the KVStoreMesh container |
|
||||
| clustermesh.apiserver.kvstoremesh.resources | object | `{}` | Resource requests and limits for the KVStoreMesh container |
|
||||
| clustermesh.apiserver.kvstoremesh.securityContext | object | `{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]}}` | KVStoreMesh Security context |
|
||||
| clustermesh.apiserver.lifecycle | object | `{}` | lifecycle setting for the apiserver container |
|
||||
| clustermesh.apiserver.metrics.enabled | bool | `true` | Enables exporting apiserver metrics in OpenMetrics format. |
|
||||
| clustermesh.apiserver.metrics.etcd.enabled | bool | `false` | Enables exporting etcd metrics in OpenMetrics format. |
|
||||
| clustermesh.apiserver.metrics.etcd.enabled | bool | `true` | Enables exporting etcd metrics in OpenMetrics format. |
|
||||
| clustermesh.apiserver.metrics.etcd.mode | string | `"basic"` | Set level of detail for etcd metrics; specify 'extensive' to include server side gRPC histogram metrics. |
|
||||
| clustermesh.apiserver.metrics.etcd.port | int | `9963` | Configure the port the etcd metric server listens on. |
|
||||
| clustermesh.apiserver.metrics.kvstoremesh.enabled | bool | `true` | Enables exporting KVStoreMesh metrics in OpenMetrics format. |
|
||||
@@ -198,15 +214,13 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| clustermesh.apiserver.service.internalTrafficPolicy | string | `nil` | The internalTrafficPolicy of service used for apiserver access. |
|
||||
| clustermesh.apiserver.service.nodePort | int | `32379` | Optional port to use as the node port for apiserver access. WARNING: make sure to configure a different NodePort in each cluster if kube-proxy replacement is enabled, as Cilium is currently affected by a known bug (#24692) when NodePorts are handled by the KPR implementation. If a service with the same NodePort exists both in the local and the remote cluster, all traffic originating from inside the cluster and targeting the corresponding NodePort will be redirected to a local backend, regardless of whether the destination node belongs to the local or the remote cluster. |
|
||||
| clustermesh.apiserver.service.type | string | `"NodePort"` | The type of service used for apiserver access. |
|
||||
| clustermesh.apiserver.terminationGracePeriodSeconds | int | `30` | terminationGracePeriodSeconds for the clustermesh-apiserver deployment |
|
||||
| clustermesh.apiserver.tls.admin | object | `{"cert":"","key":""}` | base64 encoded PEM values for the clustermesh-apiserver admin certificate and private key. Used if 'auto' is not enabled. |
|
||||
| clustermesh.apiserver.tls.authMode | string | `"legacy"` | Configure the clustermesh authentication mode. Supported values: - legacy: All clusters access remote clustermesh instances with the same username (i.e., remote). The "remote" certificate must be generated with CN=remote if provided manually. - migration: Intermediate mode required to upgrade from legacy to cluster (and vice versa) with no disruption. Specifically, it enables the creation of the per-cluster usernames, while still using the common one for authentication. The "remote" certificate must be generated with CN=remote if provided manually (same as legacy). - cluster: Each cluster accesses remote etcd instances with a username depending on the local cluster name (i.e., remote-<cluster-name>). The "remote" certificate must be generated with CN=remote-<cluster-name> if provided manually. Cluster mode is meaningful only when the same CA is shared across all clusters part of the mesh. |
|
||||
| clustermesh.apiserver.tls.auto | object | `{"certManagerIssuerRef":{},"certValidityDuration":1095,"enabled":true,"method":"helm"}` | Configure automatic TLS certificates generation. A Kubernetes CronJob is used the generate any certificates not provided by the user at installation time. |
|
||||
| clustermesh.apiserver.tls.auto.certManagerIssuerRef | object | `{}` | certmanager issuer used when clustermesh.apiserver.tls.auto.method=certmanager. |
|
||||
| clustermesh.apiserver.tls.auto.certValidityDuration | int | `1095` | Generated certificates validity duration in days. |
|
||||
| clustermesh.apiserver.tls.auto.enabled | bool | `true` | When set to true, automatically generate a CA and certificates to enable mTLS between clustermesh-apiserver and external workload instances. If set to false, the certs to be provided by setting appropriate values below. |
|
||||
| clustermesh.apiserver.tls.ca | object | `{"cert":"","key":""}` | Deprecated in favor of tls.ca. To be removed in 1.15. base64 encoded PEM values for the ExternalWorkload CA certificate and private key. |
|
||||
| clustermesh.apiserver.tls.ca.cert | string | `""` | Deprecated in favor of tls.ca.cert. To be removed in 1.15. Optional CA cert. If it is provided, it will be used by the 'cronJob' method to generate all other certificates. Otherwise, an ephemeral CA is generated. |
|
||||
| clustermesh.apiserver.tls.ca.key | string | `""` | Deprecated in favor of tls.ca.key. To be removed in 1.15. Optional CA private key. If it is provided, it will be used by the 'cronJob' method to generate all other certificates. Otherwise, an ephemeral CA is generated. |
|
||||
| clustermesh.apiserver.tls.client | object | `{"cert":"","key":""}` | base64 encoded PEM values for the clustermesh-apiserver client certificate and private key. Used if 'auto' is not enabled. |
|
||||
| clustermesh.apiserver.tls.remote | object | `{"cert":"","key":""}` | base64 encoded PEM values for the clustermesh-apiserver remote cluster certificate and private key. Used if 'auto' is not enabled. |
|
||||
| clustermesh.apiserver.tls.server | object | `{"cert":"","extraDnsNames":[],"extraIpAddresses":[],"key":""}` | base64 encoded PEM values for the clustermesh-apiserver server certificate and private key. Used if 'auto' is not enabled. |
|
||||
@@ -219,6 +233,7 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| clustermesh.config.clusters | list | `[]` | List of clusters to be peered in the mesh. |
|
||||
| clustermesh.config.domain | string | `"mesh.cilium.io"` | Default dns domain for the Clustermesh API servers This is used in the case cluster addresses are not provided and IPs are used. |
|
||||
| clustermesh.config.enabled | bool | `false` | Enable the Clustermesh explicit configuration. |
|
||||
| clustermesh.maxConnectedClusters | int | `255` | The maximum number of clusters to support in a ClusterMesh. This value cannot be changed on running clusters, and all clusters in a ClusterMesh must be configured with the same value. Values > 255 will decrease the maximum allocatable cluster-local identities. Supported values are 255 and 511. |
|
||||
| clustermesh.useAPIServer | bool | `false` | Deploy clustermesh-apiserver for clustermesh |
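
As a rough sketch of how the Cluster Mesh keys above fit together (cluster name and id are placeholders, and `authMode` follows the migration path described in the `clustermesh.apiserver.tls.authMode` row):

```yaml
# Sketch: minimal Cluster Mesh enablement; names and ids are placeholders.
cluster:
  name: cluster-1   # must be unique per connected cluster
  id: 1             # 1..255, unique across connected clusters
clustermesh:
  useAPIServer: true
  apiserver:
    tls:
      authMode: migration   # intermediate step between "legacy" and "cluster"
```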
| cni.binPath | string | `"/opt/cni/bin"` | Configure the path to the CNI binary directory on the host. |
|
||||
| cni.chainingMode | string | `nil` | Configure chaining on top of other CNI plugins. Possible values: - none - aws-cni - flannel - generic-veth - portmap |
|
||||
@@ -231,6 +246,7 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| cni.hostConfDirMountPath | string | `"/host/etc/cni/net.d"` | Configure the path to where the CNI configuration directory is mounted inside the agent pod. |
|
||||
| cni.install | bool | `true` | Install the CNI configuration and binary files into the filesystem. |
|
||||
| cni.logFile | string | `"/var/run/cilium/cilium-cni.log"` | Configure the log file for CNI logging with retention policy of 7 days. Disable CNI file logging by setting this field to empty explicitly. |
|
||||
| cni.resources | object | `{"requests":{"cpu":"100m","memory":"10Mi"}}` | Specifies the resources for the cni initContainer |
|
||||
| cni.uninstall | bool | `false` | Remove the CNI configuration and binary files on agent shutdown. Enable this if you're removing Cilium from the cluster. Disable this to prevent the CNI configuration file from being removed during agent upgrade, which can cause nodes to go unmanageable. |
|
||||
| conntrackGCInterval | string | `"0s"` | Configure how frequently garbage collection should occur for the datapath connection tracking table. |
|
||||
| conntrackGCMaxInterval | string | `""` | Configure the maximum frequency for the garbage collection of the connection tracking table. Only affects the automatic computation for the frequency and has no effect when 'conntrackGCInterval' is set. This can be set to more frequently clean up unused identities created from ToFQDN policies. |
|
||||
@@ -245,7 +261,7 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| daemon.runPath | string | `"/var/run/cilium"` | Configure where Cilium runtime state should be stored. |
|
||||
| dashboards | object | `{"annotations":{},"enabled":false,"label":"grafana_dashboard","labelValue":"1","namespace":null}` | Grafana dashboards for cilium-agent grafana can import dashboards based on the label and value ref: https://github.com/grafana/helm-charts/tree/main/charts/grafana#sidecar-for-dashboards |
|
||||
| debug.enabled | bool | `false` | Enable debug logging |
|
||||
| debug.verbose | string | `nil` | Configure verbosity levels for debug logging. This option enables debug messages for operations related to specific sub-systems (e.g. kvstore, envoy, datapath or policy); flow enables debug messages emitted per request, message and connection. Applicable values: - flow - kvstore - envoy - datapath - policy |
| debug.verbose | string | `nil` | Configure verbosity levels for debug logging. This option enables debug messages for operations related to specific sub-systems (e.g. kvstore, envoy, datapath or policy); flow enables debug messages emitted per request, message and connection. Multiple values can be set via a space-separated string (e.g. "datapath envoy"). Applicable values: - flow - kvstore - envoy - datapath - policy |
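
A small sketch of the scoped debug logging described above, using the new space-separated form:

```yaml
# Sketch: enable debug logging for selected sub-systems only.
debug:
  enabled: true
  verbose: "datapath envoy"   # space-separated list, per the 1.15 description above
```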
| disableEndpointCRD | bool | `false` | Disable the usage of CiliumEndpoint CRD. |
|
||||
| dnsPolicy | string | `""` | DNS policy for Cilium agent pods. Ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy |
|
||||
| dnsProxy.dnsRejectResponseCode | string | `"refused"` | DNS response code for rejecting DNS requests, available options are '[nameError refused]'. |
|
||||
@@ -257,18 +273,17 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| dnsProxy.preCache | string | `""` | DNS cache data at this path is preloaded on agent startup. |
|
||||
| dnsProxy.proxyPort | int | `0` | Global port on which the in-agent DNS proxy should listen. Default 0 is a OS-assigned port. |
|
||||
| dnsProxy.proxyResponseMaxDelay | string | `"100ms"` | The maximum time the DNS proxy holds an allowed DNS response before sending it along. Responses are sent as soon as the datapath is updated with the new IP information. |
|
||||
| egressGateway | object | `{"enabled":false,"installRoutes":false,"reconciliationTriggerInterval":"1s"}` | Enables egress gateway to redirect and SNAT the traffic that leaves the cluster. |
|
||||
| egressGateway.installRoutes | bool | `false` | Install egress gateway IP rules and routes in order to properly steer egress gateway traffic to the correct ENI interface |
|
||||
| egressGateway.enabled | bool | `false` | Enables egress gateway to redirect and SNAT the traffic that leaves the cluster. |
|
||||
| egressGateway.installRoutes | bool | `false` | Deprecated without a replacement necessary. |
|
||||
| egressGateway.reconciliationTriggerInterval | string | `"1s"` | Time between triggers of egress gateway state reconciliations |
|
||||
| enableCiliumEndpointSlice | bool | `false` | Enable CiliumEndpointSlice feature. |
|
||||
| enableCnpStatusUpdates | bool | `false` | Whether to enable CNP status updates. |
|
||||
| enableCriticalPriorityClass | bool | `true` | Explicitly enable or disable priority class. .Capabilities.KubeVersion is unsettable in `helm template` calls, it depends on k8s libraries version that Helm was compiled against. This option allows to explicitly disable setting the priority class, which is useful for rendering charts for gke clusters in advance. |
|
||||
| enableIPv4BIGTCP | bool | `false` | Enables IPv4 BIG TCP support which increases maximum IPv4 GSO/GRO limits for nodes and pods |
|
||||
| enableIPv4Masquerade | bool | `true` | Enables masquerading of IPv4 traffic leaving the node from endpoints. |
|
||||
| enableIPv6BIGTCP | bool | `false` | Enables IPv6 BIG TCP support which increases maximum IPv6 GSO/GRO limits for nodes and pods |
|
||||
| enableIPv6Masquerade | bool | `true` | Enables masquerading of IPv6 traffic leaving the node from endpoints. |
|
||||
| enableK8sEventHandover | bool | `false` | Configures the use of the KVStore to optimize Kubernetes event handling by mirroring it into the KVstore for reduced overhead in large clusters. |
|
||||
| enableK8sTerminatingEndpoint | bool | `true` | Configure whether to enable auto detect of terminating state for endpoints in order to support graceful termination. |
|
||||
| enableMasqueradeRouteSource | bool | `false` | Enables masquerading to the source of the route for traffic leaving the node from endpoints. |
|
||||
| enableRuntimeDeviceDetection | bool | `false` | Enables experimental support for the detection of new and removed datapath devices. When devices change the eBPF datapath is reloaded and services updated. If "devices" is set then only those devices, or devices matching a wildcard will be considered. |
|
||||
| enableXTSocketFallback | bool | `true` | Enables the fallback compatibility solution for when the xt_socket kernel module is missing and it is needed for the datapath L7 redirection to work properly. See documentation for details on when this can be disabled: https://docs.cilium.io/en/stable/operations/system_requirements/#linux-kernel. |
|
||||
| encryption.enabled | bool | `false` | Enable transparent network encryption. |
|
||||
@@ -283,7 +298,12 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| encryption.mountPath | string | `"/etc/ipsec"` | Deprecated in favor of encryption.ipsec.mountPath. To be removed in 1.15. Path to mount the secret inside the Cilium pod. This option is only effective when encryption.type is set to ipsec. |
|
||||
| encryption.nodeEncryption | bool | `false` | Enable encryption for pure node to node traffic. This option is only effective when encryption.type is set to "wireguard". |
|
||||
| encryption.secretName | string | `"cilium-ipsec-keys"` | Deprecated in favor of encryption.ipsec.secretName. To be removed in 1.15. Name of the Kubernetes secret containing the encryption keys. This option is only effective when encryption.type is set to ipsec. |
|
||||
| encryption.strictMode | object | `{"allowRemoteNodeIdentities":false,"cidr":"","enabled":false}` | Configure the WireGuard Pod2Pod strict mode. |
|
||||
| encryption.strictMode.allowRemoteNodeIdentities | bool | `false` | Allow dynamic lookup of remote node identities. This is required when tunneling is used or direct routing is used and the node CIDR and pod CIDR overlap. |
|
||||
| encryption.strictMode.cidr | string | `""` | CIDR for the WireGuard Pod2Pod strict mode. |
|
||||
| encryption.strictMode.enabled | bool | `false` | Enable WireGuard Pod2Pod strict mode. |
|
||||
| encryption.type | string | `"ipsec"` | Encryption method. Can be either ipsec or wireguard. |
|
||||
| encryption.wireguard.persistentKeepalive | string | `"0s"` | Controls Wireguard PersistentKeepalive option. Set 0s to disable. |
|
||||
| encryption.wireguard.userspaceFallback | bool | `false` | Enables the fallback to the user-space implementation. |
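
The new WireGuard strict-mode keys above can be combined as in this sketch; the CIDR is a placeholder and must match your own pod network:

```yaml
# Sketch: WireGuard encryption with Pod2Pod strict mode.
encryption:
  enabled: true
  type: wireguard
  nodeEncryption: true        # only effective with type "wireguard"
  strictMode:
    enabled: true
    cidr: 10.0.0.0/8          # placeholder CIDR within which encryption is enforced
    allowRemoteNodeIdentities: false
```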
| endpointHealthChecking.enabled | bool | `true` | Enable connectivity health checking between virtual endpoints. |
|
||||
| endpointRoutes.enabled | bool | `false` | Enable use of per endpoint routes instead of routing via the cilium_host interface. |
|
||||
@@ -301,6 +321,7 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| eni.subnetTagsFilter | list | `[]` | Filter via tags (k=v) which will dictate which subnets are going to be used to create new ENIs Important note: This requires that each instance has an ENI with a matching subnet attached when Cilium is deployed. If you only want to control subnets for ENIs attached by Cilium, use the CNI configuration file settings (cni.customConf) instead. |
|
||||
| eni.updateEC2AdapterLimitViaAPI | bool | `true` | Update ENI Adapter limits from the EC2 API |
|
||||
| envoy.affinity | object | `{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"k8s-app":"cilium-envoy"}},"topologyKey":"kubernetes.io/hostname"}]}}` | Affinity for cilium-envoy. |
|
||||
| envoy.annotations | object | `{}` | Annotations to be added to all top-level cilium-envoy objects (resources under templates/cilium-envoy) |
|
||||
| envoy.connectTimeoutSeconds | int | `2` | Time in seconds after which a TCP connection attempt times out |
|
||||
| envoy.dnsPolicy | string | `nil` | DNS policy for Cilium envoy pods. Ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy |
|
||||
| envoy.enabled | bool | `false` | Enable Envoy Proxy in standalone DaemonSet. |
|
||||
@@ -312,7 +333,7 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| envoy.extraVolumes | list | `[]` | Additional envoy volumes. |
|
||||
| envoy.healthPort | int | `9878` | TCP port for the health API. |
|
||||
| envoy.idleTimeoutDurationSeconds | int | `60` | Set Envoy upstream HTTP idle connection timeout seconds. Does not apply to connections with pending requests. Default 60s |
|
||||
| envoy.image | object | `{"digest":"sha256:992998398dadfff7117bfa9fdb7c9474fefab7f0237263f7c8114e106c67baca","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium-envoy","tag":"v1.26.6-ad82c7c56e88989992fd25d8d67747de865c823b","useDigest":true}` | Envoy container image. |
|
||||
| envoy.image | object | `{"digest":"sha256:877ead12d08d4c04a9f67f86d3c6e542aeb7bf97e1e401aee74de456f496ac30","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium-envoy","tag":"v1.27.3-99c1c8f42c8de70fc8f6dd594f4a425cd38b6688","useDigest":true}` | Envoy container image. |
|
||||
| envoy.livenessProbe.failureThreshold | int | `10` | failure threshold of liveness probe |
|
||||
| envoy.livenessProbe.periodSeconds | int | `30` | interval between checks of the liveness probe |
|
||||
| envoy.log.format | string | `"[%Y-%m-%d %T.%e][%t][%l][%n] [%g:%#] %v"` | The format string to use for laying out the log message metadata of Envoy. |
|
||||
@@ -324,14 +345,15 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| envoy.podLabels | object | `{}` | Labels to be added to envoy pods |
|
||||
| envoy.podSecurityContext | object | `{}` | Security Context for cilium-envoy pods. |
|
||||
| envoy.priorityClassName | string | `nil` | The priority class to use for cilium-envoy. |
|
||||
| envoy.prometheus | object | `{"enabled":true,"port":"9964","serviceMonitor":{"annotations":{},"enabled":false,"interval":"10s","labels":{},"metricRelabelings":null,"relabelings":[{"replacement":"${1}","sourceLabels":["__meta_kubernetes_pod_node_name"],"targetLabel":"node"}]}}` | Configure Cilium Envoy Prometheus options. Note that some of these apply to either cilium-agent or cilium-envoy. |
|
||||
| envoy.prometheus.enabled | bool | `true` | Enable prometheus metrics for cilium-envoy |
|
||||
| envoy.prometheus.port | string | `"9964"` | Serve prometheus metrics for cilium-envoy on the configured port |
|
||||
| envoy.prometheus.serviceMonitor.annotations | object | `{}` | Annotations to add to ServiceMonitor cilium-envoy |
|
||||
| envoy.prometheus.serviceMonitor.enabled | bool | `false` | Enable service monitors. This requires the prometheus CRDs to be available (see https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml) |
|
||||
| envoy.prometheus.serviceMonitor.enabled | bool | `false` | Enable service monitors. This requires the prometheus CRDs to be available (see https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml) Note that this setting applies to both cilium-envoy _and_ cilium-agent with Envoy enabled. |
|
||||
| envoy.prometheus.serviceMonitor.interval | string | `"10s"` | Interval for scrape metrics. |
|
||||
| envoy.prometheus.serviceMonitor.labels | object | `{}` | Labels to add to ServiceMonitor cilium-envoy |
|
||||
| envoy.prometheus.serviceMonitor.metricRelabelings | string | `nil` | Metrics relabeling configs for the ServiceMonitor cilium-envoy |
|
||||
| envoy.prometheus.serviceMonitor.relabelings | list | `[{"replacement":"${1}","sourceLabels":["__meta_kubernetes_pod_node_name"],"targetLabel":"node"}]` | Relabeling configs for the ServiceMonitor cilium-envoy |
|
||||
| envoy.prometheus.serviceMonitor.metricRelabelings | string | `nil` | Metrics relabeling configs for the ServiceMonitor cilium-envoy or for cilium-agent with Envoy configured. |
|
||||
| envoy.prometheus.serviceMonitor.relabelings | list | `[{"replacement":"${1}","sourceLabels":["__meta_kubernetes_pod_node_name"],"targetLabel":"node"}]` | Relabeling configs for the ServiceMonitor cilium-envoy or for cilium-agent with Envoy configured. |
|
||||
| envoy.readinessProbe.failureThreshold | int | `3` | failure threshold of readiness probe |
|
||||
| envoy.readinessProbe.periodSeconds | int | `30` | interval between checks of the readiness probe |
|
||||
| envoy.resources | object | `{}` | Envoy resource limits & requests ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
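
As a sketch, the standalone Envoy DaemonSet and its ServiceMonitor described above are enabled together like this (the Prometheus Operator CRDs are assumed to be installed):

```yaml
# Sketch: Envoy as a standalone DaemonSet with Prometheus scraping enabled.
envoy:
  enabled: true
  prometheus:
    enabled: true
    port: "9964"
    serviceMonitor:
      enabled: true   # requires the ServiceMonitor CRD
```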
@@ -348,6 +370,7 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| envoyConfig.secretsNamespace | object | `{"create":true,"name":"cilium-secrets"}` | SecretsNamespace is the namespace in which envoy SDS will retrieve secrets from. |
|
||||
| envoyConfig.secretsNamespace.create | bool | `true` | Create secrets namespace for CiliumEnvoyConfig CRDs. |
|
||||
| envoyConfig.secretsNamespace.name | string | `"cilium-secrets"` | The name of the secret namespace to which Cilium agents are given read access. |
|
||||
| etcd.annotations | object | `{}` | Annotations to be added to all top-level etcd-operator objects (resources under templates/etcd-operator) |
|
||||
| etcd.clusterDomain | string | `"cluster.local"` | Cluster domain for cilium-etcd-operator. |
|
||||
| etcd.enabled | bool | `false` | Enable etcd mode for the agent. |
|
||||
| etcd.endpoints | list | `["https://CHANGE-ME:2379"]` | List of etcd endpoints (not needed when using managed=true). |
|
||||
@@ -393,24 +416,41 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| hostFirewall | object | `{"enabled":false}` | Configure the host firewall. |
|
||||
| hostFirewall.enabled | bool | `false` | Enables the enforcement of host policies in the eBPF datapath. |
|
||||
| hostPort.enabled | bool | `false` | Enable hostPort service support. |
|
||||
| hubble.annotations | object | `{}` | Annotations to be added to all top-level hubble objects (resources under templates/hubble) |
|
||||
| hubble.enabled | bool | `true` | Enable Hubble (true by default). |
|
||||
| hubble.export | object | `{"dynamic":{"config":{"configMapName":"cilium-flowlog-config","content":[{"excludeFilters":[],"fieldMask":[],"filePath":"/var/run/cilium/hubble/events.log","includeFilters":[],"name":"all"}],"createConfigMap":true},"enabled":false},"fileMaxBackups":5,"fileMaxSizeMb":10,"static":{"allowList":[],"denyList":[],"enabled":false,"fieldMask":[],"filePath":"/var/run/cilium/hubble/events.log"}}` | Hubble flows export. |
|
||||
| hubble.export.dynamic | object | `{"config":{"configMapName":"cilium-flowlog-config","content":[{"excludeFilters":[],"fieldMask":[],"filePath":"/var/run/cilium/hubble/events.log","includeFilters":[],"name":"all"}],"createConfigMap":true},"enabled":false}` | - Dynamic exporters configuration. Dynamic exporters may be reconfigured without a need of agent restarts. |
|
||||
| hubble.export.dynamic.config.configMapName | string | `"cilium-flowlog-config"` | -- Name of configmap with configuration that may be altered to reconfigure exporters within a running agents. |
|
||||
| hubble.export.dynamic.config.content | list | `[{"excludeFilters":[],"fieldMask":[],"filePath":"/var/run/cilium/hubble/events.log","includeFilters":[],"name":"all"}]` | -- Exporters configuration in YAML format. |
|
||||
| hubble.export.dynamic.config.createConfigMap | bool | `true` | -- True if helm installer should create config map. Switch to false if you want to self maintain the file content. |
|
||||
| hubble.export.fileMaxBackups | int | `5` | - Defines max number of backup/rotated files. |
|
||||
| hubble.export.fileMaxSizeMb | int | `10` | - Defines max file size of output file before it gets rotated. |
|
||||
| hubble.export.static | object | `{"allowList":[],"denyList":[],"enabled":false,"fieldMask":[],"filePath":"/var/run/cilium/hubble/events.log"}` | - Static exporter configuration. Static exporter is bound to agent lifecycle. |
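
A sketch of the dynamic flow exporter configuration documented above, using the chart defaults for the generated ConfigMap:

```yaml
# Sketch: dynamic Hubble flow export with the default ConfigMap content.
hubble:
  export:
    fileMaxSizeMb: 10
    fileMaxBackups: 5
    dynamic:
      enabled: true
      config:
        createConfigMap: true
        configMapName: cilium-flowlog-config
        content:
          - name: all
            filePath: /var/run/cilium/hubble/events.log
            fieldMask: []
            includeFilters: []
            excludeFilters: []
```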
| hubble.listenAddress | string | `":4244"` | An additional address for Hubble to listen to. Set this field ":4244" if you are enabling Hubble Relay, as it assumes that Hubble is listening on port 4244. |
|
||||
| hubble.metrics | object | `{"dashboards":{"annotations":{},"enabled":false,"label":"grafana_dashboard","labelValue":"1","namespace":null},"enableOpenMetrics":false,"enabled":null,"port":9965,"serviceAnnotations":{},"serviceMonitor":{"annotations":{},"enabled":false,"interval":"10s","labels":{},"metricRelabelings":null,"relabelings":[{"replacement":"${1}","sourceLabels":["__meta_kubernetes_pod_node_name"],"targetLabel":"node"}]}}` | Hubble metrics configuration. See https://docs.cilium.io/en/stable/observability/metrics/#hubble-metrics for more comprehensive documentation about Hubble metrics. |
|
||||
| hubble.metrics | object | `{"dashboards":{"annotations":{},"enabled":false,"label":"grafana_dashboard","labelValue":"1","namespace":null},"enableOpenMetrics":false,"enabled":null,"port":9965,"serviceAnnotations":{},"serviceMonitor":{"annotations":{},"enabled":false,"interval":"10s","jobLabel":"","labels":{},"metricRelabelings":null,"relabelings":[{"replacement":"${1}","sourceLabels":["__meta_kubernetes_pod_node_name"],"targetLabel":"node"}]}}` | Hubble metrics configuration. See https://docs.cilium.io/en/stable/observability/metrics/#hubble-metrics for more comprehensive documentation about Hubble metrics. |
|
||||
| hubble.metrics.dashboards | object | `{"annotations":{},"enabled":false,"label":"grafana_dashboard","labelValue":"1","namespace":null}` | Grafana dashboards for hubble grafana can import dashboards based on the label and value ref: https://github.com/grafana/helm-charts/tree/main/charts/grafana#sidecar-for-dashboards |
|
||||
| hubble.metrics.enableOpenMetrics | bool | `false` | Enables exporting hubble metrics in OpenMetrics format. |
|
||||
| hubble.metrics.enabled | string | `nil` | Configures the list of metrics to collect. If empty or null, metrics are disabled. Example: enabled: - dns:query;ignoreAAAA - drop - tcp - flow - icmp - http You can specify the list of metrics from the helm CLI: --set metrics.enabled="{dns:query;ignoreAAAA,drop,tcp,flow,icmp,http}" |
|
||||
| hubble.metrics.enabled | string | `nil` | Configures the list of metrics to collect. If empty or null, metrics are disabled. Example: enabled: - dns:query;ignoreAAAA - drop - tcp - flow - icmp - http You can specify the list of metrics from the helm CLI: --set hubble.metrics.enabled="{dns:query;ignoreAAAA,drop,tcp,flow,icmp,http}" |
|
||||
| hubble.metrics.port | int | `9965` | Configure the port the hubble metric server listens on. |
|
||||
| hubble.metrics.serviceAnnotations | object | `{}` | Annotations to be added to hubble-metrics service. |
|
||||
| hubble.metrics.serviceMonitor.annotations | object | `{}` | Annotations to add to ServiceMonitor hubble |
|
||||
| hubble.metrics.serviceMonitor.enabled | bool | `false` | Create ServiceMonitor resources for Prometheus Operator. This requires the prometheus CRDs to be available. ref: https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml) |
|
||||
| hubble.metrics.serviceMonitor.interval | string | `"10s"` | Interval for scrape metrics. |
|
||||
| hubble.metrics.serviceMonitor.jobLabel | string | `""` | jobLabel to add for ServiceMonitor hubble |
|
||||
| hubble.metrics.serviceMonitor.labels | object | `{}` | Labels to add to ServiceMonitor hubble |
|
||||
| hubble.metrics.serviceMonitor.metricRelabelings | string | `nil` | Metrics relabeling configs for the ServiceMonitor hubble |
|
||||
| hubble.metrics.serviceMonitor.relabelings | list | `[{"replacement":"${1}","sourceLabels":["__meta_kubernetes_pod_node_name"],"targetLabel":"node"}]` | Relabeling configs for the ServiceMonitor hubble |
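
For clarity, the `--set hubble.metrics.enabled=...` example in the row above corresponds to this values form (the ServiceMonitor is shown as an optional addition):

```yaml
# Sketch: Hubble metrics as a YAML list, equivalent to the --set example above.
hubble:
  metrics:
    enabled:
      - "dns:query;ignoreAAAA"
      - "drop"
      - "tcp"
      - "flow"
      - "icmp"
      - "http"
    enableOpenMetrics: false
    port: 9965
    serviceMonitor:
      enabled: false   # set true only if the Prometheus Operator CRDs exist
```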
| hubble.peerService.clusterDomain | string | `"cluster.local"` | The cluster domain to use to query the Hubble Peer service. It should be the local cluster. |
|
||||
| hubble.peerService.targetPort | int | `4244` | Target Port for the Peer service, must match the hubble.listenAddress' port. |
|
||||
| hubble.preferIpv6 | bool | `false` | Whether Hubble should prefer to announce IPv6 or IPv4 addresses if both are available. |
|
||||
| hubble.redact | object | `{"enabled":false,"http":{"headers":{"allow":[],"deny":[]},"urlQuery":false,"userInfo":true},"kafka":{"apiKey":false}}` | Enables redacting sensitive information present in Layer 7 flows. |
|
||||
| hubble.redact.http.headers.allow | list | `[]` | List of HTTP headers to allow: headers not matching will be redacted. Note: `allow` and `deny` lists cannot be used both at the same time, only one can be present. Example: redact: enabled: true http: headers: allow: - traceparent - tracestate - Cache-Control You can specify the options from the helm CLI: --set hubble.redact.enabled="true" --set hubble.redact.http.headers.allow="traceparent,tracestate,Cache-Control" |
|
||||
| hubble.redact.http.headers.deny | list | `[]` | List of HTTP headers to deny: matching headers will be redacted. Note: `allow` and `deny` lists cannot be used both at the same time, only one can be present. Example: redact: enabled: true http: headers: deny: - Authorization - Proxy-Authorization You can specify the options from the helm CLI: --set hubble.redact.enabled="true" --set hubble.redact.http.headers.deny="Authorization,Proxy-Authorization" |
|
||||
| hubble.redact.http.urlQuery | bool | `false` | Enables redacting URL query (GET) parameters. Example: redact: enabled: true http: urlQuery: true You can specify the options from the helm CLI: --set hubble.redact.enabled="true" --set hubble.redact.http.urlQuery="true" |
|
||||
| hubble.redact.http.userInfo | bool | `true` | Enables redacting user info, e.g., password when basic auth is used. Example: redact: enabled: true http: userInfo: true You can specify the options from the helm CLI: --set hubble.redact.enabled="true" --set hubble.redact.http.userInfo="true" |
|
||||
| hubble.redact.kafka.apiKey | bool | `false` | Enables redacting Kafka's API key. Example: redact: enabled: true kafka: apiKey: true You can specify the options from the helm CLI: --set hubble.redact.enabled="true" --set hubble.redact.kafka.apiKey="true" |
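
The redaction examples embedded in the rows above translate to this values sketch (only one of `allow`/`deny` may be set):

```yaml
# Sketch: redacting sensitive Layer 7 data in Hubble flows.
hubble:
  redact:
    enabled: true
    http:
      urlQuery: true
      userInfo: true
      headers:
        deny:                    # mutually exclusive with `allow`
          - Authorization
          - Proxy-Authorization
    kafka:
      apiKey: true
```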
| hubble.relay.affinity | object | `{"podAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"k8s-app":"cilium"}},"topologyKey":"kubernetes.io/hostname"}]}}` | Affinity for hubble-relay |
|
||||
| hubble.relay.annotations | object | `{}` | Annotations to be added to all top-level hubble-relay objects (resources under templates/hubble-relay) |
|
||||
| hubble.relay.dialTimeout | string | `nil` | Dial timeout to connect to the local hubble instance to receive peer information (e.g. "30s"). |
|
||||
| hubble.relay.enabled | bool | `false` | Enable Hubble Relay (requires hubble.enabled=true) |
|
||||
| hubble.relay.extraEnv | list | `[]` | Additional hubble-relay environment variables. |
|
||||
@@ -418,7 +458,7 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| hubble.relay.extraVolumes | list | `[]` | Additional hubble-relay volumes. |
|
||||
| hubble.relay.gops.enabled | bool | `true` | Enable gops for hubble-relay |
|
||||
| hubble.relay.gops.port | int | `9893` | Configure gops listen port for hubble-relay |
|
||||
| hubble.relay.image | object | `{"digest":"sha256:dbef89f924a927043d02b40c18e417c1ea0e8f58b44523b80fef7e3652db24d4","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-relay","tag":"v1.14.5","useDigest":true}` | Hubble-relay container image. |
|
||||
| hubble.relay.image | object | `{"digest":"sha256:48480053930e884adaeb4141259ff1893a22eb59707906c6d38de2fe01916cb0","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-relay","tag":"v1.15.2","useDigest":true}` | Hubble-relay container image. |
|
||||
| hubble.relay.listenHost | string | `""` | Host to listen to. Specify an empty string to bind to all the interfaces. |
|
||||
| hubble.relay.listenPort | string | `"4245"` | Port to listen to. |
|
||||
| hubble.relay.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
|
||||
@@ -450,9 +490,9 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| hubble.relay.sortBufferDrainTimeout | string | `nil` | When the per-request flows sort buffer is not full, a flow is drained every time this timeout is reached (only affects requests in follow-mode) (e.g. "1s"). |
|
||||
| hubble.relay.sortBufferLenMax | string | `nil` | Max number of flows that can be buffered for sorting before being sent to the client (per request) (e.g. 100). |
|
||||
| hubble.relay.terminationGracePeriodSeconds | int | `1` | Configure termination grace period for hubble relay Deployment. |
|
||||
| hubble.relay.tls | object | `{"client":{"cert":"","key":""},"server":{"cert":"","enabled":false,"extraDnsNames":[],"extraIpAddresses":[],"key":"","mtls":false}}` | TLS configuration for Hubble Relay |
|
||||
| hubble.relay.tls | object | `{"client":{"cert":"","key":""},"server":{"cert":"","enabled":false,"extraDnsNames":[],"extraIpAddresses":[],"key":"","mtls":false,"relayName":"ui.hubble-relay.cilium.io"}}` | TLS configuration for Hubble Relay |
|
||||
| hubble.relay.tls.client | object | `{"cert":"","key":""}` | base64 encoded PEM values for the hubble-relay client certificate and private key This keypair is presented to Hubble server instances for mTLS authentication and is required when hubble.tls.enabled is true. These values need to be set manually if hubble.tls.auto.enabled is false. |
|
||||
| hubble.relay.tls.server | object | `{"cert":"","enabled":false,"extraDnsNames":[],"extraIpAddresses":[],"key":"","mtls":false}` | base64 encoded PEM values for the hubble-relay server certificate and private key |
|
||||
| hubble.relay.tls.server | object | `{"cert":"","enabled":false,"extraDnsNames":[],"extraIpAddresses":[],"key":"","mtls":false,"relayName":"ui.hubble-relay.cilium.io"}` | base64 encoded PEM values for the hubble-relay server certificate and private key |
|
||||
| hubble.relay.tls.server.extraDnsNames | list | `[]` | Extra DNS names added to certificate when it's auto generated |
| hubble.relay.tls.server.extraIpAddresses | list | `[]` | Extra IP addresses added to certificate when it's auto generated |
|
||||
| hubble.relay.tolerations | list | `[]` | Node tolerations for pod assignment on nodes with taints ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
|
||||
@@ -472,10 +512,13 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| hubble.tls.server.extraDnsNames | list | `[]` | Extra DNS names added to certificate when it's auto generated |
|
||||
| hubble.tls.server.extraIpAddresses | list | `[]` | Extra IP addresses added to certificate when it's auto generated |
|
||||
| hubble.ui.affinity | object | `{}` | Affinity for hubble-ui |
|
||||
| hubble.ui.annotations | object | `{}` | Annotations to be added to all top-level hubble-ui objects (resources under templates/hubble-ui) |
|
||||
| hubble.ui.backend.extraEnv | list | `[]` | Additional hubble-ui backend environment variables. |
|
||||
| hubble.ui.backend.extraVolumeMounts | list | `[]` | Additional hubble-ui backend volumeMounts. |
|
||||
| hubble.ui.backend.extraVolumes | list | `[]` | Additional hubble-ui backend volumes. |
|
||||
| hubble.ui.backend.image | object | `{"digest":"sha256:1f86f3400827a0451e6332262467f894eeb7caf0eb8779bd951e2caa9d027cbe","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-ui-backend","tag":"v0.12.1","useDigest":true}` | Hubble-ui backend image. |
|
||||
| hubble.ui.backend.image | object | `{"digest":"sha256:1e7657d997c5a48253bb8dc91ecee75b63018d16ff5e5797e5af367336bc8803","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-ui-backend","tag":"v0.13.0","useDigest":true}` | Hubble-ui backend image. |
|
||||
| hubble.ui.backend.livenessProbe.enabled | bool | `false` | Enable liveness probe for Hubble-ui backend (requires Hubble-ui 0.12+) |
|
||||
| hubble.ui.backend.readinessProbe.enabled | bool | `false` | Enable readiness probe for Hubble-ui backend (requires Hubble-ui 0.12+) |
|
||||
| hubble.ui.backend.resources | object | `{}` | Resource requests and limits for the 'backend' container of the 'hubble-ui' deployment. |
|
||||
| hubble.ui.backend.securityContext | object | `{}` | Hubble-ui backend security context. |
|
||||
| hubble.ui.baseUrl | string | `"/"` | Defines base url prefix for all hubble-ui http requests. It needs to be changed in case if ingress for hubble-ui is configured under some sub-path. Trailing `/` is required for custom path, ex. `/service-map/` |
|
||||
@@ -483,7 +526,7 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| hubble.ui.frontend.extraEnv | list | `[]` | Additional hubble-ui frontend environment variables. |
|
||||
| hubble.ui.frontend.extraVolumeMounts | list | `[]` | Additional hubble-ui frontend volumeMounts. |
|
||||
| hubble.ui.frontend.extraVolumes | list | `[]` | Additional hubble-ui frontend volumes. |
|
||||
| hubble.ui.frontend.image | object | `{"digest":"sha256:9e5f81ee747866480ea1ac4630eb6975ff9227f9782b7c93919c081c33f38267","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-ui","tag":"v0.12.1","useDigest":true}` | Hubble-ui frontend image. |
|
||||
| hubble.ui.frontend.image | object | `{"digest":"sha256:7d663dc16538dd6e29061abd1047013a645e6e69c115e008bee9ea9fef9a6666","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-ui","tag":"v0.13.0","useDigest":true}` | Hubble-ui frontend image. |
|
||||
| hubble.ui.frontend.resources | object | `{}` | Resource requests and limits for the 'frontend' container of the 'hubble-ui' deployment. |
|
||||
| hubble.ui.frontend.securityContext | object | `{}` | Hubble-ui frontend security context. |
|
||||
| hubble.ui.frontend.server.ipv6 | object | `{"enabled":true}` | Controls server listener for ipv6 |
|
||||
@@ -510,14 +553,15 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| hubble.ui.updateStrategy | object | `{"rollingUpdate":{"maxUnavailable":1},"type":"RollingUpdate"}` | hubble-ui update strategy. |
|
||||
| identityAllocationMode | string | `"crd"` | Method to use for identity allocation (`crd` or `kvstore`). |
|
||||
| identityChangeGracePeriod | string | `"5s"` | Time to wait before using new identity on endpoint identity change. |
|
||||
| image | object | `{"digest":"sha256:d3b287029755b6a47dee01420e2ea469469f1b174a2089c10af7e5e9289ef05b","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.14.5","useDigest":true}` | Agent container image. |
|
||||
| image | object | `{"digest":"sha256:bfeb3f1034282444ae8c498dca94044df2b9c9c8e7ac678e0b43c849f0b31746","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.15.2","useDigest":true}` | Agent container image. |
|
||||
| imagePullSecrets | string | `nil` | Configure image pull secrets for pulling container images |
|
||||
| ingressController.default | bool | `false` | Set cilium ingress controller to be the default ingress controller This will let cilium ingress controller route entries without ingress class set |
|
||||
| ingressController.defaultSecretName | string | `nil` | Default secret name for ingresses without .spec.tls[].secretName set. |
|
||||
| ingressController.defaultSecretNamespace | string | `nil` | Default secret namespace for ingresses without .spec.tls[].secretName set. |
|
||||
| ingressController.enableProxyProtocol | bool | `false` | Enable proxy protocol for all Ingress listeners. Note that _only_ Proxy protocol traffic will be accepted once this is enabled. |
|
||||
| ingressController.enabled | bool | `false` | Enable cilium ingress controller This will automatically set enable-envoy-config as well. |
|
||||
| ingressController.enforceHttps | bool | `true` | Enforce https for host having matching TLS host in Ingress. Incoming traffic to http listener will return 308 http error code with respective location in header. |
|
||||
| ingressController.ingressLBAnnotationPrefixes | list | `["service.beta.kubernetes.io","service.kubernetes.io","cloud.google.com"]` | IngressLBAnnotations are the annotation prefixes, which are used to filter annotations to propagate from Ingress to the Load Balancer service |
|
||||
| ingressController.ingressLBAnnotationPrefixes | list | `["service.beta.kubernetes.io","service.kubernetes.io","cloud.google.com"]` | IngressLBAnnotations are the annotation and label prefixes, which are used to filter annotations and/or labels to propagate from Ingress to the Load Balancer service |
|
||||
| ingressController.loadbalancerMode | string | `"dedicated"` | Default ingress load balancer mode. Supported values: shared, dedicated. For granular control, use the following annotation on the Ingress resource: ingress.cilium.io/loadbalancer-mode: shared|dedicated. |
|
||||
| ingressController.secretsNamespace | object | `{"create":true,"name":"cilium-secrets","sync":true}` | SecretsNamespace is the namespace in which envoy SDS will retrieve TLS secrets from. |
|
||||
| ingressController.secretsNamespace.create | bool | `true` | Create secrets namespace for Ingress. |
|
||||
@@ -550,9 +594,9 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| ipv6.enabled | bool | `false` | Enable IPv6 support. |
|
||||
| ipv6NativeRoutingCIDR | string | `""` | Allows to explicitly specify the IPv6 CIDR for native routing. When specified, Cilium assumes networking for this CIDR is preconfigured and hands traffic destined for that range to the Linux network stack without applying any SNAT. Generally speaking, specifying a native routing CIDR implies that Cilium can depend on the underlying networking stack to route packets to their destination. To offer a concrete example, if Cilium is configured to use direct routing and the Kubernetes CIDR is included in the native routing CIDR, the user must configure the routes to reach pods, either manually or by setting the auto-direct-node-routes flag. |
|
||||
| k8s | object | `{}` | Configure Kubernetes specific configuration |
|
||||
| k8sClientRateLimit | object | `{"burst":10,"qps":5}` | Configure the client side rate limit for the agent and operator If the amount of requests to the Kubernetes API server exceeds the configured rate limit, the agent and operator will start to throttle requests by delaying them until there is budget or the request times out. |
|
||||
| k8sClientRateLimit.burst | int | `10` | The burst request rate in requests per second. The rate limiter will allow short bursts with a higher rate. |
|
||||
| k8sClientRateLimit.qps | int | `5` | The sustained request rate in requests per second. |
|
||||
| k8sClientRateLimit | object | `{"burst":null,"qps":null}` | Configure the client side rate limit for the agent and operator If the amount of requests to the Kubernetes API server exceeds the configured rate limit, the agent and operator will start to throttle requests by delaying them until there is budget or the request times out. |
|
||||
| k8sClientRateLimit.burst | int | 10 for k8s up to 1.26. 20 for k8s version 1.27+ | The burst request rate in requests per second. The rate limiter will allow short bursts with a higher rate. |
|
||||
| k8sClientRateLimit.qps | int | 5 for k8s up to 1.26. 10 for k8s version 1.27+ | The sustained request rate in requests per second. |
|
||||
| k8sNetworkPolicy.enabled | bool | `true` | Enable support for K8s NetworkPolicy |
|
||||
| k8sServiceHost | string | `""` | Kubernetes service host |
|
||||
| k8sServicePort | string | `""` | Kubernetes service port |
|
||||
@@ -570,7 +614,8 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| l7Proxy | bool | `true` | Enable Layer 7 network policy. |
|
||||
| livenessProbe.failureThreshold | int | `10` | failure threshold of liveness probe |
|
||||
| livenessProbe.periodSeconds | int | `30` | interval between checks of the liveness probe |
|
||||
| loadBalancer | object | `{"l7":{"algorithm":"round_robin","backend":"disabled","ports":[]}}` | Configure service load balancing |
|
||||
| loadBalancer | object | `{"acceleration":"disabled","l7":{"algorithm":"round_robin","backend":"disabled","ports":[]}}` | Configure service load balancing |
|
||||
| loadBalancer.acceleration | string | `"disabled"` | acceleration is the option to accelerate service handling via XDP Applicable values can be: disabled (do not use XDP), native (XDP BPF program is run directly out of the networking driver's early receive path), or best-effort (use native mode XDP acceleration on devices that support it). |
|
||||
| loadBalancer.l7 | object | `{"algorithm":"round_robin","backend":"disabled","ports":[]}` | L7 LoadBalancer |
|
||||
| loadBalancer.l7.algorithm | string | `"round_robin"` | Default LB algorithm The default LB algorithm to be used for services, which can be overridden by the service annotation (e.g. service.cilium.io/lb-l7-algorithm) Applicable values: round_robin, least_request, random |
|
||||
| loadBalancer.l7.backend | string | `"disabled"` | Enable L7 service load balancing via envoy proxy. The request to a k8s service, which has specific annotation e.g. service.cilium.io/lb-l7, will be forwarded to the local backend proxy to be load balanced to the service endpoints. Please refer to docs for supported annotations for more configuration. Applicable values: - envoy: Enable L7 load balancing via envoy proxy. This will automatically set enable-envoy-config as well. - disabled: Disable L7 load balancing by way of service annotation. |
|
||||
@@ -583,13 +628,15 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| name | string | `"cilium"` | Agent container name. |
|
||||
| nat46x64Gateway | object | `{"enabled":false}` | Configure standalone NAT46/NAT64 gateway |
|
||||
| nat46x64Gateway.enabled | bool | `false` | Enable RFC8215-prefixed translation |
|
||||
| nodePort | object | `{"autoProtectPortRange":true,"bindProtection":true,"enableHealthCheck":true,"enabled":false}` | Configure N-S k8s service loadbalancing |
|
||||
| nodePort | object | `{"autoProtectPortRange":true,"bindProtection":true,"enableHealthCheck":true,"enableHealthCheckLoadBalancerIP":false,"enabled":false}` | Configure N-S k8s service loadbalancing |
|
||||
| nodePort.autoProtectPortRange | bool | `true` | Append NodePort range to ip_local_reserved_ports if clash with ephemeral ports is detected. |
|
||||
| nodePort.bindProtection | bool | `true` | Set to true to prevent applications binding to service ports. |
|
||||
| nodePort.enableHealthCheck | bool | `true` | Enable healthcheck nodePort server for NodePort services |
|
||||
| nodePort.enableHealthCheckLoadBalancerIP | bool | `false` | Enable access of the healthcheck nodePort on the LoadBalancerIP. Needs EnableHealthCheck to be enabled |
|
||||
| nodePort.enabled | bool | `false` | Enable the Cilium NodePort service implementation. |
|
||||
| nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node selector for cilium-agent. |
|
||||
| nodeinit.affinity | object | `{}` | Affinity for cilium-nodeinit |
|
||||
| nodeinit.annotations | object | `{}` | Annotations to be added to all top-level nodeinit objects (resources under templates/cilium-nodeinit) |
|
||||
| nodeinit.bootstrapFile | string | `"/tmp/cilium-bootstrap.d/cilium-bootstrap-time"` | bootstrapFile is the location of the file where the bootstrap timestamp is written by the node-init DaemonSet |
|
||||
| nodeinit.enabled | bool | `false` | Enable the node initialization DaemonSet |
|
||||
| nodeinit.extraEnv | list | `[]` | Additional nodeinit environment variables. |
|
||||
@@ -607,6 +654,7 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| nodeinit.tolerations | list | `[{"operator":"Exists"}]` | Node tolerations for nodeinit scheduling to nodes with taints ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
|
||||
| nodeinit.updateStrategy | object | `{"type":"RollingUpdate"}` | node-init update strategy |
|
||||
| operator.affinity | object | `{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"io.cilium/app":"operator"}},"topologyKey":"kubernetes.io/hostname"}]}}` | Affinity for cilium-operator |
|
||||
| operator.annotations | object | `{}` | Annotations to be added to all top-level cilium-operator objects (resources under templates/cilium-operator) |
|
||||
| operator.dashboards | object | `{"annotations":{},"enabled":false,"label":"grafana_dashboard","labelValue":"1","namespace":null}` | Grafana dashboards for cilium-operator grafana can import dashboards based on the label and value ref: https://github.com/grafana/helm-charts/tree/main/charts/grafana#sidecar-for-dashboards |
|
||||
| operator.dnsPolicy | string | `""` | DNS policy for Cilium operator pods. Ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy |
|
||||
| operator.enabled | bool | `true` | Enable the cilium-operator component (required). |
|
||||
@@ -618,7 +666,7 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| operator.extraVolumes | list | `[]` | Additional cilium-operator volumes. |
|
||||
| operator.identityGCInterval | string | `"15m0s"` | Interval for identity garbage collection. |
|
||||
| operator.identityHeartbeatTimeout | string | `"30m0s"` | Timeout for identity heartbeats. |
|
||||
| operator.image | object | `{"alibabacloudDigest":"sha256:e0152c498ba73c56a82eee2a706c8f400e9a6999c665af31a935bdf08e659bc3","awsDigest":"sha256:785ccf1267d0ed3ba9e4bd8166577cb4f9e4ce996af26b27c9d5c554a0d5b09a","azureDigest":"sha256:9203f5583aa34e716d7a6588ebd144e43ce3b77873f578fc12b2679e33591353","genericDigest":"sha256:303f9076bdc73b3fc32aaedee64a14f6f44c8bb08ee9e3956d443021103ebe7a","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/operator","suffix":"","tag":"v1.14.5","useDigest":true}` | cilium-operator image. |
|
||||
| operator.image | object | `{"alibabacloudDigest":"sha256:e2dafa4c04ab05392a28561ab003c2894ec1fcc3214a4dfe2efd6b7d58a66650","awsDigest":"sha256:3f459999b753bfd8626f8effdf66720a996b2c15c70f4e418011d00de33552eb","azureDigest":"sha256:568293cebc27c01a39a9341b1b2578ebf445228df437f8b318adbbb2c4db842a","genericDigest":"sha256:4dd8f67630f45fcaf58145eb81780b677ef62d57632d7e4442905ad3226a9088","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/operator","suffix":"","tag":"v1.15.2","useDigest":true}` | cilium-operator image. |
|
||||
| operator.nodeGCInterval | string | `"5m0s"` | Interval for cilium node garbage collection. |
|
||||
| operator.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for cilium-operator pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
|
||||
| operator.podAnnotations | object | `{}` | Annotations to be added to cilium-operator pods |
|
||||
@@ -631,10 +679,11 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| operator.pprof.enabled | bool | `false` | Enable pprof for cilium-operator |
|
||||
| operator.pprof.port | int | `6061` | Configure pprof listen port for cilium-operator |
|
||||
| operator.priorityClassName | string | `""` | The priority class to use for cilium-operator |
|
||||
| operator.prometheus | object | `{"enabled":false,"port":9963,"serviceMonitor":{"annotations":{},"enabled":false,"interval":"10s","labels":{},"metricRelabelings":null,"relabelings":null}}` | Enable prometheus metrics for cilium-operator on the configured port at /metrics |
|
||||
| operator.prometheus | object | `{"enabled":true,"port":9963,"serviceMonitor":{"annotations":{},"enabled":false,"interval":"10s","jobLabel":"","labels":{},"metricRelabelings":null,"relabelings":null}}` | Enable prometheus metrics for cilium-operator on the configured port at /metrics |
|
||||
| operator.prometheus.serviceMonitor.annotations | object | `{}` | Annotations to add to ServiceMonitor cilium-operator |
|
||||
| operator.prometheus.serviceMonitor.enabled | bool | `false` | Enable service monitors. This requires the prometheus CRDs to be available (see https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml) |
|
||||
| operator.prometheus.serviceMonitor.interval | string | `"10s"` | Interval for scrape metrics. |
|
||||
| operator.prometheus.serviceMonitor.jobLabel | string | `""` | jobLabel to add for ServiceMonitor cilium-operator |
|
||||
| operator.prometheus.serviceMonitor.labels | object | `{}` | Labels to add to ServiceMonitor cilium-operator |
|
||||
| operator.prometheus.serviceMonitor.metricRelabelings | string | `nil` | Metrics relabeling configs for the ServiceMonitor cilium-operator |
|
||||
| operator.prometheus.serviceMonitor.relabelings | string | `nil` | Relabeling configs for the ServiceMonitor cilium-operator |
|
||||
@@ -656,16 +705,18 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| podAnnotations | object | `{}` | Annotations to be added to agent pods |
|
||||
| podLabels | object | `{}` | Labels to be added to agent pods |
|
||||
| podSecurityContext | object | `{}` | Security Context for cilium-agent pods. |
|
||||
| policyCIDRMatchMode | string | `nil` | policyCIDRMatchMode is a list of entities that may be selected by CIDR selector. The possible value is "nodes". |
|
||||
| policyEnforcementMode | string | `"default"` | The agent can be put into one of the three policy enforcement modes: default, always and never. ref: https://docs.cilium.io/en/stable/security/policy/intro/#policy-enforcement-modes |
|
||||
| pprof.address | string | `"localhost"` | Configure pprof listen address for cilium-agent |
|
||||
| pprof.enabled | bool | `false` | Enable pprof for cilium-agent |
|
||||
| pprof.port | int | `6060` | Configure pprof listen port for cilium-agent |
|
||||
| preflight.affinity | object | `{"podAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"k8s-app":"cilium"}},"topologyKey":"kubernetes.io/hostname"}]}}` | Affinity for cilium-preflight |
|
||||
| preflight.annotations | object | `{}` | Annotations to be added to all top-level preflight objects (resources under templates/cilium-preflight) |
|
||||
| preflight.enabled | bool | `false` | Enable Cilium pre-flight resources (required for upgrade) |
|
||||
| preflight.extraEnv | list | `[]` | Additional preflight environment variables. |
|
||||
| preflight.extraVolumeMounts | list | `[]` | Additional preflight volumeMounts. |
|
||||
| preflight.extraVolumes | list | `[]` | Additional preflight volumes. |
|
||||
| preflight.image | object | `{"digest":"sha256:d3b287029755b6a47dee01420e2ea469469f1b174a2089c10af7e5e9289ef05b","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.14.5","useDigest":true}` | Cilium pre-flight image. |
|
||||
| preflight.image | object | `{"digest":"sha256:bfeb3f1034282444ae8c498dca94044df2b9c9c8e7ac678e0b43c849f0b31746","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.15.2","useDigest":true}` | Cilium pre-flight image. |
|
||||
| preflight.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for preflight pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
|
||||
| preflight.podAnnotations | object | `{}` | Annotations to be added to preflight pods |
|
||||
| preflight.podDisruptionBudget.enabled | bool | `false` | enable PodDisruptionBudget ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/ |
|
||||
@@ -682,11 +733,13 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| preflight.updateStrategy | object | `{"type":"RollingUpdate"}` | preflight update strategy |
|
||||
| preflight.validateCNPs | bool | `true` | By default, the installed CNPs should always be validated before upgrading Cilium. This makes sure the policies deployed in the cluster use the right schema. |
|
||||
| priorityClassName | string | `""` | The priority class to use for cilium-agent. |
|
||||
| prometheus | object | `{"enabled":false,"metrics":null,"port":9962,"serviceMonitor":{"annotations":{},"enabled":false,"interval":"10s","labels":{},"metricRelabelings":null,"relabelings":[{"replacement":"${1}","sourceLabels":["__meta_kubernetes_pod_node_name"],"targetLabel":"node"}],"trustCRDsExist":false}}` | Configure prometheus metrics on the configured port at /metrics |
|
||||
| prometheus | object | `{"controllerGroupMetrics":["write-cni-file","sync-host-ips","sync-lb-maps-with-k8s-services"],"enabled":false,"metrics":null,"port":9962,"serviceMonitor":{"annotations":{},"enabled":false,"interval":"10s","jobLabel":"","labels":{},"metricRelabelings":null,"relabelings":[{"replacement":"${1}","sourceLabels":["__meta_kubernetes_pod_node_name"],"targetLabel":"node"}],"trustCRDsExist":false}}` | Configure prometheus metrics on the configured port at /metrics |
|
||||
| prometheus.controllerGroupMetrics | list | `["write-cni-file","sync-host-ips","sync-lb-maps-with-k8s-services"]` | Enable controller group metrics for monitoring specific Cilium subsystems. The value is a list of controller group names; the special values "all" and "none" are supported. The set of controller group names is not guaranteed to be stable between Cilium versions. |
|
||||
| prometheus.metrics | string | `nil` | Metrics that should be enabled or disabled from the default metric list. The list is expected to be separated by a space. (+metric_foo to enable metric_foo , -metric_bar to disable metric_bar). ref: https://docs.cilium.io/en/stable/observability/metrics/ |
|
||||
| prometheus.serviceMonitor.annotations | object | `{}` | Annotations to add to ServiceMonitor cilium-agent |
|
||||
| prometheus.serviceMonitor.enabled | bool | `false` | Enable service monitors. This requires the prometheus CRDs to be available (see https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml) |
|
||||
| prometheus.serviceMonitor.interval | string | `"10s"` | Interval for scrape metrics. |
|
||||
| prometheus.serviceMonitor.jobLabel | string | `""` | jobLabel to add for ServiceMonitor cilium-agent |
|
||||
| prometheus.serviceMonitor.labels | object | `{}` | Labels to add to ServiceMonitor cilium-agent |
|
||||
| prometheus.serviceMonitor.metricRelabelings | string | `nil` | Metrics relabeling configs for the ServiceMonitor cilium-agent |
|
||||
| prometheus.serviceMonitor.relabelings | list | `[{"replacement":"${1}","sourceLabels":["__meta_kubernetes_pod_node_name"],"targetLabel":"node"}]` | Relabeling configs for the ServiceMonitor cilium-agent |
|
||||
@@ -698,7 +751,7 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| rbac.create | bool | `true` | Enable creation of Role-Based Access Control (RBAC) configuration. |
|
||||
| readinessProbe.failureThreshold | int | `3` | failure threshold of readiness probe |
|
||||
| readinessProbe.periodSeconds | int | `30` | interval between checks of the readiness probe |
|
||||
| remoteNodeIdentity | bool | `true` | Enable use of the remote node identity. ref: https://docs.cilium.io/en/v1.7/install/upgrade/#configmap-remote-node-identity |
|
||||
| remoteNodeIdentity | bool | `true` | Enable use of the remote node identity. ref: https://docs.cilium.io/en/v1.7/install/upgrade/#configmap-remote-node-identity Deprecated without replacement in 1.15. To be removed in 1.16. |
|
||||
| resourceQuotas | object | `{"cilium":{"hard":{"pods":"10k"}},"enabled":false,"operator":{"hard":{"pods":"15"}}}` | Enable resource quotas for priority classes used in the cluster. |
|
||||
| resources | object | `{}` | Agent resource limits & requests ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
|
||||
| rollOutCiliumPods | bool | `false` | Roll out cilium agent pods automatically when configmap is updated. |
|
||||
@@ -715,6 +768,7 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| serviceAccounts.clustermeshcertgen | object | `{"annotations":{},"automount":true,"create":true,"name":"clustermesh-apiserver-generate-certs"}` | Clustermeshcertgen is used if clustermesh.apiserver.tls.auto.method=cronJob |
|
||||
| serviceAccounts.hubblecertgen | object | `{"annotations":{},"automount":true,"create":true,"name":"hubble-generate-certs"}` | Hubblecertgen is used if hubble.tls.auto.method=cronJob |
|
||||
| serviceAccounts.nodeinit.enabled | bool | `false` | Enabled is temporary until https://github.com/cilium/cilium-cli/issues/1396 is implemented. Cilium CLI doesn't create the SAs for node-init, thus the workaround. Helm is not affected by this issue. Name and automount can be configured, if enabled is set to true. Otherwise, they are ignored. Enabled can be removed once the issue is fixed. Cilium-nodeinit DS must also be fixed. |
|
||||
| serviceNoBackendResponse | string | `"reject"` | Configure what the response should be to traffic for a service without backends. "reject" only works on kernels >= 5.10; on older kernels Cilium falls back to "drop". Possible values: - reject (default) - drop |
|
||||
| sleepAfterInit | bool | `false` | Do not run the Cilium agent when running in clean mode. Useful to completely uninstall Cilium, as it will stop Cilium from starting and creating artifacts on the node. |
|
||||
| socketLB | object | `{"enabled":false}` | Configure socket LB |
|
||||
| socketLB.enabled | bool | `false` | Enable socket LB |
|
||||
@@ -735,7 +789,6 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| tls.caBundle.useSecret | bool | `false` | Use a Secret instead of a ConfigMap. |
|
||||
| tls.secretsBackend | string | `"local"` | This configures how the Cilium agent loads the secrets used by TLS-aware CiliumNetworkPolicies (namely the secrets referenced by terminatingTLS and originatingTLS). Possible values: - local - k8s |
|
||||
| tolerations | list | `[{"operator":"Exists"}]` | Node tolerations for agent scheduling to nodes with taints ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
|
||||
| tunnel | string | `"vxlan"` | Configure the encapsulation configuration for communication between nodes. Deprecated in favor of tunnelProtocol and routingMode. To be removed in 1.15. Possible values: - disabled - vxlan - geneve |
|
||||
| tunnelPort | int | Port 8472 for VXLAN, Port 6081 for Geneve | Configure VXLAN and Geneve tunnel port. |
|
||||
| tunnelProtocol | string | `"vxlan"` | Tunneling protocol to use in tunneling mode and for ad-hoc tunnels. Possible values: - "" - vxlan - geneve |
|
||||
| updateStrategy | object | `{"rollingUpdate":{"maxUnavailable":2},"type":"RollingUpdate"}` | Cilium agent update strategy |
|
||||
|
||||
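The values changes above bump the chart to Cilium 1.15.2 and adjust several defaults (for example, `k8sClientRateLimit` now defaults to `null` with version-dependent values, `operator.prometheus.enabled` becomes `true`, and `nodePort` gains `enableHealthCheckLoadBalancerIP`). A minimal sketch of pinning a few of these values explicitly at install time, assuming the upstream `cilium/cilium` Helm chart; the release name and namespace are placeholders:

```bash
# Minimal sketch: pin some of the values documented above instead of relying
# on the new version-dependent defaults. Release name and namespace are examples.
helm repo add cilium https://helm.cilium.io/
helm repo update

helm upgrade --install cilium cilium/cilium \
  --version 1.15.2 \
  --namespace kube-system \
  --set k8sClientRateLimit.qps=10 \
  --set k8sClientRateLimit.burst=20 \
  --set operator.prometheus.enabled=true \
  --set prometheus.enabled=true \
  --set nodePort.enabled=true \
  --set nodePort.enableHealthCheckLoadBalancerIP=false
```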
@@ -11,9 +11,9 @@ set -o nounset
|
||||
# dependencies on anything that is part of the startup script
|
||||
# itself, and can be safely run multiple times per node (e.g. in
|
||||
# case of a restart).
|
||||
if [[ "$(iptables-save | grep -c 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN')" != "0" ]];
|
||||
if [[ "$(iptables-save | grep -E -c 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN')" != "0" ]];
|
||||
then
|
||||
echo 'Deleting iptables rules created by the AWS CNI VPC plugin'
|
||||
iptables-save | grep -v 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN' | iptables-restore
|
||||
iptables-save | grep -E -v 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN' | iptables-restore
|
||||
fi
|
||||
echo 'Done!'
|
||||
|
||||
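The `grep -E` change above matters because plain `grep` uses basic regular expressions, where an unescaped `|` is a literal pipe character rather than alternation, so the AWS chain pattern could never match. A quick local check (the sample input is made up for illustration):

```bash
# Why the node-init script switched to `grep -E`: in basic-regex mode the
# unescaped `|` is literal, so the alternation never matches.
printf 'AWS-SNAT-CHAIN-0\nKUBE-SERVICES\n' > /tmp/sample-rules.txt

grep -c  'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN' /tmp/sample-rules.txt || true  # prints 0
grep -Ec 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN' /tmp/sample-rules.txt          # prints 1
```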
@@ -27,7 +27,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -131,7 +134,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -271,7 +277,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -394,7 +403,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -511,7 +523,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -636,7 +651,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"description": "BPF memory usage in the entire system including components not managed by Cilium.",
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
@@ -759,7 +777,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"description": "Fill percentage of BPF maps, tagged by map name",
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
@@ -870,7 +891,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -971,7 +995,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -1072,7 +1099,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -1173,7 +1203,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -1274,7 +1307,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -1375,7 +1411,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -1511,7 +1550,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -1612,7 +1654,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"decimals": 2,
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
@@ -1715,7 +1760,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -1816,7 +1864,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -1915,7 +1966,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -2016,7 +2070,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -2117,7 +2174,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -2239,7 +2299,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"decimals": 2,
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
@@ -2342,7 +2405,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"decimals": 2,
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
@@ -2445,7 +2511,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -2546,7 +2615,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -2647,7 +2719,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -2767,7 +2842,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -2864,7 +2942,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -2984,7 +3065,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -3150,7 +3234,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -3316,7 +3403,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -3482,7 +3572,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -3633,7 +3726,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"decimals": null,
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
@@ -3740,7 +3836,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -3837,7 +3936,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -3934,7 +4036,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -4047,7 +4152,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -4147,7 +4255,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -4270,7 +4381,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -4370,7 +4484,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -4518,7 +4635,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -4638,7 +4758,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -4740,7 +4863,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -4864,7 +4990,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -4966,7 +5095,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -5102,7 +5234,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -5218,7 +5353,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -5327,7 +5465,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -5455,7 +5596,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -5591,7 +5735,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -5676,7 +5823,7 @@
|
||||
"refId": "C"
|
||||
},
|
||||
{
|
||||
"expr": "sum(cilium_policy_change_total{k8s_app=\"cilium\", pod=~\"$pod\"}, outcome=\"fail\") by (pod)",
|
||||
"expr": "sum(cilium_policy_change_total{k8s_app=\"cilium\", pod=~\"$pod\", outcome=\"fail\"}) by (pod)",
|
||||
"format": "time_series",
|
||||
"intervalFactor": 1,
|
||||
"legendFormat": "policy change errors",
|
||||
@@ -5733,7 +5880,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -5841,7 +5991,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -5983,7 +6136,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"decimals": null,
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
@@ -6083,7 +6239,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"decimals": null,
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
@@ -6188,7 +6347,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -6298,7 +6460,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -6421,7 +6586,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -6542,7 +6710,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -6674,7 +6845,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -6775,7 +6949,10 @@
|
||||
"bars": false,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -6876,7 +7053,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -6977,7 +7157,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -7078,7 +7261,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -7178,7 +7364,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -7277,7 +7466,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -7376,7 +7568,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -7475,7 +7670,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -7578,7 +7776,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -7681,7 +7882,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -7784,7 +7988,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -7883,7 +8090,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -7982,7 +8192,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -8081,7 +8294,10 @@
|
||||
"bars": true,
|
||||
"dashLength": 10,
|
||||
"dashes": false,
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"custom": {}
|
||||
@@ -8182,6 +8398,21 @@
|
||||
"tags": [],
|
||||
"templating": {
|
||||
"list": [
|
||||
{
|
||||
"current": {},
|
||||
"hide": 0,
|
||||
"includeAll": false,
|
||||
"label": "Prometheus",
|
||||
"multi": false,
|
||||
"name": "DS_PROMETHEUS",
|
||||
"options": [],
|
||||
"query": "prometheus",
|
||||
"queryValue": "",
|
||||
"refresh": 1,
|
||||
"regex": "",
|
||||
"skipUrlSync": false,
|
||||
"type": "datasource"
|
||||
},
|
||||
{
|
||||
"allValue": "cilium.*",
|
||||
"current": {
|
||||
@@ -8189,7 +8420,10 @@
|
||||
"text": "All",
|
||||
"value": "$__all"
|
||||
},
|
||||
"datasource": "prometheus",
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"definition": "label_values(cilium_version, pod)",
|
||||
"hide": 0,
|
||||
"includeAll": true,
|
||||
|
||||
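The dashboard hunks above replace the hard-coded `"datasource": "prometheus"` strings with a templated `${DS_PROMETHEUS}` reference and add a matching `DS_PROMETHEUS` datasource variable. A sketch of a `jq` sanity check against the exported dashboard JSON (the file name is illustrative):

```bash
# Sketch: confirm no panel still hard-codes the datasource as a plain string
# and that the DS_PROMETHEUS template variable is defined.
DASHBOARD=cilium-dashboard.json  # illustrative path to the chart's dashboard JSON

# Panels whose datasource is still a plain string (should print nothing):
jq -r '.panels[] | select((.datasource | type) == "string") | .title' "$DASHBOARD"

# The templated datasource variable (should print "datasource"):
jq -r '.templating.list[] | select(.name == "DS_PROMETHEUS") | .type' "$DASHBOARD"
```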
@@ -301,6 +301,14 @@
|
||||
"resourceApiVersion": "V3"
|
||||
}
|
||||
},
|
||||
"bootstrapExtensions": [
|
||||
{
|
||||
"name": "envoy.bootstrap.internal_listener",
|
||||
"typed_config": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.bootstrap.internal_listener.v3.InternalListener"
|
||||
}
|
||||
}
|
||||
],
|
||||
"layeredRuntime": {
|
||||
"layers": [
|
||||
{
|
||||
|
||||
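The Envoy hunk above adds the `envoy.bootstrap.internal_listener` bootstrap extension to the generated bootstrap configuration. A sketch of checking a dumped bootstrap config for it with `jq` (the file path is illustrative):

```bash
# Sketch: verify the internal_listener bootstrap extension is present in a
# dumped Envoy bootstrap config. The file path is illustrative.
jq '.bootstrapExtensions[] | select(.name == "envoy.bootstrap.internal_listener")' \
  envoy-bootstrap.json
```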
@@ -3226,7 +3226,7 @@
|
||||
]
|
||||
},
|
||||
"timezone": "",
|
||||
"title": "Hubble",
|
||||
"title": "Hubble Metrics and Monitoring",
|
||||
"uid": "5HftnJAWz",
|
||||
"version": 24
|
||||
}
|
||||
|
||||
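The hunk above renames the Grafana dashboard from "Hubble" to "Hubble Metrics and Monitoring" while keeping its uid `5HftnJAWz`, so existing links and provisioning by uid keep working. A sketch of fetching the renamed dashboard through the Grafana HTTP API (host and token are placeholders):

```bash
# Sketch: fetch the renamed dashboard by its stable uid via the Grafana HTTP API.
GRAFANA_URL=http://localhost:3000   # placeholder
GRAFANA_TOKEN=changeme              # placeholder API token

curl -s -H "Authorization: Bearer ${GRAFANA_TOKEN}" \
  "${GRAFANA_URL}/api/dashboards/uid/5HftnJAWz" | jq -r '.dashboard.title'
```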
@@ -0,0 +1,602 @@
|
||||
{
|
||||
"__inputs": [
|
||||
{
|
||||
"name": "DS_PROMETHEUS",
|
||||
"label": "Prometheus",
|
||||
"description": "",
|
||||
"type": "datasource",
|
||||
"pluginId": "prometheus",
|
||||
"pluginName": "Prometheus"
|
||||
}
|
||||
],
|
||||
"__elements": {},
|
||||
"__requires": [
|
||||
{
|
||||
"type": "panel",
|
||||
"id": "bargauge",
|
||||
"name": "Bar gauge",
|
||||
"version": ""
|
||||
},
|
||||
{
|
||||
"type": "grafana",
|
||||
"id": "grafana",
|
||||
"name": "Grafana",
|
||||
"version": "9.4.7"
|
||||
},
|
||||
{
|
||||
"type": "datasource",
|
||||
"id": "prometheus",
|
||||
"name": "Prometheus",
|
||||
"version": "1.0.0"
|
||||
},
|
||||
{
|
||||
"type": "panel",
|
||||
"id": "timeseries",
|
||||
"name": "Time series",
|
||||
"version": ""
|
||||
}
|
||||
],
|
||||
"annotations": {
|
||||
"list": [
|
||||
{
|
||||
"builtIn": 1,
|
||||
"datasource": {
|
||||
"type": "datasource",
|
||||
"uid": "grafana"
|
||||
},
|
||||
"enable": true,
|
||||
"hide": true,
|
||||
"iconColor": "rgba(0, 211, 255, 1)",
|
||||
"name": "Annotations & Alerts",
|
||||
"target": {
|
||||
"limit": 100,
|
||||
"matchAny": false,
|
||||
"tags": [],
|
||||
"type": "dashboard"
|
||||
},
|
||||
"type": "dashboard"
|
||||
}
|
||||
]
|
||||
},
|
||||
"description": "",
|
||||
"editable": true,
|
||||
"fiscalYearStartMonth": 0,
|
||||
"gnetId": 16612,
|
||||
"graphTooltip": 0,
|
||||
"id": null,
|
||||
"links": [
|
||||
{
|
||||
"asDropdown": true,
|
||||
"icon": "external link",
|
||||
"includeVars": true,
|
||||
"keepTime": true,
|
||||
"tags": [
|
||||
"cilium-overview"
|
||||
],
|
||||
"targetBlank": false,
|
||||
"title": "Cilium Overviews",
|
||||
"tooltip": "",
|
||||
"type": "dashboards",
|
||||
"url": ""
|
||||
},
|
||||
{
|
||||
"asDropdown": true,
|
||||
"icon": "external link",
|
||||
"includeVars": false,
|
||||
"keepTime": true,
|
||||
"tags": [
|
||||
"hubble"
|
||||
],
|
||||
"targetBlank": false,
|
||||
"title": "Hubble",
|
||||
"tooltip": "",
|
||||
"type": "dashboards",
|
||||
"url": ""
|
||||
}
|
||||
],
|
||||
"liveNow": false,
|
||||
"panels": [
|
||||
{
|
||||
"collapsed": false,
|
||||
"gridPos": {
|
||||
"h": 1,
|
||||
"w": 24,
|
||||
"x": 0,
|
||||
"y": 0
|
||||
},
|
||||
"id": 2,
|
||||
"panels": [],
|
||||
"title": "DNS",
|
||||
"type": "row"
|
||||
},
|
||||
{
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"description": "",
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"color": {
|
||||
"mode": "palette-classic"
|
||||
},
|
||||
"custom": {
|
||||
"axisCenteredZero": false,
|
||||
"axisColorMode": "text",
|
||||
"axisLabel": "",
|
||||
"axisPlacement": "auto",
|
||||
"barAlignment": 0,
|
||||
"drawStyle": "line",
|
||||
"fillOpacity": 10,
|
||||
"gradientMode": "none",
|
||||
"hideFrom": {
|
||||
"legend": false,
|
||||
"tooltip": false,
|
||||
"viz": false
|
||||
},
|
||||
"lineInterpolation": "linear",
|
||||
"lineWidth": 1,
|
||||
"pointSize": 5,
|
||||
"scaleDistribution": {
|
||||
"type": "linear"
|
||||
},
|
||||
"showPoints": "auto",
|
||||
"spanNulls": false,
|
||||
"stacking": {
|
||||
"group": "A",
|
||||
"mode": "normal"
|
||||
},
|
||||
"thresholdsStyle": {
|
||||
"mode": "off"
|
||||
}
|
||||
},
|
||||
"mappings": [],
|
||||
"min": 0,
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
},
|
||||
{
|
||||
"color": "red",
|
||||
"value": 80
|
||||
}
|
||||
]
|
||||
},
|
||||
"unit": "reqps"
|
||||
},
|
||||
"overrides": []
|
||||
},
|
||||
"gridPos": {
|
||||
"h": 9,
|
||||
"w": 12,
|
||||
"x": 0,
|
||||
"y": 1
|
||||
},
|
||||
"id": 37,
|
||||
"options": {
|
||||
"legend": {
|
||||
"calcs": [
|
||||
"mean",
|
||||
"lastNotNull"
|
||||
],
|
||||
"displayMode": "table",
|
||||
"placement": "bottom",
|
||||
"showLegend": true
|
||||
},
|
||||
"tooltip": {
|
||||
"mode": "single",
|
||||
"sort": "none"
|
||||
}
|
||||
},
|
||||
"targets": [
|
||||
{
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"editorMode": "code",
|
||||
"expr": "sum(rate(hubble_dns_queries_total{cluster=~\"$cluster\", source_namespace=~\"$source_namespace\", destination_namespace=~\"$destination_namespace\"}[$__rate_interval])) by (source) > 0",
|
||||
"legendFormat": "{{source}}",
|
||||
"range": true,
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"title": "DNS queries",
|
||||
"type": "timeseries"
|
||||
},
|
||||
{
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"color": {
|
||||
"mode": "thresholds"
|
||||
},
|
||||
"mappings": [],
|
||||
"min": 0,
|
||||
"thresholds": {
|
||||
"mode": "absolute",
|
||||
"steps": [
|
||||
{
|
||||
"color": "green",
|
||||
"value": null
|
||||
}
|
||||
]
|
||||
},
|
||||
"unit": "reqps"
|
||||
},
|
||||
"overrides": []
|
||||
},
|
||||
"gridPos": {
|
||||
"h": 9,
|
||||
"w": 12,
|
||||
"x": 12,
|
||||
"y": 1
|
||||
},
|
||||
"id": 41,
|
||||
"options": {
|
||||
"displayMode": "gradient",
|
||||
"minVizHeight": 10,
|
||||
"minVizWidth": 0,
|
||||
"orientation": "horizontal",
|
||||
"reduceOptions": {
|
||||
"calcs": [
|
||||
"lastNotNull"
|
||||
],
|
||||
"fields": "",
|
||||
"values": false
|
||||
},
|
||||
"showUnfilled": true
|
||||
},
|
||||
"pluginVersion": "9.4.7",
|
||||
"targets": [
|
||||
{
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"editorMode": "code",
|
||||
"expr": "topk(10, sum(rate(hubble_dns_queries_total{cluster=~\"$cluster\", source_namespace=~\"$source_namespace\", destination_namespace=~\"$destination_namespace\"}[$__rate_interval])*60) by (query))",
|
||||
"legendFormat": "{{query}}",
|
||||
"range": true,
|
||||
"refId": "A"
|
||||
}
|
||||
],
|
||||
"title": "Top 10 DNS queries",
|
||||
"type": "bargauge"
|
||||
},
|
||||
{
|
||||
"datasource": {
|
||||
"type": "prometheus",
|
||||
"uid": "${DS_PROMETHEUS}"
|
||||
},
|
||||
"fieldConfig": {
|
||||
"defaults": {
|
||||
"color": {
|
||||
"mode": "palette-classic"
|
||||
},
|
||||
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "normal"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "reqps"
},
"overrides": []
},
"gridPos": {
"h": 9,
"w": 12,
"x": 0,
"y": 10
},
"id": 39,
"options": {
"legend": {
"calcs": [
"mean",
"lastNotNull"
],
"displayMode": "table",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "round(sum(rate(hubble_dns_queries_total{cluster=~\"$cluster\", source_namespace=~\"$source_namespace\", destination_namespace=~\"$destination_namespace\"}[$__rate_interval])) by (source) - sum(label_replace(sum(rate(hubble_dns_responses_total{cluster=~\"$cluster\", source_namespace=~\"$destination_namespace\", destination_namespace=~\"$source_namespace\"}[$__rate_interval])) by (destination), \"source\", \"$1\", \"destination\", \"(.*)\")) without (destination), 0.001) > 0",
"legendFormat": "{{source}}",
"range": true,
"refId": "A"
}
],
"title": "Missing DNS responses",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "normal"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "reqps"
},
"overrides": []
},
"gridPos": {
"h": 9,
"w": 12,
"x": 12,
"y": 10
},
"id": 43,
"options": {
"legend": {
"calcs": [
"mean",
"lastNotNull"
],
"displayMode": "table",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"mode": "single",
"sort": "none"
}
},
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "sum(rate(hubble_dns_responses_total{cluster=~\"$cluster\", source_namespace=~\"$destination_namespace\", destination_namespace=~\"$source_namespace\", rcode!=\"No Error\"}[$__rate_interval])) by (destination, rcode) > 0",
"legendFormat": "{{destination}}: {{rcode}}",
"range": true,
"refId": "A"
}
],
"title": "DNS errors",
"type": "timeseries"
}
],
"refresh": "",
"revision": 1,
"schemaVersion": 38,
"style": "dark",
"tags": [
"kubecon-demo"
],
"templating": {
"list": [
{
"current": {
"selected": false,
"text": "default",
"value": "default"
},
"hide": 0,
"includeAll": false,
"label": "Data Source",
"multi": false,
"name": "prometheus_datasource",
"options": [],
"query": "prometheus",
"queryValue": "",
"refresh": 1,
"regex": "(?!grafanacloud-usage|grafanacloud-ml-metrics).+",
"skipUrlSync": false,
"type": "datasource"
},
{
"current": {},
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"definition": "label_values(cilium_version, cluster)",
"hide": 0,
"includeAll": true,
"multi": true,
"name": "cluster",
"options": [],
"query": {
"query": "label_values(cilium_version, cluster)",
"refId": "StandardVariableQuery"
},
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 0,
"type": "query"
},
{
"allValue": ".*",
"current": {},
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"definition": "label_values(source_namespace)",
"hide": 0,
"includeAll": true,
"label": "Source Namespace",
"multi": true,
"name": "source_namespace",
"options": [],
"query": {
"query": "label_values(source_namespace)",
"refId": "StandardVariableQuery"
},
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 0,
"type": "query"
},
{
"allValue": ".*",
"current": {},
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"definition": "label_values(destination_namespace)",
"hide": 0,
"includeAll": true,
"label": "Destination Namespace",
"multi": true,
"name": "destination_namespace",
"options": [],
"query": {
"query": "label_values(destination_namespace)",
"refId": "StandardVariableQuery"
},
"refresh": 1,
"regex": "",
"skipUrlSync": false,
"sort": 0,
"type": "query"
}
]
},
"time": {
"from": "now-1h",
"to": "now"
},
"timepicker": {
"refresh_intervals": [
"10s",
"30s",
"1m",
"5m",
"15m",
"30m",
"1h",
"2h",
"1d"
],
"time_options": [
"5m",
"15m",
"1h",
"6h",
"12h",
"24h",
"2d",
"7d",
"30d"
]
},
"timezone": "",
"title": "Hubble / DNS Overview (Namespace)",
"uid": "_f0DUpY4k",
"version": 26,
"weekStart": ""
}

File diff suppressed because it is too large
@@ -100,7 +100,7 @@ then
# Since that version containerd no longer allows missing configuration for the CNI,
# not even for pods with hostNetwork set to true. Thus, we add a temporary one.
# This will be replaced with the real config by the agent pod.
echo -e "{\n\t"cniVersion": "0.3.1",\n\t"name": "cilium",\n\t"type": "cilium-cni"\n}" > /etc/cni/net.d/05-cilium.conf
echo -e '{\n\t"cniVersion": "0.3.1",\n\t"name": "cilium",\n\t"type": "cilium-cni"\n}' > /etc/cni/net.d/05-cilium.conf
fi

# Start containerd. It won't create it's CNI configuration file anymore.

@@ -6,6 +6,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cilium
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels:
app.kubernetes.io/part-of: cilium
rules:
@@ -82,6 +86,9 @@ rules:
resources:
- ciliumloadbalancerippools
- ciliumbgppeeringpolicies
- ciliumbgpnodeconfigs
- ciliumbgpadvertisements
- ciliumbgppeerconfigs
- ciliumclusterwideenvoyconfigs
- ciliumclusterwidenetworkpolicies
- ciliumegressgatewaypolicies
@@ -137,6 +144,7 @@ rules:
- ciliumendpoints/status
- ciliumendpoints
- ciliuml2announcementpolicies/status
- ciliumbgpnodeconfigs/status
verbs:
- patch
{{- end }}

@@ -3,6 +3,10 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cilium
{{- with .Values.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels:
app.kubernetes.io/part-of: cilium
roleRef:

@@ -16,6 +16,10 @@ kind: DaemonSet
|
||||
metadata:
|
||||
name: cilium
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
k8s-app: cilium
|
||||
app.kubernetes.io/part-of: cilium
|
||||
@@ -128,6 +132,7 @@ spec:
|
||||
failureThreshold: {{ .Values.startupProbe.failureThreshold }}
|
||||
periodSeconds: {{ .Values.startupProbe.periodSeconds }}
|
||||
successThreshold: 1
|
||||
initialDelaySeconds: 5
|
||||
{{- end }}
|
||||
livenessProbe:
|
||||
{{- if or .Values.keepDeprecatedProbes $defaultKeepDeprecatedProbes }}
|
||||
@@ -196,6 +201,11 @@ spec:
|
||||
fieldPath: metadata.namespace
|
||||
- name: CILIUM_CLUSTERMESH_CONFIG
|
||||
value: /var/lib/cilium/clustermesh/
|
||||
- name: GOMEMLIMIT
|
||||
valueFrom:
|
||||
resourceFieldRef:
|
||||
resource: limits.memory
|
||||
divisor: '1'
|
||||
{{- if .Values.k8sServiceHost }}
|
||||
- name: KUBERNETES_SERVICE_HOST
|
||||
value: {{ .Values.k8sServiceHost | quote }}
|
||||
@@ -371,6 +381,11 @@ spec:
|
||||
mountPropagation: {{ .mountPropagation }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if .Values.hubble.export.dynamic.enabled }}
|
||||
- name: hubble-flowlog-config
|
||||
mountPath: /flowlog-config
|
||||
readOnly: true
|
||||
{{- end }}
|
||||
{{- with .Values.extraVolumeMounts }}
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
@@ -387,7 +402,7 @@ spec:
|
||||
for i in {1..5}; do \
|
||||
[ -S /var/run/cilium/monitor1_2.sock ] && break || sleep 10;\
|
||||
done; \
|
||||
cilium monitor
|
||||
cilium-dbg monitor
|
||||
{{- range $type := .Values.monitor.eventTypes -}}
|
||||
{{ " " }}--type={{ $type }}
|
||||
{{- end }}
|
||||
@@ -411,7 +426,7 @@ spec:
|
||||
image: {{ include "cilium.image" .Values.image | quote }}
|
||||
imagePullPolicy: {{ .Values.image.pullPolicy }}
|
||||
command:
|
||||
- cilium
|
||||
- cilium-dbg
|
||||
- build-config
|
||||
{{- if (not (kindIs "invalid" .Values.daemon.configSources)) }}
|
||||
- "--source={{.Values.daemon.configSources}}"
|
||||
@@ -447,6 +462,9 @@ spec:
|
||||
volumeMounts:
|
||||
- name: tmp
|
||||
mountPath: /tmp
|
||||
{{- with .Values.extraVolumeMounts }}
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
terminationMessagePolicy: FallbackToLogsOnError
|
||||
{{- if .Values.cgroup.autoMount.enabled }}
|
||||
# Required to mount cgroup2 filesystem on the underlying Kubernetes node.
|
||||
@@ -609,6 +627,12 @@ spec:
|
||||
name: cilium-config
|
||||
key: clean-cilium-bpf-state
|
||||
optional: true
|
||||
- name: WRITE_CNI_CONF_WHEN_READY
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: cilium-config
|
||||
key: write-cni-conf-when-ready
|
||||
optional: true
|
||||
{{- if .Values.k8sServiceHost }}
|
||||
- name: KUBERNETES_SERVICE_HOST
|
||||
value: {{ .Values.k8sServiceHost | quote }}
|
||||
@@ -656,7 +680,7 @@ spec:
|
||||
resources:
|
||||
{{- toYaml . | trim | nindent 10 }}
|
||||
{{- end }}
|
||||
{{- if and .Values.waitForKubeProxy (ne $kubeProxyReplacement "strict") }}
|
||||
{{- if and .Values.waitForKubeProxy (and (ne $kubeProxyReplacement "strict") (ne $kubeProxyReplacement "true")) }}
|
||||
- name: wait-for-kube-proxy
|
||||
image: {{ include "cilium.image" .Values.image | quote }}
|
||||
imagePullPolicy: {{ .Values.image.pullPolicy }}
|
||||
@@ -700,10 +724,10 @@ spec:
|
||||
imagePullPolicy: {{ .Values.image.pullPolicy }}
|
||||
command:
|
||||
- "/install-plugin.sh"
|
||||
{{- with .Values.cni.resources }}
|
||||
resources:
|
||||
requests:
|
||||
cpu: 100m
|
||||
memory: 10Mi
|
||||
{{- toYaml . | trim | nindent 10 }}
|
||||
{{- end }}
|
||||
securityContext:
|
||||
{{- if .Values.securityContext.privileged }}
|
||||
privileged: true
|
||||
@@ -747,7 +771,7 @@ spec:
|
||||
tolerations:
|
||||
{{- toYaml . | trim | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- if and .Values.clustermesh.useAPIServer .Values.clustermesh.config.enabled (not .Values.clustermesh.apiserver.kvstoremesh.enabled) }}
|
||||
{{- if and .Values.clustermesh.config.enabled (not (and .Values.clustermesh.useAPIServer .Values.clustermesh.apiserver.kvstoremesh.enabled )) }}
|
||||
hostAliases:
|
||||
{{- range $cluster := .Values.clustermesh.config.clusters }}
|
||||
{{- range $ip := $cluster.ips }}
|
||||
@@ -941,6 +965,12 @@ spec:
|
||||
path: client-ca.crt
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if .Values.hubble.export.dynamic.enabled }}
|
||||
- name: hubble-flowlog-config
|
||||
configMap:
|
||||
name: {{ .Values.hubble.export.dynamic.config.configMapName }}
|
||||
optional: true
|
||||
{{- end }}
|
||||
{{- range .Values.extraHostPathMounts }}
|
||||
- name: {{ .name }}
|
||||
hostPath:
|
||||
|
||||
@@ -0,0 +1,981 @@
|
||||
{{- if and .Values.agent (not .Values.preflight.enabled) }}
|
||||
|
||||
{{- /* Default values with backwards compatibility */ -}}
|
||||
{{- $defaultKeepDeprecatedProbes := true -}}
|
||||
|
||||
{{- /* Default values when 1.8 was initially deployed */ -}}
|
||||
{{- if semverCompare ">=1.8" (default "1.8" .Values.upgradeCompatibility) -}}
|
||||
{{- $defaultKeepDeprecatedProbes = false -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- $kubeProxyReplacement := (coalesce .Values.kubeProxyReplacement "false") -}}
|
||||
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: DaemonSet
|
||||
metadata:
|
||||
name: cilium
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
k8s-app: cilium
|
||||
app.kubernetes.io/part-of: cilium
|
||||
app.kubernetes.io/name: cilium-agent
|
||||
{{- if .Values.keepDeprecatedLabels }}
|
||||
kubernetes.io/cluster-service: "true"
|
||||
{{- if and .Values.gke.enabled (eq .Release.Namespace "kube-system" ) }}
|
||||
{{- fail "Invalid configuration: Installing Cilium on GKE with 'kubernetes.io/cluster-service' labels on 'kube-system' namespace causes Cilium DaemonSet to be removed by GKE. Either install Cilium on a different Namespace or install with '--set keepDeprecatedLabels=false'" }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
k8s-app: cilium
|
||||
{{- if .Values.keepDeprecatedLabels }}
|
||||
kubernetes.io/cluster-service: "true"
|
||||
{{- end }}
|
||||
{{- with .Values.updateStrategy }}
|
||||
updateStrategy:
|
||||
{{- toYaml . | trim | nindent 4 }}
|
||||
{{- end }}
|
||||
template:
|
||||
metadata:
|
||||
annotations:
|
||||
{{- if and .Values.prometheus.enabled (not .Values.prometheus.serviceMonitor.enabled) }}
|
||||
prometheus.io/port: "{{ .Values.prometheus.port }}"
|
||||
prometheus.io/scrape: "true"
|
||||
{{- end }}
|
||||
{{- if .Values.rollOutCiliumPods }}
|
||||
# ensure pods roll when configmap updates
|
||||
cilium.io/cilium-configmap-checksum: {{ include (print $.Template.BasePath "/cilium-configmap.yaml") . | sha256sum | quote }}
|
||||
{{- end }}
|
||||
{{- if not .Values.securityContext.privileged }}
|
||||
# Set app AppArmor's profile to "unconfined". The value of this annotation
|
||||
# can be modified as long users know which profiles they have available
|
||||
# in AppArmor.
|
||||
container.apparmor.security.beta.kubernetes.io/cilium-agent: "unconfined"
|
||||
container.apparmor.security.beta.kubernetes.io/clean-cilium-state: "unconfined"
|
||||
{{- if .Values.cgroup.autoMount.enabled }}
|
||||
container.apparmor.security.beta.kubernetes.io/mount-cgroup: "unconfined"
|
||||
container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites: "unconfined"
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- with .Values.podAnnotations }}
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
k8s-app: cilium
|
||||
app.kubernetes.io/name: cilium-agent
|
||||
app.kubernetes.io/part-of: cilium
|
||||
{{- if .Values.keepDeprecatedLabels }}
|
||||
kubernetes.io/cluster-service: "true"
|
||||
{{- end }}
|
||||
{{- with .Values.podLabels }}
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
{{- with .Values.imagePullSecrets }}
|
||||
imagePullSecrets:
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- with .Values.podSecurityContext }}
|
||||
securityContext:
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
containers:
|
||||
- name: cilium-agent
|
||||
image: {{ include "cilium.image" .Values.image | quote }}
|
||||
imagePullPolicy: {{ .Values.image.pullPolicy }}
|
||||
{{- if .Values.sleepAfterInit }}
|
||||
command:
|
||||
- /bin/bash
|
||||
- -c
|
||||
- --
|
||||
args:
|
||||
- |
|
||||
while true; do
|
||||
sleep 30;
|
||||
done
|
||||
livenessProbe:
|
||||
exec:
|
||||
command:
|
||||
- "true"
|
||||
readinessProbe:
|
||||
exec:
|
||||
command:
|
||||
- "true"
|
||||
{{- else }}
|
||||
command:
|
||||
- cilium-agent
|
||||
args:
|
||||
- --config-dir=/tmp/cilium/config-map
|
||||
{{- with .Values.extraArgs }}
|
||||
{{- toYaml . | trim | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- if semverCompare ">=1.20-0" .Capabilities.KubeVersion.Version }}
|
||||
startupProbe:
|
||||
httpGet:
|
||||
host: {{ .Values.ipv4.enabled | ternary "127.0.0.1" "::1" | quote }}
|
||||
path: /healthz
|
||||
port: {{ .Values.healthPort }}
|
||||
scheme: HTTP
|
||||
httpHeaders:
|
||||
- name: "brief"
|
||||
value: "true"
|
||||
failureThreshold: {{ .Values.startupProbe.failureThreshold }}
|
||||
periodSeconds: {{ .Values.startupProbe.periodSeconds }}
|
||||
successThreshold: 1
|
||||
initialDelaySeconds: 5
|
||||
{{- end }}
|
||||
livenessProbe:
|
||||
{{- if or .Values.keepDeprecatedProbes $defaultKeepDeprecatedProbes }}
|
||||
exec:
|
||||
command:
|
||||
- cilium
|
||||
- status
|
||||
- --brief
|
||||
{{- else }}
|
||||
httpGet:
|
||||
host: {{ .Values.ipv4.enabled | ternary "127.0.0.1" "::1" | quote }}
|
||||
path: /healthz
|
||||
port: {{ .Values.healthPort }}
|
||||
scheme: HTTP
|
||||
httpHeaders:
|
||||
- name: "brief"
|
||||
value: "true"
|
||||
{{- end }}
|
||||
{{- if semverCompare "<1.20-0" .Capabilities.KubeVersion.Version }}
|
||||
# The initial delay for the liveness probe is intentionally large to
|
||||
# avoid an endless kill & restart cycle if in the event that the initial
|
||||
# bootstrapping takes longer than expected.
|
||||
# Starting from Kubernetes 1.20, we are using startupProbe instead
|
||||
# of this field.
|
||||
initialDelaySeconds: 120
|
||||
{{- end }}
|
||||
periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
|
||||
successThreshold: 1
|
||||
failureThreshold: {{ .Values.livenessProbe.failureThreshold }}
|
||||
timeoutSeconds: 5
|
||||
readinessProbe:
|
||||
{{- if or .Values.keepDeprecatedProbes $defaultKeepDeprecatedProbes }}
|
||||
exec:
|
||||
command:
|
||||
- cilium
|
||||
- status
|
||||
- --brief
|
||||
{{- else }}
|
||||
httpGet:
|
||||
host: {{ .Values.ipv4.enabled | ternary "127.0.0.1" "::1" | quote }}
|
||||
path: /healthz
|
||||
port: {{ .Values.healthPort }}
|
||||
scheme: HTTP
|
||||
httpHeaders:
|
||||
- name: "brief"
|
||||
value: "true"
|
||||
{{- end }}
|
||||
{{- if semverCompare "<1.20-0" .Capabilities.KubeVersion.Version }}
|
||||
initialDelaySeconds: 5
|
||||
{{- end }}
|
||||
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
|
||||
successThreshold: 1
|
||||
failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
|
||||
timeoutSeconds: 5
|
||||
{{- end }}
|
||||
env:
|
||||
- name: K8S_NODE_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
apiVersion: v1
|
||||
fieldPath: spec.nodeName
|
||||
- name: CILIUM_K8S_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
apiVersion: v1
|
||||
fieldPath: metadata.namespace
|
||||
- name: CILIUM_CLUSTERMESH_CONFIG
|
||||
value: /var/lib/cilium/clustermesh/
|
||||
- name: GOMEMLIMIT
|
||||
valueFrom:
|
||||
resourceFieldRef:
|
||||
resource: limits.memory
|
||||
divisor: '1'
|
||||
{{- if .Values.k8sServiceHost }}
|
||||
- name: KUBERNETES_SERVICE_HOST
|
||||
value: {{ .Values.k8sServiceHost | quote }}
|
||||
{{- end }}
|
||||
{{- if .Values.k8sServicePort }}
|
||||
- name: KUBERNETES_SERVICE_PORT
|
||||
value: {{ .Values.k8sServicePort | quote }}
|
||||
{{- end }}
|
||||
{{- with .Values.extraEnv }}
|
||||
{{- toYaml . | trim | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- if .Values.cni.install }}
|
||||
lifecycle:
|
||||
{{- if ne .Values.cni.chainingMode "aws-cni" }}
|
||||
postStart:
|
||||
exec:
|
||||
command:
|
||||
- "bash"
|
||||
- "-c"
|
||||
- |
|
||||
{{- tpl (.Files.Get "files/agent/poststart-eni.bash") . | nindent 20 }}
|
||||
{{- end }}
|
||||
preStop:
|
||||
exec:
|
||||
command:
|
||||
- /cni-uninstall.sh
|
||||
{{- end }}
|
||||
{{- with .Values.resources }}
|
||||
resources:
|
||||
{{- toYaml . | trim | nindent 10 }}
|
||||
{{- end }}
|
||||
{{- if or .Values.prometheus.enabled .Values.hubble.metrics.enabled }}
|
||||
ports:
|
||||
- name: peer-service
|
||||
containerPort: {{ .Values.hubble.peerService.targetPort }}
|
||||
hostPort: {{ .Values.hubble.peerService.targetPort }}
|
||||
protocol: TCP
|
||||
{{- if .Values.prometheus.enabled }}
|
||||
- name: prometheus
|
||||
containerPort: {{ .Values.prometheus.port }}
|
||||
hostPort: {{ .Values.prometheus.port }}
|
||||
protocol: TCP
|
||||
{{- if and .Values.proxy.prometheus.enabled .Values.envoy.prometheus.enabled (not .Values.envoy.enabled) }}
|
||||
- name: envoy-metrics
|
||||
containerPort: {{ .Values.proxy.prometheus.port | default .Values.envoy.prometheus.port }}
|
||||
hostPort: {{ .Values.proxy.prometheus.port | default .Values.envoy.prometheus.port }}
|
||||
protocol: TCP
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if .Values.hubble.metrics.enabled }}
|
||||
- name: hubble-metrics
|
||||
containerPort: {{ .Values.hubble.metrics.port }}
|
||||
hostPort: {{ .Values.hubble.metrics.port }}
|
||||
protocol: TCP
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
securityContext:
|
||||
{{- if .Values.securityContext.privileged }}
|
||||
privileged: true
|
||||
{{- else }}
|
||||
seLinuxOptions:
|
||||
{{- with .Values.securityContext.seLinuxOptions }}
|
||||
{{- toYaml . | nindent 12 }}
|
||||
{{- end }}
|
||||
capabilities:
|
||||
add:
|
||||
{{- with .Values.securityContext.capabilities.ciliumAgent }}
|
||||
{{- toYaml . | nindent 14 }}
|
||||
{{- end }}
|
||||
drop:
|
||||
- ALL
|
||||
{{- end }}
|
||||
terminationMessagePolicy: FallbackToLogsOnError
|
||||
volumeMounts:
|
||||
{{- if .Values.authentication.mutual.spire.enabled }}
|
||||
- name: spire-agent-socket
|
||||
mountPath: {{ dir .Values.authentication.mutual.spire.adminSocketPath }}
|
||||
readOnly: false
|
||||
{{- end }}
|
||||
{{- if .Values.envoy.enabled }}
|
||||
- name: envoy-sockets
|
||||
mountPath: /var/run/cilium/envoy/sockets
|
||||
readOnly: false
|
||||
{{- end }}
|
||||
{{- if not .Values.securityContext.privileged }}
|
||||
# Unprivileged containers need to mount /proc/sys/net from the host
|
||||
# to have write access
|
||||
- mountPath: /host/proc/sys/net
|
||||
name: host-proc-sys-net
|
||||
# Unprivileged containers need to mount /proc/sys/kernel from the host
|
||||
# to have write access
|
||||
- mountPath: /host/proc/sys/kernel
|
||||
name: host-proc-sys-kernel
|
||||
{{- end}}
|
||||
{{- /* CRI-O already mounts the BPF filesystem */ -}}
|
||||
{{- if and .Values.bpf.autoMount.enabled (not (eq .Values.containerRuntime.integration "crio")) }}
|
||||
- name: bpf-maps
|
||||
mountPath: /sys/fs/bpf
|
||||
{{- if .Values.securityContext.privileged }}
|
||||
mountPropagation: Bidirectional
|
||||
{{- else }}
|
||||
# Unprivileged containers can't set mount propagation to bidirectional
|
||||
# in this case we will mount the bpf fs from an init container that
|
||||
# is privileged and set the mount propagation from host to container
|
||||
# in Cilium.
|
||||
mountPropagation: HostToContainer
|
||||
{{- end}}
|
||||
{{- end }}
|
||||
{{- if not (contains "/run/cilium/cgroupv2" .Values.cgroup.hostRoot) }}
|
||||
# Check for duplicate mounts before mounting
|
||||
- name: cilium-cgroup
|
||||
mountPath: {{ .Values.cgroup.hostRoot }}
|
||||
{{- end}}
|
||||
- name: cilium-run
|
||||
mountPath: /var/run/cilium
|
||||
- name: etc-cni-netd
|
||||
mountPath: {{ .Values.cni.hostConfDirMountPath }}
|
||||
{{- if .Values.etcd.enabled }}
|
||||
- name: etcd-config-path
|
||||
mountPath: /var/lib/etcd-config
|
||||
readOnly: true
|
||||
{{- if or .Values.etcd.ssl .Values.etcd.managed }}
|
||||
- name: etcd-secrets
|
||||
mountPath: /var/lib/etcd-secrets
|
||||
readOnly: true
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
- name: clustermesh-secrets
|
||||
mountPath: /var/lib/cilium/clustermesh
|
||||
readOnly: true
|
||||
{{- if .Values.ipMasqAgent.enabled }}
|
||||
- name: ip-masq-agent
|
||||
mountPath: /etc/config
|
||||
readOnly: true
|
||||
{{- end }}
|
||||
{{- if .Values.cni.configMap }}
|
||||
- name: cni-configuration
|
||||
mountPath: {{ .Values.cni.confFileMountPath }}
|
||||
readOnly: true
|
||||
{{- end }}
|
||||
# Needed to be able to load kernel modules
|
||||
- name: lib-modules
|
||||
mountPath: /lib/modules
|
||||
readOnly: true
|
||||
- name: xtables-lock
|
||||
mountPath: /run/xtables.lock
|
||||
{{- if and .Values.encryption.enabled (eq .Values.encryption.type "ipsec") }}
|
||||
- name: cilium-ipsec-secrets
|
||||
mountPath: {{ .Values.encryption.ipsec.mountPath | default .Values.encryption.mountPath }}
|
||||
{{- end }}
|
||||
{{- if .Values.kubeConfigPath }}
|
||||
- name: kube-config
|
||||
mountPath: {{ .Values.kubeConfigPath }}
|
||||
readOnly: true
|
||||
{{- end }}
|
||||
{{- if .Values.bgp.enabled }}
|
||||
- name: bgp-config-path
|
||||
mountPath: /var/lib/cilium/bgp
|
||||
readOnly: true
|
||||
{{- end }}
|
||||
{{- if and .Values.hubble.enabled .Values.hubble.tls.enabled (hasKey .Values.hubble "listenAddress") }}
|
||||
- name: hubble-tls
|
||||
mountPath: /var/lib/cilium/tls/hubble
|
||||
readOnly: true
|
||||
{{- end }}
|
||||
- name: tmp
|
||||
mountPath: /tmp
|
||||
{{- range .Values.extraHostPathMounts }}
|
||||
- name: {{ .name }}
|
||||
mountPath: {{ .mountPath }}
|
||||
readOnly: {{ .readOnly }}
|
||||
{{- if .mountPropagation }}
|
||||
mountPropagation: {{ .mountPropagation }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if .Values.hubble.export.dynamic.enabled }}
|
||||
- name: hubble-flowlog-config
|
||||
mountPath: /flowlog-config
|
||||
readOnly: true
|
||||
{{- end }}
|
||||
{{- with .Values.extraVolumeMounts }}
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- if .Values.monitor.enabled }}
|
||||
- name: cilium-monitor
|
||||
image: {{ include "cilium.image" .Values.image | quote }}
|
||||
imagePullPolicy: {{ .Values.image.pullPolicy }}
|
||||
command:
|
||||
- /bin/bash
|
||||
- -c
|
||||
- --
|
||||
args:
|
||||
- |-
|
||||
for i in {1..5}; do \
|
||||
[ -S /var/run/cilium/monitor1_2.sock ] && break || sleep 10;\
|
||||
done; \
|
||||
cilium-dbg monitor
|
||||
{{- range $type := .Values.monitor.eventTypes -}}
|
||||
{{ " " }}--type={{ $type }}
|
||||
{{- end }}
|
||||
terminationMessagePolicy: FallbackToLogsOnError
|
||||
volumeMounts:
|
||||
- name: cilium-run
|
||||
mountPath: /var/run/cilium
|
||||
{{- with .Values.extraVolumeMounts }}
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- with .Values.monitor.resources }}
|
||||
resources:
|
||||
{{- toYaml . | trim | nindent 10 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if .Values.extraContainers }}
|
||||
{{- toYaml .Values.extraContainers | nindent 6 }}
|
||||
{{- end }}
|
||||
initContainers:
|
||||
- name: config
|
||||
image: {{ include "cilium.image" .Values.image | quote }}
|
||||
imagePullPolicy: {{ .Values.image.pullPolicy }}
|
||||
command:
|
||||
- cilium-dbg
|
||||
- build-config
|
||||
{{- if (not (kindIs "invalid" .Values.daemon.configSources)) }}
|
||||
- "--source={{.Values.daemon.configSources}}"
|
||||
{{- end }}
|
||||
{{- if (not (kindIs "invalid" .Values.daemon.allowedConfigOverrides)) }}
|
||||
- "--allow-config-keys={{.Values.daemon.allowedConfigOverrides}}"
|
||||
{{- end }}
|
||||
{{- if (not (kindIs "invalid" .Values.daemon.blockedConfigOverrides)) }}
|
||||
- "--deny-config-keys={{.Values.daemon.blockedConfigOverrides}}"
|
||||
{{- end }}
|
||||
env:
|
||||
- name: K8S_NODE_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
apiVersion: v1
|
||||
fieldPath: spec.nodeName
|
||||
- name: CILIUM_K8S_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
apiVersion: v1
|
||||
fieldPath: metadata.namespace
|
||||
{{- if .Values.k8sServiceHost }}
|
||||
- name: KUBERNETES_SERVICE_HOST
|
||||
value: {{ .Values.k8sServiceHost | quote }}
|
||||
{{- end }}
|
||||
{{- if .Values.k8sServicePort }}
|
||||
- name: KUBERNETES_SERVICE_PORT
|
||||
value: {{ .Values.k8sServicePort | quote }}
|
||||
{{- end }}
|
||||
{{- with .Values.extraEnv }}
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
volumeMounts:
|
||||
- name: tmp
|
||||
mountPath: /tmp
|
||||
{{- with .Values.extraVolumeMounts }}
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
terminationMessagePolicy: FallbackToLogsOnError
|
||||
{{- if .Values.cgroup.autoMount.enabled }}
|
||||
# Required to mount cgroup2 filesystem on the underlying Kubernetes node.
|
||||
# We use nsenter command with host's cgroup and mount namespaces enabled.
|
||||
- name: mount-cgroup
|
||||
image: {{ include "cilium.image" .Values.image | quote }}
|
||||
imagePullPolicy: {{ .Values.image.pullPolicy }}
|
||||
env:
|
||||
- name: CGROUP_ROOT
|
||||
value: {{ .Values.cgroup.hostRoot }}
|
||||
- name: BIN_PATH
|
||||
value: {{ .Values.cni.binPath }}
|
||||
{{- with .Values.cgroup.autoMount.resources }}
|
||||
resources:
|
||||
{{- toYaml . | trim | nindent 10 }}
|
||||
{{- end }}
|
||||
command:
|
||||
- sh
|
||||
- -ec
|
||||
# The statically linked Go program binary is invoked to avoid any
|
||||
# dependency on utilities like sh and mount that can be missing on certain
|
||||
# distros installed on the underlying host. Copy the binary to the
|
||||
# same directory where we install cilium cni plugin so that exec permissions
|
||||
# are available.
|
||||
- |
|
||||
cp /usr/bin/cilium-mount /hostbin/cilium-mount;
|
||||
nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
|
||||
rm /hostbin/cilium-mount
|
||||
volumeMounts:
|
||||
- name: hostproc
|
||||
mountPath: /hostproc
|
||||
- name: cni-path
|
||||
mountPath: /hostbin
|
||||
terminationMessagePolicy: FallbackToLogsOnError
|
||||
securityContext:
|
||||
{{- if .Values.securityContext.privileged }}
|
||||
privileged: true
|
||||
{{- else }}
|
||||
seLinuxOptions:
|
||||
{{- with .Values.securityContext.seLinuxOptions }}
|
||||
{{- toYaml . | nindent 12 }}
|
||||
{{- end }}
|
||||
capabilities:
|
||||
add:
|
||||
{{- with .Values.securityContext.capabilities.mountCgroup }}
|
||||
{{- toYaml . | nindent 14 }}
|
||||
{{- end }}
|
||||
drop:
|
||||
- ALL
|
||||
{{- end}}
|
||||
- name: apply-sysctl-overwrites
|
||||
image: {{ include "cilium.image" .Values.image | quote }}
|
||||
imagePullPolicy: {{ .Values.image.pullPolicy }}
|
||||
{{- with .Values.initResources }}
|
||||
resources:
|
||||
{{- toYaml . | trim | nindent 10 }}
|
||||
{{- end }}
|
||||
env:
|
||||
- name: BIN_PATH
|
||||
value: {{ .Values.cni.binPath }}
|
||||
command:
|
||||
- sh
|
||||
- -ec
|
||||
# The statically linked Go program binary is invoked to avoid any
|
||||
# dependency on utilities like sh that can be missing on certain
|
||||
# distros installed on the underlying host. Copy the binary to the
|
||||
# same directory where we install cilium cni plugin so that exec permissions
|
||||
# are available.
|
||||
- |
|
||||
cp /usr/bin/cilium-sysctlfix /hostbin/cilium-sysctlfix;
|
||||
nsenter --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-sysctlfix";
|
||||
rm /hostbin/cilium-sysctlfix
|
||||
volumeMounts:
|
||||
- name: hostproc
|
||||
mountPath: /hostproc
|
||||
- name: cni-path
|
||||
mountPath: /hostbin
|
||||
terminationMessagePolicy: FallbackToLogsOnError
|
||||
securityContext:
|
||||
{{- if .Values.securityContext.privileged }}
|
||||
privileged: true
|
||||
{{- else }}
|
||||
seLinuxOptions:
|
||||
{{- with .Values.securityContext.seLinuxOptions }}
|
||||
{{- toYaml . | nindent 12 }}
|
||||
{{- end }}
|
||||
capabilities:
|
||||
add:
|
||||
{{- with .Values.securityContext.capabilities.applySysctlOverwrites }}
|
||||
{{- toYaml . | nindent 14 }}
|
||||
{{- end }}
|
||||
drop:
|
||||
- ALL
|
||||
{{- end}}
|
||||
{{- end }}
|
||||
{{- if and .Values.bpf.autoMount.enabled (not .Values.securityContext.privileged) }}
|
||||
# Mount the bpf fs if it is not mounted. We will perform this task
|
||||
# from a privileged container because the mount propagation bidirectional
|
||||
# only works from privileged containers.
|
||||
- name: mount-bpf-fs
|
||||
image: {{ include "cilium.image" .Values.image | quote }}
|
||||
imagePullPolicy: {{ .Values.image.pullPolicy }}
|
||||
{{- with .Values.initResources }}
|
||||
resources:
|
||||
{{- toYaml . | trim | nindent 10 }}
|
||||
{{- end }}
|
||||
args:
|
||||
- 'mount | grep "/sys/fs/bpf type bpf" || mount -t bpf bpf /sys/fs/bpf'
|
||||
command:
|
||||
- /bin/bash
|
||||
- -c
|
||||
- --
|
||||
terminationMessagePolicy: FallbackToLogsOnError
|
||||
securityContext:
|
||||
privileged: true
|
||||
{{- /* CRI-O already mounts the BPF filesystem */ -}}
|
||||
{{- if and .Values.bpf.autoMount.enabled (not (eq .Values.containerRuntime.integration "crio")) }}
|
||||
volumeMounts:
|
||||
- name: bpf-maps
|
||||
mountPath: /sys/fs/bpf
|
||||
mountPropagation: Bidirectional
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if and .Values.nodeinit.enabled .Values.nodeinit.bootstrapFile }}
|
||||
- name: wait-for-node-init
|
||||
image: {{ include "cilium.image" .Values.image | quote }}
|
||||
imagePullPolicy: {{ .Values.image.pullPolicy }}
|
||||
{{- with .Values.initResources }}
|
||||
resources:
|
||||
{{- toYaml . | trim | nindent 10 }}
|
||||
{{- end }}
|
||||
command:
|
||||
- sh
|
||||
- -c
|
||||
- |
|
||||
until test -s {{ (print "/tmp/cilium-bootstrap.d/" (.Values.nodeinit.bootstrapFile | base)) | quote }}; do
|
||||
echo "Waiting on node-init to run...";
|
||||
sleep 1;
|
||||
done
|
||||
terminationMessagePolicy: FallbackToLogsOnError
|
||||
volumeMounts:
|
||||
- name: cilium-bootstrap-file-dir
|
||||
mountPath: "/tmp/cilium-bootstrap.d"
|
||||
{{- end }}
|
||||
- name: clean-cilium-state
|
||||
image: {{ include "cilium.image" .Values.image | quote }}
|
||||
imagePullPolicy: {{ .Values.image.pullPolicy }}
|
||||
command:
|
||||
- /init-container.sh
|
||||
env:
|
||||
- name: CILIUM_ALL_STATE
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: cilium-config
|
||||
key: clean-cilium-state
|
||||
optional: true
|
||||
- name: CILIUM_BPF_STATE
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: cilium-config
|
||||
key: clean-cilium-bpf-state
|
||||
optional: true
|
||||
- name: WRITE_CNI_CONF_WHEN_READY
|
||||
valueFrom:
|
||||
configMapKeyRef:
|
||||
name: cilium-config
|
||||
key: write-cni-conf-when-ready
|
||||
optional: true
|
||||
{{- if .Values.k8sServiceHost }}
|
||||
- name: KUBERNETES_SERVICE_HOST
|
||||
value: {{ .Values.k8sServiceHost | quote }}
|
||||
{{- end }}
|
||||
{{- if .Values.k8sServicePort }}
|
||||
- name: KUBERNETES_SERVICE_PORT
|
||||
value: {{ .Values.k8sServicePort | quote }}
|
||||
{{- end }}
|
||||
{{- with .Values.extraEnv }}
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
terminationMessagePolicy: FallbackToLogsOnError
|
||||
securityContext:
|
||||
{{- if .Values.securityContext.privileged }}
|
||||
privileged: true
|
||||
{{- else }}
|
||||
seLinuxOptions:
|
||||
{{- with .Values.securityContext.seLinuxOptions }}
|
||||
{{- toYaml . | nindent 12 }}
|
||||
{{- end }}
|
||||
capabilities:
|
||||
add:
|
||||
{{- with .Values.securityContext.capabilities.cleanCiliumState }}
|
||||
{{- toYaml . | nindent 14 }}
|
||||
{{- end }}
|
||||
drop:
|
||||
- ALL
|
||||
{{- end}}
|
||||
volumeMounts:
|
||||
{{- /* CRI-O already mounts the BPF filesystem */ -}}
|
||||
{{- if and .Values.bpf.autoMount.enabled (not (eq .Values.containerRuntime.integration "crio")) }}
|
||||
- name: bpf-maps
|
||||
mountPath: /sys/fs/bpf
|
||||
{{- end }}
|
||||
# Required to mount cgroup filesystem from the host to cilium agent pod
|
||||
- name: cilium-cgroup
|
||||
mountPath: {{ .Values.cgroup.hostRoot }}
|
||||
mountPropagation: HostToContainer
|
||||
- name: cilium-run
|
||||
mountPath: /var/run/cilium
|
||||
{{- with .Values.extraVolumeMounts }}
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- with .Values.initResources }}
|
||||
resources:
|
||||
{{- toYaml . | trim | nindent 10 }}
|
||||
{{- end }}
|
||||
{{- if and .Values.waitForKubeProxy (and (ne $kubeProxyReplacement "strict") (ne $kubeProxyReplacement "true")) }}
|
||||
- name: wait-for-kube-proxy
|
||||
image: {{ include "cilium.image" .Values.image | quote }}
|
||||
imagePullPolicy: {{ .Values.image.pullPolicy }}
|
||||
{{- with .Values.initResources }}
|
||||
resources:
|
||||
{{- toYaml . | trim | nindent 10 }}
|
||||
{{- end }}
|
||||
securityContext:
|
||||
privileged: true
|
||||
command:
|
||||
- bash
|
||||
- -c
|
||||
- |
|
||||
while true
|
||||
do
|
||||
if iptables-nft-save -t mangle | grep -E '^:(KUBE-IPTABLES-HINT|KUBE-PROXY-CANARY)'; then
|
||||
echo "Found KUBE-IPTABLES-HINT or KUBE-PROXY-CANARY iptables rule in 'iptables-nft-save -t mangle'"
|
||||
exit 0
|
||||
fi
|
||||
if ip6tables-nft-save -t mangle | grep -E '^:(KUBE-IPTABLES-HINT|KUBE-PROXY-CANARY)'; then
|
||||
echo "Found KUBE-IPTABLES-HINT or KUBE-PROXY-CANARY iptables rule in 'ip6tables-nft-save -t mangle'"
|
||||
exit 0
|
||||
fi
|
||||
if iptables-legacy-save | grep -E '^:KUBE-PROXY-CANARY'; then
|
||||
echo "Found KUBE-PROXY-CANARY iptables rule in 'iptables-legacy-save"
|
||||
exit 0
|
||||
fi
|
||||
if ip6tables-legacy-save | grep -E '^:KUBE-PROXY-CANARY'; then
|
||||
echo "KUBE-PROXY-CANARY iptables rule in 'ip6tables-legacy-save'"
|
||||
exit 0
|
||||
fi
|
||||
echo "Waiting for kube-proxy to create iptables rules...";
|
||||
sleep 1;
|
||||
done
|
||||
terminationMessagePolicy: FallbackToLogsOnError
|
||||
{{- end }} # wait-for-kube-proxy
|
||||
{{- if .Values.cni.install }}
|
||||
# Install the CNI binaries in an InitContainer so we don't have a writable host mount in the agent
|
||||
- name: install-cni-binaries
|
||||
image: {{ include "cilium.image" .Values.image | quote }}
|
||||
imagePullPolicy: {{ .Values.image.pullPolicy }}
|
||||
command:
|
||||
- "/install-plugin.sh"
|
||||
{{- with .Values.cni.resources }}
|
||||
resources:
|
||||
{{- toYaml . | trim | nindent 10 }}
|
||||
{{- end }}
|
||||
securityContext:
|
||||
{{- if .Values.securityContext.privileged }}
|
||||
privileged: true
|
||||
{{- else }}
|
||||
seLinuxOptions:
|
||||
{{- with .Values.securityContext.seLinuxOptions }}
|
||||
{{- toYaml . | nindent 12 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
capabilities:
|
||||
drop:
|
||||
- ALL
|
||||
terminationMessagePolicy: FallbackToLogsOnError
|
||||
volumeMounts:
|
||||
- name: cni-path
|
||||
mountPath: /host/opt/cni/bin
|
||||
{{- end }} # .Values.cni.install
|
||||
restartPolicy: Always
|
||||
priorityClassName: {{ include "cilium.priorityClass" (list $ .Values.priorityClassName "system-node-critical") }}
|
||||
serviceAccount: {{ .Values.serviceAccounts.cilium.name | quote }}
|
||||
serviceAccountName: {{ .Values.serviceAccounts.cilium.name | quote }}
|
||||
automountServiceAccountToken: {{ .Values.serviceAccounts.cilium.automount }}
|
||||
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
|
||||
hostNetwork: true
|
||||
{{- if and .Values.etcd.managed (not .Values.etcd.k8sService) }}
|
||||
# In managed etcd mode, Cilium must be able to resolve the DNS name of
|
||||
# the etcd service
|
||||
dnsPolicy: ClusterFirstWithHostNet
|
||||
{{- else if .Values.dnsPolicy }}
|
||||
dnsPolicy: {{ .Values.dnsPolicy }}
|
||||
{{- end }}
|
||||
{{- with .Values.affinity }}
|
||||
affinity:
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- with .Values.nodeSelector }}
|
||||
nodeSelector:
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- with .Values.tolerations }}
|
||||
tolerations:
|
||||
{{- toYaml . | trim | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- if and .Values.clustermesh.config.enabled (not (and .Values.clustermesh.useAPIServer .Values.clustermesh.apiserver.kvstoremesh.enabled )) }}
|
||||
hostAliases:
|
||||
{{- range $cluster := .Values.clustermesh.config.clusters }}
|
||||
{{- range $ip := $cluster.ips }}
|
||||
- ip: {{ $ip }}
|
||||
hostnames: [ "{{ $cluster.name }}.{{ $.Values.clustermesh.config.domain }}" ]
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
volumes:
|
||||
# For sharing configuration between the "config" initContainer and the agent
|
||||
- name: tmp
|
||||
emptyDir: {}
|
||||
# To keep state between restarts / upgrades
|
||||
- name: cilium-run
|
||||
hostPath:
|
||||
path: {{ .Values.daemon.runPath }}
|
||||
type: DirectoryOrCreate
|
||||
{{- /* CRI-O already mounts the BPF filesystem */ -}}
|
||||
{{- if and .Values.bpf.autoMount.enabled (not (eq .Values.containerRuntime.integration "crio")) }}
|
||||
# To keep state between restarts / upgrades for bpf maps
|
||||
- name: bpf-maps
|
||||
hostPath:
|
||||
path: /sys/fs/bpf
|
||||
type: DirectoryOrCreate
|
||||
{{- end }}
|
||||
{{- if .Values.cgroup.autoMount.enabled }}
|
||||
# To mount cgroup2 filesystem on the host
|
||||
- name: hostproc
|
||||
hostPath:
|
||||
path: /proc
|
||||
type: Directory
|
||||
{{- end }}
|
||||
# To keep state between restarts / upgrades for cgroup2 filesystem
|
||||
- name: cilium-cgroup
|
||||
hostPath:
|
||||
path: {{ .Values.cgroup.hostRoot}}
|
||||
type: DirectoryOrCreate
|
||||
# To install cilium cni plugin in the host
|
||||
- name: cni-path
|
||||
hostPath:
|
||||
path: {{ .Values.cni.binPath }}
|
||||
type: DirectoryOrCreate
|
||||
# To install cilium cni configuration in the host
|
||||
- name: etc-cni-netd
|
||||
hostPath:
|
||||
path: {{ .Values.cni.confPath }}
|
||||
type: DirectoryOrCreate
|
||||
# To be able to load kernel modules
|
||||
- name: lib-modules
|
||||
hostPath:
|
||||
path: /lib/modules
|
||||
# To access iptables concurrently with other processes (e.g. kube-proxy)
|
||||
- name: xtables-lock
|
||||
hostPath:
|
||||
path: /run/xtables.lock
|
||||
type: FileOrCreate
|
||||
{{- if .Values.authentication.mutual.spire.enabled }}
|
||||
- name: spire-agent-socket
|
||||
hostPath:
|
||||
path: {{ dir .Values.authentication.mutual.spire.adminSocketPath }}
|
||||
type: DirectoryOrCreate
|
||||
{{- end }}
|
||||
{{- if .Values.envoy.enabled }}
|
||||
# Sharing socket with Cilium Envoy on the same node by using a host path
|
||||
- name: envoy-sockets
|
||||
hostPath:
|
||||
path: "{{ .Values.daemon.runPath }}/envoy/sockets"
|
||||
type: DirectoryOrCreate
|
||||
{{- end }}
|
||||
{{- if .Values.kubeConfigPath }}
|
||||
- name: kube-config
|
||||
hostPath:
|
||||
path: {{ .Values.kubeConfigPath }}
|
||||
type: FileOrCreate
|
||||
{{- end }}
|
||||
{{- if and .Values.nodeinit.enabled .Values.nodeinit.bootstrapFile }}
|
||||
- name: cilium-bootstrap-file-dir
|
||||
hostPath:
|
||||
path: {{ .Values.nodeinit.bootstrapFile | dir | quote }}
|
||||
type: DirectoryOrCreate
|
||||
{{- end }}
|
||||
{{- if .Values.etcd.enabled }}
|
||||
# To read the etcd config stored in config maps
|
||||
- name: etcd-config-path
|
||||
configMap:
|
||||
name: cilium-config
|
||||
# note: the leading zero means this number is in octal representation: do not remove it
|
||||
defaultMode: 0400
|
||||
items:
|
||||
- key: etcd-config
|
||||
path: etcd.config
|
||||
# To read the k8s etcd secrets in case the user might want to use TLS
|
||||
{{- if or .Values.etcd.ssl .Values.etcd.managed }}
|
||||
- name: etcd-secrets
|
||||
secret:
|
||||
secretName: cilium-etcd-secrets
|
||||
# note: the leading zero means this number is in octal representation: do not remove it
|
||||
defaultMode: 0400
|
||||
optional: true
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
# To read the clustermesh configuration
|
||||
- name: clustermesh-secrets
|
||||
projected:
|
||||
# note: the leading zero means this number is in octal representation: do not remove it
|
||||
defaultMode: 0400
|
||||
sources:
|
||||
- secret:
|
||||
name: cilium-clustermesh
|
||||
optional: true
|
||||
# note: items are not explicitly listed here, since the entries of this secret
|
||||
# depend on the peers configured, and that would cause a restart of all agents
|
||||
# at every addition/removal. Leaving the field empty makes each secret entry
|
||||
# to be automatically projected into the volume as a file whose name is the key.
|
||||
- secret:
|
||||
name: clustermesh-apiserver-remote-cert
|
||||
optional: true
|
||||
items:
|
||||
- key: tls.key
|
||||
path: common-etcd-client.key
|
||||
- key: tls.crt
|
||||
path: common-etcd-client.crt
|
||||
{{- if not .Values.tls.caBundle.enabled }}
|
||||
- key: ca.crt
|
||||
path: common-etcd-client-ca.crt
|
||||
{{- else }}
|
||||
- {{ .Values.tls.caBundle.useSecret | ternary "secret" "configMap" }}:
|
||||
name: {{ .Values.tls.caBundle.name }}
|
||||
optional: true
|
||||
items:
|
||||
- key: {{ .Values.tls.caBundle.key }}
|
||||
path: common-etcd-client-ca.crt
|
||||
{{- end }}
|
||||
{{- if and .Values.ipMasqAgent .Values.ipMasqAgent.enabled }}
|
||||
- name: ip-masq-agent
|
||||
configMap:
|
||||
name: ip-masq-agent
|
||||
optional: true
|
||||
items:
|
||||
- key: config
|
||||
path: ip-masq-agent
|
||||
{{- end }}
|
||||
{{- if and .Values.encryption.enabled (eq .Values.encryption.type "ipsec") }}
|
||||
- name: cilium-ipsec-secrets
|
||||
secret:
|
||||
secretName: {{ .Values.encryption.ipsec.secretName | default .Values.encryption.secretName }}
|
||||
{{- end }}
|
||||
{{- if .Values.cni.configMap }}
|
||||
- name: cni-configuration
|
||||
configMap:
|
||||
name: {{ .Values.cni.configMap }}
|
||||
{{- end }}
|
||||
{{- if .Values.bgp.enabled }}
|
||||
- name: bgp-config-path
|
||||
configMap:
|
||||
name: bgp-config
|
||||
{{- end }}
|
||||
{{- if not .Values.securityContext.privileged }}
|
||||
- name: host-proc-sys-net
|
||||
hostPath:
|
||||
path: /proc/sys/net
|
||||
type: Directory
|
||||
- name: host-proc-sys-kernel
|
||||
hostPath:
|
||||
path: /proc/sys/kernel
|
||||
type: Directory
|
||||
{{- end }}
|
||||
{{- if and .Values.hubble.enabled .Values.hubble.tls.enabled (hasKey .Values.hubble "listenAddress") }}
|
||||
- name: hubble-tls
|
||||
projected:
|
||||
# note: the leading zero means this number is in octal representation: do not remove it
|
||||
defaultMode: 0400
|
||||
sources:
|
||||
- secret:
|
||||
name: hubble-server-certs
|
||||
optional: true
|
||||
items:
|
||||
- key: tls.crt
|
||||
path: server.crt
|
||||
- key: tls.key
|
||||
path: server.key
|
||||
{{- if not .Values.tls.caBundle.enabled }}
|
||||
- key: ca.crt
|
||||
path: client-ca.crt
|
||||
{{- else }}
|
||||
- {{ .Values.tls.caBundle.useSecret | ternary "secret" "configMap" }}:
|
||||
name: {{ .Values.tls.caBundle.name }}
|
||||
optional: true
|
||||
items:
|
||||
- key: {{ .Values.tls.caBundle.key }}
|
||||
path: client-ca.crt
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if .Values.hubble.export.dynamic.enabled }}
|
||||
- name: hubble-flowlog-config
|
||||
configMap:
|
||||
name: {{ .Values.hubble.export.dynamic.config.configMapName }}
|
||||
optional: true
|
||||
{{- end }}
|
||||
{{- range .Values.extraHostPathMounts }}
|
||||
- name: {{ .name }}
|
||||
hostPath:
|
||||
path: {{ .hostPath }}
|
||||
{{- if .hostPathType }}
|
||||
type: {{ .hostPathType }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- with .Values.extraVolumes }}
|
||||
{{- toYaml . | nindent 6 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
@@ -15,9 +15,14 @@ metadata:
|
||||
{{- if $.Values.dashboards.label }}
|
||||
{{ $.Values.dashboards.label }}: {{ ternary $.Values.dashboards.labelValue "1" (not (empty $.Values.dashboards.labelValue)) | quote }}
|
||||
{{- end }}
|
||||
{{- with $.Values.dashboards.annotations }}
|
||||
{{- if or $.Values.dashboards.annotations $.Values.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- with $.Values.dashboards.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- with $.Values.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
data:
|
||||
{{ $dashboardName }}.json: {{ $.Files.Get $path | toJson }}
|
||||
|
||||
@@ -5,6 +5,10 @@ kind: Role
|
||||
metadata:
|
||||
name: cilium-config-agent
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/part-of: cilium
|
||||
rules:
|
||||
@@ -26,6 +30,10 @@ kind: Role
|
||||
metadata:
|
||||
name: cilium-ingress-secrets
|
||||
namespace: {{ .Values.ingressController.secretsNamespace.name | quote }}
|
||||
{{- with .Values.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/part-of: cilium
|
||||
rules:
|
||||
@@ -46,6 +54,10 @@ kind: Role
|
||||
metadata:
|
||||
name: cilium-gateway-secrets
|
||||
namespace: {{ .Values.gatewayAPI.secretsNamespace.name | quote }}
|
||||
{{- with .Values.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/part-of: cilium
|
||||
rules:
|
||||
@@ -66,6 +78,30 @@ kind: Role
|
||||
metadata:
|
||||
name: cilium-envoy-config-secrets
|
||||
namespace: {{ .Values.envoyConfig.secretsNamespace.name | quote }}
|
||||
{{- with .Values.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/part-of: cilium
|
||||
rules:
|
||||
- apiGroups:
|
||||
- ""
|
||||
resources:
|
||||
- secrets
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
- watch
|
||||
{{- end}}
|
||||
|
||||
{{- if and .Values.agent (not .Values.preflight.enabled) .Values.serviceAccounts.cilium.create .Values.bgpControlPlane.enabled .Values.bgpControlPlane.secretsNamespace.name }}
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: Role
|
||||
metadata:
|
||||
name: cilium-bgp-control-plane-secrets
|
||||
namespace: {{ .Values.bgpControlPlane.secretsNamespace.name | quote }}
|
||||
labels:
|
||||
app.kubernetes.io/part-of: cilium
|
||||
rules:
|
||||
|
||||
@@ -5,6 +5,10 @@ kind: RoleBinding
|
||||
metadata:
|
||||
name: cilium-config-agent
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/part-of: cilium
|
||||
roleRef:
|
||||
@@ -24,6 +28,10 @@ kind: RoleBinding
|
||||
metadata:
|
||||
name: cilium-secrets
|
||||
namespace: {{ .Values.ingressController.secretsNamespace.name | quote }}
|
||||
{{- with .Values.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/part-of: cilium
|
||||
roleRef:
|
||||
@@ -43,6 +51,10 @@ kind: RoleBinding
|
||||
metadata:
|
||||
name: cilium-gateway-secrets
|
||||
namespace: {{ .Values.gatewayAPI.secretsNamespace.name | quote }}
|
||||
{{- with .Values.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/part-of: cilium
|
||||
roleRef:
|
||||
@@ -62,6 +74,10 @@ kind: RoleBinding
|
||||
metadata:
|
||||
name: cilium-envoy-config-secrets
|
||||
namespace: {{ .Values.envoyConfig.secretsNamespace.name | quote }}
|
||||
{{- with .Values.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/part-of: cilium
|
||||
roleRef:
|
||||
@@ -73,3 +89,22 @@ subjects:
|
||||
name: {{ .Values.serviceAccounts.cilium.name | quote }}
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- end}}
|
||||
|
||||
{{- if and .Values.agent (not .Values.preflight.enabled) .Values.serviceAccounts.cilium.create .Values.bgpControlPlane.enabled .Values.bgpControlPlane.secretsNamespace.name}}
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: RoleBinding
|
||||
metadata:
|
||||
name: cilium-bgp-control-plane-secrets
|
||||
namespace: {{ .Values.bgpControlPlane.secretsNamespace.name | quote }}
|
||||
labels:
|
||||
app.kubernetes.io/part-of: cilium
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: Role
|
||||
name: cilium-bgp-control-plane-secrets
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: {{ .Values.serviceAccounts.cilium.name | quote }}
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- end}}
|
||||
|
||||
@@ -5,6 +5,10 @@ kind: Service
|
||||
metadata:
|
||||
name: cilium-agent
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
k8s-app: cilium
|
||||
app.kubernetes.io/name: cilium-agent
|
||||
|
||||
@@ -4,8 +4,13 @@ kind: ServiceAccount
|
||||
metadata:
|
||||
name: {{ .Values.serviceAccounts.cilium.name | quote }}
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- if .Values.serviceAccounts.cilium.annotations }}
|
||||
{{- if or .Values.serviceAccounts.cilium.annotations .Values.annotations }}
|
||||
annotations:
|
||||
{{- toYaml .Values.serviceAccounts.cilium.annotations | nindent 4 }}
|
||||
{{- with .Values.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- with .Values.serviceAccounts.cilium.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
@@ -10,10 +10,15 @@ metadata:
|
||||
{{- with .Values.prometheus.serviceMonitor.labels }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- if or .Values.prometheus.serviceMonitor.annotations .Values.annotations }}
|
||||
annotations:
|
||||
{{- with .Values.prometheus.serviceMonitor.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- with .Values.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- with .Values.prometheus.serviceMonitor.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
@@ -34,6 +39,23 @@ spec:
|
||||
metricRelabelings:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- if .Values.envoy.prometheus.serviceMonitor.enabled }}
|
||||
- port: envoy-metrics
|
||||
interval: {{ .Values.envoy.prometheus.serviceMonitor.interval | quote }}
|
||||
honorLabels: true
|
||||
path: /metrics
|
||||
{{- with .Values.envoy.prometheus.serviceMonitor.relabelings }}
|
||||
relabelings:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- with .Values.envoy.prometheus.serviceMonitor.metricRelabelings }}
|
||||
metricRelabelings:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
targetLabels:
|
||||
- k8s-app
|
||||
{{- if .Values.prometheus.serviceMonitor.jobLabel }}
|
||||
jobLabel: {{ .Values.prometheus.serviceMonitor.jobLabel | quote }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
{{- if or
|
||||
(and (or .Values.externalWorkloads.enabled .Values.clustermesh.useAPIServer) .Values.clustermesh.apiserver.tls.auto.enabled (eq .Values.clustermesh.apiserver.tls.auto.method "helm") (not .Values.clustermesh.apiserver.tls.ca.cert))
|
||||
(and (or .Values.externalWorkloads.enabled .Values.clustermesh.useAPIServer) .Values.clustermesh.apiserver.tls.auto.enabled (eq .Values.clustermesh.apiserver.tls.auto.method "helm"))
|
||||
(and (or .Values.agent .Values.hubble.relay.enabled .Values.hubble.ui.enabled) .Values.hubble.enabled .Values.hubble.tls.enabled .Values.hubble.tls.auto.enabled (eq .Values.hubble.tls.auto.method "helm"))
|
||||
(and .Values.tls.ca.key .Values.tls.ca.cert)
|
||||
-}}
|
||||
|
||||
@@ -1,6 +1,5 @@
|
||||
{{- if and (.Values.agent) (not .Values.preflight.enabled) }}
|
||||
{{- /* Default values with backwards compatibility */ -}}
|
||||
{{- $defaultEnableCnpStatusUpdates := "true" -}}
|
||||
{{- $defaultBpfMapDynamicSizeRatio := 0.0 -}}
|
||||
{{- $defaultBpfMasquerade := "false" -}}
|
||||
{{- $defaultBpfClockProbe := "false" -}}
|
||||
@@ -13,10 +12,12 @@
|
||||
{{- $fragmentTracking := "true" -}}
|
||||
{{- $defaultKubeProxyReplacement := "false" -}}
|
||||
{{- $azureUsePrimaryAddress := "true" -}}
|
||||
{{- $defaultK8sClientQPS := 5 -}}
|
||||
{{- $defaultK8sClientBurst := 10 -}}
|
||||
{{- $defaultDNSProxyEnableTransparentMode := "false" -}}
|
||||
|
||||
{{- /* Default values when 1.8 was initially deployed */ -}}
|
||||
{{- if semverCompare ">=1.8" (default "1.8" .Values.upgradeCompatibility) -}}
|
||||
{{- $defaultEnableCnpStatusUpdates = "false" -}}
|
||||
{{- $defaultBpfMapDynamicSizeRatio = 0.0025 -}}
|
||||
{{- $defaultBpfMasquerade = "true" -}}
|
||||
{{- $defaultBpfClockProbe = "true" -}}
|
||||
@@ -48,6 +49,7 @@
|
||||
{{- $azureUsePrimaryAddress = "false" -}}
|
||||
{{- end }}
|
||||
{{- $defaultKubeProxyReplacement = "disabled" -}}
|
||||
{{- $defaultDNSProxyEnableTransparentMode = "true" -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- /* Default values when 1.14 was initially deployed */ -}}
|
||||
@@ -76,6 +78,11 @@
|
||||
{{- else if (not (kindIs "invalid" .Values.cni.chainingTarget)) -}}
|
||||
{{- $cniChainingMode = "generic-veth" -}}
|
||||
{{- end -}}
|
||||
|
||||
{{- if semverCompare ">=1.27-0" .Capabilities.KubeVersion.Version -}}
|
||||
{{- $defaultK8sClientQPS = 10 -}}
|
||||
{{- $defaultK8sClientBurst = 20 -}}
|
||||
{{- end -}}
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
@@ -189,6 +196,11 @@ data:
|
||||
enable-policy: "{{ lower .Values.policyEnforcementMode }}"
|
||||
{{- end }}
|
||||
|
||||
{{- if hasKey .Values "policyCIDRMatchMode" }}
|
||||
policy-cidr-match-mode: {{ join " " .Values.policyCIDRMatchMode | quote }}
|
||||
{{- end}}
|
||||
|
||||
|
||||
{{- if .Values.prometheus.enabled }}
|
||||
# If you want metrics enabled in all of your Cilium agents, set the port for
|
||||
# which the Cilium agents will have their metrics exposed.
|
||||
@@ -205,6 +217,13 @@ data:
|
||||
{{ . }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if .Values.prometheus.controllerGroupMetrics }}
|
||||
# A space-separated list of controller groups for which to enable metrics.
|
||||
# The special values of "all" and "none" are supported.
|
||||
controller-group-metrics: {{- range .Values.prometheus.controllerGroupMetrics }}
|
||||
{{ . }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{- if not .Values.envoy.enabled }}
|
||||
@@ -238,6 +257,7 @@ data:
|
||||
{{- if .Values.ingressController.enabled }}
|
||||
enable-ingress-controller: "true"
|
||||
enforce-ingress-https: {{ .Values.ingressController.enforceHttps | quote }}
|
||||
enable-ingress-proxy-protocol: {{ .Values.ingressController.enableProxyProtocol | quote }}
|
||||
enable-ingress-secrets-sync: {{ .Values.ingressController.secretsNamespace.sync | quote }}
|
||||
ingress-secrets-namespace: {{ .Values.ingressController.secretsNamespace.name | quote }}
|
||||
ingress-lb-annotation-prefixes: {{ .Values.ingressController.ingressLBAnnotationPrefixes | join " " | quote }}
|
||||
@@ -430,28 +450,23 @@ data:
|
||||
# - vxlan (default)
|
||||
# - geneve
|
||||
{{- if .Values.gke.enabled }}
|
||||
{{- if ne (.Values.routingMode | default "native") "native" }}
|
||||
{{- fail (printf "RoutingMode must be set to native when gke.enabled=true" )}}
|
||||
{{- end }}
|
||||
routing-mode: "native"
|
||||
enable-endpoint-routes: "true"
|
||||
enable-local-node-route: "false"
|
||||
{{- else if .Values.aksbyocni.enabled }}
|
||||
{{- if ne (.Values.routingMode | default "tunnel") "tunnel" }}
|
||||
{{- fail (printf "RoutingMode must be set to tunnel when aksbyocni.enabled=true" )}}
|
||||
{{- end }}
|
||||
routing-mode: "tunnel"
|
||||
tunnel-protocol: "vxlan"
|
||||
{{- else if .Values.routingMode }}
|
||||
routing-mode: {{ .Values.routingMode | quote }}
|
||||
{{- else }}
|
||||
{{- if eq .Values.tunnel "disabled" }}
|
||||
routing-mode: "native"
|
||||
{{- else if eq .Values.tunnel "vxlan" }}
|
||||
routing-mode: "tunnel"
|
||||
tunnel-protocol: "vxlan"
|
||||
{{- else if eq .Values.tunnel "geneve" }}
|
||||
routing-mode: "tunnel"
|
||||
tunnel-protocol: "geneve"
|
||||
{{- else }}
|
||||
# Default case
|
||||
routing-mode: "tunnel"
|
||||
tunnel-protocol: "vxlan"
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{- if .Values.tunnelProtocol }}
|
||||
@@ -462,6 +477,10 @@ data:
|
||||
tunnel-port: {{ .Values.tunnelPort | quote }}
|
||||
{{- end }}
|
||||
|
||||
{{- if .Values.serviceNoBackendResponse }}
|
||||
service-no-backend-response: "{{ .Values.serviceNoBackendResponse }}"
|
||||
{{- end}}
|
||||
|
||||
{{- if .Values.MTU }}
|
||||
mtu: {{ .Values.MTU | quote }}
|
||||
{{- end }}
|
||||
@@ -500,7 +519,6 @@ data:
|
||||
{{- if .Values.azure.enabled }}
|
||||
enable-endpoint-routes: "true"
|
||||
auto-create-cilium-node-resource: "true"
|
||||
enable-local-node-route: "false"
|
||||
{{- if .Values.azure.userAssignedIdentityID }}
|
||||
azure-user-assigned-identity-id: {{ .Values.azure.userAssignedIdentityID | quote }}
|
||||
{{- end }}
|
||||
@@ -551,6 +569,7 @@ data:
|
||||
{{- else if eq $defaultBpfMasquerade "true" }}
|
||||
enable-bpf-masquerade: {{ $defaultBpfMasquerade | quote }}
|
||||
{{- end }}
|
||||
enable-masquerade-to-route-source: {{ .Values.enableMasqueradeRouteSource | quote }}
|
||||
{{- if hasKey .Values "egressMasqueradeInterfaces" }}
|
||||
egress-masquerade-interfaces: {{ .Values.egressMasqueradeInterfaces }}
|
||||
{{- end }}
|
||||
@@ -583,8 +602,8 @@ data:
|
||||
{{- if .Values.encryption.wireguard.userspaceFallback }}
|
||||
enable-wireguard-userspace-fallback: {{ .Values.encryption.wireguard.userspaceFallback | quote }}
|
||||
{{- end }}
|
||||
{{- if .Values.encryption.wireguard.encapsulate }}
|
||||
wireguard-encapsulate: {{ .Values.encryption.wireguard.encapsulate | quote }}
|
||||
{{- if .Values.encryption.wireguard.persistentKeepalive }}
|
||||
wireguard-persistent-keepalive: {{ .Values.encryption.wireguard.persistentKeepalive | quote }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if .Values.encryption.nodeEncryption }}
|
||||
@@ -592,6 +611,14 @@ data:
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{- if .Values.encryption.strictMode.enabled }}
|
||||
enable-encryption-strict-mode: {{ .Values.encryption.strictMode.enabled | quote }}
|
||||
|
||||
encryption-strict-mode-cidr: {{ .Values.encryption.strictMode.cidr | quote }}
|
||||
|
||||
encryption-strict-mode-allow-remote-node-identities: {{ .Values.encryption.strictMode.allowRemoteNodeIdentities | quote }}
|
||||
{{- end }}
|
||||
|
||||
enable-xt-socket-fallback: {{ .Values.enableXTSocketFallback | quote }}
|
||||
{{- if or (.Values.azure.enabled) (.Values.eni.enabled) (.Values.gke.enabled) (ne $cniChainingMode "none") }}
|
||||
install-no-conntrack-iptables-rules: "false"
|
||||
@@ -693,6 +720,11 @@ data:
|
||||
{{- end }}
|
||||
{{- if hasKey .Values.nodePort "enableHealthCheck" }}
|
||||
enable-health-check-nodeport: {{ .Values.nodePort.enableHealthCheck | quote}}
|
||||
{{- end }}
|
||||
{{- if .Values.gke.enabled }}
|
||||
enable-health-check-loadbalancer-ip: "true"
|
||||
{{- else if hasKey .Values.nodePort "enableHealthCheckLoadBalancerIP" }}
|
||||
enable-health-check-loadbalancer-ip: {{ .Values.nodePort.enableHealthCheckLoadBalancerIP | quote}}
|
||||
{{- end }}
|
||||
node-port-bind-protection: {{ .Values.nodePort.bindProtection | quote }}
|
||||
enable-auto-protect-node-port-range: {{ .Values.nodePort.autoProtectPortRange | quote }}
|
||||
@@ -828,7 +860,7 @@ data:
|
||||
|
||||
{{- if .Values.hubble.enabled }}
|
||||
# Enable Hubble gRPC service.
|
||||
enable-hubble: {{ .Values.hubble.enabled | quote }}
|
||||
enable-hubble: {{ .Values.hubble.enabled | quote }}
|
||||
# UNIX domain socket for Hubble server to listen to.
|
||||
hubble-socket-path: {{ .Values.hubble.socketPath | quote }}
|
||||
{{- if hasKey .Values.hubble "eventQueueSize" }}
|
||||
@@ -852,6 +884,49 @@ data:
|
||||
{{- end }}
|
||||
enable-hubble-open-metrics: {{ .Values.hubble.metrics.enableOpenMetrics | quote }}
|
||||
{{- end }}
|
||||
{{- if .Values.hubble.redact }}
|
||||
{{- if eq .Values.hubble.redact.enabled true }}
|
||||
# Enables hubble redact capabilities
|
||||
hubble-redact-enabled: "true"
|
||||
{{- if .Values.hubble.redact.http }}
|
||||
# Enables redaction of the http URL query part in flows
|
||||
hubble-redact-http-urlquery: {{ .Values.hubble.redact.http.urlQuery | quote }}
|
||||
# Enables redaction of the http user info in flows
|
||||
hubble-redact-http-userinfo: {{ .Values.hubble.redact.http.userInfo | quote }}
|
||||
{{- if .Values.hubble.redact.http.headers }}
|
||||
{{- if .Values.hubble.redact.http.headers.allow }}
|
||||
# Redact all http headers that do not match this list
|
||||
hubble-redact-http-headers-allow: {{- range .Values.hubble.redact.http.headers.allow }}
|
||||
{{ . }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if .Values.hubble.redact.http.headers.deny }}
|
||||
# Redact all http headers that match this list
|
||||
hubble-redact-http-headers-deny: {{- range .Values.hubble.redact.http.headers.deny }}
|
||||
{{ . }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if .Values.hubble.redact.kafka }}
|
||||
# Enables redaction of the Kafka API key part in flows
|
||||
hubble-redact-kafka-apikey: {{ .Values.hubble.redact.kafka.apiKey | quote }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if .Values.hubble.export }}
|
||||
hubble-export-file-max-size-mb: {{ .Values.hubble.export.fileMaxSizeMb | quote }}
|
||||
hubble-export-file-max-backups: {{ .Values.hubble.export.fileMaxBackups | quote }}
|
||||
{{- if .Values.hubble.export.static.enabled }}
|
||||
hubble-export-file-path: {{ .Values.hubble.export.static.filePath | quote }}
|
||||
hubble-export-fieldmask: {{ .Values.hubble.export.static.fieldMask | join " " | quote }}
|
||||
hubble-export-allowlist: {{ .Values.hubble.export.static.allowList | join "," | quote }}
|
||||
hubble-export-denylist: {{ .Values.hubble.export.static.denyList | join "," | quote }}
|
||||
{{- end }}
|
||||
{{- if .Values.hubble.export.dynamic.enabled }}
|
||||
hubble-flowlogs-config-path: /flowlog-config/flowlogs.yaml
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- if hasKey .Values.hubble "listenAddress" }}
|
||||
# An additional address for Hubble server to listen to (e.g. ":4244").
|
||||
hubble-listen-address: {{ .Values.hubble.listenAddress | quote }}
|
||||
@@ -885,7 +960,7 @@ data:
|
||||
ipam-cilium-node-update-rate: {{ include "validateDuration" .Values.ipam.ciliumNodeUpdateRate | quote }}
|
||||
{{- end }}
|
||||
|
||||
{{- if or (eq $ipam "cluster-pool") (eq $ipam "cluster-pool-v2beta") }}
|
||||
{{- if (eq $ipam "cluster-pool") }}
|
||||
{{- if .Values.ipv4.enabled }}
|
||||
{{- if hasKey .Values.ipam.operator "clusterPoolIPv4PodCIDR" }}
|
||||
{{- /* ipam.operator.clusterPoolIPv4PodCIDR removed in v1.14, remove this failsafe around v1.17 */ -}}
|
||||
@@ -927,11 +1002,8 @@ data:
|
||||
limit-ipam-api-qps: {{ .Values.ipam.operator.externalAPILimitQPS | quote }}
|
||||
{{- end }}
|
||||
|
||||
{{- if .Values.enableCnpStatusUpdates }}
|
||||
disable-cnp-status-updates: "false"
|
||||
{{- else if (eq $defaultEnableCnpStatusUpdates "false") }}
|
||||
disable-cnp-status-updates: "true"
|
||||
cnp-node-status-gc-interval: "0s"
|
||||
{{- if .Values.apiRateLimit }}
|
||||
api-rate-limit: {{ .Values.apiRateLimit | quote }}
|
||||
{{- end }}
|
||||
|
||||
{{- if .Values.egressGateway.enabled }}
|
||||
@@ -963,10 +1035,6 @@ data:
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{- if .Values.enableK8sEventHandover }}
|
||||
enable-k8s-event-handover: "true"
|
||||
{{- end }}
|
||||
|
||||
{{- if .Values.crdWaitTimeout }}
|
||||
crd-wait-timeout: {{ include "validateDuration" .Values.crdWaitTimeout | quote }}
|
||||
{{- end }}
|
||||
@@ -1018,6 +1086,7 @@ data:
|
||||
|
||||
{{- if .Values.bgpControlPlane.enabled }}
|
||||
enable-bgp-control-plane: "true"
|
||||
bgp-secrets-namespace: {{ .Values.bgpControlPlane.secretsNamespace.name | quote }}
|
||||
{{- else }}
|
||||
enable-bgp-control-plane: "false"
|
||||
{{- end }}
|
||||
@@ -1064,10 +1133,8 @@ data:
|
||||
annotate-k8s-node: "true"
|
||||
{{- end }}
|
||||
|
||||
{{- if hasKey .Values "k8sClientRateLimit" }}
|
||||
k8s-client-qps: {{ .Values.k8sClientRateLimit.qps | quote }}
|
||||
k8s-client-burst: {{ .Values.k8sClientRateLimit.burst | quote }}
|
||||
{{- end }}
|
||||
k8s-client-qps: {{ .Values.k8sClientRateLimit.qps | default $defaultK8sClientQPS | quote}}
|
||||
k8s-client-burst: {{ .Values.k8sClientRateLimit.burst | default $defaultK8sClientBurst | quote }}
|
||||
|
||||
{{- if and .Values.operator.setNodeTaints (not .Values.operator.removeNodeTaints) -}}
|
||||
{{ fail "Cannot have operator.setNodeTaintsMaxNodes and not operator.removeNodeTaints = false" }}
|
||||
@@ -1092,6 +1159,13 @@ data:
|
||||
{{- end }}
|
||||
|
||||
{{- if .Values.dnsProxy }}
|
||||
{{- if hasKey .Values.dnsProxy "enableTransparentMode" }}
|
||||
# explicit setting gets precedence
|
||||
dnsproxy-enable-transparent-mode: {{ .Values.dnsProxy.enableTransparentMode | quote }}
|
||||
{{- else if eq $cniChainingMode "none" }}
|
||||
# default DNS proxy to transparent mode in non-chaining modes
|
||||
dnsproxy-enable-transparent-mode: {{ $defaultDNSProxyEnableTransparentMode | quote }}
|
||||
{{- end }}
|
||||
{{- if .Values.dnsProxy.dnsRejectResponseCode }}
|
||||
tofqdns-dns-reject-response-code: {{ .Values.dnsProxy.dnsRejectResponseCode | quote }}
|
||||
{{- end }}
|
||||
@@ -1121,10 +1195,6 @@ data:
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{- if .Values.extraConfig }}
|
||||
{{ toYaml .Values.extraConfig | nindent 2 }}
|
||||
{{- end }}
|
||||
|
||||
{{- if hasKey .Values "agentNotReadyTaintKey" }}
|
||||
agent-not-ready-taint-key: {{ .Values.agentNotReadyTaintKey | quote }}
|
||||
{{- end }}
|
||||
@@ -1138,6 +1208,7 @@ data:
|
||||
mesh-auth-mutual-enabled: "true"
|
||||
mesh-auth-mutual-listener-port: {{ .Values.authentication.mutual.port | quote }}
|
||||
mesh-auth-spire-agent-socket: {{ .Values.authentication.mutual.spire.agentSocketPath | quote }}
|
||||
mesh-auth-mutual-connect-timeout: {{ include "validateDuration" .Values.authentication.mutual.connectTimeout | quote }}
|
||||
{{- if .Values.authentication.mutual.spire.serverAddress }}
|
||||
mesh-auth-spire-server-address: {{ .Values.authentication.mutual.spire.serverAddress | quote }}
|
||||
{{- else }}
|
||||
@@ -1158,6 +1229,16 @@ data:
|
||||
envoy-log: {{ .Values.envoy.log.path | quote }}
|
||||
{{- end }}
|
||||
|
||||
{{- if hasKey .Values.clustermesh "maxConnectedClusters" }}
|
||||
max-connected-clusters: {{ .Values.clustermesh.maxConnectedClusters | quote }}
|
||||
{{- end }}
|
||||
|
||||
# Extra config allows adding arbitrary properties to the cilium config.
|
||||
# By putting it at the end of the ConfigMap, it's also possible to override existing properties.
|
||||
{{- if .Values.extraConfig }}
|
||||
{{ toYaml .Values.extraConfig | nindent 2 }}
|
||||
{{- end }}
|
||||
|
||||
{{- end }}
|
||||
---
|
||||
{{- if and .Values.ipMasqAgent.enabled .Values.ipMasqAgent.config }}
|
||||
|
||||
@@ -6,6 +6,10 @@ kind: ConfigMap
|
||||
metadata:
|
||||
name: cilium-envoy-config
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.envoy.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
data:
|
||||
{{- (tpl (.Files.Glob "files/cilium-envoy/configmap/bootstrap-config.json").AsConfig .) | nindent 2 }}
|
||||
|
||||
|
||||
@@ -6,6 +6,10 @@ kind: DaemonSet
|
||||
metadata:
|
||||
name: cilium-envoy
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.envoy.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
k8s-app: cilium-envoy
|
||||
app.kubernetes.io/part-of: cilium
|
||||
@@ -61,11 +65,11 @@ spec:
|
||||
image: {{ include "cilium.image" .Values.envoy.image | quote }}
|
||||
imagePullPolicy: {{ .Values.envoy.image.pullPolicy }}
|
||||
command:
|
||||
- /usr/bin/cilium-envoy
|
||||
- /usr/bin/cilium-envoy-starter
|
||||
args:
|
||||
- '-c /var/run/cilium/envoy/bootstrap-config.json'
|
||||
- '--base-id 0'
|
||||
{{- if and (hasKey .Values.debug "verbose") (.Values.debug.verbose) (has "envoy" ( splitList " " .Values.debug.verbose )) }}
|
||||
{{- if and (.Values.debug.enabled) (hasKey .Values.debug "verbose") (.Values.debug.verbose) (has "envoy" ( splitList " " .Values.debug.verbose )) }}
|
||||
- '--log-level trace'
|
||||
{{- else if and (.Values.debug.enabled) (hasKey .Values.debug "verbose") (.Values.debug.verbose) (has "flow" ( splitList " " .Values.debug.verbose )) }}
|
||||
- '--log-level debug'
|
||||
@@ -82,17 +86,18 @@ spec:
|
||||
{{- if semverCompare ">=1.20-0" .Capabilities.KubeVersion.Version }}
|
||||
startupProbe:
|
||||
httpGet:
|
||||
host: "localhost"
|
||||
host: {{ .Values.ipv4.enabled | ternary "127.0.0.1" "::1" | quote }}
|
||||
path: /healthz
|
||||
port: {{ .Values.envoy.healthPort }}
|
||||
scheme: HTTP
|
||||
failureThreshold: {{ .Values.envoy.startupProbe.failureThreshold }}
|
||||
periodSeconds: {{ .Values.envoy.startupProbe.periodSeconds }}
|
||||
successThreshold: 1
|
||||
initialDelaySeconds: 5
|
||||
{{- end }}
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
host: "localhost"
|
||||
host: {{ .Values.ipv4.enabled | ternary "127.0.0.1" "::1" | quote }}
|
||||
path: /healthz
|
||||
port: {{ .Values.envoy.healthPort }}
|
||||
scheme: HTTP
|
||||
@@ -110,7 +115,7 @@ spec:
|
||||
timeoutSeconds: 5
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
host: "localhost"
|
||||
host: {{ .Values.ipv4.enabled | ternary "127.0.0.1" "::1" | quote }}
|
||||
path: /healthz
|
||||
port: {{ .Values.envoy.healthPort }}
|
||||
scheme: HTTP
|
||||
@@ -175,6 +180,9 @@ spec:
|
||||
- name: envoy-sockets
|
||||
mountPath: /var/run/cilium/envoy/sockets
|
||||
readOnly: false
|
||||
- name: envoy-artifacts
|
||||
mountPath: /var/run/cilium/envoy/artifacts
|
||||
readOnly: true
|
||||
- name: envoy-config
|
||||
mountPath: /var/run/cilium/envoy/
|
||||
readOnly: true
|
||||
@@ -224,6 +232,10 @@ spec:
|
||||
hostPath:
|
||||
path: "{{ .Values.daemon.runPath }}/envoy/sockets"
|
||||
type: DirectoryOrCreate
|
||||
- name: envoy-artifacts
|
||||
hostPath:
|
||||
path: "{{ .Values.daemon.runPath }}/envoy/artifacts"
|
||||
type: DirectoryOrCreate
|
||||
- name: envoy-config
|
||||
configMap:
|
||||
name: cilium-envoy-config
|
||||
|
||||
@@ -4,11 +4,16 @@ kind: Service
|
||||
metadata:
|
||||
name: cilium-envoy
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- if not .Values.envoy.prometheus.serviceMonitor.enabled }}
|
||||
{{- if or (not .Values.envoy.prometheus.serviceMonitor.enabled) .Values.envoy.annotations }}
|
||||
annotations:
|
||||
{{- if not .Values.envoy.prometheus.serviceMonitor.enabled }}
|
||||
prometheus.io/scrape: "true"
|
||||
prometheus.io/port: {{ .Values.proxy.prometheus.port | default .Values.envoy.prometheus.port | quote }}
|
||||
{{- end }}
|
||||
{{- with .Values.envoy.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
labels:
|
||||
k8s-app: cilium-envoy
|
||||
app.kubernetes.io/name: cilium-envoy
|
||||
|
||||
@@ -4,8 +4,13 @@ kind: ServiceAccount
|
||||
metadata:
|
||||
name: {{ .Values.serviceAccounts.envoy.name | quote }}
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- if .Values.serviceAccounts.envoy.annotations }}
|
||||
{{- if or .Values.serviceAccounts.envoy.annotations .Values.envoy.annotations }}
|
||||
annotations:
|
||||
{{- toYaml .Values.serviceAccounts.envoy.annotations | nindent 4 }}
|
||||
{{- with .Values.envoy.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- with .Values.serviceAccounts.envoy.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
@@ -7,13 +7,19 @@ metadata:
|
||||
namespace: {{ .Values.envoy.prometheus.serviceMonitor.namespace | default .Release.Namespace }}
|
||||
labels:
|
||||
app.kubernetes.io/part-of: cilium
|
||||
app.kubernetes.io/name: cilium-envoy
|
||||
{{- with .Values.envoy.prometheus.serviceMonitor.labels }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- if or .Values.envoy.prometheus.serviceMonitor.annotations .Values.envoy.annotations }}
|
||||
annotations:
|
||||
{{- with .Values.envoy.prometheus.serviceMonitor.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- with .Values.envoy.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- with .Values.envoy.prometheus.serviceMonitor.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
@@ -22,7 +28,7 @@ spec:
|
||||
matchNames:
|
||||
- {{ .Release.Namespace }}
|
||||
endpoints:
|
||||
- port: metrics
|
||||
- port: envoy-metrics
|
||||
interval: {{ .Values.envoy.prometheus.serviceMonitor.interval | quote }}
|
||||
honorLabels: true
|
||||
path: /metrics
|
||||
|
||||
@@ -0,0 +1,12 @@
|
||||
{{- if and .Values.hubble.export.dynamic.enabled .Values.hubble.export.dynamic.config.createConfigMap }}
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: {{ .Values.hubble.export.dynamic.config.configMapName }}
|
||||
namespace: {{ .Release.Namespace }}
|
||||
data:
|
||||
flowlogs.yaml: |
|
||||
flowLogs:
|
||||
{{ .Values.hubble.export.dynamic.config.content | toYaml | indent 4 }}
|
||||
{{- end }}
|
||||
@@ -1,6 +1,6 @@
|
||||
{{- if .Values.gatewayAPI.enabled -}}
|
||||
{{- if .Capabilities.APIVersions.Has "gateway.networking.k8s.io/v1beta1/GatewayClass" }}
|
||||
apiVersion: gateway.networking.k8s.io/v1beta1
|
||||
{{- if .Capabilities.APIVersions.Has "gateway.networking.k8s.io/v1/GatewayClass" }}
|
||||
apiVersion: gateway.networking.k8s.io/v1
|
||||
kind: GatewayClass
|
||||
metadata:
|
||||
name: cilium
|
||||
|
||||
@@ -5,6 +5,10 @@ apiVersion: apps/v1
|
||||
metadata:
|
||||
name: cilium-node-init
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.nodeinit.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
app: cilium-node-init
|
||||
app.kubernetes.io/part-of: cilium
|
||||
|
||||
@@ -4,8 +4,13 @@ kind: ServiceAccount
|
||||
metadata:
|
||||
name: {{ .Values.serviceAccounts.nodeinit.name | quote }}
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- if .Values.serviceAccounts.nodeinit.annotations }}
|
||||
{{- if or .Values.serviceAccounts.nodeinit.annotations .Values.nodeinit.annotations }}
|
||||
annotations:
|
||||
{{- toYaml .Values.serviceAccounts.nodeinit.annotations | nindent 4 }}
|
||||
{{- with .Values.nodeinit.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- with .Values.serviceAccounts.nodeinit.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
@@ -3,6 +3,10 @@ apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRole
|
||||
metadata:
|
||||
name: cilium-operator
|
||||
{{- with .Values.operator.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/part-of: cilium
|
||||
rules:
|
||||
@@ -157,6 +161,9 @@ rules:
|
||||
resources:
|
||||
- ciliumendpointslices
|
||||
- ciliumenvoyconfigs
|
||||
- ciliumbgppeerconfigs
|
||||
- ciliumbgpadvertisements
|
||||
- ciliumbgpnodeconfigs
|
||||
verbs:
|
||||
- create
|
||||
- update
|
||||
@@ -183,6 +190,11 @@ rules:
|
||||
resourceNames:
|
||||
- ciliumloadbalancerippools.cilium.io
|
||||
- ciliumbgppeeringpolicies.cilium.io
|
||||
- ciliumbgpclusterconfigs.cilium.io
|
||||
- ciliumbgppeerconfigs.cilium.io
|
||||
- ciliumbgpadvertisements.cilium.io
|
||||
- ciliumbgpnodeconfigs.cilium.io
|
||||
- ciliumbgpnodeconfigoverrides.cilium.io
|
||||
- ciliumclusterwideenvoyconfigs.cilium.io
|
||||
- ciliumclusterwidenetworkpolicies.cilium.io
|
||||
- ciliumegressgatewaypolicies.cilium.io
|
||||
@@ -203,6 +215,8 @@ rules:
|
||||
resources:
|
||||
- ciliumloadbalancerippools
|
||||
- ciliumpodippools
|
||||
- ciliumbgpclusterconfigs
|
||||
- ciliumbgpnodeconfigoverrides
|
||||
verbs:
|
||||
- get
|
||||
- list
|
||||
@@ -258,6 +272,7 @@ rules:
|
||||
- gateways
|
||||
- tlsroutes
|
||||
- httproutes
|
||||
- grpcroutes
|
||||
- referencegrants
|
||||
- referencepolicies
|
||||
verbs:
|
||||
@@ -270,6 +285,7 @@ rules:
|
||||
- gatewayclasses/status
|
||||
- gateways/status
|
||||
- httproutes/status
|
||||
- grpcroutes/status
|
||||
- tlsroutes/status
|
||||
verbs:
|
||||
- update
|
||||
|
||||
@@ -3,6 +3,10 @@ apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRoleBinding
|
||||
metadata:
|
||||
name: cilium-operator
|
||||
{{- with .Values.operator.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/part-of: cilium
|
||||
roleRef:
|
||||
|
||||
@@ -15,9 +15,14 @@ metadata:
|
||||
{{- if $.Values.operator.dashboards.label }}
|
||||
{{ $.Values.operator.dashboards.label }}: {{ ternary $.Values.operator.dashboards.labelValue "1" (not (empty $.Values.operator.dashboards.labelValue)) | quote }}
|
||||
{{- end }}
|
||||
{{- with $.Values.operator.dashboards.annotations }}
|
||||
{{- if or $.Values.operator.dashboards.annotations $.Values.operator.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- with $.Values.operator.dashboards.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- with $.Values.operator.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
data:
|
||||
{{ $dashboardName }}.json: {{ $.Files.Get $path | toJson }}
|
||||
|
||||
@@ -5,6 +5,10 @@ kind: Deployment
|
||||
metadata:
|
||||
name: cilium-operator
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.operator.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
io.cilium/app: operator
|
||||
name: cilium-operator
|
||||
|
||||
@@ -5,6 +5,10 @@ kind: PodDisruptionBudget
|
||||
metadata:
|
||||
name: cilium-operator
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.operator.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
io.cilium/app: operator
|
||||
name: cilium-operator
|
||||
|
||||
@@ -5,6 +5,10 @@ kind: Role
|
||||
metadata:
|
||||
name: cilium-operator-ingress-secrets
|
||||
namespace: {{ .Values.ingressController.secretsNamespace.name | quote }}
|
||||
{{- with .Values.operator.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/part-of: cilium
|
||||
rules:
|
||||
@@ -26,6 +30,10 @@ kind: Role
|
||||
metadata:
|
||||
name: cilium-operator-gateway-secrets
|
||||
namespace: {{ .Values.gatewayAPI.secretsNamespace.name | quote }}
|
||||
{{- with .Values.operator.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/part-of: cilium
|
||||
rules:
|
||||
|
||||
@@ -5,6 +5,10 @@ kind: RoleBinding
|
||||
metadata:
|
||||
name: cilium-operator-ingress-secrets
|
||||
namespace: {{ .Values.ingressController.secretsNamespace.name | quote }}
|
||||
{{- with .Values.operator.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/part-of: cilium
|
||||
roleRef:
|
||||
@@ -24,6 +28,10 @@ kind: RoleBinding
|
||||
metadata:
|
||||
name: cilium-operator-gateway-secrets
|
||||
namespace: {{ .Values.gatewayAPI.secretsNamespace.name | quote }}
|
||||
{{- with .Values.operator.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/part-of: cilium
|
||||
roleRef:
|
||||
|
||||
@@ -5,6 +5,10 @@ kind: Secret
|
||||
metadata:
|
||||
name: cilium-azure
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.operator.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
type: Opaque
|
||||
data:
|
||||
AZURE_CLIENT_ID: {{ default "" .Values.azure.clientID | b64enc | quote }}
|
||||
|
||||
@@ -4,6 +4,10 @@ apiVersion: v1
|
||||
metadata:
|
||||
name: cilium-operator
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.operator.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
io.cilium/app: operator
|
||||
name: cilium-operator
|
||||
|
||||
@@ -8,8 +8,13 @@ kind: ServiceAccount
|
||||
metadata:
|
||||
name: {{ .Values.serviceAccounts.operator.name | quote }}
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- if .Values.serviceAccounts.operator.annotations }}
|
||||
{{- if or .Values.serviceAccounts.operator.annotations .Values.operator.annotations }}
|
||||
annotations:
|
||||
{{- toYaml .Values.serviceAccounts.operator.annotations | nindent 4 }}
|
||||
{{- with .Values.operator.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- with .Values.serviceAccounts.operator.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
@@ -10,10 +10,15 @@ metadata:
|
||||
{{- with .Values.operator.prometheus.serviceMonitor.labels }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- if or .Values.operator.prometheus.serviceMonitor.annotations .Values.operator.annotations }}
|
||||
annotations:
|
||||
{{- with .Values.operator.prometheus.serviceMonitor.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- with .Values.operator.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- with .Values.operator.prometheus.serviceMonitor.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
@@ -37,4 +42,7 @@ spec:
|
||||
{{- end }}
|
||||
targetLabels:
|
||||
- io.cilium/app
|
||||
{{- if .Values.operator.prometheus.serviceMonitor.jobLabel }}
|
||||
jobLabel: {{ .Values.operator.prometheus.serviceMonitor.jobLabel | quote }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
@@ -6,6 +6,10 @@ apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRole
|
||||
metadata:
|
||||
name: cilium-pre-flight
|
||||
{{- with .Values.preflight.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/part-of: cilium
|
||||
rules:
|
||||
@@ -82,6 +86,9 @@ rules:
|
||||
resources:
|
||||
- ciliumloadbalancerippools
|
||||
- ciliumbgppeeringpolicies
|
||||
- ciliumbgpnodeconfigs
|
||||
- ciliumbgpadvertisements
|
||||
- ciliumbgppeerconfigs
|
||||
- ciliumclusterwideenvoyconfigs
|
||||
- ciliumclusterwidenetworkpolicies
|
||||
- ciliumegressgatewaypolicies
|
||||
@@ -137,6 +144,7 @@ rules:
|
||||
- ciliumendpoints/status
|
||||
- ciliumendpoints
|
||||
- ciliuml2announcementpolicies/status
|
||||
- ciliumbgpnodeconfigs/status
|
||||
verbs:
|
||||
- patch
|
||||
{{- end }}
|
||||
|
||||
@@ -3,6 +3,10 @@ apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRoleBinding
|
||||
metadata:
|
||||
name: cilium-pre-flight
|
||||
{{- with .Values.preflight.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/part-of: cilium
|
||||
roleRef:
|
||||
|
||||
@@ -4,6 +4,10 @@ kind: DaemonSet
|
||||
metadata:
|
||||
name: cilium-pre-flight-check
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.preflight.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
@@ -66,8 +70,13 @@ spec:
|
||||
- /tmp/ready
|
||||
initialDelaySeconds: 5
|
||||
periodSeconds: 5
|
||||
{{- with .Values.preflight.extraEnv }}
|
||||
env:
|
||||
- name: K8S_NODE_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
apiVersion: v1
|
||||
fieldPath: spec.nodeName
|
||||
{{- with .Values.preflight.extraEnv }}
|
||||
{{- toYaml . | trim | nindent 12 }}
|
||||
{{- end }}
|
||||
volumeMounts:
|
||||
@@ -104,7 +113,7 @@ spec:
|
||||
args:
|
||||
- -ec
|
||||
- |
|
||||
cilium preflight fqdn-poller --tofqdns-pre-cache {{ .Values.preflight.tofqdnsPreCache }};
|
||||
cilium-dbg preflight fqdn-poller --tofqdns-pre-cache {{ .Values.preflight.tofqdnsPreCache }};
|
||||
touch /tmp/ready-tofqdns-precache;
|
||||
livenessProbe:
|
||||
exec:
|
||||
|
||||
@@ -4,6 +4,10 @@ kind: Deployment
|
||||
metadata:
|
||||
name: cilium-pre-flight-check
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.preflight.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/part-of: cilium
|
||||
app.kubernetes.io/name: cilium-pre-flight-check
|
||||
@@ -39,7 +43,7 @@ spec:
|
||||
args:
|
||||
- -ec
|
||||
- |
|
||||
cilium preflight validate-cnp;
|
||||
cilium-dbg preflight validate-cnp;
|
||||
touch /tmp/ready-validate-cnp;
|
||||
sleep 1h;
|
||||
livenessProbe:
|
||||
|
||||
@@ -5,6 +5,10 @@ kind: PodDisruptionBudget
|
||||
metadata:
|
||||
name: cilium-pre-flight-check
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.preflight.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
k8s-app: cilium-pre-flight-check-deployment
|
||||
app.kubernetes.io/part-of: cilium
|
||||
|
||||
@@ -4,8 +4,13 @@ kind: ServiceAccount
|
||||
metadata:
|
||||
name: {{ .Values.serviceAccounts.preflight.name | quote }}
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- if .Values.serviceAccounts.preflight.annotations }}
|
||||
{{- if or .Values.serviceAccounts.preflight.annotations .Values.preflight.annotations }}
|
||||
annotations:
|
||||
{{ toYaml .Values.serviceAccounts.preflight.annotations | nindent 4 }}
|
||||
{{- with .Values.preflight.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- with .Values.serviceAccounts.preflight.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
@@ -1,32 +1,14 @@
|
||||
{{- if and .Values.ingressController.enabled .Values.ingressController.secretsNamespace.create .Values.ingressController.secretsNamespace.name }}
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: {{ .Values.ingressController.secretsNamespace.name | quote }}
|
||||
{{- end}}
|
||||
{{- $secretNamespaces := dict -}}
|
||||
{{- range $cfg := tuple .Values.ingressController .Values.gatewayAPI .Values.envoyConfig .Values.bgpControlPlane -}}
|
||||
{{- if and $cfg.enabled $cfg.secretsNamespace.create $cfg.secretsNamespace.name -}}
|
||||
{{- $_ := set $secretNamespaces $cfg.secretsNamespace.name 1 -}}
|
||||
{{- end -}}
|
||||
{{- end -}}
|
||||
|
||||
# Only create the namespace if it's different from Ingress secret namespace or Ingress is not enabled.
|
||||
{{- if and .Values.gatewayAPI.enabled .Values.gatewayAPI.secretsNamespace.create .Values.gatewayAPI.secretsNamespace.name
|
||||
(or (not (and .Values.ingressController.enabled .Values.ingressController.secretsNamespace.create .Values.ingressController.secretsNamespace.name))
|
||||
(ne .Values.gatewayAPI.secretsNamespace.name .Values.ingressController.secretsNamespace.name)) }}
|
||||
{{- range $name, $_ := $secretNamespaces }}
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: {{ .Values.gatewayAPI.secretsNamespace.name | quote }}
|
||||
{{- end}}
|
||||
|
||||
# Only create the namespace if it's different from Ingress and Gateway API secret namespaces (if enabled).
|
||||
{{- if and .Values.envoyConfig.enabled .Values.envoyConfig.secretsNamespace.create .Values.envoyConfig.secretsNamespace.name
|
||||
(and
|
||||
(or (not (and .Values.ingressController.enabled .Values.ingressController.secretsNamespace.create .Values.ingressController.secretsNamespace.name))
|
||||
(ne .Values.envoyConfig.secretsNamespace.name .Values.ingressController.secretsNamespace.name))
|
||||
(or (not (and .Values.gatewayAPI.enabled .Values.gatewayAPI.secretsNamespace.create .Values.gatewayAPI.secretsNamespace.name))
|
||||
(ne .Values.envoyConfig.secretsNamespace.name .Values.gatewayAPI.secretsNamespace.name))) }}
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: {{ .Values.envoyConfig.secretsNamespace.name | quote }}
|
||||
name: {{ $name | quote }}
|
||||
{{- end}}
|
||||
|
||||
@@ -5,6 +5,10 @@ metadata:
|
||||
name: clustermesh-apiserver
|
||||
labels:
|
||||
app.kubernetes.io/part-of: cilium
|
||||
{{- with .Values.clustermesh.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
rules:
|
||||
- apiGroups:
|
||||
- cilium.io
|
||||
|
||||
@@ -5,6 +5,10 @@ metadata:
|
||||
name: clustermesh-apiserver
|
||||
labels:
|
||||
app.kubernetes.io/part-of: cilium
|
||||
{{- with .Values.clustermesh.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
|
||||
@@ -7,6 +7,10 @@ kind: Deployment
|
||||
metadata:
|
||||
name: clustermesh-apiserver
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.clustermesh.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
k8s-app: clustermesh-apiserver
|
||||
app.kubernetes.io/part-of: cilium
|
||||
@@ -44,41 +48,37 @@ spec:
|
||||
{{- end }}
|
||||
initContainers:
|
||||
- name: etcd-init
|
||||
image: {{ include "cilium.image" .Values.clustermesh.apiserver.etcd.image | quote }}
|
||||
imagePullPolicy: {{ .Values.clustermesh.apiserver.etcd.image.pullPolicy }}
|
||||
command: ["/bin/sh", "-c"]
|
||||
image: {{ include "cilium.image" .Values.clustermesh.apiserver.image | quote }}
|
||||
imagePullPolicy: {{ .Values.clustermesh.apiserver.image.pullPolicy }}
|
||||
command:
|
||||
- /usr/bin/clustermesh-apiserver
|
||||
args:
|
||||
- |
|
||||
rm -rf /var/run/etcd/*;
|
||||
/usr/local/bin/etcd --data-dir=/var/run/etcd --name=clustermesh-apiserver --listen-client-urls=http://127.0.0.1:2379 --advertise-client-urls=http://127.0.0.1:2379 --initial-cluster-token=clustermesh-apiserver --initial-cluster-state=new --auto-compaction-retention=1 &
|
||||
|
||||
# The following key needs to be created before that the cilium agents
|
||||
# have the possibility of connecting to etcd.
|
||||
etcdctl put cilium/.has-cluster-config true
|
||||
|
||||
etcdctl user add root --no-password;
|
||||
etcdctl user grant-role root root;
|
||||
etcdctl user add admin-{{ .Values.cluster.name }} --no-password;
|
||||
etcdctl user grant-role admin-{{ .Values.cluster.name }} root;
|
||||
etcdctl user add externalworkload --no-password;
|
||||
etcdctl role add externalworkload;
|
||||
etcdctl role grant-permission externalworkload --from-key read '';
|
||||
etcdctl role grant-permission externalworkload readwrite --prefix cilium/state/noderegister/v1/;
|
||||
etcdctl role grant-permission externalworkload readwrite --prefix cilium/.initlock/;
|
||||
etcdctl user grant-role externalworkload externalworkload;
|
||||
etcdctl user add remote --no-password;
|
||||
etcdctl role add remote;
|
||||
etcdctl role grant-permission remote --from-key read '';
|
||||
etcdctl user grant-role remote remote;
|
||||
etcdctl auth enable;
|
||||
exit
|
||||
- etcdinit
|
||||
{{- if .Values.debug.enabled }}
|
||||
- --debug
|
||||
{{- end }}
|
||||
# These need to match the equivalent arguments to etcd in the main container.
|
||||
- --etcd-cluster-name=clustermesh-apiserver
|
||||
- --etcd-initial-cluster-token=clustermesh-apiserver
|
||||
- --etcd-data-dir=/var/run/etcd
|
||||
{{- with .Values.clustermesh.apiserver.etcd.init.extraArgs }}
|
||||
{{- toYaml . | trim | nindent 8 }}
|
||||
{{- end }}
|
||||
env:
|
||||
- name: ETCDCTL_API
|
||||
value: "3"
|
||||
- name: HOSTNAME_IP
|
||||
# The Cilium cluster name (specified via the `CILIUM_CLUSTER_NAME` environment variable) and the etcd cluster
|
||||
# name (specified via the `--etcd-cluster-name` argument) are very different concepts. The Cilium cluster name
|
||||
# is the name of the overall Cilium cluster, and is used to set the admin account username. The etcd cluster
|
||||
# name is a concept that's only relevant for etcd itself. The etcd cluster name must be the same for both this
|
||||
# command and the actual invocation of etcd in the main containers of this Pod, but it's otherwise not
|
||||
# relevant to Cilium.
|
||||
- name: CILIUM_CLUSTER_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: status.podIP
|
||||
configMapKeyRef:
|
||||
name: cilium-config
|
||||
key: cluster-name
|
||||
{{- with .Values.clustermesh.apiserver.etcd.init.extraEnv }}
|
||||
{{- toYaml . | trim | nindent 8 }}
|
||||
{{- end }}
|
||||
volumeMounts:
|
||||
- name: etcd-data-dir
|
||||
mountPath: /var/run/etcd
|
||||
@@ -92,10 +92,11 @@ spec:
|
||||
{{- end }}
|
||||
containers:
|
||||
- name: etcd
|
||||
image: {{ include "cilium.image" .Values.clustermesh.apiserver.etcd.image | quote }}
|
||||
imagePullPolicy: {{ .Values.clustermesh.apiserver.etcd.image.pullPolicy }}
|
||||
# The clustermesh-apiserver container image includes an etcd binary.
|
||||
image: {{ include "cilium.image" .Values.clustermesh.apiserver.image | quote }}
|
||||
imagePullPolicy: {{ .Values.clustermesh.apiserver.image.pullPolicy }}
|
||||
command:
|
||||
- /usr/local/bin/etcd
|
||||
- /usr/bin/etcd
|
||||
args:
|
||||
- --data-dir=/var/run/etcd
|
||||
- --name=clustermesh-apiserver
|
||||
@@ -147,12 +148,17 @@ spec:
|
||||
securityContext:
|
||||
{{- toYaml . | nindent 10 }}
|
||||
{{- end }}
|
||||
{{- with .Values.clustermesh.apiserver.etcd.lifecycle }}
|
||||
lifecycle:
|
||||
{{- toYaml . | nindent 10 }}
|
||||
{{- end }}
|
||||
- name: apiserver
|
||||
image: {{ include "cilium.image" .Values.clustermesh.apiserver.image | quote }}
|
||||
imagePullPolicy: {{ .Values.clustermesh.apiserver.image.pullPolicy }}
|
||||
command:
|
||||
- /usr/bin/clustermesh-apiserver
|
||||
args:
|
||||
- clustermesh
|
||||
{{- if .Values.debug.enabled }}
|
||||
- --debug
|
||||
{{- end }}
|
||||
@@ -160,6 +166,9 @@ spec:
|
||||
- --cluster-id=$(CLUSTER_ID)
|
||||
- --kvstore-opt
|
||||
- etcd.config=/var/lib/cilium/etcd-config.yaml
|
||||
{{- if hasKey .Values.clustermesh "maxConnectedClusters" }}
|
||||
- --max-connected-clusters={{ .Values.clustermesh.maxConnectedClusters }}
|
||||
{{- end }}
|
||||
{{- if ne .Values.clustermesh.apiserver.tls.authMode "legacy" }}
|
||||
- --cluster-users-enabled
|
||||
- --cluster-users-config-path=/var/lib/cilium/etcd-config/users.yaml
|
||||
@@ -167,6 +176,7 @@ spec:
|
||||
- --enable-external-workloads={{ .Values.externalWorkloads.enabled }}
|
||||
{{- if .Values.clustermesh.apiserver.metrics.enabled }}
|
||||
- --prometheus-serve-addr=:{{ .Values.clustermesh.apiserver.metrics.port }}
|
||||
- --controller-group-metrics=all
|
||||
{{- end }}
|
||||
{{- with .Values.clustermesh.apiserver.extraArgs }}
|
||||
{{- toYaml . | trim | nindent 8 }}
|
||||
@@ -224,13 +234,18 @@ spec:
|
||||
securityContext:
|
||||
{{- toYaml . | nindent 10 }}
|
||||
{{- end }}
|
||||
{{- with .Values.clustermesh.apiserver.lifecycle }}
|
||||
lifecycle:
|
||||
{{- toYaml . | nindent 10 }}
|
||||
{{- end }}
|
||||
{{- if .Values.clustermesh.apiserver.kvstoremesh.enabled }}
|
||||
- name: kvstoremesh
|
||||
image: {{ include "cilium.image" .Values.clustermesh.apiserver.kvstoremesh.image | quote }}
|
||||
imagePullPolicy: {{ .Values.clustermesh.apiserver.kvstoremesh.image.pullPolicy }}
|
||||
image: {{ include "cilium.image" .Values.clustermesh.apiserver.image | quote }}
|
||||
imagePullPolicy: {{ .Values.clustermesh.apiserver.image.pullPolicy }}
|
||||
command:
|
||||
- /usr/bin/kvstoremesh
|
||||
- /usr/bin/clustermesh-apiserver
|
||||
args:
|
||||
- kvstoremesh
|
||||
{{- if .Values.debug.enabled }}
|
||||
- --debug
|
||||
{{- end }}
|
||||
@@ -240,8 +255,12 @@ spec:
|
||||
- --kvstore-opt=etcd.qps=100
|
||||
- --kvstore-opt=etcd.maxInflight=10
|
||||
- --clustermesh-config=/var/lib/cilium/clustermesh
|
||||
{{- if hasKey .Values.clustermesh "maxConnectedClusters" }}
|
||||
- --max-connected-clusters={{ .Values.clustermesh.maxConnectedClusters }}
|
||||
{{- end }}
|
||||
{{- if .Values.clustermesh.apiserver.metrics.kvstoremesh.enabled }}
|
||||
- --prometheus-serve-addr=:{{ .Values.clustermesh.apiserver.metrics.kvstoremesh.port }}
|
||||
- --controller-group-metrics=all
|
||||
{{- end }}
|
||||
{{- with .Values.clustermesh.apiserver.kvstoremesh.extraArgs }}
|
||||
{{- toYaml . | trim | nindent 8 }}
|
||||
@@ -285,6 +304,10 @@ spec:
|
||||
securityContext:
|
||||
{{- toYaml . | nindent 10 }}
|
||||
{{- end }}
|
||||
{{- with .Values.clustermesh.apiserver.kvstoremesh.lifecycle }}
|
||||
lifecycle:
|
||||
{{- toYaml . | nindent 10 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
volumes:
|
||||
- name: etcd-server-secrets
|
||||
@@ -371,6 +394,7 @@ spec:
|
||||
priorityClassName: {{ include "cilium.priorityClass" (list $ .Values.clustermesh.apiserver.priorityClassName "system-cluster-critical") }}
|
||||
serviceAccount: {{ .Values.serviceAccounts.clustermeshApiserver.name | quote }}
|
||||
serviceAccountName: {{ .Values.serviceAccounts.clustermeshApiserver.name | quote }}
|
||||
terminationGracePeriodSeconds: {{ .Values.clustermesh.apiserver.terminationGracePeriodSeconds }}
|
||||
automountServiceAccountToken: {{ .Values.serviceAccounts.clustermeshApiserver.automount }}
|
||||
{{- with .Values.clustermesh.apiserver.affinity }}
|
||||
affinity:
|
||||
|
||||
@@ -7,6 +7,10 @@ kind: Service
|
||||
metadata:
|
||||
name: clustermesh-apiserver-metrics
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.clustermesh.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
k8s-app: clustermesh-apiserver
|
||||
app.kubernetes.io/part-of: cilium
|
||||
|
||||
@@ -5,6 +5,10 @@ kind: PodDisruptionBudget
|
||||
metadata:
|
||||
name: clustermesh-apiserver
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.clustermesh.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
k8s-app: clustermesh-apiserver
|
||||
app.kubernetes.io/part-of: cilium
|
||||
|
||||
@@ -8,9 +8,14 @@ metadata:
|
||||
k8s-app: clustermesh-apiserver
|
||||
app.kubernetes.io/part-of: cilium
|
||||
app.kubernetes.io/name: clustermesh-apiserver
|
||||
{{- with .Values.clustermesh.apiserver.service.annotations }}
|
||||
{{- if or .Values.clustermesh.apiserver.service.annotations .Values.clustermesh.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- with .Values.clustermesh.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- with .Values.clustermesh.apiserver.service.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
spec:
|
||||
type: {{ .Values.clustermesh.apiserver.service.type }}
|
||||
|
||||
@@ -4,8 +4,13 @@ kind: ServiceAccount
|
||||
metadata:
|
||||
name: {{ .Values.serviceAccounts.clustermeshApiserver.name | quote }}
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.serviceAccounts.clustermeshApiserver.annotations }}
|
||||
{{- if or .Values.serviceAccounts.clustermeshApiserver.annotations .Values.clustermesh.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- with .Values.clustermesh.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- with .Values.serviceAccounts.clustermeshApiserver.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
@@ -14,10 +14,15 @@ metadata:
|
||||
{{- with .Values.clustermesh.apiserver.metrics.serviceMonitor.labels }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- if or .Values.clustermesh.apiserver.metrics.serviceMonitor.annotations .Values.clustermesh.annotations }}
|
||||
annotations:
|
||||
{{- with .Values.clustermesh.apiserver.metrics.serviceMonitor.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- with .Values.clustermesh.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- with .Values.clustermesh.apiserver.metrics.serviceMonitor.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
spec:
|
||||
selector:
|
||||
matchLabels:
|
||||
|
||||
@@ -5,6 +5,10 @@ kind: Certificate
|
||||
metadata:
|
||||
name: clustermesh-apiserver-admin-cert
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.clustermesh.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
issuerRef:
|
||||
{{- toYaml .Values.clustermesh.apiserver.tls.auto.certManagerIssuerRef | nindent 4 }}
|
||||
|
||||
@@ -5,6 +5,10 @@ kind: Certificate
|
||||
metadata:
|
||||
name: clustermesh-apiserver-client-cert
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.clustermesh.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
issuerRef:
|
||||
{{- toYaml .Values.clustermesh.apiserver.tls.auto.certManagerIssuerRef | nindent 4 }}
|
||||
|
||||
@@ -5,6 +5,10 @@ kind: Certificate
|
||||
metadata:
|
||||
name: clustermesh-apiserver-remote-cert
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.clustermesh.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
issuerRef:
|
||||
{{- toYaml .Values.clustermesh.apiserver.tls.auto.certManagerIssuerRef | nindent 4 }}
|
||||
|
||||
@@ -5,6 +5,10 @@ kind: Certificate
|
||||
metadata:
|
||||
name: clustermesh-apiserver-server-cert
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.clustermesh.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
spec:
|
||||
issuerRef:
|
||||
{{- toYaml .Values.clustermesh.apiserver.tls.auto.certManagerIssuerRef | nindent 4 }}
|
||||
|
||||
@@ -26,12 +26,8 @@ spec:
|
||||
{{- end }}
|
||||
- "--ca-generate"
|
||||
- "--ca-reuse-secret"
|
||||
{{- if .Values.clustermesh.apiserver.tls.ca.cert }}
|
||||
- "--ca-secret-name=clustermesh-apiserver-ca-cert"
|
||||
{{- else -}}
|
||||
{{- if and .Values.tls.ca.cert .Values.tls.ca.key }}
|
||||
{{- if and .Values.tls.ca.cert .Values.tls.ca.key }}
|
||||
- "--ca-secret-name=cilium-ca"
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
- "--clustermesh-apiserver-server-cert-generate"
|
||||
- "--clustermesh-apiserver-server-cert-validity-duration={{ $certValiditySecondsStr }}"
|
||||
@@ -69,5 +65,9 @@ spec:
|
||||
volumes:
|
||||
{{- toYaml . | nindent 6 }}
|
||||
{{- end }}
|
||||
affinity:
|
||||
{{- with .Values.certgen.affinity }}
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
ttlSecondsAfterFinished: {{ .Values.certgen.ttlSecondsAfterFinished }}
|
||||
{{- end }}
|
||||
|
||||
@@ -1,15 +0,0 @@
|
||||
{{- if and (or .Values.externalWorkloads.enabled .Values.clustermesh.useAPIServer) .Values.clustermesh.apiserver.tls.auto.enabled (eq .Values.clustermesh.apiserver.tls.auto.method "cronJob") }}
|
||||
{{- $crt := .Values.clustermesh.apiserver.tls.ca.cert | default .Values.tls.ca.cert -}}
|
||||
{{- $key := .Values.clustermesh.apiserver.tls.ca.key | default .Values.tls.ca.key -}}
|
||||
{{- if and $crt $key }}
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: clustermesh-apiserver-ca-cert
|
||||
namespace: {{ .Release.Namespace }}
|
||||
data:
|
||||
ca.crt: {{ $crt }}
|
||||
ca.key: {{ $key }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
@@ -4,6 +4,10 @@ kind: CronJob
|
||||
metadata:
|
||||
name: clustermesh-apiserver-generate-certs
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.clustermesh.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
k8s-app: clustermesh-apiserver-generate-certs
|
||||
app.kubernetes.io/part-of: cilium
|
||||
|
||||
@@ -13,5 +13,8 @@ metadata:
|
||||
{{- with .Values.certgen.annotations.job }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- with .Values.clustermesh.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{ include "clustermesh-apiserver-generate-certs.job.spec" . }}
|
||||
{{- end }}
|
||||
|
||||
@@ -4,6 +4,10 @@ kind: Role
|
||||
metadata:
|
||||
name: clustermesh-apiserver-generate-certs
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.clustermesh.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/part-of: cilium
|
||||
rules:
|
||||
@@ -19,7 +23,6 @@ rules:
|
||||
- secrets
|
||||
resourceNames:
|
||||
- cilium-ca
|
||||
- clustermesh-apiserver-ca-cert
|
||||
verbs:
|
||||
- get
|
||||
- update
|
||||
|
||||
@@ -4,6 +4,10 @@ kind: RoleBinding
|
||||
metadata:
|
||||
name: clustermesh-apiserver-generate-certs
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.clustermesh.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
labels:
|
||||
app.kubernetes.io/part-of: cilium
|
||||
roleRef:
|
||||
|
||||
@@ -4,8 +4,13 @@ kind: ServiceAccount
|
||||
metadata:
|
||||
name: {{ .Values.serviceAccounts.clustermeshcertgen.name | quote }}
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.serviceAccounts.clustermeshcertgen.annotations }}
|
||||
{{- if or .Values.serviceAccounts.clustermeshcertgen.annotations .Values.clustermesh.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- with .Values.serviceAccounts.clustermeshcertgen.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- with .Values.clustermesh.annotations }}
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
@@ -1,37 +0,0 @@
|
||||
{{/*
|
||||
Generate TLS certificates for ClusterMesh.
|
||||
|
||||
Note: Always use this template as follows:
|
||||
|
||||
{{- $_ := include "clustermesh-apiserver-generate-certs.helm.setup-ca" . -}}
|
||||
|
||||
The assignment to `$_` is required because we store the generated CI in a global `cmca` variable.
|
||||
Please, don't try to "simplify" this, as without this trick, every generated
|
||||
certificate would be signed by a different CA.
|
||||
*/}}
|
||||
{{- define "clustermesh-apiserver-generate-certs.helm.setup-ca" }}
|
||||
{{- if not .cmca }}
|
||||
{{- $ca := "" -}}
|
||||
{{- $crt := .Values.clustermesh.apiserver.tls.ca.cert | default .Values.tls.ca.cert -}}
|
||||
{{- $key := .Values.clustermesh.apiserver.tls.ca.key | default .Values.tls.ca.key -}}
|
||||
{{- if and $crt $key }}
|
||||
{{- $ca = buildCustomCert $crt $key -}}
|
||||
{{- else }}
|
||||
{{- with lookup "v1" "Secret" .Release.Namespace "clustermesh-apiserver-ca-cert" }}
|
||||
{{- $crt := index .data "ca.crt" }}
|
||||
{{- $key := index .data "ca.key" }}
|
||||
{{- $ca = buildCustomCert $crt $key -}}
|
||||
{{- else }}
|
||||
{{- $_ := include "cilium.ca.setup" . -}}
|
||||
{{- with lookup "v1" "Secret" .Release.Namespace .commonCASecretName }}
|
||||
{{- $crt := index .data "ca.crt" }}
|
||||
{{- $key := index .data "ca.key" }}
|
||||
{{- $ca = buildCustomCert $crt $key -}}
|
||||
{{- else }}
|
||||
{{- $ca = .commonCA -}}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- $_ := set . "cmca" $ca -}}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
@@ -1,17 +1,21 @@
|
||||
{{- if and (or .Values.externalWorkloads.enabled .Values.clustermesh.useAPIServer) .Values.clustermesh.apiserver.tls.auto.enabled (eq .Values.clustermesh.apiserver.tls.auto.method "helm") }}
|
||||
{{- $_ := include "clustermesh-apiserver-generate-certs.helm.setup-ca" . -}}
|
||||
{{- $_ := include "cilium.ca.setup" . -}}
|
||||
{{- $cn := include "clustermesh-apiserver-generate-certs.admin-common-name" . -}}
|
||||
{{- $dns := list "localhost" }}
|
||||
{{- $cert := genSignedCert $cn nil $dns (.Values.clustermesh.apiserver.tls.auto.certValidityDuration | int) .cmca -}}
|
||||
{{- $cert := genSignedCert $cn nil $dns (.Values.clustermesh.apiserver.tls.auto.certValidityDuration | int) .commonCA -}}
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: clustermesh-apiserver-admin-cert
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.clustermesh.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
type: kubernetes.io/tls
|
||||
data:
|
||||
ca.crt: {{ .cmca.Cert | b64enc }}
|
||||
ca.crt: {{ .commonCA.Cert | b64enc }}
|
||||
tls.crt: {{ $cert.Cert | b64enc }}
|
||||
tls.key: {{ $cert.Key | b64enc }}
|
||||
{{- end }}
|
||||
|
||||
@@ -1,12 +0,0 @@
|
||||
{{- if and (or .Values.externalWorkloads.enabled .Values.clustermesh.useAPIServer) .Values.clustermesh.apiserver.tls.auto.enabled (eq .Values.clustermesh.apiserver.tls.auto.method "helm") }}
|
||||
{{- $_ := include "clustermesh-apiserver-generate-certs.helm.setup-ca" . -}}
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: clustermesh-apiserver-ca-cert
|
||||
namespace: {{ .Release.Namespace }}
|
||||
data:
|
||||
ca.crt: {{ .cmca.Cert | b64enc }}
|
||||
ca.key: {{ .cmca.Key | b64enc }}
|
||||
{{- end }}
|
||||
@@ -1,16 +1,20 @@
|
||||
{{- if and .Values.externalWorkloads.enabled .Values.clustermesh.apiserver.tls.auto.enabled (eq .Values.clustermesh.apiserver.tls.auto.method "helm") }}
|
||||
{{- $_ := include "clustermesh-apiserver-generate-certs.helm.setup-ca" . -}}
|
||||
{{- $_ := include "cilium.ca.setup" . -}}
|
||||
{{- $cn := "externalworkload" }}
|
||||
{{- $cert := genSignedCert $cn nil nil (.Values.clustermesh.apiserver.tls.auto.certValidityDuration | int) .cmca -}}
|
||||
{{- $cert := genSignedCert $cn nil nil (.Values.clustermesh.apiserver.tls.auto.certValidityDuration | int) .commonCA -}}
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: clustermesh-apiserver-client-cert
|
||||
namespace: {{ .Release.Namespace }}
|
||||
{{- with .Values.clustermesh.annotations }}
|
||||
annotations:
|
||||
{{- toYaml . | nindent 4 }}
|
||||
{{- end }}
|
||||
type: kubernetes.io/tls
|
||||
data:
|
||||
ca.crt: {{ .cmca.Cert | b64enc }}
|
||||
ca.crt: {{ .commonCA.Cert | b64enc }}
|
||||
tls.crt: {{ $cert.Cert | b64enc }}
|
||||
tls.key: {{ $cert.Key | b64enc }}
|
||||
{{- end }}
|
||||
|
||||
@@ -1,16 +1,20 @@
 {{- if and .Values.clustermesh.useAPIServer .Values.clustermesh.apiserver.tls.auto.enabled (eq .Values.clustermesh.apiserver.tls.auto.method "helm") }}
-{{- $_ := include "clustermesh-apiserver-generate-certs.helm.setup-ca" . -}}
+{{- $_ := include "cilium.ca.setup" . -}}
 {{- $cn := include "clustermesh-apiserver-generate-certs.remote-common-name" . -}}
-{{- $cert := genSignedCert $cn nil nil (.Values.clustermesh.apiserver.tls.auto.certValidityDuration | int) .cmca -}}
+{{- $cert := genSignedCert $cn nil nil (.Values.clustermesh.apiserver.tls.auto.certValidityDuration | int) .commonCA -}}
 ---
 apiVersion: v1
 kind: Secret
 metadata:
   name: clustermesh-apiserver-remote-cert
   namespace: {{ .Release.Namespace }}
+  {{- with .Values.clustermesh.annotations }}
+  annotations:
+    {{- toYaml . | nindent 4 }}
+  {{- end }}
 type: kubernetes.io/tls
 data:
-  ca.crt: {{ .cmca.Cert | b64enc }}
+  ca.crt: {{ .commonCA.Cert | b64enc }}
   tls.crt: {{ $cert.Cert | b64enc }}
   tls.key: {{ $cert.Key | b64enc }}
 {{- end }}
@@ -1,18 +1,22 @@
 {{- if and (or .Values.externalWorkloads.enabled .Values.clustermesh.useAPIServer) .Values.clustermesh.apiserver.tls.auto.enabled (eq .Values.clustermesh.apiserver.tls.auto.method "helm") }}
-{{- $_ := include "clustermesh-apiserver-generate-certs.helm.setup-ca" . -}}
+{{- $_ := include "cilium.ca.setup" . -}}
 {{- $cn := "clustermesh-apiserver.cilium.io" }}
 {{- $ip := concat (list "127.0.0.1" "::1") .Values.clustermesh.apiserver.tls.server.extraIpAddresses }}
 {{- $dns := concat (list $cn "*.mesh.cilium.io" (printf "clustermesh-apiserver.%s.svc" .Release.Namespace)) .Values.clustermesh.apiserver.tls.server.extraDnsNames }}
-{{- $cert := genSignedCert $cn $ip $dns (.Values.clustermesh.apiserver.tls.auto.certValidityDuration | int) .cmca -}}
+{{- $cert := genSignedCert $cn $ip $dns (.Values.clustermesh.apiserver.tls.auto.certValidityDuration | int) .commonCA -}}
 ---
 apiVersion: v1
 kind: Secret
 metadata:
   name: clustermesh-apiserver-server-cert
   namespace: {{ .Release.Namespace }}
+  {{- with .Values.clustermesh.annotations }}
+  annotations:
+    {{- toYaml . | nindent 4 }}
+  {{- end }}
 type: kubernetes.io/tls
 data:
-  ca.crt: {{ .cmca.Cert | b64enc }}
+  ca.crt: {{ .commonCA.Cert | b64enc }}
   tls.crt: {{ $cert.Cert | b64enc }}
   tls.key: {{ $cert.Key | b64enc }}
 {{- end }}
@@ -4,9 +4,13 @@ kind: Secret
 metadata:
   name: clustermesh-apiserver-admin-cert
   namespace: {{ .Release.Namespace }}
+  {{- with .Values.clustermesh.annotations }}
+  annotations:
+    {{- toYaml . | nindent 4 }}
+  {{- end }}
 type: kubernetes.io/tls
 data:
-  ca.crt: {{ .Values.clustermesh.apiserver.tls.ca.cert | default .Values.tls.ca.cert }}
+  ca.crt: {{ .Values.tls.ca.cert }}
   tls.crt: {{ .Values.clustermesh.apiserver.tls.admin.cert | required "missing clustermesh.apiserver.tls.admin.cert" }}
   tls.key: {{ .Values.clustermesh.apiserver.tls.admin.key | required "missing clustermesh.apiserver.tls.admin.key" }}
 {{- end }}
@@ -1,12 +0,0 @@
-{{- if and (or .Values.externalWorkloads.enabled .Values.clustermesh.useAPIServer) (not .Values.clustermesh.apiserver.tls.auto.enabled) }}
-apiVersion: v1
-kind: Secret
-metadata:
-  name: clustermesh-apiserver-ca-cert
-  namespace: {{ .Release.Namespace }}
-data:
-  ca.crt: {{ .Values.clustermesh.apiserver.tls.ca.cert | default .Values.tls.ca.cert }}
-  {{- if .Values.clustermesh.apiserver.tls.ca.key | default .Values.tls.ca.key }}
-  ca.key: {{ .Values.clustermesh.apiserver.tls.ca.key | default .Values.tls.ca.key }}
-  {{- end }}
-{{- end }}
@@ -4,9 +4,13 @@ kind: Secret
 metadata:
   name: clustermesh-apiserver-client-cert
   namespace: {{ .Release.Namespace }}
+  {{- with .Values.clustermesh.annotations }}
+  annotations:
+    {{- toYaml . | nindent 4 }}
+  {{- end }}
 type: kubernetes.io/tls
 data:
-  ca.crt: {{ .Values.clustermesh.apiserver.tls.ca.cert | default .Values.tls.ca.cert }}
+  ca.crt: {{ .Values.tls.ca.cert }}
   tls.crt: {{ .Values.clustermesh.apiserver.tls.client.cert | required "missing clustermesh.apiserver.tls.client.cert" }}
   tls.key: {{ .Values.clustermesh.apiserver.tls.client.key | required "missing clustermesh.apiserver.tls.client.key" }}
 {{- end }}
@@ -4,9 +4,13 @@ kind: Secret
 metadata:
   name: clustermesh-apiserver-remote-cert
   namespace: {{ .Release.Namespace }}
+  {{- with .Values.clustermesh.annotations }}
+  annotations:
+    {{- toYaml . | nindent 4 }}
+  {{- end }}
 type: kubernetes.io/tls
 data:
-  ca.crt: {{ .Values.clustermesh.apiserver.tls.ca.cert | default .Values.tls.ca.cert }}
+  ca.crt: {{ .Values.tls.ca.cert }}
   tls.crt: {{ .Values.clustermesh.apiserver.tls.remote.cert | required "missing clustermesh.apiserver.tls.remote.cert" }}
   tls.key: {{ .Values.clustermesh.apiserver.tls.remote.key | required "missing clustermesh.apiserver.tls.remote.key" }}
 {{- end }}
Some files were not shown because too many files have changed in this diff.