Mirror of https://github.com/cozystack/cozystack.git, synced 2026-03-11 09:28:55 +00:00

Compare commits: fix/backup...main (25 commits)
| SHA1 |
|---|
| 7e2c035179 |
| 9e166300a7 |
| bf31b7c408 |
| fb5820e858 |
| 7b0a5d216f |
| 12b34c737a |
| a240ff4b27 |
| 539b5c3d44 |
| b772475ad5 |
| 27c5b0b1e2 |
| 6535ec9f38 |
| 8ac57811eb |
| 4b9c64c459 |
| e08c895a09 |
| 630dfc767a |
| a13481bfea |
| 3606b51a3f |
| 748f814523 |
| 9a4f49238c |
| 4b166e788a |
| 49601b166d |
| e69efd80c4 |
| 318079bf66 |
| 62d36ec4ee |
| a3825314e6 |
.github/workflows/pull-requests.yaml (vendored, 21 changes)
@@ -6,8 +6,6 @@ env:
 on:
   pull_request:
     types: [opened, synchronize, reopened]
-    paths-ignore:
-      - 'docs/**/*'
 
 # Cancel in-flight runs for the same PR when a new push arrives.
 concurrency:
@@ -15,6 +13,19 @@ concurrency:
   cancel-in-progress: true
 
 jobs:
+  detect-changes:
+    name: Detect changes
+    runs-on: ubuntu-latest
+    outputs:
+      code: ${{ steps.filter.outputs.code }}
+    steps:
+      - uses: dorny/paths-filter@v3
+        id: filter
+        with:
+          filters: |
+            code:
+              - '!docs/**'
+
   build:
     name: Build
     runs-on: [self-hosted]
@@ -22,9 +33,11 @@ jobs:
       contents: read
       packages: write
 
-    # Never run when the PR carries the "release" label.
+    needs: ["detect-changes"]
+    # Never run when the PR carries the "release" label or only docs changed.
     if: |
-      !contains(github.event.pull_request.labels.*.name, 'release')
+      needs.detect-changes.outputs.code == 'true'
+      && !contains(github.event.pull_request.labels.*.name, 'release')
 
     steps:
       - name: Checkout code
docs/changelogs/v1.0.4.md (new file, 25 lines)
@@ -0,0 +1,25 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v1.0.4
-->

## Fixes

* **[system] Fix Keycloak probe crashloop with management port health endpoints**: Fixed a crashloop where Keycloak 26.x was endlessly restarting because liveness and readiness probes were sending HTTP requests to port 8080. Keycloak 26.x redirects all requests on port 8080 to `KC_HOSTNAME` (HTTPS), and since kubelet does not follow redirects, probes failed, eventually triggering container restarts. The fix switches probes to the dedicated management port 9000 (`/health/live`, `/health/ready`) enabled via `KC_HEALTH_ENABLED=true`, exposes management port 9000, and adds a `startupProbe` with appropriate failure thresholds for better startup tolerance ([**@mattia-eleuteri**](https://github.com/mattia-eleuteri) in #2162, #2178).
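The probe layout this entry describes can be sketched as a container-spec fragment (a minimal sketch; the port names and threshold values here are illustrative assumptions, not necessarily the chart's exact settings):

```yaml
# Probes target the management port 9000 instead of 8080, which Keycloak 26.x
# redirects to HTTPS. Thresholds below are illustrative, not the chart's values.
env:
  - name: KC_HEALTH_ENABLED
    value: "true"
ports:
  - name: management        # assumed port name
    containerPort: 9000
startupProbe:
  httpGet:
    path: /health/ready
    port: 9000
  failureThreshold: 30      # tolerate slow startup before liveness kicks in
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /health/live
    port: 9000
readinessProbe:
  httpGet:
    path: /health/ready
    port: 9000
```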
* **[system] Fix etcd-operator deprecated kube-rbac-proxy image**: Replaced the deprecated `gcr.io/kubebuilder/kube-rbac-proxy:v0.16.0` image with `quay.io/brancz/kube-rbac-proxy:v0.18.1` in the vendored etcd-operator chart. The GCR-hosted image became unavailable after March 18, 2025, causing etcd-operator pods to fail on image pull ([**@kvaps**](https://github.com/kvaps) in #2181, #2183).

* **[platform] Fix VM MAC address not preserved during virtual-machine to vm-instance migration**: During the `virtual-machine` → `vm-instance` migration (script 29), VM MAC addresses were not preserved. Kube-OVN reads MAC addresses exclusively from the pod annotation `ovn.kubernetes.io/mac_address`, not from `spec.macAddress` of the IP resource. Without this annotation, migrated VMs received a new random MAC address, breaking OS-level network configuration that matches by MAC (e.g., netplan). The fix adds a Helm `lookup` in the vm-instance chart template to read the Kube-OVN IP resource and automatically inject the MAC and IP addresses as pod annotations ([**@sircthulhu**](https://github.com/sircthulhu) in #2169, #2191).

* **[dashboard] Fix External IPs page showing empty rows**: Fixed the External IPs administration page displaying empty rows instead of service data. The `EnrichedTable` configuration in the `external-ips` factory was using incorrect property names — replaced `clusterNamePartOfUrl` with `cluster` and changed `pathToItems` from array format to dot-path string format, matching the convention used by all other `EnrichedTable` instances ([**@IvanHunters**](https://github.com/IvanHunters) in #2175, #2192).

* **[dashboard] Fix disabled/hidden state reset on MarketplacePanel reconciliation**: Fixed a bug where the dashboard controller was hardcoding `disabled=false` and `hidden=false` on every reconcile loop, overwriting changes made through the dashboard UI. Services disabled or hidden via the marketplace panel now correctly retain their state after controller reconciliation ([**@IvanHunters**](https://github.com/IvanHunters) in #2176, #2202).

* **[dashboard] Fix hidden MarketplacePanel resources appearing in sidebar menu**: Fixed the sidebar navigation showing all resources regardless of their MarketplacePanel `hidden` state. The controller now fetches MarketplacePanels during sidebar reconciliation and filters out resources where `hidden=true`, ensuring that hiding a resource from the marketplace also removes it from the sidebar navigation. Listing failures are non-fatal — if the configuration fetch fails, no hiding is applied and the dashboard remains functional ([**@IvanHunters**](https://github.com/IvanHunters) in #2177, #2204).

## Documentation

* **[website] Add OIDC self-signed certificates configuration guide**: Added a comprehensive guide for configuring OIDC authentication with Keycloak when using self-signed certificates (the default in Cozystack). Covers Talos machine configuration with certificate mounting and host entries, kubelogin setup instructions, and a troubleshooting section. The guide is available for both v0 and v1 versioned documentation paths ([**@IvanHunters**](https://github.com/IvanHunters) in cozystack/website#443).

---

**Full Changelog**: https://github.com/cozystack/cozystack/compare/v1.0.3...v1.0.4
docs/changelogs/v1.1.1.md (new file, 23 lines)
@@ -0,0 +1,23 @@
<!--
https://github.com/cozystack/cozystack/releases/tag/v1.1.1
-->

## Fixes

* **[dashboard] Fix hidden MarketplacePanel resources appearing in sidebar menu**: The sidebar was generated independently from MarketplacePanels, always showing all resources regardless of their `hidden` state. Fixed by fetching MarketplacePanels during sidebar reconciliation and skipping resources where `hidden=true`, so hiding a resource from the marketplace also removes it from the sidebar navigation ([**@IvanHunters**](https://github.com/IvanHunters) in #2177, #2203).

* **[dashboard] Fix disabled/hidden state overwritten on every MarketplacePanel reconciliation**: The controller was hardcoding `disabled=false` and `hidden=false` on every reconciliation, silently overwriting any user changes made through the dashboard UI. Fixed by reading and preserving the current `disabled`/`hidden` values from the existing resource before updating ([**@IvanHunters**](https://github.com/IvanHunters) in #2176, #2201).

* **[dashboard] Fix External IPs factory EnrichedTable rendering**: The external-IPs table displayed empty rows because the factory used incorrect `EnrichedTable` properties. Replaced `clusterNamePartOfUrl` with `cluster` and changed `pathToItems` from array to dot-path string format, consistent with all other working `EnrichedTable` instances ([**@IvanHunters**](https://github.com/IvanHunters) in #2175, #2193).

* **[platform] Fix VM MAC address not preserved during virtual-machine to vm-instance migration**: Kube-OVN reads MAC address exclusively from the pod annotation `ovn.kubernetes.io/mac_address`, not from the IP resource `spec.macAddress`. Without the annotation, migrated VMs received a new random MAC, breaking OS-level network configurations that match by MAC (e.g. netplan). Added a Helm `lookup` for the Kube-OVN IP resource in the vm-instance chart so that MAC and IP addresses are automatically injected as pod annotations when the resource exists ([**@sircthulhu**](https://github.com/sircthulhu) in #2169, #2190).

* **[etcd-operator] Replace deprecated kube-rbac-proxy image**: The `gcr.io/kubebuilder/kube-rbac-proxy` image became unavailable after Google Container Registry was deprecated. Replaced it with `quay.io/brancz/kube-rbac-proxy` from the original upstream author, restoring etcd-operator functionality ([**@kvaps**](https://github.com/kvaps) in #2181, #2182).

* **[migrations] Handle missing RabbitMQ CRD in migration 34**: Migration 34 failed with an error when the `rabbitmqs.apps.cozystack.io` CRD did not exist — which occurs on clusters where RabbitMQ was never installed. Added a CRD presence check before attempting to list resources so that migration 34 completes cleanly on such clusters ([**@IvanHunters**](https://github.com/IvanHunters) in #2168, #2180).
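A minimal sketch of the guard this entry describes (a hypothetical shape, since the actual migration script is not shown in this diff; only the resource and CRD names come from the entry above):

```shell
#!/bin/sh
# Hypothetical sketch: only touch RabbitMQ resources when the CRD exists,
# so clusters that never installed RabbitMQ skip this step cleanly.
# The real migration script may be structured differently.
if kubectl get crd rabbitmqs.apps.cozystack.io >/dev/null 2>&1; then
  kubectl get rabbitmqs.apps.cozystack.io --all-namespaces
else
  echo "rabbitmqs.apps.cozystack.io CRD not found, skipping"
fi
```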
* **[keycloak] Fix Keycloak crashloop due to misconfigured health probes**: Keycloak 26.x redirects all HTTP requests on port 8080 to the configured HTTPS hostname; since kubelet does not follow redirects, liveness and readiness probes failed causing a crashloop. Fixed by enabling `KC_HEALTH_ENABLED=true`, exposing management port 9000, and switching all probes to `/health/live` and `/health/ready` on port 9000. Also added a `startupProbe` for improved startup tolerance ([**@mattia-eleuteri**](https://github.com/mattia-eleuteri) in #2162, #2179).

---

**Full Changelog**: https://github.com/cozystack/cozystack/compare/v1.1.0...v1.1.1
@@ -175,8 +175,8 @@ EOF
 
 # VictoriaMetrics components
 kubectl wait vmalert/vmalert-shortterm vmalertmanager/alertmanager -n tenant-root --for=jsonpath='{.status.updateStatus}'=operational --timeout=15m
-kubectl wait vlogs/generic -n tenant-root --for=jsonpath='{.status.updateStatus}'=operational --timeout=5m
-kubectl wait vmcluster/shortterm vmcluster/longterm -n tenant-root --for=jsonpath='{.status.clusterStatus}'=operational --timeout=5m
+kubectl wait vlclusters/generic -n tenant-root --for=jsonpath='{.status.updateStatus}'=operational --timeout=5m
+kubectl wait vmcluster/shortterm vmcluster/longterm -n tenant-root --for=jsonpath='{.status.updateStatus}'=operational --timeout=5m
 
 # Grafana
 kubectl wait clusters.postgresql.cnpg.io/grafana-db -n tenant-root --for=condition=ready --timeout=5m
@@ -68,31 +68,46 @@ func (m *Manager) ensureMarketplacePanel(ctx context.Context, crd *cozyv1alpha1.
 		tags[i] = t
 	}
 
-	specMap := map[string]any{
-		"description": d.Description,
-		"name":        displayName,
-		"type":        "nonCrd",
-		"apiGroup":    "apps.cozystack.io",
-		"apiVersion":  "v1alpha1",
-		"plural":      app.Plural, // e.g., "buckets"
-		"disabled":    false,
-		"hidden":      false,
-		"tags":        tags,
-		"icon":        d.Icon,
-	}
-
-	specBytes, err := json.Marshal(specMap)
-	if err != nil {
-		return reconcile.Result{}, err
-	}
-
-	_, err = controllerutil.CreateOrUpdate(ctx, m.Client, mp, func() error {
+	_, err := controllerutil.CreateOrUpdate(ctx, m.Client, mp, func() error {
 		if err := controllerutil.SetOwnerReference(crd, mp, m.Scheme); err != nil {
 			return err
 		}
 		// Add dashboard labels to dynamic resources
 		m.addDashboardLabels(mp, crd, ResourceTypeDynamic)
 
+		// Preserve user-set disabled/hidden values from existing resource
+		disabled := false
+		hidden := false
+		if mp.Spec.Raw != nil {
+			var existing map[string]any
+			if err := json.Unmarshal(mp.Spec.Raw, &existing); err == nil {
+				if v, ok := existing["disabled"].(bool); ok {
+					disabled = v
+				}
+				if v, ok := existing["hidden"].(bool); ok {
+					hidden = v
+				}
+			}
+		}
+
+		specMap := map[string]any{
+			"description": d.Description,
+			"name":        displayName,
+			"type":        "nonCrd",
+			"apiGroup":    "apps.cozystack.io",
+			"apiVersion":  "v1alpha1",
+			"plural":      app.Plural, // e.g., "buckets"
+			"disabled":    disabled,
+			"hidden":      hidden,
+			"tags":        tags,
+			"icon":        d.Icon,
+		}
+
+		specBytes, err := json.Marshal(specMap)
+		if err != nil {
+			return err
+		}
+
 		// Only update spec if it's different to avoid unnecessary updates
 		newSpec := dashv1alpha1.ArbitrarySpec{
 			JSON: apiextv1.JSON{Raw: specBytes},
@@ -38,6 +38,23 @@ func (m *Manager) ensureSidebar(ctx context.Context, crd *cozyv1alpha1.Applicati
 	}
 	all = crdList.Items
 
+	// 1b) Fetch all MarketplacePanels to determine which resources are hidden
+	hiddenResources := map[string]bool{}
+	var mpList dashv1alpha1.MarketplacePanelList
+	if err := m.List(ctx, &mpList, &client.ListOptions{}); err == nil {
+		for i := range mpList.Items {
+			mp := &mpList.Items[i]
+			if mp.Spec.Raw != nil {
+				var spec map[string]any
+				if err := json.Unmarshal(mp.Spec.Raw, &spec); err == nil {
+					if hidden, ok := spec["hidden"].(bool); ok && hidden {
+						hiddenResources[mp.Name] = true
+					}
+				}
+			}
+		}
+	}
+
 	// 2) Build category -> []item map (only for CRDs with spec.dashboard != nil)
 	type item struct {
 		Key string
@@ -63,6 +80,11 @@ func (m *Manager) ensureSidebar(ctx context.Context, crd *cozyv1alpha1.Applicati
 		plural := pickPlural(kind, def)
 		lowerKind := strings.ToLower(kind)
 
+		// Skip resources hidden via MarketplacePanel
+		if hiddenResources[def.Name] {
+			continue
+		}
+
 		// Check if this resource is a module
 		if def.Spec.Dashboard.Module {
 			// Special case: info should have its own keysAndTags, not be in modules
@@ -1924,12 +1924,12 @@ func CreateAllFactories() []*dashboardv1alpha1.Factory {
 			map[string]any{
 				"type": "EnrichedTable",
 				"data": map[string]any{
-					"id":                   "external-ips-table",
-					"fetchUrl":             "/api/clusters/{2}/k8s/api/v1/namespaces/{3}/services",
-					"clusterNamePartOfUrl": "{2}",
-					"baseprefix":           "/openapi-ui",
-					"customizationId":      "factory-details-v1.services",
-					"pathToItems":          []any{"items"},
+					"id":              "external-ips-table",
+					"fetchUrl":        "/api/clusters/{2}/k8s/api/v1/namespaces/{3}/services",
+					"cluster":         "{2}",
+					"baseprefix":      "/openapi-ui",
+					"customizationId": "factory-details-v1.services",
+					"pathToItems":     ".items",
 					"fieldSelector": map[string]any{
 						"spec.type": "LoadBalancer",
 					},
@@ -73,8 +73,8 @@ spec:
     [OUTPUT]
         Name http
         Match kube.*
-        Host vlogs-generic.{{ $targetTenant }}.svc.{{ $clusterDomain }}
-        port 9428
+        Host vlinsert-generic.{{ $targetTenant }}.svc.{{ $clusterDomain }}
+        port 9481
         compress gzip
         uri /insert/jsonline?_stream_fields=stream,kubernetes_pod_name,kubernetes_container_name,kubernetes_namespace_name&_msg_field=log&_time_field=date
         format json_lines
@@ -207,6 +207,27 @@ spec:
   - toEndpoints:
     - matchLabels:
         "k8s:io.kubernetes.pod.namespace": cozy-kubevirt-cdi
+{{- if .Values.monitoring }}
+---
+apiVersion: cilium.io/v2
+kind: CiliumClusterwideNetworkPolicy
+metadata:
+  name: {{ include "tenant.name" . }}-egress-virt-handler
+spec:
+  endpointSelector:
+    matchLabels:
+      "k8s:io.kubernetes.pod.namespace": "{{ include "tenant.name" . }}"
+      "k8s:app.kubernetes.io/name": "vmagent"
+  egress:
+  - toEndpoints:
+    - matchLabels:
+        "k8s:kubevirt.io": "virt-handler"
+        "k8s:io.kubernetes.pod.namespace": "cozy-kubevirt"
+    toPorts:
+    - ports:
+      - port: "8443"
+        protocol: TCP
+{{- end }}
 ---
 apiVersion: cilium.io/v2
 kind: CiliumNetworkPolicy
@@ -6,7 +6,7 @@ metadata:
   name: {{ include "virtual-machine.fullname" $ }}-ssh-keys
 stringData:
   {{- range $k, $v := .Values.sshKeys }}
-  key{{ $k }}: {{ quote $v }}
+  key{{ $k }}: {{ quote $v }}
   {{- end }}
 {{- end }}
 {{- if or .Values.cloudInit .Values.sshKeys }}
@@ -27,21 +27,7 @@ stringData:
     #cloud-config
     ssh_authorized_keys:
     {{- range .Values.sshKeys }}
-      - {{ quote . }}
+      - {{ quote . }}
     {{- end }}
 {{- end }}
-  networkdata: |
-  {{- /*
-  Provide network config without MAC addresses so the VM can be restored/cloned
-  with a new MAC without breaking DHCP. Interface names are stable by PCI slot:
-  enp1s0 = default (pod) NIC, enp2s0+ = additional subnet NICs.
-  */}}
-    version: 2
-    ethernets:
-      enp1s0:
-        dhcp4: true
-    {{- range $i, $subnet := .Values.subnets }}
-      enp{{ add $i 2 }}s0:
-        dhcp4: true
-    {{- end }}
 {{- end }}
@@ -34,6 +34,12 @@ spec:
   metadata:
     annotations:
       kubevirt.io/allow-pod-bridge-network-live-migration: "true"
+      {{- $ovnIPName := printf "%s.%s" (include "virtual-machine.fullname" .) .Release.Namespace }}
+      {{- $ovnIP := lookup "kubeovn.io/v1" "IP" "" $ovnIPName }}
+      {{- if $ovnIP }}
+      ovn.kubernetes.io/mac_address: {{ $ovnIP.spec.macAddress | quote }}
+      ovn.kubernetes.io/ip_address: {{ $ovnIP.spec.ipAddress | quote }}
+      {{- end }}
     labels:
       {{- include "virtual-machine.labels" . | nindent 8 }}
   spec:
@@ -113,8 +119,6 @@ spec:
       cloudInitNoCloud:
         secretRef:
           name: {{ include "virtual-machine.fullname" . }}-cloud-init
-        networkDataSecretRef:
-          name: {{ include "virtual-machine.fullname" . }}-cloud-init
   {{- end }}
   networks:
   - name: default
packages/core/platform/images/migrations/migrations/35 (new executable file, 21 lines)

@@ -0,0 +1,21 @@
#!/bin/sh
# Migration 35 --> 36
# Add helm.sh/resource-policy=keep annotation to existing VLogs resources
# so they are preserved when the monitoring helm release upgrades to VLCluster.
# Users will need to manually verify the new cluster is working, then optionally
# migrate historical data and delete old VLogs resources.

set -euo pipefail

VLOGS=$(kubectl get vlogs.operator.victoriametrics.com --all-namespaces --output jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}' 2>/dev/null || true)
for resource in $VLOGS; do
  NS="${resource%%/*}"
  NAME="${resource##*/}"
  echo "Adding keep annotation to VLogs/$NAME in $NS"
  kubectl annotate vlogs.operator.victoriametrics.com --namespace "$NS" "$NAME" \
    helm.sh/resource-policy=keep \
    --overwrite
done

kubectl create configmap --namespace cozy-system cozystack-version \
  --from-literal=version=36 --dry-run=client --output yaml | kubectl apply --filename -
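The `NS`/`NAME` split in the migration loop above relies on POSIX parameter expansion; a standalone illustration:

```shell
# POSIX parameter expansion as used by the migration loop:
# ${var%%/*} drops the longest suffix starting at "/", keeping the namespace;
# ${var##*/} drops the longest prefix ending at "/", keeping the name.
resource="tenant-root/generic"   # example value, same shape as the jsonpath output
NS="${resource%%/*}"
NAME="${resource##*/}"
echo "$NS"    # tenant-root
echo "$NAME"  # generic
```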
@@ -18,5 +18,5 @@ spec:
   path: system/backupstrategy-controller
   install:
     privileged: true
-    namespace: cozy-backup-controller
+    namespace: cozy-backupstrategy-controller
   releaseName: backupstrategy-controller
@@ -6,7 +6,7 @@ sourceRef:
 migrations:
   enabled: false
   image: ghcr.io/cozystack/cozystack/platform-migrations:v1.1.0@sha256:d7e8955c1ad8c8fbd4ce42b014c0f849d73d0c3faf0cedaac8e15d647fb2f663
-  targetVersion: 35
+  targetVersion: 36
 # Bundle deployment configuration
 bundles:
   system:
@@ -42,7 +42,9 @@ rules:
   - {{ .name }}-vminsert
 {{- end }}
 {{- range .Values.logsStorages }}
-  - {{ $.Release.Name }}-vlogs-{{ .name }}
+  - {{ .name }}-vlstorage
+  - {{ .name }}-vlselect
+  - {{ .name }}-vlinsert
 {{- end }}
 {{- range .Values.metricsStorages }}
   - vmalert-{{ .name }}
@@ -67,16 +67,46 @@ spec:
 apiVersion: cozystack.io/v1alpha1
 kind: WorkloadMonitor
 metadata:
-  name: vlogs-{{ .name }}
+  name: {{ .name }}-vlstorage
 spec:
-  replicas: 1
+  replicas: 2
   minReplicas: 1
   kind: monitoring
-  type: vlogs
+  type: vlstorage
   selector:
     app.kubernetes.io/component: monitoring
     app.kubernetes.io/instance: {{ .name }}
-    app.kubernetes.io/name: vlogs
+    app.kubernetes.io/name: vlstorage
   version: {{ $.Chart.Version }}
+---
+apiVersion: cozystack.io/v1alpha1
+kind: WorkloadMonitor
+metadata:
+  name: {{ .name }}-vlselect
+spec:
+  replicas: 2
+  minReplicas: 1
+  kind: monitoring
+  type: vlselect
+  selector:
+    app.kubernetes.io/component: monitoring
+    app.kubernetes.io/instance: {{ .name }}
+    app.kubernetes.io/name: vlselect
+  version: {{ $.Chart.Version }}
+---
+apiVersion: cozystack.io/v1alpha1
+kind: WorkloadMonitor
+metadata:
+  name: {{ .name }}-vlinsert
+spec:
+  replicas: 2
+  minReplicas: 1
+  kind: monitoring
+  type: vlinsert
+  selector:
+    app.kubernetes.io/component: monitoring
+    app.kubernetes.io/instance: {{ .name }}
+    app.kubernetes.io/name: vlinsert
+  version: {{ $.Chart.Version }}
 {{- end }}
 ---
@@ -14,10 +14,3 @@ rules:
 - apiGroups: ["backups.cozystack.io"]
   resources: ["backupjobs"]
   verbs: ["create", "get", "list", "watch"]
-# Leader election (--leader-elect)
-- apiGroups: ["coordination.k8s.io"]
-  resources: ["leases"]
-  verbs: ["get", "list", "watch", "create", "update", "patch"]
-- apiGroups: [""]
-  resources: ["events"]
-  verbs: ["create", "patch"]
@@ -30,10 +30,6 @@ rules:
 - apiGroups: ["velero.io"]
   resources: ["backups", "restores"]
   verbs: ["create", "get", "list", "watch", "update", "patch"]
-# Events from Recorder.Event() calls
-- apiGroups: [""]
-  resources: ["events"]
-  verbs: ["create", "patch"]
 # Leader election (--leader-elect)
 - apiGroups: ["coordination.k8s.io"]
   resources: ["leases"]
||||
@@ -10,7 +10,7 @@ update:
|
||||
rm -rf charts
|
||||
helm repo add cilium https://helm.cilium.io/
|
||||
helm repo update cilium
|
||||
helm pull cilium/cilium --untar --untardir charts --version 1.18
|
||||
helm pull cilium/cilium --untar --untardir charts --version 1.19
|
||||
$(SED_INPLACE) -e '/Used in iptables/d' -e '/SYS_MODULE/d' charts/cilium/values.yaml
|
||||
version=$$(awk '$$1 == "version:" {print $$2}' charts/cilium/Chart.yaml) && \
|
||||
$(SED_INPLACE) "s/ARG VERSION=.*/ARG VERSION=v$${version}/" images/cilium/Dockerfile
|
||||
|
||||
@@ -41,11 +41,8 @@ annotations:
     namespace context.\n- kind: CiliumNodeConfig\n  version: v2\n  name: ciliumnodeconfigs.cilium.io\n
   \  displayName: Cilium Node Configuration\n  description: |\n    CiliumNodeConfig
   is a list of configuration key-value pairs. It is applied to\n    nodes indicated
-  by a label selector.\n- kind: CiliumBGPPeeringPolicy\n  version: v2alpha1\n  name:
-  ciliumbgppeeringpolicies.cilium.io\n  displayName: Cilium BGP Peering Policy\n
-  \  description: |\n    Cilium BGP Peering Policy instructs Cilium to create specific
-  BGP peering\n    configurations.\n- kind: CiliumBGPClusterConfig\n  version: v2alpha1\n
-  \  name: ciliumbgpclusterconfigs.cilium.io\n  displayName: Cilium BGP Cluster Config\n
+  by a label selector.\n- kind: CiliumBGPClusterConfig\n  version: v2alpha1\n  name:
+  ciliumbgpclusterconfigs.cilium.io\n  displayName: Cilium BGP Cluster Config\n
   \  description: |\n    Cilium BGP Cluster Config instructs Cilium operator to create
   specific BGP cluster\n    configurations.\n- kind: CiliumBGPPeerConfig\n  version:
   v2alpha1\n  name: ciliumbgppeerconfigs.cilium.io\n  displayName: Cilium BGP Peer
@@ -79,7 +76,7 @@ annotations:
   Cilium Gateway Class Config\n  description: |\n    CiliumGatewayClassConfig defines
   a configuration for Gateway API GatewayClass.\n"
 apiVersion: v2
-appVersion: 1.18.6
+appVersion: 1.19.1
 description: eBPF-based Networking, Security, and Observability
 home: https://cilium.io/
 icon: https://cdn.jsdelivr.net/gh/cilium/cilium@main/Documentation/images/logo-solo.svg
@@ -95,4 +92,4 @@ kubeVersion: '>= 1.21.0-0'
 name: cilium
 sources:
 - https://github.com/cilium/cilium
-version: 1.18.6
+version: 1.19.1
@@ -1,6 +1,6 @@
|
||||
# cilium
|
||||
|
||||
 
|
||||
 
|
||||
|
||||
Cilium is open source software for providing and transparently securing
|
||||
network connectivity and loadbalancing between application workloads such as
|
||||
@@ -59,10 +59,14 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| agentNotReadyTaintKey | string | `"node.cilium.io/agent-not-ready"` | Configure the key of the taint indicating that Cilium is not ready on the node. When set to a value starting with `ignore-taint.cluster-autoscaler.kubernetes.io/`, the Cluster Autoscaler will ignore the taint on its decisions, allowing the cluster to scale up. |
|
||||
| aksbyocni.enabled | bool | `false` | Enable AKS BYOCNI integration. Note that this is incompatible with AKS clusters not created in BYOCNI mode: use Azure integration (`azure.enabled`) instead. |
|
||||
| alibabacloud.enabled | bool | `false` | Enable AlibabaCloud ENI integration |
|
||||
| alibabacloud.nodeSpec.securityGroupTags | list | `[]` | |
|
||||
| alibabacloud.nodeSpec.securityGroups | list | `[]` | |
|
||||
| alibabacloud.nodeSpec.vSwitchTags | list | `[]` | |
|
||||
| alibabacloud.nodeSpec.vSwitches | list | `[]` | |
|
||||
| annotateK8sNode | bool | `false` | Annotate k8s node upon initialization with Cilium's metadata. |
|
||||
| annotations | object | `{}` | Annotations to be added to all top-level cilium-agent objects (resources under templates/cilium-agent) |
|
||||
| apiRateLimit | string | `nil` | The api-rate-limit option can be used to overwrite individual settings of the default configuration for rate limiting calls to the Cilium Agent API |
|
||||
| authentication.enabled | bool | `true` | Enable authentication processing and garbage collection. Note that if disabled, policy enforcement will still block requests that require authentication. But the resulting authentication requests for these requests will not be processed, therefore the requests not be allowed. |
|
||||
| authentication.enabled | bool | `false` | Enable authentication processing and garbage collection. Note that if disabled, policy enforcement will still block requests that require authentication. But the resulting authentication requests for these requests will not be processed, therefore the requests not be allowed. |
|
||||
| authentication.gcInterval | string | `"5m0s"` | Interval for garbage collection of auth map entries. |
|
||||
| authentication.mutual.connectTimeout | string | `"5s"` | Timeout for connecting to the remote node TCP socket |
|
||||
| authentication.mutual.port | int | `4250` | Port on the agent where mutual authentication handshakes between agents will be performed |
|
||||
@@ -73,7 +77,7 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| authentication.mutual.spire.enabled | bool | `false` | Enable SPIRE integration (beta) |
|
||||
| authentication.mutual.spire.install.agent.affinity | object | `{}` | SPIRE agent affinity configuration |
|
||||
| authentication.mutual.spire.install.agent.annotations | object | `{}` | SPIRE agent annotations |
|
||||
| authentication.mutual.spire.install.agent.image | object | `{"digest":"sha256:163970884fba18860cac93655dc32b6af85a5dcf2ebb7e3e119a10888eff8fcd","override":null,"pullPolicy":"IfNotPresent","repository":"ghcr.io/spiffe/spire-agent","tag":"1.12.4","useDigest":true}` | SPIRE agent image |
|
||||
| authentication.mutual.spire.install.agent.image | object | `{"digest":"sha256:5106ac601272a88684db14daf7f54b9a45f31f77bb16a906bd5e87756ee7b97c","override":null,"pullPolicy":"IfNotPresent","repository":"ghcr.io/spiffe/spire-agent","tag":"1.9.6","useDigest":true}` | SPIRE agent image |
|
||||
| authentication.mutual.spire.install.agent.labels | object | `{}` | SPIRE agent labels |
|
||||
| authentication.mutual.spire.install.agent.nodeSelector | object | `{}` | SPIRE agent nodeSelector configuration ref: ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
|
||||
| authentication.mutual.spire.install.agent.podSecurityContext | object | `{}` | Security context to be added to spire agent pods. SecurityContext holds pod-level security attributes and common container settings. ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod |
|
||||
@@ -85,7 +89,7 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| authentication.mutual.spire.install.agent.tolerations | list | `[{"effect":"NoSchedule","key":"node.kubernetes.io/not-ready"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"},{"effect":"NoSchedule","key":"node-role.kubernetes.io/control-plane"},{"effect":"NoSchedule","key":"node.cloudprovider.kubernetes.io/uninitialized","value":"true"},{"key":"CriticalAddonsOnly","operator":"Exists"}]` | SPIRE agent tolerations configuration By default it follows the same tolerations as the agent itself to allow the Cilium agent on this node to connect to SPIRE. ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
|
||||
| authentication.mutual.spire.install.enabled | bool | `true` | Enable SPIRE installation. This will only take effect only if authentication.mutual.spire.enabled is true |
|
||||
| authentication.mutual.spire.install.existingNamespace | bool | `false` | SPIRE namespace already exists. Set to true if Helm should not create, manage, and import the SPIRE namespace. |
|
||||
| authentication.mutual.spire.install.initImage | object | `{"digest":"sha256:2383baad1860bbe9d8a7a843775048fd07d8afe292b94bd876df64a69aae7cb1","override":null,"pullPolicy":"IfNotPresent","repository":"docker.io/library/busybox","tag":"1.37.0","useDigest":true}` | init container image of SPIRE agent and server |
|
||||
| authentication.mutual.spire.install.initImage | object | `{"digest":"sha256:b3255e7dfbcd10cb367af0d409747d511aeb66dfac98cf30e97e87e4207dd76f","override":null,"pullPolicy":"IfNotPresent","repository":"docker.io/library/busybox","tag":"1.37.0","useDigest":true}` | init container image of SPIRE agent and server |
|
||||
| authentication.mutual.spire.install.namespace | string | `"cilium-spire"` | SPIRE namespace to install into |
|
||||
| authentication.mutual.spire.install.server.affinity | object | `{}` | SPIRE server affinity configuration |
|
||||
| authentication.mutual.spire.install.server.annotations | object | `{}` | SPIRE server annotations |
|
||||
@@ -95,7 +99,7 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| authentication.mutual.spire.install.server.dataStorage.enabled | bool | `true` | Enable SPIRE server data storage |
| authentication.mutual.spire.install.server.dataStorage.size | string | `"1Gi"` | Size of the SPIRE server data storage |
| authentication.mutual.spire.install.server.dataStorage.storageClass | string | `nil` | StorageClass of the SPIRE server data storage |
| authentication.mutual.spire.install.server.image | object | `{"digest":"sha256:34147f27066ab2be5cc10ca1d4bfd361144196467155d46c45f3519f41596e49","override":null,"pullPolicy":"IfNotPresent","repository":"ghcr.io/spiffe/spire-server","tag":"1.12.4","useDigest":true}` | SPIRE server image |
| authentication.mutual.spire.install.server.image | object | `{"digest":"sha256:59a0b92b39773515e25e68a46c40d3b931b9c1860bc445a79ceb45a805cab8b4","override":null,"pullPolicy":"IfNotPresent","repository":"ghcr.io/spiffe/spire-server","tag":"1.9.6","useDigest":true}` | SPIRE server image |
| authentication.mutual.spire.install.server.initContainers | list | `[]` | SPIRE server init containers |
| authentication.mutual.spire.install.server.labels | object | `{}` | SPIRE server labels |
| authentication.mutual.spire.install.server.nodeSelector | object | `{}` | SPIRE server nodeSelector configuration ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
| authentication.rotatedIdentitiesQueueSize | int | `1024` | Buffer size of the channel Cilium uses to receive certificate expiration events from auth handlers. |
| autoDirectNodeRoutes | bool | `false` | Enable installation of PodCIDR routes between worker nodes if worker nodes share a common L2 network segment. |
| azure.enabled | bool | `false` | Enable Azure integration. Note that this is incompatible with AKS clusters created in BYOCNI mode: use AKS BYOCNI integration (`aksbyocni.enabled`) instead. |
| azure.nodeSpec.azureInterfaceName | string | `""` | |
| bandwidthManager | object | `{"bbr":false,"bbrHostNamespaceOnly":false,"enabled":false}` | Enable bandwidth manager to optimize TCP and UDP workloads and allow for rate-limiting traffic from individual Pods with EDT (Earliest Departure Time) through the "kubernetes.io/egress-bandwidth" Pod annotation. |
| bandwidthManager.bbr | bool | `false` | Activate BBR TCP congestion control for Pods |
| bandwidthManager.bbrHostNamespaceOnly | bool | `false` | Activate BBR TCP congestion control for Pods in the host namespace only. |
| bandwidthManager.enabled | bool | `false` | Enable bandwidth manager infrastructure (also prerequirement for BBR) |
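Taken together, the bandwidth-manager keys above map to a small values fragment. A minimal sketch, assuming the defaults listed here (the `10M` annotation value is purely illustrative):

```yaml
# values.yaml sketch: enable the bandwidth manager with BBR for Pods
bandwidthManager:
  enabled: true            # infrastructure itself; prerequisite for BBR
  bbr: true                # BBR TCP congestion control for Pods
  bbrHostNamespaceOnly: false

# A Pod is then rate-limited individually via its annotation, e.g.:
# metadata:
#   annotations:
#     kubernetes.io/egress-bandwidth: "10M"   # illustrative value
```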
| bgpControlPlane | object | `{"enabled":false,"legacyOriginAttribute":{"enabled":false},"routerIDAllocation":{"ipPool":"","mode":"default"},"secretsNamespace":{"create":false,"name":"kube-system"},"statusReport":{"enabled":true}}` | This feature set enables virtual BGP routers to be created via CiliumBGPPeeringPolicy CRDs. |
| bgpControlPlane | object | `{"enabled":false,"legacyOriginAttribute":{"enabled":false},"routerIDAllocation":{"ipPool":"","mode":"default"},"secretsNamespace":{"create":false,"name":"kube-system"},"statusReport":{"enabled":true}}` | This feature set enables virtual BGP routers to be created via BGP CRDs. |
| bgpControlPlane.enabled | bool | `false` | Enables the BGP control plane. |
| bgpControlPlane.legacyOriginAttribute | object | `{"enabled":false}` | Legacy BGP ORIGIN attribute settings (BGPv2 only) |
| bgpControlPlane.legacyOriginAttribute | object | `{"enabled":false}` | Legacy BGP ORIGIN attribute settings |
| bgpControlPlane.legacyOriginAttribute.enabled | bool | `false` | Enable/Disable advertising LoadBalancerIP routes with the legacy BGP ORIGIN attribute value INCOMPLETE (2) instead of the default IGP (0). Enable for compatibility with the legacy behavior of MetalLB integration. |
| bgpControlPlane.routerIDAllocation | object | `{"ipPool":"","mode":"default"}` | BGP router-id allocation mode |
| bgpControlPlane.routerIDAllocation.ipPool | string | `""` | IP pool to allocate the BGP router-id from when the mode is ip-pool. |
| bgpControlPlane.secretsNamespace | object | `{"create":false,"name":"kube-system"}` | SecretsNamespace is the namespace which BGP support will retrieve secrets from. |
| bgpControlPlane.secretsNamespace.create | bool | `false` | Create secrets namespace for BGP secrets. |
| bgpControlPlane.secretsNamespace.name | string | `"kube-system"` | The name of the secret namespace to which Cilium agents are given read access |
| bgpControlPlane.statusReport | object | `{"enabled":true}` | Status reporting settings (BGPv2 only) |
| bgpControlPlane.statusReport.enabled | bool | `true` | Enable/Disable BGPv2 status reporting It is recommended to enable status reporting in general, but if you have any issue such as high API server load, you can disable it by setting this to false. |
| bgpControlPlane.statusReport | object | `{"enabled":true}` | Status reporting settings |
| bgpControlPlane.statusReport.enabled | bool | `true` | Enable/Disable BGP status reporting It is recommended to enable status reporting in general, but if you have any issue such as high API server load, you can disable it by setting this to false. |
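The `bgpControlPlane.*` keys above combine into a fragment like the following. This is a hedged sketch: the pool name `bgp-router-ids` is hypothetical, and `routerIDAllocation.ipPool` only matters when the mode is `ip-pool`:

```yaml
bgpControlPlane:
  enabled: true
  secretsNamespace:
    create: false
    name: kube-system        # namespace Cilium agents read BGP secrets from
  routerIDAllocation:
    mode: ip-pool            # "default" lets Cilium pick router IDs itself
    ipPool: bgp-router-ids   # hypothetical pool name; used only with mode: ip-pool
  statusReport:
    enabled: true            # disable if status writes load the API server too much
```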
| bpf.authMapMax | int | `524288` | Configure the maximum number of entries in auth map. |
| bpf.autoMount.enabled | bool | `true` | Enable automatic mount of BPF filesystem When `autoMount` is enabled, the BPF filesystem is mounted at `bpf.root` path on the underlying host and inside the cilium agent pod. If users disable `autoMount`, it's expected that users have mounted bpffs filesystem at the specified `bpf.root` volume, and then the volume will be mounted inside the cilium agent pod at the same path. |
| bpf.ctAccounting | bool | `false` | Enable CT accounting for packets and bytes |
| bpf.ctAnyMax | int | `262144` | Configure the maximum number of entries for the non-TCP connection tracking table. |
| bpf.ctTcpMax | int | `524288` | Configure the maximum number of entries in the TCP connection tracking table. |
| bpf.datapathMode | string | `veth` | Mode for Pod devices for the core datapath (veth, netkit, netkit-l2) |
| bpf.datapathMode | string | `veth` | Mode for Pod devices for the core datapath (veth, netkit, netkit-l2). Note netkit is incompatible with TPROXY (`bpf.tproxy`). |
| bpf.disableExternalIPMitigation | bool | `false` | Disable ExternalIP mitigation (CVE-2020-8554) |
| bpf.distributedLRU | object | `{"enabled":false}` | Control to use a distributed per-CPU backend memory for the core BPF LRU maps which Cilium uses. This improves performance significantly, but it is also recommended to increase BPF map sizing along with that. |
| bpf.distributedLRU.enabled | bool | `false` | Enable distributed LRU backend memory. For compatibility with existing installations it is off by default. |
| bpf.enableTCX | bool | `true` | Attach endpoint programs using tcx instead of legacy tc hooks on supported kernels. |
| bpf.events | object | `{"default":{"burstLimit":null,"rateLimit":null},"drop":{"enabled":true},"policyVerdict":{"enabled":true},"trace":{"enabled":true}}` | Control events generated by the Cilium datapath exposed to Cilium monitor and Hubble. Helm configuration for BPF events map rate limiting is experimental and might change in upcoming releases. |
| bpf.events.default | object | `{"burstLimit":null,"rateLimit":null}` | Default settings for all types of events except dbg and pcap. |
| bpf.events.default | object | `{"burstLimit":null,"rateLimit":null}` | Default settings for all types of events except dbg. |
| bpf.events.default.burstLimit | int | `0` | Configure the maximum number of messages that can be written to BPF events map in 1 second. If burstLimit is greater than 0, non-zero value for rateLimit must also be provided lest the configuration is considered invalid. Setting both burstLimit and rateLimit to 0 disables BPF events rate limiting. |
| bpf.events.default.rateLimit | int | `0` | Configure the limit of messages per second that can be written to BPF events map. The number of messages is averaged, meaning that if no messages were written to the map over 5 seconds, it's possible to write more events in the 6th second. If rateLimit is greater than 0, non-zero value for burstLimit must also be provided lest the configuration is considered invalid. Setting both burstLimit and rateLimit to 0 disables BPF events rate limiting. |
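The pairing rule described for `bpf.events.default` (both values zero disables limiting; a non-zero value in one requires a non-zero value in the other, or the configuration is invalid) can be sketched as a values fragment. The numbers below are illustrative, not recommendations:

```yaml
bpf:
  events:
    default:
      rateLimit: 1000    # average messages/s written to the BPF events map
      burstLimit: 2000   # cap within any 1-second window; must be non-zero when rateLimit is
```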
| bpf.events.drop.enabled | bool | `true` | Enable drop events. |
| bpf.monitorAggregation | string | `"medium"` | Configure the level of aggregation for monitor notifications. Valid options are none, low, medium, maximum. |
| bpf.monitorFlags | string | `"all"` | Configure which TCP flags trigger notifications when seen for the first time in a connection. |
| bpf.monitorInterval | string | `"5s"` | Configure the typical time between monitor notifications for active connections. |
| bpf.monitorTraceIPOption | int | `0` | Configure the IP tracing option type. This option is used to specify the IP option type to use for tracing. The value must be an integer between 0 and 255. @schema type: [null, integer] minimum: 0 maximum: 255 @schema |
| bpf.natMax | int | `524288` | Configure the maximum number of entries for the NAT table. |
| bpf.neighMax | int | `524288` | Configure the maximum number of entries for the neighbor table. |
| bpf.nodeMapMax | int | `nil` | Configures the maximum number of entries for the node table. |
| bpf.policyMapMax | int | `16384` | Configure the maximum number of entries in endpoint policy map (per endpoint). @schema type: [null, integer] @schema |
| bpf.policyMapPressureMetricsThreshold | float64 | `0.1` | Configure threshold for emitting pressure metrics of policy maps. @schema type: [null, number] @schema |
| bpf.policyStatsMapMax | int | `65536` | Configure the maximum number of entries in global policy stats map. @schema type: [null, integer] @schema |
| bpf.preallocateMaps | bool | `false` | Enables pre-allocation of eBPF map values. This increases memory usage but can reduce latency. |
| bpf.root | string | `"/sys/fs/bpf"` | Configure the mount point for the BPF filesystem |
| bpf.tproxy | bool | `false` | Configure the eBPF-based TPROXY (beta) to reduce reliance on iptables rules for implementing Layer 7 policy. |
| bpf.tproxy | bool | `false` | Configure the eBPF-based TPROXY (beta) to reduce reliance on iptables rules for implementing Layer 7 policy. Note this is incompatible with netkit (`bpf.datapathMode=netkit`, `bpf.datapathMode=netkit-l2`). |
| bpf.vlanBypass | list | `[]` | Configure explicitly allowed VLAN id's for bpf logic bypass. [0] will allow all VLAN id's without any filtering. |
| bpfClockProbe | bool | `false` | Enable BPF clock source probing for more efficient tick retrieval. |
| certgen | object | `{"affinity":{},"annotations":{"cronJob":{},"job":{}},"extraVolumeMounts":[],"extraVolumes":[],"generateCA":true,"image":{"digest":"sha256:2825dbfa6f89cbed882fd1d81e46a56c087e35885825139923aa29eb8aec47a9","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/certgen","tag":"v0.3.1","useDigest":true},"nodeSelector":{},"podLabels":{},"priorityClassName":"","resources":{},"tolerations":[],"ttlSecondsAfterFinished":1800}` | Configure certificate generation for Hubble integration. If hubble.tls.auto.method=cronJob, these values are used for the Kubernetes CronJob which will be scheduled regularly to (re)generate any certificates not provided manually. |
| certgen | object | `{"affinity":{},"annotations":{"cronJob":{},"job":{}},"cronJob":{"failedJobsHistoryLimit":1,"successfulJobsHistoryLimit":3},"extraVolumeMounts":[],"extraVolumes":[],"generateCA":true,"image":{"digest":"sha256:19921f48ee7e2295ea4dca955878a6cd8d70e6d4219d08f688e866ece9d95d4d","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/certgen","tag":"v0.3.2","useDigest":true},"nodeSelector":{},"podLabels":{},"priorityClassName":"","resources":{},"tolerations":[],"ttlSecondsAfterFinished":null}` | Configure certificate generation for Hubble integration. If hubble.tls.auto.method=cronJob, these values are used for the Kubernetes CronJob which will be scheduled regularly to (re)generate any certificates not provided manually. |
| certgen.affinity | object | `{}` | Affinity for certgen |
| certgen.annotations | object | `{"cronJob":{},"job":{}}` | Annotations to be added to the hubble-certgen initial Job and CronJob |
| certgen.cronJob.failedJobsHistoryLimit | int | `1` | The number of failed finished jobs to keep |
| certgen.cronJob.successfulJobsHistoryLimit | int | `3` | The number of successful finished jobs to keep |
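A minimal sketch combining the certgen CronJob knobs above, assuming the defaults listed here (the actual certificate schedule lives under `hubble.tls.auto`, which is not part of this fragment):

```yaml
certgen:
  cronJob:
    successfulJobsHistoryLimit: 3   # finished jobs kept for inspection
    failedJobsHistoryLimit: 1
  ttlSecondsAfterFinished: 1800     # delete completed job pods after 30 min; null keeps them
  generateCA: true                  # create the certificate authority secret
```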
| certgen.extraVolumeMounts | list | `[]` | Additional certgen volumeMounts. |
| certgen.extraVolumes | list | `[]` | Additional certgen volumes. |
| certgen.generateCA | bool | `true` | When set to true the certificate authority secret is created. |
| certgen.priorityClassName | string | `""` | Priority class for certgen ref: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass |
| certgen.resources | object | `{}` | Resource limits for certgen ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers |
| certgen.tolerations | list | `[]` | Node tolerations for pod assignment on nodes with taints ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
| certgen.ttlSecondsAfterFinished | int | `1800` | Seconds after which the completed job pod will be deleted |
| certgen.ttlSecondsAfterFinished | string | `nil` | Seconds after which the completed job pod will be deleted |
| cgroup | object | `{"autoMount":{"enabled":true,"resources":{}},"hostRoot":"/run/cilium/cgroupv2"}` | Configure cgroup related configuration |
| cgroup.autoMount.enabled | bool | `true` | Enable auto mount of cgroup2 filesystem. When `autoMount` is enabled, cgroup2 filesystem is mounted at `cgroup.hostRoot` path on the underlying host and inside the cilium agent pod. If users disable `autoMount`, it's expected that users have mounted cgroup2 filesystem at the specified `cgroup.hostRoot` volume, and then the volume will be mounted inside the cilium agent pod at the same path. |
| cgroup.autoMount.resources | object | `{}` | Init Container Cgroup Automount resource limits & requests |
| clustermesh.apiserver.extraVolumeMounts | list | `[]` | Additional clustermesh-apiserver volumeMounts. |
| clustermesh.apiserver.extraVolumes | list | `[]` | Additional clustermesh-apiserver volumes. |
| clustermesh.apiserver.healthPort | int | `9880` | TCP port for the clustermesh-apiserver health API. |
| clustermesh.apiserver.image | object | `{"digest":"sha256:8ee142912a0e261850c0802d9256ddbe3729e1cd35c6bea2d93077f334c3cf3b","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/clustermesh-apiserver","tag":"v1.18.6","useDigest":true}` | Clustermesh API server image. |
| clustermesh.apiserver.image | object | `{"digest":"sha256:56d6c3dc13b50126b80ecb571707a0ea97f6db694182b9d61efd386d04e5bb28","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/clustermesh-apiserver","tag":"v1.19.1","useDigest":true}` | Clustermesh API server image. |
| clustermesh.apiserver.kvstoremesh.enabled | bool | `true` | Enable KVStoreMesh. KVStoreMesh caches the information retrieved from the remote clusters in the local etcd instance (deprecated - KVStoreMesh will always be enabled once the option is removed). |
| clustermesh.apiserver.kvstoremesh.extraArgs | list | `[]` | Additional KVStoreMesh arguments. |
| clustermesh.apiserver.kvstoremesh.extraEnv | list | `[]` | Additional KVStoreMesh environment variables. |
| clustermesh.apiserver.service.annotations | object | `{}` | Annotations for the clustermesh-apiserver service. Example annotations to configure an internal load balancer on different cloud providers: * AKS: service.beta.kubernetes.io/azure-load-balancer-internal: "true" * EKS: service.beta.kubernetes.io/aws-load-balancer-scheme: "internal" * GKE: networking.gke.io/load-balancer-type: "Internal" |
| clustermesh.apiserver.service.enableSessionAffinity | string | `"HAOnly"` | Defines when to enable session affinity. Each replica in a clustermesh-apiserver deployment runs its own discrete etcd cluster. Remote clients connect to one of the replicas through a shared Kubernetes Service. A client reconnecting to a different backend will require a full resync to ensure data integrity. Session affinity can reduce the likelihood of this happening, but may not be supported by all cloud providers. Possible values: - "HAOnly" (default) Only enable session affinity for deployments with more than 1 replica. - "Always" Always enable session affinity. - "Never" Never enable session affinity. Useful in environments where session affinity is not supported, but may lead to slightly degraded performance due to more frequent reconnections. |
| clustermesh.apiserver.service.externalTrafficPolicy | string | `"Cluster"` | The externalTrafficPolicy of service used for apiserver access. |
| clustermesh.apiserver.service.externallyCreated | bool | `false` | Set externallyCreated to true to create the clustermesh-apiserver service outside this helm chart. For example after external load balancer controllers are created. |
| clustermesh.apiserver.service.internalTrafficPolicy | string | `"Cluster"` | The internalTrafficPolicy of service used for apiserver access. |
| clustermesh.apiserver.service.labels | object | `{}` | Labels for the clustermesh-apiserver service. |
| clustermesh.apiserver.service.loadBalancerClass | string | `nil` | Configure a loadBalancerClass. Allows to configure the loadBalancerClass on the clustermesh-apiserver LB service in case the Service type is set to LoadBalancer (requires Kubernetes 1.24+). |
| clustermesh.apiserver.service.loadBalancerIP | string | `nil` | Configure a specific loadBalancerIP. Allows to configure a specific loadBalancerIP on the clustermesh-apiserver LB service in case the Service type is set to LoadBalancer. |
| clustermesh.apiserver.service.loadBalancerSourceRanges | list | `[]` | Configure loadBalancerSourceRanges. Allows to configure the source IP ranges allowed to access the clustermesh-apiserver LB service in case the Service type is set to LoadBalancer. |
| clustermesh.apiserver.service.nodePort | int | `32379` | Optional port to use as the node port for apiserver access. WARNING: make sure to configure a different NodePort in each cluster if kube-proxy replacement is enabled, as Cilium is currently affected by a known bug (#24692) when NodePorts are handled by the KPR implementation. If a service with the same NodePort exists both in the local and the remote cluster, all traffic originating from inside the cluster and targeting the corresponding NodePort will be redirected to a local backend, regardless of whether the destination node belongs to the local or the remote cluster. |
| clustermesh.apiserver.service.nodePort | int | `32379` | Optional port to use as the node port for apiserver access. |
| clustermesh.apiserver.service.type | string | `"NodePort"` | The type of service used for apiserver access. |
| clustermesh.apiserver.terminationGracePeriodSeconds | int | `30` | terminationGracePeriodSeconds for the clustermesh-apiserver deployment |
| clustermesh.apiserver.tls.admin | object | `{"cert":"","key":""}` | base64 encoded PEM values for the clustermesh-apiserver admin certificate and private key. Used if 'auto' is not enabled. |
| clustermesh.apiserver.tls.authMode | string | `"legacy"` | Configure the clustermesh authentication mode. Supported values: - legacy: All clusters access remote clustermesh instances with the same username (i.e., remote). The "remote" certificate must be generated with CN=remote if provided manually. - migration: Intermediate mode required to upgrade from legacy to cluster (and vice versa) with no disruption. Specifically, it enables the creation of the per-cluster usernames, while still using the common one for authentication. The "remote" certificate must be generated with CN=remote if provided manually (same as legacy). - cluster: Each cluster accesses remote etcd instances with a username depending on the local cluster name (i.e., remote-<cluster-name>). The "remote" certificate must be generated with CN=remote-<cluster-name> if provided manually. Cluster mode is meaningful only when the same CA is shared across all clusters part of the mesh. |
| clustermesh.apiserver.tls.admin.cert | string | `""` | Deprecated, as secrets will always need to be created externally if `auto` is disabled. |
| clustermesh.apiserver.tls.admin.key | string | `""` | Deprecated, as secrets will always need to be created externally if `auto` is disabled. |
| clustermesh.apiserver.tls.authMode | string | `"migration"` | Configure the clustermesh authentication mode. Supported values: - legacy: All clusters access remote clustermesh instances with the same username (i.e., remote). The "remote" certificate must be generated with CN=remote if provided manually. - migration: Intermediate mode required to upgrade from legacy to cluster (and vice versa) with no disruption. Specifically, it enables the creation of the per-cluster usernames, while still using the common one for authentication. The "remote" certificate must be generated with CN=remote if provided manually (same as legacy). - cluster: Each cluster accesses remote etcd instances with a username depending on the local cluster name (i.e., remote-<cluster-name>). The "remote" certificate must be generated with CN=remote-<cluster-name> if provided manually. Cluster mode is meaningful only when the same CA is shared across all clusters part of the mesh. |
| clustermesh.apiserver.tls.auto | object | `{"certManagerIssuerRef":{},"certValidityDuration":1095,"enabled":true,"method":"helm"}` | Configure automatic TLS certificates generation. A Kubernetes CronJob is used the generate any certificates not provided by the user at installation time. |
| clustermesh.apiserver.tls.auto.certManagerIssuerRef | object | `{}` | certmanager issuer used when clustermesh.apiserver.tls.auto.method=certmanager. |
| clustermesh.apiserver.tls.auto.certValidityDuration | int | `1095` | Generated certificates validity duration in days. |
| clustermesh.apiserver.tls.auto.enabled | bool | `true` | When set to true, automatically generate a CA and certificates to enable mTLS between clustermesh-apiserver and external workload instances. If set to false, the certs are to be provided by setting appropriate values below. |
| clustermesh.apiserver.tls.client | object | `{"cert":"","key":""}` | base64 encoded PEM values for the clustermesh-apiserver client certificate and private key. Used if 'auto' is not enabled. |
| clustermesh.apiserver.tls.enableSecrets | bool | `true` | Allow users to provide their own certificates Users may need to provide their certificates using a mechanism that requires they provide their own secrets. This setting does not apply to any of the auto-generated mechanisms below, it only restricts the creation of secrets via the `tls-provided` templates. |
| clustermesh.apiserver.tls.auto.enabled | bool | `true` | When set to true, automatically generate a CA and certificates to enable mTLS between clustermesh-apiserver and external workload instances. When set to false you need to pre-create the following secrets: - clustermesh-apiserver-server-cert - clustermesh-apiserver-admin-cert - clustermesh-apiserver-remote-cert - clustermesh-apiserver-local-cert Each of the above secrets should contain at least the keys `tls.crt` and `tls.key`, and optionally `ca.crt` if a CA bundle is not configured. |
| clustermesh.apiserver.tls.enableSecrets | deprecated | `true` | Allow users to provide their own certificates Users may need to provide their certificates using a mechanism that requires they provide their own secrets. This setting does not apply to any of the auto-generated mechanisms below, it only restricts the creation of secrets via the `tls-provided` templates. This option is deprecated as secrets are expected to be created externally when 'auto' is not enabled. |
| clustermesh.apiserver.tls.remote | object | `{"cert":"","key":""}` | base64 encoded PEM values for the clustermesh-apiserver remote cluster certificate and private key. Used if 'auto' is not enabled. |
| clustermesh.apiserver.tls.remote.cert | string | `""` | Deprecated, as secrets will always need to be created externally if `auto` is disabled. |
| clustermesh.apiserver.tls.remote.key | string | `""` | Deprecated, as secrets will always need to be created externally if `auto` is disabled. |
| clustermesh.apiserver.tls.server | object | `{"cert":"","extraDnsNames":[],"extraIpAddresses":[],"key":""}` | base64 encoded PEM values for the clustermesh-apiserver server certificate and private key. Used if 'auto' is not enabled. |
| clustermesh.apiserver.tls.server.cert | string | `""` | Deprecated, as secrets will always need to be created externally if `auto` is disabled. |
| clustermesh.apiserver.tls.server.extraDnsNames | list | `[]` | Extra DNS names added to certificate when it's auto generated |
| clustermesh.apiserver.tls.server.extraIpAddresses | list | `[]` | Extra IP addresses added to certificate when it's auto generated |
| clustermesh.apiserver.tls.server.key | string | `""` | Deprecated, as secrets will always need to be created externally if `auto` is disabled. |
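The `legacy` to `cluster` transition described for `tls.authMode` passes through the intermediate `migration` mode. A hedged sketch of that intermediate step, assuming auto-generated certificates:

```yaml
clustermesh:
  apiserver:
    tls:
      auto:
        enabled: true
        method: helm
        certValidityDuration: 1095   # days
      # step 1: legacy -> migration (per-cluster usernames created,
      #         common "remote" user still used for authentication)
      # step 2: migration -> cluster (per-cluster users enforced;
      #         requires the same CA shared across all meshed clusters)
      authMode: migration
```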
| clustermesh.apiserver.tolerations | list | `[]` | Node tolerations for pod assignment on nodes with taints ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
| clustermesh.apiserver.topologySpreadConstraints | list | `[]` | Pod topology spread constraints for clustermesh-apiserver |
| clustermesh.apiserver.updateStrategy | object | `{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0},"type":"RollingUpdate"}` | clustermesh-apiserver update strategy |
| clustermesh.cacheTTL | string | `"0s"` | The time to live for the cache of a remote cluster after connectivity is lost. If the connection is not re-established within this duration, the cached data is revoked to prevent stale state. If not specified or set to 0s, the cache is never revoked (default). |
| clustermesh.config | object | `{"clusters":[],"domain":"mesh.cilium.io","enabled":false}` | Clustermesh explicit configuration. |
| clustermesh.config.clusters | list | `[]` | List of clusters to be peered in the mesh. |
| clustermesh.config.clusters | list | `[]` | Clusters to be peered in the mesh. @schema type: [object, array] @schema |
| clustermesh.config.domain | string | `"mesh.cilium.io"` | Default dns domain for the Clustermesh API servers This is used in the case cluster addresses are not provided and IPs are used. |
| clustermesh.config.enabled | bool | `false` | Enable the Clustermesh explicit configuration. |
| clustermesh.config.enabled | bool | `false` | Enable the Clustermesh explicit configuration. If set to false, you need to provide the following resources yourself: - (Secret) cilium-clustermesh (used by cilium-agent/cilium-operator to connect to the local etcd instance if KVStoreMesh is enabled or the remote clusters if KVStoreMesh is disabled) - (Secret) cilium-kvstoremesh (used by KVStoreMesh to connect to the remote clusters) - (ConfigMap) clustermesh-remote-users (used to create one etcd user per remote cluster if clustermesh-apiserver is used and `clustermesh.apiserver.tls.authMode` is not set to `legacy`) |
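A hedged example of the explicit clustermesh configuration built from the `clustermesh.config.*` keys above. The cluster name, address, and IP are placeholders:

```yaml
clustermesh:
  config:
    enabled: true
    domain: mesh.cilium.io              # used to synthesize names when only IPs are given
    clusters:
    - name: cluster2                    # placeholder remote cluster name
      address: cluster2.mesh.cilium.io  # optional; omit to fall back to ips + domain
      port: 2379
      ips: ["192.0.2.10"]               # placeholder address of the remote apiserver
```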
| clustermesh.enableEndpointSliceSynchronization | bool | `false` | Enable the synchronization of Kubernetes EndpointSlices corresponding to the remote endpoints of appropriately-annotated global services through ClusterMesh |
| clustermesh.enableMCSAPISupport | bool | `false` | Enable Multi-Cluster Services API support |
| clustermesh.enableMCSAPISupport | bool | `false` | Enable Multi-Cluster Services API support (deprecated; use clustermesh.mcsapi.enabled) |
| clustermesh.maxConnectedClusters | int | `255` | The maximum number of clusters to support in a ClusterMesh. This value cannot be changed on running clusters, and all clusters in a ClusterMesh must be configured with the same value. Values > 255 will decrease the maximum allocatable cluster-local identities. Supported values are 255 and 511. |
| clustermesh.policyDefaultLocalCluster | bool | `false` | Control whether policy rules assume by default the local cluster if not explicitly selected |
| clustermesh.useAPIServer | bool | `false` | Deploy clustermesh-apiserver for clustermesh |
| clustermesh.mcsapi.corednsAutoConfigure.affinity | object | `{}` | Affinity for coredns-mcsapi-autoconfig |
| clustermesh.mcsapi.corednsAutoConfigure.annotations | object | `{}` | Annotations to be added to the coredns-mcsapi-autoconfig Job |
| clustermesh.mcsapi.corednsAutoConfigure.coredns.clusterDomain | string | `"cluster.local"` | The cluster domain for the cluster CoreDNS service |
| clustermesh.mcsapi.corednsAutoConfigure.coredns.clustersetDomain | string | `"clusterset.local"` | The clusterset domain for the cluster CoreDNS service |
| clustermesh.mcsapi.corednsAutoConfigure.coredns.configMapName | string | `"coredns"` | The ConfigMap name for the cluster CoreDNS service |
| clustermesh.mcsapi.corednsAutoConfigure.coredns.deploymentName | string | `"coredns"` | The Deployment for the cluster CoreDNS service |
| clustermesh.mcsapi.corednsAutoConfigure.coredns.namespace | string | `"kube-system"` | The namespace for the cluster CoreDNS service |
| clustermesh.mcsapi.corednsAutoConfigure.coredns.serviceAccountName | string | `"coredns"` | The Service Account name for the cluster CoreDNS service |
| clustermesh.mcsapi.corednsAutoConfigure.enabled | bool | `false` | Enable auto-configuration of CoreDNS for Multi-Cluster Services API. CoreDNS MUST be at least in version v1.12.2 to run this. |
| clustermesh.mcsapi.corednsAutoConfigure.extraArgs | list | `[]` | Additional arguments to `clustermesh-apiserver coredns-mcsapi-auto-configure`. |
| clustermesh.mcsapi.corednsAutoConfigure.extraVolumeMounts | list | `[]` | Additional coredns-mcsapi-autoconfig volumeMounts. |
| clustermesh.mcsapi.corednsAutoConfigure.extraVolumes | list | `[]` | Additional coredns-mcsapi-autoconfig volumes. |
| clustermesh.mcsapi.corednsAutoConfigure.nodeSelector | object | `{}` | Node selector for coredns-mcsapi-autoconfig ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
|
||||
| clustermesh.mcsapi.corednsAutoConfigure.podLabels | object | `{}` | Labels to be added to coredns-mcsapi-autoconfig pods |
|
||||
| clustermesh.mcsapi.corednsAutoConfigure.priorityClassName | string | `""` | Priority class for coredns-mcsapi-autoconfig ref: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass |
|
||||
| clustermesh.mcsapi.corednsAutoConfigure.resources | object | `{}` | Resource limits for coredns-mcsapi-autoconfig ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers |
|
||||
| clustermesh.mcsapi.corednsAutoConfigure.tolerations | list | `[]` | Node tolerations for pod assignment on nodes with taints ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
|
||||
| clustermesh.mcsapi.corednsAutoConfigure.ttlSecondsAfterFinished | int | `1800` | Seconds after which the completed job pod will be deleted |
|
||||
| clustermesh.mcsapi.enabled | bool | `false` | Enable Multi-Cluster Services API support |
|
||||
| clustermesh.mcsapi.installCRDs | bool | `true` | Enabled MCS-API CRDs auto-installation |
|
||||
| clustermesh.policyDefaultLocalCluster | bool | `true` | Control whether policy rules assume by default the local cluster if not explicitly selected |
|
||||
| clustermesh.useAPIServer | bool | `false` | Deploy clustermesh-apiserver for clustermesh. This option is typically used with ``clustermesh.config.enabled=true``. Refer to the ``clustermesh.config.enabled=true``documentation for more information. |
|
||||
| cni.binPath | string | `"/opt/cni/bin"` | Configure the path to the CNI binary directory on the host. |
|
||||
| cni.chainingMode | string | `nil` | Configure chaining on top of other CNI plugins. Possible values: - none - aws-cni - flannel - generic-veth - portmap |
|
||||
| cni.chainingTarget | string | `nil` | A CNI network name in to which the Cilium plugin should be added as a chained plugin. This will cause the agent to watch for a CNI network with this network name. When it is found, this will be used as the basis for Cilium's CNI configuration file. If this is set, it assumes a chaining mode of generic-veth. As a special case, a chaining mode of aws-cni implies a chainingTarget of aws-cni. |
|
||||
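Taken together, the Multi-Cluster Services API options above map onto a values file such as the following — a minimal sketch that only repeats the chart defaults listed above, with `enabled` flags flipped on (it assumes your clustermesh configuration is otherwise in place):

```yaml
clustermesh:
  useAPIServer: true
  mcsapi:
    enabled: true
    installCRDs: true
    corednsAutoConfigure:
      # Requires CoreDNS >= v1.12.2 (see the option description above).
      enabled: true
      coredns:
        namespace: kube-system
        deploymentName: coredns
        configMapName: coredns
        serviceAccountName: coredns
        clusterDomain: cluster.local
        clustersetDomain: clusterset.local
```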
| cni.install | bool | `true` | Install the CNI configuration and binary files into the filesystem. |
| cni.iptablesRemoveAWSRules | bool | `true` | Enable the removal of iptables rules created by the AWS CNI VPC plugin. |
| cni.logFile | string | `"/var/run/cilium/cilium-cni.log"` | Configure the log file for CNI logging with retention policy of 7 days. Disable CNI file logging by setting this field to empty explicitly. |
| cni.resources | object | `{"limits":{"cpu":1,"memory":"1Gi"},"requests":{"cpu":"100m","memory":"10Mi"}}` | Specifies the resources for the cni initContainer |
| cni.uninstall | bool | `false` | Remove the CNI configuration and binary files on agent shutdown. Enable this if you're removing Cilium from the cluster. Disable this to prevent the CNI configuration file from being removed during agent upgrade, which can cause nodes to go unmanageable. |
| commonLabels | object | `{}` | commonLabels allows users to add common labels for all Cilium resources. |
| connectivityProbeFrequencyRatio | float64 | `0.5` | Ratio of the connectivity probe frequency vs resource usage, a float in [0, 1]. 0 will give more frequent probing, 1 will give less frequent probing. Probing frequency is dynamically adjusted based on the cluster size. |
| conntrackGCInterval | string | `"0s"` | Configure how frequently garbage collection should occur for the datapath connection tracking table. |
| conntrackGCMaxInterval | string | `""` | Configure the maximum frequency for the garbage collection of the connection tracking table. Only affects the automatic computation for the frequency and has no effect when 'conntrackGCInterval' is set. This can be set to more frequently clean up unused identities created from ToFQDN policies. |
| crdWaitTimeout | string | `"5m"` | Configure timeout in which Cilium will exit if CRDs are not available |
| customCalls | object | `{"enabled":false}` | Tail call hooks for custom eBPF programs. |
| customCalls.enabled | bool | `false` | Enable tail call hooks for custom eBPF programs. |
| daemon.allowedConfigOverrides | string | `nil` | allowedConfigOverrides is a list of config-map keys that can be overridden. That is to say, if this value is set, config sources (excepting the first one) can only override keys in this list. This takes precedence over blockedConfigOverrides. By default, all keys may be overridden. To disable overrides, set this to "none" or change the configSources variable. |
| daemon.blockedConfigOverrides | string | `nil` | blockedConfigOverrides is a list of config-map keys that may not be overridden. In other words, if any of these keys appear in a configuration source excepting the first one, they will be ignored. This is ignored if allowedConfigOverrides is set. By default, all keys may be overridden. |
| daemon.configSources | string | `nil` | Configure a custom list of possible configuration override sources The default is "config-map:cilium-config,cilium-node-config". For supported values, see the help text for the build-config subcommand. Note that this value should be a comma-separated string. |
| dashboards | object | `{"annotations":{},"enabled":false,"label":"grafana_dashboard","labelValue":"1","namespace":null}` | Grafana dashboards for cilium-agent grafana can import dashboards based on the label and value ref: https://github.com/grafana/helm-charts/tree/main/charts/grafana#sidecar-for-dashboards |
| debug.enabled | bool | `false` | Enable debug logging |
| debug.metricsSamplingInterval | string | `"5m"` | Set the agent-internal metrics sampling frequency. This sets the frequency of the internal sampling of the agent metrics. These are available via the "cilium-dbg shell -- metrics -s" command and are part of the metrics HTML page included in the sysdump. @schema type: [null, string] @schema |
| debug.verbose | string | `nil` | Configure verbosity levels for debug logging. This option enables debug messages for operations in sub-systems such as kvstore, envoy, datapath, policy, or tagged; flow enables debug messages emitted per request, message, and connection. Multiple values can be set via a space-separated string (e.g. "datapath envoy"). Applicable values: - flow - kvstore - envoy - datapath - policy - tagged |
| defaultLBServiceIPAM | string | `"lbipam"` | defaultLBServiceIPAM indicates the default LoadBalancer Service IPAM when no LoadBalancer class is set. Applicable values: lbipam, nodeipam, none |
| directRoutingSkipUnreachable | bool | `false` | Enable skipping of PodCIDR routes between worker nodes if the worker nodes are in a different L2 network segment. |
| disableEndpointCRD | bool | `false` | Disable the usage of CiliumEndpoint CRD. |
| dnsPolicy | string | `""` | DNS policy for Cilium agent pods. Ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy |
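The debug options above combine into a small values fragment — a sketch for a troubleshooting session, not a recommended production setting (the sub-system selection here is illustrative):

```yaml
debug:
  enabled: true
  # Space-separated sub-systems, per the applicable values listed above.
  verbose: "datapath envoy"
  # Keep the default internal metrics sampling frequency.
  metricsSamplingInterval: "5m"
```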
| egressGateway.reconciliationTriggerInterval | string | `"1s"` | Time between triggers of egress gateway state reconciliations |
| enableCriticalPriorityClass | bool | `true` | Explicitly enable or disable priority class. .Capabilities.KubeVersion is unsettable in `helm template` calls; it depends on the k8s library version that Helm was compiled against. This option allows explicitly disabling the priority class, which is useful for rendering charts for GKE clusters in advance. |
| enableIPv4BIGTCP | bool | `false` | Enables IPv4 BIG TCP support which increases maximum IPv4 GSO/GRO limits for nodes and pods |
| enableIPv4Masquerade | bool | `true` unless ipam eni mode is active | Enables masquerading of IPv4 traffic leaving the node from endpoints. |
| enableIPv6BIGTCP | bool | `false` | Enables IPv6 BIG TCP support which increases maximum IPv6 GSO/GRO limits for nodes and pods |
| enableIPv6Masquerade | bool | `true` | Enables masquerading of IPv6 traffic leaving the node from endpoints. |
| enableInternalTrafficPolicy | bool | `true` | Enable Internal Traffic Policy |
| enableLBIPAM | bool | `true` | Enable LoadBalancer IP Address Management |
| enableMasqueradeRouteSource | bool | `false` | Enables masquerading to the source of the route for traffic leaving the node from endpoints. |
| enableNoServiceEndpointsRoutable | bool | `true` | Enable routing to a service that has zero endpoints |
| enableNonDefaultDenyPolicies | bool | `true` | Enable Non-Default-Deny policies |
| enableTunnelBIGTCP | bool | `false` | Enable BIG TCP in tunneling mode and increase maximum GRO/GSO limits for VXLAN/GENEVE tunnels |
| enableXTSocketFallback | bool | `true` | Enables the fallback compatibility solution for when the xt_socket kernel module is missing and it is needed for the datapath L7 redirection to work properly. See documentation for details on when this can be disabled: https://docs.cilium.io/en/stable/operations/system_requirements/#linux-kernel. |
| encryption.enabled | bool | `false` | Enable transparent network encryption. |
| encryption.ipsec.encryptedOverlay | bool | `false` | Enable IPsec encrypted overlay |
| encryption.ipsec.mountPath | string | `"/etc/ipsec"` | Path to mount the secret inside the Cilium pod. |
| encryption.ipsec.secretName | string | `"cilium-ipsec-keys"` | Name of the Kubernetes secret containing the encryption keys. |
| encryption.nodeEncryption | bool | `false` | Enable encryption for pure node to node traffic. This option is only effective when encryption.type is set to "wireguard". |
| encryption.strictMode | object | `{"allowRemoteNodeIdentities":false,"cidr":"","egress":{"allowRemoteNodeIdentities":false,"cidr":"","enabled":false},"enabled":false,"ingress":{"enabled":false}}` | Configure the Encryption Pod2Pod strict mode. |
| encryption.strictMode.allowRemoteNodeIdentities | bool | `false` | Allow dynamic lookup of remote node identities. (deprecated: please use encryption.strictMode.egress.allowRemoteNodeIdentities) This is required when tunneling is used or direct routing is used and the node CIDR and pod CIDR overlap. |
| encryption.strictMode.cidr | string | `""` | CIDR for the Encryption Pod2Pod strict mode. (deprecated: please use encryption.strictMode.egress.cidr) |
| encryption.strictMode.egress.allowRemoteNodeIdentities | bool | `false` | Allow dynamic lookup of remote node identities. This is required when tunneling is used or direct routing is used and the node CIDR and pod CIDR overlap. |
| encryption.strictMode.egress.cidr | string | `""` | CIDR for the Encryption Pod2Pod strict egress mode. |
| encryption.strictMode.egress.enabled | bool | `false` | Enable strict egress encryption. |
| encryption.strictMode.enabled | bool | `false` | Enable Encryption Pod2Pod strict mode. (deprecated: please use encryption.strictMode.egress.enabled) |
| encryption.strictMode.ingress.enabled | bool | `false` | Enable strict ingress encryption. When enabled, all unencrypted overlay ingress traffic will be dropped. This option is only applicable when WireGuard and tunneling are enabled. |
| encryption.type | string | `"ipsec"` | Encryption method. Can be one of ipsec, wireguard or ztunnel. |
| encryption.wireguard.persistentKeepalive | string | `"0s"` | Controls WireGuard PersistentKeepalive option. Set 0s to disable. |
| endpointHealthChecking.enabled | bool | `true` | Enable connectivity health checking between virtual endpoints. |
| endpointLockdownOnMapOverflow | bool | `false` | Enable endpoint lockdown on policy map overflow. |
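A WireGuard strict-mode configuration built from the options above might look like the following sketch. It uses the newer nested `egress`/`ingress` keys rather than the deprecated flat `strictMode.*` equivalents; the CIDR is an illustrative pod CIDR, not a chart default:

```yaml
encryption:
  enabled: true
  type: wireguard
  strictMode:
    egress:
      enabled: true
      cidr: "10.0.0.0/8"           # illustrative pod CIDR; set to your own
      allowRemoteNodeIdentities: true  # needed if node and pod CIDRs overlap
    ingress:
      enabled: true                # WireGuard + tunneling only
```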
| eni.gcTags | object | `{"io.cilium/cilium-managed":"true","io.cilium/cluster-name":"<auto-detected>"}` | Additional tags attached to ENIs created by Cilium. Dangling ENIs with this tag will be garbage collected |
| eni.iamRole | string | `""` | If using IAM role for Service Accounts will not try to inject identity values from cilium-aws kubernetes secret. Adds annotation to service account if managed by Helm. See https://github.com/aws/amazon-eks-pod-identity-webhook |
| eni.instanceTagsFilter | list | `[]` | Filter via AWS EC2 Instance tags (k=v) which will dictate which AWS EC2 Instances are going to be used to create new ENIs |
| eni.nodeSpec | object | `{"deleteOnTermination":null,"disablePrefixDelegation":false,"excludeInterfaceTags":[],"firstInterfaceIndex":null,"securityGroupTags":[],"securityGroups":[],"subnetIDs":[],"subnetTags":[],"usePrimaryAddress":false}` | NodeSpec configuration for the ENI |
| eni.nodeSpec.deleteOnTermination | string | `nil` | Delete ENI on termination @schema type: [null, boolean] @schema |
| eni.nodeSpec.disablePrefixDelegation | bool | `false` | Disable prefix delegation for IP allocation |
| eni.nodeSpec.excludeInterfaceTags | list | `[]` | Exclude interface tags to use for IP allocation |
| eni.nodeSpec.firstInterfaceIndex | string | `nil` | First interface index to use for IP allocation @schema type: [null, integer] @schema |
| eni.nodeSpec.securityGroupTags | list | `[]` | Security group tags to use for IP allocation |
| eni.nodeSpec.securityGroups | list | `[]` | Security groups to use for IP allocation |
| eni.nodeSpec.subnetIDs | list | `[]` | Subnet IDs to use for IP allocation |
| eni.nodeSpec.subnetTags | list | `[]` | Subnet tags to use for IP allocation |
| eni.nodeSpec.usePrimaryAddress | bool | `false` | Use primary address for IP allocation |
| eni.subnetIDsFilter | list | `[]` | Filter via subnet IDs which will dictate which subnets are going to be used to create new ENIs Important note: This requires that each instance has an ENI with a matching subnet attached when Cilium is deployed. If you only want to control subnets for ENIs attached by Cilium, use the CNI configuration file settings (cni.customConf) instead. |
| eni.subnetTagsFilter | list | `[]` | Filter via tags (k=v) which will dictate which subnets are going to be used to create new ENIs Important note: This requires that each instance has an ENI with a matching subnet attached when Cilium is deployed. If you only want to control subnets for ENIs attached by Cilium, use the CNI configuration file settings (cni.customConf) instead. |
| envoy.affinity | object | `{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"cilium.io/no-schedule","operator":"NotIn","values":["true"]}]}]}},"podAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"k8s-app":"cilium"}},"topologyKey":"kubernetes.io/hostname"}]},"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"k8s-app":"cilium-envoy"}},"topologyKey":"kubernetes.io/hostname"}]}}` | Affinity for cilium-envoy. |
| envoy.annotations | object | `{}` | Annotations to be added to all top-level cilium-envoy objects (resources under templates/cilium-envoy) |
| envoy.baseID | int | `0` | Set Envoy '--base-id' to use when allocating shared memory regions. Only needs to be changed if multiple Envoy instances will run on the same node and may have conflicts. Supported values: 0 - 4294967295. Defaults to '0' |
| envoy.bootstrapConfigMap | string | `nil` | ADVANCED OPTION: Bring your own custom Envoy bootstrap ConfigMap. Provide the name of a ConfigMap with a `bootstrap-config.json` key. When specified, Envoy will use this ConfigMap instead of the default provided by the chart. WARNING: Use of this setting has the potential to prevent cilium-envoy from starting up, and can cause unexpected behavior (e.g. due to syntax error or semantically incorrect configuration). Before submitting an issue, please ensure you have disabled this feature, as support cannot be provided for custom Envoy bootstrap configs. @schema type: [null, string] @schema |
| envoy.clusterMaxConnections | int | `1024` | Maximum number of connections on Envoy clusters |
| envoy.clusterMaxRequests | int | `1024` | Maximum number of requests on Envoy clusters |
| envoy.connectTimeoutSeconds | int | `2` | Time in seconds after which a TCP connection attempt times out |
| envoy.debug.admin.enabled | bool | `false` | Enable admin interface for cilium-envoy. This is useful for debugging and should not be enabled in production. |
| envoy.debug.admin.port | int | `9901` | Port number (bound to loopback interface). kubectl port-forward can be used to access the admin interface. |
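As a sketch of the Envoy admin-interface options just described — debugging only, per the option descriptions above:

```yaml
envoy:
  debug:
    admin:
      enabled: true   # not for production
      port: 9901      # bound to loopback; reach it via kubectl port-forward
```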
| envoy.httpRetryCount | int | `3` | Maximum number of retries for each HTTP request |
| envoy.httpUpstreamLingerTimeout | string | `nil` | Time in seconds to block Envoy worker thread while an upstream HTTP connection is closing. If set to 0, the connection is closed immediately (with TCP RST). If set to -1, the connection is closed asynchronously in the background. |
| envoy.idleTimeoutDurationSeconds | int | `60` | Set Envoy upstream HTTP idle connection timeout seconds. Does not apply to connections with pending requests. Default 60s |
| envoy.image | object | `{"digest":"sha256:8188114a2768b5f49d6ce58e168b20d765e0fbc64eee0d83241aa2b150ccd788","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium-envoy","tag":"v1.35.9-1770979049-232ed4a26881e4ab4f766f251f258ed424fff663","useDigest":true}` | Envoy container image. |
| envoy.initContainers | list | `[]` | Init containers added to the cilium Envoy DaemonSet. |
| envoy.initialFetchTimeoutSeconds | int | `30` | Time in seconds after which the initial fetch on an xDS stream is considered timed out |
| envoy.livenessProbe.enabled | bool | `true` | Enable liveness probe for cilium-envoy |
| envoy.livenessProbe.failureThreshold | int | `10` | failure threshold of liveness probe |
| envoy.log.path | string | `""` | Path to a separate Envoy log file, if any. Defaults to /dev/stdout. |
| envoy.maxConcurrentRetries | int | `128` | Maximum number of concurrent retries on Envoy clusters |
| envoy.maxConnectionDurationSeconds | int | `0` | Set Envoy HTTP option max_connection_duration seconds. Default 0 (disable) |
| envoy.maxGlobalDownstreamConnections | int | `50000` | Maximum number of global downstream connections |
| envoy.maxRequestsPerConnection | int | `0` | ProxyMaxRequestsPerConnection specifies the max_requests_per_connection setting for Envoy |
| envoy.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node selector for cilium-envoy. |
| envoy.podAnnotations | object | `{}` | Annotations to be added to envoy pods |
| envoy.terminationGracePeriodSeconds | int | `1` | Configure termination grace period for cilium-envoy DaemonSet. |
| envoy.tolerations | list | `[{"operator":"Exists"}]` | Node tolerations for envoy scheduling to nodes with taints ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
| envoy.updateStrategy | object | `{"rollingUpdate":{"maxUnavailable":2},"type":"RollingUpdate"}` | cilium-envoy update strategy ref: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#updating-a-daemonset |
| envoy.useOriginalSourceAddress | bool | `true` | For cases when CiliumEnvoyConfig is not used directly (Ingress, Gateway), configures Cilium BPF Metadata listener filter to use the original source address when extracting the metadata for a request. |
| envoy.xffNumTrustedHopsL7PolicyEgress | int | `0` | Number of trusted hops regarding the x-forwarded-for and related HTTP headers for the egress L7 policy enforcement Envoy listeners. |
| envoy.xffNumTrustedHopsL7PolicyIngress | int | `0` | Number of trusted hops regarding the x-forwarded-for and related HTTP headers for the ingress L7 policy enforcement Envoy listeners. |
| envoyConfig.enabled | bool | `false` | Enable CiliumEnvoyConfig CRD CiliumEnvoyConfig CRD can also be implicitly enabled by other options. |
| hubble.dropEventEmitter.interval | string | `"2m"` | - Minimum time between emitting the same events. |
| hubble.dropEventEmitter.reasons | list | `["auth_required","policy_denied"]` | - Drop reasons to emit events for. ref: https://docs.cilium.io/en/stable/_api/v1/flow/README/#dropreason |
| hubble.enabled | bool | `true` | Enable Hubble (true by default). |
| hubble.export | object | `{"dynamic":{"config":{"configMapName":"cilium-flowlog-config","content":[{"aggregationInterval":"0s","excludeFilters":[],"fieldAggregate":[],"fieldMask":[],"fileCompress":false,"fileMaxBackups":5,"fileMaxSizeMb":10,"filePath":"/var/run/cilium/hubble/events.log","includeFilters":[],"name":"all"}],"createConfigMap":true},"enabled":false},"static":{"aggregationInterval":"0s","allowList":[],"denyList":[],"enabled":false,"fieldAggregate":[],"fieldMask":[],"fileCompress":false,"fileMaxBackups":5,"fileMaxSizeMb":10,"filePath":"/var/run/cilium/hubble/events.log"}}` | Hubble flows export. |
| hubble.export.dynamic | object | `{"config":{"configMapName":"cilium-flowlog-config","content":[{"aggregationInterval":"0s","excludeFilters":[],"fieldAggregate":[],"fieldMask":[],"fileCompress":false,"fileMaxBackups":5,"fileMaxSizeMb":10,"filePath":"/var/run/cilium/hubble/events.log","includeFilters":[],"name":"all"}],"createConfigMap":true},"enabled":false}` | - Dynamic exporters configuration. Dynamic exporters may be reconfigured without a need of agent restarts. |
| hubble.export.dynamic.config.configMapName | string | `"cilium-flowlog-config"` | -- Name of configmap with configuration that may be altered to reconfigure exporters within running agents. |
| hubble.export.dynamic.config.content | list | `[{"aggregationInterval":"0s","excludeFilters":[],"fieldAggregate":[],"fieldMask":[],"fileCompress":false,"fileMaxBackups":5,"fileMaxSizeMb":10,"filePath":"/var/run/cilium/hubble/events.log","includeFilters":[],"name":"all"}]` | -- Exporters configuration in YAML format. |
| hubble.export.dynamic.config.createConfigMap | bool | `true` | -- True if helm installer should create config map. Switch to false if you want to self-maintain the file content. |
| hubble.export.static | object | `{"aggregationInterval":"0s","allowList":[],"denyList":[],"enabled":false,"fieldAggregate":[],"fieldMask":[],"fileCompress":false,"fileMaxBackups":5,"fileMaxSizeMb":10,"filePath":"/var/run/cilium/hubble/events.log"}` | - Static exporter configuration. Static exporter is bound to agent lifecycle. |
| hubble.export.static.aggregationInterval | string | `"0s"` | - Defines the interval at which to aggregate before exporting Hubble flows. Aggregation feature is only enabled when fieldAggregate is specified and aggregationInterval > 0s. |
| hubble.export.static.fileCompress | bool | `false` | - Enable compression of rotated files. |
| hubble.export.static.fileMaxBackups | int | `5` | - Defines max number of backup/rotated files. |
| hubble.export.static.fileMaxSizeMb | int | `10` | - Defines max file size of output file before it gets rotated. |
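The static exporter options above compose into a values fragment like this sketch, which enables file export with rotation while leaving flow aggregation off (per the description above, aggregation only activates when `fieldAggregate` is non-empty and the interval is greater than 0s):

```yaml
hubble:
  export:
    static:
      enabled: true
      filePath: /var/run/cilium/hubble/events.log
      fileMaxSizeMb: 10
      fileMaxBackups: 5
      fileCompress: true      # compress rotated files
      fieldAggregate: []      # non-empty list + interval > 0s enables aggregation
      aggregationInterval: "0s"
```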
| hubble.relay.extraVolumes | list | `[]` | Additional hubble-relay volumes. |
| hubble.relay.gops.enabled | bool | `true` | Enable gops for hubble-relay |
| hubble.relay.gops.port | int | `9893` | Configure gops listen port for hubble-relay |
| hubble.relay.image | object | `{"digest":"sha256:d8c4e13bc36a56179292bb52bc6255379cb94cb873700d316ea3139b1bdb8165","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/hubble-relay","tag":"v1.19.1","useDigest":true}` | Hubble-relay container image. |
| hubble.relay.listenHost | string | `""` | Host to listen to. Specify an empty string to bind to all the interfaces. |
| hubble.relay.listenPort | string | `"4245"` | Port to listen to. |
| hubble.relay.logOptions | object | `{"format":null,"level":null}` | Logging configuration for hubble-relay. |
| hubble.relay.logOptions.format | string | text-ts | Log format for hubble-relay. Valid values are: text, text-ts, json, json-ts. |
| hubble.relay.logOptions.level | string | info | Log level for hubble-relay. Valid values are: debug, info, warn, error. |
| hubble.relay.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
| hubble.relay.podAnnotations | object | `{}` | Annotations to be added to hubble-relay pods |
| hubble.relay.podDisruptionBudget.enabled | bool | `false` | enable PodDisruptionBudget ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/ |
| hubble.relay.podLabels | object | `{}` | Labels to be added to hubble-relay pods |
| hubble.relay.podSecurityContext | object | `{"fsGroup":65532,"seccompProfile":{"type":"RuntimeDefault"}}` | hubble-relay pod security context |
| hubble.relay.pprof.address | string | `"localhost"` | Configure pprof listen address for hubble-relay |
| hubble.relay.pprof.blockProfileRate | int | `0` | Enable goroutine blocking profiling for hubble-relay and set the rate of sampled events in nanoseconds (set to 1 to sample all events [warning: performance overhead]) |
| hubble.relay.pprof.enabled | bool | `false` | Enable pprof for hubble-relay |
| hubble.relay.pprof.mutexProfileFraction | int | `0` | Enable mutex contention profiling for hubble-relay and set the fraction of sampled events (set to 1 to sample all events) |
| hubble.relay.pprof.port | int | `6062` | Configure pprof listen port for hubble-relay |
| hubble.relay.priorityClassName | string | `""` | The priority class to use for hubble-relay |
| hubble.relay.prometheus | object | `{"enabled":false,"port":9966,"serviceMonitor":{"annotations":{},"enabled":false,"interval":"10s","labels":{},"metricRelabelings":null,"relabelings":null,"scrapeTimeout":null}}` | Enable prometheus metrics for hubble-relay on the configured port at /metrics |
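A sketch of enabling hubble-relay metrics scraping from the options above (the ServiceMonitor part assumes the Prometheus Operator CRDs are installed in the cluster, which this chart does not do for you):

```yaml
hubble:
  relay:
    prometheus:
      enabled: true
      port: 9966            # metrics served at /metrics
      serviceMonitor:
        enabled: true       # requires Prometheus Operator CRDs
        interval: "10s"
```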
| hubble.ui.tls.client.cert | string | `""` | base64 encoded PEM values for the Hubble UI client certificate (deprecated). Use existingSecret instead. |
| hubble.ui.tls.client.existingSecret | string | `""` | Name of the Secret containing the client certificate and key for Hubble UI If specified, cert and key are ignored. |
| hubble.ui.tls.client.key | string | `""` | base64 encoded PEM values for the Hubble UI client key (deprecated). Use existingSecret instead. |
| hubble.ui.tmpVolume | object | `{}` | Configure temporary volume for hubble-ui |
| hubble.ui.tolerations | list | `[]` | Node tolerations for pod assignment on nodes with taints ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
| hubble.ui.topologySpreadConstraints | list | `[]` | Pod topology spread constraints for hubble-ui |
| hubble.ui.updateStrategy | object | `{"rollingUpdate":{"maxUnavailable":1},"type":"RollingUpdate"}` | hubble-ui update strategy. |
| identityAllocationMode | string | `"crd"` | Method to use for identity allocation (`crd`, `kvstore` or `doublewrite-readkvstore` / `doublewrite-readcrd` for migrating between identity backends). |
| identityChangeGracePeriod | string | `"5s"` | Time to wait before using new identity on endpoint identity change. |
| identityManagementMode | string | `"agent"` | Control whether CiliumIdentities are created by the agent ("agent"), the operator ("operator") or both ("both"). "Both" should be used only to migrate between "agent" and "operator". Operator-managed identities is a beta feature. |
| image | object | `{"digest":"sha256:41f1f74a0000de8656f1de4088ea00c8f2d49d6edea579034c73c5fd5fe01792","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.19.1","useDigest":true}` | Agent container image. |
| imagePullSecrets | list | `[]` | Configure image pull secrets for pulling container images |
| ingressController.default | bool | `false` | Set cilium ingress controller to be the default ingress controller This will let cilium ingress controller route entries without ingress class set |
|
||||
| ingressController.defaultSecretName | string | `nil` | Default secret name for ingresses without .spec.tls[].secretName set. |
|
||||
@@ -682,6 +744,11 @@ contributors across the globe, there is almost always someone available to help.
|
||||
| ipam.installUplinkRoutesForDelegatedIPAM | bool | `false` | Install ingress/egress routes through uplink on host for Pods when working with delegated IPAM plugin. |
|
||||
| ipam.mode | string | `"cluster-pool"` | Configure IP Address Management mode. ref: https://docs.cilium.io/en/stable/network/concepts/ipam/ |
|
||||
| ipam.multiPoolPreAllocation | string | `""` | Pre-allocation settings for IPAM in Multi-Pool mode |
|
||||
| ipam.nodeSpec | object | `{"ipamMaxAllocate":null,"ipamMinAllocate":null,"ipamPreAllocate":null,"ipamStaticIPTags":[]}` | NodeSpec configuration for the IPAM |
|
||||
| ipam.nodeSpec.ipamMaxAllocate | string | `nil` | IPAM max allocate @schema type: [null, integer] @schema |
|
||||
| ipam.nodeSpec.ipamMinAllocate | string | `nil` | IPAM min allocate @schema type: [null, integer] @schema |
|
||||
| ipam.nodeSpec.ipamPreAllocate | string | `nil` | IPAM pre allocate @schema type: [null, integer] @schema |
|
||||
| ipam.nodeSpec.ipamStaticIPTags | list | `[]` | IPAM static IP tags (currently only works with AWS and Azure) |
|
||||
| ipam.operator.autoCreateCiliumPodIPPools | object | `{}` | IP pools to auto-create in multi-pool IPAM mode. |
|
||||
| ipam.operator.clusterPoolIPv4MaskSize | int | `24` | IPv4 CIDR mask size to delegate to individual nodes for IPAM. |
|
||||
| ipam.operator.clusterPoolIPv4PodCIDRList | list | `["10.0.0.0/8"]` | IPv4 CIDR list range to delegate to individual nodes for IPAM. |
|
||||
@@ -744,19 +811,18 @@ contributors across the globe, there is almost always someone available to help.
| monitor | object | `{"enabled":false}` | cilium-monitor sidecar. |
| monitor.enabled | bool | `false` | Enable the cilium-monitor sidecar. |
| name | string | `"cilium"` | Agent daemonset name. |
| namespaceOverride | string | `""` | namespaceOverride allows to override the destination namespace for Cilium resources. This property allows to use Cilium as part of an Umbrella Chart with different targets. |
| namespaceOverride | string | `""` | namespaceOverride allows to override the destination namespace for Cilium resources. |
| nat.mapStatsEntries | int | `32` | Number of the top-k SNAT map connections to track in Cilium statedb. |
| nat.mapStatsInterval | string | `"30s"` | Interval between how often SNAT map is counted for stats. |
| nat46x64Gateway | object | `{"enabled":false}` | Configure standalone NAT46/NAT64 gateway |
| nat46x64Gateway.enabled | bool | `false` | Enable RFC6052-prefixed translation |
| nodeIPAM.enabled | bool | `false` | Configure Node IPAM ref: https://docs.cilium.io/en/stable/network/node-ipam/ |
| nodePort | object | `{"addresses":null,"autoProtectPortRange":true,"bindProtection":true,"enableHealthCheck":true,"enableHealthCheckLoadBalancerIP":false,"enabled":false}` | Configure N-S k8s service loadbalancing |
| nodePort | object | `{"addresses":null,"autoProtectPortRange":true,"bindProtection":true,"enableHealthCheck":true,"enableHealthCheckLoadBalancerIP":false}` | Configure N-S k8s service loadbalancing |
| nodePort.addresses | string | `nil` | List of CIDRs for choosing which IP addresses assigned to native devices are used for NodePort load-balancing. By default this is empty and the first suitable, preferably private, IPv4 and IPv6 address assigned to each device is used. Example: addresses: ["192.168.1.0/24", "2001::/64"] |
| nodePort.autoProtectPortRange | bool | `true` | Append NodePort range to ip_local_reserved_ports if clash with ephemeral ports is detected. |
| nodePort.bindProtection | bool | `true` | Set to true to prevent applications binding to service ports. |
| nodePort.enableHealthCheck | bool | `true` | Enable healthcheck nodePort server for NodePort services |
| nodePort.enableHealthCheckLoadBalancerIP | bool | `false` | Enable access of the healthcheck nodePort on the LoadBalancerIP. Needs EnableHealthCheck to be enabled |
| nodePort.enabled | bool | `false` | Enable the Cilium NodePort service implementation. |
| nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node selector for cilium-agent. |
| nodeSelectorLabels | bool | `false` | Enable/Disable use of node label based identity |
| nodeinit.affinity | object | `{}` | Affinity for cilium-nodeinit |
@@ -766,7 +832,7 @@ contributors across the globe, there is almost always someone available to help.
| nodeinit.extraEnv | list | `[]` | Additional nodeinit environment variables. |
| nodeinit.extraVolumeMounts | list | `[]` | Additional nodeinit volumeMounts. |
| nodeinit.extraVolumes | list | `[]` | Additional nodeinit volumes. |
| nodeinit.image | object | `{"digest":"sha256:5bdca3c2dec2c79f58d45a7a560bf1098c2126350c901379fe850b7f78d3d757","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/startup-script","tag":"1755531540-60ee83e","useDigest":true}` | node-init image. |
| nodeinit.image | object | `{"digest":"sha256:50b9cf9c280096b59b80d2fc8ee6638facef79ac18998a22f0cbc40d5d28c16f","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/startup-script","tag":"1763560095-8f36c34","useDigest":true}` | node-init image. |
| nodeinit.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for nodeinit pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
| nodeinit.podAnnotations | object | `{}` | Annotations to be added to node-init pods. |
| nodeinit.podLabels | object | `{}` | Labels to be added to node-init pods. |
@@ -779,6 +845,7 @@ contributors across the globe, there is almost always someone available to help.
| nodeinit.startup | object | `{"postScript":"","preScript":""}` | startup offers way to customize startup nodeinit script (pre and post position) |
| nodeinit.tolerations | list | `[{"operator":"Exists"}]` | Node tolerations for nodeinit scheduling to nodes with taints ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
| nodeinit.updateStrategy | object | `{"type":"RollingUpdate"}` | node-init update strategy |
| nodeinit.waitForCloudInit | bool | `false` | wait for Cloud init to finish on the host and assume the node has cloud init installed |
| operator.affinity | object | `{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"io.cilium/app":"operator"}},"topologyKey":"kubernetes.io/hostname"}]}}` | Affinity for cilium-operator |
| operator.annotations | object | `{}` | Annotations to be added to all top-level cilium-operator objects (resources under templates/cilium-operator) |
| operator.dashboards | object | `{"annotations":{},"enabled":false,"label":"grafana_dashboard","labelValue":"1","namespace":null}` | Grafana dashboards for cilium-operator grafana can import dashboards based on the label and value ref: https://github.com/grafana/helm-charts/tree/main/charts/grafana#sidecar-for-dashboards |
@@ -793,7 +860,7 @@ contributors across the globe, there is almost always someone available to help.
| operator.hostNetwork | bool | `true` | HostNetwork setting |
| operator.identityGCInterval | string | `"15m0s"` | Interval for identity garbage collection. |
| operator.identityHeartbeatTimeout | string | `"30m0s"` | Timeout for identity heartbeats. |
| operator.image | object | `{"alibabacloudDigest":"sha256:212c4cbe27da3772bcb952b8f8cbaa0b0eef72488b52edf90ad2b32072a3ca4c","awsDigest":"sha256:47dbc1a5bd483fec170dab7fb0bf2cca3585a4893675b0324d41d97bac8be5eb","azureDigest":"sha256:a57aff47aeb32eccfedaa2a49d1af984d996d6d6de79609c232e0c4cf9ce97a1","genericDigest":"sha256:34a827ce9ed021c8adf8f0feca131f53b3c54a3ef529053d871d0347ec4d69af","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/operator","suffix":"","tag":"v1.18.6","useDigest":true}` | cilium-operator image. |
| operator.image | object | `{"alibabacloudDigest":"sha256:837b12f4239e88ea5b4b5708ab982c319a94ee05edaecaafe5fd0e5b1962f554","awsDigest":"sha256:18913d05a6c4d205f0b7126c4723bb9ccbd4dc24403da46ed0f9f4bf2a142804","azureDigest":"sha256:82bce78603056e709d4c4e9f9ebb25c222c36d8a07f8c05381c2372d9078eca8","genericDigest":"sha256:e7278d763e448bf6c184b0682cf98cdca078d58a27e1b2f3c906792670aa211a","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/operator","suffix":"","tag":"v1.19.1","useDigest":true}` | cilium-operator image. |
| operator.nodeGCInterval | string | `"5m0s"` | Interval for cilium node garbage collection. |
| operator.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for cilium-operator pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
| operator.podAnnotations | object | `{}` | Annotations to be added to cilium-operator pods |
@@ -804,10 +871,12 @@ contributors across the globe, there is almost always someone available to help.
| operator.podLabels | object | `{}` | Labels to be added to cilium-operator pods |
| operator.podSecurityContext | object | `{"seccompProfile":{"type":"RuntimeDefault"}}` | Security context to be added to cilium-operator pods |
| operator.pprof.address | string | `"localhost"` | Configure pprof listen address for cilium-operator |
| operator.pprof.blockProfileRate | int | `0` | Enable goroutine blocking profiling for cilium-operator and set the rate of sampled events in nanoseconds (set to 1 to sample all events [warning: performance overhead]) |
| operator.pprof.enabled | bool | `false` | Enable pprof for cilium-operator |
| operator.pprof.mutexProfileFraction | int | `0` | Enable mutex contention profiling for cilium-operator and set the fraction of sampled events (set to 1 to sample all events) |
| operator.pprof.port | int | `6061` | Configure pprof listen port for cilium-operator |
| operator.priorityClassName | string | `""` | The priority class to use for cilium-operator |
| operator.prometheus | object | `{"enabled":true,"metricsService":false,"port":9963,"serviceMonitor":{"annotations":{},"enabled":false,"interval":"10s","jobLabel":"","labels":{},"metricRelabelings":null,"relabelings":null,"scrapeTimeout":null}}` | Enable prometheus metrics for cilium-operator on the configured port at /metrics |
| operator.prometheus | object | `{"enabled":true,"metricsService":false,"port":9963,"serviceMonitor":{"annotations":{},"enabled":false,"interval":"10s","jobLabel":"","labels":{},"metricRelabelings":null,"relabelings":null,"scrapeTimeout":null},"tls":{"enabled":false,"server":{"existingSecret":"","mtls":{"enabled":false}}}}` | Enable prometheus metrics for cilium-operator on the configured port at /metrics |
| operator.prometheus.serviceMonitor.annotations | object | `{}` | Annotations to add to ServiceMonitor cilium-operator |
| operator.prometheus.serviceMonitor.enabled | bool | `false` | Enable service monitors. This requires the prometheus CRDs to be available (see https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml) |
| operator.prometheus.serviceMonitor.interval | string | `"10s"` | Interval for scrape metrics. |
@@ -816,6 +885,8 @@ contributors across the globe, there is almost always someone available to help.
| operator.prometheus.serviceMonitor.metricRelabelings | string | `nil` | Metrics relabeling configs for the ServiceMonitor cilium-operator |
| operator.prometheus.serviceMonitor.relabelings | string | `nil` | Relabeling configs for the ServiceMonitor cilium-operator |
| operator.prometheus.serviceMonitor.scrapeTimeout | string | `nil` | Timeout after which scrape is considered to be failed. |
| operator.prometheus.tls | object | `{"enabled":false,"server":{"existingSecret":"","mtls":{"enabled":false}}}` | TLS configuration for Prometheus |
| operator.prometheus.tls.server.existingSecret | string | `""` | Name of the Secret containing the certificate, key and CA files for the Prometheus server. |
| operator.removeNodeTaints | bool | `true` | Remove Cilium node taint from Kubernetes nodes that have a healthy Cilium pod running. |
| operator.replicas | int | `2` | Number of replicas to run for the cilium-operator deployment |
| operator.resources | object | `{}` | cilium-operator resource limits & requests ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
@@ -828,25 +899,30 @@ contributors across the globe, there is almost always someone available to help.
| operator.topologySpreadConstraints | list | `[]` | Pod topology spread constraints for cilium-operator |
| operator.unmanagedPodWatcher.intervalSeconds | int | `15` | Interval, in seconds, to check if there are any pods that are not managed by Cilium. |
| operator.unmanagedPodWatcher.restart | bool | `true` | Restart any pod that are not managed by Cilium. |
| operator.unmanagedPodWatcher.selector | string | `nil` | Selector for pods that should be restarted when not managed by Cilium. If not set, defaults to built-in selector "k8s-app=kube-dns". Set to empty string to select all pods. @schema type: [null, string] @schema |
| operator.updateStrategy | object | `{"rollingUpdate":{"maxSurge":"25%","maxUnavailable":"50%"},"type":"RollingUpdate"}` | cilium-operator update strategy |
| pmtuDiscovery.enabled | bool | `false` | Enable path MTU discovery to send ICMP fragmentation-needed replies to the client. |
| pmtuDiscovery.packetizationLayerPMTUDMode | string | `"blackhole"` | Enable kernel probing path MTU discovery for Pods which uses different message sizes to search for correct MTU value. Valid values are: always, blackhole, disabled and unset (or empty). If value is 'unset' or left empty then will not try to override setting. |
| podAnnotations | object | `{}` | Annotations to be added to agent pods |
| podLabels | object | `{}` | Labels to be added to agent pods |
| podSecurityContext | object | `{"appArmorProfile":{"type":"Unconfined"},"seccompProfile":{"type":"Unconfined"}}` | Security Context for cilium-agent pods. |
| podSecurityContext.appArmorProfile | object | `{"type":"Unconfined"}` | AppArmorProfile options for the `cilium-agent` and init containers |
| policyCIDRMatchMode | string | `nil` | policyCIDRMatchMode is a list of entities that may be selected by CIDR selector. The possible value is "nodes". |
| policyDenyResponse | string | `"none"` | Configure what the response should be to pod egress traffic denied by network policy. Possible values: - none (default) - icmp |
| policyEnforcementMode | string | `"default"` | The agent can be put into one of the three policy enforcement modes: default, always and never. ref: https://docs.cilium.io/en/stable/security/policy/intro/#policy-enforcement-modes |
| pprof.address | string | `"localhost"` | Configure pprof listen address for cilium-agent |
| pprof.blockProfileRate | int | `0` | Enable goroutine blocking profiling for cilium-agent and set the rate of sampled events in nanoseconds (set to 1 to sample all events [warning: performance overhead]) |
| pprof.enabled | bool | `false` | Enable pprof for cilium-agent |
| pprof.mutexProfileFraction | int | `0` | Enable mutex contention profiling for cilium-agent and set the fraction of sampled events (set to 1 to sample all events) |
| pprof.port | int | `6060` | Configure pprof listen port for cilium-agent |
| preflight.affinity | object | `{"podAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"k8s-app":"cilium"}},"topologyKey":"kubernetes.io/hostname"}]}}` | Affinity for cilium-preflight |
| preflight.annotations | object | `{}` | Annotations to be added to all top-level preflight objects (resources under templates/cilium-preflight) |
| preflight.enabled | bool | `false` | Enable Cilium pre-flight resources (required for upgrade) |
| preflight.envoy.image | object | `{"digest":"sha256:81398e449f2d3d0a6a70527e4f641aaa685d3156bea0bb30712fae3fd8822b86","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium-envoy","tag":"v1.35.9-1767794330-db497dd19e346b39d81d7b5c0dedf6c812bcc5c9","useDigest":true}` | Envoy pre-flight image. |
| preflight.envoy.image | object | `{"digest":"sha256:8188114a2768b5f49d6ce58e168b20d765e0fbc64eee0d83241aa2b150ccd788","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium-envoy","tag":"v1.35.9-1770979049-232ed4a26881e4ab4f766f251f258ed424fff663","useDigest":true}` | Envoy pre-flight image. |
| preflight.extraEnv | list | `[]` | Additional preflight environment variables. |
| preflight.extraVolumeMounts | list | `[]` | Additional preflight volumeMounts. |
| preflight.extraVolumes | list | `[]` | Additional preflight volumes. |
| preflight.image | object | `{"digest":"sha256:42ec562a5ff6c8a860c0639f5a7611685e253fd9eb2d2fcdade693724c9166a4","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.18.6","useDigest":true}` | Cilium pre-flight image. |
| preflight.image | object | `{"digest":"sha256:41f1f74a0000de8656f1de4088ea00c8f2d49d6edea579034c73c5fd5fe01792","override":null,"pullPolicy":"IfNotPresent","repository":"quay.io/cilium/cilium","tag":"v1.19.1","useDigest":true}` | Cilium pre-flight image. |
| preflight.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Node labels for preflight pod assignment ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector |
| preflight.podAnnotations | object | `{}` | Annotations to be added to preflight pods |
| preflight.podDisruptionBudget.enabled | bool | `false` | enable PodDisruptionBudget ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/ |
@@ -890,24 +966,36 @@ contributors across the globe, there is almost always someone available to help.
| sctp | object | `{"enabled":false}` | SCTP Configuration Values |
| sctp.enabled | bool | `false` | Enable SCTP support. NOTE: Currently, SCTP support does not support rewriting ports or multihoming. |
| secretsNamespaceAnnotations | object | `{}` | Annotations to be added to all cilium-secret namespaces (resources under templates/cilium-secrets-namespace) |
| secretsNamespaceLabels | object | `{}` | Labels to be added to all cilium-secret namespaces (resources under templates/cilium-secrets-namespace) |
| securityContext.allowPrivilegeEscalation | bool | `false` | disable privilege escalation |
| securityContext.capabilities.applySysctlOverwrites | list | `["SYS_ADMIN","SYS_CHROOT","SYS_PTRACE"]` | capabilities for the `apply-sysctl-overwrites` init container |
| securityContext.capabilities.ciliumAgent | list | `["CHOWN","KILL","NET_ADMIN","NET_RAW","IPC_LOCK","SYS_MODULE","SYS_ADMIN","SYS_RESOURCE","DAC_OVERRIDE","FOWNER","SETGID","SETUID"]` | Capabilities for the `cilium-agent` container |
| securityContext.capabilities.ciliumAgent | list | `["CHOWN","KILL","NET_ADMIN","NET_RAW","IPC_LOCK","SYS_MODULE","SYS_ADMIN","SYS_RESOURCE","DAC_OVERRIDE","FOWNER","SETGID","SETUID","SYSLOG"]` | Capabilities for the `cilium-agent` container |
| securityContext.capabilities.cleanCiliumState | list | `["NET_ADMIN","SYS_MODULE","SYS_ADMIN","SYS_RESOURCE"]` | Capabilities for the `clean-cilium-state` init container |
| securityContext.capabilities.mountCgroup | list | `["SYS_ADMIN","SYS_CHROOT","SYS_PTRACE"]` | Capabilities for the `mount-cgroup` init container |
| securityContext.privileged | bool | `false` | Run the pod with elevated privileges |
| securityContext.seLinuxOptions | object | `{"level":"s0","type":"spc_t"}` | SELinux options for the `cilium-agent` and init containers |
| serviceAccounts | object | Component's fully qualified name. | Define serviceAccount names for components. |
| serviceAccounts.clustermeshcertgen | object | `{"annotations":{},"automount":true,"create":true,"name":"clustermesh-apiserver-generate-certs"}` | Clustermeshcertgen is used if clustermesh.apiserver.tls.auto.method=cronJob |
| serviceAccounts.corednsMCSAPI | object | `{"annotations":{},"automount":true,"create":true,"name":"cilium-coredns-mcsapi-autoconfig"}` | CorednsMCSAPI is used if clustermesh.mcsapi.corednsAutoConfigure.enabled=true |
| serviceAccounts.hubblecertgen | object | `{"annotations":{},"automount":true,"create":true,"name":"hubble-generate-certs"}` | Hubblecertgen is used if hubble.tls.auto.method=cronJob |
| serviceAccounts.nodeinit.enabled | bool | `false` | Enabled is temporary until https://github.com/cilium/cilium-cli/issues/1396 is implemented. Cilium CLI doesn't create the SAs for node-init, thus the workaround. Helm is not affected by this issue. Name and automount can be configured, if enabled is set to true. Otherwise, they are ignored. Enabled can be removed once the issue is fixed. Cilium-nodeinit DS must also be fixed. |
| serviceNoBackendResponse | string | `"reject"` | Configure what the response should be to traffic for a service without backends. Possible values: - reject (default) - drop |
| sleepAfterInit | bool | `false` | Do not run Cilium agent when running with clean mode. Useful to completely uninstall Cilium as it will stop Cilium from starting and create artifacts in the node. |
| socketLB | object | `{"enabled":false}` | Configure socket LB |
| socketLB.enabled | bool | `false` | Enable socket LB |
| standaloneDnsProxy | object | `{"annotations":{},"automountServiceAccountToken":false,"debug":false,"enabled":false,"image":{"digest":"","override":null,"pullPolicy":"IfNotPresent","repository":"","tag":"","useDigest":true},"nodeSelector":{"kubernetes.io/os":"linux"},"rollOutPods":false,"serverPort":10095,"tolerations":[],"updateStrategy":{"rollingUpdate":{"maxSurge":2,"maxUnavailable":0},"type":"RollingUpdate"}}` | Standalone DNS Proxy Configuration Note: The standalone DNS proxy uses the agent's dnsProxy.* configuration for DNS settings (proxyPort, enableDnsCompression) to ensure consistency. |
| standaloneDnsProxy.annotations | object | `{}` | Standalone DNS proxy annotations |
| standaloneDnsProxy.automountServiceAccountToken | bool | `false` | Standalone DNS proxy auto mount service account token |
| standaloneDnsProxy.debug | bool | `false` | Standalone DNS proxy debug mode |
| standaloneDnsProxy.enabled | bool | `false` | Enable standalone DNS proxy (alpha feature) |
| standaloneDnsProxy.image | object | `{"digest":"","override":null,"pullPolicy":"IfNotPresent","repository":"","tag":"","useDigest":true}` | Standalone DNS proxy image |
| standaloneDnsProxy.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Standalone DNS proxy Node Selector |
| standaloneDnsProxy.rollOutPods | bool | `false` | Roll out Standalone DNS proxy automatically when configmap is updated. |
| standaloneDnsProxy.serverPort | int | `10095` | Standalone DNS proxy server port |
| standaloneDnsProxy.tolerations | list | `[]` | Standalone DNS proxy tolerations |
| standaloneDnsProxy.updateStrategy | object | `{"rollingUpdate":{"maxSurge":2,"maxUnavailable":0},"type":"RollingUpdate"}` | Standalone DNS proxy update strategy |
| startupProbe.failureThreshold | int | `300` | failure threshold of startup probe. Allow Cilium to take up to 600s to start up (300 attempts with 2s between attempts). |
| startupProbe.periodSeconds | int | `2` | interval between checks of the startup probe |
| svcSourceRangeCheck | bool | `true` | Enable check of service source ranges (currently, only for LoadBalancer). |
| synchronizeK8sNodes | bool | `true` | Synchronize Kubernetes nodes to kvstore and perform CNP GC. |
| sysctlfix | object | `{"enabled":true}` | Configure sysctl override described in #20072. |
| sysctlfix.enabled | bool | `true` | Enable the sysctl override. When enabled, the init container will mount the /proc of the host so that the `sysctlfix` utility can execute. |
@@ -929,11 +1017,12 @@ contributors across the globe, there is almost always someone available to help.
| tls.secretsNamespace | object | `{"create":true,"name":"cilium-secrets"}` | Configures where secrets used in CiliumNetworkPolicies will be looked for |
| tls.secretsNamespace.create | bool | `true` | Create secrets namespace for TLS Interception secrets. |
| tls.secretsNamespace.name | string | `"cilium-secrets"` | Name of TLS Interception secret namespace. |
| tmpVolume | object | `{}` | Configure temporary volume for cilium-agent |
| tolerations | list | `[{"operator":"Exists"}]` | Node tolerations for agent scheduling to nodes with taints ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ |
| tunnelPort | int | Port 8472 for VXLAN, Port 6081 for Geneve | Configure VXLAN and Geneve tunnel port. |
| tunnelProtocol | string | `"vxlan"` | Tunneling protocol to use in tunneling mode and for ad-hoc tunnels. Possible values: - "" - vxlan - geneve |
| tunnelSourcePortRange | string | 0-0 to let the kernel driver decide the range | Configure VXLAN and Geneve tunnel source port range hint. |
| underlayProtocol | string | `"ipv4"` | IP family for the underlay. |
| underlayProtocol | string | `"ipv4"` | IP family for the underlay. Possible values: - "ipv4" - "ipv6" |
| updateStrategy | object | `{"rollingUpdate":{"maxUnavailable":2},"type":"RollingUpdate"}` | Cilium agent update strategy |
| upgradeCompatibility | string | `nil` | upgradeCompatibility helps users upgrading to ensure that the configMap for Cilium will not change critical values to ensure continued operation This flag is not required for new installations. For example: '1.7', '1.8', '1.9' |
| vtep.cidr | string | `""` | A space separated list of VTEP device CIDRs, for example "1.1.1.0/24 1.1.2.0/24" |

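The rows above are helm-docs generated reference entries for the chart's values. As a hedged illustration only (the key names are taken from the table above, but the chosen values and the release/chart names are examples, not recommendations), a minimal override file combining a few of them might look like:

```yaml
# values-override.yaml (illustrative sketch; keys from the reference table,
# values are examples only — verify against your cluster before use).
ipam:
  mode: cluster-pool
  operator:
    clusterPoolIPv4MaskSize: 24
    clusterPoolIPv4PodCIDRList:
      - "10.0.0.0/8"
tunnelProtocol: vxlan
operator:
  replicas: 2
nodePort:
  enabled: true
```

Applied with something like `helm upgrade --install cilium cilium/cilium -n kube-system -f values-override.yaml` (the chart reference and namespace depend on your setup).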
@@ -292,7 +292,7 @@ overloadManager:
      - name: "envoy.resource_monitors.global_downstream_max_connections"
        typedConfig:
          "@type": "type.googleapis.com/envoy.extensions.resource_monitors.downstream_connections.v3.DownstreamConnectionsConfig"
          max_active_downstream_connections: "50000"
          max_active_downstream_connections: "{{ .Values.envoy.maxGlobalDownstreamConnections }}"
  applicationLogConfig:
    logFormat:
      {{- if .Values.envoy.log.format_json }}

@@ -156,6 +156,14 @@ fi
iptables -w -t nat -D POSTROUTING -m comment --comment "ip-masq: ensure nat POSTROUTING directs all non-LOCAL destination traffic to our custom IP-MASQ chain" -m addrtype ! --dst-type LOCAL -j IP-MASQ || true
{{- end }}

{{- if .Values.nodeinit.waitForCloudInit }}
echo "Waiting for cloud-init..."
if command -v cloud-init >/dev/null 2>&1; then
  cloud-init status --wait
  echo "cloud-init completed!"
fi
{{- end }}

{{- if not (eq .Values.nodeinit.bootstrapFile "") }}
mkdir -p {{ .Values.nodeinit.bootstrapFile | dir | quote }}
date > {{ .Values.nodeinit.bootstrapFile | quote }}

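The hunk above gates a cloud-init wait behind the new `nodeinit.waitForCloudInit` value. As a hedged sketch (key taken from the values table above; whether `nodeinit.enabled` is also required is an assumption about the nodeinit DaemonSet), opting into that wait from a values file would look like:

```yaml
# Illustrative only: enable the cloud-init wait added to the
# nodeinit startup script in the hunk above.
nodeinit:
  enabled: true          # assumed prerequisite for the nodeinit DaemonSet
  waitForCloudInit: true
```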
@@ -21,6 +21,24 @@ dnsPolicy: {{ .Values.dnsPolicy }}
{{- end }}
{{- end }}

{{/*
Allow packagers to add extra volumes to cilium-operator.
*/}}
{{- define "cilium-operator.volumes.extra" }}
{{- end }}

{{- define "cilium-operator.volumeMounts.extra" }}
{{- end }}

{{/*
Allow packagers to set securityContext for cilium-operator.
*/}}
{{- define "cilium.operator.securityContext" }}
{{- with .Values.operator.securityContext }}
{{ toYaml . }}
{{- end }}
{{- end }}

{{/*
Intentionally empty to allow downstream chart packagers to add extra
containers to hubble-relay without having to modify the deployment manifest
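The `cilium.operator.securityContext` helper above simply renders `.Values.operator.securityContext` via `toYaml` when it is set. A hedged example of a values fragment it would pick up (the field names here are standard Kubernetes `securityContext` fields, not values defined by this chart):

```yaml
# Illustrative only: a securityContext that the
# "cilium.operator.securityContext" helper above would render verbatim.
operator:
  securityContext:
    runAsNonRoot: true
    runAsUser: 65532
```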
@@ -72,3 +90,87 @@ Allow packagers to add extra configuration to certgen.
*/}}
{{- define "certgen.config.extra" -}}
{{- end }}

{{/*
Allow packagers to add extra arguments to the clustermesh-apiserver apiserver container.
*/}}
{{- define "clustermesh.apiserver.args.extra" -}}
{{- end }}

{{/*
Allow packagers to add extra arguments to the clustermesh-apiserver kvstoremesh container.
*/}}
{{- define "clustermesh.kvstoremesh.args.extra" -}}
{{- end }}

{{/*
Allow packagers to add init containers to the cilium-envoy pods.
*/}}
{{- define "envoy.initContainers" -}}
{{- end }}

{{/*
Allow packagers to add extra args to the cilium-envoy container.
*/}}
{{- define "envoy.args.extra" -}}
{{- end }}

{{/*
Allow packagers to add extra env vars to the cilium-envoy container.
*/}}
{{- define "envoy.env.extra" -}}
{{- end }}

{{/*
Allow packagers to add extra volume mounts to the cilium-envoy container.
*/}}
{{- define "envoy.volumeMounts.extra" -}}
{{- end }}

{{/*
Allow packagers to add extra host path mounts to the cilium-envoy container.
*/}}
{{- define "envoy.hostPathMounts.extra" -}}
{{- end }}


{{/*
Allow packagers to define set of ports for cilium-envoy container.
The template needs to allow overriding ports spec not just adding.
*/}}
{{- define "envoy.ports" -}}
{{- if .Values.envoy.prometheus.enabled }}
ports:
- name: envoy-metrics
  containerPort: {{ .Values.envoy.prometheus.port }}
  hostPort: {{ .Values.envoy.prometheus.port }}
  protocol: TCP
{{- if and .Values.envoy.debug.admin.enabled .Values.envoy.debug.admin.port }}
- name: envoy-admin
  containerPort: {{ .Values.envoy.debug.admin.port }}
  hostPort: {{ .Values.envoy.debug.admin.port }}
  protocol: TCP
{{- end }}
{{- end }}
{{- end }}

{{/*
|
||||
Allow packagers to define update strategy for cilium-envoy pods.
|
||||
*/}}
|
||||
{{- define "envoy.updateStrategy" -}}
|
||||
{{- with .Values.envoy.updateStrategy }}
|
||||
updateStrategy:
|
||||
{{- toYaml . | trim | nindent 2 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
{{/*
|
||||
Allow packagers to define affinity for cilium-envoy pods.
|
||||
*/}}
|
||||
{{- define "envoy.affinity" -}}
|
||||
{{- with .Values.envoy.affinity }}
|
||||
affinity:
|
||||
{{- toYaml . | nindent 2 }}
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
|
||||
|
||||
@@ -131,12 +131,16 @@ To override the namespace and configMap when using `auto`:
{{- define "k8sServiceHost" }}
{{- $configmapName := default "cluster-info" .Values.k8sServiceLookupConfigMapName }}
{{- $configmapNamespace := default "kube-public" .Values.k8sServiceLookupNamespace }}
{{- if and (eq .Values.k8sServiceHost "auto") (lookup "v1" "ConfigMap" $configmapNamespace $configmapName) }}
{{- if eq .Values.k8sServiceHost "auto" }}
{{- $configmap := (lookup "v1" "ConfigMap" $configmapNamespace $configmapName) }}
{{- $kubeconfig := get $configmap.data "kubeconfig" }}
{{- $k8sServer := get ($kubeconfig | fromYaml) "clusters" | mustFirst | dig "cluster" "server" "" }}
{{- $uri := (split "https://" $k8sServer)._1 | trim }}
{{- (split ":" $uri)._0 | quote }}
{{- if $configmap }}
{{- $kubeconfig := get $configmap.data "kubeconfig" }}
{{- $k8sServer := get ($kubeconfig | fromYaml) "clusters" | mustFirst | dig "cluster" "server" "" }}
{{- $uri := (split "https://" $k8sServer)._1 | trim }}
{{- (split ":" $uri)._0 | quote }}
{{- else }}
{{- fail (printf "ConfigMap %s/%s not found, please create it or set k8sServiceHost to a valid value" $configmapNamespace $configmapName) }}
{{- end }}
{{- else }}
{{- .Values.k8sServiceHost | quote }}
{{- end }}

@@ -94,7 +94,6 @@ rules:
  - cilium.io
  resources:
  - ciliumloadbalancerippools
  - ciliumbgppeeringpolicies
  - ciliumbgpnodeconfigs
  - ciliumbgpadvertisements
  - ciliumbgppeerconfigs

@@ -10,6 +10,7 @@

{{- $kubeProxyReplacement := (coalesce .Values.kubeProxyReplacement "false") -}}
{{- $envoyDS := eq (include "envoyDaemonSetEnabled" .) "true" -}}
{{- $buildDaemonConfig := or (kindIs "invalid" .Values.daemon.configSources) (not (regexMatch "^config-map:[^,]+$" .Values.daemon.configSources)) -}}

---
apiVersion: apps/v1
@@ -134,7 +135,7 @@ spec:
httpGet:
host: {{ .Values.ipv4.enabled | ternary "127.0.0.1" "::1" | quote }}
path: /healthz
port: {{ .Values.healthPort }}
port: health
scheme: HTTP
httpHeaders:
- name: "brief"
@@ -154,7 +155,7 @@ spec:
httpGet:
host: {{ .Values.ipv4.enabled | ternary "127.0.0.1" "::1" | quote }}
path: /healthz
port: {{ .Values.healthPort }}
port: health
scheme: HTTP
httpHeaders:
- name: "brief"
@@ -177,7 +178,7 @@ spec:
httpGet:
host: {{ .Values.ipv4.enabled | ternary "127.0.0.1" "::1" | quote }}
path: /healthz
port: {{ .Values.healthPort }}
port: health
scheme: HTTP
httpHeaders:
- name: "brief"
@@ -250,12 +251,17 @@ spec:
resources:
{{- toYaml . | trim | nindent 10 }}
{{- end }}
{{- if or .Values.prometheus.enabled (or .Values.hubble.metrics.enabled .Values.hubble.metrics.dynamic.enabled) }}
ports:
- name: health
containerPort: {{ .Values.healthPort }}
hostPort: {{ .Values.healthPort }}
protocol: TCP
{{- if .Values.hubble.enabled }}
- name: peer-service
containerPort: {{ .Values.hubble.peerService.targetPort }}
hostPort: {{ .Values.hubble.peerService.targetPort }}
protocol: TCP
{{- end }}
{{- if .Values.prometheus.enabled }}
- name: prometheus
containerPort: {{ .Values.prometheus.port }}
@@ -280,7 +286,6 @@ spec:
hostPort: {{ .Values.hubble.metrics.port }}
protocol: TCP
{{- end }}
{{- end }}
securityContext:
{{- if .Values.securityContext.privileged }}
privileged: true
@@ -375,6 +380,10 @@ spec:
- name: cilium-ipsec-secrets
mountPath: {{ .Values.encryption.ipsec.mountPath }}
{{- end }}
{{- if and .Values.encryption.enabled (eq .Values.encryption.type "ztunnel") }}
- name: cilium-ztunnel-secrets
mountPath: /etc/ztunnel
{{- end }}
{{- if .Values.kubeConfigPath }}
- name: kube-config
mountPath: {{ .Values.kubeConfigPath }}
@@ -390,8 +399,14 @@ spec:
mountPath: /var/lib/cilium/tls/hubble
readOnly: true
{{- end }}
{{- if $buildDaemonConfig }}
- name: tmp
mountPath: /tmp
{{- else }}
- name: cilium-config-path
mountPath: /tmp/cilium/config-map
readOnly: true
{{- end }}
{{- range .Values.extraHostPathMounts }}
- name: {{ .name }}
mountPath: {{ .mountPath }}
@@ -447,6 +462,7 @@ spec:
{{- toYaml .Values.extraContainers | nindent 6 }}
{{- end }}
initContainers:
{{- if $buildDaemonConfig }}
- name: config
image: {{ include "cilium.image" .Values.image | quote }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
@@ -513,6 +529,17 @@ spec:
{{- toYaml . | nindent 8 }}
{{- end }}
terminationMessagePolicy: FallbackToLogsOnError
securityContext:
{{- if .Values.securityContext.privileged }}
privileged: true
{{- else }}
capabilities:
add:
- NET_ADMIN
drop:
- ALL
{{- end}}
{{- end }}
{{- if .Values.cgroup.autoMount.enabled }}
# Required to mount cgroup2 filesystem on the underlying Kubernetes node.
# We use nsenter command with host's cgroup and mount namespaces enabled.
@@ -524,9 +551,12 @@ spec:
value: {{ .Values.cgroup.hostRoot }}
- name: BIN_PATH
value: {{ .Values.cni.binPath }}
{{- with .Values.cgroup.autoMount.resources }}
{{- if .Values.cgroup.autoMount.resources }}
resources:
{{- toYaml . | trim | nindent 10 }}
{{- toYaml .Values.cgroup.autoMount.resources | trim | nindent 10 }}
{{- else if .Values.initResources }}
resources:
{{- toYaml .Values.initResources | trim | nindent 10 }}
{{- end }}
command:
- sh
@@ -821,7 +851,7 @@ spec:
{{- end }}
{{- if and .Values.clustermesh.config.enabled (not (and .Values.clustermesh.useAPIServer .Values.clustermesh.apiserver.kvstoremesh.enabled )) }}
hostAliases:
{{- range $cluster := .Values.clustermesh.config.clusters }}
{{- range $_, $cluster := (include "clustermesh-clusters" . | fromJson) }}
{{- range $ip := $cluster.ips }}
- ip: {{ $ip }}
hostnames: [ "{{ $cluster.name }}.{{ $.Values.clustermesh.config.domain }}" ]
@@ -829,9 +859,20 @@ spec:
{{- end }}
{{- end }}
volumes:
# For sharing configuration between the "config" initContainer and the agent
{{- if $buildDaemonConfig }}
# For sharing configuration between the "config" initContainer and the agent
- name: tmp
{{- if .Values.tmpVolume }}
{{- toYaml .Values.tmpVolume | nindent 8 }}
{{- else }}
emptyDir: {}
{{- end }}
{{- else }}
# To read the configuration from the config map
- name: cilium-config-path
configMap:
name: {{ trimPrefix "config-map:" .Values.daemon.configSources }}
{{- end }}
# To keep state between restarts / upgrades
- name: cilium-run
hostPath:
@@ -992,6 +1033,11 @@ spec:
secret:
secretName: {{ .Values.encryption.ipsec.secretName }}
{{- end }}
{{- if and .Values.encryption.enabled (eq .Values.encryption.type "ztunnel") }}
- name: cilium-ztunnel-secrets
secret:
secretName: cilium-ztunnel-secrets
{{- end }}
{{- if .Values.cni.configMap }}
- name: cni-configuration
configMap:

@@ -120,7 +120,7 @@ rules:
- watch
{{- end}}

{{- if and .Values.operator.enabled .Values.serviceAccounts.operator.create $readSecretsOnlyFromSecretsNamespace .Values.tls.secretsNamespace.name }}
{{- if and .Values.agent (not .Values.preflight.enabled) .Values.serviceAccounts.cilium.create $readSecretsOnlyFromSecretsNamespace .Values.tls.secretsNamespace.name }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role

@@ -126,7 +126,7 @@ subjects:
namespace: {{ include "cilium.namespace" . }}
{{- end}}

{{- if and (not .Values.preflight.enabled) $readSecretsOnlyFromSecretsNamespace .Values.tls.secretsNamespace.name }}
{{- if and .Values.agent (not .Values.preflight.enabled) .Values.serviceAccounts.cilium.create $readSecretsOnlyFromSecretsNamespace .Values.tls.secretsNamespace.name }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding

@@ -11,8 +11,13 @@ kind: Secret
metadata:
name: {{ .commonCASecretName }}
namespace: {{ include "cilium.namespace" . }}
{{- with .Values.commonLabels }}
labels:
{{- with .Values.commonLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
cilium.io/helm-template-non-idempotent: "true"
{{- with .Values.nonIdempotentAnnotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
data:

@@ -33,12 +33,6 @@
{{- end }}
{{- $defaultBpfCtTcpMax = 0 -}}
{{- $defaultBpfCtAnyMax = 0 -}}
{{- $defaultKubeProxyReplacement = "probe" -}}
{{- end -}}

{{- /* Default values when 1.9 was initially deployed */ -}}
{{- if semverCompare ">=1.9" (default "1.9" .Values.upgradeCompatibility) -}}
{{- $defaultKubeProxyReplacement = "probe" -}}
{{- end -}}

{{- /* Default values when 1.10 was initially deployed */ -}}
@@ -52,7 +46,6 @@
{{- if .Values.azure.enabled }}
{{- $azureUsePrimaryAddress = "false" -}}
{{- end }}
{{- $defaultKubeProxyReplacement = "disabled" -}}
{{- $defaultDNSProxyEnableTransparentMode = "true" -}}
{{- end -}}

@@ -71,6 +64,10 @@
{{- end }}
{{- end -}}
{{- $ipam := (coalesce .Values.ipam.mode $defaultIPAM) -}}
{{- if .Values.eni.enabled }}
{{- $ipam = "eni" -}}
{{- end }}

{{- $bpfCtTcpMax := (coalesce .Values.bpf.ctTcpMax $defaultBpfCtTcpMax) -}}
{{- $bpfCtAnyMax := (coalesce .Values.bpf.ctAnyMax $defaultBpfCtAnyMax) -}}
{{- $stringValueKPR := (toString .Values.kubeProxyReplacement) -}}
@@ -258,6 +255,14 @@ data:
operator-prometheus-serve-addr: ":{{ .Values.operator.prometheus.port }}"
enable-metrics: "true"
{{- end }}
{{- if and .Values.operator.prometheus.enabled .Values.operator.prometheus.tls.enabled }}
operator-prometheus-enable-tls: "true"
operator-prometheus-tls-cert-file: /var/lib/cilium/tls/prometheus/server.crt
operator-prometheus-tls-key-file: /var/lib/cilium/tls/prometheus/server.key
{{- if .Values.operator.prometheus.tls.server.mtls.enabled }}
operator-prometheus-tls-client-ca-files: /var/lib/cilium/tls/prometheus/client-ca.crt
{{- end }}
{{- end }}

{{- if .Values.operator.skipCRDCreation }}
skip-crd-creation: "true"
@@ -456,6 +461,9 @@ data:
# policy map (per endpoint)
bpf-policy-map-max: "{{ .Values.bpf.policyMapMax | int }}"
{{- end }}
{{- if has (kindOf .Values.bpf.policyMapPressureMetricsThreshold) (list "int64" "float64") }}
bpf-policy-map-pressure-metrics-threshold: {{ .Values.bpf.policyMapPressureMetricsThreshold | quote }}
{{- end }}
{{- if hasKey .Values.bpf "policyStatsMapMax" }}
# bpf-policy-stats-map-max specifies the maximum number of entries in global
# policy stats map
@@ -505,7 +513,7 @@ data:
cluster-name: {{ .Values.cluster.name | quote }}

{{- if hasKey .Values.cluster "id" }}
# Unique ID of the cluster. Must be unique across all conneted clusters and
# Unique ID of the cluster. Must be unique across all connected clusters and
# in the range of 1 and 255. Only relevant when building a mesh of clusters.
cluster-id: "{{ .Values.cluster.id }}"
{{- end }}
@@ -526,7 +534,7 @@ data:
{{- end }}
{{- end }}

routing-mode: {{ .Values.routingMode | default (ternary "native" "tunnel" .Values.gke.enabled) | quote }}
routing-mode: {{ .Values.routingMode | default (ternary "native" "tunnel" (or .Values.eni.enabled .Values.gke.enabled)) | quote }}
tunnel-protocol: {{ .Values.tunnelProtocol | default "vxlan" | quote }}

{{- if eq .Values.routingMode "native" }}
@@ -558,6 +566,10 @@ data:
service-no-backend-response: "{{ .Values.serviceNoBackendResponse }}"
{{- end}}

{{- if .Values.policyDenyResponse }}
policy-deny-response: "{{ .Values.policyDenyResponse }}"
{{- end}}

{{- if .Values.MTU }}
mtu: {{ .Values.MTU | quote }}
{{- end }}
@@ -575,6 +587,35 @@ data:
{{- end }}
ec2-api-endpoint: {{ .Values.eni.ec2APIEndpoint | quote }}
eni-tags: {{ .Values.eni.eniTags | toRawJson | quote }}
{{- if .Values.eni.nodeSpec }}
{{- if ne .Values.eni.nodeSpec.firstInterfaceIndex nil }}
eni-first-interface-index: {{ .Values.eni.nodeSpec.firstInterfaceIndex | quote }}
{{- end }}
{{- if .Values.eni.nodeSpec.subnetIDs }}
eni-subnet-ids: {{ .Values.eni.nodeSpec.subnetIDs | join "," | quote }}
{{- end }}
{{- if .Values.eni.nodeSpec.subnetTags }}
eni-subnet-tags: {{ .Values.eni.nodeSpec.subnetTags | join "," | quote }}
{{- end }}
{{- if .Values.eni.nodeSpec.securityGroups }}
eni-security-groups: {{ .Values.eni.nodeSpec.securityGroups | join "," | quote }}
{{- end }}
{{- if .Values.eni.nodeSpec.securityGroupTags }}
eni-security-group-tags: {{ .Values.eni.nodeSpec.securityGroupTags | join "," | quote }}
{{- end }}
{{- if .Values.eni.nodeSpec.excludeInterfaceTags }}
eni-exclude-interface-tags: {{ .Values.eni.nodeSpec.excludeInterfaceTags | join "," | quote }}
{{- end }}
{{- if .Values.eni.nodeSpec.usePrimaryAddress }}
eni-use-primary-address: "true"
{{- end }}
{{- if .Values.eni.nodeSpec.disablePrefixDelegation }}
eni-disable-prefix-delegation: "true"
{{- end }}
{{- if ne .Values.eni.nodeSpec.deleteOnTermination nil }}
eni-delete-on-termination: {{ .Values.eni.nodeSpec.deleteOnTermination | quote }}
{{- end }}
{{- end }}
{{- if .Values.eni.subnetIDsFilter }}
subnet-ids-filter: {{ .Values.eni.subnetIDsFilter | join " " | quote }}
{{- end }}
@@ -600,17 +641,39 @@ data:
{{- end }}
azure-use-primary-address: {{ $azureUsePrimaryAddress | quote }}
{{- end }}
{{- if .Values.azure.nodeSpec.azureInterfaceName }}
azure-interface-name: {{ .Values.azure.nodeSpec.azureInterfaceName | quote }}
{{- end }}

{{- if .Values.alibabacloud.enabled }}
enable-endpoint-routes: "true"
auto-create-cilium-node-resource: "true"
{{- end }}
{{- if .Values.alibabacloud.nodeSpec.vSwitches }}
alibabacloud-vswitches: {{ .Values.alibabacloud.nodeSpec.vSwitches | join "," | quote }}
{{- end }}
{{- if .Values.alibabacloud.nodeSpec.vSwitchTags }}
alibabacloud-vswitch-tags: {{ .Values.alibabacloud.nodeSpec.vSwitchTags | join "," | quote }}
{{- end }}
{{- if .Values.alibabacloud.nodeSpec.securityGroups }}
alibabacloud-security-groups: {{ .Values.alibabacloud.nodeSpec.securityGroups | join "," | quote }}
{{- end }}
{{- if .Values.alibabacloud.nodeSpec.securityGroupTags }}
alibabacloud-security-group-tags: {{ .Values.alibabacloud.nodeSpec.securityGroupTags | join "," | quote }}
{{- end }}

{{- if hasKey .Values "l7Proxy" }}
# Enables L7 proxy for L7 policy enforcement and visibility
enable-l7-proxy: {{ .Values.l7Proxy | quote }}
{{- end }}

{{- if hasKey .Values "standaloneDnsProxy" }}
{{- if .Values.standaloneDnsProxy.enabled }}
enable-standalone-dns-proxy: {{ .Values.standaloneDnsProxy.enabled | quote }}
standalone-dns-proxy-server-port: {{ .Values.standaloneDnsProxy.serverPort | quote }}
{{- end }}
{{- end }}

{{- if ne $cniChainingMode "none" }}
# Enable chaining with another CNI plugin
#
@@ -643,6 +706,7 @@ data:
enable-ipv4-big-tcp: {{ .Values.enableIPv4BIGTCP | quote }}
enable-ipv6-big-tcp: {{ .Values.enableIPv6BIGTCP | quote }}
enable-ipv6-masquerade: {{ .Values.enableIPv6Masquerade | quote }}
enable-tunnel-big-tcp: {{ .Values.enableTunnelBIGTCP | quote }}

{{- if hasKey .Values.bpf "enableTCX" }}
enable-tcx: {{ .Values.bpf.enableTCX | quote }}
@@ -687,6 +751,8 @@ data:
{{- if .Values.encryption.wireguard.persistentKeepalive }}
wireguard-persistent-keepalive: {{ .Values.encryption.wireguard.persistentKeepalive | quote }}
{{- end }}
{{- else if eq .Values.encryption.type "ztunnel" }}
enable-ztunnel: {{ .Values.encryption.enabled | quote }}
{{- end }}
{{- if .Values.encryption.nodeEncryption }}
encrypt-node: {{ .Values.encryption.nodeEncryption | quote }}
@@ -694,11 +760,20 @@ data:
{{- end }}

{{- if .Values.encryption.strictMode.enabled }}
enable-encryption-strict-mode: {{ .Values.encryption.strictMode.enabled | quote }}
# --- DEPRECATED: Please use encryption.strictMode.egress.enabled instead
enable-encryption-strict-mode-egress: {{ .Values.encryption.strictMode.enabled | quote }}
encryption-strict-egress-cidr: {{ .Values.encryption.strictMode.cidr | quote }}
encryption-strict-egress-allow-remote-node-identities: {{ .Values.encryption.strictMode.allowRemoteNodeIdentities | quote }}
{{- end }}

encryption-strict-mode-cidr: {{ .Values.encryption.strictMode.cidr | quote }}
{{- if .Values.encryption.strictMode.ingress.enabled }}
enable-encryption-strict-mode-ingress: {{ .Values.encryption.strictMode.ingress.enabled | quote }}
{{- end }}

encryption-strict-mode-allow-remote-node-identities: {{ .Values.encryption.strictMode.allowRemoteNodeIdentities | quote }}
{{- if .Values.encryption.strictMode.egress.enabled }}
enable-encryption-strict-mode-egress: {{ .Values.encryption.strictMode.egress.enabled | quote }}
encryption-strict-egress-cidr: {{ .Values.encryption.strictMode.egress.cidr | quote }}
encryption-strict-egress-allow-remote-node-identities: {{ .Values.encryption.strictMode.egress.allowRemoteNodeIdentities | quote }}
{{- end }}

enable-xt-socket-fallback: {{ .Values.enableXTSocketFallback | quote }}
@@ -773,6 +848,10 @@ data:
kube-proxy-replacement-healthz-bind-address: {{ default "" .Values.kubeProxyReplacementHealthzBindAddr | quote}}
{{- end }}

{{- if hasKey .Values "enableNoServiceEndpointsRoutable" }}
enable-no-service-endpoints-routable: {{ .Values.enableNoServiceEndpointsRoutable | quote }}
{{- end }}

{{- if $socketLB }}
{{- if hasKey $socketLB "enabled" }}
bpf-lb-sock: {{ $socketLB.enabled | quote }}
@@ -789,9 +868,6 @@ data:
{{- end }}

{{- if hasKey .Values "nodePort" }}
{{- if eq $kubeProxyReplacement "false" }}
enable-node-port: {{ .Values.nodePort.enabled | quote }}
{{- end }}
{{- if hasKey .Values.nodePort "range" }}
node-port-range: {{ get .Values.nodePort "range" | quote }}
{{- end }}
@@ -832,10 +908,6 @@ data:
enable-service-topology: {{ .Values.loadBalancer.serviceTopology | quote }}
# {{- end }}

{{- if hasKey .Values.loadBalancer "protocolDifferentiation" }}
bpf-lb-proto-diff: {{ .Values.loadBalancer.protocolDifferentiation.enabled | quote }}
{{- end }}

{{- end }}
{{- if hasKey .Values.maglev "tableSize" }}
bpf-lb-maglev-table-size: {{ .Values.maglev.tableSize | quote}}
@@ -843,11 +915,9 @@ data:
{{- if hasKey .Values.maglev "hashSeed" }}
bpf-lb-maglev-hash-seed: {{ .Values.maglev.hashSeed | quote}}
{{- end }}
{{- if .Values.sessionAffinity }}
enable-session-affinity: {{ .Values.sessionAffinity | quote }}
{{- end }}
{{- if .Values.svcSourceRangeCheck }}
enable-svc-source-range-check: {{ .Values.svcSourceRangeCheck | quote }}

{{- if .Values.bpf.monitorTraceIPOption }}
ip-tracing-option-type: {{ .Values.bpf.monitorTraceIPOption | quote }}
{{- end }}

{{- if hasKey .Values "l2NeighDiscovery" }}
@@ -860,12 +930,16 @@ data:
pprof: {{ .Values.pprof.enabled | quote }}
pprof-address: {{ .Values.pprof.address | quote }}
pprof-port: {{ .Values.pprof.port | quote }}
pprof-mutex-profile-fraction: {{ .Values.pprof.mutexProfileFraction | quote }}
pprof-block-profile-rate: {{ .Values.pprof.blockProfileRate | quote }}
{{- end }}

{{- if .Values.operator.pprof.enabled }}
operator-pprof: {{ .Values.operator.pprof.enabled | quote }}
operator-pprof-address: {{ .Values.operator.pprof.address | quote }}
operator-pprof-port: {{ .Values.operator.pprof.port | quote }}
operator-pprof-mutex-profile-fraction: {{ .Values.operator.pprof.mutexProfileFraction | quote }}
operator-pprof-block-profile-rate: {{ .Values.operator.pprof.blockProfileRate | quote }}
{{- end }}

{{- if .Values.logSystemLoad }}
@@ -973,6 +1047,10 @@ data:
# Capacity of the buffer to store recent events.
hubble-event-buffer-capacity: {{ .Values.hubble.eventBufferCapacity | quote }}
{{- end }}
{{- if hasKey .Values.hubble "lostEventSendInterval" }}
# Interval to send lost events from Observer server.
hubble-lost-event-send-interval: {{ include "validateDuration" .Values.hubble.lostEventSendInterval | quote }}
{{- end }}
{{- if or .Values.hubble.metrics.enabled .Values.hubble.metrics.dynamic.enabled}}
# Address to expose Hubble metrics (e.g. ":7070"). Metrics server will be disabled if this
# field is not set.
@@ -1040,8 +1118,10 @@ data:
hubble-export-file-max-size-mb: {{ .Values.hubble.export.fileMaxSizeMb | default .Values.hubble.export.static.fileMaxSizeMb | quote }}
hubble-export-file-max-backups: {{ .Values.hubble.export.fileMaxBackups | default .Values.hubble.export.static.fileMaxBackups | quote }}
hubble-export-file-compress: {{ .Values.hubble.export.fileCompress | default .Values.hubble.export.static.fileCompress | quote }}
hubble-export-aggregation-interval: {{ include "validateDuration" .Values.hubble.export.aggregationInterval | default .Values.hubble.export.static.aggregationInterval | quote }}
hubble-export-file-path: {{ .Values.hubble.export.static.filePath | quote }}
hubble-export-fieldmask: {{ .Values.hubble.export.static.fieldMask | join " " | quote }}
hubble-export-fieldaggregate: {{ .Values.hubble.export.static.fieldAggregate | join " " | quote }}
hubble-export-allowlist: {{ .Values.hubble.export.static.allowList | join " " | quote }}
hubble-export-denylist: {{ .Values.hubble.export.static.denyList | join " " | quote }}
{{- end }}
@@ -1078,10 +1158,28 @@ data:
disable-iptables-feeder-rules: {{ .Values.disableIptablesFeederRules | join " " | quote }}
{{- end }}
{{- if .Values.aksbyocni.enabled }}
{{- if or (not .Values.ipam.mode) (eq .Values.ipam.mode "cluster-pool") }}
ipam: "cluster-pool"
{{- else if eq .Values.ipam.mode "multi-pool" }}
ipam: "multi-pool"
{{- end }}
{{- else }}
ipam: {{ $ipam | quote }}
{{- end }}
{{- if .Values.ipam.nodeSpec }}
{{- if ne .Values.ipam.nodeSpec.ipamMinAllocate nil }}
ipam-min-allocate: {{ .Values.ipam.nodeSpec.ipamMinAllocate | quote }}
{{- end }}
{{- if ne .Values.ipam.nodeSpec.ipamPreAllocate nil }}
ipam-pre-allocate: {{ .Values.ipam.nodeSpec.ipamPreAllocate | quote }}
{{- end }}
{{- if ne .Values.ipam.nodeSpec.ipamMaxAllocate nil }}
ipam-max-allocate: {{ .Values.ipam.nodeSpec.ipamMaxAllocate | quote }}
{{- end }}
{{- if .Values.ipam.nodeSpec.ipamStaticIPTags }}
ipam-static-ip-tags: {{ .Values.ipam.nodeSpec.ipamStaticIPTags | join "," | quote }}
{{- end }}
{{- end }}
{{- if .Values.ipam.multiPoolPreAllocation }}
ipam-multi-pool-pre-allocation: {{ .Values.ipam.multiPoolPreAllocation | quote }}
{{- end }}
@@ -1092,18 +1190,10 @@ data:

{{- if (eq $ipam "cluster-pool") }}
{{- if .Values.ipv4.enabled }}
{{- if hasKey .Values.ipam.operator "clusterPoolIPv4PodCIDR" }}
{{- /* ipam.operator.clusterPoolIPv4PodCIDR removed in v1.14, remove this failsafe around v1.17 */ -}}
{{- fail "Value ipam.operator.clusterPoolIPv4PodCIDR removed, use ipam.operator.clusterPoolIPv4PodCIDRList instead" }}
{{- end }}
cluster-pool-ipv4-cidr: {{ .Values.ipam.operator.clusterPoolIPv4PodCIDRList | join " " | quote }}
cluster-pool-ipv4-mask-size: {{ .Values.ipam.operator.clusterPoolIPv4MaskSize | quote }}
{{- end }}
{{- if .Values.ipv6.enabled }}
{{- if hasKey .Values.ipam.operator "clusterPoolIPv6PodCIDR" }}
{{- /* ipam.operator.clusterPoolIPv6PodCIDR removed in v1.14, remove this failsafe around v1.17 */ -}}
{{- fail "Value ipam.operator.clusterPoolIPv6PodCIDR removed, use ipam.operator.clusterPoolIPv6PodCIDRList instead" }}
{{- end }}
cluster-pool-ipv6-cidr: {{ .Values.ipam.operator.clusterPoolIPv6PodCIDRList | join " " | quote }}
cluster-pool-ipv6-mask-size: {{ .Values.ipam.operator.clusterPoolIPv6MaskSize | quote }}
{{- end }}
@@ -1172,20 +1262,11 @@ data:
crd-wait-timeout: {{ include "validateDuration" .Values.crdWaitTimeout | quote }}
{{- end }}

{{- if .Values.enableK8sEndpointSlice }}
enable-k8s-endpoint-slice: {{ .Values.enableK8sEndpointSlice | quote }}
{{- end }}

{{- if hasKey .Values.k8s "serviceProxyName" }}
# Configure service proxy name for Cilium.
k8s-service-proxy-name: {{ .Values.k8s.serviceProxyName | quote }}
{{- end }}

{{- if and .Values.customCalls .Values.customCalls.enabled }}
# Enable tail call hooks for custom eBPF programs.
enable-custom-calls: {{ .Values.customCalls.enabled | quote }}
{{- end }}

{{- if .Values.l2announcements.enabled }}
# Enable L2 announcements
enable-l2-announcements: {{ .Values.l2announcements.enabled | quote }}
@@ -1222,6 +1303,8 @@ data:
enable-pmtu-discovery: "true"
{{- end }}

packetization-layer-pmtud-mode: {{ .Values.pmtuDiscovery.packetizationLayerPMTUDMode | quote }}

{{- if not .Values.securityContext.privileged }}
procfs: "/host/proc"
{{- end }}
@@ -1296,11 +1379,20 @@ data:
{{- end }}

{{- if .Values.operator.unmanagedPodWatcher.restart }}
unmanaged-pod-watcher-interval: {{ .Values.operator.unmanagedPodWatcher.intervalSeconds | quote }}
{{- $interval := .Values.operator.unmanagedPodWatcher.intervalSeconds }}
{{- if kindIs "float64" $interval }}
unmanaged-pod-watcher-interval: {{ printf "%ds" (int $interval) | quote }}
{{- else }}
unmanaged-pod-watcher-interval: {{ $interval | quote }}
{{- end }}
{{- else }}
unmanaged-pod-watcher-interval: "0"
{{- end }}

{{- if ne .Values.operator.unmanagedPodWatcher.selector nil }}
pod-restart-selector: {{ .Values.operator.unmanagedPodWatcher.selector }}
{{- end }}

{{- if .Values.dnsProxy }}
{{- if hasKey .Values.dnsProxy "enableTransparentMode" }}
# explicit setting gets precedence
@@ -1370,10 +1462,14 @@ data:
proxy-xff-num-trusted-hops-egress: {{ .Values.envoy.xffNumTrustedHopsL7PolicyEgress | quote }}
proxy-connect-timeout: {{ .Values.envoy.connectTimeoutSeconds | quote }}
proxy-initial-fetch-timeout: {{ .Values.envoy.initialFetchTimeoutSeconds | quote }}
proxy-max-active-downstream-connections: {{ .Values.envoy.maxGlobalDownstreamConnections | quote }}
proxy-max-requests-per-connection: {{ .Values.envoy.maxRequestsPerConnection | quote }}
proxy-max-connection-duration-seconds: {{ .Values.envoy.maxConnectionDurationSeconds | quote }}
proxy-idle-timeout-seconds: {{ .Values.envoy.idleTimeoutDurationSeconds | quote }}
proxy-max-concurrent-retries: {{ .Values.envoy.maxConcurrentRetries | quote }}
proxy-use-original-source-address: {{ .Values.envoy.useOriginalSourceAddress | quote }}
proxy-cluster-max-connections: {{ .Values.envoy.clusterMaxConnections | quote }}
proxy-cluster-max-requests: {{ .Values.envoy.clusterMaxRequests | quote }}
http-retry-count: {{ .Values.envoy.httpRetryCount | quote }}
http-stream-idle-timeout: {{ .Values.envoy.streamIdleTimeoutDurationSeconds | quote }}

@@ -1402,13 +1498,14 @@ data:
{{- if hasKey .Values.clustermesh "maxConnectedClusters" }}
max-connected-clusters: {{ .Values.clustermesh.maxConnectedClusters | quote }}
{{- end }}
clustermesh-cache-ttl: {{ .Values.clustermesh.cacheTTL | quote }}
clustermesh-enable-endpoint-sync: {{ .Values.clustermesh.enableEndpointSliceSynchronization | quote }}
clustermesh-enable-mcs-api: {{ .Values.clustermesh.enableMCSAPISupport | quote }}
clustermesh-enable-mcs-api: {{ (or .Values.clustermesh.mcsapi.enabled .Values.clustermesh.enableMCSAPISupport) | quote }}
clustermesh-mcs-api-install-crds: {{ .Values.clustermesh.mcsapi.installCRDs | quote }}
policy-default-local-cluster: {{ .Values.clustermesh.policyDefaultLocalCluster | quote }}

nat-map-stats-entries: {{ .Values.nat.mapStatsEntries | quote }}
nat-map-stats-interval: {{ .Values.nat.mapStatsInterval | quote }}
enable-internal-traffic-policy: {{ .Values.enableInternalTrafficPolicy | quote }}
enable-lb-ipam: {{ .Values.enableLBIPAM | quote }}
enable-non-default-deny-policies: {{ .Values.enableNonDefaultDenyPolicies | quote }}


@@ -22,10 +22,7 @@ spec:
selector:
matchLabels:
k8s-app: cilium-envoy
{{- with .Values.envoy.updateStrategy }}
updateStrategy:
{{- toYaml . | trim | nindent 4 }}
{{- end }}
{{- include "envoy.updateStrategy" . | nindent 2 }}
template:
metadata:
annotations:
@@ -69,6 +66,7 @@ spec:
securityContext:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- include "envoy.initContainers" . | nindent 6 }}
containers:
- name: cilium-envoy
image: {{ include "cilium.image" .Values.envoy.image | quote }}
@@ -94,6 +92,7 @@ spec:
{{- if .Values.envoy.log.path }}
- '--log-path {{ .Values.envoy.log.path }}'
{{- end }}
{{- include "envoy.args.extra" . | nindent 8 }}
{{- with .Values.envoy.extraArgs }}
{{- toYaml . | trim | nindent 8 }}
{{- end }}
@@ -157,6 +156,7 @@ spec:
- name: KUBERNETES_SERVICE_PORT
value: {{ include "k8sServicePort" . }}
{{- end }}
{{- include "envoy.env.extra" . | nindent 8 }}
{{- with .Values.envoy.extraEnv }}
{{- toYaml . | trim | nindent 8 }}
{{- end }}
@@ -164,19 +164,7 @@ spec:
resources:
{{- toYaml . | trim | nindent 10 }}
|
||||
{{- end }}
|
||||
{{- if .Values.envoy.prometheus.enabled }}
|
||||
ports:
|
||||
- name: envoy-metrics
|
||||
containerPort: {{ .Values.envoy.prometheus.port }}
|
||||
hostPort: {{ .Values.envoy.prometheus.port }}
|
||||
protocol: TCP
|
||||
{{- if and .Values.envoy.debug.admin.enabled .Values.envoy.debug.admin.port }}
|
||||
- name: envoy-admin
|
||||
containerPort: {{ .Values.envoy.debug.admin.port }}
|
||||
hostPort: {{ .Values.envoy.debug.admin.port }}
|
||||
protocol: TCP
|
||||
{{- end }}
|
||||
{{- end }}
|
||||
{{- include "envoy.ports" . }}
|
||||
securityContext:
|
||||
{{- if .Values.envoy.securityContext.privileged }}
|
||||
privileged: true
|
||||
@@ -209,6 +197,7 @@ spec:
|
||||
mountPath: /sys/fs/bpf
|
||||
mountPropagation: HostToContainer
|
||||
{{- end }}
|
||||
{{- include "envoy.volumeMounts.extra" . | nindent 8 }}
|
||||
{{- range .Values.envoy.extraHostPathMounts }}
|
||||
- name: {{ .name }}
|
||||
mountPath: {{ .mountPath }}
|
||||
@@ -232,10 +221,7 @@ spec:
|
||||
{{- if .Values.envoy.dnsPolicy }}
|
||||
dnsPolicy: {{ .Values.envoy.dnsPolicy }}
|
||||
{{- end }}
|
||||
{{- with .Values.envoy.affinity }}
|
||||
affinity:
|
||||
{{- toYaml . | nindent 8 }}
|
||||
{{- end }}
|
||||
{{- include "envoy.affinity" . | nindent 6 }}
|
||||
{{- with .Values.envoy.nodeSelector }}
|
||||
nodeSelector:
|
||||
{{- toYaml . | nindent 8 }}
|
||||
@@ -269,6 +255,7 @@ spec:
|
||||
path: /sys/fs/bpf
|
||||
type: DirectoryOrCreate
|
||||
{{- end }}
|
||||
{{- include "envoy.hostPathMounts.extra" . | nindent 4 }}
|
||||
{{- range .Values.envoy.extraHostPathMounts }}
|
||||
- name: {{ .name }}
|
||||
hostPath:
|
||||
|
||||
@@ -32,5 +32,5 @@ spec:
|
||||
- name: envoy-metrics
|
||||
port: {{ .Values.envoy.prometheus.port }}
|
||||
protocol: TCP
|
||||
targetPort: envoy-metrics
|
||||
targetPort: {{ .Values.envoy.prometheus.port }}
|
||||
{{- end }}
|
||||
|
||||
@@ -49,8 +49,8 @@ spec:
|
||||
externalTrafficPolicy: {{ .Values.ingressController.service.externalTrafficPolicy }}
|
||||
{{- end }}
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Endpoints
|
||||
apiVersion: discovery.k8s.io/v1
|
||||
kind: EndpointSlice
|
||||
metadata:
|
||||
name: {{ .Values.ingressController.service.name }}
|
||||
namespace: {{ include "cilium.namespace" . }}
|
||||
@@ -65,9 +65,10 @@ metadata:
|
||||
annotations:
|
||||
{{- toYaml .Values.ingressController.service.annotations | nindent 4 }}
|
||||
{{- end }}
|
||||
subsets:
|
||||
addressType: IPv4
|
||||
endpoints:
|
||||
- addresses:
|
||||
- ip: "192.192.192.192"
|
||||
ports:
|
||||
- port: 9999
|
||||
- "192.192.192.192"
|
||||
ports:
|
||||
- port: 9999
|
||||
{{- end }}
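The hunk above migrates the ingress placeholder from the legacy `v1` `Endpoints` object to a `discovery.k8s.io/v1` `EndpointSlice`. As a rough sketch of the rendered result (the service name and namespace below are hypothetical values, not taken from the chart):

```yaml
# Hypothetical rendered output, assuming
# ingressController.service.name=cilium-ingress in namespace kube-system
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: cilium-ingress
  namespace: kube-system
addressType: IPv4
endpoints:
  - addresses:
      - "192.192.192.192"   # dummy address, as in the template
ports:
  - port: 9999              # dummy port, as in the template
```

Note that `EndpointSlice` moves the address list to a top-level `endpoints` field and requires an explicit `addressType`, which is why the `subsets:` block is dropped.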

@@ -122,6 +122,8 @@ spec:
{{- if .Values.serviceAccounts.nodeinit.enabled }}
serviceAccountName: {{ .Values.serviceAccounts.nodeinit.name | quote }}
automountServiceAccountToken: {{ .Values.serviceAccounts.nodeinit.automount }}
{{- else }}
automountServiceAccountToken: false
{{- end }}
{{- with .Values.nodeinit.extraVolumes }}
volumes:

@@ -69,7 +69,7 @@ rules:
resources:
- endpointslices
verbs:
{{- if or .Values.clustermesh.enableEndpointSliceSynchronization .Values.clustermesh.enableMCSAPISupport }}
{{- if or .Values.clustermesh.enableEndpointSliceSynchronization .Values.clustermesh.mcsapi.enabled .Values.clustermesh.enableMCSAPISupport }}
- create
- update
- delete
@@ -78,6 +78,17 @@ rules:
- get
- list
- watch
{{- if .Values.clustermesh.enableEndpointSliceSynchronization }}
- apiGroups:
- ""
resources:
# The controller needs to be able to set a service's finalizers to be able to create an EndpointSlice
# resource that is owned by the service and sets blockOwnerDeletion=true in its ownerRef.
# This is required when the admission plugin OwnerReferencesPermissionEnforcement is activated.
- services/finalizers
verbs:
- update
{{- end }}
- apiGroups:
- ""
resources:
@@ -114,6 +125,20 @@ rules:
- delete
- patch
{{- end }}
{{- if or .Values.ingressController.enabled .Values.gatewayAPI.enabled }}
- apiGroups:
- "discovery.k8s.io"
resources:
- endpointslices
verbs:
- get
- list
- watch
- create
- update
- delete
- patch
{{- end }}
{{- if .Values.clustermesh.enableEndpointSliceSynchronization }}
- apiGroups:
- ""
@@ -227,7 +252,6 @@ rules:
- update
resourceNames:
- ciliumloadbalancerippools.cilium.io
- ciliumbgppeeringpolicies.cilium.io
- ciliumbgpclusterconfigs.cilium.io
- ciliumbgppeerconfigs.cilium.io
- ciliumbgpadvertisements.cilium.io
@@ -248,12 +272,15 @@ rules:
- ciliuml2announcementpolicies.cilium.io
- ciliumpodippools.cilium.io
- ciliumgatewayclassconfigs.cilium.io
{{- if and (or .Values.clustermesh.mcsapi.enabled .Values.clustermesh.enableMCSAPISupport) .Values.clustermesh.mcsapi.installCRDs }}
- serviceimports.multicluster.x-k8s.io
- serviceexports.multicluster.x-k8s.io
{{- end }}
- apiGroups:
- cilium.io
resources:
- ciliumloadbalancerippools
- ciliumpodippools
- ciliumbgppeeringpolicies
- ciliumbgpclusterconfigs
- ciliumbgpnodeconfigoverrides
- ciliumbgppeerconfigs
@@ -301,6 +328,10 @@ rules:
- networking.k8s.io
resources:
- ingresses/status # To update ingress status with load balancer IP.
# The controller needs to be able to set ingress finalizers to be able to create a CiliumEnvoyConfig
# resource that is owned by the ingress, and set blockOwnerDeletion=true in its ownerRef.
# This is required when the admission plugin OwnerReferencesPermissionEnforcement is activated.
- ingresses/finalizers
verbs:
- update
{{- end }}
@@ -352,7 +383,7 @@ rules:
- update
- patch
{{- end }}
{{- if or .Values.gatewayAPI.enabled .Values.clustermesh.enableMCSAPISupport }}
{{- if or .Values.gatewayAPI.enabled .Values.clustermesh.mcsapi.enabled .Values.clustermesh.enableMCSAPISupport }}
- apiGroups:
- multicluster.x-k8s.io
resources:
@@ -361,14 +392,14 @@ rules:
- get
- list
- watch
{{- if .Values.clustermesh.enableMCSAPISupport }}
{{- if or .Values.clustermesh.mcsapi.enabled .Values.clustermesh.enableMCSAPISupport }}
- create
- update
- patch
- delete
{{- end }}
{{- end }}
{{- if .Values.clustermesh.enableMCSAPISupport }}
{{- if or .Values.clustermesh.mcsapi.enabled .Values.clustermesh.enableMCSAPISupport }}
- apiGroups:
- multicluster.x-k8s.io
resources:
@@ -401,4 +432,10 @@ rules:
- patch
- delete
{{- end }}
- apiGroups:
- cilium.io
resources:
- ciliumendpointslices
verbs:
- deletecollection
{{- end }}

@@ -98,7 +98,7 @@ spec:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
{{- if or .Values.clustermesh.enableEndpointSliceSynchronization .Values.clustermesh.enableMCSAPISupport }}
{{- if or .Values.clustermesh.enableEndpointSliceSynchronization .Values.clustermesh.mcsapi.enabled .Values.clustermesh.enableMCSAPISupport }}
- name: CILIUM_CLUSTERMESH_CONFIG
value: /var/lib/cilium/clustermesh/
{{- end }}
@@ -170,6 +170,7 @@ spec:
- name: AZURE_RESOURCE_GROUP
value: {{ .Values.azure.resourceGroup }}
{{- end }}
{{- if .Values.azure.clientID }}
- name: AZURE_CLIENT_ID
valueFrom:
secretKeyRef:
@@ -181,11 +182,15 @@ spec:
name: cilium-azure
key: AZURE_CLIENT_SECRET
{{- end }}
{{- end }}
{{- with .Values.operator.extraEnv }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- if .Values.operator.prometheus.enabled }}
ports:
- name: health
containerPort: 9234
hostPort: 9234
{{- if .Values.operator.prometheus.enabled }}
- name: prometheus
containerPort: {{ .Values.operator.prometheus.port }}
{{- if .Values.operator.hostNetwork }}
@@ -199,7 +204,7 @@ spec:
host: {{ .Values.ipv4.enabled | ternary "127.0.0.1" "::1" | quote }}
{{- end }}
path: /healthz
port: 9234
port: health
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
@@ -210,7 +215,7 @@ spec:
host: {{ .Values.ipv4.enabled | ternary "127.0.0.1" "::1" | quote }}
{{- end }}
path: /healthz
port: 9234
port: health
scheme: HTTP
initialDelaySeconds: 0
periodSeconds: 5
@@ -230,7 +235,7 @@ spec:
readOnly: true
{{- end }}
{{- end }}
{{- if or .Values.clustermesh.enableEndpointSliceSynchronization .Values.clustermesh.enableMCSAPISupport }}
{{- if or .Values.clustermesh.enableEndpointSliceSynchronization .Values.clustermesh.mcsapi.enabled .Values.clustermesh.enableMCSAPISupport }}
- name: clustermesh-secrets
mountPath: /var/lib/cilium/clustermesh
readOnly: true
@@ -245,6 +250,11 @@ spec:
mountPath: {{ dir .Values.authentication.mutual.spire.agentSocketPath }}
readOnly: true
{{- end }}
{{- if and .Values.operator.prometheus.enabled .Values.operator.prometheus.tls.enabled }}
- name: prometheus-tls
mountPath: /var/lib/cilium/tls/prometheus
readOnly: true
{{- end }}
{{- range .Values.operator.extraHostPathMounts }}
- name: {{ .name }}
mountPath: {{ .mountPath }}
@@ -256,13 +266,15 @@ spec:
{{- with .Values.operator.extraVolumeMounts }}
{{- toYaml . | nindent 8 }}
{{- end }}
{{- include "cilium-operator.volumeMounts.extra" . | nindent 8 }}
{{- with .Values.operator.resources }}
resources:
{{- toYaml . | trim | nindent 10 }}
{{- end }}
{{- with .Values.operator.securityContext }}
{{- $sc := include "cilium.operator.securityContext" . | trim }}
{{- if $sc }}
securityContext:
{{- toYaml . | trim | nindent 10 }}
{{- $sc | nindent 10 }}
{{- end }}
terminationMessagePolicy: FallbackToLogsOnError
hostNetwork: {{ .Values.operator.hostNetwork }}
@@ -295,23 +307,25 @@ spec:
nodeSelector:
{{- toYaml . | trim | nindent 8 }}
{{- end }}
{{- if and (or .Values.clustermesh.enableEndpointSliceSynchronization .Values.clustermesh.enableMCSAPISupport) .Values.clustermesh.config.enabled (not (and .Values.clustermesh.useAPIServer .Values.clustermesh.apiserver.kvstoremesh.enabled )) }}
{{- if and (or .Values.clustermesh.enableEndpointSliceSynchronization .Values.clustermesh.mcsapi.enabled .Values.clustermesh.enableMCSAPISupport) .Values.clustermesh.config.enabled (not (and .Values.clustermesh.useAPIServer .Values.clustermesh.apiserver.kvstoremesh.enabled )) }}
hostAliases:
{{- range $cluster := .Values.clustermesh.config.clusters }}
{{- range $_, $cluster := (include "clustermesh-clusters" . | fromJson) }}
{{- range $ip := $cluster.ips }}
- ip: {{ $ip }}
hostnames: [ "{{ $cluster.name }}.{{ $.Values.clustermesh.config.domain }}" ]
{{- end }}
{{- end }}
{{- end }}
{{- with .Values.operator.tolerations }}
{{- if or (.Values.operator.tolerations) (hasKey .Values "agentNotReadyTaintKey") }}
tolerations:
{{- with .Values.operator.tolerations }}
{{- toYaml . | trim | nindent 8 }}
{{- end }}
{{- if hasKey .Values "agentNotReadyTaintKey" }}
- key: {{ .Values.agentNotReadyTaintKey }}
operator: Exists
{{ end}}
{{- end}}
volumes:
# To read the configuration from the config map
- name: cilium-config-path
@@ -360,7 +374,7 @@ spec:
{{- with .Values.operator.extraVolumes }}
{{- toYaml . | nindent 6 }}
{{- end }}
{{- if or .Values.clustermesh.enableEndpointSliceSynchronization .Values.clustermesh.enableMCSAPISupport }}
{{- if or .Values.clustermesh.enableEndpointSliceSynchronization .Values.clustermesh.mcsapi.enabled .Values.clustermesh.enableMCSAPISupport }}
# To read the clustermesh configuration
- name: clustermesh-secrets
projected:
@@ -417,4 +431,25 @@ spec:
path: local-etcd-client-ca.crt
{{- end }}
{{- end }}
{{- if and .Values.operator.prometheus.enabled .Values.operator.prometheus.tls.enabled }}
# To read the prometheus configuration
- name: prometheus-tls
projected:
# note: the leading zero means this number is in octal representation: do not remove it
defaultMode: 0400
sources:
- secret:
name: {{ .Values.operator.prometheus.tls.server.existingSecret }}
optional: true
items:
- key: tls.crt
path: server.crt
- key: tls.key
path: server.key
{{- if .Values.operator.prometheus.tls.server.mtls.enabled }}
- key: ca.crt
path: client-ca.crt
{{- end }}
{{- end }}
{{- include "cilium-operator.volumes.extra" . | nindent 6 }}
{{- end }}

@@ -83,3 +83,35 @@ rules:
- update
- patch
{{- end }}

{{- if and .Values.operator.enabled .Values.serviceAccounts.operator.create }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: cilium-operator-ztunnel
namespace: {{ include "cilium.namespace" . }}
{{- with .Values.operator.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
labels:
app.kubernetes.io/part-of: cilium
{{- with .Values.commonLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
rules:
# ZTunnel DaemonSet management permissions
# Note: These permissions must always be granted (not conditional on encryption.type)
# because the controller needs to clean up stale DaemonSets when ztunnel is disabled.
- apiGroups:
- apps
resources:
- daemonsets
verbs:
- create
- delete
- get
- list
- watch
{{- end }}

@@ -77,3 +77,29 @@ subjects:
name: {{ .Values.serviceAccounts.operator.name | quote }}
namespace: {{ include "cilium.namespace" . }}
{{- end }}

{{- if and .Values.operator.enabled .Values.serviceAccounts.operator.create }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: cilium-operator-ztunnel
namespace: {{ include "cilium.namespace" . }}
labels:
app.kubernetes.io/part-of: cilium
{{- with .Values.commonLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.operator.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: cilium-operator-ztunnel
subjects:
- kind: ServiceAccount
name: {{ .Values.serviceAccounts.operator.name | quote }}
namespace: {{ include "cilium.namespace" . }}
{{- end }}

@@ -1,5 +1,6 @@
{{- if .Values.operator.enabled }}
{{- if .Values.azure.enabled }}
{{- if .Values.azure.clientID }}
apiVersion: v1
kind: Secret
metadata:
@@ -19,3 +20,4 @@ data:
AZURE_CLIENT_SECRET: {{ default "" .Values.azure.clientSecret | b64enc | quote }}
{{- end }}
{{- end }}
{{- end }}

@@ -94,7 +94,6 @@ rules:
- cilium.io
resources:
- ciliumloadbalancerippools
- ciliumbgppeeringpolicies
- ciliumbgpnodeconfigs
- ciliumbgpadvertisements
- ciliumbgppeerconfigs

@@ -20,6 +20,9 @@ metadata:
{{- with $.Values.commonLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with $.Values.secretsNamespaceLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
annotations:
{{- with $.Values.secretsNamespaceAnnotations }}
{{- toYaml . | nindent 4 }}

@@ -50,7 +50,7 @@ rules:
- get
- list
- watch
{{- if .Values.clustermesh.enableMCSAPISupport }}
{{- if or .Values.clustermesh.mcsapi.enabled .Values.clustermesh.enableMCSAPISupport }}
- apiGroups:
- multicluster.x-k8s.io
resources:

@@ -137,6 +137,7 @@ spec:
- --advertise-client-urls=https://localhost:2379
- --initial-cluster-token=$(INITIAL_CLUSTER_TOKEN)
- --auto-compaction-retention=1
- --enable-grpc-gateway=false
{{- if .Values.clustermesh.apiserver.metrics.etcd.enabled }}
- --listen-metrics-urls=http://0.0.0.0:{{ .Values.clustermesh.apiserver.metrics.etcd.port }}
- --metrics={{ .Values.clustermesh.apiserver.metrics.etcd.mode }}
@@ -208,12 +209,13 @@ spec:
- --prometheus-serve-addr=:{{ .Values.clustermesh.apiserver.metrics.port }}
- --controller-group-metrics=all
{{- end }}
{{- if .Values.clustermesh.enableMCSAPISupport }}
{{- if or .Values.clustermesh.mcsapi.enabled .Values.clustermesh.enableMCSAPISupport }}
- --clustermesh-enable-mcs-api
{{- end }}
{{- if .Values.ciliumEndpointSlice.enabled }}
- --enable-cilium-endpoint-slice
{{- end }}
{{- include "clustermesh.apiserver.args.extra" . | nindent 8 }}
{{- with .Values.clustermesh.apiserver.extraArgs }}
{{- toYaml . | trim | nindent 8 }}
{{- end }}
@@ -303,12 +305,14 @@ spec:
{{- if hasKey .Values.clustermesh "maxConnectedClusters" }}
- --max-connected-clusters={{ .Values.clustermesh.maxConnectedClusters }}
{{- end }}
- --clustermesh-cache-ttl={{ .Values.clustermesh.cacheTTL }}
- --health-port={{ .Values.clustermesh.apiserver.kvstoremesh.healthPort }}
{{- if .Values.clustermesh.apiserver.metrics.kvstoremesh.enabled }}
- --prometheus-serve-addr=:{{ .Values.clustermesh.apiserver.metrics.kvstoremesh.port }}
- --controller-group-metrics=all
{{- end }}
- --enable-heartbeat={{ eq "true" (include "identityAllocationCRD" .) | ternary "false" "true" }}
{{- include "clustermesh.kvstoremesh.args.extra" . | nindent 8 }}
{{- with .Values.clustermesh.apiserver.kvstoremesh.extraArgs }}
{{- toYaml . | trim | nindent 8 }}
{{- end }}
@@ -505,7 +509,7 @@ spec:
{{- end }}
{{- if and .Values.clustermesh.config.enabled .Values.clustermesh.apiserver.kvstoremesh.enabled }}
hostAliases:
{{- range $cluster := .Values.clustermesh.config.clusters }}
{{- range $_, $cluster := (include "clustermesh-clusters" . | fromJson) }}
{{- range $ip := $cluster.ips }}
- ip: {{ $ip }}
hostnames: [ "{{ $cluster.name }}.{{ $.Values.clustermesh.config.domain }}" ]

@@ -1,4 +1,4 @@
{{- if and .Values.clustermesh.useAPIServer (eq .Values.clustermesh.apiserver.kvstoremesh.kvstoreMode "internal") }}
{{- if and .Values.clustermesh.useAPIServer (eq .Values.clustermesh.apiserver.kvstoremesh.kvstoreMode "internal") (not .Values.clustermesh.apiserver.service.externallyCreated) }}
apiVersion: v1
kind: Service
metadata:

@@ -9,10 +9,18 @@ spec:
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
containers:
- name: certgen
image: {{ include "cilium.image" .Values.certgen.image | quote }}
imagePullPolicy: {{ .Values.certgen.image.pullPolicy }}
securityContext:
capabilities:
drop:
- ALL
allowPrivilegeEscalation: false
{{- with .Values.certgen.resources }}
resources:
{{- toYaml . | nindent 12 }}
@@ -84,7 +92,7 @@ spec:
volumeMounts:
{{- toYaml . | nindent 10 }}
{{- end }}
hostNetwork: true
hostNetwork: false
{{- with .Values.certgen.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
@@ -96,7 +104,6 @@ spec:
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccount: {{ .Values.serviceAccounts.clustermeshcertgen.name | quote }}
serviceAccountName: {{ .Values.serviceAccounts.clustermeshcertgen.name | quote }}
automountServiceAccountToken: {{ .Values.serviceAccounts.clustermeshcertgen.automount }}
{{- with .Values.imagePullSecrets }}
@@ -108,9 +115,11 @@ spec:
volumes:
{{- toYaml . | nindent 6 }}
{{- end }}
affinity:
{{- with .Values.certgen.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
ttlSecondsAfterFinished: {{ .Values.certgen.ttlSecondsAfterFinished }}
{{- with .Values.certgen.ttlSecondsAfterFinished }}
ttlSecondsAfterFinished: {{ . }}
{{- end }}
{{- end }}

@@ -16,6 +16,8 @@ metadata:
{{- end }}
spec:
schedule: {{ .Values.clustermesh.apiserver.tls.auto.schedule | quote }}
successfulJobsHistoryLimit: {{ .Values.certgen.cronJob.successfulJobsHistoryLimit }}
failedJobsHistoryLimit: {{ .Values.certgen.cronJob.failedJobsHistoryLimit }}
concurrencyPolicy: Forbid
jobTemplate:
{{- include "clustermesh-apiserver-generate-certs.job.spec" . | nindent 4 }}

@@ -1,9 +1,18 @@
{{- if and (and .Values.clustermesh.useAPIServer (eq .Values.clustermesh.apiserver.kvstoremesh.kvstoreMode "internal")) .Values.clustermesh.apiserver.tls.auto.enabled (eq .Values.clustermesh.apiserver.tls.auto.method "cronJob") }}
{{/*
Because Kubernetes job specs are immutable, Helm will fail to patch this job if
the spec changes between releases. To avoid breaking the upgrade path, we
generate a name for the job here which is based on the checksum of the spec.
This will cause the name of the job to change if its content changes,
and in turn cause Helm to delete the old job and replace it with a new one.
*/}}
{{- $jobSpec := include "clustermesh-apiserver-generate-certs.job.spec" . -}}
{{- $checkSum := $jobSpec | sha256sum | trunc 10 -}}
---
apiVersion: batch/v1
kind: Job
metadata:
name: clustermesh-apiserver-generate-certs
name: clustermesh-apiserver-generate-certs-{{$checkSum}}
namespace: {{ include "cilium.namespace" . }}
labels:
k8s-app: clustermesh-apiserver-generate-certs
@@ -11,13 +20,14 @@ metadata:
{{- toYaml . | nindent 4 }}
{{- end }}
app.kubernetes.io/part-of: cilium
{{- if or .Values.certgen.annotations.job .Values.clustermesh.annotations }}
annotations:
"helm.sh/hook": post-install,post-upgrade
{{- with .Values.certgen.annotations.job }}
{{- toYaml . | nindent 4 }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.clustermesh.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{ include "clustermesh-apiserver-generate-certs.job.spec" . }}
{{- end }}
{{ $jobSpec }}
{{- end }}
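The checksum-suffix trick above can be illustrated with a sketch of the rendered metadata (the hash value below is made up for illustration, not computed from the chart):

```yaml
# Hypothetical rendered Job name, assuming the rendered job spec
# hashes (sha256, truncated to 10 hex chars) to "3f9c2a1b0d":
apiVersion: batch/v1
kind: Job
metadata:
  name: clustermesh-apiserver-generate-certs-3f9c2a1b0d
```

Any change to the job spec yields a different suffix, so on upgrade Helm creates a new Job under the new name instead of attempting to patch the immutable spec of the old one.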

@@ -38,7 +38,6 @@ rules:
- clustermesh-apiserver-admin-cert
- clustermesh-apiserver-remote-cert
- clustermesh-apiserver-local-cert
- clustermesh-apiserver-client-cert
verbs:
- update
{{- end }}

@@ -8,12 +8,16 @@ kind: Secret
metadata:
name: clustermesh-apiserver-admin-cert
namespace: {{ include "cilium.namespace" . }}
{{- with .Values.commonLabels }}
labels:
{{- with .Values.commonLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.clustermesh.annotations }}
cilium.io/helm-template-non-idempotent: "true"
annotations:
{{- with .Values.clustermesh.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.nonIdempotentAnnotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
type: kubernetes.io/tls

@@ -8,12 +8,16 @@ kind: Secret
metadata:
name: clustermesh-apiserver-local-cert
namespace: {{ include "cilium.namespace" . }}
{{- with .Values.commonLabels }}
labels:
{{- with .Values.commonLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.clustermesh.annotations }}
cilium.io/helm-template-non-idempotent: "true"
annotations:
{{- with .Values.clustermesh.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.nonIdempotentAnnotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
type: kubernetes.io/tls

@@ -8,12 +8,16 @@ kind: Secret
metadata:
name: clustermesh-apiserver-remote-cert
namespace: {{ include "cilium.namespace" . }}
{{- with .Values.commonLabels }}
labels:
{{- with .Values.commonLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.clustermesh.annotations }}
cilium.io/helm-template-non-idempotent: "true"
annotations:
{{- with .Values.clustermesh.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.nonIdempotentAnnotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
type: kubernetes.io/tls

@@ -10,12 +10,16 @@ kind: Secret
metadata:
name: clustermesh-apiserver-server-cert
namespace: {{ include "cilium.namespace" . }}
{{- with .Values.commonLabels }}
labels:
{{- with .Values.commonLabels }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.clustermesh.annotations }}
cilium.io/helm-template-non-idempotent: "true"
annotations:
{{- with .Values.clustermesh.annotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
{{- with .Values.nonIdempotentAnnotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
type: kubernetes.io/tls

@@ -1,5 +1,5 @@
{{- if and
(and .Values.clustermesh.useAPIServer (eq .Values.clustermesh.apiserver.kvstoremesh.kvstoreMode "internal") (eq "true" (include "identityAllocationCRD" .)))
(and .Values.clustermesh.useAPIServer .Values.clustermesh.config.enabled (eq .Values.clustermesh.apiserver.kvstoremesh.kvstoreMode "internal") (eq "true" (include "identityAllocationCRD" .)))
(ne .Values.clustermesh.apiserver.tls.authMode "legacy")
}}
---
@@ -21,7 +21,7 @@ metadata:
data:
users.yaml: |
users:
{{- range .Values.clustermesh.config.clusters }}
{{- range (include "clustermesh-clusters" . | fromJson) }}
- name: remote-{{ .name }}
role: remote
{{- end }}

@@ -43,3 +43,26 @@ key-file: /var/lib/cilium/clustermesh/{{ $prefix }}etcd-client.key
cert-file: /var/lib/cilium/clustermesh/{{ $prefix }}etcd-client.crt
{{- end }}
{{- end }}

{{- define "clustermesh-clusters" }}
{{- $clusters := dict }}
{{- if kindIs "map" .Values.clustermesh.config.clusters }}
{{- range $name, $cluster := deepCopy .Values.clustermesh.config.clusters }}
{{- if ne $cluster.enabled false }}
{{- $_ := unset $cluster "enabled" }}
{{- $_ = set $cluster "name" $name }}
{{- $_ = set $clusters $name $cluster }}
{{- end }}
{{- end }}
{{- else if kindIs "slice" .Values.clustermesh.config.clusters }}
{{- range $cluster := deepCopy .Values.clustermesh.config.clusters }}
{{- if ne $cluster.enabled false }}
{{- $_ := unset $cluster "enabled" }}
{{- $_ := set $clusters $cluster.name $cluster }}
{{- end }}
{{- end }}
{{- else }}
{{- fail (printf "unknown type %s for clustermesh.config.clusters" (kindOf .Values.clustermesh.config.clusters)) }}
{{- end }}
{{- toJson $clusters }}
{{- end }}
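The `clustermesh-clusters` helper above accepts `clustermesh.config.clusters` either as a list (the legacy form) or as a map keyed by cluster name, normalizing both to a JSON dict and dropping entries with `enabled: false`. A minimal sketch of the two equivalent value forms (cluster names and IPs are made up):

```yaml
# Legacy slice form
clustermesh:
  config:
    clusters:
      - name: cluster-b
        ips: ["10.0.0.2"]
---
# Map form; the key becomes the cluster name, and
# entries with enabled: false are skipped by the helper
clustermesh:
  config:
    clusters:
      cluster-b:
        ips: ["10.0.0.2"]
        enabled: true
```

Keying the map form by name is what lets a later values layer disable a single cluster by setting `clusters.cluster-b.enabled: false` without restating the whole list.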
|
||||
|
||||
@@ -18,7 +18,7 @@ data:
{{- $kvstoremesh := and .Values.clustermesh.useAPIServer .Values.clustermesh.apiserver.kvstoremesh.enabled }}
{{- $override := ternary (printf "https://clustermesh-apiserver.%s.svc:2379" (include "cilium.namespace" .)) "" $kvstoremesh }}
{{- $local_etcd := and $kvstoremesh (eq .Values.clustermesh.apiserver.kvstoremesh.kvstoreMode "external") }}
{{- range .Values.clustermesh.config.clusters }}
{{- range (include "clustermesh-clusters" . | fromJson) }}
{{ .name }}: {{ include "clustermesh-config-generate-etcd-cfg" (list . $.Values.clustermesh.config.domain $override $local_etcd $.Values.etcd ) | b64enc }}
{{- /* The parentheses around .tls are required, since it can be null: https://stackoverflow.com/a/68807258 */}}
{{- if and (eq $override "") (.tls).cert (.tls).key }}

@@ -15,7 +15,7 @@ metadata:
{{- toYaml . | nindent 4 }}
{{- end }}
data:
{{- range .Values.clustermesh.config.clusters }}
{{- range (include "clustermesh-clusters" . | fromJson) }}
{{ .name }}: {{ include "clustermesh-config-generate-etcd-cfg" (list . $.Values.clustermesh.config.domain "" false $.Values.etcd ) | b64enc }}
{{- /* The parentheses around .tls are required, since it can be null: https://stackoverflow.com/a/68807258 */}}
{{- if and (.tls).cert (.tls).key }}

@@ -0,0 +1,29 @@
{{- if and (or .Values.clustermesh.mcsapi.enabled .Values.clustermesh.enableMCSAPISupport) .Values.clustermesh.mcsapi.corednsAutoConfigure.enabled }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: coredns-mcsapi
  labels:
    app.kubernetes.io/part-of: cilium
    {{- with .Values.commonLabels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
  annotations:
    {{/*
    We have to leave the CoreDNS RBAC in place so that CoreDNS can still read
    MCS-API resources; otherwise we would leave behind a broken CoreDNS installation.
    */}}
    helm.sh/resource-policy: keep
    {{- with .Values.clustermesh.annotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
rules:
- apiGroups:
  - multicluster.x-k8s.io
  resources:
  - serviceimports
  verbs:
  - list
  - watch
{{- end }}
@@ -0,0 +1,28 @@
{{- if and (or .Values.clustermesh.mcsapi.enabled .Values.clustermesh.enableMCSAPISupport) .Values.clustermesh.mcsapi.corednsAutoConfigure.enabled }}
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: coredns-mcsapi
  labels:
    app.kubernetes.io/part-of: cilium
    {{- with .Values.commonLabels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
  annotations:
    {{/*
    We have to leave the CoreDNS RBAC in place so that CoreDNS can still read
    MCS-API resources; otherwise we would leave behind a broken CoreDNS installation.
    */}}
    helm.sh/resource-policy: keep
    {{- with .Values.clustermesh.annotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: coredns-mcsapi
subjects:
- kind: ServiceAccount
  name: {{ .Values.clustermesh.mcsapi.corednsAutoConfigure.coredns.serviceAccountName | quote }}
  namespace: {{ .Values.clustermesh.mcsapi.corednsAutoConfigure.coredns.namespace | quote }}
{{- end }}
@@ -0,0 +1,22 @@
{{- if and (or .Values.clustermesh.mcsapi.enabled .Values.clustermesh.enableMCSAPISupport) .Values.clustermesh.mcsapi.corednsAutoConfigure.enabled }}
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cilium-coredns-mcsapi-autoconfig
  {{- with .Values.commonLabels }}
  labels:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- with .Values.clustermesh.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
rules:
# note: namespaces permissions are needed to initialize and verify that the kubernetes client works.
- apiGroups:
  - ""
  resources:
  - "namespaces"
  verbs:
  - "get"
{{- end }}
@@ -0,0 +1,22 @@
{{- if and (or .Values.clustermesh.mcsapi.enabled .Values.clustermesh.enableMCSAPISupport) .Values.clustermesh.mcsapi.corednsAutoConfigure.enabled }}
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cilium-coredns-mcsapi-autoconfig
  {{- with .Values.commonLabels }}
  labels:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- with .Values.clustermesh.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cilium-coredns-mcsapi-autoconfig
subjects:
- kind: ServiceAccount
  name: {{ .Values.serviceAccounts.corednsMCSAPI.name | quote }}
  namespace: {{ include "cilium.namespace" . }}
{{- end }}
@@ -0,0 +1,36 @@
{{- if and (or .Values.clustermesh.mcsapi.enabled .Values.clustermesh.enableMCSAPISupport) .Values.clustermesh.mcsapi.corednsAutoConfigure.enabled }}
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cilium-coredns-mcsapi-autoconfig
  namespace: {{ .Values.clustermesh.mcsapi.corednsAutoConfigure.coredns.namespace }}
  {{- with .Values.commonLabels }}
  labels:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- with .Values.clustermesh.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
rules:
- apiGroups:
  - ""
  resources:
  - "configmaps"
  verbs:
  - "update"
  - "patch"
  - "get"
  resourceNames:
  - "{{ .Values.clustermesh.mcsapi.corednsAutoConfigure.coredns.configMapName }}"
- apiGroups:
  - "apps"
  resources:
  - "deployments"
  verbs:
  - "patch"
  - "update"
  - "get"
  resourceNames:
  - "{{ .Values.clustermesh.mcsapi.corednsAutoConfigure.coredns.deploymentName }}"
{{- end }}
@@ -0,0 +1,23 @@
{{- if and (or .Values.clustermesh.mcsapi.enabled .Values.clustermesh.enableMCSAPISupport) .Values.clustermesh.mcsapi.corednsAutoConfigure.enabled }}
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cilium-coredns-mcsapi-autoconfig
  namespace: {{ .Values.clustermesh.mcsapi.corednsAutoConfigure.coredns.namespace }}
  {{- with .Values.commonLabels }}
  labels:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- with .Values.clustermesh.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cilium-coredns-mcsapi-autoconfig
subjects:
- kind: ServiceAccount
  name: {{ .Values.serviceAccounts.corednsMCSAPI.name | quote }}
  namespace: {{ include "cilium.namespace" . }}
{{- end }}
@@ -0,0 +1,20 @@
{{- if and (or .Values.clustermesh.mcsapi.enabled .Values.clustermesh.enableMCSAPISupport) .Values.clustermesh.mcsapi.corednsAutoConfigure.enabled .Values.serviceAccounts.corednsMCSAPI.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Values.serviceAccounts.corednsMCSAPI.name | quote }}
  namespace: {{ include "cilium.namespace" . }}
  {{- with .Values.commonLabels }}
  labels:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- if or .Values.serviceAccounts.corednsMCSAPI.annotations .Values.clustermesh.annotations }}
  annotations:
    {{- with .Values.clustermesh.annotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
    {{- with .Values.serviceAccounts.corednsMCSAPI.annotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
  {{- end }}
{{- end }}
@@ -0,0 +1,82 @@
{{- if and (or .Values.clustermesh.mcsapi.enabled .Values.clustermesh.enableMCSAPISupport) .Values.clustermesh.mcsapi.corednsAutoConfigure.enabled }}
---
apiVersion: batch/v1
kind: Job
metadata:
  name: cilium-coredns-mcsapi-autoconfig
  namespace: {{ include "cilium.namespace" . }}
  labels:
    k8s-app: cilium-coredns-mcsapi-autoconfig
    {{- with .Values.commonLabels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
    app.kubernetes.io/part-of: cilium
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    {{- with .Values.clustermesh.annotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
    {{- with .Values.clustermesh.mcsapi.corednsAutoConfigure.annotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
spec:
  template:
    metadata:
      labels:
        k8s-app: cilium-coredns-mcsapi-autoconfig
        {{- with .Values.clustermesh.mcsapi.corednsAutoConfigure.podLabels }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
    spec:
      containers:
      - name: autoconfig
        image: {{ include "cilium.image" .Values.clustermesh.apiserver.image | quote }}
        imagePullPolicy: {{ .Values.clustermesh.apiserver.image.pullPolicy }}
        {{- with .Values.clustermesh.mcsapi.corednsAutoConfigure.resources }}
        resources:
          {{- toYaml . | nindent 10 }}
        {{- end }}
        command:
        - /usr/bin/clustermesh-apiserver
        args:
        - mcsapi-coredns-cfg
        - --coredns-deployment-name={{ .Values.clustermesh.mcsapi.corednsAutoConfigure.coredns.deploymentName }}
        - --coredns-configmap-name={{ .Values.clustermesh.mcsapi.corednsAutoConfigure.coredns.configMapName }}
        - --coredns-namespace={{ .Values.clustermesh.mcsapi.corednsAutoConfigure.coredns.namespace }}
        - --coredns-cluster-domain={{ .Values.clustermesh.mcsapi.corednsAutoConfigure.coredns.clusterDomain }}
        - --coredns-clusterset-domain={{ .Values.clustermesh.mcsapi.corednsAutoConfigure.coredns.clustersetDomain }}
        {{- with .Values.clustermesh.mcsapi.corednsAutoConfigure.extraArgs }}
        {{- toYaml . | trim | nindent 12 }}
        {{- end }}
        {{- with .Values.clustermesh.mcsapi.corednsAutoConfigure.extraVolumeMounts }}
        volumeMounts:
          {{- toYaml . | nindent 10 }}
        {{- end }}
      {{- with .Values.clustermesh.mcsapi.corednsAutoConfigure.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- if .Values.clustermesh.mcsapi.corednsAutoConfigure.priorityClassName }}
      priorityClassName: {{ .Values.clustermesh.mcsapi.corednsAutoConfigure.priorityClassName }}
      {{- end }}
      {{- with .Values.clustermesh.mcsapi.corednsAutoConfigure.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ .Values.serviceAccounts.corednsMCSAPI.name | quote }}
      automountServiceAccountToken: true
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      restartPolicy: OnFailure
      {{- with .Values.clustermesh.mcsapi.corednsAutoConfigure.extraVolumes }}
      volumes:
        {{- toYaml . | nindent 6 }}
      {{- end }}
      {{- with .Values.clustermesh.mcsapi.corednsAutoConfigure.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
  ttlSecondsAfterFinished: {{ .Values.clustermesh.mcsapi.corednsAutoConfigure.ttlSecondsAfterFinished }}
{{- end }}
@@ -29,6 +29,8 @@ data:
  pprof: {{ .Values.hubble.relay.pprof.enabled | quote }}
  pprof-address: {{ .Values.hubble.relay.pprof.address | quote }}
  pprof-port: {{ .Values.hubble.relay.pprof.port | quote }}
  pprof-mutex-profile-fraction: {{ .Values.hubble.relay.pprof.mutexProfileFraction | quote }}
  pprof-block-profile-rate: {{ .Values.hubble.relay.pprof.blockProfileRate | quote }}
  {{- end }}
  {{- if .Values.hubble.relay.prometheus.enabled }}
  metrics-listen-address: ":{{ .Values.hubble.relay.prometheus.port }}"
@@ -44,4 +46,10 @@ data:
  disable-client-tls: true
  {{- end }}
  {{- include "hubble-relay.config.tls" . | nindent 4 }}
  {{- if .Values.hubble.relay.logOptions.format }}
  log-format: {{ .Values.hubble.relay.logOptions.format }}
  {{- end }}
  {{- if .Values.hubble.relay.logOptions.level }}
  log-level: {{ .Values.hubble.relay.logOptions.level }}
  {{- end }}
{{- end }}

@@ -14,14 +14,6 @@ metadata:
{{- end }}

rules:
- apiGroups:
  - networking.k8s.io
  resources:
  - networkpolicies
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
@@ -43,12 +35,4 @@ rules:
  - get
  - list
  - watch
- apiGroups:
  - cilium.io
  resources:
  - "*"
  verbs:
  - get
  - list
  - watch
{{- end }}

@@ -74,11 +74,11 @@ spec:
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8081
            port: http
        readinessProbe:
          httpGet:
            path: /
            port: 8081
            port: http
        {{- with .Values.hubble.ui.frontend.resources }}
        resources:
          {{- toYaml . | trim | nindent 10 }}
@@ -184,8 +184,12 @@ spec:
          defaultMode: 420
          name: hubble-ui-nginx
          name: hubble-ui-nginx-conf
      - emptyDir: {}
        name: tmp-dir
      - name: tmp-dir
        {{- if .Values.hubble.ui.tmpVolume }}
        {{- toYaml .Values.hubble.ui.tmpVolume | nindent 8 }}
        {{- else }}
        emptyDir: {}
        {{- end }}
      {{- if .Values.hubble.relay.tls.server.enabled }}
      - name: hubble-ui-client-certs
      {{- if .Values.hubble.ui.standalone.enabled }}

@@ -137,7 +137,6 @@ spec:
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccount: {{ .Values.serviceAccounts.hubblecertgen.name | quote }}
      serviceAccountName: {{ .Values.serviceAccounts.hubblecertgen.name | quote }}
      automountServiceAccountToken: {{ .Values.serviceAccounts.hubblecertgen.automount }}
      {{- with .Values.imagePullSecrets }}
@@ -149,9 +148,11 @@ spec:
      volumes:
        {{- toYaml . | nindent 6 }}
      {{- end }}
      affinity:
      {{- with .Values.certgen.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
  ttlSecondsAfterFinished: {{ .Values.certgen.ttlSecondsAfterFinished }}
  {{- with .Values.certgen.ttlSecondsAfterFinished }}
  ttlSecondsAfterFinished: {{ . }}
  {{- end }}
{{- end }}

@@ -23,6 +23,8 @@ metadata:
  {{- end }}
spec:
  schedule: {{ .Values.hubble.tls.auto.schedule | quote }}
  successfulJobsHistoryLimit: {{ .Values.certgen.cronJob.successfulJobsHistoryLimit }}
  failedJobsHistoryLimit: {{ .Values.certgen.cronJob.failedJobsHistoryLimit }}
  concurrencyPolicy: Forbid
  jobTemplate:
    {{- include "hubble-generate-certs.job.spec" . | nindent 4 }}

@@ -1,9 +1,18 @@
{{- if and .Values.hubble.enabled .Values.hubble.tls.enabled .Values.hubble.tls.auto.enabled (eq .Values.hubble.tls.auto.method "cronJob") }}
{{/*
Because Kubernetes job specs are immutable, Helm will fail to patch this job if
the spec changes between releases. To avoid breaking the upgrade path, we
generate a name for the job here which is based on the checksum of the spec.
This will cause the name of the job to change if its content changes,
and in turn cause Helm to delete the old job and replace it with a new one.
*/}}
{{- $jobSpec := include "hubble-generate-certs.job.spec" . -}}
{{- $checkSum := $jobSpec | sha256sum | trunc 10 -}}
---
apiVersion: batch/v1
kind: Job
metadata:
  name: hubble-generate-certs
  name: hubble-generate-certs-{{$checkSum}}
  namespace: {{ include "cilium.namespace" . }}
  labels:
    k8s-app: hubble-generate-certs
@@ -12,13 +21,14 @@ metadata:
    {{- with .Values.commonLabels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
  {{- if or .Values.certgen.annotations.job .Values.hubble.annotations }}
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    {{- with .Values.certgen.annotations.job }}
    {{- toYaml . | nindent 4 }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
    {{- with .Values.hubble.annotations }}
    {{- toYaml . | nindent 4 }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
{{ include "hubble-generate-certs.job.spec" . }}
{{- end }}
{{ $jobSpec }}
{{- end }}

@@ -10,13 +10,17 @@ kind: Secret
metadata:
  name: hubble-metrics-server-certs
  namespace: {{ include "cilium.namespace" . }}
  {{- with .Values.commonLabels }}
  labels:
    {{- with .Values.commonLabels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
    cilium.io/helm-template-non-idempotent: "true"

  {{- with .Values.hubble.annotations }}
  annotations:
    {{- with .Values.hubble.annotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
    {{- with .Values.nonIdempotentAnnotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
type: kubernetes.io/tls

@@ -17,13 +17,17 @@ kind: Secret
metadata:
  name: hubble-relay-client-certs
  namespace: {{ include "cilium.namespace" . }}
  {{- with .Values.commonLabels }}
  labels:
    {{- with .Values.commonLabels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
    cilium.io/helm-template-non-idempotent: "true"

  {{- with .Values.hubble.annotations }}
  annotations:
    {{- with .Values.hubble.annotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
    {{- with .Values.nonIdempotentAnnotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
type: kubernetes.io/tls

@@ -10,13 +10,17 @@ kind: Secret
metadata:
  name: hubble-relay-server-certs
  namespace: {{ include "cilium.namespace" . }}
  {{- with .Values.commonLabels }}
  labels:
    {{- with .Values.commonLabels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
    cilium.io/helm-template-non-idempotent: "true"

  {{- with .Values.hubble.annotations }}
  annotations:
    {{- with .Values.hubble.annotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
    {{- with .Values.nonIdempotentAnnotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
type: kubernetes.io/tls

@@ -18,13 +18,17 @@ kind: Secret
metadata:
  name: hubble-server-certs
  namespace: {{ include "cilium.namespace" . }}
  {{- with .Values.commonLabels }}
  labels:
    {{- with .Values.commonLabels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
    cilium.io/helm-template-non-idempotent: "true"

  {{- with .Values.hubble.annotations }}
  annotations:
    {{- with .Values.hubble.annotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
    {{- with .Values.nonIdempotentAnnotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
type: kubernetes.io/tls

@@ -10,13 +10,17 @@ metadata:
  name: hubble-ui-client-certs
  namespace: {{ include "cilium.namespace" . }}

  {{- with .Values.commonLabels }}
  labels:
    {{- with .Values.commonLabels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
    cilium.io/helm-template-non-idempotent: "true"

  {{- with .Values.hubble.annotations }}
  annotations:
    {{- with .Values.hubble.annotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
    {{- with .Values.nonIdempotentAnnotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
type: kubernetes.io/tls

@@ -0,0 +1,28 @@
{{- if .Values.standaloneDnsProxy.enabled }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: standalone-dns-proxy-config
  namespace: {{ include "cilium.namespace" . }}
  {{- with .Values.commonLabels }}
  labels:
    {{- toYaml . | nindent 4 }}
  {{- end }}

  {{- with .Values.standaloneDnsProxy.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
data:
  # Use the same L7 proxy and DNS settings as the agent for consistency
  enable-l7-proxy: {{ .Values.l7Proxy | quote }}
  debug: {{ .Values.standaloneDnsProxy.debug | quote }}
  enable-standalone-dns-proxy: {{ .Values.standaloneDnsProxy.enabled | quote }}
  enable-ipv4: {{ .Values.ipv4.enabled | quote }}
  enable-ipv6: {{ .Values.ipv6.enabled | quote }}
  standalone-dns-proxy-server-port: {{ .Values.standaloneDnsProxy.serverPort | quote }}
  # DNS proxy configuration inherited from agent settings
  tofqdns-proxy-port: {{ .Values.dnsProxy.proxyPort | quote }}
  tofqdns-enable-dns-compression: {{ .Values.dnsProxy.enableDnsCompression | quote }}
{{- end }}
@@ -0,0 +1,80 @@
{{- if .Values.standaloneDnsProxy.enabled }}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: standalone-dns-proxy
  namespace: {{ include "cilium.namespace" . }}
  {{- with .Values.standaloneDnsProxy.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  labels:
    k8s-app: standalone-dns-proxy
    app.kubernetes.io/part-of: cilium
    app.kubernetes.io/name: standalone-dns-proxy
    name: standalone-dns-proxy
    {{- with .Values.commonLabels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
spec:
  minReadySeconds: 5
  {{- with .Values.standaloneDnsProxy.updateStrategy }}
  updateStrategy:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  selector:
    matchLabels:
      k8s-app: standalone-dns-proxy
  template:
    metadata:
      annotations:
        {{- if .Values.standaloneDnsProxy.rollOutPods }}
        # ensure pods roll when configmap updates
        cilium.io/standalone-dns-proxy-configmap-checksum: {{ include (print $.Template.BasePath "/standalone-dns-proxy/configmap.yaml") . | sha256sum | quote }}
        {{- end }}
        container.apparmor.security.beta.kubernetes.io/standalone-dns-proxy: "unconfined"
      labels:
        k8s-app: standalone-dns-proxy
        name: standalone-dns-proxy
        app.kubernetes.io/name: standalone-dns-proxy
        app.kubernetes.io/part-of: cilium
        {{- with .Values.commonLabels }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
    spec:
      hostNetwork: true
      automountServiceAccountToken: {{ .Values.standaloneDnsProxy.automountServiceAccountToken }}
      {{- with .Values.standaloneDnsProxy.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      tolerations:
      - operator: Exists
      {{- with .Values.standaloneDnsProxy.tolerations }}
      {{- toYaml . | nindent 8 }}
      {{- end }}
      containers:
      - name: standalone-dns-proxy
        image: {{ include "cilium.image" .Values.standaloneDnsProxy.image | quote }}
        args:
        - --config-dir=/tmp/standalone-dns-proxy/config-map
        imagePullPolicy: {{ .Values.standaloneDnsProxy.image.pullPolicy }}
        volumeMounts:
        - mountPath: /tmp/standalone-dns-proxy/config-map
          name: standalone-dns-proxy-config-path
          readOnly: true
        - mountPath: /var/run/standalone-dns-proxy
          name: runtime-dir
        securityContext:
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
            drop: ["ALL"]
      volumes:
      - configMap:
          defaultMode: 420
          name: standalone-dns-proxy-config
        name: standalone-dns-proxy-config-path
      - emptyDir: {}
        name: runtime-dir
{{- end }}
@@ -220,3 +220,22 @@
{{- end }}
{{- end }}
{{- end }}

{{/* validate Standalone DNS Proxy */}}
{{- if .Values.standaloneDnsProxy.enabled }}
{{- if not .Values.dnsProxy.proxyPort }}
{{ fail "standaloneDnsProxy requires dnsProxy.proxyPort to be explicitly set (e.g., 10094)" }}
{{- end }}
{{- if eq (int .Values.dnsProxy.proxyPort) 0 }}
{{ fail "standaloneDnsProxy requires dnsProxy.proxyPort to be set to a non-zero value (e.g., 10094). The standalone DNS proxy uses the same DNS configuration as the agent." }}
{{- end }}
{{- end }}

{{/* validate we don't run tproxy with netkit - see GH issue 39892 */}}
{{- if hasKey .Values "bpf" }}
{{- if and (hasKey .Values.bpf "tproxy") (hasKey .Values.bpf "datapathMode") }}
{{- if and (.Values.bpf.tproxy) (list "netkit" "netkit-l2" | has .Values.bpf.datapathMode) }}
{{ fail ".Values.bpf.tproxy cannot be enabled with .Values.bpf.datapathMode=netkit or .Values.bpf.datapathMode=netkit-l2" }}
{{- end }}
{{- end }}
{{- end }}

||||
@@ -58,6 +58,27 @@
"properties": {
"enabled": {
"type": "boolean"
},
"nodeSpec": {
"properties": {
"securityGroupTags": {
"items": {},
"type": "array"
},
"securityGroups": {
"items": {},
"type": "array"
},
"vSwitchTags": {
"items": {},
"type": "array"
},
"vSwitches": {
"items": {},
"type": "array"
}
},
"type": "object"
}
},
"type": "object"
@@ -440,6 +461,14 @@
"properties": {
"enabled": {
"type": "boolean"
},
"nodeSpec": {
"properties": {
"azureInterfaceName": {
"type": "string"
}
},
"type": "object"
}
},
"type": "object"
@@ -644,6 +673,14 @@
"monitorInterval": {
"type": "string"
},
"monitorTraceIPOption": {
"minimum": 0,
"maximum": 255,
"type": [
"null",
"integer"
]
},
"natMax": {
"type": [
"null",
@@ -668,6 +705,12 @@
"integer"
]
},
"policyMapPressureMetricsThreshold": {
"type": [
"null",
"number"
]
},
"policyStatsMapMax": {
"type": [
"null",
@@ -714,6 +757,17 @@
},
"type": "object"
},
"cronJob": {
"properties": {
"failedJobsHistoryLimit": {
"type": "integer"
},
"successfulJobsHistoryLimit": {
"type": "integer"
}
},
"type": "object"
},
"extraVolumeMounts": {
"items": {},
"type": "array"
@@ -768,7 +822,10 @@
"type": "array"
},
"ttlSecondsAfterFinished": {
"type": "integer"
"type": [
"null",
"integer"
]
}
},
"type": "object"
@@ -1299,6 +1356,9 @@
"Cluster"
]
},
"externallyCreated": {
"type": "boolean"
},
"internalTrafficPolicy": {
"enum": [
"Local",
@@ -1369,17 +1429,6 @@
},
"type": "object"
},
"client": {
"properties": {
"cert": {
"type": "string"
},
"key": {
"type": "string"
}
},
"type": "object"
},
"enableSecrets": {
"type": "boolean"
},
@@ -1452,11 +1501,17 @@
},
"type": "object"
},
"cacheTTL": {
"type": "string"
},
"config": {
"properties": {
"clusters": {
"items": {},
"type": "array"
"type": [
"object",
"array"
]
},
"domain": {
"type": "string"
@@ -1476,6 +1531,85 @@
"maxConnectedClusters": {
"type": "integer"
},
"mcsapi": {
"properties": {
"corednsAutoConfigure": {
"properties": {
"affinity": {
"type": "object"
},
"annotations": {
"type": "object"
},
"coredns": {
"properties": {
"clusterDomain": {
"type": "string"
},
"clustersetDomain": {
"type": "string"
},
"configMapName": {
"type": "string"
},
"deploymentName": {
"type": "string"
},
"namespace": {
"type": "string"
},
"serviceAccountName": {
"type": "string"
}
},
"type": "object"
},
"enabled": {
"type": "boolean"
},
"extraArgs": {
"items": {},
"type": "array"
},
"extraVolumeMounts": {
"items": {},
"type": "array"
},
"extraVolumes": {
"items": {},
"type": "array"
},
"nodeSelector": {
"type": "object"
},
"podLabels": {
"type": "object"
},
"priorityClassName": {
"type": "string"
},
"resources": {
"type": "object"
},
"tolerations": {
"items": {},
"type": "array"
},
"ttlSecondsAfterFinished": {
"type": "integer"
}
},
"type": "object"
},
"enabled": {
"type": "boolean"
},
"installCRDs": {
"type": "boolean"
}
},
"type": "object"
},
"policyDefaultLocalCluster": {
"type": "boolean"
},
@@ -1537,10 +1671,27 @@
},
"resources": {
"properties": {
"limits": {
"properties": {
"cpu": {
"type": [
"integer",
"string"
]
},
"memory": {
"type": "string"
}
},
"type": "object"
},
"requests": {
"properties": {
"cpu": {
"type": "string"
"type": [
"integer",
"string"
]
},
"memory": {
"type": "string"
@@ -1578,14 +1729,6 @@
"crdWaitTimeout": {
"type": "string"
},
"customCalls": {
"properties": {
"enabled": {
"type": "boolean"
}
},
"type": "object"
},
"daemon": {
"properties": {
"allowedConfigOverrides": {
@@ -1749,9 +1892,15 @@
"enableMasqueradeRouteSource": {
"type": "boolean"
},
"enableNoServiceEndpointsRoutable": {
"type": "boolean"
},
"enableNonDefaultDenyPolicies": {
"type": "boolean"
},
"enableTunnelBIGTCP": {
"type": "boolean"
},
"enableXTSocketFallback": {
"type": "boolean"
},
@@ -1797,8 +1946,30 @@
"cidr": {
"type": "string"
},
"egress": {
"properties": {
"allowRemoteNodeIdentities": {
"type": "boolean"
},
"cidr": {
"type": "string"
},
"enabled": {
"type": "boolean"
}
},
"type": "object"
},
"enabled": {
"type": "boolean"
},
"ingress": {
"properties": {
"enabled": {
"type": "boolean"
}
},
"type": "object"
}
},
"type": "object"
@@ -1869,6 +2040,49 @@
"items": {},
"type": "array"
},
"nodeSpec": {
"properties": {
"deleteOnTermination": {
"type": [
"null",
"boolean"
]
},
"disablePrefixDelegation": {
"type": "boolean"
},
"excludeInterfaceTags": {
"items": {},
"type": "array"
},
"firstInterfaceIndex": {
"type": [
"null",
"integer"
]
},
"securityGroupTags": {
"items": {},
"type": "array"
},
"securityGroups": {
"items": {},
"type": "array"
},
"subnetIDs": {
"items": {},
"type": "array"
},
"subnetTags": {
"items": {},
"type": "array"
},
"usePrimaryAddress": {
"type": "boolean"
}
},
"type": "object"
},
"subnetIDsFilter": {
"items": {},
"type": "array"
@@ -2011,6 +2225,12 @@
"string"
]
},
"clusterMaxConnections": {
"type": "integer"
},
"clusterMaxRequests": {
"type": "integer"
},
"connectTimeoutSeconds": {
"type": "integer"
},
@@ -2104,6 +2324,10 @@
},
"type": "object"
},
"initContainers": {
"items": {},
"type": "array"
},
"initialFetchTimeoutSeconds": {
"type": "integer"
},
@@ -2171,6 +2395,9 @@
"maxConnectionDurationSeconds": {
"type": "integer"
},
"maxGlobalDownstreamConnections": {
"type": "integer"
},
"maxRequestsPerConnection": {
"type": "integer"
},
@@ -2393,6 +2620,9 @@
},
"type": "object"
},
"useOriginalSourceAddress": {
"type": "boolean"
},
"xffNumTrustedHopsL7PolicyEgress": {
"type": "integer"
},
@@ -2614,10 +2844,17 @@
"anyOf": [
{
"properties": {
"aggregationInterval": {
"type": "string"
},
"excludeFilters": {
"items": {},
"type": "array"
},
"fieldAggregate": {
"items": {},
"type": "array"
},
"fieldMask": {
"items": {},
"type": "array"
@@ -2661,6 +2898,9 @@
},
"static": {
"properties": {
"aggregationInterval": {
"type": "string"
},
"allowList": {
"items": {},
"type": "array"
@@ -2672,6 +2912,10 @@
"enabled": {
"type": "boolean"
},
"fieldAggregate": {
"items": {},
"type": "array"
},
"fieldMask": {
"items": {},
"type": "array"
@@ -3037,6 +3281,23 @@
"listenPort": {
"type": "string"
},
"logOptions": {
"properties": {
"format": {
"type": [
"null",
"string"
]
},
"level": {
"type": [
"null",
"string"
]
}
},
"type": "object"
},
"nodeSelector": {
"properties": {
"kubernetes.io/os": {
@@ -3100,9 +3361,15 @@
"address": {
"type": "string"
},
||||
"blockProfileRate": {
|
||||
"type": "integer"
|
||||
},
|
||||
"enabled": {
|
||||
"type": "boolean"
|
||||
},
|
||||
"mutexProfileFraction": {
|
||||
"type": "integer"
|
||||
},
|
||||
"port": {
|
||||
"type": "integer"
|
||||
}
|
||||
@@ -3680,6 +3947,9 @@
|
||||
},
|
||||
"type": "object"
|
||||
},
|
||||
"tmpVolume": {
|
||||
"type": "object"
|
||||
},
|
||||
"tolerations": {
|
||||
"items": {},
|
||||
"type": "array"
|
||||
@@ -3921,6 +4191,33 @@
"multiPoolPreAllocation": {
"type": "string"
},
"nodeSpec": {
"properties": {
"ipamMaxAllocate": {
"type": [
"null",
"integer"
]
},
"ipamMinAllocate": {
"type": [
"null",
"integer"
]
},
"ipamPreAllocate": {
"type": [
"null",
"integer"
]
},
"ipamStaticIPTags": {
"items": {},
"type": "array"
}
},
"type": "object"
},
"operator": {
"properties": {
"autoCreateCiliumPodIPPools": {
@@ -4278,9 +4575,6 @@
},
"enableHealthCheckLoadBalancerIP": {
"type": "boolean"
},
"enabled": {
"type": "boolean"
}
},
"type": "object"
@@ -4394,7 +4688,10 @@
"requests": {
"properties": {
"cpu": {
"type": "string"
"type": [
"integer",
"string"
]
},
"memory": {
"type": "string"
@@ -4486,6 +4783,9 @@
}
},
"type": "object"
},
"waitForCloudInit": {
"type": "boolean"
}
},
"type": "object"
@@ -4694,9 +4994,15 @@
"address": {
"type": "string"
},
"blockProfileRate": {
"type": "integer"
},
"enabled": {
"type": "boolean"
},
"mutexProfileFraction": {
"type": "integer"
},
"port": {
"type": "integer"
}
@@ -4754,6 +5060,30 @@
}
},
"type": "object"
},
"tls": {
"properties": {
"enabled": {
"type": "boolean"
},
"server": {
"properties": {
"existingSecret": {
"type": "string"
},
"mtls": {
"properties": {
"enabled": {
"type": "boolean"
}
},
"type": "object"
}
},
"type": "object"
}
},
"type": "object"
}
},
"type": "object"
@@ -4866,6 +5196,12 @@
},
"restart": {
"type": "boolean"
},
"selector": {
"type": [
"null",
"string"
]
}
},
"type": "object"
@@ -4902,6 +5238,9 @@
"properties": {
"enabled": {
"type": "boolean"
},
"packetizationLayerPMTUDMode": {
"type": "string"
}
},
"type": "object"
@@ -4940,6 +5279,9 @@
"array"
]
},
"policyDenyResponse": {
"type": "string"
},
"policyEnforcementMode": {
"type": "string"
},
@@ -4948,9 +5290,15 @@
"address": {
"type": "string"
},
"blockProfileRate": {
"type": "integer"
},
"enabled": {
"type": "boolean"
},
"mutexProfileFraction": {
"type": "integer"
},
"port": {
"type": "integer"
}
@@ -5363,6 +5711,9 @@
"secretsNamespaceAnnotations": {
"type": "object"
},
"secretsNamespaceLabels": {
"type": "object"
},
"securityContext": {
"properties": {
"allowPrivilegeEscalation": {
@@ -5422,6 +5773,9 @@
{
"type": "string"
},
{
"type": "string"
},
{
"type": "string"
}
@@ -5537,6 +5891,23 @@
},
"type": "object"
},
"corednsMCSAPI": {
"properties": {
"annotations": {
"type": "object"
},
"automount": {
"type": "boolean"
},
"create": {
"type": "boolean"
},
"name": {
"type": "string"
}
},
"type": "object"
},
"envoy": {
"properties": {
"annotations": {
@@ -5676,6 +6047,86 @@
},
"type": "object"
},
"standaloneDnsProxy": {
"properties": {
"annotations": {
"type": "object"
},
"automountServiceAccountToken": {
"type": "boolean"
},
"debug": {
"type": "boolean"
},
"enabled": {
"type": "boolean"
},
"image": {
"properties": {
"digest": {
"type": "string"
},
"override": {
"type": [
"null",
"string"
]
},
"pullPolicy": {
"type": "string"
},
"repository": {
"type": "string"
},
"tag": {
"type": "string"
},
"useDigest": {
"type": "boolean"
}
},
"type": "object"
},
"nodeSelector": {
"properties": {
"kubernetes.io/os": {
"type": "string"
}
},
"type": "object"
},
"rollOutPods": {
"type": "boolean"
},
"serverPort": {
"type": "integer"
},
"tolerations": {
"items": {},
"type": "array"
},
"updateStrategy": {
"properties": {
"rollingUpdate": {
"properties": {
"maxSurge": {
"type": "integer"
},
"maxUnavailable": {
"type": "integer"
}
},
"type": "object"
},
"type": {
"type": "string"
}
},
"type": "object"
}
},
"type": "object"
},
"startupProbe": {
"properties": {
"failureThreshold": {
@@ -5687,9 +6138,6 @@
},
"type": "object"
},
"svcSourceRangeCheck": {
"type": "boolean"
},
"synchronizeK8sNodes": {
"type": "boolean"
},
@@ -5774,6 +6222,9 @@
},
"type": "object"
},
"tmpVolume": {
"type": "object"
},
"tolerations": {
"items": {
"anyOf": [

@@ -6,7 +6,6 @@
# type: [null, string]
# @schema
# -- namespaceOverride allows to override the destination namespace for Cilium resources.
# This property allows to use Cilium as part of an Umbrella Chart with different targets.
namespaceOverride: ""
# @schema
# type: [null, object]
@@ -29,7 +28,7 @@ debug:
# @schema
# -- Configure verbosity levels for debug logging
# This option is used to enable debug messages for operations related to such
# sub-system such as (e.g. kvstore, envoy, datapath or policy), and flow is
# sub-system such as (e.g. kvstore, envoy, datapath, policy, or tagged), and flow is
# for enabling debug messages emitted per request, message and connection.
# Multiple values can be set via a space-separated string (e.g. "datapath envoy").
#
@@ -39,6 +38,7 @@ debug:
# - envoy
# - datapath
# - policy
# - tagged
verbose: ~
# -- Set the agent-internal metrics sampling frequency. This sets the
# frequency of the internal sampling of the agent metrics. These are
@@ -204,6 +204,12 @@ serviceAccounts:
name: hubble-generate-certs
automount: true
annotations: {}
# -- CorednsMCSAPI is used if clustermesh.mcsapi.corednsAutoConfigure.enabled=true
corednsMCSAPI:
create: true
name: cilium-coredns-mcsapi-autoconfig
automount: true
annotations: {}
# -- Configure termination grace period for cilium-agent DaemonSet.
terminationGracePeriodSeconds: 1
# -- Install the cilium agent resources.
@@ -219,10 +225,10 @@ image:
# @schema
override: ~
repository: "quay.io/cilium/cilium"
tag: "v1.18.6"
tag: "v1.19.1"
pullPolicy: "IfNotPresent"
# cilium-digest
digest: sha256:42ec562a5ff6c8a860c0639f5a7611685e253fd9eb2d2fcdade693724c9166a4
digest: sha256:41f1f74a0000de8656f1de4088ea00c8f2d49d6edea579034c73c5fd5fe01792
useDigest: true
# -- Scheduling configurations for cilium pods
scheduling:
@@ -363,6 +369,8 @@ securityContext:
- SETGID
# Allow to execute program that changes UID (e.g. required for package installation)
- SETUID
# Allow to read dmesg and get kernel pointers when kptr_restrict=1
- SYSLOG
# -- Capabilities for the `mount-cgroup` init container
mountCgroup:
# Only used for 'mount' cgroup
@@ -433,9 +441,16 @@ azure:
# clientID: 00000000-0000-0000-0000-000000000000
# clientSecret: 00000000-0000-0000-0000-000000000000
# userAssignedIdentityID: 00000000-0000-0000-0000-000000000000
nodeSpec:
azureInterfaceName: ""
alibabacloud:
# -- Enable AlibabaCloud ENI integration
enabled: false
nodeSpec:
vSwitches: []
vSwitchTags: []
securityGroups: []
securityGroupTags: []
# -- Enable bandwidth manager to optimize TCP and UDP workloads and allow
# for rate-limiting traffic from individual Pods with EDT (Earliest Departure
# Time) through the "kubernetes.io/egress-bandwidth" Pod annotation.
@@ -468,8 +483,7 @@ l2podAnnouncements:
interface: "eth0"
# -- A regular expression matching interfaces used for sending Gratuitous ARP pod announcements
# interfacePattern: ""
# -- This feature set enables virtual BGP routers to be created via
# CiliumBGPPeeringPolicy CRDs.
# -- This feature set enables virtual BGP routers to be created via BGP CRDs.
bgpControlPlane:
# -- Enables the BGP control plane.
enabled: false
@@ -479,9 +493,9 @@ bgpControlPlane:
create: false
# -- The name of the secret namespace to which Cilium agents are given read access
name: kube-system
# -- Status reporting settings (BGPv2 only)
# -- Status reporting settings
statusReport:
# -- Enable/Disable BGPv2 status reporting
# -- Enable/Disable BGP status reporting
# It is recommended to enable status reporting in general, but if you have any issue
# such as high API server load, you can disable it by setting this to false.
enabled: true
@@ -491,7 +505,7 @@ bgpControlPlane:
mode: "default"
# -- IP pool to allocate the BGP router-id from when the mode is ip-pool.
ipPool: ""
# -- Legacy BGP ORIGIN attribute settings (BGPv2 only)
# -- Legacy BGP ORIGIN attribute settings
legacyOriginAttribute:
# -- Enable/Disable advertising LoadBalancerIP routes with the legacy
# BGP ORIGIN attribute value INCOMPLETE (2) instead of the default IGP (0).
@@ -501,6 +515,11 @@ pmtuDiscovery:
# -- Enable path MTU discovery to send ICMP fragmentation-needed replies to
# the client.
enabled: false
# -- Enable kernel probing path MTU discovery for Pods which uses different message
# sizes to search for correct MTU value.
# Valid values are: always, blackhole, disabled and unset (or empty). If value
# is 'unset' or left empty then will not try to override setting.
packetizationLayerPMTUDMode: "blackhole"
bpf:
autoMount:
# -- Enable automatic mount of BPF filesystem
@@ -548,7 +567,7 @@ bpf:
# Helm configuration for BPF events map rate limiting is experimental and might change
# in upcoming releases.
events:
# -- Default settings for all types of events except dbg and pcap.
# -- Default settings for all types of events except dbg.
default:
# @schema
# type: [null, integer]
@@ -608,6 +627,12 @@ bpf:
# type: [null, integer]
# @schema
policyMapMax: 16384
# -- (float64) Configure threshold for emitting pressure metrics of policy maps.
# @schema
# type: [null, number]
# @schema
# @default -- `0.1`
policyMapPressureMetricsThreshold: ~
# -- Configure the maximum number of entries in global policy stats map.
# @schema
# type: [null, integer]
@@ -665,7 +690,8 @@ bpf:
# type: [null, boolean]
# @schema
# -- (bool) Configure the eBPF-based TPROXY (beta) to reduce reliance on iptables rules
# for implementing Layer 7 policy.
# for implementing Layer 7 policy. Note this is incompatible with netkit (`bpf.datapathMode=netkit`,
# `bpf.datapathMode=netkit-l2`).
# @default -- `false`
tproxy: ~
# @schema
@@ -675,6 +701,15 @@ bpf:
# [0] will allow all VLAN id's without any filtering.
# @default -- `[]`
vlanBypass: ~
# -- Configure the IP tracing option type.
# This option is used to specify the IP option type to use for tracing.
# The value must be an integer between 0 and 255.
# @schema
# type: [null, integer]
# minimum: 0
# maximum: 255
# @schema
monitorTraceIPOption: 0
# -- (bool) Disable ExternalIP mitigation (CVE-2020-8554)
# @default -- `false`
disableExternalIPMitigation: false
@@ -682,7 +717,8 @@ bpf:
# supported kernels.
# @default -- `true`
enableTCX: true
# -- (string) Mode for Pod devices for the core datapath (veth, netkit, netkit-l2)
# -- (string) Mode for Pod devices for the core datapath (veth, netkit, netkit-l2).
# Note netkit is incompatible with TPROXY (`bpf.tproxy`).
# @default -- `veth`
datapathMode: veth
# -- Enable BPF clock source probing for more efficient tick retrieval.
@@ -770,8 +806,17 @@ cni:
# -- Specifies the resources for the cni initContainer
resources:
requests:
# @schema
# type: [integer, string]
# @schema
cpu: 100m
memory: 10Mi
limits:
# @schema
# type: [integer, string]
# @schema
cpu: 1
memory: 1Gi
# -- Enable route MTU for pod netns when CNI chaining is used
enableRouteMTUForCNIChaining: false
# -- Enable the removal of iptables rules created by the AWS CNI VPC plugin.
@@ -796,10 +841,6 @@ conntrackGCMaxInterval: ""
# -- (string) Configure timeout in which Cilium will exit if CRDs are not available
# @default -- `"5m"`
crdWaitTimeout: ""
# -- Tail call hooks for custom eBPF programs.
customCalls:
# -- Enable tail call hooks for custom eBPF programs.
enabled: false
daemon:
# -- Configure where Cilium runtime state should be stored.
runPath: "/var/run/cilium"
@@ -842,6 +883,12 @@ daemon:
#
# By default, this functionality is enabled
enableSourceIPVerification: true
# -- Configure temporary volume for cilium-agent
tmpVolume: {}
# emptyDir:
# sizeLimit: "100Mi"
# medium: "Memory"

# -- Specify which network interfaces can run the eBPF datapath. This means
# that a packet sent from a pod to a destination outside the cluster will be
# masqueraded (to an output device IPv4 address), if the output device runs the
@@ -860,9 +907,6 @@ forceDeviceDetection: false
# -- Enable setting identity mark for local traffic.
# enableIdentityMark: true

# -- Enable Kubernetes EndpointSlice feature in Cilium if the cluster supports it.
# enableK8sEndpointSlice: true

# -- CiliumEndpointSlice configuration options.
ciliumEndpointSlice:
# -- Enable Cilium EndpointSlice feature.
@@ -1042,20 +1086,33 @@ enableXTSocketFallback: true
encryption:
# -- Enable transparent network encryption.
enabled: false
# -- Encryption method. Can be either ipsec or wireguard.
# -- Encryption method. Can be one of ipsec, wireguard or ztunnel.
type: ipsec
# -- Enable encryption for pure node to node traffic.
# This option is only effective when encryption.type is set to "wireguard".
nodeEncryption: false
# -- Configure the WireGuard Pod2Pod strict mode.
# -- Configure the Encryption Pod2Pod strict mode.
strictMode:
# -- Enable WireGuard Pod2Pod strict mode.
# -- Enable Encryption Pod2Pod strict mode. (deprecated: please use encryption.strictMode.egress.enabled)
enabled: false
# -- CIDR for the WireGuard Pod2Pod strict mode.
# -- CIDR for the Encryption Pod2Pod strict mode. (deprecated: please use encryption.strictMode.egress.cidr)
cidr: ""
# -- Allow dynamic lookup of remote node identities.
# -- Allow dynamic lookup of remote node identities. (deprecated: please use encryption.strictMode.egress.allowRemoteNodeIdentities)
# This is required when tunneling is used or direct routing is used and the node CIDR and pod CIDR overlap.
allowRemoteNodeIdentities: false
egress:
# -- Enable strict egress encryption.
enabled: false
# -- CIDR for the Encryption Pod2Pod strict egress mode.
cidr: ""
# -- Allow dynamic lookup of remote node identities.
# This is required when tunneling is used or direct routing is used and the node CIDR and pod CIDR overlap.
allowRemoteNodeIdentities: false
ingress:
# -- Enable strict ingress encryption.
# When enabled, all unencrypted overlay ingress traffic will be dropped.
# This option is only applicable when WireGuard and tunneling are enabled.
enabled: false
ipsec:
# -- Name of the key file inside the Kubernetes secret configured via secretName.
keyFile: keys
@@ -1127,6 +1184,32 @@ eni:
|
||||
# -- Filter via AWS EC2 Instance tags (k=v) which will dictate which AWS EC2 Instances
|
||||
# are going to be used to create new ENIs
|
||||
instanceTagsFilter: []
|
||||
# -- NodeSpec configuration for the ENI
|
||||
nodeSpec:
|
||||
# -- First interface index to use for IP allocation
|
||||
# @schema
|
||||
# type: [null, integer]
|
||||
# @schema
|
||||
firstInterfaceIndex: ~
|
||||
# -- Subnet IDs to use for IP allocation
|
||||
subnetIDs: []
|
||||
# -- Subnet tags to use for IP allocation
|
||||
subnetTags: []
|
||||
# -- Security groups to use for IP allocation
|
||||
securityGroups: []
|
||||
# -- Security group tags to use for IP allocation
|
||||
securityGroupTags: []
|
||||
# -- Exclude interface tags to use for IP allocation
|
||||
excludeInterfaceTags: []
|
||||
# -- Use primary address for IP allocation
|
||||
usePrimaryAddress: false
|
||||
# -- Disable prefix delegation for IP allocation
|
||||
disablePrefixDelegation: false
|
||||
# -- Delete ENI on termination
|
||||
# @schema
|
||||
# type: [null, boolean]
|
||||
# @schema
|
||||
deleteOnTermination: ~
|
||||
# fragmentTracking enables IPv4 fragment tracking support in the datapath.
|
||||
# fragmentTracking: true
|
||||
gke:
|
||||
@@ -1142,6 +1225,8 @@ healthCheckICMPFailureThreshold: 3
|
||||
hostFirewall:
|
||||
# -- Enables the enforcement of host policies in the eBPF datapath.
|
||||
enabled: false
|
||||
# -- Enable routing to a service that has zero endpoints
|
||||
enableNoServiceEndpointsRoutable: true
|
||||
# -- Configure socket LB
|
||||
socketLB:
|
||||
# -- Enable socket LB
|
||||
@@ -1165,12 +1250,15 @@ certgen:
|
||||
# @schema
|
||||
override: ~
|
||||
repository: "quay.io/cilium/certgen"
|
||||
tag: "v0.3.1"
|
||||
digest: "sha256:2825dbfa6f89cbed882fd1d81e46a56c087e35885825139923aa29eb8aec47a9"
|
||||
tag: "v0.3.2"
|
||||
digest: "sha256:19921f48ee7e2295ea4dca955878a6cd8d70e6d4219d08f688e866ece9d95d4d"
|
||||
useDigest: true
|
||||
pullPolicy: "IfNotPresent"
|
||||
# @schema
|
||||
# type: [null, integer]
|
||||
# @schema
|
||||
# -- Seconds after which the completed job pod will be deleted
|
||||
ttlSecondsAfterFinished: 1800
|
||||
ttlSecondsAfterFinished: null
|
||||
# -- Labels to be added to hubble-certgen pods
|
||||
podLabels: {}
|
||||
# -- Annotations to be added to the hubble-certgen initial Job and CronJob
|
||||
@@ -1195,6 +1283,11 @@ certgen:
|
||||
extraVolumeMounts: []
|
||||
# -- Affinity for certgen
|
||||
affinity: {}
|
||||
cronJob:
|
||||
# -- The number of successful finished jobs to keep
|
||||
successfulJobsHistoryLimit: 3
|
||||
# -- The number of failed finished jobs to keep
|
||||
failedJobsHistoryLimit: 1
|
||||
hubble:
|
||||
# -- Enable Hubble (true by default).
|
||||
enabled: true
|
||||
@@ -1210,6 +1303,9 @@ hubble:
|
||||
# 2047, 4095, 8191, 16383, 32767, 65535
|
||||
# eventBufferCapacity: "4095"
|
||||
|
||||
# -- The interval at which Hubble will send out lost events from the Observer server, if any.
|
||||
# lostEventSendInterval: 1s
|
||||
|
||||
# -- Hubble metrics configuration.
|
||||
# See https://docs.cilium.io/en/stable/observability/metrics/#hubble-metrics
|
||||
# for more comprehensive documentation about Hubble metrics.
|
||||
@@ -1503,9 +1599,9 @@ hubble:
|
||||
# @schema
|
||||
override: ~
|
||||
repository: "quay.io/cilium/hubble-relay"
|
||||
tag: "v1.18.6"
|
||||
tag: "v1.19.1"
|
||||
# hubble-relay-digest
|
||||
digest: sha256:fb6135e34c31e5f175cb5e75f86cea52ef2ff12b49bcefb7088ed93f5009eb8e
|
||||
digest: sha256:d8c4e13bc36a56179292bb52bc6255379cb94cb873700d316ea3139b1bdb8165
|
||||
useDigest: true
|
||||
pullPolicy: "IfNotPresent"
|
||||
# -- Specifies the resources for the hubble-relay pods
|
||||
@@ -1716,6 +1812,24 @@ hubble:
|
||||
address: localhost
|
||||
# -- Configure pprof listen port for hubble-relay
|
||||
port: 6062
|
||||
# -- Enable mutex contention profiling for hubble-relay and set the fraction of sampled events (set to 1 to sample all events)
|
||||
mutexProfileFraction: 0
|
||||
# -- Enable goroutine blocking profiling for hubble-relay and set the rate of sampled events in nanoseconds (set to 1 to sample all events [warning: performance overhead])
|
||||
blockProfileRate: 0
|
||||
# -- Logging configuration for hubble-relay.
|
||||
logOptions:
|
||||
# @schema
|
||||
# type: [null, string]
|
||||
# @schema
|
||||
# -- Log format for hubble-relay. Valid values are: text, text-ts, json, json-ts.
|
||||
# @default -- text-ts
|
||||
format: ~
|
||||
# @schema
|
||||
# type: [null, string]
|
||||
# @schema
|
||||
# -- Log level for hubble-relay. Valid values are: debug, info, warn, error.
|
||||
# @default -- info
|
||||
level: ~
|
||||
ui:
|
||||
# -- Whether to enable the Hubble UI.
|
||||
enabled: false
|
||||
@@ -1911,6 +2025,11 @@ hubble:
|
||||
# - secretName: chart-example-tls
|
||||
# hosts:
|
||||
# - chart-example.local
|
||||
# -- Configure temporary volume for hubble-ui
|
||||
tmpVolume: {}
|
||||
# emptyDir:
|
||||
# # sizeLimit: "100Mi"
|
||||
# # medium: "Memory"
|
||||
# -- Hubble flows export.
|
||||
export:
|
||||
# --- Static exporter configuration.
|
||||
@@ -1923,6 +2042,14 @@ hubble:
|
||||
# - source
|
||||
# - destination
|
||||
# - verdict
|
||||
fieldAggregate: []
|
||||
# - time
|
||||
# - source
|
||||
# - destination
|
||||
# - verdict
|
||||
# --- Defines the interval at which to aggregate before exporting Hubble flows.
|
||||
# Aggregation feature is only enabled when fieldAggregate is specified and aggregationInterval > 0s.
|
||||
aggregationInterval: "0s"
|
||||
allowList: []
|
||||
# - '{"verdict":["DROPPED","ERROR"]}'
|
||||
denyList: []
|
||||
@@ -1948,6 +2075,8 @@ hubble:
|
||||
content:
|
||||
- name: all
|
||||
fieldMask: []
|
||||
fieldAggregate: []
|
||||
aggregationInterval: "0s"
|
||||
includeFilters: []
|
||||
excludeFilters: []
|
||||
filePath: "/var/run/cilium/hubble/events.log"
|
||||
@@ -2040,11 +2169,30 @@ ipam:
|
||||
# refill the bucket up to the burst size capacity.
|
||||
# @default -- `4.0`
|
||||
externalAPILimitQPS: ~
|
||||
# -- defaultLBServiceIPAM indicates the default LoadBalancer Service IPAM when
|
||||
# no LoadBalancer class is set. Applicable values: lbipam, nodeipam, none
|
||||
# -- NodeSpec configuration for the IPAM
|
||||
nodeSpec:
|
||||
# -- IPAM min allocate
|
||||
# @schema
|
||||
# type: [null, integer]
|
||||
# @schema
|
||||
ipamMinAllocate: ~
|
||||
# -- IPAM pre allocate
|
||||
# @schema
|
||||
# type: [null, integer]
|
||||
# @schema
|
||||
ipamPreAllocate: ~
|
||||
# -- IPAM max allocate
|
||||
# @schema
|
||||
# type: [null, integer]
|
||||
# @schema
|
||||
ipamMaxAllocate: ~
|
||||
# -- IPAM static IP tags (currently only works with AWS and Azure)
|
||||
ipamStaticIPTags: []
|
||||
# @schema
|
||||
# type: [string]
|
||||
# @schema
|
||||
# -- defaultLBServiceIPAM indicates the default LoadBalancer Service IPAM when
|
||||
# no LoadBalancer class is set. Applicable values: lbipam, nodeipam, none
|
||||
defaultLBServiceIPAM: lbipam
|
||||
nodeIPAM:
|
||||
# -- Configure Node IPAM
|
||||
@@ -2155,7 +2303,7 @@ maglev: {}
|
||||
# type: [null, boolean]
|
||||
# @schema
|
||||
# -- (bool) Enables masquerading of IPv4 traffic leaving the node from endpoints.
|
||||
# @default -- `true` unless ipam eni mode is active
|
||||
# @default -- `true` unless ipam eni mode is active
|
||||
enableIPv4Masquerade: ~
|
||||
# -- Enables masquerading of IPv6 traffic leaving the node from endpoints.
|
||||
enableIPv6Masquerade: true
|
||||
@@ -2165,6 +2313,8 @@ enableMasqueradeRouteSource: false
|
||||
enableIPv4BIGTCP: false
|
||||
# -- Enables IPv6 BIG TCP support which increases maximum IPv6 GSO/GRO limits for nodes and pods
|
||||
enableIPv6BIGTCP: false
|
||||
# -- Enable BIG TCP in tunneling mode and increase maximum GRO/GSO limits for VXLAN/GENEVE tunnels
|
||||
enableTunnelBIGTCP: false
|
||||
nat:
|
||||
# -- Number of the top-k SNAT map connections to track in Cilium statedb.
|
||||
mapStatsEntries: 32
|
||||
@@ -2266,8 +2416,6 @@ loadBalancer:
|
||||
algorithm: round_robin
|
||||
# -- Configure N-S k8s service loadbalancing
|
||||
nodePort:
|
||||
# -- Enable the Cilium NodePort service implementation.
|
||||
enabled: false
|
||||
# -- Port range to use for NodePort services.
|
||||
# range: "30000,32767"
|
||||
|
||||
@@ -2311,6 +2459,10 @@ pprof:
|
||||
address: localhost
|
||||
# -- Configure pprof listen port for cilium-agent
|
||||
port: 6060
|
||||
# -- Enable mutex contention profiling for cilium-agent and set the fraction of sampled events (set to 1 to sample all events)
|
||||
mutexProfileFraction: 0
|
||||
# -- Enable goroutine blocking profiling for cilium-agent and set the rate of sampled events in nanoseconds (set to 1 to sample all events [warning: performance overhead])
|
||||
blockProfileRate: 0
|
||||
# -- Configure prometheus metrics on the configured port at /metrics
|
||||
prometheus:
|
||||
metricsService: false
|
||||
@@ -2435,6 +2587,12 @@ envoy:
|
||||
initialFetchTimeoutSeconds: 30
|
||||
# -- Maximum number of concurrent retries on Envoy clusters
|
||||
maxConcurrentRetries: 128
|
||||
# -- Maximum number of connections on Envoy clusters
|
||||
clusterMaxConnections: 1024
|
||||
# -- Maximum number of requests on Envoy clusters
|
||||
clusterMaxRequests: 1024
|
||||
# -- Maximum number of global downstream connections
|
||||
maxGlobalDownstreamConnections: 50000
|
||||
# -- Maximum number of retries for each HTTP request
|
||||
httpRetryCount: 3
|
||||
# -- ProxyMaxRequestsPerConnection specifies the max_requests_per_connection setting for Envoy
|
||||
@@ -2451,6 +2609,9 @@ envoy:
|
||||
xffNumTrustedHopsL7PolicyIngress: 0
|
||||
# -- Number of trusted hops regarding the x-forwarded-for and related HTTP headers for the egress L7 policy enforcement Envoy listeners.
|
||||
xffNumTrustedHopsL7PolicyEgress: 0
|
||||
# -- For cases when CiliumEnvoyConfig is not used directly (Ingress, Gateway), configures Cilium BPF Metadata listener filter
|
||||
# to use the original source address when extracting the metadata for a request.
|
||||
useOriginalSourceAddress: true
|
||||
# @schema
|
||||
# type: [null, string]
|
||||
# @schema
|
||||
@@ -2465,10 +2626,12 @@ envoy:
|
||||
# @schema
|
||||
override: ~
|
||||
repository: "quay.io/cilium/cilium-envoy"
|
||||
tag: "v1.35.9-1767794330-db497dd19e346b39d81d7b5c0dedf6c812bcc5c9"
|
||||
tag: "v1.35.9-1770979049-232ed4a26881e4ab4f766f251f258ed424fff663"
|
||||
pullPolicy: "IfNotPresent"
|
||||
digest: "sha256:81398e449f2d3d0a6a70527e4f641aaa685d3156bea0bb30712fae3fd8822b86"
|
||||
digest: "sha256:8188114a2768b5f49d6ce58e168b20d765e0fbc64eee0d83241aa2b150ccd788"
|
||||
useDigest: true
|
||||
# -- Init containers added to the cilium Envoy DaemonSet.
|
||||
initContainers: []
|
||||
# -- Additional containers added to the cilium Envoy DaemonSet.
|
||||
extraContainers: []
|
||||
# -- Additional envoy container arguments.
|
||||
@@ -2699,16 +2862,15 @@ resourceQuotas:
|
||||
pods: "15"
|
||||
# Need to document default
|
||||
##################
|
||||
#sessionAffinity: false
|
||||
|
||||
# -- Annotations to be added to all cilium-secret namespaces (resources under templates/cilium-secrets-namespace)
secretsNamespaceAnnotations: {}
# -- Labels to be added to all cilium-secret namespaces (resources under templates/cilium-secrets-namespace)
secretsNamespaceLabels: {}
# -- Do not run Cilium agent when running with clean mode. Useful to completely
# uninstall Cilium as it will stop Cilium from starting and create artifacts
# in the node.
sleepAfterInit: false
# -- Enable check of service source ranges (currently, only for LoadBalancer).
svcSourceRangeCheck: true
# -- Synchronize Kubernetes nodes to kvstore and perform CNP GC.
synchronizeK8sNodes: true
# -- Configure TLS configuration in the agent.
@@ -2791,6 +2953,9 @@ tls:
# @default -- `"vxlan"`
tunnelProtocol: ""
# -- IP family for the underlay.
# Possible values:
# - "ipv4"
# - "ipv6"
# @default -- `"ipv4"`
underlayProtocol: ""
# -- Enable native-routing mode or tunneling mode.
@@ -2811,6 +2976,11 @@ tunnelSourcePortRange: 0-0
# - reject (default)
# - drop
serviceNoBackendResponse: reject
# -- Configure what the response should be to pod egress traffic denied by network policy.
# Possible values:
# - none (default)
# - icmp
policyDenyResponse: none
# -- Configure the underlying network MTU to overwrite auto-detected MTU.
# This value doesn't change the host network interface MTU i.e. eth0 or ens0.
# It changes the MTU for cilium_net@cilium_host, cilium_host@cilium_net,
@@ -2841,15 +3011,15 @@ operator:
# @schema
override: ~
repository: "quay.io/cilium/operator"
tag: "v1.18.6"
tag: "v1.19.1"
# operator-generic-digest
genericDigest: sha256:34a827ce9ed021c8adf8f0feca131f53b3c54a3ef529053d871d0347ec4d69af
genericDigest: sha256:e7278d763e448bf6c184b0682cf98cdca078d58a27e1b2f3c906792670aa211a
# operator-azure-digest
azureDigest: sha256:a57aff47aeb32eccfedaa2a49d1af984d996d6d6de79609c232e0c4cf9ce97a1
azureDigest: sha256:82bce78603056e709d4c4e9f9ebb25c222c36d8a07f8c05381c2372d9078eca8
# operator-aws-digest
awsDigest: sha256:47dbc1a5bd483fec170dab7fb0bf2cca3585a4893675b0324d41d97bac8be5eb
awsDigest: sha256:18913d05a6c4d205f0b7126c4723bb9ccbd4dc24403da46ed0f9f4bf2a142804
# operator-alibabacloud-digest
alibabacloudDigest: sha256:212c4cbe27da3772bcb952b8f8cbaa0b0eef72488b52edf90ad2b32072a3ca4c
alibabacloudDigest: sha256:837b12f4239e88ea5b4b5708ab982c319a94ee05edaecaafe5fd0e5b1962f554
useDigest: true
pullPolicy: "IfNotPresent"
suffix: ""
@@ -2988,6 +3158,10 @@ operator:
address: localhost
# -- Configure pprof listen port for cilium-operator
port: 6061
# -- Enable mutex contention profiling for cilium-operator and set the fraction of sampled events (set to 1 to sample all events)
mutexProfileFraction: 0
# -- Enable goroutine blocking profiling for cilium-operator and set the rate of sampled events in nanoseconds (set to 1 to sample all events [warning: performance overhead])
blockProfileRate: 0
# -- Enable prometheus metrics for cilium-operator on the configured port at
# /metrics
prometheus:
@@ -3021,6 +3195,17 @@ operator:
# @schema
# -- Metrics relabeling configs for the ServiceMonitor cilium-operator
metricRelabelings: ~
# -- TLS configuration for Prometheus
tls:
enabled: false
server:
# -- Name of the Secret containing the certificate, key and CA files for the Prometheus server.
existingSecret: ""
mtls:
# When set to true enforces mutual TLS between Operator Prometheus server and its clients.
# False allow non-mutual TLS connections.
# This option has no effect when TLS is disabled.
enabled: false
# -- Grafana dashboards for cilium-operator
# grafana can import dashboards based on the label and value
# ref: https://github.com/grafana/helm-charts/tree/main/charts/grafana#sidecar-for-dashboards
@@ -3054,6 +3239,12 @@ operator:
# -- Interval, in seconds, to check if there are any pods that are not
# managed by Cilium.
intervalSeconds: 15
# -- Selector for pods that should be restarted when not managed by Cilium.
# If not set, defaults to built-in selector "k8s-app=kube-dns". Set to empty string to select all pods.
# @schema
# type: [null, string]
# @schema
selector: ~
nodeinit:
# -- Enable the node initialization DaemonSet
enabled: false
@@ -3064,8 +3255,8 @@ nodeinit:
# @schema
override: ~
repository: "quay.io/cilium/startup-script"
tag: "1755531540-60ee83e"
digest: "sha256:5bdca3c2dec2c79f58d45a7a560bf1098c2126350c901379fe850b7f78d3d757"
tag: "1763560095-8f36c34"
digest: "sha256:50b9cf9c280096b59b80d2fc8ee6638facef79ac18998a22f0cbc40d5d28c16f"
useDigest: true
pullPolicy: "IfNotPresent"
# -- The priority class to use for the nodeinit pod.
@@ -3108,6 +3299,9 @@ nodeinit:
# ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
resources:
requests:
# @schema
# type: [integer, string]
# @schema
cpu: 100m
memory: 100Mi
# -- Security context to be added to nodeinit pods.
@@ -3130,6 +3324,8 @@ nodeinit:
# -- bootstrapFile is the location of the file where the bootstrap timestamp is
# written by the node-init DaemonSet
bootstrapFile: "/tmp/cilium-bootstrap.d/cilium-bootstrap-time"
# -- wait for Cloud init to finish on the host and assume the node has cloud init installed
waitForCloudInit: false
# -- startup offers way to customize startup nodeinit script (pre and post position)
startup:
preScript: ""
@@ -3148,9 +3344,9 @@ preflight:
# @schema
override: ~
repository: "quay.io/cilium/cilium"
tag: "v1.18.6"
tag: "v1.19.1"
# cilium-digest
digest: sha256:42ec562a5ff6c8a860c0639f5a7611685e253fd9eb2d2fcdade693724c9166a4
digest: sha256:41f1f74a0000de8656f1de4088ea00c8f2d49d6edea579034c73c5fd5fe01792
useDigest: true
pullPolicy: "IfNotPresent"
envoy:
@@ -3161,9 +3357,9 @@ preflight:
# @schema
override: ~
repository: "quay.io/cilium/cilium-envoy"
tag: "v1.35.9-1767794330-db497dd19e346b39d81d7b5c0dedf6c812bcc5c9"
tag: "v1.35.9-1770979049-232ed4a26881e4ab4f766f251f258ed424fff663"
pullPolicy: "IfNotPresent"
digest: "sha256:81398e449f2d3d0a6a70527e4f641aaa685d3156bea0bb30712fae3fd8822b86"
digest: "sha256:8188114a2768b5f49d6ce58e168b20d765e0fbc64eee0d83241aa2b150ccd788"
useDigest: true
# -- The priority class to use for the preflight pod.
priorityClassName: ""
@@ -3263,7 +3459,9 @@ enableCriticalPriorityClass: true
# on AArch64 as the images do not currently ship a version of Envoy.
#disableEnvoyVersionCheck: false
clustermesh:
# -- Deploy clustermesh-apiserver for clustermesh
# -- Deploy clustermesh-apiserver for clustermesh. This option is typically
# used with ``clustermesh.config.enabled=true``. Refer to the
# ``clustermesh.config.enabled=true``documentation for more information.
useAPIServer: false
# -- The maximum number of clusters to support in a ClusterMesh. This value
# cannot be changed on running clusters, and all clusters in a ClusterMesh
@@ -3271,44 +3469,132 @@ clustermesh:
# maximum allocatable cluster-local identities.
# Supported values are 255 and 511.
maxConnectedClusters: 255
# -- The time to live for the cache of a remote cluster after connectivity is
# lost. If the connection is not re-established within this duration, the
# cached data is revoked to prevent stale state. If not specified or set to
# 0s, the cache is never revoked (default).
cacheTTL: "0s"
# -- Enable the synchronization of Kubernetes EndpointSlices corresponding to
# the remote endpoints of appropriately-annotated global services through ClusterMesh
enableEndpointSliceSynchronization: false
# -- Enable Multi-Cluster Services API support
# -- Enable Multi-Cluster Services API support (deprecated; use clustermesh.mcsapi.enabled)
enableMCSAPISupport: false
# -- Control whether policy rules assume by default the local cluster if not explicitly selected
policyDefaultLocalCluster: false
policyDefaultLocalCluster: true
# -- Annotations to be added to all top-level clustermesh objects (resources under templates/clustermesh-apiserver and templates/clustermesh-config)
annotations: {}
# -- Clustermesh explicit configuration.
config:
# -- Enable the Clustermesh explicit configuration.
# If set to false, you need to provide the following resources yourself:
# - (Secret) cilium-clustermesh (used by cilium-agent/cilium-operator to connect to
# the local etcd instance if KVStoreMesh is enabled or the remote clusters
# if KVStoreMesh is disabled)
# - (Secret) cilium-kvstoremesh (used by KVStoreMesh to connect to the remote clusters)
# - (ConfigMap) clustermesh-remote-users (used to create one etcd user per remote cluster
# if clustermesh-apiserver is used and `clustermesh.apiserver.tls.authMode` is not
# set to `legacy`)
enabled: false
# -- Default dns domain for the Clustermesh API servers
# This is used in the case cluster addresses are not provided
# and IPs are used.
domain: mesh.cilium.io
# -- List of clusters to be peered in the mesh.
# -- Clusters to be peered in the mesh.
# @schema
# type: [object, array]
# @schema
clusters: []
# You can use a dict of clusters (recommended):
# clusters:
# # -- Name of the cluster
# # -- Name of the cluster
# cluster1:
# # -- Whether to enable this cluster in the mesh. Optional, defaults to true.
# enabled: true
# # -- Address of the cluster, use this if you created DNS records for
# # the cluster Clustermesh API server.
# address: cluster1.mesh.cilium.io
# # -- Port of the cluster Clustermesh API server.
# port: 2379
# # -- IPs of the cluster Clustermesh API server, use multiple ones when
# # you have multiple IPs to access the Clustermesh API server.
# ips:
# - 172.18.255.201
# # -- (deprecated) base64 encoded PEM values for the cluster client certificate, private key and certificate authority.
# # These fields can (and should) be omitted in case the CA is shared across clusters. In that case, the
# # "remote" private key and certificate available in the local cluster are automatically used instead.
# tls:
# cert: ""
# key: ""
# caCert: ""
#
# Or alternatively you can use a list of clusters:
# clusters:
# # -- Name of the cluster
# - name: cluster1
# # -- Address of the cluster, use this if you created DNS records for
# # the cluster Clustermesh API server.
# # -- Address of the cluster, use this if you created DNS records for
# # the cluster Clustermesh API server.
# address: cluster1.mesh.cilium.io
# # -- Port of the cluster Clustermesh API server.
# # -- Port of the cluster Clustermesh API server.
# port: 2379
# # -- IPs of the cluster Clustermesh API server, use multiple ones when
# # you have multiple IPs to access the Clustermesh API server.
# # -- IPs of the cluster Clustermesh API server, use multiple ones when
# # you have multiple IPs to access the Clustermesh API server.
# ips:
# - 172.18.255.201
# # -- base64 encoded PEM values for the cluster client certificate, private key and certificate authority.
# # These fields can (and should) be omitted in case the CA is shared across clusters. In that case, the
# # "remote" private key and certificate available in the local cluster are automatically used instead.
# # -- (deprecated) base64 encoded PEM values for the cluster client certificate, private key and certificate authority.
# # These fields can (and should) be omitted in case the CA is shared across clusters. In that case, the
# # "remote" private key and certificate available in the local cluster are automatically used instead.
# tls:
# cert: ""
# key: ""
# caCert: ""
mcsapi:
# -- Enable Multi-Cluster Services API support
enabled: false
# -- Enabled MCS-API CRDs auto-installation
installCRDs: true
corednsAutoConfigure:
# -- Enable auto-configuration of CoreDNS for Multi-Cluster Services API.
# CoreDNS MUST be at least in version v1.12.2 to run this.
enabled: false
coredns:
# -- The Deployment for the cluster CoreDNS service
deploymentName: coredns
# -- The Service Account name for the cluster CoreDNS service
serviceAccountName: coredns
# -- The ConfigMap name for the cluster CoreDNS service
configMapName: coredns
# -- The namespace for the cluster CoreDNS service
namespace: kube-system
# -- The cluster domain for the cluster CoreDNS service
clusterDomain: cluster.local
# -- The clusterset domain for the cluster CoreDNS service
clustersetDomain: clusterset.local
# -- Additional arguments to `clustermesh-apiserver coredns-mcsapi-auto-configure`.
extraArgs: []
# -- Seconds after which the completed job pod will be deleted
ttlSecondsAfterFinished: 1800
# -- Labels to be added to coredns-mcsapi-autoconfig pods
podLabels: {}
# -- Annotations to be added to the coredns-mcsapi-autoconfig Job
annotations: {}
# -- Node selector for coredns-mcsapi-autoconfig
# ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
nodeSelector: {}
# -- Priority class for coredns-mcsapi-autoconfig
# ref: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass
priorityClassName: ""
# -- Node tolerations for pod assignment on nodes with taints
# ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
tolerations: []
# -- Resource limits for coredns-mcsapi-autoconfig
# ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers
resources: {}
# -- Additional coredns-mcsapi-autoconfig volumes.
extraVolumes: []
# -- Additional coredns-mcsapi-autoconfig volumeMounts.
extraVolumeMounts: []
# -- Affinity for coredns-mcsapi-autoconfig
affinity: {}
apiserver:
# -- Clustermesh API server image.
image:
@@ -3317,9 +3603,9 @@ clustermesh:
# @schema
override: ~
repository: "quay.io/cilium/clustermesh-apiserver"
tag: "v1.18.6"
tag: "v1.19.1"
# clustermesh-apiserver-digest
digest: sha256:8ee142912a0e261850c0802d9256ddbe3729e1cd35c6bea2d93077f334c3cf3b
digest: sha256:56d6c3dc13b50126b80ecb571707a0ea97f6db694182b9d61efd386d04e5bb28
useDigest: true
pullPolicy: "IfNotPresent"
# -- TCP port for the clustermesh-apiserver health API.
@@ -3408,17 +3694,12 @@ clustermesh:
# - "external": ``clustermesh-apiserver`` will sync remote cluster information to the etcd used as kvstore. This can't be enabled with crd identity allocation mode.
kvstoreMode: "internal"
service:
# -- (bool) Set externallyCreated to true to create the clustermesh-apiserver service outside this helm chart.
# For example after external load balancer controllers are created.
externallyCreated: false
# -- The type of service used for apiserver access.
type: NodePort
# -- Optional port to use as the node port for apiserver access.
#
# WARNING: make sure to configure a different NodePort in each cluster if
# kube-proxy replacement is enabled, as Cilium is currently affected by a known
# bug (#24692) when NodePorts are handled by the KPR implementation. If a service
# with the same NodePort exists both in the local and the remote cluster, all
# traffic originating from inside the cluster and targeting the corresponding
# NodePort will be redirected to a local backend, regardless of whether the
# destination node belongs to the local or the remote cluster.
nodePort: 32379
# -- Annotations for the clustermesh-apiserver service.
# Example annotations to configure an internal load balancer on different cloud providers:
@@ -3587,13 +3868,15 @@ clustermesh:
# The "remote" certificate must be generated with CN=remote-<cluster-name>
# if provided manually. Cluster mode is meaningful only when the same
# CA is shared across all clusters part of the mesh.
authMode: legacy
# -- Allow users to provide their own certificates
authMode: migration
# -- (deprecated) Allow users to provide their own certificates
# Users may need to provide their certificates using
# a mechanism that requires they provide their own secrets.
# This setting does not apply to any of the auto-generated
# mechanisms below, it only restricts the creation of secrets
# via the `tls-provided` templates.
# This option is deprecated as secrets are expected to be created
# externally when 'auto' is not enabled.
enableSecrets: true
# -- Configure automatic TLS certificates generation.
# A Kubernetes CronJob is used the generate any
@@ -3602,7 +3885,14 @@ clustermesh:
auto:
# -- When set to true, automatically generate a CA and certificates to
# enable mTLS between clustermesh-apiserver and external workload instances.
# If set to false, the certs to be provided by setting appropriate values below.
#
# When set to false you need to pre-create the following secrets:
# - clustermesh-apiserver-server-cert
# - clustermesh-apiserver-admin-cert
# - clustermesh-apiserver-remote-cert
# - clustermesh-apiserver-local-cert
# The above secret should at least contains the keys `tls.crt` and `tls.key`
# and optionally `ca.crt` if a CA bundle is not configured.
enabled: true
# Sets the method to auto-generate certificates. Supported values:
# - helm: This method uses Helm to generate all certificates.
@@ -3637,7 +3927,9 @@ clustermesh:
# -- base64 encoded PEM values for the clustermesh-apiserver server certificate and private key.
# Used if 'auto' is not enabled.
server:
# -- Deprecated, as secrets will always need to be created externally if `auto` is disabled.
cert: ""
# -- Deprecated, as secrets will always need to be created externally if `auto` is disabled.
key: ""
# -- Extra DNS names added to certificate when it's auto generated
extraDnsNames: []
@@ -3646,17 +3938,16 @@ clustermesh:
# -- base64 encoded PEM values for the clustermesh-apiserver admin certificate and private key.
# Used if 'auto' is not enabled.
admin:
# -- Deprecated, as secrets will always need to be created externally if `auto` is disabled.
cert: ""
key: ""
# -- base64 encoded PEM values for the clustermesh-apiserver client certificate and private key.
# Used if 'auto' is not enabled.
client:
cert: ""
# -- Deprecated, as secrets will always need to be created externally if `auto` is disabled.
key: ""
# -- base64 encoded PEM values for the clustermesh-apiserver remote cluster certificate and private key.
# Used if 'auto' is not enabled.
remote:
# -- Deprecated, as secrets will always need to be created externally if `auto` is disabled.
cert: ""
# -- Deprecated, as secrets will always need to be created externally if `auto` is disabled.
key: ""
# clustermesh-apiserver Prometheus metrics configuration
metrics:
@@ -3811,7 +4102,7 @@ authentication:
# -- Enable authentication processing and garbage collection.
# Note that if disabled, policy enforcement will still block requests that require authentication.
# But the resulting authentication requests for these requests will not be processed, therefore the requests not be allowed.
enabled: true
enabled: false
# -- Buffer size of the channel Cilium uses to receive authentication events from the signal map.
queueSize: 1024
# -- Buffer size of the channel Cilium uses to receive certificate expiration events from auth handlers.
@@ -3849,7 +4140,7 @@ authentication:
override: ~
repository: "docker.io/library/busybox"
tag: "1.37.0"
digest: "sha256:2383baad1860bbe9d8a7a843775048fd07d8afe292b94bd876df64a69aae7cb1"
digest: "sha256:b3255e7dfbcd10cb367af0d409747d511aeb66dfac98cf30e97e87e4207dd76f"
useDigest: true
pullPolicy: "IfNotPresent"
# SPIRE agent configuration
@@ -3863,8 +4154,8 @@ authentication:
# @schema
override: ~
repository: "ghcr.io/spiffe/spire-agent"
tag: "1.12.4"
digest: "sha256:163970884fba18860cac93655dc32b6af85a5dcf2ebb7e3e119a10888eff8fcd"
tag: "1.9.6"
digest: "sha256:5106ac601272a88684db14daf7f54b9a45f31f77bb16a906bd5e87756ee7b97c"
useDigest: true
pullPolicy: "IfNotPresent"
# -- SPIRE agent service account
@@ -3918,8 +4209,8 @@ authentication:
# @schema
override: ~
repository: "ghcr.io/spiffe/spire-server"
tag: "1.12.4"
digest: "sha256:34147f27066ab2be5cc10ca1d4bfd361144196467155d46c45f3519f41596e49"
tag: "1.9.6"
digest: "sha256:59a0b92b39773515e25e68a46c40d3b931b9c1860bc445a79ceb45a805cab8b4"
useDigest: true
pullPolicy: "IfNotPresent"
# -- SPIRE server service account
@@ -4004,3 +4295,41 @@ authentication:
enableInternalTrafficPolicy: true
# -- Enable LoadBalancer IP Address Management
enableLBIPAM: true
# -- Standalone DNS Proxy Configuration
# Note: The standalone DNS proxy uses the agent's dnsProxy.* configuration
# for DNS settings (proxyPort, enableDnsCompression) to ensure consistency.
standaloneDnsProxy:
# -- Enable standalone DNS proxy (alpha feature)
enabled: false
# -- Roll out Standalone DNS proxy automatically when configmap is updated.
rollOutPods: false
# -- Standalone DNS proxy annotations
annotations: {}
# -- Standalone DNS proxy debug mode
debug: false
# -- Standalone DNS proxy server port
serverPort: 10095
# -- Standalone DNS proxy Node Selector
nodeSelector:
kubernetes.io/os: linux
# -- Standalone DNS proxy tolerations
tolerations: []
# -- Standalone DNS proxy auto mount service account token
automountServiceAccountToken: false
# -- Standalone DNS proxy update strategy
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 2
maxUnavailable: 0
# -- Standalone DNS proxy image
image:
# @schema
# type: [null, string]
# @schema
override: ~
repository: ""
tag: ""
digest: ""
useDigest: true
pullPolicy: "IfNotPresent"

@@ -3,7 +3,6 @@
# type: [null, string]
# @schema
# -- namespaceOverride allows to override the destination namespace for Cilium resources.
# This property allows to use Cilium as part of an Umbrella Chart with different targets.
namespaceOverride: ""
# @schema
# type: [null, object]
@@ -27,7 +26,7 @@ debug:
# @schema
# -- Configure verbosity levels for debug logging
# This option is used to enable debug messages for operations related to such
# sub-system such as (e.g. kvstore, envoy, datapath or policy), and flow is
# sub-system such as (e.g. kvstore, envoy, datapath, policy, or tagged), and flow is
# for enabling debug messages emitted per request, message and connection.
# Multiple values can be set via a space-separated string (e.g. "datapath envoy").
#
@@ -37,6 +36,7 @@ debug:
# - envoy
# - datapath
# - policy
# - tagged
verbose: ~

# -- Set the agent-internal metrics sampling frequency. This sets the
@@ -207,6 +207,12 @@ serviceAccounts:
name: hubble-generate-certs
automount: true
annotations: {}
# -- CorednsMCSAPI is used if clustermesh.mcsapi.corednsAutoConfigure.enabled=true
corednsMCSAPI:
create: true
name: cilium-coredns-mcsapi-autoconfig
automount: true
annotations: {}
# -- Configure termination grace period for cilium-agent DaemonSet.
terminationGracePeriodSeconds: 1
# -- Install the cilium agent resources.
@@ -368,6 +374,8 @@ securityContext:
- SETGID
# Allow to execute program that changes UID (e.g. required for package installation)
- SETUID
# Allow to read dmesg and get kernel pointers when kptr_restrict=1
- SYSLOG
# -- Capabilities for the `mount-cgroup` init container
mountCgroup:
# Only used for 'mount' cgroup
@@ -440,9 +448,16 @@ azure:
# clientID: 00000000-0000-0000-0000-000000000000
# clientSecret: 00000000-0000-0000-0000-000000000000
# userAssignedIdentityID: 00000000-0000-0000-0000-000000000000
nodeSpec:
azureInterfaceName: ""
alibabacloud:
# -- Enable AlibabaCloud ENI integration
enabled: false
nodeSpec:
vSwitches: []
vSwitchTags: []
securityGroups: []
securityGroupTags: []
# -- Enable bandwidth manager to optimize TCP and UDP workloads and allow
# for rate-limiting traffic from individual Pods with EDT (Earliest Departure
# Time) through the "kubernetes.io/egress-bandwidth" Pod annotation.
@@ -475,8 +490,7 @@ l2podAnnouncements:
interface: "eth0"
# -- A regular expression matching interfaces used for sending Gratuitous ARP pod announcements
# interfacePattern: ""
# -- This feature set enables virtual BGP routers to be created via
# CiliumBGPPeeringPolicy CRDs.
# -- This feature set enables virtual BGP routers to be created via BGP CRDs.
bgpControlPlane:
# -- Enables the BGP control plane.
enabled: false
@@ -486,9 +500,9 @@ bgpControlPlane:
create: false
# -- The name of the secret namespace to which Cilium agents are given read access
name: kube-system
# -- Status reporting settings (BGPv2 only)
# -- Status reporting settings
statusReport:
# -- Enable/Disable BGPv2 status reporting
# -- Enable/Disable BGP status reporting
# It is recommended to enable status reporting in general, but if you have any issue
# such as high API server load, you can disable it by setting this to false.
enabled: true
@@ -498,7 +512,7 @@ bgpControlPlane:
mode: "default"
# -- IP pool to allocate the BGP router-id from when the mode is ip-pool.
ipPool: ""
# -- Legacy BGP ORIGIN attribute settings (BGPv2 only)
# -- Legacy BGP ORIGIN attribute settings
legacyOriginAttribute:
# -- Enable/Disable advertising LoadBalancerIP routes with the legacy
# BGP ORIGIN attribute value INCOMPLETE (2) instead of the default IGP (0).
@@ -508,6 +522,11 @@ pmtuDiscovery:
# -- Enable path MTU discovery to send ICMP fragmentation-needed replies to
# the client.
enabled: false
# -- Enable kernel probing path MTU discovery for Pods which uses different message
# sizes to search for correct MTU value.
# Valid values are: always, blackhole, disabled and unset (or empty). If value
# is 'unset' or left empty then will not try to override setting.
packetizationLayerPMTUDMode: "blackhole"
bpf:
autoMount:
# -- Enable automatic mount of BPF filesystem
@@ -555,7 +574,7 @@ bpf:
# Helm configuration for BPF events map rate limiting is experimental and might change
# in upcoming releases.
events:
# -- Default settings for all types of events except dbg and pcap.
# -- Default settings for all types of events except dbg.
default:
# @schema
# type: [null, integer]
@@ -615,6 +634,12 @@ bpf:
# type: [null, integer]
# @schema
policyMapMax: 16384
# -- (float64) Configure threshold for emitting pressure metrics of policy maps.
# @schema
# type: [null, number]
# @schema
# @default -- `0.1`
policyMapPressureMetricsThreshold: ~
# -- Configure the maximum number of entries in global policy stats map.
# @schema
# type: [null, integer]
@@ -672,7 +697,8 @@ bpf:
# type: [null, boolean]
# @schema
# -- (bool) Configure the eBPF-based TPROXY (beta) to reduce reliance on iptables rules
# for implementing Layer 7 policy.
# for implementing Layer 7 policy. Note this is incompatible with netkit (`bpf.datapathMode=netkit`,
# `bpf.datapathMode=netkit-l2`).
# @default -- `false`
tproxy: ~
# @schema
@@ -682,6 +708,15 @@ bpf:
# [0] will allow all VLAN id's without any filtering.
# @default -- `[]`
vlanBypass: ~
# -- Configure the IP tracing option type.
# This option is used to specify the IP option type to use for tracing.
# The value must be an integer between 0 and 255.
# @schema
# type: [null, integer]
# minimum: 0
# maximum: 255
# @schema
monitorTraceIPOption: 0
# -- (bool) Disable ExternalIP mitigation (CVE-2020-8554)
# @default -- `false`
disableExternalIPMitigation: false
@@ -689,7 +724,8 @@ bpf:
# supported kernels.
# @default -- `true`
enableTCX: true
# -- (string) Mode for Pod devices for the core datapath (veth, netkit, netkit-l2)
# -- (string) Mode for Pod devices for the core datapath (veth, netkit, netkit-l2).
# Note netkit is incompatible with TPROXY (`bpf.tproxy`).
# @default -- `veth`
datapathMode: veth
# -- Enable BPF clock source probing for more efficient tick retrieval.
@@ -778,8 +814,17 @@ cni:
# -- Specifies the resources for the cni initContainer
resources:
requests:
# @schema
# type: [integer, string]
# @schema
cpu: 100m
memory: 10Mi
limits:
# @schema
# type: [integer, string]
# @schema
cpu: 1
memory: 1Gi
# -- Enable route MTU for pod netns when CNI chaining is used
enableRouteMTUForCNIChaining: false
# -- Enable the removal of iptables rules created by the AWS CNI VPC plugin.
@@ -804,10 +849,6 @@ conntrackGCMaxInterval: ""
# -- (string) Configure timeout in which Cilium will exit if CRDs are not available
# @default -- `"5m"`
crdWaitTimeout: ""
# -- Tail call hooks for custom eBPF programs.
customCalls:
# -- Enable tail call hooks for custom eBPF programs.
enabled: false
daemon:
# -- Configure where Cilium runtime state should be stored.
runPath: "/var/run/cilium"
@@ -850,6 +891,13 @@ daemon:
#
# By default, this functionality is enabled
enableSourceIPVerification: true

# -- Configure temporary volume for cilium-agent
tmpVolume: {}
# emptyDir:
# sizeLimit: "100Mi"
# medium: "Memory"

# -- Specify which network interfaces can run the eBPF datapath. This means
# that a packet sent from a pod to a destination outside the cluster will be
# masqueraded (to an output device IPv4 address), if the output device runs the
@@ -869,9 +917,6 @@ forceDeviceDetection: false
# -- Enable setting identity mark for local traffic.
# enableIdentityMark: true

# -- Enable Kubernetes EndpointSlice feature in Cilium if the cluster supports it.
# enableK8sEndpointSlice: true

# -- CiliumEndpointSlice configuration options.
ciliumEndpointSlice:
# -- Enable Cilium EndpointSlice feature.
@@ -1056,20 +1101,33 @@ enableXTSocketFallback: true
encryption:
# -- Enable transparent network encryption.
enabled: false
# -- Encryption method. Can be either ipsec or wireguard.
# -- Encryption method. Can be one of ipsec, wireguard or ztunnel.
type: ipsec
# -- Enable encryption for pure node to node traffic.
# This option is only effective when encryption.type is set to "wireguard".
nodeEncryption: false
# -- Configure the WireGuard Pod2Pod strict mode.
# -- Configure the Encryption Pod2Pod strict mode.
strictMode:
# -- Enable WireGuard Pod2Pod strict mode.
# -- Enable Encryption Pod2Pod strict mode. (deprecated: please use encryption.strictMode.egress.enabled)
enabled: false
# -- CIDR for the WireGuard Pod2Pod strict mode.
# -- CIDR for the Encryption Pod2Pod strict mode. (deprecated: please use encryption.strictMode.egress.cidr)
cidr: ""
# -- Allow dynamic lookup of remote node identities.
# -- Allow dynamic lookup of remote node identities. (deprecated: please use encryption.strictMode.egress.allowRemoteNodeIdentities)
# This is required when tunneling is used or direct routing is used and the node CIDR and pod CIDR overlap.
allowRemoteNodeIdentities: false
egress:
# -- Enable strict egress encryption.
enabled: false
# -- CIDR for the Encryption Pod2Pod strict egress mode.
cidr: ""
# -- Allow dynamic lookup of remote node identities.
# This is required when tunneling is used or direct routing is used and the node CIDR and pod CIDR overlap.
allowRemoteNodeIdentities: false
ingress:
# -- Enable strict ingress encryption.
# When enabled, all unencrypted overlay ingress traffic will be dropped.
# This option is only applicable when WireGuard and tunneling are enabled.
enabled: false
ipsec:
# -- Name of the key file inside the Kubernetes secret configured via secretName.
keyFile: keys
@@ -1141,6 +1199,33 @@ eni:
# -- Filter via AWS EC2 Instance tags (k=v) which will dictate which AWS EC2 Instances
# are going to be used to create new ENIs
instanceTagsFilter: []
# -- NodeSpec configuration for the ENI
nodeSpec:
# -- First interface index to use for IP allocation
# @schema
# type: [null, integer]
# @schema
firstInterfaceIndex: ~
# -- Subnet IDs to use for IP allocation
subnetIDs: []
# -- Subnet tags to use for IP allocation
subnetTags: []
# -- Security groups to use for IP allocation
securityGroups: []
# -- Security group tags to use for IP allocation
securityGroupTags: []
# -- Exclude interface tags to use for IP allocation
excludeInterfaceTags: []
# -- Use primary address for IP allocation
usePrimaryAddress: false
# -- Disable prefix delegation for IP allocation
disablePrefixDelegation: false
# -- Delete ENI on termination
# @schema
# type: [null, boolean]
# @schema
deleteOnTermination: ~

# fragmentTracking enables IPv4 fragment tracking support in the datapath.
# fragmentTracking: true
gke:
@@ -1156,6 +1241,8 @@ healthCheckICMPFailureThreshold: 3
hostFirewall:
# -- Enables the enforcement of host policies in the eBPF datapath.
enabled: false
# -- Enable routing to a service that has zero endpoints
enableNoServiceEndpointsRoutable: true
# -- Configure socket LB
socketLB:
# -- Enable socket LB
@@ -1183,8 +1270,11 @@ certgen:
digest: "${CERTGEN_DIGEST}"
useDigest: true
pullPolicy: "${PULL_POLICY}"
# @schema
# type: [null, integer]
# @schema
# -- Seconds after which the completed job pod will be deleted
ttlSecondsAfterFinished: 1800
ttlSecondsAfterFinished: null
# -- Labels to be added to hubble-certgen pods
podLabels: {}
# -- Annotations to be added to the hubble-certgen initial Job and CronJob
@@ -1209,6 +1299,11 @@ certgen:
extraVolumeMounts: []
# -- Affinity for certgen
affinity: {}
cronJob:
# -- The number of successful finished jobs to keep
successfulJobsHistoryLimit: 3
# -- The number of failed finished jobs to keep
failedJobsHistoryLimit: 1
hubble:
# -- Enable Hubble (true by default).
enabled: true
@@ -1224,6 +1319,9 @@ hubble:
# 2047, 4095, 8191, 16383, 32767, 65535
# eventBufferCapacity: "4095"

# -- The interval at which Hubble will send out lost events from the Observer server, if any.
# lostEventSendInterval: 1s

# -- Hubble metrics configuration.
# See https://docs.cilium.io/en/stable/observability/metrics/#hubble-metrics
# for more comprehensive documentation about Hubble metrics.
@@ -1730,6 +1828,24 @@ hubble:
address: localhost
# -- Configure pprof listen port for hubble-relay
port: 6062
# -- Enable mutex contention profiling for hubble-relay and set the fraction of sampled events (set to 1 to sample all events)
mutexProfileFraction: 0
# -- Enable goroutine blocking profiling for hubble-relay and set the rate of sampled events in nanoseconds (set to 1 to sample all events [warning: performance overhead])
blockProfileRate: 0
# -- Logging configuration for hubble-relay.
logOptions:
# @schema
# type: [null, string]
# @schema
# -- Log format for hubble-relay. Valid values are: text, text-ts, json, json-ts.
# @default -- text-ts
format: ~
# @schema
# type: [null, string]
# @schema
# -- Log level for hubble-relay. Valid values are: debug, info, warn, error.
# @default -- info
level: ~
ui:
# -- Whether to enable the Hubble UI.
enabled: false
@@ -1925,6 +2041,13 @@ hubble:
# - secretName: chart-example-tls
# hosts:
# - chart-example.local

# -- Configure temporary volume for hubble-ui
tmpVolume: {}
# emptyDir:
# # sizeLimit: "100Mi"
# # medium: "Memory"

# -- Hubble flows export.
export:
# --- Static exporter configuration.
@@ -1937,6 +2060,14 @@ hubble:
# - source
# - destination
# - verdict
fieldAggregate: []
# - time
# - source
# - destination
# - verdict
# --- Defines the interval at which to aggregate before exporting Hubble flows.
# Aggregation feature is only enabled when fieldAggregate is specified and aggregationInterval > 0s.
aggregationInterval: "0s"
allowList: []
# - '{"verdict":["DROPPED","ERROR"]}'
denyList: []
@@ -1962,6 +2093,8 @@ hubble:
content:
- name: all
fieldMask: []
fieldAggregate: []
aggregationInterval: "0s"
includeFilters: []
excludeFilters: []
filePath: "/var/run/cilium/hubble/events.log"
@@ -2056,11 +2189,30 @@ ipam:
# refill the bucket up to the burst size capacity.
# @default -- `4.0`
externalAPILimitQPS: ~
# -- defaultLBServiceIPAM indicates the default LoadBalancer Service IPAM when
# no LoadBalancer class is set. Applicable values: lbipam, nodeipam, none
# -- NodeSpec configuration for the IPAM
nodeSpec:
# -- IPAM min allocate
# @schema
# type: [null, integer]
# @schema
ipamMinAllocate: ~
# -- IPAM pre allocate
# @schema
# type: [null, integer]
# @schema
ipamPreAllocate: ~
# -- IPAM max allocate
# @schema
# type: [null, integer]
# @schema
ipamMaxAllocate: ~
# -- IPAM static IP tags (currently only works with AWS and Azure)
ipamStaticIPTags: []
# @schema
# type: [string]
# @schema
# -- defaultLBServiceIPAM indicates the default LoadBalancer Service IPAM when
# no LoadBalancer class is set. Applicable values: lbipam, nodeipam, none
defaultLBServiceIPAM: lbipam
nodeIPAM:
# -- Configure Node IPAM
@@ -2147,7 +2299,7 @@ localRedirectPolicy: false
localRedirectPolicies:
# -- Enable local redirect policies.
enabled: false

# -- Limit the allowed addresses in Address Matcher rule of
# Local Redirect Policies to the given CIDRs.
# @schema
@@ -2177,7 +2329,7 @@ maglev: {}
# type: [null, boolean]
# @schema
# -- (bool) Enables masquerading of IPv4 traffic leaving the node from endpoints.
# @default -- `true` unless ipam eni mode is active
# @default -- `true` unless ipam eni mode is active
enableIPv4Masquerade: ~
# -- Enables masquerading of IPv6 traffic leaving the node from endpoints.
enableIPv6Masquerade: true
@@ -2187,6 +2339,8 @@ enableMasqueradeRouteSource: false
enableIPv4BIGTCP: false
# -- Enables IPv6 BIG TCP support which increases maximum IPv6 GSO/GRO limits for nodes and pods
enableIPv6BIGTCP: false
# -- Enable BIG TCP in tunneling mode and increase maximum GRO/GSO limits for VXLAN/GENEVE tunnels
enableTunnelBIGTCP: false

nat:
# -- Number of the top-k SNAT map connections to track in Cilium statedb.
@@ -2290,8 +2444,6 @@ loadBalancer:
algorithm: round_robin
# -- Configure N-S k8s service loadbalancing
nodePort:
# -- Enable the Cilium NodePort service implementation.
enabled: false
# -- Port range to use for NodePort services.
# range: "30000,32767"

@@ -2336,6 +2488,10 @@ pprof:
address: localhost
# -- Configure pprof listen port for cilium-agent
port: 6060
# -- Enable mutex contention profiling for cilium-agent and set the fraction of sampled events (set to 1 to sample all events)
mutexProfileFraction: 0
# -- Enable goroutine blocking profiling for cilium-agent and set the rate of sampled events in nanoseconds (set to 1 to sample all events [warning: performance overhead])
blockProfileRate: 0
# -- Configure prometheus metrics on the configured port at /metrics
prometheus:
metricsService: false
@@ -2460,6 +2616,12 @@ envoy:
initialFetchTimeoutSeconds: 30
# -- Maximum number of concurrent retries on Envoy clusters
maxConcurrentRetries: 128
# -- Maximum number of connections on Envoy clusters
clusterMaxConnections: 1024
# -- Maximum number of requests on Envoy clusters
clusterMaxRequests: 1024
# -- Maximum number of global downstream connections
maxGlobalDownstreamConnections: 50000
# -- Maximum number of retries for each HTTP request
httpRetryCount: 3
# -- ProxyMaxRequestsPerConnection specifies the max_requests_per_connection setting for Envoy
@@ -2476,6 +2638,9 @@ envoy:
xffNumTrustedHopsL7PolicyIngress: 0
# -- Number of trusted hops regarding the x-forwarded-for and related HTTP headers for the egress L7 policy enforcement Envoy listeners.
xffNumTrustedHopsL7PolicyEgress: 0
# -- For cases when CiliumEnvoyConfig is not used directly (Ingress, Gateway), configures Cilium BPF Metadata listener filter
# to use the original source address when extracting the metadata for a request.
useOriginalSourceAddress: true
# @schema
# type: [null, string]
# @schema
@@ -2494,6 +2659,8 @@ envoy:
pullPolicy: "${PULL_POLICY}"
digest: "${CILIUM_ENVOY_DIGEST}"
useDigest: true
# -- Init containers added to the cilium Envoy DaemonSet.
initContainers: []
# -- Additional containers added to the cilium Envoy DaemonSet.
extraContainers: []
# -- Additional envoy container arguments.
@@ -2726,17 +2893,16 @@ resourceQuotas:
pods: "15"
# Need to document default
##################
#sessionAffinity: false

# -- Annotations to be added to all cilium-secret namespaces (resources under templates/cilium-secrets-namespace)
secretsNamespaceAnnotations: {}
# -- Labels to be added to all cilium-secret namespaces (resources under templates/cilium-secrets-namespace)
secretsNamespaceLabels: {}

# -- Do not run Cilium agent when running with clean mode. Useful to completely
# uninstall Cilium as it will stop Cilium from starting and create artifacts
# in the node.
sleepAfterInit: false
# -- Enable check of service source ranges (currently, only for LoadBalancer).
svcSourceRangeCheck: true
# -- Synchronize Kubernetes nodes to kvstore and perform CNP GC.
synchronizeK8sNodes: true
# -- Configure TLS configuration in the agent.
@@ -2819,6 +2985,9 @@ tls:
# @default -- `"vxlan"`
tunnelProtocol: ""
# -- IP family for the underlay.
# Possible values:
# - "ipv4"
# - "ipv6"
# @default -- `"ipv4"`
underlayProtocol: ""
# -- Enable native-routing mode or tunneling mode.
@@ -2839,6 +3008,11 @@ tunnelSourcePortRange: 0-0
# - reject (default)
# - drop
serviceNoBackendResponse: reject
# -- Configure what the response should be to pod egress traffic denied by network policy.
# Possible values:
# - none (default)
# - icmp
policyDenyResponse: none
# -- Configure the underlying network MTU to overwrite auto-detected MTU.
# This value doesn't change the host network interface MTU i.e. eth0 or ens0.
# It changes the MTU for cilium_net@cilium_host, cilium_host@cilium_net,
@@ -2924,7 +3098,7 @@ operator:
# @schema
# type: [null, array]
# @schema
tolerations:
tolerations:
- key: "node-role.kubernetes.io/control-plane"
operator: Exists
- key: "node-role.kubernetes.io/master" #deprecated
@@ -3016,6 +3190,10 @@ operator:
address: localhost
# -- Configure pprof listen port for cilium-operator
port: 6061
# -- Enable mutex contention profiling for cilium-operator and set the fraction of sampled events (set to 1 to sample all events)
mutexProfileFraction: 0
# -- Enable goroutine blocking profiling for cilium-operator and set the rate of sampled events in nanoseconds (set to 1 to sample all events [warning: performance overhead])
blockProfileRate: 0
# -- Enable prometheus metrics for cilium-operator on the configured port at
# /metrics
prometheus:
@@ -3049,6 +3227,17 @@ operator:
# @schema
# -- Metrics relabeling configs for the ServiceMonitor cilium-operator
metricRelabelings: ~
# -- TLS configuration for Prometheus
tls:
enabled: false
server:
# -- Name of the Secret containing the certificate, key and CA files for the Prometheus server.
existingSecret: ""
mtls:
# When set to true enforces mutual TLS between Operator Prometheus server and its clients.
# False allow non-mutual TLS connections.
# This option has no effect when TLS is disabled.
enabled: false
# -- Grafana dashboards for cilium-operator
# grafana can import dashboards based on the label and value
# ref: https://github.com/grafana/helm-charts/tree/main/charts/grafana#sidecar-for-dashboards
@@ -3082,6 +3271,12 @@ operator:
# -- Interval, in seconds, to check if there are any pods that are not
# managed by Cilium.
intervalSeconds: 15
# -- Selector for pods that should be restarted when not managed by Cilium.
# If not set, defaults to built-in selector "k8s-app=kube-dns". Set to empty string to select all pods.
# @schema
# type: [null, string]
# @schema
selector: ~
nodeinit:
# -- Enable the node initialization DaemonSet
enabled: false
@@ -3136,6 +3331,9 @@ nodeinit:
# ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
resources:
requests:
# @schema
# type: [integer, string]
# @schema
cpu: 100m
memory: 100Mi
# -- Security context to be added to nodeinit pods.
@@ -3160,6 +3358,8 @@ nodeinit:
# -- bootstrapFile is the location of the file where the bootstrap timestamp is
# written by the node-init DaemonSet
bootstrapFile: "/tmp/cilium-bootstrap.d/cilium-bootstrap-time"
# -- wait for Cloud init to finish on the host and assume the node has cloud init installed
waitForCloudInit: false
# -- startup offers way to customize startup nodeinit script (pre and post position)
startup:
preScript: ""
@@ -3293,7 +3493,9 @@ enableCriticalPriorityClass: true
# on AArch64 as the images do not currently ship a version of Envoy.
#disableEnvoyVersionCheck: false
clustermesh:
# -- Deploy clustermesh-apiserver for clustermesh
# -- Deploy clustermesh-apiserver for clustermesh. This option is typically
# used with ``clustermesh.config.enabled=true``. Refer to the
# ``clustermesh.config.enabled=true`` documentation for more information.
useAPIServer: false
# -- The maximum number of clusters to support in a ClusterMesh. This value
# cannot be changed on running clusters, and all clusters in a ClusterMesh
@@ -3301,45 +3503,133 @@ clustermesh:
# maximum allocatable cluster-local identities.
# Supported values are 255 and 511.
maxConnectedClusters: 255
# -- The time to live for the cache of a remote cluster after connectivity is
# lost. If the connection is not re-established within this duration, the
# cached data is revoked to prevent stale state. If not specified or set to
# 0s, the cache is never revoked (default).
cacheTTL: "0s"
# -- Enable the synchronization of Kubernetes EndpointSlices corresponding to
# the remote endpoints of appropriately-annotated global services through ClusterMesh
enableEndpointSliceSynchronization: false
# -- Enable Multi-Cluster Services API support
# -- Enable Multi-Cluster Services API support (deprecated; use clustermesh.mcsapi.enabled)
enableMCSAPISupport: false
# -- Control whether policy rules assume by default the local cluster if not explicitly selected
policyDefaultLocalCluster: false
policyDefaultLocalCluster: true

# -- Annotations to be added to all top-level clustermesh objects (resources under templates/clustermesh-apiserver and templates/clustermesh-config)
annotations: {}
# -- Clustermesh explicit configuration.
config:
# -- Enable the Clustermesh explicit configuration.
# If set to false, you need to provide the following resources yourself:
# - (Secret) cilium-clustermesh (used by cilium-agent/cilium-operator to connect to
# the local etcd instance if KVStoreMesh is enabled or the remote clusters
# if KVStoreMesh is disabled)
# - (Secret) cilium-kvstoremesh (used by KVStoreMesh to connect to the remote clusters)
# - (ConfigMap) clustermesh-remote-users (used to create one etcd user per remote cluster
# if clustermesh-apiserver is used and `clustermesh.apiserver.tls.authMode` is not
# set to `legacy`)
enabled: false
# -- Default dns domain for the Clustermesh API servers
# This is used in the case cluster addresses are not provided
# and IPs are used.
domain: mesh.cilium.io
# -- List of clusters to be peered in the mesh.
# -- Clusters to be peered in the mesh.
# @schema
# type: [object, array]
# @schema
clusters: []
# You can use a dict of clusters (recommended):
# clusters:
# # -- Name of the cluster
# # -- Name of the cluster
# cluster1:
# # -- Whether to enable this cluster in the mesh. Optional, defaults to true.
# enabled: true
# # -- Address of the cluster, use this if you created DNS records for
# # the cluster Clustermesh API server.
# address: cluster1.mesh.cilium.io
# # -- Port of the cluster Clustermesh API server.
# port: 2379
# # -- IPs of the cluster Clustermesh API server, use multiple ones when
# # you have multiple IPs to access the Clustermesh API server.
# ips:
# - 172.18.255.201
# # -- (deprecated) base64 encoded PEM values for the cluster client certificate, private key and certificate authority.
# # These fields can (and should) be omitted in case the CA is shared across clusters. In that case, the
# # "remote" private key and certificate available in the local cluster are automatically used instead.
# tls:
# cert: ""
# key: ""
# caCert: ""
#
# Or alternatively you can use a list of clusters:
# clusters:
# # -- Name of the cluster
# - name: cluster1
# # -- Address of the cluster, use this if you created DNS records for
# # the cluster Clustermesh API server.
# # -- Address of the cluster, use this if you created DNS records for
# # the cluster Clustermesh API server.
# address: cluster1.mesh.cilium.io
# # -- Port of the cluster Clustermesh API server.
# # -- Port of the cluster Clustermesh API server.
# port: 2379
# # -- IPs of the cluster Clustermesh API server, use multiple ones when
# # you have multiple IPs to access the Clustermesh API server.
# # -- IPs of the cluster Clustermesh API server, use multiple ones when
# # you have multiple IPs to access the Clustermesh API server.
# ips:
# - 172.18.255.201
# # -- base64 encoded PEM values for the cluster client certificate, private key and certificate authority.
# # These fields can (and should) be omitted in case the CA is shared across clusters. In that case, the
# # "remote" private key and certificate available in the local cluster are automatically used instead.
# # -- (deprecated) base64 encoded PEM values for the cluster client certificate, private key and certificate authority.
# # These fields can (and should) be omitted in case the CA is shared across clusters. In that case, the
# # "remote" private key and certificate available in the local cluster are automatically used instead.
# tls:
# cert: ""
# key: ""
# caCert: ""
mcsapi:
# -- Enable Multi-Cluster Services API support
enabled: false
# -- Enabled MCS-API CRDs auto-installation
installCRDs: true
corednsAutoConfigure:
# -- Enable auto-configuration of CoreDNS for Multi-Cluster Services API.
# CoreDNS MUST be at least in version v1.12.2 to run this.
enabled: false
coredns:
# -- The Deployment for the cluster CoreDNS service
deploymentName: coredns
# -- The Service Account name for the cluster CoreDNS service
serviceAccountName: coredns
# -- The ConfigMap name for the cluster CoreDNS service
configMapName: coredns
# -- The namespace for the cluster CoreDNS service
namespace: kube-system
# -- The cluster domain for the cluster CoreDNS service
clusterDomain: cluster.local
# -- The clusterset domain for the cluster CoreDNS service
clustersetDomain: clusterset.local
# -- Additional arguments to `clustermesh-apiserver coredns-mcsapi-auto-configure`.
extraArgs: []
# -- Seconds after which the completed job pod will be deleted
ttlSecondsAfterFinished: 1800
# -- Labels to be added to coredns-mcsapi-autoconfig pods
podLabels: {}
# -- Annotations to be added to the coredns-mcsapi-autoconfig Job
annotations: {}
# -- Node selector for coredns-mcsapi-autoconfig
# ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
nodeSelector: {}
# -- Priority class for coredns-mcsapi-autoconfig
# ref: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass
priorityClassName: ""
# -- Node tolerations for pod assignment on nodes with taints
# ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
tolerations: []
# -- Resource limits for coredns-mcsapi-autoconfig
# ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers
resources: {}
# -- Additional coredns-mcsapi-autoconfig volumes.
extraVolumes: []
# -- Additional coredns-mcsapi-autoconfig volumeMounts.
extraVolumeMounts: []
# -- Affinity for coredns-mcsapi-autoconfig
affinity: {}
apiserver:
# -- Clustermesh API server image.
image:
@@ -3445,17 +3735,12 @@ clustermesh:
# - "external": ``clustermesh-apiserver`` will sync remote cluster information to the etcd used as kvstore. This can't be enabled with crd identity allocation mode.
kvstoreMode: "internal"
service:
# -- (bool) Set externallyCreated to true to create the clustermesh-apiserver service outside this helm chart.
# For example after external load balancer controllers are created.
externallyCreated: false
# -- The type of service used for apiserver access.
type: NodePort
# -- Optional port to use as the node port for apiserver access.
#
# WARNING: make sure to configure a different NodePort in each cluster if
# kube-proxy replacement is enabled, as Cilium is currently affected by a known
# bug (#24692) when NodePorts are handled by the KPR implementation. If a service
# with the same NodePort exists both in the local and the remote cluster, all
# traffic originating from inside the cluster and targeting the corresponding
# NodePort will be redirected to a local backend, regardless of whether the
# destination node belongs to the local or the remote cluster.
nodePort: 32379
# -- Annotations for the clustermesh-apiserver service.
# Example annotations to configure an internal load balancer on different cloud providers:
@@ -3624,13 +3909,15 @@ clustermesh:
# The "remote" certificate must be generated with CN=remote-<cluster-name>
# if provided manually. Cluster mode is meaningful only when the same
# CA is shared across all clusters part of the mesh.
authMode: legacy
# -- Allow users to provide their own certificates
authMode: migration
# -- (deprecated) Allow users to provide their own certificates
# Users may need to provide their certificates using
# a mechanism that requires they provide their own secrets.
# This setting does not apply to any of the auto-generated
# mechanisms below, it only restricts the creation of secrets
# via the `tls-provided` templates.
# This option is deprecated as secrets are expected to be created
# externally when 'auto' is not enabled.
enableSecrets: true
# -- Configure automatic TLS certificates generation.
# A Kubernetes CronJob is used to generate any
@@ -3639,7 +3926,14 @@ clustermesh:
auto:
# -- When set to true, automatically generate a CA and certificates to
# enable mTLS between clustermesh-apiserver and external workload instances.
# If set to false, the certs to be provided by setting appropriate values below.
#
# When set to false you need to pre-create the following secrets:
# - clustermesh-apiserver-server-cert
# - clustermesh-apiserver-admin-cert
# - clustermesh-apiserver-remote-cert
# - clustermesh-apiserver-local-cert
# The above secrets should at least contain the keys `tls.crt` and `tls.key`
# and optionally `ca.crt` if a CA bundle is not configured.
enabled: true
# Sets the method to auto-generate certificates. Supported values:
# - helm: This method uses Helm to generate all certificates.
@@ -3671,10 +3965,13 @@ clustermesh:
# name: ca-issuer
# -- certmanager issuer used when clustermesh.apiserver.tls.auto.method=certmanager.
certManagerIssuerRef: {}

# -- base64 encoded PEM values for the clustermesh-apiserver server certificate and private key.
# Used if 'auto' is not enabled.
server:
# -- Deprecated, as secrets will always need to be created externally if `auto` is disabled.
cert: ""
# -- Deprecated, as secrets will always need to be created externally if `auto` is disabled.
key: ""
# -- Extra DNS names added to certificate when it's auto generated
extraDnsNames: []
@@ -3683,17 +3980,16 @@ clustermesh:
# -- base64 encoded PEM values for the clustermesh-apiserver admin certificate and private key.
# Used if 'auto' is not enabled.
admin:
# -- Deprecated, as secrets will always need to be created externally if `auto` is disabled.
cert: ""
key: ""
# -- base64 encoded PEM values for the clustermesh-apiserver client certificate and private key.
# Used if 'auto' is not enabled.
client:
cert: ""
# -- Deprecated, as secrets will always need to be created externally if `auto` is disabled.
key: ""
# -- base64 encoded PEM values for the clustermesh-apiserver remote cluster certificate and private key.
# Used if 'auto' is not enabled.
remote:
# -- Deprecated, as secrets will always need to be created externally if `auto` is disabled.
cert: ""
# -- Deprecated, as secrets will always need to be created externally if `auto` is disabled.
key: ""
# clustermesh-apiserver Prometheus metrics configuration
metrics:
@@ -3848,7 +4144,7 @@ authentication:
  # -- Enable authentication processing and garbage collection.
  # Note that if disabled, policy enforcement will still block requests that require authentication.
  # But the resulting authentication requests for these requests will not be processed, therefore the requests will not be allowed.
  enabled: true
  enabled: false
  # -- Buffer size of the channel Cilium uses to receive authentication events from the signal map.
  queueSize: 1024
  # -- Buffer size of the channel Cilium uses to receive certificate expiration events from auth handlers.
@@ -4041,4 +4337,41 @@ authentication:
enableInternalTrafficPolicy: true
# -- Enable LoadBalancer IP Address Management
enableLBIPAM: true

# -- Standalone DNS Proxy Configuration
# Note: The standalone DNS proxy uses the agent's dnsProxy.* configuration
# for DNS settings (proxyPort, enableDnsCompression) to ensure consistency.
standaloneDnsProxy:
  # -- Enable standalone DNS proxy (alpha feature)
  enabled: false
  # -- Roll out Standalone DNS proxy automatically when configmap is updated.
  rollOutPods: false
  # -- Standalone DNS proxy annotations
  annotations: {}
  # -- Standalone DNS proxy debug mode
  debug: false
  # -- Standalone DNS proxy server port
  serverPort: 10095
  # -- Standalone DNS proxy Node Selector
  nodeSelector:
    kubernetes.io/os: linux
  # -- Standalone DNS proxy tolerations
  tolerations: []
  # -- Standalone DNS proxy auto mount service account token
  automountServiceAccountToken: false
  # -- Standalone DNS proxy update strategy
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 0
  # -- Standalone DNS proxy image
  image:
    # @schema
    # type: [null, string]
    # @schema
    override: ~
    repository: "${STANDALONE_DNS_PROXY_REPO}"
    tag: "${STANDALONE_DNS_PROXY_VERSION}"
    digest: "${STANDALONE_DNS_PROXY_DIGEST}"
    useDigest: ${USE_DIGESTS}
    pullPolicy: "${PULL_POLICY}"

@@ -1,2 +1,2 @@
ARG VERSION=v1.18.6
ARG VERSION=v1.19.1
FROM quay.io/cilium/cilium:${VERSION}

@@ -15,8 +15,8 @@ cilium:
  mode: "kubernetes"
  image:
    repository: ghcr.io/cozystack/cozystack/cilium
    tag: 1.18.6
    digest: "sha256:4f4585f8adc3b8becd15d3999f3900a4d3d650f2ab7f85ca8c661f3807113d01"
    tag: 1.19.1
    digest: "sha256:ab3acf270821df4614a8456348a4e0d3098aed72a4b2016a0edfa30d91428c3d"
  envoy:
    enabled: false
  rollOutCiliumPods: true

@@ -2,11 +2,11 @@
apiVersion: v1
kind: Service
metadata:
  name: vlogs-generic
  name: vlinsert-generic
  namespace: cozy-monitoring
spec:
  type: ExternalName
  externalName: vlogs-generic.tenant-root.svc.cluster.local
  externalName: vlinsert-generic.tenant-root.svc.cluster.local
---
apiVersion: v1
kind: Service

@@ -1069,7 +1069,7 @@ spec:
        preferredAutoattachInputDevice: true
        preferredDiskBus: sata
        preferredInterfaceModel: e1000e
        preferredTPM: {}
        preferredTPM:
      features:
        preferredAcpi: {}
        preferredApic: {}
@@ -1135,7 +1135,7 @@ spec:
        preferredInputBus: virtio
        preferredInputType: tablet
        preferredInterfaceModel: virtio
        preferredTPM: {}
        preferredTPM:
      features:
        preferredAcpi: {}
        preferredApic: {}
@@ -1455,7 +1455,7 @@ spec:
        preferredAutoattachInputDevice: true
        preferredDiskBus: sata
        preferredInterfaceModel: e1000e
        preferredTPM: {}
        preferredTPM:
      features:
        preferredAcpi: {}
        preferredApic: {}
@@ -1521,7 +1521,7 @@ spec:
        preferredInputBus: virtio
        preferredInputType: tablet
        preferredInterfaceModel: virtio
        preferredTPM: {}
        preferredTPM:
      features:
        preferredAcpi: {}
        preferredApic: {}
@@ -1585,7 +1585,7 @@ spec:
        preferredAutoattachInputDevice: true
        preferredDiskBus: sata
        preferredInterfaceModel: e1000e
        preferredTPM: {}
        preferredTPM:
      features:
        preferredAcpi: {}
        preferredApic: {}
@@ -1651,7 +1651,7 @@ spec:
        preferredInputBus: virtio
        preferredInputType: tablet
        preferredInterfaceModel: virtio
        preferredTPM: {}
        preferredTPM:
      features:
        preferredAcpi: {}
        preferredApic: {}

@@ -344,8 +344,8 @@ fluent-bit:
    [OUTPUT]
        Name http
        Match kube.*
        Host vlogs-generic.{{ .Values.global.target }}.svc
        port 9428
        Host vlinsert-generic.{{ .Values.global.target }}.svc
        port 9481
        compress gzip
        uri /insert/jsonline?_stream_fields=log_source,stream,kubernetes_pod_name,kubernetes_container_name,kubernetes_namespace_name&_msg_field=log&_time_field=date
        format json_lines
@@ -355,8 +355,8 @@ fluent-bit:
    [OUTPUT]
        Name http
        Match events.*
        Host vlogs-generic.{{ .Values.global.target }}.svc
        port 9428
        Host vlinsert-generic.{{ .Values.global.target }}.svc
        port 9481
        compress gzip
        uri /insert/jsonline?_stream_fields=log_source,reason,metadata_namespace,metadata_name&_msg_field=message&_time_field=date
        format json_lines
@@ -366,8 +366,8 @@ fluent-bit:
    [OUTPUT]
        Name http
        Match audit.*
        Host vlogs-generic.{{ .Values.global.target }}.svc
        port 9428
        Host vlinsert-generic.{{ .Values.global.target }}.svc
        port 9481
        compress gzip
        uri /insert/jsonline?_stream_fields=log_source,stage,user_username,verb,requestUri&_msg_field=requestURI&_time_field=date
        format json_lines

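The `[OUTPUT]` blocks above ship newline-delimited JSON to VictoriaLogs' `/insert/jsonline` endpoint, now addressed through the `vlinsert` service on port 9481. A minimal Python sketch of the request that the `kube.*` output effectively makes (the in-cluster hostname and the sample record are hypothetical; the query parameters are taken verbatim from the config):

```python
import json
from urllib.parse import urlencode

# Query parameters mirroring the kube.* [OUTPUT] block: which record
# fields become stream labels, the message field, and the timestamp field.
params = urlencode({
    "_stream_fields": "log_source,stream,kubernetes_pod_name,"
                      "kubernetes_container_name,kubernetes_namespace_name",
    "_msg_field": "log",
    "_time_field": "date",
})
# Hypothetical in-cluster URL; service name and port come from the diff.
url = f"http://vlinsert-generic.tenant-root.svc:9481/insert/jsonline?{params}"

# The request body is newline-delimited JSON: one flattened record per line.
records = [
    {"date": "2026-03-03T12:00:00Z", "log": "hello", "stream": "stdout",
     "log_source": "kube", "kubernetes_namespace_name": "tenant-root",
     "kubernetes_pod_name": "demo-0", "kubernetes_container_name": "app"},
]
body = "\n".join(json.dumps(r) for r in records) + "\n"
print(url)
print(body)
```

Only the host/port pair changes in this diff; the jsonline protocol and its `_stream_fields`/`_msg_field`/`_time_field` parameters stay the same.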
packages/system/monitoring/dashboards-infra.list (new file, 34 lines)
@@ -0,0 +1,34 @@
main/controller
main/namespaces
main/capacity-planning
main/ntp
main/namespace
main/pod
main/volumes
main/node
main/nodes
control-plane/control-plane-status
control-plane/deprecated-resources
control-plane/dns-coredns
control-plane/kube-etcd
victoria-metrics/vmalert
victoria-metrics/vmagent
victoria-metrics/victoriametrics-cluster
victoria-metrics/backupmanager
victoria-metrics/victoriametrics
victoria-metrics/operator
dotdc/k8s-views-global
dotdc/k8s-views-namespaces
dotdc/k8s-system-coredns
dotdc/k8s-views-pods
flux/flux-control-plane
flux/flux-stats
kubevirt/kubevirt-control-plane
goldpinger/goldpinger
cache/nginx-vts-stats
storage/linstor
seaweedfs/seaweedfs
hubble/overview
hubble/dns-namespace
hubble/l7-http-metrics
hubble/network-overview
@@ -1,14 +1,3 @@
dotdc/k8s-views-global
dotdc/k8s-views-namespaces
dotdc/k8s-system-coredns
dotdc/k8s-views-pods
cache/nginx-vts-stats
victoria-metrics/vmalert
victoria-metrics/vmagent
victoria-metrics/victoriametrics-cluster
victoria-metrics/backupmanager
victoria-metrics/victoriametrics
victoria-metrics/operator
ingress/controller-detail
ingress/controllers
ingress/namespaces
@@ -18,31 +7,8 @@ ingress/namespace-detail
db/cloudnativepg
db/maria-db
db/redis
main/controller
main/namespaces
main/capacity-planning
main/ntp
main/namespace
main/pod
main/volumes
main/node
main/nodes
control-plane/control-plane-status
control-plane/deprecated-resources
control-plane/dns-coredns
control-plane/kube-etcd
kubevirt/kubevirt-control-plane
flux/flux-control-plane
flux/flux-stats
kafka/strimzi-kafka
goldpinger/goldpinger
clickhouse/altinity-clickhouse-operator-dashboard
storage/linstor
seaweedfs/seaweedfs
hubble/overview
hubble/dns-namespace
hubble/l7-http-metrics
hubble/network-overview
nats/nats-jetstream
nats/nats-server
mongodb/mongodb-overview

@@ -14,3 +14,21 @@ spec:
    url: http://grafana-dashboards.cozy-grafana-operator.svc/{{ . }}.json
{{- end }}
{{- end }}
{{- if eq .Release.Namespace "tenant-root" }}
{{- range (split "\n" (.Files.Get "dashboards-infra.list")) }}
{{- $parts := split "/" . }}
{{- if eq (len $parts) 2 }}
---
apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDashboard
metadata:
  name: {{ $parts._0 }}-{{ $parts._1 }}
spec:
  folder: {{ $parts._0 }}
  instanceSelector:
    matchLabels:
      dashboards: grafana
  url: http://grafana-dashboards.cozy-grafana-operator.svc/{{ . }}.json
{{- end }}
{{- end }}
{{- end }}

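The template loop above turns each `folder/dashboard` line of `dashboards-infra.list` into a `GrafanaDashboard` resource, skipping any line that does not split into exactly two parts (which also drops the empty trailing line). A small Python sketch of that logic, using two entries from the list:

```python
# Mimic of the Helm range: each "folder/dashboard" line becomes a
# GrafanaDashboard named "<folder>-<dashboard>" placed in folder "<folder>".
# Malformed lines (and the empty string from a trailing newline) are skipped.
dashboards_list = "main/controller\nhubble/overview\n"

resources = []
for line in dashboards_list.split("\n"):
    parts = line.split("/")
    if len(parts) != 2:
        continue
    folder, name = parts
    resources.append({
        "name": f"{folder}-{name}",
        "folder": folder,
        "url": f"http://grafana-dashboards.cozy-grafana-operator.svc/{line}.json",
    })

print(resources)
```

The `tenant-root` guard in the template means these infra dashboards are rendered only once, in the root tenant's release.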
@@ -8,7 +8,7 @@ spec:
    access: proxy
    type: victoriametrics-logs-datasource
    name: vlogs-{{ .name }}
    url: http://vlogs-{{ .name }}.{{ $.Release.Namespace }}.svc:9428
    url: http://vlselect-{{ .name }}.{{ $.Release.Namespace }}.svc:9471
    instanceSelector:
      matchLabels:
        dashboards: grafana

@@ -1,6 +1,7 @@
{{- range .Values.logsStorages }}
apiVersion: operator.victoriametrics.com/v1beta1
kind: VLogs
---
apiVersion: operator.victoriametrics.com/v1
kind: VLCluster
metadata:
  name: {{ .name }}
spec:
@@ -9,14 +10,23 @@ spec:
      apps.cozystack.io/application.group: apps.cozystack.io
      apps.cozystack.io/application.kind: Monitoring
      apps.cozystack.io/application.name: {{ $.Release.Name }}
  image:
    tag: v1.17.0-victorialogs
  storage:
    resources:
      requests:
        storage: {{ .storage }}
    storageClassName: {{ .storageClassName }}
    accessModes: [ReadWriteOnce]
  retentionPeriod: "{{ .retentionPeriod }}"
  removePvcAfterDelete: true
  vlinsert:
    replicaCount: 2
    resources: {}
  vlselect:
    replicaCount: 2
    resources: {}
  vlstorage:
    retentionPeriod: {{ .retentionPeriod | quote }}
    replicaCount: 2
    resources: {}
    storage:
      volumeClaimTemplate:
        spec:
          {{- with .storageClassName }}
          storageClassName: {{ . }}
          {{- end }}
          resources:
            requests:
              storage: {{ .storage }}
{{- end }}

@@ -87,3 +87,92 @@ spec:
          memory: 8Gi
      {{- end }}
{{- end }}
{{- range .Values.logsStorages }}
---
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: vpa-vlinsert-{{ .name }}
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: vlinsert-{{ .name }}
  updatePolicy:
    updateMode: Auto
  resourcePolicy:
    containerPolicies:
    - containerName: vlinsert
      minAllowed:
      {{- if and .vlinsert .vlinsert.minAllowed }}
        {{- toYaml .vlinsert.minAllowed | nindent 10 }}
      {{- else }}
        cpu: 25m
        memory: 64Mi
      {{- end }}
      maxAllowed:
      {{- if and .vlinsert .vlinsert.maxAllowed }}
        {{- toYaml .vlinsert.maxAllowed | nindent 10 }}
      {{- else }}
        cpu: 2000m
        memory: 4Gi
      {{- end }}
---
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: vpa-vlselect-{{ .name }}
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: vlselect-{{ .name }}
  updatePolicy:
    updateMode: Auto
  resourcePolicy:
    containerPolicies:
    - containerName: vlselect
      minAllowed:
      {{- if and .vlselect .vlselect.minAllowed }}
        {{- toYaml .vlselect.minAllowed | nindent 10 }}
      {{- else }}
        cpu: 25m
        memory: 64Mi
      {{- end }}
      maxAllowed:
      {{- if and .vlselect .vlselect.maxAllowed }}
        {{- toYaml .vlselect.maxAllowed | nindent 10 }}
      {{- else }}
        cpu: 4000m
        memory: 8Gi
      {{- end }}
---
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: vpa-vlstorage-{{ .name }}
spec:
  targetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: vlstorage-{{ .name }}
  updatePolicy:
    updateMode: Auto
  resourcePolicy:
    containerPolicies:
    - containerName: vlstorage
      minAllowed:
      {{- if and .vlstorage .vlstorage.minAllowed }}
        {{- toYaml .vlstorage.minAllowed | nindent 10 }}
      {{- else }}
        cpu: 25m
        memory: 64Mi
      {{- end }}
      maxAllowed:
      {{- if and .vlstorage .vlstorage.maxAllowed }}
        {{- toYaml .vlstorage.maxAllowed | nindent 10 }}
      {{- else }}
        cpu: 4000m
        memory: 8Gi
      {{- end }}
{{- end }}

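The VPA templates above let each `logsStorages` entry override `minAllowed`/`maxAllowed` per component (`vlinsert`, `vlselect`, `vlstorage`), falling back to hard-coded defaults otherwise. A sketch of that fallback logic in Python (the entry shape mirrors the template's field lookups; the sample override values are hypothetical):

```python
# Defaults as rendered by the templates when no override is present.
DEFAULTS = {
    "vlinsert": {"min": {"cpu": "25m", "memory": "64Mi"},
                 "max": {"cpu": "2000m", "memory": "4Gi"}},
    "vlselect": {"min": {"cpu": "25m", "memory": "64Mi"},
                 "max": {"cpu": "4000m", "memory": "8Gi"}},
    "vlstorage": {"min": {"cpu": "25m", "memory": "64Mi"},
                  "max": {"cpu": "4000m", "memory": "8Gi"}},
}

def vpa_bounds(entry: dict, component: str) -> dict:
    """Return the minAllowed/maxAllowed the template would render
    for one logsStorages entry and one component."""
    override = entry.get(component) or {}
    return {
        "minAllowed": override.get("minAllowed") or DEFAULTS[component]["min"],
        "maxAllowed": override.get("maxAllowed") or DEFAULTS[component]["max"],
    }

# Entry with a custom vlstorage ceiling; everything else uses defaults.
entry = {"name": "generic",
         "vlstorage": {"maxAllowed": {"cpu": "8", "memory": "16Gi"}}}
print(vpa_bounds(entry, "vlinsert"))
print(vpa_bounds(entry, "vlstorage"))
```

Note the min/max bounds apply per container, matching the `containerPolicies` entries in the rendered VPAs.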
@@ -44,7 +44,7 @@ logsStorages:
- name: generic
  retentionPeriod: "1"
  storage: 10Gi
  storageClassName: replicated
  storageClassName: ""
alerta:
  storage: 10Gi
  storageClassName: ""

@@ -20,5 +20,10 @@
.idea/
*.tmproj
.vscode/
*.md
*.md.gotmpl
CHANGELOG.md
_changelog.md
_index.md
e2e/
lint/
tests/

@@ -1,9 +1,9 @@
dependencies:
- name: victoria-metrics-common
  repository: https://victoriametrics.github.io/helm-charts
  version: 0.0.42
  version: 0.0.46
- name: crds
  repository: ""
  version: 0.0.*
digest: sha256:d186ad6f54d64a2f828cd80a136e06dcf1f30dbc8ae94964bb9b166ee32eb30e
generated: "2025-03-19T09:59:22.84209872Z"
digest: sha256:43d3d210a9d1a2234e6c56518f8c477125a5ad5e8ed08d46209528f19acd9c89
generated: "2025-12-23T12:58:01.428960334Z"

@@ -1,7 +1,7 @@
annotations:
  artifacthub.io/category: monitoring-logging
  artifacthub.io/changes: |
    - updates operator to [v0.55.0](https://github.com/VictoriaMetrics/operator/releases/tag/v0.55.0) version
    - updates operator to [v0.68.1](https://github.com/VictoriaMetrics/operator/releases/tag/v0.68.1) version
  artifacthub.io/license: Apache-2.0
  artifacthub.io/links: |
    - name: Sources
@@ -9,12 +9,16 @@ annotations:
    - name: Charts repo
      url: https://victoriametrics.github.io/helm-charts/
    - name: Docs
      url: https://docs.victoriametrics.com/operator
      url: https://docs.victoriametrics.com/operator/
    - name: Changelog
      url: https://docs.victoriametrics.com/operator/changelog
      url: https://docs.victoriametrics.com/operator/changelog/
  artifacthub.io/operator: "true"
  artifacthub.io/readme: |
    # VictoriaMetrics Operator Helm chart

    Chart documentation is available [here](https://docs.victoriametrics.com/helm/victoria-metrics-operator/)
apiVersion: v2
appVersion: v0.55.0
appVersion: v0.68.1
dependencies:
- name: victoria-metrics-common
  repository: https://victoriametrics.github.io/helm-charts
@@ -23,7 +27,7 @@ dependencies:
- name: crds
  repository: ""
  version: 0.0.*
description: Victoria Metrics Operator
description: VictoriaMetrics Operator
home: https://github.com/VictoriaMetrics/operator
icon: https://avatars.githubusercontent.com/u/43720803?s=200&v=4
keywords:
@@ -42,4 +46,4 @@ sources:
- https://github.com/VictoriaMetrics/helm-charts
- https://github.com/VictoriaMetrics/operator
type: application
version: 0.44.0
version: 0.59.1

@@ -0,0 +1,3 @@
# VictoriaMetrics Operator Helm chart

Chart documentation is available [here](https://docs.victoriametrics.com/helm/victoria-metrics-operator/)

@@ -1,7 +1,7 @@
# Release notes for version 0.44.0
# Release notes for version 0.59.1

**Release date:** 02 Apr 2025
**Release date:** 03 Mar 2026

 
 

- updates operator to [v0.55.0](https://github.com/VictoriaMetrics/operator/releases/tag/v0.55.0) version
- updates operator to [v0.68.1](https://github.com/VictoriaMetrics/operator/releases/tag/v0.68.1) version

@@ -0,0 +1,9 @@
---
build:
  list: never
  publishResources: false
  render: never
sitemap:
  disable: true
---
# Subchart for VictoriaMetrics Operator CRDs
Some files were not shown because too many files have changed in this diff.